Monday, May 16, 2016

Why is Matt a Historian?


"So Matt, you’re getting a PhD in History… what are you going to do with that?” Whenever I hear that question—and as you can imagine, I’ve gotten that question an awful lot over the years—I kind of laugh it off. I’ve known for a long time that what I wanted to do was to build a career in the academy, even though for the last decade or so that has increasingly become a seemingly poor life choice. I want to find a spot at a state university where I can lecture, work with graduate students as they develop and pursue their own research interests, and, of course, to write. As I’m (ever so slowly) ramping up to this dissertation, I’m just now beginning to think about what kind of histories I want to write, and that’s a philosophical adventure all its own.  But what I want to write about today is the other question I hear every time someone asks “what are you going to do with your degree?”—“so just why do you even want to be a historian anyway?”

What’s funny is, despite all this time pursuing this path, it hasn’t been a question I’ve seriously tried to answer until now. My idea for this blog post actually came from one of my middle school age students at Boys and Girls Club (yes, that’s how long ago I started this damn post) asking for help on her history homework. She was having trouble because, to paraphrase her, “History is boring and pointless.” I, taken aback, replied “I’m a historian, I do history for a living!” to which her completely candid response was “Gross!” This, I have learned, is not an uncommon conversation to have with middle and high school students. I understand that it’s partly a matter of perspective: when you’ve only lived 10 years, everything seems pretty static and unchanging. It’s hard to see yourself as a part of, or a product of, history. Also, if you’re a student living in the First World, history seems to have worked out pretty well for you up ‘til this point, so why bother to ask how we got here, right?

I’m a little loath to admit that until my sophomore year of high school (2003-04?), I was largely the same. I was a good student, so I got good grades in history, but much of it didn’t particularly pique my interest. From what I can remember, I did a pretty fun diorama about the Battle of Bunker Hill when I was in 5th grade, had a blast making a model crossbow (non-working, obviously) with my dad and grandpa for a 6th grade world history project on the medieval period, and hated the Kansas history unit we were forced to do in 7th grade (though I do have a distinct memory of that class discussing the 2000 election, where I learned that the electoral college was a thing and that everything I had been told about American politics up to that point was complete bullshit). I honestly can’t remember much from 8th or 9th grade, but around this time I was becoming invested in the other love of my life, the one that would ultimately lead me into History’s open arms: that’s right kids, I’m talking about communism.

“Just why in the world are you interested in Russia/the Soviet Union, Matt?” Because I get this question a lot too, I’m going to try to piece this together as best I can, well, historically. I really want to say that it all started with watching Enemy at the Gates (a film I later learned raises the hackles of actual historians like no other, and I hope to return to it soon to illustrate just why that is) one fine afternoon with my good friends Derek Payne and Britt Dahlstrom.[1] Little did I know it, but this would be the start of a life-long love affair with the country, the city, and the history of the Soviet Union. Even with the still relatively uncritical mind of an 8th grader, I was nonetheless captivated by the film’s aesthetic, by the story of the city’s heroic defenders, and by the depiction of the struggle between two political systems that, at the time, I still couldn’t really properly understand. I also thought that snipers were obviously super fucking cool and that a war movie was dope and there’s even that one sex scene even though you don’t see boobs and you know, 8th grader stuff. That year I would even try to recreate battles from the movie using plastic army guys and 4th of July fireworks in my driveway, using old textbooks from middle school and cardboard boxes as buildings (which were, of course, very historically accurately burned to the ground). But this was just the start of my obsession with the Soviet Union, because to understand it, I would need to figure out just what this “communism” thing was.

That same summer that I recreated the Battle of Stalingrad, I also went to the McPherson Public Library and checked out the only copy of Marx and Engels’s Communist Manifesto. Now, I fully admit that I had no fucking clue what dialectical materialism actually meant, and I most certainly pronounced “bourgeoisie” in my head as “borg-ee-osey.” However, even though I had to chew through like, 80 super confusing pages to get to it, “Workers of the world, unite! You have nothing to lose but your chains” still struck a chord. Engels really knew how to sell an idea, even to a 13 year old kid well over a century later. I had found a means of grappling with all my questions regarding the complexities of human nature, and I was pretty smug about it.
I think it’s worth noting here, too, that being raised in a small Midwestern town and brought up in a Methodist church created some pretty stubborn paradoxes for my early political thinking. I learned what socialism was all about, heard how derisively it was still talked about by nearly everyone around me, and noticed that it was pretty much verbatim all the things we’re supposed to do anyway according to the Bible… I’ve never been one to tolerate hypocrisy well, I guess is what I’m saying. Even while away at church camp that summer I found myself talking about socialism with other kids at camp—there was one older kid named Karl (I shit you not) who had a similar obsession, and he and I bonded pretty quickly discussing the contradictions between the teachings of the church and actual American politics.

Really though, in the grand scheme of things, this was all a lead-up to sophomore year of high school when, as a good student, I enrolled in AP World History with the man, the legend, John Lujano. How to explain Lujano to those of you who don’t already know him? He was a demanding teacher who taught a demanding class, but under his harsh demeanor, he really cared about his students learning history. He was genuinely fucking hilarious, but he could also be absolutely terrifying, both in controlling the room and in detailing his high expectations for us. For those of you who didn’t take AP History in high school, the class is designed to culminate in an exam that can actually earn you college credit. The test is graded on a scale of 1 to 5, and you have to get at least a 3 to “pass.” From the first day of class, we were told that if we didn’t focus, if we didn’t take this shit seriously, we would not get a 3. We started writing practice exam questions in the first month of class, just so we could see what the exam entailed.

I think one of the things that instantly made AP World my favorite class of my sophomore year was that it was so challenging. I was a student for whom most subjects came easily. This didn’t. It was a whole new way of compiling, processing, and making sense of information, not to mention it really forced me to develop my skills as a writer. In addition, I’m so glad that the class focused on WORLD history. So much of the history we’re fed as kids is centered on American History or Western Civ… HEY GUYS GUESS WHAT? ASIA AND AFRICA HAVE INCREDIBLY DYNAMIC AND IMPORTANT HISTORIES, TOO! So many more stories were out there that I had never heard before, so many incredible stories. Go look up Ibn Battuta (who was basically the first anthropologist, walking all over Africa just to observe its diverse cultures), or Zheng He (who was a eunuch, commanded a fleet of ships that would have made Columbus cry, and brought a fucking giraffe back to the court of the Emperor of China). I still remember these stories so vividly, because Lujano put them front and center. We were critical of topics other history teachers would rather not have talked about: the trans-Atlantic slave trade, European colonial practice, American internment camps… what have you. Not only that, but we talked about history not so much in terms of linear development to the present day, but rather as an extended set of logical consequences stemming from broad-scale human interaction over time. I learned more about the way human beings work that year than in my whole life prior. While I didn’t quite fully understand why just yet, I was definitely hooked on history. As a subject, at least. I still didn’t think of it as a career. That comes much later in the story.

I made it through AP World and got a 4 on the exam. That meant that I could cash it in for college credit later, which was great, I guess. At the time, I assumed I was just knocking out one of my gen eds, because until my freshman year of college, I thought I was going to be an architect. I loved drawing, and I was pretty handy with CAD (even if I spent most of that class in high school playing Unreal Tournament/NES games/Impossible Creatures/Halo). It wasn’t until I got to K-State and saw the sheer amount of math and physics and tedious studio time required to get that degree that I began to have second thoughts. I ended up falling back toward the subject I had been fond of in high school, for really no other reason than that it interested me and I was good at it. I started taking Arabic as well, because when people used to ask me “what will you do with a Bachelor’s in history?” I thought to myself “well… work for the State Department? I guess?”

Arabic ended up being a bust, and so did Mandarin Chinese a year later. I don’t have a particular knack for languages, so maybe trying to pick up two of the hardest ones possible, for no other reason than that they might come in handy down the line if I tried to work for the government, wasn’t the best idea. Especially as it became apparent, the more political science I took, that my personal input into state affairs would be valued less than game theory and realpolitik—observing U.S. foreign policy abroad both then and now has made it very apparent to me that there is no place for idealism in neoliberal practice. The more acute my disillusionment became and the more critical I grew of the state and politics in general, the less I could reconcile myself to the idea of being an imperialist stooge for a living. Every time I go to a professionalization talk and hear from an academic turned policy advisor, I give Past Matt a high-five for bailing on that when he did.
(It was late in my senior year of high school that I got into Against Me! and the political punk music scene—the theme for this blog post might very well be Propagandhi’s “A People’s History of the World”: https://www.youtube.com/watch?v=OihwaWQOI54)

It was ultimately my junior year before I was able to take a class dedicated strictly to the study of Russian history, taught by none other than my former professor, now friend, David Stone, a military historian wunderkind with a passion for Russian history to match my own. I threw myself into the class, earning, I think, some of the only 4.0/A+s of my undergrad career. I chewed through our readings, had my hand up in every class discussion (where I got to debate and argue with future friends Ashlyn Yarnell, Gloria Funcheon, and Ben Harkins), and devoted more time and effort to papers than I knew myself capable of. It was clear then that I had found my area of study, and beyond that, that there was still so much I needed to know. I would need to go on to grad school; that’s all there was to it. In Dr. Stone’s words: “When you get there, you’re going to feel like you know nothing. It will be absolutely true, but it’s OK. You just need to read more. And you will read… lots more.” And that’s exactly what I’ve done for the past 5 years, and I’m certainly not done yet!

There are so many people I need to properly thank somehow for pushing me to keep going with grad school even when the going got tough—Dave, for his sound advice of “what do you mean you haven’t taken Russian language yet?? Do that! Do that now!”; Sirs Ken Yohn and Peter “TheLibrarian,” who taught me how to ace an admissions process while we were all hungover on a sidewalk in front of a church in God knows where, Iowa; Eve Levin, whose infinite knowledge of the comings and goings of the Russian field not only pushed me to reach my full potential but even matched me with the perfect advisor at UW… So many others, not nearly enough time to do them all justice. I’m already freaking out about how many books I’m going to need to write just to dedicate them to everyone possible (and the inevitable +1 I’ll have to add for my patient wife and loving family… ughhhhhhhhh). Basically though, grad school has been a process of solid reinforcement and continual challenge, the environment where I thrive, all in pursuit of doing something I love. Looking back, it’s hard to see how anyone could have talked me into doing anything else.

Now though, as the realities of adult life press in on all sides, and my dissertation has all but come to a halt even before it could begin due to a lack of funding, I’m really pressed to answer the question. Would I have been better off learning a trade? Should I have sought out a job right out of college and put those skills to use, even if it meant not actually doing history? Who is actually going to benefit from my work? Why even bother, when the average middle-schooler’s and, hell, the average adult’s first reaction to me talking about what I do will be “Gross!”?

I’ve tried to look at this in two parts: the benefits of my being a historian to myself, and then to society. They’re definitely somewhat interrelated concepts, obviously, because what I subjectively think is good for society might not always be what others (or society itself) think is good for it, but whatever. From my own standpoint, the decision to make history, and therefore academia, my career was a sort of process of elimination. The more I looked at the world around me and the paths I could take as I made my way through it, the more I came to see jobs that I simply couldn’t reconcile myself to doing. I couldn’t work for a state that would regularly violate my own principles. I certainly couldn’t work in the private sector—to make a buck within the current system seems to me predicated on the exploitation of someone, somehow, somewhere down the line. Though this is by no means actually the case, the onerous nature of the alternatives somehow made the academy seem like a bastion of neutrality, a port in the storm where I could nurture my own values and find a way to make them reach out into the world.

Beyond that—and this is something I’ve known for a long, long time—if there’s one thing in this world that I love and can appreciate, it’s the telling of a good story. Whether it’s a funny anecdote, a powerful lived experience, or the unbridled imaginary creation of entire worlds for my D&D group, I have always been able to tell a story. (To put this further into nerd terms, revealed above by my Dungeons and Dragons confession, at heart I’m a Bard—CHA has always been my highest stat, no question.) Not only do I love a good story, but I’ve come to appreciate just how much stories shape our selves, our ideas, our cultures, and our societies. Stories play such a central role in the way we form our worldview that all of a sudden I feel my calling as a storyteller is imbued with a power that is hard to access and that few will appreciate, but it’s there nonetheless. The one thing I want to do for the rest of my life is be a storyteller, like the priests and shamans of old, a keeper of knowledge and wisdom who can, through my words, pass this gift on to others.

Which I guess brings me to the value of my work to the rest of you. Before I get to why I think my particular research is important, I’ll take a page, just in general, from the idea of the “Life Boat Debate” (for those who haven’t heard me expound on this yet, I first heard about it from a fantastic This American Life story that aired back in 2010.[2] Basically, undergrads pack a lecture hall, which is then sealed to form the “life boat” of all that remains of humanity after some disaster, while 9 representatives from various academic departments sit on stage and try to convince the students that theirs is the discipline that will be most important to human survival.) My argument to the kids would be “WELP, history shows that odds are one of you is going to emerge as a despot after all this, and you know what really helps to have on your side? The people who write the history books.” This is, of course, my joke answer, and in no way reflects what I would actually do, but it points to something I think is important: so much of the darkness that permeates American politics today is sustained by, and takes cover behind, bad histories. Why would we need Black Lives Matter? (Because 300+ years of slavery, oppression, and systematic targeting of minority groups by the United States government didn’t have that much of an impact, you see.) We can’t have immigrants coming in and ruining the country! (Worked out really poorly in the past, clearly; we let in a whole lot of white supremacists who keep spouting jingoistic bigotry.) Japan would have never surrendered if we didn’t drop not one, but two atomic bombs on cities, not military targets or installations but civilian populations. We HAD to do it! So it’s totally not a war crime.[3]

The problem is, just as often as not, the bad history isn’t coming from outside the discipline. The call is coming from inside the house, so to speak. If there’s one thing I’ve learned from reading a lot on Soviet and Modern European history, produced for and by all corners of the political spectrum, it’s that there is a culture war going on in the pages of textbooks. The past itself is up for grabs, and historians are the foot soldiers trying to stake a claim not just on the past, but on the future. Because how we remember the past matters a whole lot, probably a lot more than you’ve been willing to think about up to this point if you haven’t devoted your entire career to it for the last 10 years. But I have.
I hate bad histories. I hate what subscribing to them might engender in the future. The whole “those who forget history are doomed to repeat it” line is bullshit. We willingly wipe away the truth again and again so we don’t have to face it, and precisely so we can do the exact same horrible shit in the future. We practice selective amnesia so we can sleep at night. Those who preserve history make it less likely that the same kinds of people who did horrible things in the past will get away with them in the future. That is our job.

Still though, many of my readers living here in the States might yet be asking, “Why the hell should we care what happened in Stalingrad after they beat the Nazis? What use can that possibly be to us?” My answer to that is fairly straightforward, though perhaps not entirely convincing. The experience of those Soviet men and women who survived the war was an entirely human one. These were people who looked into the open, waiting jaws of annihilation as they knew it, and only just pulled back from the brink. They had hopes and dreams for their future and for their children’s and grandchildren’s futures, and they all had a past, one you could be both proud of and conflicted about at the same time (and surely that’s relatable to some of us, right?). They struggled, together, to rebuild a community that was lost, and in turn to produce something entirely new. They strove to achieve socialism, whatever that might have meant to them individually, even while confronting some of the harsher realities of what the revolution had produced. Through their trials and tribulations, their joys and their sorrows, I hope to learn--as well as to show others--something about ourselves as human beings.

There’s definitely one thing this little hiatus from doing history has taught me: there is no other work in this world that I would rather be doing than researching and telling stories. A combination of factors has been really wearing me down lately, and it doesn’t look like the funding situation is going to get any better anytime soon, so it’s hard to say when I’ll get back to the actual work I want to be doing. I’m trying to get things moving in that direction, but… the process, in light of the reality of my circumstances, is going to be slow.

Anyway, thank the Gods for their rainy intervention this morning, which finally gave me the time to get this damn post done and posted. Hopefully I’ve answered my own question, and, finally, all of yours. Perhaps I’ve revealed a window into what it is that makes me tick, or, if you know me well, maybe this is all old news. Either way, it’s good to finally get it all down on paper. Next time someone asks, “Hey Matt, so why are you a historian?” I can just push my imaginary glasses up my nose, geek-snort, and say, “Well, if you go on my blog…”

Until next time, comrades…

-MC





[1] I would like to point out that, to this day, I still prefer the light, buttery taste of Nestle Town House Crackers over Keebler Club Crackers.
[2] http://www.thisamericanlife.org/radio-archives/episode/402/save-the-day (The Life Boat Debate is Act II, skip ahead to about the 42nd minute if you don't want to listen to the full episode).
[3] This is becoming an increasingly disturbing refrain from the American government: apparently you can only commit war crimes if you don’t have the backing of the world’s largest and most invasive military. See http://www.npr.org/sections/thetwo-way/2016/04/29/476178817/pentagon-report-says-airstrikes-on-afghan-hospital-wasnt-a-war-crime, or https://consortiumnews.com/2014/08/06/the-enduring-myth-of-hiroshima/

Monday, May 2, 2016

WSP Presents: Automation (by Mongolian correspondent Eric Chase)

Hi Everyone!
I really should have posted this a day ago, it being the First of May and the international day of labor and whatnot. Anyway, since my last post focused on labor in both past and present, I thought I would ask one of my friends actually capable of thinking about the future to give me his take on the labor of tomorrow. He was thankfully willing to do so, and even more generous in letting me post it here, because I'm lazy and terrible about keeping the deadlines I set for myself. Whatever. Without further ado, I give you Eric Chase's "Automation."

----------------

Artificial intelligence, automation, and the future of labor

I should begin this piece by stating unequivocally that I am an enthusiastic amateur with regard to computer and information technology, and am out of my element (Donny) a bit with regard to some of the assertions below. I try to avoid any idle speculation and base everything on something I've actually read, but if you have any corrections, please let me know.

Wouldn't it be fascinating if computers could learn in the same way we can? True, a computer can calculate the trajectory of an asteroid from billions of miles away, but can it recognize that the thing inside that piece of bread is a cat's face?

This question is, essentially, what this piece will be about: computers learning to process information without any direct commands from either a human or their programming. This process is called machine learning, and the approach discussed here is built on the concept of neural networks. Though not new in theory, machine learning of this kind is still in its infancy in the artificial intelligence (AI) field, and it seems poised to fundamentally alter human society in ways both foreseeable and unforeseeable.


I have recently been surprised at the number of people who don't know that self-driving cars are a thing. A few days ago on a walk with some friends, I made a passing comment about how soon we likely won't need traffic lights, and won't it be nice when we can sit back and read a book while the car drives itself? That friend then said, “What? Self-driving cars? That doesn't seem possible.”

This friend can be forgiven for her skepticism; as recently as 2004, a prominent study proposed that “executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior...” [link0]. And yet, it's not only possible, it's imminent [link1]. Google is just one of a handful of companies that have already built several self-driving cars. In the millions of miles these cars have driven, only one accident has been recorded as the fault of the driverless car (a 2 mph collision with a bus that the car's AI assumed would let it merge [link2]). The shift to driverless cars is in its infancy, a nascent trend which will culminate in another victory for the Internet of Things (if you are unfamiliar with the Internet of Things, here's a very comprehensive overview [link3]).

Ultimately, self-driving cars are merely one aspect of a trend which has been gaining momentum in recent years and decades: automation. Robots and AI have contributed mightily to the death of America's manufacturing sector [link4] and, just as crucially, to that of the unions which used to proliferate in it [link5]. For example: a truck driven by AI doesn't need to take a break or sleep, can travel with other trucks in convoys literally inches from each other's bumpers to reduce drag, and requires no pay beyond maintenance and an initial investment. Where does that leave the Teamsters Union?

Some changes self-driving cars might herald are predictable: the loss of jobs, significantly fewer deaths in car crashes, savings from improved fuel efficiency, and the ability to travel safely at any time of day and in virtually any conditions. If you've ever driven across the country in the wee hours of the morning, imagine that previously empty road filled with cars, all moving at a uniform speed.

There are, however, some alterations which are unpredictable. What impact will driverless cars have on police revenues from traffic tickets? Will people still need to be licensed to drive? Evidence suggests that over time not only will self-driving cars be popular, but eventually they will be mandatory [link6]– the culture wars that might spark provide an enticing mental exercise. 

And yet, the shift to self-driving cars is just one aspect of automation that will have a profound impact on human society. AI has been moving forward in leaps and bounds, performing tasks that scientists and specialists believed were still years if not decades away. The stories about Google's AlphaGo beating Go world champion Lee Sedol bespeak a feat even more impressive than their bombastic headlines indicate. Go was considered to be far too complex and too rooted in intuition for computers to learn, and yet here we are. AlphaGo learned how to play the game by first learning the rules, reviewing a huge library of expert matches, and then playing against itself millions of times. The number of possible board positions in Go is greater than the number of atoms in the universe; even a computer as sophisticated as AlphaGo can't brute-force its way through that many possibilities, so it has to evaluate positions and try to predict how its opponent will play, much like a human does.
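To put a rough number on that claim (this is a back-of-the-envelope sketch of my own, not anything from the AlphaGo papers): a 19x19 board has 361 points, each of which can be empty, black, or white, and the usual order-of-magnitude estimate for atoms in the observable universe is 10^80.

```python
# Back-of-the-envelope: why brute-forcing Go is hopeless.
# 3**361 is an upper bound on board positions (the count of *legal*
# positions is lower, but still on the order of 10**170).
import math

positions_upper_bound = 3 ** 361
atoms_exponent = 80  # ~10**80 atoms in the observable universe, give or take

print(f"Go positions (upper bound): ~10^{math.log10(positions_upper_bound):.0f}")
print(f"Atoms in the observable universe: ~10^{atoms_exponent}")
```

Even restricting the count to legal positions, no amount of raw computing power closes a gap of ninety-odd orders of magnitude; the program has to generalize from what it has seen, which is the whole point.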

Though Sedol's defeat was the result of a highly specialized bit of programming, and as such probably poses no immediate threat to your job right now, it represents another new trend in AI: deep learning. Deep learning is a bit more complex than yours truly can adequately explain, but it goes something like this: it's a way to structure a computer network into multiple layers that are designed and programmed to function much like the neurons in our own brains. One part of a neural network (several computers, or parts of a computer, linked together to feed each other information) may be good at recognizing element X, whereas the actual problem it's looking at is composed of elements X, Y, and Z. But somewhere else in this neural network is a component that is very good at recognizing element Y, and somewhere else, one that recognizes element Z. They feed this information up to a central control (picture the little man that runs the computer screen behind your eyes) which is good at sorting the data it receives and deciding which parts of that data are useful for solving whatever problem it is facing.

So, to vastly oversimplify, a computer which used to have a lot of trouble recognizing that that thing was a cat with a piece of toast around its face can now do so with no trouble. What deep learning is, then, is those networks becoming ever deeper, allowing more and more elements to be incorporated to solve more and more complex problems. The voice search function on your phone, which seems to get better every year? That's a product of machine learning. The function which allows you to take a picture of a check on your phone and deposit it without visiting the bank? Machine learning. The Google search function which recognizes misspelled words and corrects them for you automatically? The recommended movies on Netflix? The ads that you see on Facebook? All products of deep learning. Deep learning also, of course, makes driverless cars possible, along with computer feats like AlphaGo.
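For the curious, here is a toy sketch of that layering idea in plain Python/NumPy. The sizes, the "cat detector" framing, and the random weights are all made up for illustration; a real network learns its weights from millions of training examples and is vastly deeper and wider than this.

```python
# A cartoon two-layer network: simple detectors feeding a higher-level judgement.
import numpy as np

def relu(x):
    # A common "activation": keep positive signals, zero out the rest.
    return np.maximum(0, x)

rng = np.random.default_rng(seed=42)

x = rng.random(4)          # pretend input: four raw "pixel" values

# Layer 1: three hidden units, each a crude detector for some element (X, Y, Z).
W1 = rng.normal(size=(3, 4))
h = relu(W1 @ x)

# Layer 2: combine those detections into one higher-level score
# ("cat face wearing toast" vs. not).
W2 = rng.normal(size=(1, 3))
score = W2 @ h

print("detector activations:", h)
print("final score:", score)
```

"Deep" learning is essentially this structure repeated: stack more layers, and the later ones get to work with ever more abstract combinations of what the earlier ones found.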

As these networks become deeper and more integrated, the problems they will be able to solve without any human input will become more complex. And again, that raises interesting and important questions with regard to labor, both blue and white collar. For example, a recent report concluded that well over 114,000 jobs in the legal profession will be lost to automation in the US over the next two decades. That may not seem like much, but it accounts for over 39% of the profession's total employment, and the report was operating on the assumption of business as usual (i.e., computers not getting any smarter).

In the financial industry, there are already companies (Quantium and Binatix, for example) which employ deep-learning techniques to predict how the market will behave in certain situations and to pick viable investments for the long and short term. Though they are still fledgling companies, they can already demonstrate some success in the markets – for example, five-year-old Binatix no longer needs funding from outside sources, and runs on its own profits [link7].

While it's true that there are several tasks which are currently beyond even deep learning, it seems unlikely that they will remain impossibly difficult; a computer's capacity for learning, after all, is tied to its ability to process data. This brings us to a brief discussion of Moore's Law, the observation that computing power doubles about once every two years. Gordon Moore first made the observation in 1965 (and revised it to the two-year pace in 1975), and it has held with remarkable accuracy ever since [link8]. This trend has taken us from computers the size of a house to the computer in your pocket that outperforms the machines that put men on the moon in 1969.
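As a concrete illustration of what "doubling every two years" compounds to, here is a toy projection of my own, seeded with the roughly 2,300 transistors of the 1971 Intel 4004 (real chips have wandered above and below this idealized curve, but the shape is the point):

```python
# Idealized Moore's Law: transistor counts double every two years.
start_year, start_count = 1971, 2300   # roughly the Intel 4004

for year in range(start_year, 2026, 10):
    doublings = (year - start_year) / 2
    print(year, f"~{start_count * 2 ** doublings:,.0f} transistors")
```

Fifty-odd years of doubling takes you from a few thousand transistors to tens of billions, which is why even a slowing of the trend still leaves enormous headroom for the learning systems described above.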

The predictions of the death of Moore's Law have been plentiful since its inception, but the trend trudges onward. Obviously this is not really a “law” per se; the law itself is based on the fact that chips can hold more and more transistors every two years as our ability to make them smaller and smaller is refined. That trend simply cannot continue forever; we're already approaching the atomic scale for transistors, and eventually we'll hit a bottom limit. Many in the industry, including the man who made the original observation, predict the trend will run out of steam some time around 2025.

Yet that prediction is based on the physical limitations of transistors and, again, a business-as-usual approach to theorizing. Looking at Moore's Law as a function of computing power, however (and not just transistors), there are exciting technologies out there that could extend Moore's Law far into the future. For instance, quantum computing (another terrifically complicated concept that I cannot adequately explain) could theoretically harness the power of quantum phenomena to vastly improve computing power without using any transistors at all [link9]. But perhaps most intriguing is the idea that integrated networks that employ deep learning techniques might continue the trend of computer power doubling without requiring any improvements in physical computing power [link10].

This paints an interesting and perhaps slightly disturbing picture: a computer that can not only learn, but gets smarter as it learns. To return to the discussion of labor: what jobs could a self-improving computer not learn to do? [Of course, this leaves aside the discussion of a technological singularity or AI consciousness, warned against by the likes of Stephen Hawking and Elon Musk, which is a fascinating topic in its own right. If you're interested, there are dozens of very good videos and TED talks on YouTube concerning the singularity from multiple perspectives. If you'd prefer to read about it, I've found works by Ray Kurzweil to be informative.]

Employment itself is in jeopardy. As John Maynard Keynes predicted back in 1930, there is a point at which we will face widespread and damaging unemployment “...due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.” In the face of this change, which seems inevitable, human society will be forced to make a choice: preserve the old way of doing things in the economy, or fundamentally alter the way we view employment.

From Ulaangom, Mongolia
-Eric Chase


Link0: http://press.princeton.edu/titles/7704.html
Link2: http://www.engadget.com/2016/02/29/google-self-driving-car-accident/
Link3: http://www.pcmag.com/article2/0,2817,2418471,00.asp
Link10: http://www.extremetech.com/extreme/203490-moores-law-is-dead-long-live-moores-law

Other links of interest:
----------------

I swear to god I'm going to get my shit together here soon and start writing again. Thanks again, Eric! Miss you like crazy buddy.

Until next time, comrades...

-MC