Monday, May 2, 2016

WSP Presents: Automation (by Mongolian correspondent Eric Chase)

Hi Everyone!
I really should have posted this a day ago, it being the First of May and the international day of labor and whatnot. Anyway, since my last post focused on labor both past and present, I thought I would ask one of my friends actually capable of thinking about the future to give me his take on the labor of tomorrow. He was thankfully willing, and even more generously let me post it here, because I'm lazy and terrible about keeping the deadlines I set for myself. Whatever. Without further ado, I give you Eric Chase's "Automation."

----------------

Artificial intelligence, automation, and the future of labor

I should begin this piece by stating unequivocally that I am an enthusiastic amateur with regard to computer and information technology, and am a bit out of my element (Donny) with regard to some of the assertions below. I try to avoid idle speculation and base everything on something I've actually read, but if you have any corrections, please let me know.

Wouldn't it be fascinating if computers could learn in the same way we can? True, a computer can calculate the trajectory of an asteroid from billions of miles away, but can it recognize that the thing inside that piece of bread is a cat's face?

This question is, essentially, what this piece will be about: computers learning to process information from examples rather than from explicit, hand-written instructions. This approach, called machine learning, is often built on the concept of neural networks. Though not new in theory, machine learning is still in its infancy in the artificial intelligence (AI) field, and seems poised to fundamentally alter human society in ways foreseeable and unforeseeable.
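To make "learning from examples rather than explicit rules" concrete, here is a minimal sketch (in Python, with toy numbers of my own choosing) of a perceptron, one of the oldest neural-network building blocks, learning the logical AND rule purely from labeled examples:

import random

# Training data: inputs and the desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights, randomly initialized
b = 0.0                                             # bias
lr = 0.1                                            # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(50):                    # sweep over the examples repeatedly
    for x, target in examples:
        error = target - predict(x)    # how wrong was the guess?
        w[0] += lr * error * x[0]      # nudge the weights toward the answer
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])   # -> [0, 0, 0, 1]

Nobody ever tells the program what "AND" means; it discovers the rule by being corrected, which is the whole idea in embryo.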


I have recently been surprised at the number of people who don't know that self-driving cars are a thing. A few days ago on a walk with some friends, I made a passing comment about how we likely won't need traffic lights soon, and won't it be nice when we can sit back and read a book while the car drives itself? One friend then said, “What? Self-driving cars? That doesn't seem possible.”

This friend can be forgiven for her skepticism; as recently as 2004, a prominent study proposed that “executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior...” [link0]. And yet, it's not only possible, it's imminent [link1]. Google is just one of a handful of companies that have already built self-driving cars. In the millions of miles these cars have driven, only one accident has been recorded as the fault of the driverless car (a 2 mph collision with a bus that the car's AI assumed would let it merge [link2]). The shift to driverless cars is still in its infancy, a trend which will culminate in another victory for the Internet of Things (if you are unfamiliar with the Internet of Things, here's a very comprehensive overview [link3]).

Ultimately, self-driving cars are merely one aspect of a trend which has been gaining momentum in recent years and decades: automation. Robots and AI have contributed mightily to the death of America's manufacturing sector [link4] and, just as crucially, to the unions that used to proliferate within it [link5]. For example: a truck driven by AI doesn't need to take a break or sleep, can travel with other trucks in convoys literally inches from each other's bumpers to reduce drag, and requires no pay beyond an initial investment and maintenance. Where does that leave the Teamsters Union?

Some changes self-driving cars might herald are predictable: the loss of jobs, far fewer deaths in car crashes, savings from improved fuel efficiency, and the ability to travel safely at any time of day and in virtually any conditions. If you've ever driven across the country in the wee hours of the morning, imagine that previously empty road filled with cars, all moving at a uniform speed.

There are, however, some alterations which are unpredictable. What impact will driverless cars have on police revenues from traffic tickets? Will people still need to be licensed to drive? Evidence suggests that over time self-driving cars will not only be popular but eventually mandatory [link6] – the culture wars that mandate might spark provide an enticing mental exercise.

And yet, the shift to self-driving cars is just one aspect of automation that will have a profound impact on human society. AI has been advancing in leaps and bounds, performing tasks that scientists and specialists believed were still years if not decades away. The stories about Google's AlphaGo beating Go world champion Lee Sedol bespeak a feat even more impressive than their bombastic headlines indicate. Go was considered far too complex and too rooted in intuition for computers to learn, and yet here we are. AlphaGo learned to play by first learning the rules, then reviewing the greatest known matches, then playing against itself millions of times. The number of possible board positions in Go is greater than the number of atoms in the observable universe; even a computer as sophisticated as AlphaGo can't brute-force its way through that much data, so it has to evaluate positions and predict how its opponent will play.
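That scale is easy to sanity-check. The quick Python sketch below uses the crude upper bound of three states (empty, black, white) per intersection; the true count of legal positions is "only" about 2.1 × 10^170, but the point stands either way:

positions = 3 ** (19 * 19)   # each of Go's 361 intersections: empty, black, or white
atoms = 10 ** 80             # common rough estimate for the observable universe

print(f"Go positions (upper bound): about 10^{len(str(positions)) - 1}")          # ~10^172
print(f"Positions per atom in the universe: about 10^{len(str(positions // atoms)) - 1}")   # ~10^92

No amount of raw speed closes a gap like that; hence the need for learned judgment rather than exhaustive search.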

Though Sedol's defeat was the result of a highly specialized bit of programming and as such probably poses no immediate threat to your job right now, it represents another new trend in AI: deep learning. Deep learning is a bit more complex than yours truly can adequately explain, but it goes something like this: a computer network is structured into multiple layers designed and programmed to function much like the neurons in our own brains. One part of a neural network (several computers, or parts of a computer, linked together to feed each other information) may be good at recognizing element X, whereas the actual problem is composed of elements X, Y, and Z. Somewhere else in the network is a component very good at recognizing element Y, and somewhere else, element Z. These components feed their findings up to a central control (picture the little man who runs the computer screen behind your eyes), which is good at recognizing and sorting the data it receives and deciding which parts are useful to solving whatever problem it faces.

So, to vastly oversimplify: a computer which used to have a lot of trouble recognizing that that thing was a cat with a piece of toast around its face can now do so with no trouble. Deep learning, then, is those networks becoming ever deeper, allowing more and more elements to be incorporated to solve more and more complex problems. The voice search function on your phone, which seems to get better every year? That's a product of machine learning. The feature that lets you deposit a check by photographing it with your phone instead of visiting the bank? Machine learning. The Google search function which recognizes misspelled words and corrects them automatically? The recommended movies on Netflix? The ads you see on Facebook? All products of deep learning. Deep learning also, of course, makes driverless cars possible, along with computer feats like AlphaGo.
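If the perceptron earlier is a single neuron, here is the next step up: a toy two-layer network (plain numpy, with layer sizes and a learning rate of my own choosing) trained on XOR, a problem a single layer provably cannot solve. Each layer transforms the data and hands it upward, and stacking more of them is the "deep" in deep learning:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer: four "neurons"
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer feeds the next, like neurons signaling upward.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: assign each weight its share of the blame for the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]

(Toy nets like this occasionally get stuck; a different random seed usually fixes it. Real deep-learning systems are essentially this, scaled up by many more layers and many orders of magnitude.)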

As these networks become deeper and more integrated, the problems they can solve without any human input will become more complex. And again, that raises interesting and important questions with regard to labor, both blue and white collar. For example, a recent report concluded that over 114,000 jobs in the US legal profession will be lost to automation over the next two decades. That may not seem like much, but it accounts for over 39% of the profession's total employment, and the report assumed business as usual (i.e., computers not getting any smarter).

In the financial industry, there are already companies (Quantium and Binatix, for example) which employ deep-learning techniques to predict how the market will behave in certain situations and to pick viable investments for the long and short term. Though still fledgling, they can demonstrate some success in the markets – the five-year-old Binatix, for example, no longer needs outside funding and runs on its own profits [link7].

While it's true that several tasks are currently beyond even deep learning, it seems unlikely they will remain impossibly difficult; a computer's capacity for learning, after all, is tied to its ability to compute data. This brings us to a brief discussion of Moore's Law, the observation that computing power doubles about once every two years. Gordon Moore first made the observation in 1965 (and revised the doubling period to two years in 1975), and it has held with remarkable accuracy ever since [link8]. This trend has taken computers from the size of a house to a phone in your pocket that outperforms the machines that put men on the moon in 1969.
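The arithmetic behind that trajectory is simple enough to check yourself. Assuming one doubling every two years from the Intel 4004's roughly 2,300 transistors in 1971 (rough but commonly cited figures, used here purely for illustration), the projection lands surprisingly close to the flagship chips of each era:

start_year, start_transistors = 1971, 2_300   # Intel 4004, roughly

for year in (1981, 1991, 2001, 2011, 2015):
    doublings = (year - start_year) // 2      # one doubling per two years
    print(year, f"{start_transistors * 2 ** doublings:,} transistors")

# 2015 comes out near ten billion -- about where the biggest real chips sat.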

Predictions of the death of Moore's Law have been plentiful since its inception, but the trend trudges onward. Obviously this is not really a “law” per se; the observation rests on the fact that chips can hold more and more transistors every two years as our ability to make them smaller is refined. That trend simply cannot continue forever; transistors are already approaching the atomic scale, and eventually we'll hit a hard lower limit. Many in the industry, including Moore himself, predict the trend will run out of steam some time around 2025.

Yet that prediction is based on the physical limitations of transistors and, again, a business-as-usual approach to theorizing. Looking at Moore's Law as a function of computing power rather than transistor counts, there are exciting technologies out there that could extend it far into the future. For instance, quantum computing (another terrifically complicated concept that I cannot adequately explain) could theoretically harness quantum phenomena to vastly improve computing power without using any transistors at all [link9]. But perhaps most intriguing is the idea that integrated networks employing deep learning techniques might keep effective computing capability doubling without requiring any improvements in the physical hardware [link10].
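I won't pretend to explain quantum computing either, but its basic unit, the qubit, can at least be simulated classically in a few lines (a toy state-vector sketch of the standard textbook formalism; simulating it classically of course forfeits the speedup):

import numpy as np

ket0 = np.array([1.0, 0.0])                    # the qubit starts as a definite 0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                # now a superposition: partly 0 and partly 1 at once
probs = np.abs(state) ** 2      # probabilities of measuring each outcome
print(probs)                    # [0.5 0.5] -- equal odds of 0 or 1

A real quantum computer entangles many qubits, whose joint state space grows exponentially, which is where the hoped-for leap in computing power comes from.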

This paints an interesting and perhaps slightly disturbing picture: a computer that can not only learn, but gets smarter as it learns. To return to the discussion of labor: what jobs could a self-improving computer not learn to do? [Of course, this leaves aside the question of a technological singularity or AI consciousness, warned against by the likes of Stephen Hawking and Elon Musk, a fascinating topic in its own right. If you're interested, there are dozens of very good videos and TED talks on YouTube concerning the singularity from multiple perspectives. If you'd prefer to read about it, I've found the works of Ray Kurzweil informative.]

Employment itself is in jeopardy. As John Maynard Keynes predicted back in 1930, there is a point at which we will face widespread and damaging unemployment “...due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.” In the face of this seemingly inevitable change, human society will be forced to make a choice: preserve the old way of doing things in the economy, or fundamentally alter the way we view employment.

From Ulaangom, Mongolia
-Eric Chase


Link0: http://press.princeton.edu/titles/7704.html
Link2: http://www.engadget.com/2016/02/29/google-self-driving-car-accident/
Link3: http://www.pcmag.com/article2/0,2817,2418471,00.asp
Link10: http://www.extremetech.com/extreme/203490-moores-law-is-dead-long-live-moores-law

----------------

I swear to god I'm going to get my shit together here soon and start writing again. Thanks again, Eric! Miss you like crazy buddy.

Until next time, comrades...

-MC
