The Arrival of Artificial Intelligence
Chris Dixon opened a truly wonderful piece in The Atlantic, entitled How Aristotle Created the Computer, like this:
The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.
Dixon goes on to describe the creation of Boolean logic (which has only two values: TRUE and FALSE, represented as 1 and 0 respectively), and Claude E. Shannon’s insight that those two values could be represented by a circuit, which itself has only two states: open and closed.1 Dixon writes:
Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)
Dixon is being modest: the distinction may be obvious to computer scientists, but it is precisely the clear articulation of said distinction that undergirds Dixon’s remarkable essay; obviously “computers” as popularly conceptualized were not invented by Aristotle, but he created the means by which they would work (or, more accurately, set humanity down that path).
Moreover, you could characterize Shannon’s insight in the opposite direction: distinguishing the logical and the physical layers depends on the realization that they can be two pieces of a whole. That is, Shannon identified how the logical and the physical could be fused into what we now know as a computer.
To that end, the dramatic improvement in the physical design of circuits (first and foremost the invention of the transistor and the subsequent application of Moore’s Law) by definition meant a dramatic increase in the speed with which logic could be applied. Or, to put it in human terms, how quickly computers could think.
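To make the distinction between the logical and the physical layer concrete, here is a minimal sketch in Python (my own illustration, not anything from Dixon’s essay or Shannon’s work): every value is either 0 or 1, and compound expressions are built from a handful of primitive operations, exactly the kind of thing a network of open-or-closed switches can evaluate.

```python
# A minimal sketch of the "logical layer": Boolean values (0 or 1) and the
# primitive operations that Shannon showed could be realized as switching circuits.

def AND(a, b):
    # Closed only if both switches are closed (two switches in series).
    return a & b

def OR(a, b):
    # Closed if either switch is closed (two switches in parallel).
    return a | b

def NOT(a):
    # Inverts the state of a switch.
    return 1 - a

def XOR(a, b):
    # A compound expression built purely from the primitives above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # prints 0, 1, 1, 0 for the four input pairs
```

The physical layer is whatever implements those switches; faster and smaller switches run the exact same logic, only more quickly, which is the point of the paragraph above.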
50 Years of AI
Earlier this week U.S. Treasury Secretary Steve Mnuchin, in the words of Dan Primack, “breezily dismissed the notion that AI and machine learning will soon replace wide swathes of workers, saying that ‘it’s not even on our radar screen’ because it’s an issue that is ‘50 or 100 years’ away.”
Naturally most of the tech industry was aghast: doesn’t Mnuchin read the seemingly endless announcements of artificial intelligence initiatives and startups on TechCrunch?
Then again, maybe Mnuchin’s view makes more sense than you might think; just read this piece by Maureen Dowd in Vanity Fair entitled Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse:
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
The rest of the article is preoccupied with the question of what might happen if computers are smarter than humans; Dowd quotes Stuart Russell to explain why she is documenting the debate now:
“In 50 years, this 18-month period we’re in now will be seen as being crucial for the future of the A.I. community,” Russell told me. “It’s when the A.I. community finally woke up and took itself seriously and thought about what to do to make the future better.”
50 years: that’s the same timeline as Mnuchin’s; is he perhaps worried about the same things as Elon Musk? And, frankly, should the Treasury Secretary concern himself with such things?
The problem is obvious: it’s not clear what “artificial intelligence” means.
Defining Artificial Intelligence
Artificial intelligence is very difficult to define for a few reasons. First, there are two types of artificial intelligence: the artificial intelligence described in that Vanity Fair article is Artificial General Intelligence, that is, a computer capable of doing anything a human can. That is in contrast to Artificial Narrow Intelligence, in which a computer does what a human can do, but only within narrow bounds. For example, specialized AI can play chess, while a different specialized AI can play Go.
What is kind of amusing — and telling — is that, as John McCarthy, who coined the name “Artificial Intelligence”, noted, the definition of specialized AI is changing all the time. Specifically, once a task formerly thought to characterize artificial intelligence becomes routine — like the aforementioned chess-playing, or Go, or a myriad of other taken-for-granted computer abilities — we no longer call it artificial intelligence.
That makes it especially hard to tell where computers end and artificial intelligence begins. After all, accounting used to be done by hand, with rooms of clerks working their way through ledgers.
Within a decade that picture was obsolete, replaced by an IBM mainframe. A computer was doing what a human could do, albeit within narrow bounds. Was it artificial intelligence?
Technology and Humanity
In fact, we already have a better word for this kind of innovation: technology. Technology, to use Merriam-Webster’s definition, is “the practical application of knowledge especially in a particular area.” The story of technology is the story of humanity: the ability to control fire, the wheel, clubs for fighting — all are technology. All transformed the human race, thanks to our ability to learn and transmit knowledge; once one human could control fire, it was only a matter of time until all humans could.
It was technology that transformed homo sapiens from hunter-gatherers to farmers, and it was technology that transformed farming such that an ever smaller percentage of the population could support the rest. Many millennia later, it was technology that led to the creation of tools like the flying shuttle, which doubled the output of weavers, driving up the demand for spinners, which in turn drove innovations of their own, like the water-powered roller spinning frame. For the first time humans were leveraging non-human and non-animal forms of energy to drive their technological inventions, setting off the industrial revolution.
You can see the parallels between the industrial revolution and the invention of the computer: the former brought external energy to bear in a systematic way on physical activities formerly done by humans; the latter brings external energy to bear in a systematic way on mental activities formerly done by humans. Recall the analogy made by Steve Jobs:
I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth, how many kilocalories did they expend to get from point A to point B. And the condor came in at the top of the list, it surpassed everything else, and humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.
But somebody there had the imagination to test the efficiency of a human riding a bicycle. The human riding a bicycle blew away the condor, all the way off the top of the list, and it made a really big impression on me that we humans are tool builders, and we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me, a computer has always been a bicycle of the mind.
In short, while Dixon traced the logic of computers back to Aristotle, the very idea of technology — of which, without question, computers are a part — goes back even further. Creating tools that do what we could do ourselves, but better and more efficiently, is what makes us human.
Machine Learning
That definition, you’ll note, is remarkably similar to that of artificial intelligence; indeed, it’s tempting to argue that artificial intelligence, at least the narrow variety, is simply technology by a different name. Just as we designed the cotton gin, so too did we design accounting software and automated manufacturing. And, in fact, those are all related: all involved overt design, in which a human anticipated the functionality and built a machine that could execute that functionality on a repeatable basis.
That, though, is why today is different.
Recall that while logic was developed over thousands of years, it was only partway through the 20th century that said logic was fused with physical circuits. Once that happened, the application of that logic progressed unbelievably quickly.
Technology, meanwhile, has been under development for even longer than logic has. However, just as the application of logic was long bound by the human mind, the development of technology has had the same limitation, and that includes the first half-century of the computer era. Accounting software is in the same genre as the spinning frame: deliberately designed by humans to solve a specific problem.
Machine learning is different.2 Now, instead of humans designing algorithms to be executed by a computer, the computer is designing the algorithms.3 It is still Artificial Narrow Intelligence — the computer is bound by the data and goal given to it by humans — but machine learning is, in my mind, meaningfully different from what has come before. Just as Shannon fused the physical with the logical to make the computer, machine learning fuses the development of tools with computers themselves to make (narrow) artificial intelligence.
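To make that contrast concrete, here is a minimal sketch (a toy of my own, not any system referenced above) in which the rule y = 3x + 2 is first written by hand and then recovered by the program purely from example data, by repeatedly nudging the parameters of an unknown rule until the error shrinks:

```python
# Example data generated by a rule the program is never told: y = 3x + 2
data = [(x, 3 * x + 2) for x in range(-10, 11)]

# The old approach: a human writes the rule directly.
def handwritten_rule(x):
    return 3 * x + 2

# The machine-learning approach: start with an unknown rule y = w*x + b and
# adjust its parameters to reduce the error on the examples (gradient descent).
w, b = 0.0, 0.0
learning_rate = 0.001
for _ in range(5000):
    for x, y in data:
        error = (w * x + b) - y
        w -= learning_rate * error * x   # nudge the weight against the error
        b -= learning_rate * error       # nudge the bias against the error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")   # approximately y = 3.00x + 2.00
print(handwritten_rule(4), round(w * 4 + b, 2))  # both give roughly 14
```

The human still supplies the data and the goal (shrink the error), which is why this is narrow intelligence; but the rule itself, the part a programmer used to write by hand, comes out of the loop.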
This is not to overhype machine learning: the applications are still highly bound and often worse than human-designed systems, and we are far, far away from Artificial General Intelligence. It seems clear to me, though, that we are firmly in Artificial Narrow Intelligence territory: the truth is that humans have made machines to replace their own labor from the beginning of time; it is only now that the machines are creating themselves, at least to a degree.4
Life and Meaning
The reason this matters is that pure technology is hard enough to manage: the price we pay for technological progress is all of the humans who are no longer necessary. The Industrial Revolution benefitted humanity in the long run, but in the short run there was tremendous suffering, interspersed with wars that were far more destructive thanks to technology.
What then are the implications of machine learning, that is, the (relatively speaking) fantastically fast creation of algorithms that can replace a huge number of jobs that generate data (data being the key ingredient to creating said algorithms)? To date automation has displaced blue-collar workers; are we prepared for machine learning to displace huge numbers of white-collar ones?
This is why Mnuchin’s comment was so disturbing; it also, though, is why the obsession of so many technologists with Artificial General Intelligence is just as frustrating. I get the worry that computers far more intelligent than any human will kill us all; more people, though, should be concerned about the imminent creation of a world that makes huge swathes of people redundant. How many will care if artificial intelligence destroys life if it has already destroyed meaning?
1. This is only Part 1! Definitely read the whole thing
2. Not, to be clear, re-named analytics software
3. Albeit guided by human-devised algorithms
4. And, by extension, there is at least a plausible path to general intelligence