  One such person was Dmitry Itskov, a thirty-four-year-old Russian tech multimillionaire and founder of the 2045 Initiative, an organization whose stated aim was “to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.” One of Itskov’s projects was the creation of “avatars”—artificial humanoid bodies that would be controlled through brain-machine interfaces, technologies that would be complementary to uploaded minds. He had funded Randal’s work with Carboncopies, and in 2013 they organized a conference together at New York’s Lincoln Center called Global Future 2045, aimed, according to its promotional blurb, at the “discussion of a new evolutionary strategy for humanity.”

  When we spoke, Randal was working with another tech entrepreneur named Bryan Johnson, who had sold his automated payment company to PayPal a couple of years back for $800 million, and who now controlled a venture capital concern called the OS Fund, which, I learned from its website, “invests in entrepreneurs working towards quantum leap discoveries that promise to rewrite the operating systems of life.” This language struck me as strange and unsettling in a way that revealed something crucial about the attitude toward human experience that was spreading outward from its Bay Area epicenter—a cluster of software metaphors that had metastasized into a way of thinking about what it meant to be a human being. (Here’s how Johnson had put it in a manifesto on the fund’s website: “In the same way that computers have operating systems at their core—dictating the way a computer works and serving as a foundation upon which all applications are built—everything in life has an operating system (OS). It is at the OS level that we most frequently experience a quantum leap in progress.”)

  And it was the same essential metaphor that lay at the heart of Randal’s emulation project: the mind as a piece of software, an application running on the platform of flesh. When he used the term “emulation,” he was using it explicitly to evoke the sense in which a PC’s operating system could be emulated on a Mac, as what he called “platform independent code.”

  The relevant science for whole brain emulation is, as you’d expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: First, you scan the pertinent information in a person’s brain—the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a by-product—through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc.). That scan then becomes a blueprint for the reconstruction of the subject brain’s neural networks, which is in turn converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer, or a humanoid machine designed to reproduce and extend the experience of embodiment—something, perhaps, like Natasha’s Primo Posthuman.

  The whole point of substrate independence, as Randal pointed out to me whenever I asked him what it would be like to exist outside of a human body—and I asked him many times, in various ways—was that it would be like no one thing, because there would be no one substrate, no one medium of being.

  This was the concept transhumanists referred to as “morphological freedom”: the liberty to take any bodily form technology permits.

  “You can be anything you like,” as an article about uploading in Extropy magazine put it in the mid-1990s. “You can be big or small; you can be lighter than air, and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling.”

  What really interested me about this idea was not how strange and far-fetched it seemed (though it ticked those boxes resolutely enough), but rather how fundamentally identifiable it was, how universal. When I was talking to Randal, I was mostly trying to get to grips with the feasibility of the project, and with what it was he envisioned as a desirable outcome. But then we would part company—I would hang up the call, or I would take my leave and start walking toward the nearest BART station—and I would find myself feeling strangely affected by the whole project, strangely moved.

  Because there was something, in the end, paradoxically and definitively human in this desire for liberation from human form. I found myself thinking often of W. B. Yeats’s “Sailing to Byzantium,” in which the aging poet writes of his burning to be free of the weakening body, the sickening heart—to abandon the “dying animal” for the man-made and immortal form of a mechanical bird. “Once out of nature,” he writes, “I shall never take/My bodily form from any natural thing/But such a form as Grecian goldsmiths make.”

  Yeats, clearly, was not writing about the future so much as an idealized phantasm of the ancient world. But the two things have never been clearly separated in our minds, in our cultural imaginations. All utopian futures are, in one way or another, revisionist readings of a mythical past. Yeats’s fantasy here is the fantasy of being an archaic automaton invested with an incorruptible soul, a mechanical bird singing eternally. He was writing about the terror of aging and bodily decline, about the yearning for immortality. He was asking the “sages” to emerge from a “holy fire” and to gather him “into the artifice of eternity.” He was dreaming of a future: an impossible future in which he would not die. He was dreaming, I came to feel, of a Singularity. He was singing of what was past, and passing, and to come.

  In May 2007, Randal was one of thirteen participants at a workshop on mind uploading held at the Future of Humanity Institute. The event resulted in the publication of a technical report, coauthored by Anders Sandberg and Nick Bostrom, entitled “Whole Brain Emulation: A Roadmap.” The report began with the statement that mind uploading, though still a remote prospect, was nonetheless theoretically achievable through the development of technologies already in existence.

  A criticism commonly raised against the idea of simulating minds in software is that we don’t understand nearly enough about how consciousness works to even know where to start reproducing it. The report countered this criticism by claiming that, as with computers, it wasn’t necessary to comprehend a whole system in order to emulate it; what was needed was a database containing all the relevant information about the brain in question, and the dynamic factors that determine changes in its state from moment to moment. What was needed, in other words, was not an understanding of the information, but merely the information per se, the raw data of the person.

  A major requirement for the harvesting of this raw data, they wrote, was “the ability to physically scan brains in order to acquire the necessary information.” A development that appeared especially promising in this regard was something called 3D microscopy, a technology for producing extremely high-resolution three-dimensional scans of brains.

  Another of the workshop’s invited participants was a man named Todd Huffman, the CEO of a San Francisco company called 3Scan, which happened to be pioneering exactly this technology. Todd was among the collaborators Randal had mentioned—one of the people who kept him regularly updated about their work, and its relevance to the overall project of uploading.

  Although one of 3Scan’s initial sources of start-up funding was Peter Thiel—a man who, although he did not explicitly identify with the transhumanist movement, was famously invested in the cause of vastly extending human life spans, in particular his own—it was not a company that had any explicit designs on the brain uploading market. (And the major reason for this was that such a market was nowhere near existing.) It promoted its technology as a tool for the diagnosis and analysis of cell pathologies, as a medical device. But when I met with Todd at 3Scan’s offices in Mission Bay, he was open about the extent to which his work was motivated by a long-standing preoccupation with translating individual human minds into computable code. He was not, he said, interested in standing on the sidelines and waiting for the Singularity to just happen, by sheer force of some quasi-mystical historical determinism.

  “You know what they say,” said Todd. “The best way to predict the future is to create it.”

  He was a fully paid-up transhumanist, Todd: he was a member of Alcor, and had an implant in the tip of his left ring finger that, by means of a mild vibration, allowed him to sense the presence of electromagnetic fields. Visually, he was a cut-up composite of two or three different guys: vigorous rustic beard, plumage of pink hair, Birkenstocks, black-painted toenails.

  The people who worked for him, he said, knew of his long-term interest in whole brain emulation, but it was not something that drove the company in its day-to-day dealings. It just so happened that the sort of technology that would ultimately prove useful for scanning human brains for emulation was, he said, useful right now for more immediate projects like analyzing pathologies for cancer research.

  “The way I look at it,” he said, “is that mind uploading is not driving industry, but industry is driving mind uploading. There are a lot of industries that have nothing to do with mind uploading, but that are driving the development of technologies that will be used for uploading. Like the semiconductor industry. That space has developed techniques for very fine-grained milling and measurement, and also the kind of electron microscopes that turn out to be very useful for doing high-resolution 3D reconstructions of neurons.”

  The moon-shot ethos of Silicon Valley was such that Todd never felt uncomfortable discussing his interest in brain uploading, but neither was it something that tended to come up in business meetings. There was a very small community, he said, of people who were thinking seriously about this stuff at a high scientific level, and an even smaller number of people who were working on it.

  “I know people who are doing work on uploading,” he said, “and doing it in secret, because they’re afraid of being ostracized within their scientific communities, or of being passed over for funding or tenure or promotions. I don’t have that; I work for myself, so no one is going to throw me out of the building.”

  He walked me around the lab, cracking his knuckles intermittently as we moved among the bewildering assemblages of optics and digitizing devices, the fine slices of rodent brains preserved in glass like ostentatious servings of neural carpaccio. These slices had been imaged and digitized using the 3D microscopes, for detailed databasing—of neuron placement, dimensions and arrangements of axons, dendrites, synapses.

  Looking at these brain slices, I understood that, even if a greatly scaled-up version of this scanning technology eventually made it possible to perform whole brain emulation, it would be impossible to emulate the brain of an animal without killing that animal—or at least killing the original, embodied version. This was an acknowledged problem among advocates of emulation, and nanotechnology—technology on a scale sufficiently minuscule for the manipulation of individual molecules and atoms—was one area that offered some hope. “We can imagine,” writes Murray Shanahan, a professor of cognitive robotics at Imperial College London, “creating swarms of nano-scale robots capable of swimming freely in the brain’s network of blood vessels, each one then attaching itself like a limpet to the membrane of a neuron or close to a synapse.” (Randal, for his part, spoke enthusiastically of something called “neural dust,” a technology being developed at U.C. Berkeley that would allow infinitesimally tiny wireless probes to be applied to neurons, extracting data without causing any damage. “It’d be like taking an aspirin,” he said.)

  I began to think of these brain slices as illustrating the strange triangulations of the relationship between humans and nature and technology. Here was a sliver of an animal’s central nervous system, pressed and mounted in glass in order to make its contents readable by a machine. What did it mean to do this to a brain—to an animal brain, a human brain? What would it mean to make consciousness readable, to translate the inscrutable code of nature into the vulgate of machines? What would it mean to extract information from such a substrate, to transpose it to some other medium? Would the information mean anything at all outside the context of its origins?

  I felt suddenly the extreme strangeness of this notion of ourselves as essentially information, as contained in some substrate that was not what we were, but merely the medium for our intelligence—as though our bodies might be categorized, along with the glass slides in which these brain slices were preserved, as mere casing. A certain kind of extreme positivist view of human existence insists that what we are is intelligence; and intelligence, as well as referring to the application of skills and knowledge, also means information that is gathered, extracted, filed.

  “Most of the complexity of a human neuron,” writes Ray Kurzweil, “is devoted to maintaining its life-support functions, not its information processing capabilities. Ultimately, we will be able to port our mental processes to a more suitable computational substrate. Then our minds won’t have to stay so small.”

  At the root of this concept of whole brain emulation, and of transhumanism itself as a movement or an ideology or a theory, was, I realized, the sense of ourselves as trapped in the wrong sort of stuff, constrained by the material of our presence in the world. To talk of achieving a “more suitable computational substrate” only made sense if you thought of yourself as a computer to begin with.

  —

  In philosophy of mind, the notion that the brain is essentially a system for the processing of information, and that in this it therefore resembles a computer, is known as computationalism. As an idea, it predates the digital era. In his 1655 work De Corpore, for instance, Thomas Hobbes wrote, “By reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or subtract.”

  And there has always been a kind of feedback loop between the idea of the mind as a machine, and the idea of machines with minds. “I believe that by the end of the century,” wrote Alan Turing in 1950, “one will be able to speak of machines thinking without expecting to be contradicted.”

  As machines have grown in sophistication, and as artificial intelligence has come to occupy the imaginations of increasing numbers of computer scientists, the idea that the functions of the human mind might be simulated by computer algorithms has gained more and more momentum. In 2013, the EU invested over a billion euros of public funding in a venture called the Human Brain Project. The project, based in Switzerland and directed by the neuroscientist Henry Markram, was set up to create a working model of a human brain and, within ten years, to simulate it on a supercomputer using artificial neural networks.

  Not long after I left San Francisco, I traveled to Switzerland to attend something called the Brain Forum, an extravagantly fancy conference on neuroscience and technology at the University of Lausanne, where the Human Brain Project is based. One of the people I met there was a Brazilian named Miguel Nicolelis, a professor at Duke University. Nicolelis is one of the world’s foremost neuroscientists, and a pioneer in the field of brain-machine interface technology, whereby robotic prostheses are controlled by the neuronal activity of human beings. (Randal had referred to this technology several times during our discussions.)

  Nicolelis was a fulsomely bearded man with an impish manner about him; the Nikes he wore with his suit seemed less an affectation than an insistence on comfort over convention. He was in Lausanne to give a talk about a brain-controlled robotic exoskeleton he had developed, which had allowed a quadriplegic man to kick the first ball during the 2014 World Cup opening ceremony in São Paulo.

  Given the frequency with which his own work was cited by transhumanists, I was curious to find out what Nicolelis thought about the prospect of mind uploading. He did not, it turned out, think much of it. The whole idea of simulating a human mind in any kind of computational platform was fundamentally at odds, he said, with the dynamic nature of brain activity, of what we think of as the mind. It was for this same reason, he said, that the Human Brain Project was utterly ill-conceived.

  “The mind is much more than information,” he said. “It is much more than data. That’s the reason you can’t use a computer to find out how the brain works, what is going on in there. The brain is simply not computable. It cannot be simulated.”

  Brains, like many other naturally occurring phenomena, processed information; but this didn’t mean, for Nicolelis, that such processing could be rendered algorithmically and run on a computer. The central nervous system of a human being had less in common with a laptop than it did with other naturally occurring complex systems like schools of fish or flocks of birds—or, indeed, stock markets—where elements interact and coalesce to form a single entity whose movements are inherently unpredictable. As he put it in The Relativistic Brain, a book coauthored with the mathematician Ronald Cicurel, brains constantly reorganize themselves, both physically and functionally, as a result of actual experience: “Information processed by the brain is used to reconfigure its structure and function, creating a perpetual recursive integration between information and brain matter….The very characteristics that define a complex adaptive system are the ones that undermine our capacity to accurately predict or simulate its dynamic behavior.”

  Nicolelis’s skepticism about the computability of the brain put him in a minority at the Brain Forum. No one was talking about anything as remote and abstract as brain uploading, but almost every sentence I heard reinforced the consensus that the brain could be translated into data. The underlying message of the conference as a whole seemed to be that scientists were still almost entirely ignorant about how the brain did what it did, but that the scanning of brains and the building of vast dynamic models were absolutely necessary if we were to begin learning the first thing about what was going on inside our heads.