Singularity: The Schools of Techno-faith


“A singularity is a sign that your model doesn’t apply past a certain point, not infinity arriving in real life.”

– Forum comment

 

In 1949, a Jesuit priest named Pierre Teilhard de Chardin proposed that all future machines would be linked in a vast global network that would allow human minds to merge (O’Gieblyn). This would eventually lead to the “Omega Point”: the idea that the universe is destined to spiral towards a final point of complete unification, allowing humans to live forever and become one with God (Draper). This was not the first prediction of spiraling technology whose consequence is the rise of a superintelligence and human immortality, nor will it be the last. But this faith-heavy prediction, and its parallels to what would later be called the Technological Singularity, may lead us to wonder about contemporary claims of the ‘inevitable’ rise of a superintelligence precipitating either the end of humanity or the ascension of humans as immortal spiritual machines. Meghan O’Gieblyn, in the final paragraph of her article, concludes: “It was late. The cafe had emptied and a barista was sweeping near our table. As we stood to go, I felt that our conversation was unresolved. I suppose I’d been hoping that Benek would hand me some portal back to the faith, one paved by the certitude of modern science. But if anything had become clear to me, it was my own desperation, my willingness to spring at this largely speculative ideology that offered a vestige of that first religious promise. I had disavowed Christianity, and yet I had spent the past 10 years hopelessly trying to re-create its visions by dreaming about our post-biological future – a modern pantomime of redemption. What else could lie behind this impulse but the ghost of that first hope?” Is it an act of faith to warn of doomsday or promise immortality? It depends on who you ask.

 

In 1958, Stanislaw Ulam, writing in an obituary for the mathematician John von Neumann, recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue” (Ulam). In 1965, I. J. Good elaborated on a similar premise in his speculative essay “Speculations Concerning the First Ultraintelligent Machine,” predicting an “intelligence explosion” driven by the recursive self-improvement of machine intelligence. Good’s premise would not be popularized until nearly three decades later by Vernor Vinge, a retired professor of mathematics and computer science and a science fiction writer, whose 1993 essay “The Coming Technological Singularity” declared that a singularity would signal the end of the human era.

 

What is the Technological Singularity? It depends on how one defines “singularity”, and in that regard there are many definitions. AI researcher Eliezer Yudkowsky identifies three schools of thought when it comes to the singularity: Accelerating Change, Event Horizon, and Intelligence Explosion.

 

What is distinct about the three schools is that their core claims tend to be consistent while their strong claims about the consequences contradict one another (Yudkowsky). These three schools offer various claims as to the nature of the singularity and how it will manifest, which this essay will explore using core texts from followers of each school. One major school missing from this list, which will be explored near the end, is a fourth school of thought: that a singularity may be unlikely, if not impossible (Baez). This school seems to get overlooked because of its realist (pessimistic, relative to the others) point of view, yet it is arguably the most popular and the most thoroughly argued.

 

Accelerating Change is the first school. Its core claim is that change and development are exponential, not linear as we commonly perceive them to be. The change witnessed in the last century was orders of magnitude faster than that of the entire first millennium, so it is not reasonable to use the past as a gauge for projecting our future. Because technology advances in a typically exponential way, this school tries to predict when new technologies will arrive and when they will cross certain thresholds, such as the emergence of an artificial intelligence (Yudkowsky).

 

One of this school’s more prominent followers is Ray Kurzweil, a futurist responsible for accessibility technology such as text-to-speech improvements, speech recognition, and optical character recognition, as well as hardware to help persons with disabilities. Kurzweil believes that predictions of the future are short-sighted in scope and that predictors often underestimate and misinterpret the speed of change: “Virtually every presenter looked at the progress of the last fifty years and used it as a model for the next fifty years” (Kurzweil). Kurzweil claims that an assessment of the history of technology shows otherwise, that the rate of change is exponential: “Exponential growth is a feature of evolutionary process, of which technology is an example.”

 

This comparison of the history of technology to evolution is not without its flaws, since, to my knowledge, evolution does not “advance”: whether one evolutionary feature is more advanced than the last is open to debate. The flawed analogy does not invalidate the argument that technology improves exponentially, but it does reveal Kurzweil’s assumption that the future of technology is in some way biological. He critiques the idea that the speed at which past technologies have advanced can be used to predict the general shape of the future, calling it “intuitive linearity”. It could be framed as a bias: the assumption that the perceived pace of past advancement predicts future advancement. That assumption is the core of his criticism of the linear view.

 

However, the same logic could apply to his own claim of accelerating change, which uses past explosions in advancement, like the First Industrial Revolution of the late 18th and early 19th centuries and the Second in the late 19th and early 20th, to project future exponentiality. The window one chooses to frame future projections, linear or exponential, carries unexamined assumptions about how the world works: the paradigm shifts are spuriously chosen, and they are presented in a definitive light when they could better be described as blurry. Kurzweil presents many log-scale graphs of landmark advances to show that the past has advanced exponentially, correlating his constructions with the future and generalizing specific advances to support a singularity in which technology accelerates toward an unfathomable rate. Like the people he criticizes, he relies on past trends, producing overconfident timelines for the emergence of a superintelligence or lengthened human lifespans.

 

“I emphasize the exponential-versus-linear perspective because it's the most important failure that prognosticators make in considering future trends. Most technology forecasts and forecasters ignore altogether this historical exponential view of technological progress. Indeed, almost everyone I meet has a linear view of the future. That's why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details) but underestimate what can be achieved in the long term (because exponential growth is ignored).” Exponential versus linear is a false dilemma, or at least a mis-framed one. It is not unprecedented for technology to regress, plateau, or grow in unpredictable and messy ways. The first two centuries of the Middle Ages saw the decline of literacy and the near-death of Latin. Hunter-gatherer societies can be sustainable without farming, continuing to use stone, flint, or copper tools. Should this be described as a plateau or a regression? Why? What use is a wheel when you carry almost nothing, or a plough when food is easily and reliably available? Technology advanced only as far as the environment and lifestyle required. Kurzweil is a genius when it comes to technology that was needed, namely accessibility technology, and the language we choose has implications for how we describe the way technology is used in practice. But what about his predictions? On his timeline, a machine emulating human intelligence would have existed since around 2010 and would now, “around 2020,” be available for $1,000.
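To make the exponential-versus-linear framing concrete, here is a minimal sketch in Python (all numbers invented for illustration, not Kurzweil’s data) showing how the same fifty years of hypothetical progress extrapolate to wildly different futures depending on the model assumed:

```python
# Hypothetical illustration: the same past, two models, two futures.
# All numbers are invented for demonstration; they are not Kurzweil's data.

# Suppose some capability doubled every 5 years for the past 50 years.
past_years = 50
doubling_time = 5
progress_now = 2 ** (past_years / doubling_time)   # 1024 "units" of progress

# Linear extrapolation: assume the average past rate simply continues.
linear_rate = progress_now / past_years            # ~20.5 units per year
linear_forecast = progress_now + linear_rate * 50  # ~2048 after 50 more years

# Exponential extrapolation: assume the doubling itself continues.
exp_forecast = progress_now * 2 ** (50 / doubling_time)  # ~1,048,576

print(f"now: {progress_now:.0f}")
print(f"linear model,      +50 years: {linear_forecast:.0f}")
print(f"exponential model, +50 years: {exp_forecast:.0f}")
```

Both models fit the same past; the forecast is decided almost entirely by the model chosen rather than by the data, which is exactly the framing problem described above.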

 

Accelerating change is not without its merits, despite how Kurzweil characterises the idea. It can happen in bursts in pockets of society, as the Agricultural and Industrial revolutions show. It is possible that computers might develop to a point where they appear intelligent. But that lies outside the realm of predictability, because past events can be made to support any argument depending on how they are framed. Accelerating change as a general rule fails to capture the true scope of technology in human life, and I believe it assumes an inevitability in technological advancement.

 

The second school is Event Horizon. For a hundred and fifty thousand years, humans have been the dominant intelligence on the planet, and all progress thus far has been created by human brains. Technology will likely advance to the point of improving on human intelligence via a brain interface or similar technology. The future, according to this school, would be unimaginable and “weirder by far than most science fiction.” The strong claim of this school is that to know what a superintelligence would do, you have to be at least that smart. The future is therefore unfathomable and unpredictable to us today, much as we cannot know what lies beyond the event horizon of a black hole (Yudkowsky).

 

This school is tempting, as it predicts the future’s unpredictability. It is advocated by Vernor Vinge, the man responsible for popularizing the singularity: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended” (Vinge). Technological progress has been the central feature of the 20th century; what will that progress look like projected forward? Intelligence, according to Vinge, means the capacity to run “what if” calculations that yield insights and improvements to our physical world and our technology. He contrasts this with the slower, iterative improvement of natural selection, which must run its “what if” calculations physically, in the form of a new mutation in a new animal. Intelligence is faster than natural selection at iterative, calculated improvement. The argument is that if intelligence improves, calculated improvement happens faster; and if calculated improvement happens faster, then eventually improvement will move at a speed beyond comprehension and beyond human control.

 

This school is about consequences: Vinge feels that Good and Kurzweil don’t explore the true consequences of a singularity. To Vinge, the consequence is the end of humans or of human dominance: “From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century.” He goes on to argue how we might contain an emerging singularity, prevent it from running away from us, and avoid human extinction. However, I don’t think he sufficiently proves his original premise to justify speculating about policy and consequences. Like that of the Accelerating Change school, the premise is too vague to support the conclusions presented. Near the beginning of the essay, Vinge lists four means by which science could create a superhuman intelligence:

●      There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)

●      Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.

●      Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.

●      Biological science may provide means to improve natural human intellect.

These are interesting topics to explore, as are the futures each of these means would entail. But Vinge introduces this list with a parenthetical that leads me to believe this is all wishful thinking, or at least poorly argued: “(and this is another reason for having confidence that the event will occur)”. It is a non sequitur: the existence of means to create the event is taken as evidence that the event will happen, even though it remains an open question whether the means truly exist. Why would the existence of scientific means to initiate a runaway event make that event any more inevitable? Vinge’s argument is hard to parse on this question; interesting speculations are directly followed by assertions that they are at best likely and at worst inevitable. Speculation built on speculation gives the prediction an uncanny quality, somewhere between science fiction and fan fiction.

 

Then there is the third and final school: Intelligence Explosion. This school is the oldest of the three, originating with I. J. Good. Its core claim is that if an intelligence is capable of making an intelligence smarter than itself, a positive feedback loop results: the smarter one gets, the smarter the iterative replacement can become. The school claims that this cycle will accelerate, triggering smarter and smarter intelligences, and that the ascent will eventually lead to an intelligence orders of magnitude beyond ours today: the superintelligence. Only physical limitations apply (Yudkowsky).
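The feedback loop can be written as a simple recurrence. The toy sketch below (my own illustration, not Good’s or Yudkowsky’s model) assumes intelligence grows as I(n+1) = I(n) + c * I(n)**k; whether the sequence explodes or merely creeps upward depends entirely on the assumed returns exponent k:

```python
# Toy recurrence for recursive self-improvement: I(n+1) = I(n) + c * I(n)**k.
# The returns exponent k is a pure assumption; the "explosion" hinges on it.

def run(k, c=0.1, steps=40, start=1.0, cap=1e12):
    intelligence = start
    for n in range(steps):
        intelligence += c * intelligence ** k
        if intelligence > cap:          # crude stand-in for physical limits
            return f"k={k}: explodes past the cap at step {n}"
    return f"k={k}: only reaches {intelligence:.2f} after {steps} steps"

# Diminishing, linear, and accelerating returns on intelligence:
for k in (0.5, 1.0, 2.0):
    print(run(k))
```

The recursion itself is compatible with a plateau as well as an explosion; everything hinges on the assumed returns, which is why the feedback loop alone cannot carry the school’s strong claim.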

 

I. J. Good is the most prominent advocate of this school. “Speculations Concerning the First Ultraintelligent Machine” begins: “The survival of man depends on the early construction of an ultra-intelligent machine. In order to design an ultraintelligent machine we need to understand more about the human brain or human thought or both. In the following pages an attempt is made to take more of the magic out of the brain by means of a “subassembly” theory, which is a modification of Hebb’s famous speculative cell-assembly theory. My belief is that the first ultraintelligent machine is most likely to incorporate vast artificial neural circuitry, and that its behavior will be partly explicable in terms of the subassembly theory. Later machines will all be designed by ultra-intelligent machines, and who am I to guess what principles they will devise? But probably Man will construct the deus ex machina in his own image.” The core idea of the Intelligence Explosion is that a recursive, self-iterative chain of machines will improve the artificial intelligence of each subsequent machine. I would like to highlight the last part of the quote: “…Man will construct the deus ex machina in his own image.” It is optimistic about the consequences of the recursive intelligence explosion: an unexpected entity that saves us from a hopeless situation, and one that will apparently act as we do. A comforting thought.

 

The Intelligence Explosion is actually not the final school. There is a fourth school, not explored very deeply by singularity supporters, with the criticisms of the Intelligence Explosion at its core. This school’s main claim is that there will be no singularity: the three schools simply are not capable of convincingly and logically backing the idea that humans will become, or create, a superintelligence that renders us obsolete. The first issue is the idea of a recursively self-improving intelligence, whose existence is questionable for mathematical reasons. A program of a given size of n bits can take only one of 2^n possible bit strings, which creates a finite capacity for self-improvement and inherently caps any kind of explosion in artificial intelligence. Furthermore, not all bit combinations encode instructions that will work on a particular computer, and a nonsense instruction can bring the system to a complete halt. Another issue lies in the combination of natural selection and the nature of computers: according to AI researcher Roman Yampolskiy, a self-modifying program will accumulate errors over time, eventually producing erroneous behaviours that severely impair its ability to evaluate subsequent intelligences (Yampolskiy). Part of this is driven by the fact that the software doesn’t actually know what an improvement is, and would have to conduct nearly random experiments in the hope of finding something "better", without knowing what "better" really is.
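Here is a minimal sketch of the counting and blind-search arguments (my own toy illustration, not Yampolskiy’s code; the hidden fitness function and target are hypothetical stand-ins for an improvement criterion the program cannot access):

```python
# Illustration of the counting and blind-search arguments. An n-bit
# program has exactly 2**n possible forms, and a mutator with no real
# notion of "better" performs a random walk among them rather than a
# directed ascent.

import random

n = 16
print(f"possible {n}-bit programs: {2 ** n}")       # 65,536: a finite space

# Fitness exists but is hidden: the mutator cannot evaluate "better".
secret_target = random.getrandbits(n)

def hidden_fitness(program):
    # Hamming similarity to a target the mutator never sees.
    return n - bin(program ^ secret_target).count("1")

program = random.getrandbits(n)
for _ in range(1000):
    program ^= 1 << random.randrange(n)             # flip one random bit
    # ...and accept the mutation blindly, since "better" is unknowable.

print(f"fitness after 1000 blind mutations: {hidden_fitness(program)}/{n}")
print("expected value is n/2, i.e. no better than a fresh random program")
```

Under these assumptions the search does no better than chance: the walk ends, on average, at the n/2 fitness of a random program. An optimizer needs a trustworthy signal of “better,” and the fourth school’s point is that a self-modifying program has no guaranteed access to one.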

 

One critic of the idea of a singularity is Gordon E. Moore, namesake of the law that the number of transistors on a chip doubles about every two years, a law often used by singularity proponents to support their predictions. In an IEEE special report on the singularity, Moore stated, “I am a skeptic. I don't believe this kind of thing is likely to happen, at least for a long time. And I don't know why I feel that way. The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans.” This opinion argues that the obstacle is embodied complexity, the product of trillions of iterations of individuals over the lifetime of the Earth. I think giving an intelligence a body, along with the appropriate parameters to learn how to control it, would probably answer this objection.

Another critic of the singularity is Steven Pinker, a professor of psychology at Harvard. He states: “There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems,” (IEEE). This is a pointed criticism of making such predictions in the first place. The singularity schools emphasize their claims and predictions, framing them as conceptual inevitabilities of technology. When the history of technological predictions is taken into account, however, predictions rarely come to fruition or reach ubiquity in the form originally foreseen.

“Techno-faith” is a word I have repeatedly encountered while researching the term “singularity”, and given how the term is used, I am beginning to see singularity thinking as a form of faith. Specific predictions rarely come true, and the general ones never arrive to the degree or in the form the original prediction took. Some take these predictions seriously in the manner of Pascal’s wager: we have more to lose by dismissing them than we stand to gain by ignoring them. There is still hope, I believe, in rejecting the idea of inevitability in our predictions. In “As We May Think”, an influential 1945 essay for The Atlantic that anticipated modern computing, Vannevar Bush looked at where we could go and how technology would get us there, and concluded:

“The applications of science have built man a well-supplied house, and are teaching him to live healthily therein. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome.”

 

References

 

O’Gieblyn, Meghan. “God in the Machine: My Strange Journey into Transhumanism.” The Guardian, 18 April 2017. Accessed 2 November 2020.

 

Draper, Lucy. “Could Artificial Intelligence Kill Us Off?” Newsweek, 24 May 2015. Accessed 2 November 2020.

 

Ulam, Stanislaw. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society, May 1958. Accessed 2 November 2020. PDF.

 

Yudkowsky, Eliezer. “Three Major Singularity Schools.” yudkowsky.net, September 2007. Accessed 2 November 2020.

 

Baez, John. “This Week’s Finds (Week 311).” johncarlosbaez.wordpress.com, 7 March 2011. Accessed 2 November 2020. Interview with Eliezer Yudkowsky.

 

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. 2005. Accessed 5 November 2020. PDF.

 

Good, Irving John. “Speculations Concerning the First Ultraintelligent Machine.” vtechworks.lib.vt.edu, 1965. Accessed 5 November 2020. PDF.

 

Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 1993. Accessed 5 November 2020.

 

Yampolskiy, Roman. “From Seed AI to Technological Singularity via Recursively Self-Improving Software.” University of Louisville. Accessed 11 November 2020. PDF.

 

IEEE Special Report: The Singularity. https://spectrum.ieee.org/static/singularity. Accessed 11 November 2020.

 

Bush, Vannevar. “As We May Think.” The Atlantic, July 1945. Accessed 11 November 2020.
