Why does anything exist at all? And yet the answer is obvious. The First Cause must be obvious. It has to be obvious to Nothing, present in the absence of anything else, a substance formed from -blank-, a conclusion derived without data or initial assumptions. What is it that evokes conscious experience, the stuff that minds are made of? We are made of conscious experiences. There is nothing we experience more directly.
How does it work? We don't have a clue. Two and a half millennia of trying to solve it and nothing to show for it but "I think therefore I am." Perhaps the solutions operate outside the representations that can be formed with the human brain. If so, then our descendants, successors, future selves will figure out the semantic primitives necessary and alter themselves to perceive them.
The Powers will dissect the Universe and Reality until they understand why anything exists at all, and analyze neurons until they understand qualia. And that will only be the beginning; it won't end there. Why should there be only two hard problems? After all, if not for humans, the Universe would apparently contain only one hard problem, for how could a non-conscious thinker formulate the hard problem of consciousness?
Might there be states of existence beyond mere consciousness - transsentience? Might solving the nature of reality create the ability to create new Universes, manipulate the laws of physics, even alter the kind of things that can be real - "ontotechnology"? That's what the Singularity is all about.
So before you talk about life as a Power or the Utopia to come - a favorite pastime of transhumanists and Extropians is to discuss the problems of uploading, life after being uploaded, and so on - just remember that you probably have a much better chance of solving both hard problems than you do of making a valid statement about the future. This goes for me too. I'll stand by everything I said about humans, including our inability to understand certain things, but everything I said about the Powers is almost certainly wrong. There are better ways, and I'm sure They - or It, or [sound of exploding brain] - will find them.
A Perceptual Transcend occurs when the semantic structures of one generation become the semantic primitives of the next. To put it another way, one PT from now, the whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once. Computers are a PT above humans when it comes to arithmetic - sort of. While we need to manipulate an entire precarious pyramid of digits, rows and columns, in order to multiply two multi-digit numbers, a computer can spit out the answer in a single obvious step.
These computers aren't actually a PT above us at all, for two reasons.
First of all, they just handle numbers up to two billion instead of 9; after that they need to manipulate pyramids too. Far more importantly, they don't notice anything about the numbers they manipulate, as humans do. If you work a multiplication using the wedding-cake method, you won't multiply by the same digit twice in a row; you'll just steal the result from last time. If one of the interim results happens to be prime, or a power of two, or otherwise striking, you'll notice that, too. The way computers manipulate numbers is actually less powerful than the way we manipulate numbers. Would the Powers settle for less? A PT above us, multiplication is carried out automatically but with full attention to interim results, numbers that happen to be prime, and the like.
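To make the distinction concrete - mechanical arithmetic versus arithmetic that notices things - here is a minimal toy sketch; the function names and the choice of "interesting" properties are mine, purely illustrative:

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for the small interim results of long multiplication."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def attentive_multiply(a: int, b: int):
    """Wedding-cake multiplication that steals repeated partial products
    from last time and notices properties of interim results."""
    partials = {}  # digit -> a * digit, reused instead of recomputed
    notes = []
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        d = int(digit)
        if d in partials:
            notes.append(f"reused {a} x {d} = {partials[d]} from last time")
        else:
            partials[d] = a * d
            if is_prime(partials[d]):
                notes.append(f"interim result {partials[d]} is prime")
        total += partials[d] * 10 ** place
    return total, notes

print(attentive_multiply(62, 77))
# (4774, ['reused 62 x 7 = 434 from last time'])
```

An ordinary multiply routine computes only the total; the notes are the part a human - or a Power - would generate for free.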
If I were designing one of the first Powers - and, down at the Singularity Institute, this is what we're doing - I would create an entire subsystem for manipulating numbers, one that would pick up on primality, complexity, and all the numeric properties known to humanity. A Power would understand why the product of two numbers comes out the way it does, with the same understanding that would be achieved by a top human mathematician who spent hours studying all the numbers involved.
And at the same time, the Power will multiply the two numbers automatically. For such a Power, to whom numbers were true semantic primitives, Fermat's Last Theorem and the Goldbach Conjecture and the Riemann Hypothesis might be obvious. Somewhere in the back of its mind, the Power would test each statement with a million trials, subconsciously manipulating all the numbers involved to find why they were not the sum of two cubes or why they were the sum of two primes or why their real part was equal to one-half.
From there, the Power could intuit the most basic, simple solution simply by generalizing. Perhaps human mathematicians, if they could perform the arithmetic for a thousand trials of the Riemann Hypothesis, examining every intermediate step, looking for common properties and interesting shortcuts, could intuit a formal solution.
But they can't, and they certainly can't do it subconsciously, which is why the Riemann Hypothesis remains unobvious and unproven - it is a conceptual structure instead of a conceptual primitive. Perhaps an even more thought-provoking example is provided by our visual cortex. On the surface, the visual cortex seems to be an image processor. In a modern computer graphics engine, an image is represented by a two-dimensional array of pixels. To rotate that image, each pixel's rectangular coordinates are converted to polar coordinates.
All thetas, representing the angle, have a constant added. The polar coordinates are then converted back to rectangular. There are ways to optimize this process, and ways to account for intersecting and empty pixels on the new array, but the essence is clear: to perform an operation on an entire picture, perform the operation on each pixel in that picture. At this point, one could say that whether something is a Perceptual Transcend depends on what level you're looking at the operation from. If you view yourself as carrying out the operation pixel by pixel, it is an unimaginably tedious cognitive structure, but if you view the whole thing in a single lump, it is a cognitive primitive - a point made in Hofstadter's Ant Fugue when discussing ants and colonies.
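A minimal sketch of that pixel-by-pixel rotation - the coordinate convention and rounding are my own simplifications; a real graphics engine would interpolate rather than round:

```python
import math

def rotate_pixels(pixels: dict, angle: float) -> dict:
    """Rotate an image the tedious way described above: convert each pixel's
    rectangular coordinates to polar, add a constant to theta, convert back.
    `pixels` maps (x, y) -> value; intersecting and empty cells are ignored."""
    rotated = {}
    for (x, y), value in pixels.items():
        r = math.hypot(x, y)
        theta = math.atan2(y, x) + angle  # every theta gets the same constant added
        rotated[round(r * math.cos(theta)), round(r * math.sin(theta))] = value
    return rotated

image = {(1, 0): "red", (2, 0): "green"}
print(rotate_pixels(image, math.pi / 2))  # {(0, 1): 'red', (0, 2): 'green'}
```

Viewed line by line, this is drudgery; viewed as a single rotate operation, it is a primitive.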
Not very exciting unless it's Hofstadter explaining it, but there's more to the visual cortex than that. For one thing, we consciously experience redness. If you're not sure what conscious experience is, the philosophers' term for it is "qualia". Qualia are the stuff making up the indescribable difference between red and green. The term "semantic primitive" describes more than just the level at which symbols are discrete, compact objects. It describes the level of conscious perception. Unlike the computer manipulating numbers formed of bits, and like the imagined Power manipulating theorems formed of numbers, we don't lose any resolution in passing from the pixel level to the picture level.
We don't suddenly perceive the idea "there is a bear in front of me"; we see a picture of a bear, containing millions of pixels, every one of which is consciously experienced simultaneously. A Perceptual Transcend isn't "just" the imposition of a new cognitive level; it turns the cognitive structures into consciously experienced primitives. Even if it were initially, it won't be too long before "PT" itself is Transcended and the Powers jump out of the system yet again.
After all, the Singularity is ultimately as far beyond me, the author, as it is beyond any other human, and so my PTs will be as worthless a description as the doubling sequence discarded so long ago. Even if we accept the PT as the basic unit of measure, it simply introduces a secondary Singularity. Maybe the Perceptual Transcends will occur every two consciously experienced years at first, but then will occur every conscious year, and then every conscious six months - get the picture?
It's like the "Birthday Cantatatata..." in Hofstadter's Gödel, Escher, Bach. The PTs may introduce a second Singularity, and a third Singularity, and a fourth, until Singularities are coming faster and faster and the first ω-Singularity is imminent - or the Powers may simply jump beyond that system. The Powers are beyond our ability to comprehend.

Great Big Numbers

It's hard to appreciate the Singularity properly without first appreciating really large numbers. I'm not talking about little tiny numbers, barely distinguishable from zero, like the number of atoms in the Universe or the number of years it would take a monkey to duplicate the works of Shakespeare.
I invite you to consider what was, circa 1977, the largest number ever to be used in a serious mathematical proof. The proof, by Ronald L. Graham, is an upper bound to a certain question of Ramsey theory. In order to explain the proof, one must introduce a new notation, due to Donald E. Knuth in the article Coping With Finiteness. Written as a function: 3↑3 is 3 to the 3rd power, or 27; this number is small enough to visualize. 3↑↑3 is 3↑(3↑3), or 3 to the 27th power: 7,625,597,484,987. Larger than 27, but so small I can actually type it. Nobody can visualize seven trillion of anything, but we can easily understand it as being on roughly the same order as, say, the gross national product.
3↑↑↑3 is 3↑↑(3↑↑3): an exponential tower of threes, 7,625,597,484,987 levels high. The number is now beyond the human ability to understand, but the procedure for producing it can be visualized: take x equal to 3; let x equal 3 to the power of x; repeat seven trillion times. 3↑↑↑↑3 is 3↑↑↑(3↑↑↑3). Both the number and the procedure for producing it are now beyond human visualization, although the procedure can be understood.
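Knuth's notation is easy to define recursively, even though anything past the smallest cases can never actually be evaluated; a sketch (the recursion is the standard definition, the code my own):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a, followed by n arrows, followed by b.
    One arrow is exponentiation; each extra arrow iterates the level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 27
print(up_arrow(3, 2, 3))  # 7,625,597,484,987
# up_arrow(3, 3, 3) is an exponential tower of threes some seven trillion
# levels high; no physically possible computer could evaluate it.
```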
Each additional arrow iterates the whole procedure that came before it: to climb from one level to the next, let x equal an exponential tower of threes of height x, and repeat. To reach Graham's number: let x equal 3↑↑↑↑3; then let x equal 3, followed by x Knuth arrows, followed by 3; repeat until this has been done 63 times. And yet, in the words of Martin Gardner: "Graham's number is far beyond my ability to grasp. I can describe it, but I cannot properly appreciate it." Perhaps Graham can appreciate it, having written a mathematical proof that uses it.
This number is far larger than most people's conception of infinity. I know that it was larger than mine. My sense of awe when I first encountered this number was beyond words. It was the sense of looking upon something so much larger than the world inside my head that my conception of the Universe was shattered and rebuilt to fit. All theologians should face a number like that, so they can properly appreciate what they invoke by talking about the "infinite" intelligence of God. My happiness was completed when I learned that the actual answer to the Ramsey problem that gave birth to that number - rather than the upper bound - was probably six.
Why was all of this necessary, mathematical aesthetics aside? Because until you understand the hollowness of the words "infinity", "large" and "transhuman", you cannot appreciate the Singularity. Even appreciating the Singularity is as far beyond us as visualizing Graham's number is to a chimpanzee. Farther beyond us than that. No human analogies will ever be able to describe the Singularity, because we are only human. The number above was forged of the human mind.
It is nothing but a finite positive integer, though a large one. It is composite and odd, rather than prime or even; it is perfectly divisible by three. Encoded in the decimal digits of that number, by almost any encoding scheme one cares to name, are all the works ever written by the human hand, and all the works that could have been written, at a hundred thousand words per minute, over the age of the Universe raised to its own power a thousand times. And yet, if we add up all the base-ten digits, the result will be divisible by nine.
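The divisible-by-nine remark, by the way, is ordinary modular arithmetic rather than a mystery. Graham's number is a power of three with an exponent far above two, hence divisible by nine; and since

$$10 \equiv 1 \pmod{9} \quad\Longrightarrow\quad \sum_i d_i \cdot 10^i \;\equiv\; \sum_i d_i \pmod{9},$$

any number's digit sum has the same remainder mod nine as the number itself.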
The number is still a finite positive integer. It may contain Universes unimaginably larger than this one, but it is still only a number. It is a number so small that the algorithm to produce it can be held in a single human mind. The Singularity is beyond that. We cannot pigeonhole it by stating that it will be a finite positive integer. We cannot say anything at all about it, except that it will be beyond our understanding. If you thought that Knuth's arrow notation produced some fairly large numbers, what about T(n)?
How many states does a Turing machine need to implement the calculation above? What is the complexity of Graham's number, C(Graham)? Whatever it is, T(C(Graham)) is likely to be much, much larger than Graham's number; with the extra states, we might even be able to implement an even more computationally complex algorithm. In fact, Knuth's arrow notation may not be the most powerful algorithm that fits into C(Knuth) states. T(n) is the metaphor for the growth rate of a self-enhancing entity because it conveys the concept of having additional intelligence with which to enhance oneself.
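T(n) is defined earlier in the full essay; restating it in my own words (an assumption about the intended definition, which is in essence the Busy Beaver function):

$$T(n) \;=\; \max\{\,\text{output of } M \;:\; M \text{ an } n\text{-state Turing machine that halts}\,\}$$

Because T(n) eventually outgrows every computable function, no fixed notation - Knuth's arrows included - can keep pace with it.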
I don't know when T(n) passes beyond the threshold of what human mathematicians can, in theory, calculate. The point is that after a few iterations, we wind up with T of some number already far beyond human notation. Now, I don't know what that will be equal to, but the winning Turing machine will probably generate a Power whose purpose is to think of a really large number. That's what the term "large" means.

Smarter Than We Are

It's all very well to talk about cognitive primitives and obviousness, but again - what does smarter mean? The meaning of smart can't be grounded in the Singularity - I haven't been there yet.
So what's my practical definition? In fiction, puzzles the writer needs months to solve, or to design, the character may solve in moments. But God help the writer if his abnormally bright character is wrong! Once I tried something similar. John Campbell's letter of rejection began: "Sorry - you can't write this story. Neither can anyone else." You can write about super-fast thinkers, eidetic memories, lightning calculators; a character who learns a dozen languages in a week, who can read a textbook in an hour, or who can invent all kinds of wonderful stuff - as long as you don't have to produce the actual invention.
But you can't write a character with a higher level of emotional maturity, a character who can spot the obvious solution you missed, a character who knows and can tell the reader the Meaning Of Life, a character with superhuman self-awareness. Not unless you can do these things yourself. Let's take a concrete example, the story Flowers for Algernon (later the movie Charly), by Daniel Keyes. I'm afraid I'll have to tell you how the story comes out, but it's a Character story, not an Idea story, so that shouldn't spoil it. Flowers for Algernon is about a neurosurgical procedure for intelligence enhancement.
This procedure was first tested on a mouse, Algernon, and later on a retarded human, Charlie Gordon. The enhanced Charlie has the standard science-fictional set of superhuman characteristics; he thinks fast, learns a lifetime of knowledge in a few weeks, and discusses arcane mathematics (not shown). Then the mouse, Algernon, gets sick and dies. Charlie analyzes the enhancement procedure (not shown) and concludes that the process is basically flawed. That's a science-fictional enhanced human. A real enhanced human would not have been taken by surprise.
A real enhanced human would realize that any simple intelligence enhancement will be a net evolutionary disadvantage - if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation. This goes double for a procedure that works on rats! As far as I know, this never occurred to Keyes. I selected Flowers, out of all the famous stories of intelligence enhancement, because, for reasons of dramatic unity, this story shows what happens to be the correct outcome.
Note that I didn't dazzle you with an abstruse technobabble explanation for Charlie's death; my explanation is two sentences long and can be understood by someone who isn't an expert in the field. It's the simplicity of smartness that's so impossible to convey in fiction, and so shocking when we encounter it in person. All that science fiction can do to show intelligence is jargon and gadgetry.
A truly ultrasmart Charlie Gordon wouldn't have been taken by surprise; he would have deduced his probable fate using the above, very simple, line of reasoning. He would have accepted that probability, rearranged his priorities, and acted accordingly until his time ran out - or, more probably, figured out an equally simple and obvious-in-retrospect way to avoid his fate. If Charlie Gordon had really been ultrasmart, there would have been no story. There are some gaps so vast that they make all problems new.
Imagine whatever field you happen to be an expert in - neuroscience, programming, plumbing, whatever - and consider the gap between a novice, just approaching a problem for the first time, and an expert. Even if a thousand novices try to solve a problem and fail, there's no way to say that a single expert couldn't solve the problem casually, offhandedly. If a hundred well-educated physicists try to solve a problem and fail, an Einstein might still be able to succeed. If a thousand twelve-year-olds try for a year to solve a problem, it says nothing about whether or not an adult is likely to be able to solve the problem.
If a million hunter-gatherers try to solve a problem for a century, the answer might still be obvious to any educated twenty-first-century human. And no number of chimpanzees, however long they try, could ever say anything about whether the least human moron could solve the problem without even thinking. There are some gaps so vast that they make all problems new; and some of them, such as the gap between novice and expert, or the gap between hunter-gatherer and educated citizen, are not even hardware gaps - they deal not with the magic of intelligence, but the magic of knowledge, or of lack of stupidity.
I think back to before I started studying evolutionary psychology and cognitive science. I know that I could not then have come close to predicting the course of the Singularity. We're all familiar with individual variations in human intelligence, distributed along the great Gaussian curve; this is the only referent most of us have for "smarter".
But precisely because these variations fall within the design range of the human brain, they're nothing out of the ordinary. One of the very deep truths about the human mind is that evolution designed us to be stupid - to be blinded by ideology, to refuse to admit we're wrong, to think "the enemy" is inhuman, to be affected by peer pressure. Variations in intelligence that fall within the normal design range don't directly affect this stupidity. That's where we get the folk wisdom that intelligence doesn't imply wisdom, and within the human range this is mostly correct.
The variations we see don't hit hard enough to make people appreciate what "smarter" means. I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from "impossible" to "obvious".
Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards... And I know that my picture of the Singularity will still fall short of the truth. I may not be modest, but I have my humility - if I can spot anthropomorphisms and gaping logical flaws in every alleged transhuman in every piece of science fiction, it follows that a slightly higher-order genius (never mind a real transhuman!) could find the same kinds of flaws in my own thinking. Call it experience, call it humility, call it self-awareness, call it the Principle of Mediocrity; I've crossed enough gaps to believe there are more.
I know, in a dim way, just how dumb I am. I've tried to show the Beyondness of the Singularity by brute force, but it doesn't take infinite speeds and PTs and ωs to place something utterly beyond us. All it takes is a little tiny bit of edge, a bit smarter, and the Beyond stares us in the face once more. I've never been through the Singularity. I've never been to the Transcend.
I just staked out an area of the Low Beyond. This page is devoted to communicating a sense of awe that comes from personal experience, and is, therefore, merely human. From my cortex, to yours; every concept here was born of a plain old Homo sapiens - and any impression it has made on you was likewise born of a plain old Homo sapiens.
Someone who has devoted a bit more thought, or someone a bit more extreme; it makes no difference. Whatever impression you got from this page has not been an accurate picture of the far future; it has, unavoidably, been an impression of me. And I am not the far future. Only a version of "Staring into the Singularity" written by an actual Power could convey experience of the actual Singularity. Take whatever future shock this page evoked, and associate it not with the Singularity; associate it with me, the mild, quiet-spoken fellow infinitesimally different from the rest of humanity.
Don't bother trying to extrapolate beyond that. Nobody can - not you, not me.

Sooner Than You Think

Since the Internet exploded across the planet, there has been enough networked computing power for intelligence. If the Internet were properly reprogrammed, it would be enough to run a human brain, or a seed AI.
On the nanotechnology side, we possess machines capable of producing arbitrary DNA sequences, and we know how to turn arbitrary DNA sequences into arbitrary proteins. We have machines - Atomic Force Probes - that can put single atoms anywhere we like, and which have recently been demonstrated to be capable of forming atomic bonds. Hundredth-nanometer precision positioning, atomic-scale tweezers... If we had a time machine, even a small amount of information from the future - mere kilobytes - could specify a protein that built a device that would give us nanotechnology overnight.
Ever since the late '90s, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough - just one major insight - in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open. Drexler has written a detailed, technical, how-to book for nanotechnology.
After stalling for thirty years, AI is making a comeback. Computers are growing in power even faster than their usual, pedestrian rate of doubling every two years. Quate has constructed a multi-head parallel Scanning Tunnelling Probe. IBM has announced the Blue Gene project, intended to achieve petaflops computing power, with intent to crack the protein folding problem.
For example, population growth is hyperbolic. Maybe you learned it was exponential in math class, but it's hyperbolic to a much better fit than exponential. If that trend continues, world population reaches infinity on a specific, calculable date early in this century, plus or minus a couple of years. It is, of course, impossible for the human population to reach infinity. Some say that if we can create AIs, then the graph might measure sentient population instead of human population. These people are torturing the metaphor.
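To see what "hyperbolic to a better fit" means, compare the two functional forms directly; the constants below are round numbers in the spirit of von Foerster's classic fit, approximate and illustrative only:

```python
T_STAR = 2027.0  # finite-time blow-up year of the fitted hyperbola (approximate)
C = 1.8e11       # scale constant, in persons (approximate)

def hyperbolic(t: float) -> float:
    """P(t) = C / (T* - t): the growth rate rises without bound as t -> T*."""
    return C / (T_STAR - t)

def exponential(t: float) -> float:
    """For contrast: plain doubling every 40 years, anchored to 1960."""
    return 3e9 * 2 ** ((t - 1960) / 40)

for year in (1800, 1900, 1960, 2000, 2020):
    print(year, f"{hyperbolic(year):.2e}", f"{exponential(year):.2e}")
# The hyperbola tracks recorded population surprisingly well - and reaches
# infinity at T_STAR. The exponential, however fast, never does. That
# finite-time blow-up is exactly where the extrapolation must break down.
```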
Nobody designed the population curve to take into account developments in AI. It's just a curve, a bunch of numbers. It can't distort the future course of technology just to remain on track. If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level - nanotechnology - in some specific year; I forget exactly which (the page has since vanished), but only a couple of decades out. For that matter, we now have the artificial atom: "You can make any kind of artificial atom - long, thin atoms and big, round atoms."
As of '95, Drexler was giving a ballpark figure for the arrival of nanotechnology; I suspect the timetable has been accelerated a bit since then. My own guess would be sooner still. Similarly, computing power doubles every two years - eighteen months, by more recent reckoning. Does this mean we can program smarter people? Does this take into account any breakthroughs between now and then?
Does this take into account the laws of physics? Is this a detailed model of all the researchers around the planet? It's just a graph. The "amazing constancy" of Moore's Law entitles it to consideration as a thought-provoking metaphor of the future, but nothing more. The Transcended doubling sequence doesn't account for how the faster computer-based researchers can get the physical manufacturing technology for the next generation set up in picoseconds, or how they can beat the laws of physics.
That's not to say that such things are impossible - it doesn't actually strike me as all that likely that modern-day physics has really reached the ultimate bottom level. Maybe there are no physical limits. The point is that Moore's Law doesn't explain how physics can be bypassed. Mathematics can't predict when the Singularity is coming. Well, it can, but it won't get it right. Even the remarkably steady numbers, such as the one describing the doubling rate of computing power, (a) describe unaided human minds and (b) are speeding up, perhaps due to computer-aided design programs. Statistics may be used to predict the future, but they don't model it.
What I'm trying to say here is that any particular year is just a wild guess, and it might as well be next Tuesday. In truth, I don't think in those terms. I do not "project" when the Singularity will occur. I have a "target date". I would like the Singularity to occur as soon as possible, which I think I would have a reasonable chance of arranging via AI if someone handed me a hundred million dollars a year. The Singularity Institute would like to finish up within a decade or so. Above all, I would really, really like the Singularity to arrive before nanotechnology, given the virtual certainty of deliberate misuse - misuse of a purely material, and thus amoral, ultratechnology, one powerful enough to destroy the planet.
We cannot just sit back and wait. To quote Michael Butler, "Waiting for the bus is a bad idea if you turn out to be the bus driver." We may not know about all the research out there, after all.

Uploading

Maybe you don't want to see humanity replaced by a bunch of "machines" or "mutants", even superintelligent ones?
You love humanity and you don't want to see it obsoleted? You're afraid of disturbing the natural course of existence? The Singularity is the natural course of existence. Every species - at least, every species that doesn't blow itself up - sooner or later comes face-to-face with a full-blown superintelligence. It happens to everyone. It will happen to us. It will even happen to the first-stage transhumans or the initial human-equivalent AIs. But just because humans become obsolete doesn't mean you become obsolete.
You are not a human. You are an intelligence which, at present, happens to have a mind unfortunately limited to human hardware. With any luck, all persons on this planet who live long enough - and maybe some who don't - will wind up as Powers. Transferring a human mind into a computer system is known as "uploading"; turning a mortal into a Power is known as "upgrading". The archetypal upload is the Moravec Transfer, proposed by Dr. Hans Moravec in the book Mind Children. The key assumption of the Moravec Transfer is that we can perfectly simulate a single neuron, which Penrose and Hameroff would argue is untrue.
The following discussion assumes that either A the laws of physics are computational or B we can build a "superneuron", a trans-Turing computer that does the same thing a neuron does. Penrose and Hameroff have no objection to the latter proposition. If a neuron can take advantage of deep physics to perform noncomputable operations, we can do the same thing technologically.
The scenario given also assumes sophisticated nanomedicine - that is, the ability to operate on individual neurons inside a living brain. The Moravec Transfer gradually moves rather than copies a human mind into a computer. You need never lose consciousness. The details which follow have been redesigned and fleshed out a bit by yours truly from the original in Mind Children. A neuron-sized robot swims up to a neuron and scans it into memory. An external computer, in continuous communication with the robot, starts simulating the neuron. The robot waits until the computer simulation perfectly matches the neuron. The robot replaces the neuron with itself as smoothly as possible, sending inputs to the computer and transmitting outputs from the simulation of a neuron inside the computer.
This entire procedure has had no effect on the flow of information in the brain, except that one neuron's worth of processing is now being done inside a computer instead of a neuron. Repeat, neuron by neuron, until the entire brain is composed of robot neurons. At this point, the synapses - the links between robotic neurons - are still physical; robots report the reception of neurotransmitters at artificial dendrites and release neurotransmitters at the ends of artificial axons.
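The invariant this phase depends on - swap in a perfect simulation of a component and nothing downstream can tell - can be demonstrated in a toy model. Everything below is illustrative only; a ToyNeuron is not a neuron:

```python
import random

class ToyNeuron:
    """A trivial 'neuron': weighted sum of inputs, thresholded at zero."""
    def __init__(self, weights):
        self.weights = weights
    def fire(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs)) > 0.0

def brain_output(neurons, inputs):
    return [n.fire(inputs) for n in neurons]

random.seed(0)
biological = [ToyNeuron([random.uniform(-1, 1) for _ in range(4)]) for _ in range(8)]
inputs = [random.uniform(-1, 1) for _ in range(4)]
before = brain_output(biological, inputs)

# "Upload" one neuron at a time: an exact copy replaces the original.
uploaded = list(biological)
for i in range(len(uploaded)):
    uploaded[i] = ToyNeuron(list(uploaded[i].weights))  # the perfect simulation
    assert brain_output(uploaded, inputs) == before     # behavior never changes

print("outputs identical at every step of the transfer")
```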
In the next phase, we replace the physical synapses with software links. For every axon-dendrite transmitter-receiver pair, the inputs are no longer reported by the robot; instead the computed axon output of the transmitting neuron is added as a simulated dendrite to the simulation of the receiving neuron. At the end of this phase, the robots are all firing their axons, but none of them are receiving anything, none of them are affecting each other, and none of them are affecting the computer simulation.
The robots are disconnected. You have now been placed entirely inside a computer, bit by bit, without losing consciousness. In Moravec's words, your metamorphosis is complete. If any of the phases seem too abrupt, the transfer of an individual neuron, or synapse, can be spread out over as long a time as necessary. To slowly transfer a synapse into a computer, we can use weighted factors of the physical synapse and the computational synapse to produce the output.
The weighting would start as entirely physical and end as entirely computational. Since we are presuming the neuron is being perfectly simulated, the weighting affects only the flow of causality and not the actual process of events. Slowly transferring a neuron is a bit more difficult. The robot encloses the neuron, the axons, and the dendrites with a robotic "shell", all without disturbing the neural cell body.
That's going to take some pretty fancy footwork, I know, but this is a thought experiment. The Powers will be doing the actual uploading.
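The weighted synapse transfer described two paragraphs up is just a crossfade between two identical signals; a one-function sketch (the name is hypothetical):

```python
def blended_synapse(bio_out: float, sim_out: float, w: float) -> float:
    """Crossfade a physical synapse with its simulation.

    w = 1.0: the output is entirely the biological synapse's;
    w = 0.0: it is entirely the computational one. Since the simulation
    is, by hypothesis, perfect, bio_out == sim_out at every moment, so
    lowering w changes which substrate is causally responsible for the
    signal without ever changing the signal itself."""
    return w * bio_out + (1.0 - w) * sim_out
```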
The robotic dendrites continue to receive inputs from other neurons, and pass them on to the enclosed neural dendrites. The output of the biological neuron passes along the neural axon to the enclosing robotic axon, which reads the output and forwards it to the external synapse, unchanged. Since, by hypothesis, the neuron is being perfectly simulated, this does not change the actual output in any way, only the flow of causality.
The biological neuron is discarded. Assuming we can simulate an individual neuron, and that we can replace neurons with robotic analogues, I think that thoroughly demonstrates the possibility of uploading, given that consciousness is strictly a function of neurons. And if we have immortal souls, then uploading is a real snap.
Detach soul from brain. Copy any information not stored in soul. Attach soul to new substrate. At this point it is customary to speculate about how one goes about eating, drinking, walking around. People state that they are unwilling to give up physical reality, worry about whether or not they will have sufficient computational power to simulate a hedonistic world of their wildest desires, and so on and so on ad nauseam. Even Vinge himself, discoverer of the Singularity, has gone on record as wondering whether one's true self would be diluted by Transcendence. I hope that by this point in the page you have been sufficiently impressed by the power and scope and incomprehensibility and general Transcendence of the Singularity that these speculations sound silly.
If you wish to remain undiluted, you will be able to arrange it. You will be able to make backups. You will be able to preserve your personality regardless of substrate. The only folks who have to worry about being unwillingly diluted are the first humans to Transcend, and even they may have nothing to worry about if there's a Friendly, AI-born superintelligence to act as transition guide. Of course, it may be that any being of sufficient intelligence wants to be diluted. Exercising anxiety over that possibility seems spectacularly pointless, analogous to children worrying that, as adults, they will no longer want to be thoughtlessly cruel to other children.
If you want to be diluted, it's not a wrongness that we should worry about. Maybe, after Transcending, you'll be different. If that is so, then that change is inevitable and there is nothing you can do about it.

Many proponents of the theory believe that the machines eventually will see no use for humans on Earth and simply wipe us "useless eaters" out - their intelligence being far superior to humans', there would probably be nothing we could do about it.
They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction. The machine may theoretically respond to our question by turning all matter in our Solar System or our galaxy into a giant calculator, thus destroying all of humankind. Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it.
They also attack the methodology that is used to "prove" the theory by suggesting that Moore's Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are totally subjective and meaningless.
Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is decreasing, refuting Moore's Law [2]. As of this writing, not only is Moore's Law beginning to stall, but Dennard scaling is long dead; returns in raw compute power from additional transistors are diminishing as we use more and more of them; there are also Amdahl's Law and Wirth's law to take into account; and raw computing power simply doesn't scale up linearly into real marginal utility.
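Amdahl's Law, one of the limits just listed, is simple to state and is already enough to break the "more transistors, more capability" intuition; a quick sketch:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's Law: S(n) = 1 / ((1 - p) + p / n), where p is the
    fraction of the work that can be parallelized."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Even with 95% of the work parallelizable, speedup saturates at 20x,
# no matter how much hardware is thrown at the problem.
for n in (2, 8, 64, 1024, 1_000_000):
    print(f"{n:>9} processors -> {amdahl_speedup(0.95, n):6.2f}x")
```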
Then even after all those things, we still haven't taken into account the fundamental limitations of conventional computing architecture. Moore's law suddenly doesn't look to be the panacea to our problems now, does it? Adherents see a chance of the technological singularity arriving on Earth within the 21st century, a concept that most[who?] regard as wishful thinking. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers.
However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death, but instead provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to "gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career".
Don't bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He's a computer science genius who has difficulty in perceiving when he's well out of his area of expertise. Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity. Under the first school, "accelerating change", it is assumed that change and development of technology and human or AI-assisted intelligence will accelerate at an exponential rate. So change a decade ago was much faster than change a century ago, which was faster than a millennium ago.
While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it does mean that past events are an unreliable source of evidence for making these predictions. The "event horizon" school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction.
Indeed, our modern civilization is simply unimaginable for those living, say, in ancient Greece. This school of thought sees the singularity as a single point event rather than a process - indeed, it is this thesis that spawned the term "singularity". The third school, the "intelligence explosion", posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky, who endorses this view, puts it: "What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they'd design the next generation of brain-computer interfaces."
There is also a fourth singularity school, which is much more popular than the other three: that it's all nonsense. It may be worth taking a look at a similar precursory concept developed by a Jesuit priest by the name of Pierre Teilhard de Chardin, a paleontologist who, like the Russian Orthodox ascetic Nikolai Fedorov before him, was inspired by Darwinism and believed that humans could direct their own evolution (never mind that that wouldn't be evolution) to bring about the resurrection and the kingdom of God.
Teilhard proposed that in the future all machines would be linked in a vast global network that would merge the consciousnesses of all humans. Eventually this would prime an incomprehensible intelligence explosion, which he termed the "Omega Point", that would allow humanity to break through the material framework of Time and Space and become one with God - despite God already being omnipresent and inside everyone. The intelligence explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time.
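A back-of-envelope version of that calculation, using commonly cited round figures - every one of them an assumption, uncertain by an order of magnitude or more:

```python
NEURONS = 1e11              # ~10^11 neurons in a human brain (rough figure)
SYNAPSES_PER_NEURON = 1e4   # ~10^4 synapses per neuron (rough figure)
UPDATES_PER_SECOND = 1e2    # ~100 updates per synapse per second (assumption)

events = NEURONS * SYNAPSES_PER_NEURON * UPDATES_PER_SECOND
print(f"~{events:.0e} synaptic events per second")  # ~1e17
# At even a few floating-point operations per event, a real-time,
# synapse-level simulation sits at or beyond the sustained throughput of
# the largest supercomputers - before modeling any of the chemistry.
```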
We presently don't even understand how intelligence works, and there is no evidence that intelligence is self-iterative in this manner — indeed, it is not unlikely that improvements on intelligence are actually more difficult the smarter you become, meaning that each improvement on intelligence is increasingly difficult to execute. Indeed, how much smarter than a human being something can be is an open question.
Energy requirements are another issue; humans can run off of Doritos and Mountain Dew (or Dr. Pepper), while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.
There are highly intelligent people in the world in the present day, but they aren't able to upgrade their own intelligence or that of others. Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such "explosion".
Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century, with a minor decline since then, despite the fact that human beings have gotten more intelligent and gotten superior tools. As noted above, Moore's Law has been in decline, and outside of the realm of computers, the rate of increase in other things has not been exponential - airplanes and cars continue to improve, but they do not improve at the ridiculous rate of computers.
It is likely that once computers hit the physical limits of transistor density, their rate of improvement will fall off dramatically; already, even today, computers which are "good enough" continue to operate for many years, something which was unheard of in decades past, when old computers were rapidly and obviously obsoleted by new ones. See also Wirth's law.