Here's Why the Technological Singularity Is Going to Happen


The intelligence explosion

One of the really big questions of our time - one that worries notable billionaires (like Elon Musk) and physicists (like Stephen Hawking) alike - is this: can we create something smarter than ourselves? To answer that question, I love thinking about the history of technological innovation; how we got to where we are now can tell us a great deal about where we might be going. I've already written extensively about extrapolating from past innovations, though, so I propose a change of pace here.

The aim of this article is merely to demonstrate the feasibility of a technological singularity, strong AI, and artificial general intelligence (AGI). I just want to show you that it is theoretically possible to create a machine smarter than us, and to explain one particular method by which it might happen. In future articles, I will attempt to explain why it might happen within my lifetime, but that lies outside the scope of this one. Instead, I'd like to keep our eyes on the prize: an understanding that there is, in fact, a way for us to become not only more intelligent than we are today, but many thousands of times more intelligent - and that when this happens, a singularity is inevitable. That is, we presently have no way to conceive of what the future will be like after this initial "intelligence explosion" occurs.

If you're a well-read follower of this subject, you have likely already worked through the following line of reasoning, and may have come up with far more likely scenarios. Nevertheless, these are concepts a layman can grasp immediately, and this is the tactic I often employ when discussing such heavy ideas with the uninitiated. I think you'll find value in the logic behind the argument, and I hope it proves useful in convincing others not only of how plausible strong AI is, but of how important it is that we start focusing on it.

The long and illustrious history of AI

From Greek automata to IBM's Watson

People have been exploring the possibilities of artificial intelligence for a long time, although the field wasn't called "artificial intelligence" until John McCarthy proposed the name at the 1956 Dartmouth Conference - a meeting of the minds among logicians, philosophers, and scientists interested in pinning down what "learning" and "intelligence" mean, precisely enough that a machine could ultimately replicate the human experience of "thinking." Long before that moment, ancient Greece had automata that resembled human beings, and throughout the ages there have been similar mechanically "programmed" machines designed to simulate humans or other forms of natural life.

There are actually numerous examples of AI in our daily lives, from Google's search algorithms, to the voice-to-text apps on your phone, to the SoundHound music app that picks out a song playing in the background. However, these all fit within the realm of so-called "narrow artificial intelligence": machines capable of accomplishing very specific (narrow) tasks, but not of solving complex problems on their own or adapting to new situations.

Another form of AI that is ever-present in our daily experience is "heuristic" artificial intelligence. "Heuristic" comes from the Greek for "to find or discover"; in this context, it means capable of learning on one's own. In the same way that a toddler will hesitantly take that first step before falling over and starting again, heuristic programs blunder frequently, learning from their mistakes (and, often, from the feedback of their human users).

Heuristic artificial intelligence and narrow AI, while both extremely useful in our daily lives, still fall well short of the holy grail of AI: strong, or general, AI. Artificial general intelligence has been seriously pursued for at least the last half-century, and at the beginning of that period, AI mavens heralded machine intelligence as capable of solving all of humanity's problems within a few short years.

Suffice it to say, this was wildly optimistic.

Since then, there have been many approaches to teaching machines how to think, from neural nets, to brute-force rules and definitions, to the aforementioned heuristics. All have made slow, plodding progress, but thanks to Moore's Law and what Ray Kurzweil calls his "Law of Accelerating Returns," most scientists in the field believe the pace of progress will accelerate dramatically in the near future.

Two notable examples worth pointing to are Deep Blue, which upset world chess champion Garry Kasparov in 1997, and IBM's Watson, which beat both of Jeopardy!'s all-time winningest champions (Ken Jennings and Brad Rutter) by a large margin in 2011. Twenty years before either of these events, the public might have assumed that "intelligence" would be required to win a world chess championship or to beat champions on Jeopardy!, but the funny thing is that our definition of "intelligence" seems to keep changing. Every time a mark is met, the bar is set considerably higher.

But that's not really what this article is about.  This article is about cheating.

Cheating

The human brain

One thing all of these approaches have in common is the idea that there must be a more efficient way to replicate what the brain does than copying it piece by piece. And yet the one shining example of supreme "general intelligence" the universe offers, as far as we know, is the human brain itself. While there are numerous approaches that attempt to model the human brain, including programs that start from scratch and have to learn everything, all of them have been marginally successful at best, and still have a long, long way to go. But what about "cheating"?

"Cheating" would be extremely straightforward:  all one would need to do in order to accomplish this, would be too replicate a human brain, atom by atom.  Once complete, we'd have an exact replica of the very best computer the known universe has to offer, at least as far as "thinking" and "reasoning" goes.  

Now, I'm certain this approach will immediately raise several objections in your mind, not the least of which is that there is an almost incalculable number of atoms in the human brain - on the order of 10^26. That is absolutely correct. However, one can see that in concept - in theory, at least - this approach would work: that is to say, we would have an exact replica of a human brain at the end of the process.
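As a rough sanity check on the size of that number, here's a back-of-the-envelope estimate (a sketch only: the 1.4 kg mass figure and the pure-water simplification are my assumptions, not anything from the argument above):

```python
# Back-of-the-envelope count of the atoms in a human brain.
# Assumptions (mine, not the article's): ~1.4 kg of brain mass,
# modeled for simplicity as pure water (H2O, 18 g/mol).
AVOGADRO = 6.022e23              # molecules per mole

brain_mass_g = 1400.0            # ~1.4 kg
molar_mass_water = 18.0          # grams per mole of H2O
atoms_per_molecule = 3           # two hydrogens plus one oxygen

moles = brain_mass_g / molar_mass_water
atoms = moles * AVOGADRO * atoms_per_molecule
print(f"~{atoms:.1e} atoms")     # prints ~1.4e+26
```

Ten to the twenty-sixth is the kind of number that makes "atom by atom" sound absurd as an engineering plan, which is exactly the force of the objection.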

After considering this briefly, you may come up with an even more pertinent objection: even if I am able to replicate a human brain, I still don't actually have anything smarter than a human being. It's still just the same as having an organic brain, and no smarter than anything (or anyone) available today.

And you'd be absolutely right again, but there is a substantial caveat. Right now, inside the human brain, information is processed by chemical transfer. While this process is rapid from a biological standpoint (around 250 miles per hour!), the speed at which those chemical signals propagate through the brain - and, in some cases, ultimately into the body - is a virtual standstill by the standards of the laws of nature. Information traveling through a computer can travel at c, the fastest possible speed in the universe: roughly 2.7 million times faster than information currently travels in an organic brain.
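If you want to check that multiplier yourself, the arithmetic is a one-liner (a sketch; the 250 mph figure is the one quoted above, and the exact ratio depends on which signaling speed you pick):

```python
# Ratio of light speed to the ~250 mph chemical signaling speed.
SPEED_OF_LIGHT = 299_792_458        # meters per second
MPH_TO_MPS = 0.44704                # miles per hour -> meters per second

chemical_speed = 250 * MPH_TO_MPS   # ~111.76 m/s
ratio = SPEED_OF_LIGHT / chemical_speed
print(f"{ratio:,.0f}x faster")      # prints 2,682,467x faster
```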

Well, sure, but isn't it still just a human brain? It can't do anything a normal human brain can't do, other than calculate things quickly.   This is, again, completely correct:  all Superbrain here can do is calculate things quickly. However, let me explain this concept by way of analogy:

Suppose that I gave you a very complicated math problem to solve (or, if you really hate mathematics, an interesting thought experiment), and suppose that I gave you 5 minutes to solve it. Unless you have the mind of an Einstein, you probably aren't going to solve it that quickly. Whether it involves some complicated math or some powerful deductive reasoning, this sucker is just going to take a good long while to work all the way through.

Now I'll take the exact same problem and give you 25 years to solve it. Do you think you could solve it then? Stop for a moment and think about what you were doing a quarter of a century ago. 25 years ago, I was learning how to drive, starting to learn how to wrestle, and sitting down to art class in high school. What were you doing then? Now imagine that you had that entire span to solve one individual problem, completely focused every minute of every day (or at least as focused as you could manage during the original five minutes). Do you think you could solve a pretty complicated problem under those conditions? Twenty-five years of travel at 250 miles per hour covers about the same distance as 5 minutes of travel at the speed of light, so a brain that didn't rely on chemicals to propagate information, but instead sent it around at c, would do its thinking roughly two and a half million times faster.
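Here's the same equivalence worked out numerically (a sketch; it simply converts five minutes at the speedup computed earlier back into years):

```python
# Subjective thinking time packed into five minutes of wall-clock
# time at the ~2.68-million-fold speedup computed above.
speedup = 2_682_467
wall_clock_minutes = 5
minutes_per_year = 60 * 24 * 365.25

subjective_years = wall_clock_minutes * speedup / minutes_per_year
print(f"~{subjective_years:.1f} years")   # prints ~25.5 years
```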

Put another way: with that brain inside your head, you would be capable of calculation, and of thinking in general, at two and a half million times the normal rate. You would be, effectively, two and a half million times smarter.

[Image: PET scan of a brain. Credit: Wikimedia Commons]

Another objection likely to arise at this point is that you won't necessarily be able to remember all of this. For example, if I were locked in a room for a thousand years, without pen and paper, and asked to solve this complicated problem, I'd be far more likely to go insane than to come up with a solution. I simply wouldn't be able to keep all of the information in my head.

However, there is a simple answer to even this problem, and to a large degree it already exists today. The concept is called "the cloud," and it's where an increasing amount of computing happens nowadays; soon, it will be where virtually all of the world's computing is done. The cloud works by connecting individual computers to remote servers, so the individual machines don't have to store the information themselves. Amazon pioneered the modern concept of cloud computing in the mid-2000s with AWS; Dropbox was an early and very successful example, and more recent iterations include Microsoft's cloud services, Google Drive (which I use religiously), and iCloud.

Assuming we have a strong enough mastery of the technology to replicate the human brain atom by atom, and we're clever enough to replace the neurons with a material through which information propagates faster - say, silicon - it's probably also reasonable to assume we're clever enough to embed tiny computers inside that brain as well. These tiny computers could transmit information - memory - to the cloud, which in turn removes the brain's need to store memories at all. From within the brain, up to the cloud, and back, all at the speed of light: memories can be called upon whenever they're needed, and information can be accessed immediately.
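To make that architecture concrete, here is a minimal sketch of the offloading idea. Everything in it is hypothetical: CloudMemory stands in for the remote servers (a plain dictionary plays that role here), and the class and method names are made up purely for illustration:

```python
# Minimal sketch of memory offloading: the "brain" keeps no long-term
# storage of its own and round-trips every recall through a remote
# store. CloudMemory is a hypothetical stand-in for real cloud
# infrastructure; a plain dict plays the role of the remote servers.

class CloudMemory:
    """Hypothetical remote memory store (illustrative only)."""
    def __init__(self):
        self._store = {}

    def remember(self, key, value):
        self._store[key] = value     # "upload" a memory

    def recall(self, key):
        return self._store.get(key)  # "download" it on demand


class AugmentedBrain:
    """A brain with no local memory: everything lives in the cloud."""
    def __init__(self, cloud):
        self.cloud = cloud

    def experience(self, label, details):
        self.cloud.remember(label, details)

    def think_about(self, label):
        return self.cloud.recall(label)


brain = AugmentedBrain(CloudMemory())
brain.experience("the_problem", "that very complicated math problem")
print(brain.think_about("the_problem"))  # recalled on demand
```

The design point is simply that the brain stores nothing long-term; every recall is a round trip, which only becomes practical when the round trip happens at light speed.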

There is an additional obvious, far-reaching implication of using the cloud this way, and it's a benefit we are already enjoying today: access to all of human knowledge. Google indexes unprecedented amounts of information. With your brain already connected to the cloud, you would be able to reach that information virtually instantaneously, without having to swipe at your smartphone, dictate via voice-to-text (as I am doing to write this article right now), or type on a keyboard. Today's interfaces, while considerably better than the punch cards and high levels of expertise that earlier ones demanded, still pale in comparison to simply thinking. When all you have to do to access information - and ultimately to send information - is think, we will have made the next big leap forward.


It is my personal belief that this is not how we will actually achieve the technological singularity or artificial general intelligence; it is merely one example of how it could, in principle, happen. For one thing, if we had achieved the technical mastery to replicate a brain atom by atom, as I've described, we would likely have already figured out a vastly more effective route to intelligence than the human brain. As mentioned, there are already several promising methods of artificial intelligence in use today, from the "top-down" logic-based approach to the "bottom-up" (extremely specific tasks) approach, and everything in between. Many believe the two approaches will meet somewhere in the middle, a metaphorical Golden Spike tying the fields together in the first AGI ever.

I do believe, however, that the hybrid thinking that comes from interfacing with the cloud is likely to arrive much, much sooner than many people expect. This is how we're going to solve a lot of the world's major problems, and it's how AGI is most likely to be created. Consider the way computer chips are designed nowadays: no longer by a human mind and eye alone, but by a combination of computers and humans working side by side. The interface still leaves a great deal to be desired, though: thinking about what you want to do, clicking things with a mouse, or (at best) talking to the computer screen, and so forth.

Whatever model we end up using to create strong AI, you can bet that it will draw on collective human intelligence, whether via the cloud or some sort of collective consciousness that arises from all of us talking to one another with our brains - almost certainly via the Internet, and likely through communications devices inside our heads. After all, millions of minds are smarter than one mind, and there are already real-world examples of computers interfacing with neurons. People who never had the use of their limbs are gaining it, and paraplegics are walking again, thanks to interfaces between brains and computers. Parkinson's patients have long had so-called "brain pacemakers" installed in their heads, "talking" to the nerves when they start to shake and heading the tremors off just in time. It is only a matter of time before similar devices for more general use are hooked up to the Internet, allowing the ultimate human-machine interface.

Likely outcomes

Can you see how you could replicate a human brain if you had the ability to copy atoms on a massive scale? To me, it's obvious that this feat is within the laws of physics, and anything the laws of physics allow us to do, we'll eventually do. However, it is unlikely that we will need this level of exact replication in order to do what the brain does. For example, if we could determine the exact inner workings of a neuron, and then figure out in precise detail what its component parts - axons, dendrites, and synapses - do, we might well have enough information to replicate a functioning human brain without going all the way down to the atomic scale.
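For a flavor of what component-level modeling looks like, here is a leaky integrate-and-fire neuron, a standard textbook simplification (this is a toy, not a claim about how a real brain emulation would be built, and the parameter values are arbitrary):

```python
# Leaky integrate-and-fire neuron: a standard simplification that keeps
# a neuron's input-integrate-spike behavior while discarding all
# atomic-level detail. Parameter values are arbitrary.

def simulate_lif(inputs, dt=1.0, tau=10.0, threshold=1.0):
    """Return the spike times produced by a stream of input currents."""
    v = 0.0                          # membrane potential
    spikes = []
    for step, current in enumerate(inputs):
        # the potential leaks toward rest while integrating the input
        v += dt * (-v / tau + current)
        if v >= threshold:           # fire and reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

print(simulate_lif([0.15] * 100))    # steady input -> regular spiking
```

Models like this preserve the functional behavior of a neuron while throwing away atomic detail, which is exactly the shortcut the paragraph above describes.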

Consider how far brain scanning has come. In 1900, there was no way to look inside a human being's brain other than to cut the skull open and start poking around. During World War I, X-rays gained popularity on the battlefield (thanks in large part to real-life superhero scientist Marie Curie), but when X-rays were turned on the living human brain, they yielded only the roughest approximations of what lay inside. Then, during the 1970s, computerized axial tomography, or CAT scanning, arose as a way to get some real detail. PET (positron emission tomography) and MRI (magnetic resonance imaging) took brain imaging to a new level, and fMRI (functional magnetic resonance imaging) brings us, for the most part, up to the present day. Nowadays, brain-computer interfaces are on the cutting edge of imaging as well, with tiny computers inside the brain taking a look around.

[Image: CT scan of a brain. Credit: Wikimedia Commons]

The knee of the curve

Somewhere along the way, it occurred to somebody that it would be a good idea to create an intelligent, or thinking, machine. Alan Turing was certainly one of the pioneers, with the eponymous "Turing machine" (an abstract computer capable, in principle, of carrying out any computation) named after him, although a century earlier, Charles Babbage and Ada Lovelace had worked to invent the first programmable computer. At any rate, the important thing is that the idea did, in fact, arise that a machine could be built that could think better than a human being does. Speculation about the architecture of such a machine has, of course, generated a tremendous amount of debate, much of which is outlined above.

There is little question, though, that once we create a single ultra-fast, ultra-intelligent brain, we will have a mind presumably capable of out-thinking any individual human brain and, quite possibly, all human brains collectively. Even if this ultra-fast brain isn't smarter than all current brains combined, we can simply make a million more at that point. Once we have more thinking power than the current collective intelligence of planet Earth, we will be able to create an even more powerful mind, if such a thing is possible. A mind a billion times more powerful than a normal human brain will almost certainly be fully capable of creating a more intelligent mind than we dullards on planet Earth managed, and whatever super-powerful brain that brain creates should in turn be vastly more capable of creating an even more powerful one. This runaway effect of strong AI is precisely what people mean when they discuss the singularity. There is simply no way for human beings to understand what will happen, or the reasoning behind what happens next, in much the same way that our actions are utterly incomprehensible to an individual bacterium.
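A toy model makes the runaway dynamic concrete (all numbers here are arbitrary; the only point is the shape of the curve when each generation's design ability scales with its own intelligence):

```python
# Toy model of recursive self-improvement: each generation of minds
# designs its successor, and a smarter designer makes proportionally
# bigger design leaps. The 0.1 coefficient is arbitrary.
intelligence = 1.0                   # 1.0 = one baseline human brain

for generation in range(1, 16):
    intelligence *= 1 + 0.1 * intelligence
    print(f"generation {generation:2d}: {intelligence:,.1f}x baseline")

# The early generations crawl, then the multiplier abruptly dwarfs
# everything that came before it - the "explosion" in
# "intelligence explosion."
```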

Right now is the best time to be alive, ever.  We are at the knee of the exponential curve, right as things begin to get really interesting, and right as innovation becomes faster than we can perceive without a little help understanding what the innovations mean.  Already it's almost impossible to stay "in the know" with AI developments, physics discoveries, and technology in general without doing considerable research on each particular innovation.  By the time we learn about what just happened, the next big thing has already happened.


Comments

Nov 12, 2014 11:00am
Marlando
As always I admire your writing but, quite frankly, I disagree with your conclusions. Singularity is a concept that disregards consciousness and, as far as I know, is a concept of the reductionists. Nevertheless, I like your style and unique topics so again two thumbs way up and yep...also a rating. (I do not have to agree with you, to like what you do).
Nov 12, 2014 1:55pm
goatfury
Marlando- I've actually written extensively about consciousness as it pertains to the Singularity. In a nutshell, I believe that if consciousness is continuous, you're still you. I am not a dualist.

I wish I could refer you to some of my other writing on the subject, but I'm guessing linking here in the comments would be a no-no.

Anyway, I'd love to hear more of your thoughts on this if you have time to chat. Here is fine.
