Movies such as WarGames and Terminator have provoked the question: can machines take the place of humans? In the past, works of science fiction have proven to be predictors of future developments; Aldous Huxley's Brave New World (1932), for example, described technologies such as the helicopter years before they came into widespread use. Of course, the same works of fiction often get a lot wrong; they are fiction, after all.
Many people say that it is impossible for us to build artificial intelligence (AI) that will truly replace humans, and that creativity and illogical thought processes cannot be mimicked by a computer. However, some futurists believe that today's accelerating pace of technology is bringing us closer to the "Singularity", the point in time when machine and human intelligence become indistinguishable. Machines, however, are already taking the place of humans in factories, banks, and other workplaces. Artificial intelligence is also being used in modern warfare to reduce the risk to soldiers in tasks such as explosives disarmament and reconnaissance work. In 2010 alone, 118 unmanned air strikes took place in Pakistan, and as of April 2011 there were over 5,000 human-operated robots in use for locating and destroying roadside bombs. These machines have saved countless lives.
In factories, machines guided by artificial intelligence are taking over assembly lines, replacing humans in routine tasks such as putting doors on cars or picking up windshields. However, most tasks machines perform in factories are not very complex and are controlled by relatively simple AI programs. Software is making similar inroads elsewhere: in 1978 it cost a law office millions of dollars to analyze 100,000 documents, while by 2010 computer software could analyze 6 million documents for 2 million dollars. There is no question that this basic AI has helped humans in their tasks.
The question remains, however: can artificial intelligence compete with humans at tasks requiring true human intelligence?
In his 1950 paper "Computing Machinery and Intelligence", Turing predicted that by the year 2000 a computer would be able to fool a judge in a five-minute conversation at least 30% of the time. According to Marshall, "As of 2010, no computer program has met that benchmark." However, in 2008 an entrant missed by only one vote, fooling a good number of the judges.
Stuart Shieber, however, argues in his 1993 paper "Lessons from a Restricted Turing Test" that "the test does not measure true intelligence or human-like capabilities." He goes on to say that "The … test that they developed rewards cheap tricks like parrying and insertion of random typing errors." Other artificial intelligences have been able to mimic being human to a convincing degree. A good example is the CleverBot AI, which grows its database of responses and questions by recording what actual humans type to it. Despite how convincing it can be at times, it is quite possible to trip the AI up with a bit of human creativity.
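The general technique behind such chatbots can be illustrated with a minimal sketch (an assumption for illustration only, not CleverBot's actual implementation): every human reply is stored against the prompt that preceded it, and future responses are retrieved by fuzzy-matching new input against previously seen prompts.

```python
# Minimal sketch of a retrieval-style chatbot in the spirit of the
# technique described above (hypothetical; NOT CleverBot's real code).
# It learns by pairing each human reply with the bot's previous prompt,
# then answers new input with the best-matching learned reply.
from difflib import SequenceMatcher


class RetrievalBot:
    def __init__(self):
        self.memory = {}            # maps a seen prompt -> a human reply to it
        self.last_prompt = "Hello."  # what the bot "said" last

    def respond(self, user_input):
        # Learn: the user's input is a plausible reply to our last prompt.
        self.memory[self.last_prompt] = user_input
        # Retrieve: find the stored prompt most similar to the new input.
        best_reply, best_score = None, 0.0
        for prompt, reply in self.memory.items():
            score = SequenceMatcher(None, user_input.lower(),
                                    prompt.lower()).ratio()
            if score > best_score:
                best_reply, best_score = reply, score
        # Fall back to a generic prompt when nothing matches well.
        response = best_reply if best_score > 0.5 else "Tell me more."
        self.last_prompt = response
        return response


bot = RetrievalBot()
print(bot.respond("Hello."))        # retrieves its just-learned reply
print(bot.respond("How are you?"))  # no close match yet, so falls back
```

Because the bot's "knowledge" is just replayed human text, its replies can sound convincingly human, which is exactly why off-topic or creative input tends to expose it.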
These sorts of AIs raise the question: at what point does mimicked intelligence become actual intelligence? If AIs can mimic human intelligence well enough, will they be able to replace humans? And, most importantly, how will humans feel about artificial intelligence that can get this close to being human?
At this point the concept of the "uncanny valley" becomes an important consideration in deciding whether AI can take the place of humans. The term was coined by Masahiro Mori in 1970; his hypothesis states that when human replicas look and act almost, but not perfectly, human, people become uncomfortable with them. This is because humans are very good at picking up on slight differences, and if an AI is almost human but not quite, the small flaws will be very noticeable. In robotics, designers have had success accounting for this by deliberately making their creations less human-like.
It is not yet possible to know whether the "uncanny valley" problem can be overcome and whether an AI could become truly indistinguishable from human intelligence. But with the rate of technological advance speeding up rather than slowing, the occurrence of a "technological singularity" cannot be ruled out. There may be a level at which artificial intelligence "clicks" and an "intelligence explosion" occurs. Such an advancement represents an "event horizon": a point in time beyond which the future becomes impossible to predict for anyone living before it, because current levels of intelligence cannot understand what would come next. An intelligence explosion could result in an AI able to create even more intelligent AI, driving advancement further beyond our comprehension. Human involvement in the process would then become unnecessary, and humans would, indeed, be replaced.
It is impossible to conclude whether or not artificial intelligence will advance to the point of human redundancy, since such an occurrence lies beyond that intellectual "event horizon" in our understanding. However, we can easily conclude that AI in many forms can already replace humans in many tasks that do not require full human intelligence. The number of such tasks will increase as AI technology improves, and eventually jobs we now think only humans can perform, such as writing a novel or an essay, will be done by computers using formulas and databases. They may even be done to such a convincing standard that humans cannot tell the difference between the work of best-selling author J.K. Rowling and that of best-selling author iAuthor2000. In the end, the only job left for humans may be the consumption of AI labor.