Friday, 16 March 2018

(Man vs. Machine) vs. (Machine + Man)



Artificial intelligence is poised to be one of the last, greatest, and riskiest innovations humanity will ever make. Computers are evolving at a rapid pace, and this has made us aware that our position in the hierarchy of lifeforms is not a permanent one. The result of this awareness has been uproar, mostly among intellectuals, who keep finding the task of guiding the rest of us through the binary-coded maze of the 21st century complicated. We don’t yet know how the development of AI will turn out, but two scenarios have been proposed as the most likely. Scenario one follows the man-versus-machine paradigm: we oppose, and eventually submit to, our computer overlords. Scenario two follows the man-plus-machine model: we help AI help us, humanity happily coexists with superintelligent machines, and perhaps even upgrades itself into one. We can all agree that the second scenario is the more desirable. The difficulty lies in making it the more probable, because at the moment either could play out.
The probability that humanity will end up in a losing battle with AI is pretty high because of two very human traits: ignorance and insecurity. Ignorance is the primary one. Nick Bostrom, who has warned that AI might lead to our extinction, compares our current situation to that of a child playing with a grenade (Bostrom). We might be the ones developing the AI, but that does not mean we have a clue what we’re doing. We might giggle as we pull the pin, but we’d be blown to bits before we realized what had happened. Our one advantage is that we are aware of the limits of our knowledge, which is largely why the most significant branch of AI development is machine learning. Computers can now see and hear, and they can acquire knowledge independently from examples rather than having it hand-coded (the sketch after this paragraph illustrates the idea). However, we have no way of telling how much power we have handed to machines, and this is what makes AI an existential risk (Bostrom).
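To make “machine learning” less abstract, here is a toy Python sketch. It is my own illustration, not anyone’s production system, and every name and parameter in it (train_perceptron, the learning rate, the AND dataset) is invented for the example. The point is that the rule the program ends up following is never written into the code; it is inferred from labelled examples.

    # A toy perceptron that learns the logical AND function from examples.
    def train_perceptron(examples, epochs=20, lr=0.1):
        """Learn weights for one artificial neuron from (inputs, label) pairs."""
        w = [0.0, 0.0]  # one weight per input
        b = 0.0         # bias term
        for _ in range(epochs):
            for (x1, x2), target in examples:
                prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
                error = target - prediction
                # Nudge the weights toward the correct answer.
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
        return w, b

    # Labelled examples of AND: the machine is shown behaviour, never the rule.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(data)
    for (x1, x2), _ in data:
        out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        print(f"{x1} AND {x2} -> {out}")

Nowhere does the code say what AND means; the program works it out from four examples. Scale the same trick up a few billionfold and you have the systems this essay worries about.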
The second reason we might form an antagonistic relationship with AI is our insecurity. Through machine learning, humans will become economically expendable. Many of the things we do will be done better by robots, and future businesses will optimize through full-scale automation. This doesn’t augur well for humans: many of us will be left jobless, and without jobs, many capitalist souls will lack a raison d'ĂȘtre. By the halfway mark of this century, we are likely to witness the emergence of a “useless class”: a social class that makes no economic or artistic contribution (Harari). This is likely to happen because it is becoming increasingly difficult to create jobs that humans can do better than computer algorithms. The difficulty stems from the “singularity hypothesis”, under which machines continually improve themselves, including their ability to improve themselves (Bostrom); the toy model after this paragraph shows why that kind of compounding always wins. As such, it will only ever be a matter of time before robots learn whatever novel jobs we create for humans. Consequently, humanity’s sense of insecurity will be amplified, AI will come to be perceived as a rival, and the stage will be set for an antagonistic relationship. The cornerstones of such a relationship are already in place.
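Here is that toy model: a Python sketch of my own devising, not a model from Bostrom’s book. It compares a system whose gains are fixed each step with one whose gains are proportional to its current capability, because part of every gain goes into improving the improver. The function names and the rate of 0.5 are arbitrary choices for illustration.

    # Toy comparison of ordinary improvement vs. recursive self-improvement.
    def fixed_improvement(steps, gain=1.0):
        """Capability grows by the same amount every step (linear)."""
        capability = 1.0
        for _ in range(steps):
            capability += gain
        return capability

    def recursive_improvement(steps, rate=0.5):
        """Each step's gain is proportional to current capability (exponential)."""
        capability = 1.0
        for _ in range(steps):
            capability += rate * capability  # better systems improve faster
        return capability

    for steps in (5, 10, 20):
        print(f"after {steps:2d} steps: "
              f"fixed = {fixed_improvement(steps):6.1f}, "
              f"recursive = {recursive_improvement(steps):8.1f}")

After twenty steps the fixed improver reaches 21; the recursive one passes 3,000. Any fixed ceiling on human skill sits on the linear line, which is why new jobs offer only temporary shelter.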
In the early months of 2016, the Kenya United Taxi Organization gave the government a seven-day ultimatum to lock Uber out of Nairobi, claiming its members were being driven out of business, quite literally. This was, at bottom, a bunch of humans urging a human government to help them keep an algorithm out of their business. The humans failed, the government sided with the algorithm, and the outcome prompted random attacks and arson against Uber drivers. This is the position mechanics, doctors, lawyers, stockbrokers, and soldiers are likely to find themselves in. You might argue that making us jobless isn’t an existential threat. However, how we react to being jobless will make it one. We might think the taxi drivers overreacted by burning other people’s cars, but that is how we are likely to react as a species. Once AI is perceived as a threat to human bliss, we might throw a tantrum and embark on a journey to destroy it. However, if we decide to pull the plug, AI algorithms might go into defense mode and pull our plug first. They are intelligent, after all, and will be able to counter any attack. Furthermore, their sense of self-preservation might make them wary of a self-destruct code being sneaked in by insecure Homo sapiens. We’d be doomed in such a scenario.
However, some have argued that such a conclusion is Hollywood-inspired and possible only in sci-fi movies. Those who take this position argue that humans can develop AI in a benign way that lets us co-exist with superintelligent machines. Nicholas Agar points out that although we might not yet know how to control AI, continued development might produce the breakthrough we need; in other words, AI will help us deal with the risks it poses (Agar). Secondly, current AI algorithms are clumsy, and their development is incremental (Agar). As such, rather than being decimated by an explosion of intelligence, we are likely to witness stage-by-stage growth in artificial intelligence, which gives humans the chance to steer that development by correcting “human-unfriendly” goals (Agar). As for our economic future, the development of AI might take some jobs from people, but it will also create new jobs that algorithms cannot perform. However, as we noted earlier, this is unlikely to remain true for long. Still, on this view, although humans should not be oblivious to the risks AI poses, we need not worry about being rendered worthless or wiped out by superintelligent robots.
So which of these scenarios is likely to play out? Honestly, we do not know. At the moment each is probable, and the problem facing us is how to make it possible for man and machine to help each other build the future. How we should go about this is still not clear. I’m of the opinion that the answer lies in decoupling morality from consciousness. AI was made possible by the “great decoupling”, when we finally realized that awareness is not a prerequisite for intelligence (Harari). We need a second decoupling, one that disentangles morality from consciousness.
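If that first decoupling sounds abstract, a tiny example makes it concrete. The Python sketch below is my own illustration, not Harari’s: it plays the game of Nim perfectly using nothing but an XOR over the pile sizes, so it is competent, even unbeatable from a winning position, while being aware of exactly nothing.

    # Intelligence without awareness: optimal Nim play from a two-line rule
    # (the XOR, or "nim-sum", of the pile sizes). Illustration only.
    from functools import reduce
    from operator import xor

    def best_move(piles):
        """Return (pile_index, new_size) for an optimal move, or None if lost."""
        nim_sum = reduce(xor, piles)
        if nim_sum == 0:
            return None  # every move loses against optimal play
        for i, pile in enumerate(piles):
            target = pile ^ nim_sum
            if target < pile:
                return i, target  # shrink pile i so the nim-sum becomes zero
        return None  # unreachable when nim_sum != 0

    print(best_move([3, 4, 5]))  # -> (0, 1): reduce the first pile to one stone

Nothing in this program feels, wants, or knows that it is playing, yet it wins. At small scale, the decoupling of intelligence from awareness is an everyday fact of programming; the question is whether morality can be pried loose in the same way.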
Our current fears about, and faith in, AI are based on the notion that humans operate on a higher moral ground. As a result, the idea that AI might develop no moral code makes us wary of what this technology might do to us. On the other hand, the impression that human moral superiority will let us teach machines the Kantian moral imperative gives us faith in our ability to control AI. Both views rest on the assumption that morality will remain a monopoly of conscious, carbon-based lifeforms, specifically humans. However, we need to think of morality as distinct from consciousness if we are to solve our current conundrum. AI is not, and might never evolve into, a social lifeform. Therefore, the evolution of its morals will not follow the path taken by social animals like us. This does not, however, rule out the possibility that a silicon-based superintelligence will develop a sense of morality on its own. It is highly probable that intelligent machines will base their morals on algorithmic values rather than on emotions, as humans do. For instance, computers can’t feel disgust, an emotion at the core of many human moral decisions. Whether or not humans develop the ability to conceptualize an unemotional moral code, and to understand it once it emerges, will determine the kind of relationship we have with superintelligent machines. That relationship, in turn, will decide whether ours is a man vs. machine or a man + machine future.



Works Cited
Agar, Nicholas. "Don’t Worry about Superintelligence." Journal of Evolution and Technology (2016): 73-82. PDF.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. Print.
Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. London: Vintage, 2016. Print.
