Artificial Intelligence Will Not Take Over the World
But it can improve lives if implemented properly and with awareness of its limitations.
Is artificial intelligence (AI) software automating society in ways that eliminate jobs? In some business sectors, such as insurance and law, we are seeing how the tedious work of humans poring over actuarial tables and voluminous case law can be accomplished faster and more accurately by computers in general and AI in particular.
The quick answer to whether AI is replacing humans in some areas of business is… yes, it is. The displacement of humans by AI automation is inevitable.
Some are asking much larger questions: Is advanced AI development poised to assume much greater roles in our lives? Can AI make decisions better than humans? Should we expect AI to end up in charge of everything vital to human existence?
A group of AI experts, including Elon Musk, has released an open letter calling for a six-month “pause” in developing systems more powerful than the newly launched GPT-4. OpenAI, the company behind ChatGPT and GPT-4, was co-founded by Musk himself, so of course this grabs everyone’s attention. And maybe that was the point all along.
The AI Race
In the push to embed AI into as many of their products as possible, rival companies including Google and a host of startups have accelerated their development efforts, attracting a lot of investment funding and giving OpenAI some real competition.
But now the experts aligned with OpenAI claim that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” as justification for the “pause” in advanced AI development. The open letter goes on to ask, “Should we develop nonhuman minds that might eventually outnumber, outsmart, and replace us?” On the surface, these sound like practical questions to ask. We would not want to create a genuine Skynet, would we?
In reality, OpenAI is using scare tactics to consolidate its number one market position by generating fictional scenarios of an out-of-control network of AI-powered servers taking over the world and becoming a “risk to society.” The six-month developmental “pause” would prevent some of OpenAI’s competitors from gaining ground and cause a lot of investment capital to be redirected elsewhere. With $10 billion in Microsoft backing, OpenAI can afford to take the summer off and watch its competitors wither.
By suggesting a host of “risk to society” consequences, OpenAI is cynically misrepresenting the capabilities of AI algorithms. There is a big difference between cognitive automation and autonomy. AI is entirely automation: it is good at pattern matching constrained by mathematical rules, but it has no ability to create, to be skeptical, or to empathize. In other words, it cannot become “self-aware” in the manner of the fictional movie computer HAL 9000 and make decisions that were not already programmed in by its developers. So no, there is no “risk to society” except that which is coded into the platform by humans.
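To make the distinction concrete, here is a minimal sketch of what cognitive automation amounts to. It is a hypothetical toy (a one-nearest-neighbor classifier), not any vendor’s actual code: the entire “decision” is one fixed mathematical rule applied to patterns a human supplied in advance.

```python
# Toy 1-nearest-neighbor classifier: the whole "decision" is a single fixed
# rule (pick the label of the closest stored example). It can only echo the
# patterns its programmers fed it; that is automation, not autonomy.

def nearest_neighbor(examples, query):
    """Return the label of the stored example closest to `query`."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: sq_distance(ex[0], query))
    return label

# Hypothetical training data chosen in advance by a human.
examples = [((0.9, 0.1), "spam"), ((0.1, 0.8), "not spam")]
print(nearest_neighbor(examples, (0.8, 0.2)))  # prints: spam
```

Scale the stored patterns up to billions and the rule up to billions of parameters and you get today’s impressive systems, but the character of the computation does not change.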
Is a “Pause” Really Necessary?
And it’s not likely that AI developers will “pause” their work to satisfy the ramblings of those in direct competition with them. Do we really think developers in China or India will stop their work due to the science fiction fever dreams put forth by OpenAI’s leadership and financial supporters?
By all means, keep perfecting AI and expanding its presence in our lives so we can make better human decisions. But always keep the application of AI in the proper context; lightning-speed data analysis is very useful in many areas of science, business, and law, but it is not proof of creative or autonomous intelligence. Algorithms can be predictive to a degree, but no more predictive than human beings looking at the same statistical data.
What about the promise of neural networks that can “mimic the behavior of the human brain”? The technology has been under development for decades and has not reached a cognitive level of thinking to match that of people, nor is it likely ever to reach that goal. Why would we want it to think like people in the first place? I personally do not wish to be operated on by an AI-powered surgery robot that wishes it had gone to law school, or be driven by an autonomous vehicle that is still miffed over the argument it had with the garage door opener. No, AI can serve mankind better in specialized applications without being distracted by thinking, emoting, empathizing, brooding, or otherwise having its attention steered away from doing its job. You know, like humans.
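For a sense of how modest the “brain mimicry” really is, here is a minimal sketch of an artificial neuron; the weights below are made-up illustrative numbers, but even the largest networks are, at bottom, this same arithmetic of weighted sums passed through a fixed nonlinearity.

```python
import math

# A toy "neural network": each neuron is a weighted sum squashed by a fixed
# nonlinearity. The weights here are invented for illustration; real systems
# tune them by optimization, but the computation stays this mechanical.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

inputs = [0.5, 0.2]
hidden = [neuron(inputs, [1.2, -0.7], 0.1),
          neuron(inputs, [-0.4, 0.9], -0.2)]
output = neuron(hidden, [0.8, -1.1], 0.05)
print(output)  # a number between 0 and 1, and nothing more "brain-like" than that
```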
The Machine Cannot Program Itself As People Can
Because AI platforms possess the same limitations as the computer modeling of the earth’s climate (i.e., “garbage in, garbage out”), one must always be mindful of the pre-programmed biases and opinions of the people who engineered these platforms. This is not a problem when AI is used for chatbots or making cartoons, but it is a big problem when AI is used to influence parts of our lives that should be decided by humans with genuine cognitive capabilities. In that case, we should assume that those who would outsource decision-making to AI are, in fact, escaping accountability for those decisions. “I was only following the instructions of the AI program” would be the excuse. That is the real threat, not the machine.
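A minimal sketch of how that laundering works, using entirely made-up data: a “decision model” that simply replays the approval rates found in its training history. If the historical decisions were biased, the automation reproduces the bias while appearing objective.

```python
from collections import defaultdict

# Toy "decision model" trained on hypothetical historical outcomes. It does
# nothing but restate the approval rates in its training data: skewed data
# in, skewed decisions out, now wearing the costume of objectivity.

history = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", False), ("group_b", False), ("group_b", True)]

rates = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    rates[group][0] += approved      # True counts as 1, False as 0
    rates[group][1] += 1

def predict(group, threshold=0.5):
    approved, total = rates[group]
    return approved / total >= threshold  # the decision is the data, restated

print(predict("group_a"), predict("group_b"))  # True False
```

Nothing in the code decided anything; the humans who assembled the history did, which is exactly why “the AI decided” should never pass as an answer.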
Because there will never be such a thing as an objective AI, we must always keep AI on a rhetorical leash; algorithms do not think and cannot create. They only follow pre-programmed instructions, no matter how impressive their processing skills. If we keep this in mind, we can allow AI development to flourish and continually improve our lives on a scale we humans can only dream of.
Oh, and AI cannot dream either. Thankfully.