Imagine if, at the beginning of the nuclear age, nuclear weapons had been simultaneously developed in thousands of private labs around the world under the guise of creating sustainable energy. The intent might have been beneficial to humanity, but the potential for disaster would have been too great a risk to allow.
This week, many of the biggest names in technology and futurism, including Elon Musk, signed an open letter calling on artificial intelligence (AI) researchers to pause their work in AI. The recent release of OpenAI's GPT-4 has shaken the foundations of the scientific community. GPT-4 is far more advanced than many scientists thought possible at this stage, and the letter urges technology leaders to discuss and enact safety protocols.
The potential for creating a hostile AI that acts on its own volition, or for a rogue group to weaponize AI, is extreme. Either poses a clear danger to humanity and, like readily available nuclear weapons, is too great a threat to allow to continue with no safety precautions. There are currently no government regulations or oversight bodies governing the development of AI; it is a completely unregulated field.
The problem is that whoever harnesses the first superintelligent AI system will hold an advantage over the entire world. It is a digital arms race the likes of which the world has never seen. Such a system will have profound business applications, but its dangerous downsides cannot be overlooked. A superintelligent AI could easily attack the very foundations of civilization or quickly create a new generation of high-tech weaponry that changes the global balance of power.
In centuries past, automation and technological improvements caused disruptions slowly, and workers were able to move into emerging industries. An advanced AI system would quickly automate dozens of industries, rendering their human workers obsolete with little chance to retrain the workforce. It could cause worldwide unemployment and financial disaster for individuals and governments alike.
Musk and his counterparts' seeming alarmism is well justified. At this point, we don't know all of the dangers posed by AI, but we do know those dangers are real. A research pause to think deeply about how to prevent a disaster is a reasonable ask, though it may be a naive gesture.
In Greek mythology, Pandora was the first woman created by the gods. Zeus gave her a box and warned her not to open it under any circumstances. But Pandora's curiosity got the better of her, and she opened it, releasing all of the evils of the world: disease, poverty, and despair. Frightened by what she had done, Pandora quickly closed the box, trapping only one thing inside: hope.
We can only hope that scientists and governments come together quickly to put safeguards in place that minimize the risks while allowing science to move forward. We will see.