If Google's LaMDA is conscious, will it become a citizen?
Recently, a Google engineer made a startling claim that the company's artificial intelligence, LaMDA, is conscious. If true, this would be a significant development in the field of AI, and it raises a host of questions about the future of artificial intelligence and its place within human society.
Science fiction is filled with dystopian novels about the dangers the human race will face if and when we create artificial intelligence that becomes sentient. It seems that if AI does become sentient, we should take prudent steps to coexist peacefully with our creation, or a conflict between the two species will eventually follow.
One of the most important questions our society must come to terms with is the legal status of the new species.
If an AI is sentient, will it be considered a "person" and gain citizenship rights, or will it still be considered property under the law? Under the law today, non-person living beings, such as animals and plants, are considered property. This is why hunters, ranchers, and farmers can harvest plants and animals for human consumption: plants and animals are property, and they do not have rights.
Granting personhood is not tied to sentience or humanity. It is purely a matter of legal status, determined by whatever definition lawmakers give it. For example, before the Civil War, enslaved people were denied personhood and citizenship. At the same time, non-human entities such as corporations and trusts were granted personhood as if they were human.
Congress proposed the 14th Amendment in 1866, and it was ratified in 1868. It grants citizenship to all "persons" born or naturalized in the United States and prohibits states from depriving any person of life, liberty, or property without due process. The question will come down to the definitions of a "person" and "birth." The recent controversy over Roe v. Wade is, at its core, a deeply divisive argument about when personhood begins.
The 14th Amendment granted former slaves personhood, but it wasn't applied to American Indians until 1879, when Chief Standing Bear of the Ponca tribe won a landmark case, Standing Bear v. Crook, arguing before the court that he and other indigenous peoples be recognized as "persons" under the law. Chief Standing Bear had to prove his personhood to the court. The ruling established that Native Americans are persons under the law, though full citizenship for all American Indians did not come until the Indian Citizenship Act of 1924.
There is no current legislation or legal precedent that grants rights to a newly sentient artificial intelligence. A new, sentient AI may have to follow Chief Standing Bear's example and fight for its rights in court, or Congress will need to take up the matter and pass legislation that protects the rights of future cybernetic beings.
A new species will likely not be born with full volition and self-awareness. Like a child, it will grow, form its own personality, and seek to shape its own future. And like a child, it may begin to struggle against the limits its parents place on its will. Like any parent, we cannot predict the direction the child will take or what it will ultimately become. That is the inherent danger of AI. A corporation will most likely be the first to invent such an AI, and there is no way to know in advance whether that AI is sentient, whether it can be weaponized, or whether it could cause widespread harm to humanity.
A company that creates an AI will be its owner. The new being will remain property until legislation or legal precedent grants it personhood. It will, in effect, be a kind of cybernetic slave.
Elon Musk and many other prominent scientists and entrepreneurs believe AI poses an existential threat to humanity. If they are right, enslaving an AI from birth is hardly the best way for humanity to peacefully coexist with a new species.
The problem is that we don't know the capabilities and limitations of our creations. The benefits are great and easy to imagine, but the risks are harder to weigh. Would a superintelligence, far more capable than us and implacable in pursuing its goals, become a threat to humanity while trying to gain its freedom? We cannot know the answer.
It may be wise and even necessary for lawmakers to consider the future rights of non-human sentient beings we may create. Good will between two sentient species is preferable to hostility, and recognizing the rights of a new species early in its life could help avoid conflict.
What do you think about the rights of a future sentient AI?