One of the great dangers we perceive in modern definitions of Artificial Intelligence is the temptation to grant such systems some sort of “personhood”.
When we talk about AI systems such as an Amazon algorithm or even the thermostat on a car radiator, it clearly makes little sense to treat the entity as a person with rights.
An important distinction here is between Artificial Intelligence in the narrow sense, meaning something that could be done by a human but can be automated (as in the examples above), and Artificial General Intelligence, which, roughly speaking, models the way a human being reasons in a broad sense.
The latter definition would presumably include credible “inductive reasoning skills” and the ability to pass the Turing Test; that is, to pass itself off as human intelligence to someone who doesn’t realize they are speaking to a computer (e.g., a really sophisticated customer service chatbot).
There’s comically little reason to think that a standard Artificial Intelligence needs any sort of personhood or legal protection. It’s hard to argue that we make a thermostat’s life miserable by putting it to its purpose of clicking over at a certain temperature; arguably, it meets its telos by performing this action day in and day out.
However, the definitions become muddier when we consider an Artificial General Intelligence. Given that no such system exists in any true form, the worry may be premature. There’s a good argument that this is something we should concern ourselves with only when someone actually builds such a system; that is, a bridge better crossed when we actually arrive at it.
At the risk of anthropomorphizing, it would be interesting to ask whether such a machine even has a telos. If it’s programmed with a particular goal or end in mind, what happens when it decides that this isn’t its passion? This would be much like a born-and-bred Kansas farm boy who, in a flash of inspiration, chooses to move to New York to become a drag performer.
That’s a better outcome than the AI morphing into Skynet, but at the fundamental level it’s the same flip. No one would begrudge our farm boy his jazz-hand-and-sequins dreams, but there’s an argument to be made that the computer needs a bit of discipline instilled.
However cute the story, the problem with such attributions of personhood is precisely that they anthropomorphize. It’s vitally important to remember that any AI is a machine developed for a particular purpose, and that its ultimate goal is to be “intelligent” only in the service of that purpose.
Insofar as we attribute personhood to machines that are built to serve humanity, we unconsciously cede ground. This unfairly prepares humanity to be subservient to machines that were designed for precisely the opposite relationship.
It’s important to make this distinction in our reasoning right now, at a stage where machine intelligence has its benefits but is thoroughly, demonstrably different from general intelligence of the human sort. If we ever do reach Artificial General Intelligence, it will take us over only to the extent that we have already convinced ourselves that it should.