On the surface, it seems a bit of a silly question. After all, in a practical sense, the modern incarnation of Artificial Intelligence is specifically aimed at getting the computer to do repetitive, menial, or colossal tasks that would be either boring or impractical for people to do.
In such a case, this would of course free up the average person to work on the really demanding tasks that only human beings could do. Maybe.
One could make an argument along the lines that using maps on our phones reduces our reliance on the ability to navigate without such tools. Not in the sense that the simple absence of the tool makes us “dumber,” but rather that using the tool and not developing practical skills outside the tool actually makes us less capable.
It’s easy to forget what it’s like to have to navigate direction by looking at the sun (it sets in the West) or which side of the trees the moss is growing on (the North, unless you’re in the Southern Hemisphere, in which case it’s the South). Similarly, it’s easy to wonder how people ever managed to meet up before texting. Perhaps take a moment to meditate on this: if people didn’t do so, you never would have been born.
There might be a somewhat more manipulative side to Artificial Intelligence, however. The technologist-philosopher Jaron Lanier points out a significant problem of AI recommendation engines in his book Dawn of the New Everything.
Lanier takes Netflix's recommendation engine as an example. As Lanier points out, it's easy to describe the algorithm as AI, but providing good recommendations to Netflix subscribers may not in fact be its primary use.
Rather, suggests Lanier, it might actually be more of a misdirection technique to point us in the direction of certain programs in order to keep us from seeing that Netflix doesn’t actually have as comprehensive a library as one might initially assume.
Think about it like this: you enter a store and are immediately blindfolded. A shop assistant leads you to your item, and only then are you allowed to see the thing you requested, along with perhaps a few others that happen to be in the same section. You have no real idea of how large the store is, and the assistant standing next to you certainly wouldn't give you any idea.
In fact, the assistant herself might not even know the size of the whole store but only a part. Perhaps you attempt to ask leading questions: the assistant is paid to misdirect these, or if it’s a computer, it simply won’t understand them.
Perhaps the real question is whether you'd actually walk into a store where you knew the clerk was about to blindfold you. It's a curious question whether this sort of use of AI gives us "the best options" or is making us more tractable as beings, actively willing to be blindfolded and even paying for the privilege.