AI: Human progress or just another business?
Is artificial intelligence really an enabler that will free us from menial tasks and allow us to focus on the strategic?
I had a chat with some people recently about the future of jobs and workers. I was astonished by the fatalistic attitude behind statements such as ‘people won’t need to do that any more; now there are algorithms for making decisions, robots that can do what people used to do…’, and surprised at how readily they assumed the technology is flawless, ignoring the glitches and issues that come with it. Then I came across an article about RoboArt that drove me crazy. Fuming and upset, I paced back and forth in my room, as an art lover would, and bothered my friends with my concerns. What’s the point? In a world with finite resources and much bigger issues, why do we need to invest money in AI for this? Is this the priority?
Let’s set aside some amazing tech applications (such as those in the health and care sector) to focus on two aspects:
- We often hear that AI is an enabler, that it will free us from pesky tasks so we can focus on more strategic issues. Are we sure about this? Our brains can’t cope with complexity and strategy if we never exercise them because AI does most things for us.
- Secondly, we are handing the nice jobs to robots too.
On strategic and higher-level tasks: we know that not all of us are into strategy, or good at it. And our higher cognitive functions and social skills can’t develop in a vacuum where AI does everything for us – it would be like expecting to win a marathon without daily training. Of course, we weren’t perfect before the technology arrived either. But that is precisely why we shouldn’t keep giving up chances to improve ourselves. Despite the enthusiastic narrative, tech has harmful effects that our lack of moderation only amplifies: many of us use digital devices for over eight hours a day. Among the many drawbacks, our sense of ownership and agency is under threat and our attention span gets shorter and shorter. And don’t forget that attention is our gateway to thinking.
Regarding the allocation of ‘nice’ tasks to AI, let’s keep in mind that we are a diverse bunch of people in this world. Vocational models such as Holland’s theory identify, for instance, people with a realistic attitude: they value practical things they can see and touch, and they are unlikely to be happier if their job becomes virtual and consists only of monitoring a machine. Many car mechanics are happy working as car mechanics. The model is a simplification, of course, but it still offers another lens through which to look at tech progress and wellbeing. Moreover, AI is being improved to master the very higher-level tasks that were promised to us humans when we were told that tech would only take over boring, routine processing work. An example? McCann Erickson, an advertising agency network, appointed the world’s first AI creative director in Japan.
With all these thoughts and questions (and more) spinning in my head, another question suddenly became my headlight: what is human progress? After all, many people invoke progress and innovation to justify the AI trend. Google told me that, in intellectual history,
‘the Idea of Progress is the idea that advances in technology, science, and social organization can produce an improvement in the human condition.’
Yes, an automated search engine reminded me how inhuman many investment choices are today, and how much we over-engineer simple things for the sake of nothing.
The point is that, in some instances, we are simply improving AI for its own sake. But AI should be one of our tools, not our final objective.
This happens because, behind the curtain, there are just people. When we are short-sighted and profit-oriented, the idea that a machine can replace dozens of workers smells of (money) progress to (business) people.
Fortunately, diversity and contrasting interests can save us. For instance, Elon Musk (yes, the man behind electric cars, hyperloops and life on Mars, among many other ventures) donated $10M to research projects aimed at ‘keeping AI beneficial’ to humanity. The Future of Humanity Institute at the University of Oxford and the Centre for the Study of Existential Risk at the University of Cambridge are leading in this area. Moreover, recent trends tell us a lot about our need to go back to basics: our future is not just about robots… but that is content for another blog!
My final question is this: why don’t we focus on human improvements? They might require tech. But they might not!