As every journalist knows, a dog biting a human is not news; a human biting a dog is. The reversal alarms us and evokes the extraordinary, the unusual, sometimes all the more when it carries a certain morbid tint, of the kind often labeled gruesome. Few issues lend themselves to this more than the replacement of human labor, or even, as some put it, of humans themselves, by artificial intelligence. Media around the world flood us with daily, incessant headlines: how artificial intelligence (AI) will change computing, culture, and the course of history; how it will transform our lives and labor markets; how it threatens employment; why its development must be slowed, or even suspended, for fear of creating a new omnipotent and uncontrollable power. We are, they tell us, recklessly playing the sorcerer's apprentice, suicidal Prometheans.
What Mark Twain said about the reports of his own death applies to much of what is being said about AI: it is greatly exaggerated.
First, we must distinguish between general or strong artificial intelligence, which would be capable of intelligent action in general and would match human intelligence across the board, and narrow or weak artificial intelligence, which operates within a limited, previously defined algorithmic domain.
General artificial intelligence neither exists—beyond imagination or desire—nor is it expected anytime soon, as we lack scientific foundations for it and do not even know if it will ever be possible (the idea of “singularity,” where machines become more intelligent than humans, is, today, more literature—or even poetry—than science).
What is true, however, is that narrow or weak artificial intelligence is advancing extraordinarily. Driving it are the deep and accelerating growth of computational capacity and of digitalization; machine learning algorithms, which build models from sample data (known as training data) in order to make predictions or decisions without being explicitly programmed for each task; and access to exponentially growing giant databases (big data) in the cloud.
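To make the idea concrete, here is a minimal sketch, in Python, of what "building a model from sample data" means: a handful of invented (x, y) training examples, a straight line fitted to them by least squares, and a prediction for an input the program was never explicitly coded to handle. The data and the linear model are illustrative assumptions, not anything drawn from a real AI system.

```python
# A minimal sketch of "building a model from training data": fit a line
# y = w*x + b to four invented (x, y) examples by least squares, then use
# the fitted model to predict for inputs it was never programmed to handle.

training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # invented

n = len(training_data)
mean_x = sum(x for x, _ in training_data) / n
mean_y = sum(y for _, y in training_data) / n

# Closed-form least-squares estimates of slope and intercept.
w = sum((x - mean_x) * (y - mean_y) for x, y in training_data) / sum(
    (x - mean_x) ** 2 for x, _ in training_data
)
b = mean_y - w * mean_x

def predict(x: float) -> float:
    """Predict y for an x the program has never seen."""
    return w * x + b

print(predict(5.0))  # ~9.85, extrapolated from the training examples
```

The point of the sketch is that the program's behavior, its predictions, comes from the data rather than from rules written by hand; scaled up by many orders of magnitude, this is the kind of learning system the article is describing.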
One of the clearest examples is the now-famous ChatGPT, a chatbot (that is, a computer program capable of holding a conversation with an internet user on a given topic) built on so-called large language models: algorithms that learn statistical associations among billions of words and phrases in order to perform tasks such as summarizing, translating, answering questions, and classifying texts. Impressive, certainly; but it remains a recombination, organization, modification, or processing of what already exists. It lacks true creativity; it has no model of the world (of space and time); it lacks common sense (that is, the ability to operate efficiently in a complex environment); it cannot conduct thought experiments (so important for science); it is incapable of sophisticated reasoning about abstract ideas; and it cannot imagine the outcome of a specific process for which we lack sufficient examples (as in many legal disputes or social and political conflicts).
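A toy illustration of what "learning statistical associations between words" means, reduced to its simplest possible form: a bigram model that counts, in a tiny invented corpus, which word follows which, and then predicts the most frequent continuation. Real large language models are neural networks trained on billions of examples, but the core task, predicting the next token from observed statistics, is the same.

```python
# A toy "language model": count which word follows which in a tiny invented
# corpus, then predict the statistically most frequent continuation.

from collections import Counter, defaultdict

corpus = "the dog bites the man . the man bites the dog .".split()

# Record, for every word, how often each other word follows it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("bites"))  # 'the' (it always followed "bites" here)
print(most_likely_next("the"))    # 'dog' ('dog' and 'man' tie; first seen wins)
```

Everything the model "knows" here was counted from text that already exists, which is precisely the article's point: recombination and processing of the existing, not creation from nothing.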
However, it is undeniable that the narrow artificial intelligence that already exists, and that is developing at great speed, will affect employment. This, by the way, is nothing new in the historical relationship between technology and employment. As James Manyika, Google's Senior Vice President of Technology and Society, recently summarized, three things will happen simultaneously as a result of AI development: jobs will be created, jobs will be lost, and jobs will change. MIT has proposed, in a recent publication on AI and the future of employment, what I consider a useful strategy for addressing the issue: start from the tasks that make up each specific job and ask which of them can be done better by computers and which by people. Adopting this approach means thinking less in terms of people OR computers and more in terms of people AND computers.
Artificial intelligence is transforming the way we work, driving productivity and offering an opportunity to improve our working lives, even though it may also bring risks of inequality and discrimination, security and privacy concerns, and a blurring of the line between our personal and working lives, among others.
It is time for intelligent regulation, as humanity has always eventually provided, often with some delay, since technology usually precedes its regulation, whenever new ways of engaging technically with the world, of creating and producing with greater capacity and efficiency, have emerged. We should regulate with prudence, but also with optimism: technology, for all its risks and drawbacks, contributes significantly to our well-being and is one of the main drivers of economic growth, if not the main one. Artificial intelligence is no exception.