Artificial intelligence is already part of our everyday experience. It will bring benefits, but will also pose major ethical challenges.
It has also been shown that AI can influence election results. “Humans do well what computers find very difficult, like empathising.”
According to The Millennium Project, a global think tank focused on the future that includes organisations, institutions and researchers from around the world, the list of new technologies that will change life in the next decade is growing, and includes such things as quantum computing, synthetic biology, nanotechnology, drones, virtual and augmented reality, and 3D and 4D printing. However, for the moment, top of the list are artificial intelligence (AI) and robots.
Artificial intelligence is already part of our everyday experience. It is not only in Siri on Apple phones; it also decides which messages our email sends to the junk folder, or learns how we play a video game to make a match more interesting to us. Yet, it is still simple, and nowhere near the artificial entities that come to dominate humanity in our imaginations, such as the Skynet of the Terminator films. But what’s the limit when the capacity to do calculations is combined with a capacity to learn?
DeepMind is a British AI company that presented AlphaZero in 2017, a chess-playing computer program that became the best player in the world in just four hours while knowing only the rules of the game. Simply by playing against itself, it quickly came to understand the game better than any human mind can. And it did so creatively and intuitively, taking risks and leaving the best players in the world astounded.
The use we make of it
More recently, the world’s media reported an experiment carried out by Facebook that involved connecting two AI programs so that they would learn from each other. It was reported that the two bots developed their own language and the researchers were forced to pull the plug. Facebook denied it, explaining that the programming had not specified that the two bots had to communicate in intelligible English, which meant the devices resorted to using abbreviations that the scientists did not understand. Yet, the idea of two robots talking to each other in a language only they understand is concerning. Perhaps the most worrying thing of all, however, is not that computers might develop self-awareness (which for the moment is still science fiction), but the use that humans might make of their capacities.
“These tools will help us solve problems in both the personal and professional spheres. Their impact will be high because they increasingly have more data to understand processes in all areas: personal, mobility, consumption, research, innovation, management…,” says Raúl Benítez, professor in the Automatic Control department at the Polytechnic University of Catalonia and researcher at the Biomedical Engineering Research Centre (CREB-UPC). This will bring benefits, but also pose major ethical challenges. For example, the United Nations has warned that if the artificial intelligence of the future learns from the environment that created it – in which there were relatively few women – there’s a good chance it will be sexist. The problem of bias is a serious one: an algorithm known as Compas was used in courts in the United States to predict the chance of a convicted person reoffending. However, it made terrible mistakes and, to top it all, was racially biased, attributing more false predictions of reoffending to black defendants than to white ones – and this despite the fact that it did not include racial data, only data about the general social environment.
It has also been shown that AI can influence election results by using social media to target undecided voters, while, using data taken from the Internet, it makes facial recognition possible with 97% accuracy, and makes it easier to sell products and services by analysing our digital footprint and discovering our preferences. All of this raises doubts: How can it be controlled? Will it become a business? Will it make the rich richer and the poor poorer? What will be the risks? “Society will have to decide how to develop the technology and how to make use of it. My view is that the users, companies and public sector have to coordinate to guarantee the ethical and sustainable use of technology. It would be a mistake to make decisions purely motivated by economics,” says Benítez.
Machines and humans
Another question is how AI and robotics will affect the future job market. Will we use the technology to work less or create more work? “Neither one nor the other,” says Benítez. “My prediction is that it will allow us to devote our efforts to those things in which the natural characteristics of human beings bring more added value. Humans do well what computers find very difficult, such as empathising with other people, explaining things to other humans, making innovative proposals or divergent thinking. In general, humans are much better when it comes to making difficult decisions in complicated situations and with very little data.”
Against this apocalyptic vision, there are voices in science insisting that AI is not dangerous. One pioneer in this area is the Canadian researcher Yoshua Bengio, winner of the Turing Award (the Nobel Prize of computer science), who is clear when asked: “We are a long way from superintelligent AI systems, and there could even be basic obstacles to going beyond human intelligence.”
It’s true that computer programs do not have emotions, let alone awareness. We may be far from Skynet, but the innovations do not stop coming. For example, an AI program already exists that can compose song lyrics from a single word, without ever repeating itself.
As Benítez explains, in the future AI systems will be creative, identifying patterns in the work of different artists and making new creations that combine diverse elements. “A little bit like what humans do, whereby we learn techniques and integrate existing ideas to make new creations. Yet, for the moment there is little need to worry, because humans have something that machines are a long way from having: a childhood, permanent contact with other humans... We’re constantly exposed to various views and ways of doing things throughout our lives. What’s more, our brains are always open and taking in information from different sources, which they integrate with emotions, dreams, hormonal systems, and so on. This richness is difficult to reproduce with a machine: for the moment, machines have no childhood, nor do they experience suffering or sadness, nor do they go to school, or have family and friends,” he continues.
Almost half of all jobs will be robotised in the next few decades, and that means that the owners of robots will have to pay their social security in the future. “As I understand it, robots that do the work that is no longer done by a human will have to pay if we want society to maintain its current levels of social welfare. The aim will be to find an economic model in which humans devote themselves to tasks that are suitable for them and avoid doing repetitive tasks and those that bring no added value,” says Benítez.
The fear of losing one’s job to robots is real, as is the possibility that there will be robots that kill in the future. One of the current fears in the academic world – and something the United Nations has warned about – is scientific advances being converted into more effective weapons. “The biggest threat is what is being researched in secret,” Yoshua Bengio has also said recently, although he also insisted that the idea of cyborgs is still a long way off. However much technological advances allow for improving people’s physical – and soon no doubt also mental – condition, it is still too soon to talk about superhumans.
“What we’ll see are humans helped by machines in different tasks, including some that are not necessarily repetitive but linked to decision making. We already use our mobiles to make decisions, such as working out when we will arrive somewhere or choosing a restaurant. What will come will no doubt surprise us, in the same way that our grandparents might now be surprised that we can know what the weather is like in a specific city merely by looking at our mobiles,” says Benítez.
Commodification of science
Companies increasingly have, and will have, more interest in developing technological research projects with exclusively commercial aims. “Society,” says Benítez, “has to promote and fund not-for-profit research in order to find solutions to the challenges that we are faced with in such areas as the environment, energy and health. That said, it would be a mistake to restrict research to those areas alone, as history shows us that the solutions to major problems often come from relatively unrelated areas, such as mathematics, chemistry or geology.”
We will just have to wait to see what discoveries await us in the future. Years ago, superconductivity was touted as the solution for the world’s energy problems, then nanotechnology promised to revolutionise medicine, while now it seems as if it is AI that will change everything.
“It’s not certain; we still need lots of people doing basic and applied research in many different areas to be able to face the challenges of the future. We have to remember that what had the biggest impact on health in the past centuries was the discovery of microbes and vaccines, and that did not require any more technology than a microscope,” Benítez concludes.
dossier Science & technology