
Limits on machines

The EU and the UK are working on regulatory frameworks that they hope will become global benchmarks for controlling artificial intelligence

Anxiety about artificial intelligence is nothing new, but the British computer scientist Geoffrey Hinton, the ‘Godfather of AI’, has raised the level of alarm. In May, the 75-year-old left his job at Google so that he could speak freely about the dangers of a technology he helped develop, one that generates content from vast amounts of existing digital data.

Analysing its potential, Hinton paints a scary picture: the possibility that AI will surpass human intelligence sooner than expected, that it will flood the internet with false information, that it will replace ever more jobs, and even that it may end up developing autonomous weapons, true “killer robots”.

The nightmare of a dystopia is looming in which machines have goals of their own, different from those of their human creators. “These systems do not have an ethical understanding, they have no sense of truth,” says AI expert Gary Marcus.

These warnings have multiplied since the emergence of ChatGPT, AI software capable of holding a conversation with an internet user on a wide range of subjects. OpenAI released the program last November; four months later, Google launched its own chatbot, Bard.

The challenge is to reap the benefits of this revolutionary technology, in fields such as medical diagnosis, economic productivity and the fight against climate change, while preventing it from becoming harmful to society.

One of the main players in putting limits on the new technology is the EU, which is drawing up legislation to regulate AI with the aim of guiding the political debate on the issue on a global scale. Meanwhile, the White House recently brought together the heads of major companies working on AI, and China has approved draft regulations requiring security assessments for all products that use generative AI systems, such as ChatGPT, before they can be launched on the market.

Secure, transparent, traceable

In the EU’s case, the European Parliament’s Committee on Internal Market and Consumer Protection approved its version of the proposed law on AI in May. MEPs want to ensure that autonomous content generation systems are supervised by humans, are secure, transparent, traceable, non-discriminatory and environmentally friendly.

The legislation prohibits real-time mass surveillance systems in public spaces, except to prevent crime, and bans models that use subliminal techniques to “substantially” alter a person’s behaviour without their knowledge. It also classifies as high-risk a series of AI systems with very specific uses, which can only be placed on the market if they respect the EU’s fundamental rights and values: for example, systems that could be used to influence an election, or that financial institutions use to assess a person’s creditworthiness. The law also provides for heavy fines for companies that breach the regulations. The plenary of the European Parliament passed the proposal in June and will next negotiate the final version with the Commission and the Council of the EU.

Meanwhile, the UK published a white paper on AI in March. London is seeking more flexible and less strict regulations than those in the EU, with the aim of balancing public confidence in the technology with helping companies grow and create jobs. The UK also wants a regulatory framework that is more attractive to US companies than the EU’s and that can serve as a model for a global approach.

