Computational algorithms are systematic, pre-defined sets of instructions used for many tasks. One of these tasks is optimising the results of internet queries, a function carried out by search and sorting algorithms. These algorithms, however, reflect the values of whoever codes them, whoever writes the instructions, and this is where biases that discriminate by gender, race or language arise.
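As a hypothetical illustration (not from the interview), the point can be sketched in a few lines of Python: a toy "search ranking" function whose weights are chosen by the developer. The weight values below are invented for the example; the sketch only shows that different design choices produce different result orders.

```python
# Hypothetical sketch: a tiny ranking function for search results.
# The weights are developer choices; changing them changes what
# users see first, which is how design decisions encode values.

def rank_results(results, w_relevance, w_popularity):
    """Sort results by a weighted score chosen by the developer."""
    return sorted(
        results,
        key=lambda r: w_relevance * r["relevance"] + w_popularity * r["popularity"],
        reverse=True,
    )

results = [
    {"title": "A", "relevance": 0.9, "popularity": 0.2},
    {"title": "B", "relevance": 0.4, "popularity": 0.95},
]

# Prioritising relevance puts A first; prioritising popularity puts B first.
by_relevance = rank_results(results, w_relevance=1.0, w_popularity=0.1)
by_popularity = rank_results(results, w_relevance=0.1, w_popularity=1.0)
print([r["title"] for r in by_relevance])   # ['A', 'B']
print([r["title"] for r in by_popularity])  # ['B', 'A']
```

The same mechanism applies to any scoring formula: whatever signal the developer weights most heavily dominates the ordering.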
What’s the purpose of the research on biases on social media carried out by Eurecat’s digital area?
The ultimate goal is an internet with quality information in the era of big data. We work to identify hate speech and fake news, and we propose solutions to make the internet as transparent as possible. We carry out continuous technological monitoring so that industry can apply innovations at all times. We are a bridge between universities and companies.
Do you also analyse the use of artificial intelligence?
We work for fair and transparent artificial intelligence. We identify biases in training algorithms and risks to privacy in data, and we apply solutions that minimise or eliminate them.
What research do you carry out?
We work on applied research projects, in public consortia and in others that are financed internally.
Are biases rife on social media?
We find cases of discrimination in many different areas, for example in the artificial intelligence systems of big companies like Google or Amazon. I remember a job-advertising system presented by Google on one occasion that prioritised men over women. Amazon has done the same with technology jobs. Beyond job offers, there are also plenty of everyday examples of discriminatory practices that go unnoticed by users.
For example?
Voice assistants like Alexa or Siri. Both use women's voices, and this perpetuates the service role assigned to women throughout history. The stereotype is so socially accepted that we find it absolutely normal.
What major biases have you detected?
The most numerous cases of discrimination are related to gender, although it's also true that this is the bias we test for most often and the focus of several of our projects. In some cases, gender discrimination is compounded by other kinds. I remember a case where the discrimination increased when female gender and race were combined: a biometric surveillance system designed to identify the most suspicious subjects, which in that case turned out to be black women.
What other biases have you found?
We’ve worked with Wikipedia in detecting cultural gaps, that is, geographical areas with less coverage than others, and we provided tools to reduce them with the creation of the Wikipedia Diversity Observatory.
Incorporating AI into computational learning allows biases to be created and, at the same time, located. But what will happen with generative AI? Will things get more complicated?
The great concern about integrating artificial intelligence is the privacy of the data fed to the algorithms. This is an extremely important issue, because systems receive ever more data to learn from. Big data is used to train models, and those models must be transparent and correct if biases are to be dealt with.
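A hypothetical sketch (invented for illustration, not from the interview) of why training data matters: a naive "model" that simply learns hiring rates per group from skewed historical decisions will reproduce the skew in its recommendations. The data and threshold below are made up.

```python
# Hypothetical sketch: a model trained on biased historical hiring
# data learns the bias and repeats it in its own recommendations.
from collections import defaultdict

def train(history):
    """Estimate the historical hire rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Recommend hiring when the group's historical rate beats the threshold."""
    return rates[group] > threshold

# Skewed history: group "a" was hired 80% of the time, group "b" only 20%.
history = ([("a", True)] * 8 + [("a", False)] * 2
           + [("b", True)] * 2 + [("b", False)] * 8)
rates = train(history)
print(predict(rates, "a"))  # True  -> the model favours group "a"
print(predict(rates, "b"))  # False -> the bias in the data is reproduced
```

Real models are far more complex, but the failure mode is the same: without transparency about the training data, this kind of inherited bias is hard to detect.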
What needs to be taken into account?
The new language models are very convincing, but people are not aware that they may contain errors or biases, and their widespread use spreads this discrimination through society as a whole. The European Union is currently trying to regulate this: it has approved the start of talks that will lead to the world's first law regulating the use of artificial intelligence, which is expected to be ready by the end of the year. The priority of this regulation is to ensure that artificial intelligence systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. It also wants to mandate that these systems be overseen by actual people.
Does the new law say anything about who must be involved in the design of the algorithms?
What we know for now is that the European regulations place a great deal of responsibility on developers, much more than on companies, in the development of artificial intelligence. However, the problem remains the same as today: in the design and development of artificial intelligence, only one woman is involved for every five men. Increasing diversity is more necessary than ever.
Would setting quotas in the design and development stages of artificial intelligence work?
Establishing quotas that positively discriminate in favour of women in this field is very difficult, because new technologies are constantly emerging and women are in the minority in STEM careers. The problem must be tackled earlier, at school and in the family. The statistics tell us there is as much talent among girls as among boys, and in Spain more women than men enrol in university, but only 30% of women choose STEM studies, and within this area the percentage drops further where AI and computing are concerned. We must break this cycle and create female role models for girls now starting school, explaining that the first people ever to write an algorithm and a compiler were two women, although their names never appear. Added to this is the fact that the vast majority of university professors in STEM subjects are men.
There are growing demands for ethics to be included in technological development, especially among those opposed to the excessive growth of AI.
Technological evolution cannot be stopped, and ethics cannot be left to one side. Artificial intelligence will change a lot of things, for better and for worse, but it has no reasoning behind it, and luckily it still has a long way to go before it becomes analytical. A step beyond that would be reflective artificial intelligence, which would indeed be able to take control of systems. So our goal must be to build the best machines possible, but with humans still in control.
Will machines replace people?
Never! A machine has no feelings. You can make them simulate vision or smell, but they can never experience what people perceive as passion, risk, love or compassion... We should not be afraid of technological advances, but we should monitor the misuse of technology, stay watchful, and anticipate discriminatory outcomes so as to minimise them as much as possible. Personally, I am very optimistic in this regard.
And that means…?
That I believe in humanity.