EU watchdog urges AI policymakers to protect fundamental rights

Regulator calls for more guidance on the use of AI as it is "made by people" and therefore not "infallible"


The European Union's rights watchdog has warned of the risks posed by predictive artificial intelligence (AI) used in policing, medical diagnosis and targeted advertising.

The warning came in a report produced by the Agency for Fundamental Rights (FRA), which is urging policymakers to provide more guidance on how existing rules apply to AI and to ensure that future laws do not harm fundamental rights.

AI is widely used by law enforcement agencies and often features in cases where the technology, particularly facial recognition, clashes with privacy laws and human rights. The European Commission is currently mulling new legislation on the use of AI, but so far it has had little authority over the technology.

The FRA's report, 'Getting the future right - Artificial intelligence and fundamental rights in the EU', calls on EU countries to make sure that AI respects all fundamental rights, not just privacy and data protection but also areas where the technology can discriminate or impede justice. It also wants a guarantee that people can challenge automated decisions, as AI is "made by people".

Governments within the bloc should also assess AI both before and during its use to reduce negative impacts, particularly where it discriminates. The report also calls for an "effective oversight system", which it suggests should be "joined-up" across member states to hold businesses and public administrations to account.


Authorities are also being urged to ensure that oversight bodies have adequate resources and skills to do their job.

"AI is not infallible, it is made by people, and humans can make mistakes," said FRA director Michael O'Flaherty. "That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI. 

"We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them."

