EU watchdog urges AI policymakers to protect fundamental rights
Regulator calls for more guidance over the use of AI as it is "made by people" and therefore not "infallible"
The European Union's rights watchdog has warned of the risks posed by predictive artificial intelligence (AI) used in policing, medical diagnoses and targeted adverts.
The warning came in a report produced by the Agency for Fundamental Rights (FRA), which is urging policymakers to provide more guidance on existing rules and how they can be applied to AI to ensure future laws do not harm fundamental rights.
AI is widely used by law enforcement agencies, and the technology, particularly facial recognition, frequently clashes with privacy laws and human rights concerns. The European Commission is currently mulling new legislation on the use of AI, an area over which it has so far had little authority.
The FRA's report, 'Getting the future right - Artificial intelligence and fundamental rights in the EU', calls on EU countries to make sure that AI respects all fundamental rights, covering not only privacy and data protection but also non-discrimination and access to justice. It wants a guarantee that people can challenge automated decisions, as AI is "made by people".
Governments within the bloc should also assess AI both before and during its use to reduce negative impacts, particularly where it discriminates. The report also calls for an "effective oversight system", which it suggests should be "joined-up" across member states to hold businesses and public administrations to account.
Authorities are also being urged to ensure that oversight bodies have adequate resources and skills to do their job.
"AI is not infallible, it is made by people, and humans can make mistakes," said FRA director Michael O'Flaherty. "That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI.
"We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them."