FTC warns companies to use AI responsibly

AI bias could run afoul of the FTC Act

The Federal Trade Commission (FTC) has warned organizations in the US to use artificial intelligence responsibly, pointing to concerns over machine learning bias.

Last year, the FTC released guidance about how organizations should use artificial intelligence (AI). Since then, it has brought settlements relating to misuse of the technology. In a blog post published Monday, the Commission warned of the potential for biased outcomes from AI algorithms, which could introduce discriminatory practices that incur penalties.

"Research has highlighted how apparently 'neutral' technology can produce troubling outcomes including discrimination by race or other legally protected classes," it said. For example, it pointed to a recent study in the Journal of the American Medical Informatics Association that warned about the potential for AI to reflect and amplify existing racial bias when delivering COVID-19-related healthcare.

The Commission cited three laws AI developers should consider when creating and using their systems. Section 5 of the FTC Act prohibits unfair or deceptive practices, including the sale or use of racially biased algorithms. Anyone using a biased algorithm that causes credit discrimination based on race, religion, national origin, or sex could also violate the Equal Credit Opportunity Act, it said. Finally, those denying others benefits, including employment, housing, and insurance, using results from a biased algorithm could also run afoul of the Fair Credit Reporting Act.

Companies should be careful what data they use to train AI algorithms, it said, as any biases in the training data, such as under-representing people from certain demographics, could lead to biased outcomes. Organizations should analyze their training data and design models to account for data gaps. They should also watch for discrimination in outcomes from the algorithms they use by testing them regularly.
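To make that kind of regular outcome testing concrete, here is a minimal sketch of one common check: comparing positive-outcome rates across demographic groups. It assumes a pandas DataFrame with an "approved" decision column and a protected-attribute "group" column; the column names and the 0.8 ratio threshold are illustrative assumptions, not anything the FTC prescribes.

```python
import pandas as pd


def disparate_impact_report(df: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved",
                            threshold: float = 0.8) -> pd.DataFrame:
    """Compare positive-outcome rates across groups and flag large gaps."""
    # Positive-outcome (e.g. approval) rate per group.
    rates = df.groupby(group_col)[outcome_col].mean()
    # Use the most-favored group's rate as the baseline.
    baseline = rates.max()
    report = pd.DataFrame({
        "positive_rate": rates,
        "ratio_vs_best": rates / baseline,
    })
    # Flag groups whose rate falls below the chosen ratio threshold.
    report["flagged"] = report["ratio_vs_best"] < threshold
    return report


if __name__ == "__main__":
    # Toy data: group B's approval rate is well below group A's.
    sample = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0, 0],
    })
    print(disparate_impact_report(sample))
```

Run on held-out decisions at a regular cadence, a report like this can surface the kind of disparate outcomes the FTC describes before they become an enforcement matter, though what counts as an acceptable gap is a legal and policy question, not a purely statistical one.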

The FTC added that it’s important to set standards for transparency in the acquisition and use of AI training data, including publishing the results of independent audits and allowing others to inspect data and source code.

A lack of transparency in how a company obtains training data could bring dire legal consequences, it warned, citing its complaint against Facebook, which alleged the company misled consumers about its default use of their photos for facial recognition. The Commission also settled with app developer Everalbum, which it said misled users about their ability to withhold their photos from facial recognition algorithms.

The FTC also warned against overselling what AI can do. Marketing hyperbole that overplays technical capability could put a company on the wrong side of the FTC Act. "Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence," it said, adding that claims of bias-free AI should come under particular scrutiny.

"In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver."

"Hold yourself accountable – or be ready for the FTC to do it for you," it said.
