FTC warns companies to use AI responsibly
AI bias could run afoul of the FTC Act
Last year, the FTC released guidance about how organizations should use artificial intelligence (AI). Since then, it has brought settlements relating to misuse of the technology. In a blog post published Monday, the Commission warned of the potential for biased outcomes from AI algorithms, which could introduce discriminatory practices that incur penalties.
"Research has highlighted how apparently 'neutral' technology can produce troubling outcomes including discrimination by race or other legally protected classes," it said. For example, it pointed to a recent study in the Journal of the American Medical Informatics Association that warned about the potential for AI to reflect and amplify existing racial bias when delivering COVID-19-related healthcare.
The Commission cited three laws AI developers should consider when creating and using their systems. Section 5 of the FTC Act prohibits unfair or deceptive practices, including the sale or use of racially biased algorithms. Anyone using a biased algorithm that causes credit discrimination based on race, religion, national origin, or sex could also violate the Equal Credit Opportunity Act, it said. Finally, those denying others benefits, including employment, housing, and insurance, based on results from a biased algorithm could run afoul of the Fair Credit Reporting Act.
Companies should be careful what data they use to train AI algorithms, it said, as any biases in the training data, such as under-representing people from certain demographics, could lead to biased outcomes. Organizations should analyze their training data and design models to account for data gaps. They should also watch for discrimination in outcomes from the algorithms they use by testing them regularly.
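The checks the FTC describes, looking for under-represented groups in training data and for skewed outcomes across groups, can be sketched with a few lines of code. The helpers and the 0.8 "four-fifths rule" threshold below are illustrative assumptions, not anything the Commission prescribes:

```python
from collections import Counter

def representation_by_group(records, group_key):
    """Share of training records belonging to each demographic group,
    to spot groups that are under-represented in the data."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def selection_rates(records, group_key, outcome_key):
    """Positive-outcome rate per group (e.g. share of loan approvals)."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest.  A ratio well
    below 1.0 flags a possible disparity; 0.8 is a commonly used
    threshold (the so-called four-fifths rule)."""
    return min(rates.values()) / max(rates.values())
```

Running such checks on both the training set and the model's live decisions, at regular intervals rather than once at launch, is one way to implement the ongoing testing the Commission recommends.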
The FTC added that it’s important to set standards for transparency in the acquisition and use of AI training data, including publishing the results of independent audits and allowing others to inspect data and source code.
A lack of transparency in how a company obtains training data could bring dire legal consequences, it warned, citing its complaint against Facebook alleging it misled consumers on its use of photos for facial recognition by default. The Commission also settled with app developer Everalbum, which it said misled users about their ability to withhold their photos from facial recognition algorithms.
The FTC also warned against overselling what AI can do. Marketing hyperbole that overstates technical capability could put a company on the wrong side of the FTC Act. "Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence," it said, adding that claims of bias-free AI would draw particular scrutiny.
"In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver."
"Hold yourself accountable – or be ready for the FTC to do it for you," it said.