Greater transparency needed in AI industry to avoid future regulatory penalties
Understanding of AI from executive to user levels could save firms from costly disputes
Firms using AI models need to commit to transparency as a matter of priority, informing staff and users about how their models work, or risk facing issues in the future, according to industry experts.
Failing to share the necessary insight into the inner workings of models now could lead to regulatory penalties in the future.
The advice comes as the EU and UK have moved forward with draft bills that would regulate AI use, and compel companies to maintain a degree of transparency so that users can utilise AI models responsibly.
Firms that continue to conceal how their models work could invite fines and scrutiny, as well as reputational harm in a landscape that will increasingly demand oversight on certain high-risk AI models.
Experts speaking at PrivSec London on Tuesday also urged companies to foster an understanding of AI at the board level, to enable top-down governance and accountability of AI use.
Insight into how data is sourced and processed for AI models can also help C-suite executives to better negotiate third-party contracts, which could include clauses that allow partners to use a firm’s data to enhance their models.
This is essential for supply chain accountability, as firms could be implicated in privacy or data protection disputes over a model built using their data.
The experts stressed the importance of giving employees other than data scientists, as well as users, access to information that allows them to perform a data protection impact assessment (DPIA).
“In GDPR we have controller and processor, whereas in the EU AI Act you’ve got loads of different actors like developer, distributor, user, etc,” said Pratiksha Karnawat, DPO and information security officer at First Abu Dhabi Bank.
“And the onus, the responsibility of conducting the DPIA has been put on the user of that AI system. And how are they expected to do that DPIA, because if you don't even know what the system does or what it's capable of doing, how do you do a DPIA?”
What is the state of AI legislation?
The EU’s AI Act was proposed in April 2021, laying out strict regulations against the misuse of AI models. It could be implemented as soon as the end of 2023 or the start of 2024.
Companies that do not comply with the obligations it sets out risk fines of up to 4% of total worldwide annual turnover, and up to 2% if they supply “incorrect, incomplete or misleading information”.
High-risk AI models include those with fundamental rights implications, such as live facial recognition systems, and those that influence hiring decisions and could face allegations of bias, such as the claims made against Workday’s hiring systems.
“I think one of the major things we’ve done is to hold our data scientists accountable, make sure they explain how the model works to the management and to their peers, and set out those monitoring governance rules to make sure we don’t have bias inside the model,” said Kobi Nissan, CPO and co-founder at data management firm Mine.
Mine has used static data sets for its training data, even though dynamic models could produce better results more easily, because “it’s the right thing to do”.
Draft AI legislation in the UK laid out six main principles including transparency around AI, with developers required to proactively or retrospectively detail the nature of an AI, the data it uses, how it processes this data, and a clear chain of accountability.
Its stated goal is to provide granularity that does not overly impede the development of AI, in light of the technology’s considerable potential.
The UK’s approach was identified as “more innovative” at the talk due to its decentralised nature, but as with GDPR, European regulation could set the tone for the sector.
This is especially true for fields such as generative AI, which is being developed and rolled out by large multinational firms at present and will have to comply with EU law in order to tap into the region’s lucrative market.
“I don’t think companies should be afraid of AI,” said Debbie Reynolds, CEO and chief data privacy officer at Debbie Reynolds Consulting LLC.
“I'm not saying don't use it, definitely use it, but kick the tyres, look and see what's inside, have collaborative conversations about what you're trying to achieve with the AI. Make sure that you can explain every step in what’s happening.”
Reynolds responded to claims that Microsoft has not been able to explain recent aggressive outputs of its Bing chatbot, such as those documented in a New York Times report.
“That’s not acceptable. You have to know what is happening, why it is happening, and if it’s doing weird things you have to go back to the drawing board because you don’t want to harm users and you want to get a good result.”