Technical standards bodies hope to deliver AI success with ethical development practices
The ISO, IEC, and ITU are working together to develop standards that can support the development and deployment of trustworthy AI systems
Three major international technical standardization bodies are working to introduce ethical considerations into their standards, with the release of four guiding principles.
The International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU) last week launched the Seoul Statement at an event in South Korea.
This statement is aimed at advancing the development of safe, inclusive, and effective international standards for AI. These standards, the bodies revealed, should reflect global needs, support regulatory alignment, and foster interoperability, trust and inclusion.
"It places international standards at the heart of AI governance," said Sung Hwan Cho, president of the ISO.
"We must systematically include social and human rights considerations into our standards work. We must collaborate across government, industry and civil society and academia to ensure all voices are heard."
The guiding principles for trustworthy AI
The statement is based on four core principles covering the development, deployment, and long-term maintenance of AI systems.
Standards should actively incorporate sociological dimensions as well as technical ones, for example. They should deepen the understanding of the interplay between international standards and human rights, recognizing both their importance and universality throughout the AI development lifecycle.
They should also help strengthen an inclusive, multi-stakeholder community to develop and apply international standards for the design, deployment, and governance of AI. Elsewhere, the organizations encouraged closer collaboration between public and private sector entities on AI capacity building.
"Standards are technical tools to uphold the principles we want to live by," said Seizo Onoe, director of the ITU Telecommunication Standardization Bureau.
"The vision set out by this joint statement calls for diverse expertise and global commitment to collaboration and consensus – exactly what drives our standards work and exactly the spirit needed to create the future we want."
Any regulatory framework will need to be forward-thinking and adaptable, the bodies noted, largely because of how rapidly the AI landscape is evolving.
Ethical specifications will also have to reflect practical constraints in developing countries, such as unreliable energy supply and a lack of compute power.
Research highlighted by the standards bodies indicates that the developing world houses less than 1% of global data center capacity, underlining the need for greater investment to broaden access to compute.
These nations are also struggling with a shortage of chipsets and AI components, a lack of public-private data sharing, and a severe shortage of training.
Tackling AI safety concerns
Next steps for the organizations include the drafting of standards on the storage of sensitive data.
The trio are also planning to look at the issues of election interference, deepfakes, and misinformation. Deepfakes are of particular interest, they noted, with most current detection tools failing to perform adequately.
Threat actors are already using deepfakes to dupe unsuspecting enterprise workers, with a recent study from Ironscales showing that 85% of cybersecurity and IT leaders have experienced at least one deepfake attack in the last year, marking a 10% increase on 2024 statistics.
"How is a deepfake coming about, and how do we give the technical tools to make detecting it easier?" asked Philippe Metziger, CEO and secretary general of the IEC.
"Our role is making things more transparent from a technical point of view."
The statement was created following a UN recommendation last year, with the hope that standards convergence can help reduce fragmentation and lower compliance burdens, while focusing on responsible AI development and deployment.
"Standards don't solve everything," said Metziger. "But we see ourselves as major contributors to AI governance."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
