Concerns raised over lack of open source representation on Homeland Security AI safety board


The US Department of Homeland Security (DHS) has named the members of its new security advisory board that will advise the government on the safety of artificial intelligence (AI) systems.

The 22-member board, made up largely of business leaders from the nation’s largest technology companies, will develop a series of recommendations for critical national infrastructure (CNI) organizations on AI implementation.

The board’s guidance will help CNI organizations “prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety.”

Speaking during a briefing call naming the board’s members, Alejandro Mayorkas, Secretary of Homeland Security, said its goal is to deliver practical solutions for implementing AI in everyday life.

Among the board’s members are senior business leaders from leading US technology companies, including the CEOs of Microsoft, AWS, IBM, OpenAI, Anthropic, Alphabet, Cisco, Adobe, and AMD.

Mayorkas said “it was very important to bring key developers of this extraordinarily powerful tool” to the board, noting their experience and the significant power they exercise in shaping the future development of AI systems.

Other notable additions included Delta Air Lines CEO Ed Bastian, Occidental Petroleum CEO Vicki Hollub, and Northrop Grumman CEO Kathy Warden, as well as a number of government officials.

Lack of open source representation might detract from board’s collaborative approach

In its 2024 threat assessment, the DHS warned that AI-assisted tools will facilitate more efficient, large-scale, and evasive cyber attacks on CNI organizations such as pipelines, railways, and hospitals.

It also claimed other nations are rapidly developing AI technologies that could be used to undermine US cyber defenses, meaning businesses and government organizations need to collaborate and share knowledge on how to improve their security posture.

Commenting on the announcement, Joseph Thacker, principal AI engineer and security researcher at AppOmni, distilled what he believes are the two most important effects the board will have: removing some of the mystery around AI tools and fostering better collaboration between developers and stakeholders.

“I believe the Board will have the biggest impact in two key areas: First, it will provide really great information and education about how AI systems function and how they're improving. Second, by creating a forum for information sharing between DHS, the critical infrastructure community, and AI leaders, the Board could be a great place of collaboration and knowledge exchange,” he explained.

“Essentially it could enable a more coordinated approach to addressing AI-related risks. In addition to its stated objectives, the AI Safety and Security Board should focus on building a practical implementation standard for how companies should approach and handle AI security.”

Thacker did raise one concern about the board, however, centering on which stakeholders will get a say.

Thacker expressed concern about the lack of representation from the open source AI community, noting the board is made up almost exclusively of leaders from private companies, and closed source AI developers in particular.

With many developers moving toward open source AI frameworks over their proprietary counterparts, a large swathe of AI tools will be built using open source models.

Thacker argued that competition between closed source and open source AI developers could influence the board’s decision-making, with members prioritizing the interests of their own solutions.

“One major concern I have is that there is a large conflict of interest by bringing in the companies that are developing the closed source AI models. They are incentivized to recommend against open source models for ‘safety reasons’ when it would massively help their business models and positively affect their bottom line.”

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which has led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.