Mistral CEO calls for AI cultural levy
Foreign AI firms should pay for European content, says Arthur Mensch – and the European Parliament appears inclined to agree
Major AI firms should pay a content levy in Europe, with the money going to the cultural sector, the CEO and co-founder of French AI developer Mistral has said.
Writing in the Financial Times, Arthur Mensch said that Chinese firms are playing fast and loose with copyright rules, training their AIs on vast amounts of content – including from European sources.
European developers, meanwhile, are held back by a fragmented legal environment that puts them at a competitive disadvantage, he said. The current opt-out framework, intended to let rights holders bar AI companies from training on their content, is proving unworkable in practice.
"Copyrighted works continue to spread uncontrollably online, while the legal mechanisms designed to protect them remain patchy, inconsistently applied and overly complex," he said.
Mensch's solution is a revenue-based levy on all commercial providers marketing or operating AI models in Europe – including those based outside the bloc, such as in the US and China.
The money would go to a central European fund dedicated to investing in new content creation, as well as to supporting Europe's cultural sectors.
And, he said, the scheme would benefit AI developers too, giving them legal certainty and shielding them from liability for training on materials accessible online. Importantly, it would not replace licensing agreements or the freedom to contract, he said.
"On the contrary, licensing opportunities should continue to develop and expand for usage beyond training," he said. "The fund would complement, not crowd out, direct relationships between creators and AI companies."
Mensch said he sees his proposal as a starting point, and is inviting creators, rights holders, policymakers, and fellow AI developers to get involved.
But while the proposal would funnel money into the arts, it would not remunerate individual creators directly for their work – something many creators regard as crucial.
And the EU is moving in that direction. Earlier this month, the European Parliament adopted a series of recommendations to protect copyrighted creative work, saying that EU copyright law should apply to all generative AI systems on the EU market, regardless of the place of training.
The use of copyrighted material by generative AI must be fairly remunerated, they said, with a mechanism established to collect remuneration for past use as well. A blanket licence allowing providers to train their systems in exchange for a flat-rate payment won't do, they added.
A new licensing market for copyrighted material would involve voluntary collective licensing agreements for individual sectors, and would include individual creators and small and medium-sized enterprises. Rights holders would be able to exclude their work from being used in AI training.
"We need clear rules for the use of copyright-protected content for AI training. Legal certainty would let AI developers know which content can be used and how licences can be obtained," said rapporteur Axel Voss after the vote.
"On the other hand, rights holders would be protected against unauthorized use of their content and receive remuneration. If we want to promote and develop AI in Europe while also protecting our creators, then these provisions are absolutely indispensable."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
