According to Reuters, the three governments have produced a joint paper confirming this, which, once published, is expected to accelerate negotiations at the European level.
Details of the paper have yet to be announced, but according to Reuters, which has seen the document, France, Germany and Italy support "mandatory self-regulation through codes of conduct" for AI foundation models but oppose "un-tested norms".
The joint paper goes on to state: "Together we underline that the AI Act regulates the application of AI and not the technology as such. The inherent risks lie in the application of AI systems rather than in the technology itself."
In addition, the paper states that developers of AI foundation models would have to define model cards, which provide details about the construction of the machine learning model.
"The model cards shall include the relevant information to understand the functioning of the model, its capabilities and its limits and will be based on best practices within the developers community," the paper states.
"An AI governance body could help to develop guidelines and could check the application of model cards."
Though the paper does not propose any sanctions, a system of sanctions could be created should violations of the code of conduct arise.
Germany's Digital Affairs Minister Volker Wissing told Reuters he was pleased an agreement had been reached with France and Italy, pointing out that it aims to limit regulation to the use of AI.
"We need to regulate the applications and not the technology if we want to play in the top AI league worldwide.”
Germany's State Secretary for Economic Affairs Franziska Brantner added: "We have developed a proposal that can ensure a balance between both objectives in a technological and legal terrain that has not yet been defined."
Commenting on the news, Greg Hanson, SVP at Informatica, said: "AI's powerhouse is the data that fuels it. The move for self-regulation by France, Germany and Italy recognises the transformative potential of AI and the associated risks. But it's important that EU policymakers pay attention to ensuring the accuracy of the data that feeds the technology, alongside regulating the AI systems themselves.
"A binding voluntary commitment for all players, large and small, will help protect the integrity of AI. Yet most companies are still learning what data the AI algorithms need. Ultimately, AI needs the right metadata to be effective. This means there needs to be unity among regulators and policymakers surrounding the importance of data accuracy, clarity and governance. The onus needs to be on businesses to bring discipline and resilience to AI by ensuring traceability, governance and quality are baked in," adds Hanson.
The news comes after the European Parliament approved the AI Act in June of this year, making the EU one of the first regions to implement AI-specific legislation at this level.