An open letter from GitHub, Hugging Face, Creative Commons and other tech firms is calling on the European Union to ease upcoming rules for open-source artificial intelligence models.
The letter urges policymakers to review some of the provisions of the EU's Artificial Intelligence Act, claiming that regulating upstream open-source projects as if they were commercial products or deployed AI systems would hinder open-source AI development. "This would be incompatible with open source development practices and counter to the needs of individual developers and non-profit research organizations," noted GitHub in a blog post.
In particular, the group provided five suggestions to ensure that the AI Act works for open-source models, including defining AI components clearly, clarifying that collaborative development on open-source models does not subject developers to the bill's requirements, ensuring exceptions for researchers by permitting limited testing in real-world conditions, and setting proportional requirements for "foundation models."
Open letter calls on EU regulators to support open source in the AI Act. Source: GitHub

As the term implies, open-source software is software that can be inspected, modified and enhanced by anyone because its source code is publicly accessible. In the field of artificial intelligence, open-source software helps train and deploy models.
The European Parliament passed the Act in June by a large majority, with 499 votes for, 28 against and 93 abstentions. The Act will become law once the Parliament and the EU Council, which represents the 27 member states, agree on a common version of the text introduced in 2021. The next step involves individual negotiations with EU members to smooth out the details.
According to the open letter, the Act sets a global precedent in regulating AI to address its risks while encouraging innovation. "The regulation has an important opportunity to further this goal through increased transparency and collaboration among diverse stakeholders," reads the open letter, adding that "AI requires regulation that can mitigate risks by providing sufficient standards and oversight, [...], and establishing clear liability and recourse for harms."
Magazine: ‘Moral responsibility’ — Can blockchain really improve trust in AI?