Senators propose licensing requirement for ChatGPT-level AI.

US Senators Propose New Regulations for Artificial Intelligence

In a bold move aimed at regulating the fast-growing field of artificial intelligence (AI), Senators Richard Blumenthal, a Democrat, and Josh Hawley, a Republican, have jointly proposed a framework that calls for the creation of a new regulatory body in the United States. The senators argue that such a body could oversee AI development and restrict work on language models like OpenAI’s GPT-4 to licensed companies. The bipartisan proposal is intended as a blueprint for future legislation and is already shaping ongoing discussions in Congress.

Under the proposed framework, companies seeking to develop AI applications deemed “high risk,” such as facial recognition, would need to obtain a government license. To qualify, they would have to thoroughly test their AI models for potential harm before deployment, disclose any failures or unintended consequences after release, and allow their models to be audited by an independent third party.

Another key aspect of the senators’ recommendations is the public disclosure of details regarding the training data used to create AI models. This transparency would enable better understanding and analysis of AI systems’ underlying biases or potential flaws. Moreover, the framework advocates for granting individuals harmed by AI the right to bring legal action against the responsible companies.

This legislative proposal has the potential to shape the future of AI regulation in the United States. In the coming days, Senators Blumenthal and Hawley will lead a subcommittee hearing on how businesses and governments can be held accountable when they deploy AI systems that cause harm or violate people’s rights. Notably, Microsoft president Brad Smith and Nvidia’s chief scientist William Dally are slated to testify, reflecting the importance of the issue.

Another significant event on the horizon is a series of meetings organized by Senator Chuck Schumer to deliberate on AI regulation, a task Schumer himself has called “one of the most difficult things we’ve ever undertaken.” The meetings will be attended by influential tech executives, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, as well as representatives of groups likely to be affected by AI algorithms, such as trade union presidents and researchers focused on human rights.

The senators’ legislative framework has garnered positive feedback from experts in the field. Anna Lenhart, a former AI ethics initiative leader at IBM and current PhD candidate at the University of Maryland, acknowledges the senators’ proactive approach in drafting this framework. She highlights its departure from the traditional method of relying on extensive expert forums or commissions to address AI regulation.

However, Lenhart questions whether an AI oversight body could realistically have the technical and legal expertise needed to manage applications as varied as self-driving cars and healthcare. The concept of licensing developers, as proposed in the framework, has gained traction both within the industry and in Congress: OpenAI CEO Sam Altman and Senators Lindsey Graham and Elizabeth Warren have previously suggested AI licenses. Nevertheless, skeptics, including libertarian-leaning groups and digital rights organizations, caution that such a system could impede innovation and lead to industry capture.

Uncertainties remain about the framework proposed by Senators Blumenthal and Hawley. It is not yet clear whether AI oversight would be entrusted to a new federal agency or an existing one, and the criteria for defining the “high risk” use cases that would require a license have not been specified.

Other stakeholders, such as the environmental non-profit Friends of the Earth, have expressed support for the legislative framework while asking for more detail before its underlying ideas can be fully evaluated. Friends of the Earth and other accountability organizations are calling on lawmakers to prevent energy-hungry AI projects from exacerbating climate change, and they advocate extensive documentation and public disclosure of the adverse impacts of AI systems. Importantly, they argue that industry should not be the sole arbiter of what counts as harmful, and that the energy requirements and misinformation risks of AI systems should also be taken into account.

This legislative framework represents a more stringent approach to AI regulation than the voluntary guidelines and the nonbinding AI Bill of Rights issued by the federal government. Although the White House recently struck a voluntary agreement with major AI companies and has promised forthcoming regulations, the senators’ proposal signals a desire for more comprehensive legislation. Ensuring the safe and responsible use of AI remains a pressing concern for lawmakers.

In conclusion, the joint efforts of senators Blumenthal and Hawley have brought forward a noteworthy legislative framework that aims to regulate the development and deployment of AI in the United States. Their proposal has sparked widespread discussion and will undoubtedly influence ongoing debates in Congress. While challenges and unanswered questions remain, this framework signifies an important step towards establishing robust regulations for the rapidly advancing field of artificial intelligence.

References:
– Original article: MIT Technology Review
– Image source: Freepik