“There are a number of open questions with all regulation of AI, and in particular with regard to Frontier models. Over the next year or more, there will be additional clarity (I predict) around what capabilities and quantification measures constitute a Frontier model, how to measure those metrics, and what kind of controls should or need to be imposed.”
One of the ways we can tell that technological developments in AI are moving fast—really fast—is the current dialogue around AI “Frontier” models. A Frontier model is a “highly capable model” that “could possess capabilities sufficient to pose severe risks to public safety” (Anderljung et al., “Frontier AI Regulation: Managing Emerging Risks to Public Safety,” November 2023).
The White House Executive Order (“E.O.”) on AI, issued Oct. 30, 2023, refers to these models as “dual-use foundation models.” The U.K. Government Office for Science has published a special report on the “Future Risks of Frontier AI,” and the AI Seoul Summit, held in May 2024, was followed by an “International Scientific Report on the Safety of Advanced AI.”