![Meta’s AI Framework Sets Safety Limits on ‘High-Risk’ Models](https://theaiwired.com/wp-content/uploads/2025/02/Metas-AI-Framework-Sets-Safety-Limits-on-‘High-Risk-Models.avif)
Meta’s AI Framework Sets Safety Limits on ‘High-Risk’ Models
Meta’s Frontier AI Framework classifies AI systems as ‘high-risk’ or ‘critical-risk’ in order to restrict the development and release of potentially harmful models as concerns about AI safety grow.
Meta has published a new set of rules governing when it might limit the development or release of its AI systems for safety reasons. The Frontier AI Framework divides AI models into two categories: ‘high-risk’ and ‘critical-risk.’
The ‘critical-risk’ category covers models that could help carry out major cyber or biological attacks; Meta says it will halt development of such systems until safety measures are in place. High-risk systems, which could make such attacks easier to carry out, will be restricted rather than released.
Meta’s evaluation process combines internal testing with input from researchers both inside and outside the company, because it believes current evaluation methods are not yet robust enough to produce definitive risk assessments on their own.
AI Development & Risks
The company has generally taken an open approach to AI development, but it acknowledges that some models could be too dangerous to release publicly.
By publishing the framework, Meta signals its commitment to responsible AI development and distinguishes its approach from companies with fewer safeguards in place. The policy arrives as concerns about the misuse of AI grow, particularly around openly available models.