Meta is planning to use AI to handle most of its product risk assessments, potentially automating evaluations for up to 90% of updates made to apps like Instagram and WhatsApp, according to internal documents seen by NPR.
Since a 2012 FTC agreement, Meta has been required to conduct privacy reviews of its products. Until now, human evaluators have led that process. The upcoming change would mark a significant shift, with AI systems making “instant decisions” on most updates after teams fill out a questionnaire about their work.
Meta says this new system will speed up product updates while still enforcing privacy safeguards. The AI will flag any associated risks and outline the standards each feature must meet before launch.
However, critics warn the shift could increase harm, especially for changes whose consequences are complex or hard to predict. A former Meta executive told NPR that the system may miss subtle or emerging risks, warning that “negative externalities… are less likely to be prevented.”
In response, Meta said it has invested over $8 billion in its privacy program and continues to balance innovation with regulatory compliance. The company emphasized that while AI brings consistency to low-risk cases, human experts will still oversee higher-risk or novel features.
“We enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” a Meta spokesperson said.