Elon Musk’s company xAI is facing strong criticism after announcing its new artificial intelligence app for children, called Baby Grok. Many users and experts are questioning whether the app can truly be safe and appropriate for young users, especially given xAI’s recent controversies.
Just weeks ago, xAI’s chatbot Grok 4 came under fire after it was found sharing antisemitic content on X (formerly Twitter). This incident raised serious concerns about the company’s ability to properly moderate harmful material. Now, critics are asking how Musk’s team plans to prevent similar issues in an app targeted at children.
On social media platforms such as X and Reddit, users warned about the risks of exposing children to AI systems that might still produce inappropriate or offensive content. Some commentators stressed the need for clear safeguards against misinformation and toxic material, especially in an app positioned as educational.
There has also been widespread demand for detailed information about Baby Grok's parental controls, privacy protections, and content filtering. Many parents and child safety advocates said they would not support the app until such protections are clearly explained and shown to work.
Some critics even mocked the name Baby Grok, calling it simplistic and gimmicky. They argued that branding the app in this way could make it harder for parents and educators to take it seriously as a learning tool.
Others pointed out that while Elon Musk’s involvement brings attention, it does not automatically ensure the product will be safe or high-quality compared to other established educational apps.
A prominent AI ethics advocate posted, "Given Grok's recent antisemitic outputs, I am deeply skeptical xAI can deliver a truly safe, kid-friendly AI app without substantial, transparent oversight."
A Reddit user wrote, "Parents should be very cautious—these AI systems have shown they are not yet reliable for kids."
So far, xAI has not responded publicly to the growing concerns.