Why did Khanmigo flag my conversation?

Khanmigo uses technology from OpenAI to automatically review conversations and flag messages that might contain content that is unsafe or inappropriate for our learning environment.

If one of your conversations was flagged, that means our system detected something that might go against our Khanmigo Community Guidelines.

What’s considered inappropriate?

A good rule of thumb is:

If it wouldn’t be appropriate to say out loud in a classroom, especially with elementary students present, it probably isn’t appropriate on Khan Academy either.

Khanmigo’s moderation system is designed to flag conversations that may include:

  • Profanity or inappropriate language
  • Violence, threats, or references to harm—even if meant as a joke
  • Sexual or mature content
  • Phrases that may trigger our safety filters (e.g., references to self-harm)
  • Bullying, harassment, or hate speech

Can the system make mistakes?

Yes, and we know it does sometimes. Like many AI systems, Khanmigo’s moderation tools are still evolving. They’re designed to prioritize safety, performance, and cost-effectiveness, which means they may occasionally flag content that should be allowed. 

We understand this can be frustrating, especially when you're asking a thoughtful question or trying to get help with your schoolwork. That's why we continually review flagged content and update our processes whenever we find ways to improve.

Want to learn more?

You can read the full Khanmigo Community Guidelines here. You may also find this article helpful:

What happens if my child or student's Khanmigo conversation gets flagged?

If you think your message was flagged by mistake and you have questions, you can also contact our support team.