What is Khan Academy’s approach to responsible AI development?

As a nonprofit educational organization, we see it as our responsibility to explore what AI could mean for the future of education. We believe that AI has the potential to transform learning in a positive way, but we are also keenly aware of the risks. For that reason, we’ve developed the following guidelines for our AI development. 

We believe these guidelines will help us responsibly adapt AI for an educational setting. We want to ensure that our work always puts the needs of students and teachers first and that the benefits of AI are shared equally across society. As we learn more about AI, these guidelines may evolve.

  1. We educate people about the risks and we are transparent about known issues. 

We are in a testing period and have invited a limited number of people to try our AI-powered learning guide. Participants who opt in to use our experimental AI tools receive clear communication about the risks and limitations of AI before they are granted access, and they must read and accept both the known risks and the potential for unknown ones. For example, the AI can be wrong, can make errors in math, and may generate inappropriate content. We provide an easy way for participants to report any issues they encounter.

More broadly, we are launching a course for the general public entitled “AI for Education.” In our course, users will learn:

  • What large language models are 
  • How large language models apply to education
  • What AI is good at 
  • What AI is not good at 
  • Questions we should all be asking about AI
  2. We learn from the best practices of leading organizations to evaluate and mitigate risks.

We’ve studied and adapted frameworks from the National Institute of Standards and Technology (NIST) and the Institute for Ethical AI in Education to evaluate and mitigate AI risks specific to Khan Academy.

AI is not always accurate and is not completely safe. We acknowledge that it is not possible to eliminate all risk at this time. 

Therefore, we work diligently to identify risks and put mitigation measures in place. We mitigate risk by using technical approaches such as: 

  • Fine-tuning the AI to help increase accuracy
  • Prompt engineering to guide and narrow the focus of the AI, which lets us tailor it for a learning setting (see the sketch after this list)
  • Monitoring and moderating participant interactions so that we can respond proactively to inappropriate content and apply appropriate community controls (such as removing access)
  • “Red teaming”: deliberately trying to “break” the AI or find flaws in it, in order to uncover potential vulnerabilities
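
To illustrate the prompt-engineering idea above, here is a minimal sketch in Python. The system prompt wording and the build_messages helper are hypothetical; they stand in for whatever instructions and model API a real tutoring product would use.

```python
# A minimal, hypothetical sketch of prompt engineering for a tutoring
# setting. The prompt and helper are illustrative only, not Khan
# Academy's actual prompts or code.

TUTOR_SYSTEM_PROMPT = """\
You are a patient tutor helping a student learn.
- Guide the student with questions and hints; do not simply give answers.
- Stay on educational topics; politely decline unrelated requests.
- If you are unsure of a fact or calculation, say so.
"""

def build_messages(student_question: str) -> list[dict]:
    """Wrap a student's question with a system prompt that narrows
    the model's behavior to a learning setting."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

if __name__ == "__main__":
    # These messages would be sent to a chat-style model API.
    for msg in build_messages("Can you help me factor x^2 + 5x + 6?"):
        print(msg["role"], ":", msg["content"][:60])
```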

In addition:

  • Our communications clearly convey that the AI will make errors (even in math) and may produce inappropriate content.
  • We limit access to our AI through Khan Labs, a space for testing learning tools. We use careful selection criteria so that we can test features in Khan Labs before broadening access.

We believe these efforts will make our AI stronger and more trustworthy in the long run.

  3. At this time, we grant access to our AI applications only through Khan Labs.

To sign up to test our AI-powered learning guide, users must be at least 18 years old and register through Khan Labs. Once registered, adults who have children associated with their Khan Academy accounts can grant those children access. Our in-product messaging clearly states the limitations and risks of AI. We limit how much individuals can interact with the AI each day, because we have observed that extended interactions are more likely to lead to poor AI behavior; a simplified version of such a daily cap is sketched below.
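
As an illustration of that kind of daily cap, here is a minimal sketch in Python. The cap of 30 messages and the in-memory store are assumptions made for the example; they are not Khan Academy's actual values or implementation.

```python
# Hypothetical sketch of a per-user daily interaction limit.
# The cap and in-memory storage are illustrative only.
import datetime

DAILY_MESSAGE_CAP = 30  # assumed value for illustration
_usage: dict[str, tuple[datetime.date, int]] = {}

def allow_message(user_id: str) -> bool:
    """Return True if the user is still under today's message cap."""
    today = datetime.date.today()
    day, count = _usage.get(user_id, (today, 0))
    if day != today:          # new day: reset the counter
        day, count = today, 0
    if count >= DAILY_MESSAGE_CAP:
        return False          # cap reached; block further AI chat today
    _usage[user_id] = (day, count + 1)
    return True

if __name__ == "__main__":
    print(all(allow_message("student-1") for _ in range(30)))  # True
    print(allow_message("student-1"))                          # False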

Every child who has parental consent to use our AI-powered learning guide receives clear communication that their chat history and activities are visible to their parents or guardians and, if applicable, their teachers. We use moderation technology to detect interactions that may be inappropriate, harmful, or unsafe; when the moderation system is triggered, it sends an automatic email alert to an adult.
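
To make that alert flow concrete, here is a minimal sketch in Python. The keyword check is a crude stand-in for real moderation technology, the email is printed rather than sent, and all names and addresses are invented for the example.

```python
# Hypothetical sketch of moderation triggering an email alert to an adult.
# The keyword check stands in for real moderation technology, and the
# message is printed instead of being sent through a mail server.
from email.message import EmailMessage

FLAGGED_TERMS = {"self-harm", "violence"}  # illustrative placeholder list

def is_flagged(chat_text: str) -> bool:
    """Crude stand-in for a moderation model's unsafe-content check."""
    return any(term in chat_text.lower() for term in FLAGGED_TERMS)

def alert_guardian(guardian_email: str, student: str, chat_text: str) -> None:
    """Compose an automatic alert email when moderation is triggered."""
    msg = EmailMessage()
    msg["To"] = guardian_email
    msg["Subject"] = f"Alert: flagged AI chat activity for {student}"
    msg.set_content(
        f"A recent AI chat by {student} was flagged by moderation:\n\n"
        f"{chat_text}\n\nPlease review their chat history."
    )
    print(msg)  # a real system would send this via an email service

if __name__ == "__main__":
    text = "I want to talk about violence in my school"
    if is_flagged(text):
        alert_guardian("guardian@example.com", "Student A", text)
```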

  4. We embrace and encourage a culture where ethics and responsible development are embedded in our workflows and mindsets.

Individuals and teams are asked to identify ethical considerations and evaluate risks at the outset of every project. Our decision-making is guided by risk evaluation: we prioritize risk mitigation, embrace transparency, and continuously reflect on the impact of our work.

We have a detailed monitoring and evaluation plan in place during this testing period. We will learn, iterate, and improve.


AI is a nascent field that is rapidly developing. We are enthusiastic about the potential for AI to benefit education, and we recognize that we have a lot to learn. Our ultimate goal is to harness the power of AI to accelerate learning. We will evaluate how AI works and we will share our learnings with the world. We expect to adapt our plans along the way.