Society has long understood its obligation to protect young people when it comes to their use of technology, and especially their personal data. Protections for users of all ages are paramount because they – and the institutions making decisions about the technology they use – may not have the expertise to fully understand how tools are developed, what student data is collected, how it’s used, or how well it’s protected.
For these reasons, edtech companies should take every necessary precaution to ensure their platforms are safe, trustworthy, and compliant, and that they do not expose users to harm or exploitation.
The Tools Competition requires competitors to safeguard their users through responsible AI and data protection. The competition is not alone in this effort: several prominent organizations and coalitions are already working on this issue, calling for a code of practice for tech used in schools, establishing standards to identify and eliminate bias in AI, and developing common principles specifically for AI in education.
This post provides resources and examples that Tools competitors and other edtech developers can reference to better understand safeguarding and ensure their tools are set up to best meet the needs of all learners.
Designing for Safety
AI is a nascent field, and there is still much to learn about its power to advance learning outcomes and its impact on the people who use it. We know there are inherent risks, limitations, and safety concerns that must be carefully managed as this powerful field continues to develop rapidly.
Many leading organizations are thinking about AI safety and ethics and helping to put frameworks in place that edtech companies and developers can follow to keep their users safe. Tools competitors should find and adapt the frameworks that work best for their context. We recommend a few:
- Safety by Design – developed by the Australian Government, this approach includes practical steps that technology companies can take to put user safety and rights at the center of the design and development process.
- Designing for Education with Artificial Intelligence – developed by the US Department of Education, this guide for developers provides core design recommendations for safety, security, and trust, as well as state-specific guidance.
- SAFE Benchmarks Framework – developed by the EDSAFE AI Alliance, this resource brings together more than 24 global AI safety, trust, and market frameworks and aims to guide a policy process and roadmap for all education stakeholders.
- Ethical Framework for AI in Education – developed by the Institute for Ethical AI in Education, this detailed framework is aimed at education leaders and practitioners to guide ethical design, procurement, and application decisions related to AI in education.
If you feel overwhelmed by the wealth of resources and don’t know where to start, you’re not alone. Digital Promise recently wrote about their experience learning about and synthesizing these resources, and the co-design process that ultimately led to the development of their new Ethically Designed AI product certification.
Principles for Responsible AI
In addition to these frameworks, we can look to the examples of several edtech companies that have developed guiding principles to shape their products and approach to safety.
Grammarly is guided by 5 Responsible AI Principles. To many users, AI can seem like a “black box.” Prompts go in and responses come out with no explanation or insight into how the machine works. Through the guiding principle of explainability, Grammarly strives to provide users with a non-technical, easy-to-understand explanation of its AI’s logic and decision-making process. In this way, Grammarly aims to help users understand how its products work so they can make informed decisions about how they use them.
Duolingo offers a mobile platform for learning more than 40 languages. It recently published its Responsible AI Standards, which include steps to ensure fairness in its products, such as avoiding algorithms known to contain or generate bias and ensuring that its users’ demographics are represented in all of its AI-driven tools. One way it does this is by reviewing and recording the diversity of people represented in the data sets used to create its AI systems. This record shows how inclusive the data are and helps the company decide whether it needs to boost the representation of certain language speakers.
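To make that kind of audit concrete, here is a minimal Python sketch of a representation check over a training set. The speaker_language field, the min_share threshold, and the toy data are illustrative assumptions for this post; they are not Duolingo’s actual schema or process.

```python
# Minimal sketch of a dataset-representation audit (illustrative only).
# Field name, threshold, and data are assumptions, not Duolingo's implementation.
from collections import Counter

def audit_representation(examples, group_field="speaker_language", min_share=0.10):
    """Count how often each group appears in the data and flag groups
    whose share falls below a minimum threshold."""
    counts = Counter(example[group_field] for example in examples)
    total = sum(counts.values())
    return {
        group: {
            "count": n,
            "share": round(n / total, 3),
            "under_represented": n / total < min_share,
        }
        for group, n in counts.items()
    }

# Toy training set: one group falls below the 10% threshold and gets flagged.
training_set = (
    [{"speaker_language": "Spanish"}] * 60
    + [{"speaker_language": "Hindi"}] * 35
    + [{"speaker_language": "Navajo"}] * 5
)
for group, stats in audit_representation(training_set).items():
    print(group, stats)
```

A report like this gives a team a concrete artifact to review before training and a simple trigger for collecting more data from under-represented groups.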
Khan Academy has disclosed how it mitigates risk in relation to its AI-powered Khanmigo tutor. Its approach to risk management covers in-app and broader education initiatives about ethics and the risks and limitations of AI; designing according to leading standards and frameworks; limiting access to AI applications, including the time spent with the AI per day, based on age or parental consent; and maintaining an ongoing monitoring and evaluation plan for regular iteration.
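As an illustration of that kind of access guardrail, the sketch below gates an AI feature on age, parental consent, and a daily time budget. The thresholds, field names, and logic are assumptions made for this example; they are not Khan Academy’s actual implementation.

```python
# Illustrative sketch of an age/consent/time-budget guardrail for an AI feature.
# Limits and field names are assumptions, not Khanmigo's actual rules.
from dataclasses import dataclass

DAILY_LIMIT_MINUTES = 30  # assumed per-day budget for AI interaction
ADULT_AGE = 18            # assumed age at which parental consent is not required

@dataclass
class User:
    age: int
    has_parental_consent: bool
    minutes_used_today: int

def can_access_ai(user: User) -> tuple[bool, str]:
    """Return whether the user may open the AI feature, with a reason."""
    if user.age < ADULT_AGE and not user.has_parental_consent:
        return False, "parental consent required"
    if user.minutes_used_today >= DAILY_LIMIT_MINUTES:
        return False, "daily time limit reached"
    return True, "access granted"

# Example checks
print(can_access_ai(User(age=13, has_parental_consent=False, minutes_used_today=0)))
print(can_access_ai(User(age=13, has_parental_consent=True, minutes_used_today=45)))
print(can_access_ai(User(age=20, has_parental_consent=False, minutes_used_today=10)))
```

Keeping the rule in one small, testable function makes it easier to audit and to adjust as consent requirements or time limits change.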
Resources for User Safeguarding
- Reflect on your practices: These self-assessment tools, developed by the Australian Government, help companies understand and improve their safety systems, processes, and practices.
- Advance your understanding of best practices: This is a rapidly evolving space, and teams should closely follow research and best practices as they continue to emerge. As one example, the US Department of Education Office of Educational Technology has published a series of guides, toolkits, webinars, and other resources for diverse stakeholders on how to navigate AI in education while prioritizing student safety.
- Follow established frameworks: There are many frameworks designed to help stakeholders prioritize user safety and privacy in edtech tools, including these developed by the Australian Government, the EDSAFE AI Alliance, and the Institute for Ethical AI in Education. Teams should adapt the frameworks that are right for their context.
- Consider credentials and product certifications: Certain credentials and product certifications, like these from Digital Promise, can signal a commitment to rigorous standards and security, while also reducing the burden on school leaders to vet solutions.
Competition Expectations
In Phase II proposals, competitors will be asked to articulate their approach to safeguarding users, including potential risks and the frameworks or practices they follow (or plan to follow) in the development of their tool. For existing tools or platforms, competitors must demonstrate that their tool is safe, trustworthy, and compliant, and effectively mitigates risks to users.
While the commitment to safeguarding may be apparent throughout the entire proposal, competitors will be asked to detail their approach to safety in the Solution and Theory of Change section of the proposal, in response to the following prompt:
- How will you protect your users and mitigate risk? Describe the guardrails, practices, and/or standards you follow to address issues of privacy, accessibility, bias, and potential exploitation.
Refer to the Methodology & Safety section of your track rubric for more specifics on how competitors will be evaluated.