
What’s Next: Advanced AI for EdTech

The past few years have transformed educational technology, with Artificial Intelligence (AI) and Large Language Models (LLMs) redefining what’s possible in education: powering adaptive tutors, automated feedback, and new ways to engage students. But as the AI landscape continues to evolve, we’re entering a new phase, one defined by models that can think, act, and perceive.

In his talk during this Tools Competition Webinar, What’s Next: Advanced AI for EdTech, Ralph Abboud, Program Scientist at Renaissance Philanthropy, explored this rapidly shifting frontier and its implications for learning. 

Check out the video, read our summary of the three advancements discussed, and find additional resources below. 

From Language to Reasoning: Large Reasoning Models

LLMs are built on a simple idea: predicting the probability of the next word given the sequence of words before it. Trained on massive text corpora, they have proven transformational across many applications. Yet this same setup, inherently built on probabilities, can also lead to confident yet incorrect answers, a phenomenon known as hallucination. LLMs also lack persistent memory and structure, making it hard to build reliable, personalized educational tools.
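To make that idea concrete, here is a deliberately tiny sketch, in plain Python with a made-up vocabulary and hand-coded probabilities, of what next-word prediction looks like. Nothing here reflects a real model’s internals; it only illustrates how a purely probabilistic generator can state a wrong answer with complete confidence.

```python
import random

# A toy "language model": given the words so far, return a probability
# for each candidate next word. A real LLM learns these probabilities
# from massive text corpora; here they are hard-coded for illustration.
def next_word_distribution(context):
    if context == ("the", "capital", "of", "australia", "is"):
        # In this imaginary training data, "Sydney" appears near
        # "Australia" more often than the correct answer, "Canberra",
        # so the model confidently favors the wrong word: a hallucination.
        return {"sydney": 0.6, "canberra": 0.3, "melbourne": 0.1}
    return {"the": 1.0}

def sample(distribution):
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return random.choices(words, weights=weights)[0]

context = ("the", "capital", "of", "australia", "is")
print(sample(next_word_distribution(context)))  # most often prints "sydney"
```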

Enter large reasoning models (LRMs), a new generation of systems designed not just to respond, but to think. Rather than jumping straight from prompt to answer, LRMs are trained to generate longer chains of reasoning, showing their work step by step. This shift helps models perform better on complex tasks, such as solving math problems or designing learning plans, where intermediate thinking matters as much as the final answer.

By producing and reviewing their own scratch work, LRMs can reason through problems more deliberately, reducing errors and offering richer explanations. The tradeoff is computational cost (more thinking takes more time and energy), but the improvement in reasoning opens new opportunities for education applications that demand precision and thoughtfulness. It also lets users tap into more capable behavior out of the box, without having to hand-specify every decision step. Ensuring these systems are transparent and accountable will be key to maintaining safety and public trust as they become more deeply embedded in learning environments.
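As a rough illustration of the “produce scratch work, then review it” idea, the toy sketch below proposes candidate reasoning chains and accepts only the one whose arithmetic checks out. Real LRMs learn this behavior during training rather than relying on a hand-written checker, so every name and step here is illustrative only.

```python
# Toy illustration of "generate reasoning, then check it" (not any
# vendor's actual training or inference procedure). Candidate chains
# of reasoning are proposed, and only arithmetically consistent ones
# are accepted; the extra checking is why "thinking" costs extra compute.
candidates = [
    {"steps": ["24 students / 4 per team = 5 teams"], "answer": 5},
    {"steps": ["24 students / 4 per team = 6 teams"], "answer": 6},
]

def check(candidate):
    # Re-derive the result independently and compare with the stated answer.
    return 24 // 4 == candidate["answer"]

for candidate in candidates:
    if check(candidate):
        print("Accepted reasoning:", *candidate["steps"])
        break
    print("Rejected reasoning:", *candidate["steps"])
```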

From Responding to Acting: Agentic AI

Another major development is the rise of agentic AI, systems that can plan their own steps and use external tools to complete a task. Instead of simply generating text, an AI agent can decide, for example, to call a calculator, open a web browser, or run a code snippet to find an accurate answer.

This framework makes AI far more capable, reliable, and adaptable. In education, it means a model can not only draft an essay but also check citations, pull real-time data, and use visual tools to create a graph. The model becomes less of a text generator and more of a problem solver.
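The core pattern behind most agent designs is a simple loop: the model proposes an action, the system runs the matching tool, and the result is fed back until the model declares the task done. The sketch below shows that loop in miniature; `plan_next_action` is a hypothetical stand-in for a real model call, and the lone tool is a toy calculator.

```python
# Toy tool the agent can call. Real agents might open a browser,
# query a database, or run code in a sandbox.
def calculator(expression: str) -> str:
    # eval() is unsafe for untrusted input; acceptable only in this toy.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

# Hypothetical stand-in for the model deciding what to do next.
# A real implementation would send the task and history to an LLM
# and parse its chosen tool and arguments from the response.
def plan_next_action(task: str, history: list) -> dict:
    if not history:
        return {"tool": "calculator", "input": "24 * 7"}
    return {"tool": "finish", "input": history[-1]}

def run_agent(task: str) -> str:
    history = []
    while True:
        action = plan_next_action(task, history)
        if action["tool"] == "finish":
            return action["input"]
        result = TOOLS[action["tool"]](action["input"])
        history.append(result)  # feed the observation back to the planner

print(run_agent("How many hours are in a week?"))  # prints 168
```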

Yet this flexibility comes with risk. The more steps an agent takes, the more places it can go wrong: choosing the wrong tool, misinterpreting an instruction, or failing to complete a chain of actions. Designing reliable agents for classrooms and learning platforms will require careful safeguards and evaluation to ensure their outputs remain accurate and aligned with learning goals.

Case Study: Agentic AI in Action. During the webinar, the ALTER-Math team at the University of Florida presented a case study that used knowledge graphs and agentic design to create effective, scalable AI-augmented teachable agents. These agents are grounded in a learning-by-teaching framework, transforming students from passive learners into proactive teachers.

ALTER-Math is part of the Learning Engineering Virtual Institute (LEVI), a program managed by The Learning Agency, which supports learning engineering leaders who develop AI-based tools that drastically improve learning in middle school grades.

From Reading to Seeing and Hearing: Multimodal Models

Finally, multimodal models extend AI’s capabilities beyond text, allowing systems to interpret images, audio, and video. This opens the door to tools that can assess handwriting, interpret tone of voice, or analyze student work that isn’t purely text-based.

For education, the implications are enormous: real-time speech feedback for language learners, analysis of handwritten math problems, or systems that recognize student-drawn diagrams. But multimodality also brings challenges. These models are resource-intensive to run, and moderating across multiple input types (text, image, audio) adds new layers of complexity to safety, accuracy, and fairness.
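At the interface level, multimodality mostly means the input is no longer a single string. The sketch below uses a hypothetical `MultimodalRequest` shape (no real API is implied) to show the kind of mixed payload, text plus images plus audio, that such systems consume.

```python
from dataclasses import dataclass, field

# Hypothetical request shape for a multimodal model; real APIs differ,
# but all must carry more than one kind of input.
@dataclass
class MultimodalRequest:
    text: str
    image_paths: list = field(default_factory=list)  # e.g. scanned worksheets
    audio_paths: list = field(default_factory=list)  # e.g. spoken responses

request = MultimodalRequest(
    text="Assess this student's handwritten solution and spoken explanation.",
    image_paths=["worksheet_page1.png"],
    audio_paths=["explanation.wav"],
)

# A real system would encode each modality (pixels, waveforms, tokens)
# into a shared representation before the model reasons over all of them.
print(request)
```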

Building the Infrastructure for What’s Next

Across all these innovations, one theme remains constant: technology is advancing quickly, but it’s only as useful as the educational principles guiding it. Reliable evaluation, thoughtful design, and strong alignment with learning science are essential to ensure AI tools support, rather than distract from, learning outcomes.

Equally critical is the public digital infrastructure that makes advanced AI both possible and equitable. Shared datasets, open-source models, and interoperable systems enable responsible development and reduce barriers to participation. Public tools and data pipelines let researchers and developers build on common foundations rather than duplicating basic work, allowing innovation to focus on what truly matters: serving learners.

This kind of open infrastructure also supports accountability and transparency, key ingredients for safe, inclusive, and evidence-based AI in education. By ensuring data access and governance are public by design, we can identify and address inequities, foster collaboration, and ensure every learner benefits from the next wave of technological progress.

The goal isn’t to chase the newest model but to build the infrastructure that helps frontier AI serve real educational needs. Whether through personalized tutoring, automated feedback, or new modes of interaction, the next generation of AI offers powerful possibilities, if we can ensure it learns as responsibly as it teaches.

Resources to Learn More

Interested in exploring this topic further? Check out the following resources:

  • Join a monthly Learning Engineering Research Round-Up. Connect with Ralph Abboud and others in the learning engineering community to explore the intersection of AI, data, and educational research.
  • Take the Google Cloud Introduction to Generative AI course. This beginner-friendly course covers the fundamentals of generative AI and its practical applications. 
  • Explore IBM’s Tutorial on Multimodal LLMs. Learn more about how large language models can process text, images, and audio to create richer AI applications. 
  • Learn more about Safeguarding Edtech Users & Data Privacy. Understand key frameworks and best practices for protecting learners and educators through responsible AI design, transparent data use, and ethical technology governance.
  • Dive deeper into putting Accessibility & Equity at the forefront of design. Explore how inclusive design principles and accessibility standards can expand opportunity, ensuring AI tools work for all learners across contexts and abilities.

The Tools Competition launches annually each Fall and seeks to accelerate breakthrough innovations in education, backing evidence-driven technologies that generate research insights and deliver real-world impact for learners worldwide.