
Didn’t Advance in Phase I of the 2026 Tools Competition? Learn More

The Tools Competition strives to provide value to all competitors, regardless of whether they advance. The competition is grateful for your time and ideas and is eager to share key feedback and resources to help you strengthen your tool and continue advancing your work.

Based on our review, we’ve compiled feedback highlighting common reasons submissions did not move forward.

Feedback on submissions to the Accelerating K-12 Learning Track and the Building Pathways to Postsecondary Success Track

Each year, the Tools Competition receives an exceptionally competitive pool of abstracts. Our evaluation process is intentionally rigorous, prioritizing potential impact and evidence-supported approaches. Because Phase II requires a substantial commitment, only the most highly aligned proposals with a strong likelihood of success advance—typically around 20% of Phase I abstracts.

  1. Alignment with track goals or audiences

Abstracts advanced only if they clearly fit a track’s objectives, target audiences, and eligibility requirements. Proposals targeting age groups, geographies, or learning outcomes other than those specified did not move forward.

  2. Evidence of demand

Strong abstracts demonstrated clear user need or adoption potential. Proposals were less competitive without evidence that target populations need and would use the solution.

  3. Evidence of effectiveness

Many exciting ideas lacked detailed evidence showing why the tool is, or will be, successful. The strongest abstracts referenced relevant supporting research, early traction, pilot results, or partnerships; even brief but concrete examples boost a proposal’s competitiveness. Abstracts for existing tools that lacked efficacy data, a description of the tool’s traction, user experience, or impact on learner outcomes were less competitive.

  4. Innovation, novelty, or use of funds

The competition prioritizes bold, distinctive ideas. The strongest abstracts highlighted how their solution differs from existing alternatives and why it represents a novel contribution. By contrast, abstracts describing business as usual, standard product updates, or general growth plans scored lower because they did not sufficiently establish what makes the proposed work innovative or what advancement the funding would support.

  5. Focus on learning engineering

Learning engineering is at the core of the competition. The most compelling abstracts described how their tools generate and use learning data to drive improvement and answer key research questions. Abstracts that discussed data use only in general terms, or focused solely on internal improvement, were less competitive.

  6. Scale and sustainability

Some tools faced barriers to scaling, such as reliance on expensive physical components or entry into saturated markets, without clear plans to overcome them. The strongest abstracts identified these risks and offered strategies or partnerships to support growth.

  7. Team capacity or relevant expertise

The most competitive teams showed that they had the appropriate expertise, staffing, and partnerships aligned to their proposed approach. Abstracts without clear evidence of these elements were less competitive, particularly for projects involving advanced technologies, complex development needs, or new domains.

  8. Detail, focus, or clarity

Some abstracts were too broad or high-level and lacked concrete details about how the tool works. Others contained significant errors or incomplete sections. Clear, well-edited abstracts that articulated a focused, feasible idea stood out most.

  9. Multiple abstract submissions

For teams submitting multiple abstracts, organizers selected the strongest abstract from each team to advance.

Feedback on submissions to the Datasets for Education Innovation Track

This year marked the second cycle of the Tools Competition’s Dataset track, and we were thrilled to see strong interest and high-quality submissions. The evaluation process prioritized datasets that captured meaningful aspects of learning, represented diverse learners and contexts, and had the potential to advance research and edtech development.

Based on our review, common reasons submissions did not advance included:

  1. Misaligned focus area

The strongest datasets were grounded in learners’ and teachers’ experiences. Submissions that offered novelty or research value but were not clearly centered on educational contexts were less competitive.

  2. Emphasis on the tool rather than the dataset

Abstracts that highlighted the dataset itself, including its variables, metadata, populations represented, and associated research questions, rather than the tool, were most competitive. Submissions that focused on the tool, or sought funding for product development, often obscured the dataset’s novelty or utility.

  3. Unclear research questions

Submissions that clearly linked datasets to meaningful questions and challenges in the field were prioritized. Without this clarity, assessing alignment and potential impact was difficult.

  4. Incomplete or unclear variables and metadata

Strong datasets defined key variables, included thorough metadata, and covered diverse learner populations. Submissions lacking these details were harder to evaluate and less competitive.

  5. Limited novelty

The most compelling datasets added unique value or insights. Submissions duplicating existing publicly available data, leaderboards, or review platforms like EdReports were less competitive.

  6. Narrow scope or limited generalizability

Highly specialized or narrowly focused datasets offered less potential for broader research use. Strong submissions balanced novelty with applicability across contexts.

  7. Potential access or licensing constraints

Submissions likely to carry restrictive access or reuse terms were less competitive; open availability is a key factor for advancing the field because it allows others to freely access, reuse, and build upon the data. The most impactful datasets adhere to open data principles, enabling others to validate findings, develop new platforms, and extend the work in meaningful directions.

Stay engaged with the community

We encourage you to stay engaged with the competition community and continue developing your idea to advance the shared goal of improving teaching and learning worldwide.

Organizers will continue to build additional resources to support Tools Competition competitors and the learning engineering community overall. Initial resources are available below. 

  • Join the Learning Engineering Google Group. This listserv, managed by The Learning Agency, has over 3,500 members of the learning engineering community and is not specific to the Tools Competition. Competitors can post updates, seek collaborators, and identify new opportunities.
  • Join a monthly virtual networking session to connect with others in the learning engineering community. While this event is not specific to the Tools Competition, it can be a great place to network, find collaborators, and talk about the impact of your work.
  • Join a monthly Learning Engineering Research Round-Up. Discuss the intersection of AI, data, and educational research with others in the learning engineering community. 
  • Explore resources on our blog. The Tools Competition blog features articles, practical resources, and insights on topics like Learning Engineering, data privacy, accessibility and equity, responsible AI, user-centered design, and case studies from Tools Competition winners to inform and inspire your work.

The Tools Competition launches annually each Fall and seeks to accelerate breakthrough innovations in education, backing evidence-driven technologies that generate research insights and deliver real-world impact for learners worldwide.