Since the competition launched in 2020, we have seen tremendous growth in competitor interest and the quality of proposals. This growth leads many competitors to wonder: Is my proposal going to be competitive? How are proposals reviewed and selected? What kind of feedback can I expect from the process?
We have heard from competitors that with grant and funding opportunities, they often feel as if they’re submitting proposals into a “black hole.” They are also eager for feedback on how they can improve or develop their ideas.
This post aims to illuminate key details of the selection process, as well as the feedback competitors will receive throughout their participation in the competition.
In the Tools Competition, we aim to provide all competitors with actionable feedback to advance their ideas, such that the competition is a valuable program for all – not just those who are ultimately named as winners. Feedback takes multiple forms and becomes increasingly rigorous as the competition progresses, the competitor pool narrows, and organizers learn more about competitors’ tools.
Get a sense of what to expect at each phase of the competition below. To recap, the competition takes place over eight months and has three phases:
- Phase I: Competitors submit a brief abstract about their tool (~1,000 words)
- Phase II: Select competitors are invited to submit a full proposal, budget, and plan for execution (~3,000 words)
- Phase III: Finalists pitch virtually before a panel of judges
In Phase I, competitors can join one of several support events to seek feedback as they develop their proposals, and after the review process they can review a summary of evaluation results. Organizers will provide feedback on evaluation trends across the overall pool, including core strengths and weaknesses, to help competitors understand where their tool fits into the overall slate.
At this stage, competitors submit brief abstracts and organizers assess fit within the competition, alignment to track objectives, and overall novelty and quality of proposals. This phase is also organizers’ first opportunity to understand trends within the overall slate and to identify the types of ideas that are most interesting. We receive many fantastic Phase I abstracts, and only the ones most aligned with competition priorities will advance to Phase II.
As the competition has grown each year, Phase I has become increasingly competitive. We know that submitting a proposal takes considerable care, time, and team capacity. For that reason, the expectations for entry to Phase I are low – a brief abstract of 1,000 words – but organizers expect competitors to pack their abstracts with compelling evidence that makes their tools stand out. From Phase II onward, the time commitment and expectations increase significantly, so we advance only those Phase I abstracts that have true winning potential.
In Phase II, competitors will have additional opportunities to join targeted support events as they internalize the rubric and submission requirements and develop their proposals. Following the review process, organizers offer all Phase II competitors the opportunity to receive direct proposal feedback; this offer appears in the Phase II advancement notification email.
At this stage, proposals are scored against the rubric and receive technical review for feasibility and novelty; each proposal is reviewed at least twice by independent experts. As with any competition, there are more great ideas than there are finalist spots and funding dollars to award. Finalist selection takes many factors into consideration: rubric scores, technical assessment, the diversity of the overall slate, alignment with competition priorities, and due diligence on the teams.
As such, feedback cannot be reduced to any one element or score in the process. Organizers summarize the key rationale and convey the feedback available to them. We work with reviewers to shape the kind of feedback that will be most actionable for competitors, and we continue to refine this process so that all competitors receive valuable takeaways.
It is not uncommon for competitors to return in subsequent years and succeed as winners. Sometimes a proposal receives strong reviews but simply isn’t right for the competition, whether because of the overall slate, competition priorities, or an element of a similar proposal that pushed it slightly ahead.
In past years, organizers have connected with competitors by phone to convey feedback summarizing the various levels of review. These conversations have also been a great opportunity to get to know competitors better, identify additional opportunities for them, and hear feedback on the competition itself. For the 2025 competition, organizers are exploring other ways of conveying feedback to ensure it is as valuable as possible for competitors.
In Phase III, finalists deliver virtual pitches to a panel of judges. They have the opportunity to submit a recorded practice pitch for feedback, connect with organizers to review Phase II reviewer comments, and follow up after their pitch to discuss the judges’ input.
The pitch process and winner selection lean into the subjective element of a competition.
As organizers, we nominate a slate of finalists based on Phase II reviews that we believe could make excellent winners. It is now up to the judges to use their unique expertise across philanthropy, research, the classroom, and industry to recommend winners that are the most exciting, innovative, and potentially transformative for education.
Judges are invited to consider both the proposal and the pitch. They can ask questions during a live Q&A and deliberate privately on the strengths and weaknesses of each pitch. Together, judges consider whether the team has effectively conveyed the novelty and strengths of its solution, its potential for impact, its ability to contribute to learning science research, its support for equitable learning outcomes, and the team’s track record and capacity to execute.
Organizers invite judges to give feedback to finalists, which is then shared with all finalists and winners in a follow-up conversation. In past years, some proposals that were not successful in judge deliberations did not receive additional feedback after their pitches. Beginning with the 2025 competition, we have built feedback directly into the judge scorecards to ensure that every finalist receives it.
The Tools Competition launches each fall and seeks to identify edtech innovations that leverage digital technology, big data, and learning science to meet the urgent needs of learners worldwide.