Instrumentation: How Your Product Can Help Understand What Works in Learning

‘Instrumentation’ refers to building out a digital learning platform so that external researchers can use the data it generates to conduct research – in other words, the platform becomes an “instrument” for research. Instrumentation is central to learning engineering because it is the process by which a platform turns its data into a research tool.

Instrument your platform through A/B testing

One primary way to instrument your platform is to give external researchers a way to run A/B experiments. This is already happening: several digital learning platforms have “opened up” to external researchers, creating systems that allow researchers to run trials within the platform. These platforms facilitate large-scale A/B trials and offer open-source trial tools, as well as tools that teachers can use to conduct their own experiments.

One example of this approach is the ETRIALS testbed created by the ASSISTments team. As co-founder Neil Heffernan has argued, ETRIALS “allows researchers to examine basic learning interventions by embedding randomized controlled trials within students’ classwork and homework assignments. The shared infrastructure combines student-level randomization of content with detailed log files of student- and class-level features to help researchers estimate treatment effects and understand the contexts within which interventions work.”

To date, the ETRIALS tool has been used by almost two dozen researchers to conduct more than 100 studies, and these studies have yielded useful insights into student learning. For example, Neil Heffernan has shown that crowdsourcing “hints” from teachers has a statistically significant positive effect on student outcomes. The platform is currently expanding to increase the number of researchers by a factor of ten over the next three years.

Other examples of platforms that can be considered “instrumented” by “opening up to researchers” include Canvas, Zearn, and Carnegie Learning.

Carnegie Learning created the powerful UpGrade tool to help ed tech platforms conduct A/B tests. This project is designed to be a “fully open source platform and aims to provide a common resource for learning scientists and educational software companies.” Using Carnegie Learning’s UpGrade, the Playpower Labs team found that adding “gamification” actually reduced learner engagement by 15 percent.

When it comes to building A/B instrumentation within a platform, the process usually begins with identifying key data flows and the points in the system where users could be split into different conditions. Platforms will also have to address issues of consent, privacy, and sample size. For instance, the average classroom does not provide a large enough sample size on its own, so platforms will need to think about ways to coordinate across classrooms. A number of platforms have also found success building “templates” that make it easier for researchers to run studies at scale.
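To make the idea of a student-level “split” concrete, below is a minimal sketch in Python of deterministic random assignment plus exposure logging. It is an illustration of the general technique, not the implementation used by ASSISTments, UpGrade, or any other platform mentioned here; the experiment name, condition labels, and log format are all assumptions.

```python
import hashlib

def assign_condition(student_id: str, experiment_id: str,
                     conditions=("control", "treatment")) -> str:
    """Deterministically assign a student to a condition.

    Hashing the (experiment, student) pair gives a stable assignment
    without storing a lookup table, keeps the split roughly balanced in
    expectation, and lets assignments be pooled across classrooms to
    reach an adequate sample size.
    """
    digest = hashlib.sha256(f"{experiment_id}:{student_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]

def log_exposure(event_log: list, student_id: str,
                 experiment_id: str, condition: str) -> None:
    """Record the assignment so researchers can later join it to outcome data."""
    event_log.append({
        "experiment_id": experiment_id,
        "student_id": student_id,  # swap in a pseudonymous ID before sharing data
        "condition": condition,
    })

# Usage: compute the condition whenever the student reaches the experimental
# content, and log the exposure event for later analysis.
events = []
condition = assign_condition("student-123", "hint-style-pilot")
log_exposure(events, "student-123", "hint-style-pilot", condition)
```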

Share datasets to contribute to learning engineering

A second way that learning platforms can contribute to the field of learning engineering is by producing large, shareable datasets. Sharing large datasets that have been anonymized (stripped of all personally identifiable information to protect student privacy) can be a major catalyst for progress in the field as a whole.
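As a rough illustration of what anonymization can involve in practice, here is a minimal Python sketch that assumes a tabular export with hypothetical column names, drops direct identifiers, and replaces raw student IDs with salted hashes. Real de-identification goes further, covering indirect identifiers, free-text fields, and the terms of a data-sharing agreement.

```python
import hashlib
import pandas as pd

# Hypothetical columns assumed to hold personally identifiable information.
PII_COLUMNS = ["student_name", "email", "date_of_birth", "ip_address"]

def pseudonymize(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Drop direct identifiers and replace student IDs with salted hashes.

    A salted hash keeps records for the same student linkable within the
    released dataset while making it hard to map back to the original ID.
    """
    out = df.drop(columns=[c for c in PII_COLUMNS if c in df.columns])
    out = out.assign(student_id=out["student_id"].map(
        lambda sid: hashlib.sha256(f"{salt}:{sid}".encode()).hexdigest()[:16]
    ))
    return out
```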

For example, in the field of machine learning for image recognition, there is a ubiquitously used open-source dataset of more than 14 million labeled images called “ImageNet”. The creation and open release of this dataset has allowed researchers to build better and better image recognition algorithms, catapulting the field to a new, higher standard. We need similar datasets in the field of education.

An example of this approach is the development of a dataset aimed at improving assisted feedback on writing. Called the “Feedback Prize,” this effort will build on the Automated Student Assessment Prize (ASAP), which ran in 2012, and will support educators in their efforts to give students feedback on their writing.

To date, the project has developed a dataset of nearly 400,000 essays from more than a half-dozen different platforms. The data are currently being annotated for discourse features (e.g., claims, evidence) and will be released as part of a data science competition.
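To show what span-level discourse annotation can look like, here is a small hypothetical record in Python; the labels, field names, and offsets are purely illustrative and are not the format of the Feedback Prize release.

```python
# Hypothetical discourse-annotated essay record using character-offset spans.
claim = "Schools should start later because teens need more sleep."
evidence = "A 2019 survey found most students get under seven hours a night."
text = claim + " " + evidence

annotated_essay = {
    "essay_id": "e-000123",            # hypothetical identifier
    "text": text,
    "annotations": [                   # spans index into `text`
        {"label": "Claim",    "start": 0,              "end": len(claim)},
        {"label": "Evidence", "start": len(claim) + 1, "end": len(text)},
    ],
}
```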

Another example of an organization that has created a shared dataset is CommonLit, which uses algorithms to determine the readability of texts. CommonLit has shared its corpus of 3,000 level-assessed reading passages for grades 6-12. This will allow researchers to create open-source readability formulas and applications.
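For a sense of what an open readability formula looks like, here is a minimal Python sketch of the classic Flesch-Kincaid grade-level formula with a crude syllable counter. It is a hand-set formula used purely as an illustration; formulas derived from a corpus like CommonLit’s would instead be fit to the leveled passages.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Usage: score any passage; higher values indicate harder text.
print(flesch_kincaid_grade(
    "Researchers can compare formula scores against the grade levels "
    "that human raters assigned to each passage."))
```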

In the context of the Learning Engineering Tools Competition, a dataset alone would not make a highly competitive proposal; however, teams with a compelling dataset are encouraged to partner with a researcher or developer who can design a tool or an algorithm based on the data.

Is my platform instrumented?

Consider these questions to begin determining whether your platform is instrumented:

Does your platform allow external researchers to run science of learning studies within your platform?

Does your platform allow external researchers to mine data within your platform to better understand the science of learning?

If the answer to either question is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.

If the answer to either of the above questions is “no,” then we highly recommend that you partner with a researcher to help you think through how to begin to instrument your platform as part of the Tools Competition. 

If you need help identifying a researcher, please reach out to ToolsCompetition@the-learning-agency.com. We have a large and growing network of researchers who can assist platforms with:

  1. Deciding how best to instrument a platform in ways that serve the field,
  2. Determining what data a platform is able to collect and how best to collect it,
  3. Using the data and related research to answer questions of interest.

We can facilitate connections to researchers through individual requests or broader networking listservs and events.
