Using Feedback from Teachers, Students, and Platform Analytics to Generate Intelligent and Adaptive Content Recommendations
In March 2019, Learning Equality co-convened a two-day design sprint in Paris in collaboration with UNHCR, Google.org, Vodafone Foundation, and UNESCO. We explored the need for automated curriculum alignment in crisis contexts, and the possible role of artificial intelligence (AI) in recognizing curricular mandates and patterns and recommending pertinent educational content in return. This work is part of a broader collaboration with refugees and partner organizations to explore how digital education can support learning in these contexts. The experience of engaging our professional communities in such a challenging question was as valuable as the outputs themselves, so we’ve been sharing the discussions and debates we’ve had, as they may be useful in others’ work.
Over the past month, the Design2Align blog post series has covered topics such as contextual display and creation of metadata, teacher-generated content annotations, technical considerations in OER for curriculum alignment facilitation, and open models for just-in-time learning pathway recommendations.
Today, Learning Equality’s UX Design Lead, Jessica Aceret, talks about the specific curriculum needs of crisis contexts, and how meeting them requires not only a human touch but also an alignment tool that provides intelligent content recommendations, so that relevant resources can be found more easily.
The Design Sprint on Curriculum Alignment in Crisis Contexts, which took place in March in Paris, strategically brought together many different roles in the education technology space — curriculum designers, policymakers, technology experts, refugees, and more. These are people who may not always encounter each other directly in day-to-day work, yet whose roles are intimately related in the bigger picture of how curriculum gets created, delivered, and implemented. Each person present held a facet of knowledge and a perspective that would help us, as a collective, tackle the question at hand — how might artificial intelligence (AI) be leveraged to align digital educational resources to an adapted curriculum designed for those in crisis contexts?
In the process of understanding the problem space, five sub-project groups emerged. As a designer who has worked on a learning platform, I decided to join the group that explored the applications of AI for Evaluation and Assessment processes.
We began by elaborating on the issues that the sprint group as a whole identified during the initial brainstorming activity. Part of the job of someone designing a curriculum is sifting through a wide range of available resources to find the ones that best fit the needs of the intended learners. There are many criteria one might consider when deciding whether a resource is a good fit. Provided that the metadata is available, some of these criteria are easier to check, like grade level, subject, and national standard. Other criteria can be much more nuanced and subjective, like whether the cultural references within a resource are relevant to the learner audience, whether it accommodates the specific activities a teacher intends to use it for, or whether it is suitable for differentiated teaching or self-study.
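To make the “easier” half of this sifting concrete, here is a minimal sketch of filtering a repository on whatever structured metadata happens to be present, leaving the nuanced criteria for human review. The field names and the shape of the catalog are hypothetical assumptions for illustration, not tied to any particular platform.

```python
def filter_by_metadata(resources, grade=None, subject=None, standard=None):
    """Return resources whose available metadata matches the given criteria.

    Resources missing a field are kept rather than dropped, since metadata is
    often incomplete; nuanced criteria (cultural relevance, suitability for
    self-study, and so on) still require human review afterwards.
    """
    criteria = {"grade": grade, "subject": subject, "standard": standard}

    def matches(resource):
        return all(
            resource.get(field) is None or resource[field] == wanted
            for field, wanted in criteria.items()
            if wanted is not None
        )

    return [r for r in resources if matches(r)]


# Hypothetical catalog entries: dictionaries with optional metadata fields.
catalog = [
    {"title": "Intro to fractions", "grade": 4, "subject": "math", "standard": None},
    {"title": "Water cycle video", "grade": 4, "subject": "science", "standard": "NGSS"},
]
print(filter_by_metadata(catalog, grade=4, subject="math"))
# [{'title': 'Intro to fractions', 'grade': 4, 'subject': 'math', 'standard': None}]
```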
Another important source of information for someone who is doing this alignment comes from monitoring and evaluation (M&E) processes. These include needs assessments that communicate the goals that the curriculum should serve and reports that show how learning experiences with those resources actually played out from both the teacher and student perspectives.
Within the tasks of (1) sorting through available resources, and (2) evaluating how those resources were experienced by teachers and learners, we saw the following challenges arise in our discussion:
- Even on its own, setting aside other constraints, the process of narrowing down the right resources from large repositories is time-consuming, difficult, and at times overwhelming.
- Knowing what gaps exist in a curriculum gives aligners a direction to work towards, but the most helpful feedback is often also the most nuanced, and difficult to convey through the statistics, numbers, and general statements of M&E reports alone. As a result, needs may not be fully represented.
- Most importantly, the unique curriculum needs of those who have been displaced by crisis and emergency cannot be predicted by any national standard. Here, human judgment and empathy are essential. For a curriculum aligner to efficiently find the digital educational resources they need, the tool they use must adapt equally to the situation at hand and make relevant resources easy to find as they build and align the digital library.
Our group saw an opportunity to leverage AI by building a feedback system for curriculum aligners and content creators that records qualitative feedback from students and teachers about the resources they’ve used. This feedback would be combined with platform analytics and other quantitative data to provide intelligent recommendations that aid the task of narrowing down and identifying the best-fit resources.
In thinking about the type of feedback to collect, we started by considering what would ultimately benefit the curriculum aligner. For students, we came up with questions such as whether they understood the material, whether they would feel comfortable teaching it to or sharing it with others, how difficult they found it, and more. For teachers, we considered questions like whether they would use the resource again or recommend it to others, and how much time they spent preparing before using it. For platform analytics, measures like frequency of usage, time spent on resources, geographic location of usage, and which resources tend to be consumed within a “learning pathway” could all feed into estimates of the relative quality of resources.
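As a thought experiment rather than anything the group formally specified, the sketch below shows how such signals might be aggregated per resource and blended into a single relative-quality score for an aligner’s review queue. The `ResourceSignals` fields and the weights are placeholder assumptions that a real system would need to define and tune together with aligners and M&E specialists.

```python
from dataclasses import dataclass


@dataclass
class ResourceSignals:
    """Aggregated feedback and analytics for one resource (all fields hypothetical)."""
    student_understood_rate: float   # share of students who said they understood it (0-1)
    student_would_teach_rate: float  # share comfortable teaching or sharing it (0-1)
    teacher_would_reuse_rate: float  # share of teachers who would use it again (0-1)
    avg_prep_minutes: float          # average teacher preparation time before use
    completion_rate: float           # analytics: share of sessions completed (0-1)
    pathway_inclusion_rate: float    # how often it appears inside learning pathways (0-1)


def fit_score(s: ResourceSignals, max_prep_minutes: float = 60.0) -> float:
    """Blend qualitative feedback and platform analytics into a rough 0-1 score.

    The weights are arbitrary placeholders; in practice they would be tuned
    (and debated) with curriculum aligners and M&E specialists.
    """
    prep_penalty = min(s.avg_prep_minutes / max_prep_minutes, 1.0)
    return (
        0.25 * s.student_understood_rate
        + 0.15 * s.student_would_teach_rate
        + 0.25 * s.teacher_would_reuse_rate
        + 0.15 * s.completion_rate
        + 0.10 * s.pathway_inclusion_rate
        + 0.10 * (1.0 - prep_penalty)
    )


# Example: rank a handful of candidate resources for an aligner's review queue.
candidates = {
    "fractions-video": ResourceSignals(0.8, 0.6, 0.9, 15, 0.7, 0.5),
    "algebra-worksheet": ResourceSignals(0.5, 0.3, 0.4, 45, 0.4, 0.2),
}
ranked = sorted(candidates, key=lambda name: fit_score(candidates[name]), reverse=True)
print(ranked)  # ['fractions-video', 'algebra-worksheet']
```

However the weights end up being chosen, keeping the combination rule this simple and transparent would make it easier for aligners to question, adjust, and override the recommendations it produces.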
As we dug into what would make this feasible, we identified a few things that must be in place for a future team to explore as a next step:
- Students and teachers must be incentivized to provide feedback to this system; otherwise there will not be enough data to make reliable recommendations. We explored ideas to motivate learners and coaches through gamification elements and rewards for giving feedback.
- We also worked from the assumption that this feedback system would be interoperable and compatible with the myriad software platforms through which digital content is delivered.
- Human judgment is needed to define what it means for a resource to be “best fit” before any AI scoring rules can be written or models trained.
- The risk of dataset bias must also be addressed, especially given the very real constraints of intermittent or entirely unavailable internet connectivity in crisis contexts, which can leave recommendations misrepresentative of actual needs. Relatedly, popular and well-known resources may dwarf lesser-known but possibly more relevant ones simply because they receive disproportionate amounts of feedback; one simple way to soften that effect is sketched below. Fortunately, there are databases that can support these sorts of offline and distributed networks.
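As one illustration of how the popularity imbalance might be softened, the sketch below applies Bayesian-average style smoothing, so that resources are ranked on smoothed scores rather than raw feedback volume. This is only one of several possible mitigations, and the prior value and weight used here are assumptions that would need tuning against real usage data.

```python
def smoothed_score(observed_score: float,
                   feedback_count: int,
                   prior_score: float = 0.5,
                   prior_weight: int = 20) -> float:
    """Shrink a resource's observed score toward a neutral prior when it has
    little feedback, so sparsely reviewed resources get a principled score
    and ranking does not simply reward whichever resources collect the most
    responses.

    prior_score and prior_weight are placeholder assumptions.
    """
    return (prior_weight * prior_score + feedback_count * observed_score) / (
        prior_weight + feedback_count
    )


# A heavily used resource keeps roughly its observed score...
print(round(smoothed_score(0.9, feedback_count=500), 2))  # 0.88
# ...while a lesser-known one with a few strong reviews is surfaced cautiously
# rather than being judged on a handful of noisy responses.
print(round(smoothed_score(0.9, feedback_count=5), 2))    # 0.58
```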
As the conversation deepened in the race against the sprint clock, we realized that, unfortunately, there just wasn’t enough time to discuss and record everything that future teams would need to consider to make this sort of system feasible. But, working together like this enabled us to improve our shared understanding of these issues from many important angles, and created a collaborative energy and spirit which carried us through the moments of uncertainty.
I look forward to seeing some of this work also feed into the hackathon event organized by Learning Equality, UNHCR, Vodafone Foundation, Google.org, and UNESCO on October 16–18 in San Francisco, California. And if you have experience with machine learning, recommender systems, or natural language processing and are interested in participating in these efforts beyond this hackathon, please email us at design2align@learningequality.org. I also invite you to follow the hashtag #design2align for links to demos and datasets that will be produced during the event.
In the next chapter of the Design2Align blog series, we will release a public report with all the findings from this two-day sprint, shedding light on the work we can all do to ensure that the benefits of AI are able to reach learners in crisis and emergency contexts. Stay tuned!