From tacit knowledge to AI: Michal Hudecek on scaling human expertise
Learn how EdTech company Levebee is bridging research and real classrooms
The Global EdTech Leaders series invites leading experts, practitioners, and collaborators in EdTech to share their insights and vision for the future of educational innovations. Through this series, we hope to promote greater sharing of ideas and knowledge and facilitate important conversations within the EdTech community.
For today’s article, I connected with Michal Hudecek, co-founder and CEO of Levebee, an online educational application that creates personalized and evidence-based learning for K-5 students. Since 2014, Levebee has helped more than 500,000 children acquire essential skills such as reading, writing, counting, and foreign languages. In this interview, Michal shares his insights into pedagogical research, Levebee’s frameworks and methodologies, and the value of collaboration between researchers and developers. He also delves into the nascent trend of transferring tacit expertise and knowledge into explicit guidance for AI, reflected in Levebee’s efforts to understand and program teachers’ intent into models that generate feedback across diverse learning exercises.
Hi Michal, thank you for joining us today. Can you start by sharing how you became interested in EdTech, and how Levebee came into existence?
I have a background that combines technology and business. But if I had to choose another career, I’d be a science teacher. I’ve always enjoyed figuring out how to explain things clearly to others. EdTech combines a bit of science, a bit of technology, and a lot of explanation, so it drew me in. Although I’ve been involved in many digital startups over the past 20 years, I gravitated toward EdTech. Unlike in other fields, I find even the problems adjacent to EdTech interesting, and I never run out of energy to solve them.
Levebee was a natural continuation of that journey. Together with my co-founders, Michal Zwinger and Dr. Renata Wolfova, we set out to address learning gaps at the very start of schooling, when intervention is most effective.
In your interview with the EdTech Garage, you mentioned the value of incorporating research on math pedagogy into relevant product designs. Could you expand on that and why that is important?
Historically, this research has moved toward increasing granularity. Early studies focused on broad connections between major concepts, such as addition and subtraction versus multiplication and division. Over time, it became clear that the dependencies are much more complex. They form a web of interrelated concepts rather than a strict hierarchy.
Current research in early math now explores more specific topics like one-to-one correspondence or comparison. However, even these are still too broad to effectively bridge the assessment–practice gap. Teachers need highly actionable recommendations on how to help children, especially now, when ongoing teacher shortages mean that many teachers have no formal background in math education. Ideally, we need to reach the level of the smallest possible learning steps, each clearly mapped to specific activities that address them. This "last mile" problem is where we see the greatest need for further research.
EdTech offers a unique opportunity to support this kind of research. During our Math Without Barriers project, we persuaded over 1,000 schools to administer our diagnostic assessment to 25,000 incoming first graders. Sample sizes of this scale, combined with detailed logs of student interactions, are otherwise extremely difficult and expensive to obtain through traditional research methods. This is why I believe EdTech creators should work much more closely with researchers, to give back to the community whose previous work forms the foundation of their tools and to move the whole field forward.
Levebee’s Evidence of Impact offers a look into the educational frameworks foundational to the product’s design. What role does evidence or research play at Levebee? Is there a framework that you find especially important in the product’s design? If so, why?
Too often, EdTech tools are simply collections of disconnected digital versions of paper-based activities. Even when developers attempt to link them, the result often contains major gaps. A student may master one activity, only to find that the next one assumes knowledge or skills they haven’t developed, leading them into a dead end.
This is why we find the concept of learning progression models so valuable. These models define the pedagogical dependencies between activities and allow for continuous validation and improvement. Making the learning progression model publicly available has another benefit. It helps teachers understand what the app is trying to teach at any given moment. This enables them to step in when needed and also improves their general ability to teach math.
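To make this concrete, here is a minimal sketch in Python of a learning progression model as a dependency graph, with each micro-skill mapped to the activities that practice it. The skills, activities, and prerequisite links below are invented for illustration and are not Levebee’s actual model.

```python
# Illustrative learning progression model: a dependency graph over
# micro-skills, plus a mapping from skills to practice activities.
# All names and links here are invented for this example.

# Each micro-skill lists the micro-skills it depends on.
PREREQUISITES: dict[str, set[str]] = {
    "count_to_10": set(),
    "one_to_one_correspondence": {"count_to_10"},
    "compare_quantities": {"one_to_one_correspondence"},
    "add_within_10": {"count_to_10", "compare_quantities"},
}

# Each micro-skill maps to the activities that address it.
ACTIVITIES: dict[str, list[str]] = {
    "compare_quantities": ["more_or_fewer_pictures", "match_the_sets"],
    "add_within_10": ["combine_two_groups"],
}

def ready_for(skill: str, mastered: set[str]) -> bool:
    """A student is ready for a skill once all its prerequisites are mastered."""
    return PREREQUISITES[skill] <= mastered

def recommend(mastered: set[str]) -> list[str]:
    """Recommend activities only for skills the student is ready to learn next."""
    return [
        activity
        for skill, activities in ACTIVITIES.items()
        if skill not in mastered and ready_for(skill, mastered)
        for activity in activities
    ]

print(recommend({"count_to_10", "one_to_one_correspondence"}))
# -> ['more_or_fewer_pictures', 'match_the_sets']
```

Because an activity is only recommended once its prerequisites are mastered, a graph like this also makes the dead ends described above detectable: any activity that assumes a skill no prior activity teaches shows up as a gap in the progression.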
It also addresses a key flaw in many AI-driven learning systems: explainability. Too often, these tools expect teachers to place students in front of a screen and trust the algorithm, while still holding them accountable for learning outcomes. It’s like handing someone a self-driving car with no steering wheel and making them responsible for any accidents.
Beyond the learning progression model, we measure standard psychometric properties and track a range of internal KPIs to better understand what works and what doesn’t.
How does Levebee gather feedback from users?
We try to encourage more feedback by replying to every message and even offering free licenses to users who report a bug. Still, as with any product, only a small percentage of users actually provide feedback. And when they do, it's often difficult for them to explain exactly where they were in the app and what went wrong. That is why it is important to detect as much as possible automatically, including both technical and pedagogical issues such as the gaps in the learning progression model I mentioned earlier. Automatic detection was especially critical in the early days, when our user base was much smaller.
Now, with a large number of users, we are able to systematically collect more qualitative feedback. This comes through our email and phone support channels, as well as through the "Report a mistake" button available in every exercise. That button has become our most important source of feedback, both in terms of quantity and quality. We know exactly which task was shown on the screen and which type of device was used. We can then recreate the scenario using a cloud-based emulation service to debug it. All feedback, regardless of the source, is fed into our project management system, which helps us identify patterns and recurring issues.
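As a hypothetical illustration of the kind of context such a report can carry, here is a sketch of what a "Report a mistake" payload might look like. The field names and schema are assumptions for this example, not Levebee’s actual implementation.

```python
# Hypothetical "Report a mistake" payload. The fields are chosen so a
# randomized task can be regenerated and the scenario recreated on the
# same device type in a cloud-based emulator; all names are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class MistakeReport:
    exercise_type: str   # which generated exercise family was on screen
    task_seed: int       # seed to regenerate the exact randomized task
    device_model: str    # needed to recreate the scenario in an emulator
    os_version: str
    app_version: str
    user_comment: str
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

report = MistakeReport(
    exercise_type="compare_quantities",
    task_seed=184_203,
    device_model="iPad (9th generation)",
    os_version="iPadOS 17.4",
    app_version="5.2.1",
    user_comment="The blue box would not accept more pictures.",
)
print(json.dumps(asdict(report), indent=2))
```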
That said, many of the features in the app do not come from direct user reports, but from broader problems we observe among teachers and parents. These are often issues they experience but cannot clearly articulate. We closely monitor what users of other EdTech products are complaining about, and we have also partnered with university researchers to conduct qualitative studies with our own users. This exploratory research has helped us set general priorities and decide which feedback is worth implementing. One example is our mini diagnostic assessments, designed to check whether students are ready to learn a new math concept. That feature came directly from this kind of broader, exploratory research.
What are some key challenges and opportunities that you see in the field of EdTech?
You touched on a topic that is currently top of mind for me. I am writing an article about hard problems in EdTech. Many of these challenges are the same as those in education more broadly, such as understanding when engagement does not lead to learning. Others involve skills that experienced teachers can manage well, but that technology, even with recent AI advances, still struggles with. For example, detecting cognitive overload or the emotional state of a student. Some problems are ethical, rooted in the conflicting incentives between EdTech companies and the students they serve.
When it comes to AI, the current state does not yet deliver the revolution it is often hyped for. What has changed, however, is our ability to program the intent of the teacher instead of hardcoding every possible response. Our platform includes thousands of types of exercises that are computer-generated and slightly randomized. Until recently, it was impossible to program feedback for all the variations that might appear. Now, we are about to launch a new feature that provides AI-based feedback across all these exercises.
We did not simply tell the AI to give feedback. Instead, we were able to encode what a teacher would generally try to achieve in a given situation. For example, we can instruct the AI to identify and repeat the key information from a task, regardless of how the instruction is phrased. If a task says, "Move the pictures so that the blue box has more pictures than the red box," the app can respond, "Blue has more than red." The difficult part is defining what the intent should be and when. Skilled math interventionists do this instinctively, without conscious effort.
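As a rough sketch of what programming intent rather than hardcoding responses can look like, here is a minimal Python example that states the teacher’s intent once and composes a prompt for any randomized variant of the task. The intent wording and the chat-message format are illustrative assumptions, not Levebee’s production prompts.

```python
# Encode the teacher's intent once; it applies to every randomized
# variant of the exercise family, rather than one hardcoded response
# per variant. The wording below is an illustrative assumption.
REPEAT_KEY_INFO_INTENT = (
    "You are giving feedback to a young student. Identify the key "
    "comparison in the task instruction and restate it in the shortest "
    "possible form, regardless of how the instruction is phrased."
)

def build_feedback_messages(task_instruction: str) -> list[dict[str, str]]:
    """Compose a chat-style prompt that any LLM chat API can consume."""
    return [
        {"role": "system", "content": REPEAT_KEY_INFO_INTENT},
        {"role": "user", "content": f"Task instruction: {task_instruction}"},
    ]

messages = build_feedback_messages(
    "Move the pictures so that the blue box has more pictures than the red box."
)
# A well-aligned model would answer with something like: "Blue has more than red."
```

Because the intent is attached to the exercise family rather than to any single variant, the same instruction covers every phrasing the task generator produces.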
To better understand this process, we conducted our own research. We recorded many one-on-one sessions and afterwards let the teachers narrate them to learn why they responded in a certain way. This speaks to a broader limitation of current generative AI models. They have been trained on knowledge that exists in written form, but much of human expertise is not written down. It lives in the minds of professionals who gained it through years of practice.
That is why I value working with our co-founder, Dr. Renata Wolfova, who brings over 30 years of experience in individual math interventions. She helps us turn this kind of tacit knowledge into explicit guidance that we can embed in the app.
Could you share more about your collaboration with the International Centre for EdTech Impact, and any projects that you’ve collaborated on?
Although Levebee is used by over 1,000 schools and 500,000 students, our team remains small, with just six full-time members. I am fortunate to work with brilliant colleagues who bring deep expertise in pedagogy, software development, design, and writing, ensuring the app is understandable and effective for both students and teachers. However, one area where we lack expertise is in the application of statistical methods. The International Centre for EdTech Impact helped us design a methodology to statistically verify a subset of the dependencies in our learning progression model. We are now collecting data from schools for future analysis. I was impressed by how they managed to meet our tight deadline without compromising quality, which was essential for launching data collection before the end of the school year. Otherwise, the entire project would have been delayed by another six months. Now we will be able to present the results of the study to schools right at the beginning of the new school year.
Closing reflections: It was a pleasure to connect with Michal and learn more about Levebee’s important work, grounded in educational frameworks and field expertise and built on foundational concepts such as learning progression and explainability. I enjoyed learning about the promising development of programming teachers’ intent into AI tools, and the nuances of turning tacit human expertise into explicit guidance that serves users more effectively. I look forward to keeping up with Levebee’s work in making AI-driven tools engaging and accessible for teachers and students. Thank you to Michal for generously sharing his time and expertise for this interview.
Do you know an EdTech leader we should invite next for this series? Send their name to info@foreduimpact.org!