MOOCs promised to democratize education. And to a large extent they did: over 220 million people have enrolled in at least one massive open online course. But completion rates hovered around 13% and never rose. What is emerging now is qualitatively different, and it poses a problem that goes beyond technology.
I. Why This Is Not a MOOC on Steroids
The failure of MOOCs as a substitute for in-person teaching has a precise cause: they were self-guided, almost self-taught. Recorded video, quiz, forum. The format scaled access but sacrificed the two things that make a teacher matter: personalization and interaction. MOOCs worked as a complement, not a substitute.
[Figure: MOOC completion rates (MiríadaX) and the production cost of a MOOC versus an AI tutor session, alongside Bloom's 1984 tutoring effect]
Completion rates come from aggregated MiríadaX data in Spain (2013-2014, 58-121 courses) and are consistent with global medians of ~12.6%. Production cost corresponds to estimates from The Open University. Bloom's figure refers to human 1:1 tutoring; later reviews moderate the effect, but confirm that computational tutors reach an efficacy close to that of a human tutor — which, at scale, is transformative.
In 1984, Benjamin Bloom formulated the "2-sigma problem": individual tutoring shifts the average student's performance above the 98th percentile of the conventional classroom group. The problem, then, was that 1:1 tutoring doesn't scale. Forty years later, the tools that appeared this week attack exactly that bottleneck.
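Bloom's label comes straight from the normal distribution: if tutoring shifts the mean score by two standard deviations, the average tutored student lands at the value of the standard normal CDF at 2, roughly the 98th percentile of the conventional classroom. A quick check, assuming normally distributed scores:

```latex
% Effect size d = 2\sigma on a normally distributed score:
% the average tutored student sits at z = 2 relative to the control group.
\Phi(2) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{2} e^{-t^{2}/2}\, dt \approx 0.9772
```

That is, the average tutored student outperforms about 98% of the conventionally taught group, which is the figure Bloom reports.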
MOOCs failed as a substitute because they were self-guided. AI tutors have a real chance of working because they are personalized. Universities will end up adopting them: they are too cheap and too effective not to.
II. What I Tested This Week
On March 16, Pep Martorell published an analysis in his newsletter De MOOC a MAIC of a project just released as open source by Tsinghua University: OpenMAIC, a platform that generates a complete classroom — AI teacher, virtual classmates, whiteboard, quizzes, debates — from a topic or a PDF. That same day I discovered Wondering, created by Cheng-Wei Hu, a former NotebookLM engineer at Google, which presents itself as "Duolingo for anything." I decided to test both.
I uploaded half of Foucault's The Order of Things to Wondering. In fifteen minutes I had a structured walkthrough of the concept of episteme that genuinely helped me recall what I had read years ago. The experience is fluid, visual, efficient. I tested OpenMAIC with a topic on AI and philosophy: it generated slides, an interactive simulation of the Turing test, and a debate dynamic between agents.
But upon examining the content closely, the diagnosis was asymmetric: for the student, these tools are tremendous. For the teacher, they are nearly useless: pedagogical personalization is limited and source verification is practically nonexistent. In Tsinghua's own study, the worst-rated dimension (3.51/5) was "Understanding the student" — personalization is more promised than delivered. But the trend is clear: this improves with each iteration.
| | OpenMAIC | Wondering | NotebookLM |
|---|---|---|---|
| Model | Multi-agent classroom | Visual micro-lessons | Notebook with sources |
| Origin | Tsinghua (AGPL) | Ex-Google (startup) | Google Labs |
| Personalization | Partial | By level and goal | By own sources |
| Citation verification | Absent | Absent | Navigable citations |
| For the student | Excellent | Excellent | Good |
| For the teacher | Limited | Limited | Partial |
All three converge on the same point: any topic can become a personalized, visual, and nearly free learning experience. Tsinghua's own paper quantifies it: producing a MOOC costs ~€25,000 and over 100 hours of work. Generating a MAIC session costs less than $2 and 30 minutes. The asymmetry is brutal.
III. The First Response (Necessary, but Insufficient)
Faced with this landscape, the analysts' reaction has been predictable and legitimate:
Both are right about the first layer of the problem. But they stop there. They say the university must "compete on experience" and "not copy the apps." Correct. But they don't answer the question that comes next.
IV. The Right Question
If these tools do the work of knowledge transmission — explaining, exemplifying, evaluating, personalizing pace — the question is not how to compete with them. The question is: what does the university do with the time it has left?
The answer is not to become an "app-university." That is the trap. If the university responds to AI tutors by simply adopting them, it becomes just another platform — and loses the only thing that distinguishes it.
If transmission is automated, the university can dedicate the freed time to what no AI tutor can do: teach how to think with what has been learned. Train critical thinkers and proto-researchers from the first year. Because if learning becomes trivially easy, what has value is knowing what to do with what has been learned.
Data on the distribution of teaching time in Spain allows this idea to be quantified. According to the Cabero-Epifanio survey (2019, N=580 full-time faculty), a Spanish university professor dedicates approximately 41% of academic time to direct transmission (classroom + preparation + assessment), 18% to tutoring and supervision, and 27% to research. If AI tutors absorb most of the transmission, that redistribution frees real time for what matters.
Current distribution data: full-time faculty survey in Spain (Cabero-Epifanio, 2021; 2019 data; N=580). Total: ~49 h/week. The "With AI tutors" column is the author's projection, not a measurement.
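The redistribution can be sketched as simple arithmetic on the survey shares and the ~49 h/week total. The split under AI tutors below is purely illustrative of the author's projection (here assuming tutors absorb two thirds of transmission time, with the remaining 14% of the week treated as other duties); none of the scenario numbers are measured data.

```python
# Back-of-the-envelope redistribution of a ~49 h/week academic workload.
# Shares from the Cabero-Epifanio survey (2021; 2019 data, N=580 full-time faculty).
# The "projected" scenario is an illustrative assumption, not a measurement.

TOTAL_HOURS = 49.0

# Survey shares; "other" (admin etc.) is inferred so the shares sum to 1.
current = {"transmission": 0.41, "tutoring": 0.18, "research": 0.27, "other": 0.14}

def hours(shares, total=TOTAL_HOURS):
    """Convert fractional shares of the work week into weekly hours."""
    return {task: round(share * total, 1) for task, share in shares.items()}

print(hours(current))  # transmission ~20.1 h, tutoring ~8.8 h, research ~13.2 h

# Assumed scenario: AI tutors absorb 2/3 of transmission time and the freed
# hours split evenly between supervision and research (author's projection).
freed = current["transmission"] * 2 / 3
projected = {
    "transmission": current["transmission"] - freed,
    "tutoring": current["tutoring"] + freed / 2,
    "research": current["research"] + freed / 2,
    "other": current["other"],
}
print(hours(projected))  # transmission ~6.7 h, research ~19.9 h
```

Under these assumptions, roughly 13 hours a week move from lecturing and grading toward supervision and research, which is the order of magnitude the argument needs.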
If a student can learn the fundamentals of Foucauldian epistemology in an afternoon with Wondering, the in-person seminar doesn't have to teach what an episteme is. It has to teach what to do with that concept: how to apply it to a new problem, how to question its limits, how to connect it with traditions the AI tutor doesn't know. In other words: research.
Every course can become a laboratory where students, equipped with AI tutors for the foundations, work on real problems under faculty supervision. A semester that today has three months of lectures can have one month of assisted explanation and two of guided research. Not "learning about" but "thinking with." That would be a real paradigm shift.
V. The Missing Layers
For this vision to work, there are layers that do not yet exist. Enthusiasm for AI tutors obscures the gaps in infrastructure:
OpenMAIC, Wondering, NotebookLM and dozens more already solve this. Any topic becomes an adaptive experience for less than $2. In the Tsinghua pilot, 61% of recorded behaviors were active student questions. This is not passive consumption: it is real tutoring. Universities cannot compete here — and should not try.
AI-generated quizzes and exercises already work. What is missing is assessment of complex thinking: argumentation, critical synthesis, cross-domain connection. No current system properly evaluates a philosophical essay or detects research originality.
The AI agent teaches with a teacher's confidence, but in humanities it hallucinates citations fluently. Tsinghua's own paper acknowledges it: generated results must be reviewed by instructors. Generic anti-hallucination techniques (Graph-RAG, neurosymbolic guardrails) solve structured data, not textual verification against humanities corpora. NotebookLM offers navigable citations; OpenMAIC and Wondering, nothing equivalent.
The layer the university should be building. This is not about tools but about curricular redesign: every course as a space for guided research. Turning students into proto-researchers and critical thinkers from the first year. This requires rethinking the teaching role from scratch — and no technology provider can do it for the institution.
Who certifies what is learned with an AI tutor? How is autonomous learning integrated with institutional assessment? In Spain, only 33% of universities report applying educational AI (Crue, 2023), though 52% of those that don't plan to do so within two years. Adoption is outpacing policy formalization — and that is a risk.
VI. What the University Cannot Be (and What It Can)
The university cannot be an app. That is the most dangerous trap of the moment: responding to AI tutors by becoming a personalized content platform. If it does, it loses the game — because any startup with a good language model and a good interface will do it better and cheaper.
What the university can — and must — do is use the time freed by these tools for what none of them can offer: building the capacity to think critically with what has been learned. Turning students into proto-researchers. Redesigning every course as a laboratory where, equipped with AI tutors for the fundamentals, students work on real problems under faculty supervision.
MOOCs democratized access to content. AI tutors are democratizing the learning experience. What has not yet been democratized is the capacity to think critically with what has been learned. That is what counts. That is what the university can — and must — teach.
But to build on top of these tools, the intermediate layers must be resolved: source verification in humanities domains, assessment of complex thinking, and a curricular redesign that assumes explaining is no longer the bottleneck — thinking is.
True educational innovation is not technological. It is the institutional decision to free the professor from the role of transmitter and turn them into a research director. No app does that. A university decides that.
Main sources: Founding MAIC paper (arXiv:2409.03512, accepted at JCST 2026) · Cabero-Epifanio, "Academic Workload in Spanish Universities" (Education Sciences, 2021; N=580 full-time faculty) · Bloom, "The 2 Sigma Problem" (Educational Researcher, 1984) · MOOC completion data: MiríadaX (2013-14), FutureLearn, IRRODL · AI adoption: Crue Digitalización (FOLTE, 2023; 46 universities) · UNESCO (2023, 2025), Media & Learning Association (2023)