I've been reading Brave New Words, a book by Salman Khan, founder and CEO of Khan Academy, on how Generative AI will change the way students learn and teachers teach, leveraging tools like ChatGPT for both. I…would not recommend the book; it was written and published in 2023–2024, which in an era of rapid AI advancement already puts it 2–3 generations behind what GenAI is capable of in the summer of 2025. The book also reads like a thinly veiled advertisement for Khan Academy's own Khanmigo chatbot, frequently sidebarring into speculative use cases and conversations with the tool, material more suited to a VC pitch deck than a printed book[1].
During my time at Kiddom, I helped kickstart some of our initiatives around GenAI, so I'm acutely aware of the concerns within the education space around the technology. The primary anxiety, rightfully so, centered on students cheating their way out of writing assignments and essays. But there were also milder concerns about what it meant for teachers; Brave New Words includes a chapter on whether AI will negatively disrupt the teaching profession and cost teachers jobs and opportunities. In both cases, the optimistic argument is that AI should act not as a drop-in replacement for the work done by teachers and students, but as an augmentative tool for productive, individualized learning.
Both can be true.
We already know that AI is a floor raiser—and therefore less of a ceiling raiser. All the use cases we've found during this AI boom revolve around menial work: reading and summarizing content, creating good-enough images and videos and prose, generating well-formatted code from well-defined structure and specification. The most impacted jobs are the internships, the entry-level positions, and the offshore teams whose output can be replicated with a handful of prompts and a $20 subscription[2].
And in the realm of education, the concerns around students using ChatGPT-generated content instead of their own, or teachers being outtaught by a Generative AI, are valid because they are true. Most students aren't gifted essayists or curious historians, so forcing them into a framework that drives grades based on output will, of course, incentivize the simplest and most obvious solution. On the teacher side, not all teachers are equal in skill and experience and motivation, and they face similar incentives where the AI's output—grading papers, providing non-generic, specific feedback—is much faster, and in some cases genuinely better than their own.
Yet, the utilitarian use cases of AI as a powerful tool and force multiplier are just as valid. Users are opting for AI overviews and summaries over raw search results. Engineering teams are increasingly attributing their code as authored with AI. And the verbiage used by the likes of ChatGPT has become so commonplace that it's influencing our speech patterns[3]. In the hands of those who understand AI's applicability and have shifted their workflows to accommodate it, the efficiency gains are real and profound.
Still, what I notice in various thought pieces about the technology is a focus on one side that intentionally ignores the other. If you're an AI optimist, of course AI is an always-available assistant and tutor and thought partner, equal parts polite and thoughtful, pushing back just enough to get the most out of its user. Pessimists point out the inaccuracies, the hallucinations, and the sloppiness of AI output whose sheer quantity dooms our lives to bland mediocrity. The duality of man is but a precursor to the duality of artificial intelligence.
As I'm writing this, OpenAI announced their foray into the education space with a student-centric Study Mode. ↩︎
Though it continues to raise the question of how we train the next generation of experts if the path of expertise is wholly replaced with GenAI. ↩︎