
AI has quickly become the newest collaborator on many L&D teams. It drafts scripts, rewrites scenarios, summarizes SME discussions, and even generates assessments in minutes. At first glance, it feels like we may have solved one of our toughest challenges: how to create quality learning at speed and scale.
But a deeper question is now emerging across the industry: Are we designing learning, or simply generating content that looks like learning?
Browse LinkedIn, AI forums, or product demos and you will find a growing wave of AI-generated learning assets. Storyboards, scenario scripts, course outlines, and even full modules often appear polished and professional. The structure feels logical, the tone seems instructional, and in some cases, the material even makes it into pilots or live programs with minimal review.
Yet clarity is not the same as cognition, and content is not the same as capability.
This is where we begin to see a common pattern: AI often excels at creating the vibe of instructional quality. It can mimic expert writing, produce coherent flow, and generate learning assets that look ready — even when the underlying cognitive, behavioral, or performance foundations are missing. This technique is now widely referred to as vibe coding.
Understanding vibe coding is becoming essential for anyone responsible for learning outcomes, capability building, or workforce performance.
So the real question becomes: What makes AI-generated content look effective, and what makes it truly work?
Let’s explore that.
The Rise of Vibe Coding in L&D
What is Vibe Coding?
Vibe coding refers to the practice of giving AI a specific identity through prompt design — a tone, persona, reasoning style, or instructional voice — and then using that foundation to generate consistent content across modules, formats, or learning products.
For example, consider a prompt such as: "You are an experienced instructional designer who uses adult learning theory and workplace scenarios to explain concepts clearly and practically."
This prompt creates a stable AI “persona” that informs tone, structure, and narrative flow. As a result, the content feels aligned and instructional. In many cases, it reads like expert work.
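To make this concrete, here is a minimal sketch, in Python and purely illustrative, of how a team might reuse one persona prompt across several modules. The persona text, module topics, and the send_to_model placeholder are assumptions for illustration; the article is not tied to any particular model or tool.

```python
# Illustrative sketch: one "vibe-coded" persona prompt reused across modules.
# The persona text, module topics, and send_to_model() placeholder are all
# hypothetical; swap in whatever model or authoring tool your team actually uses.

PERSONA = (
    "You are an experienced instructional designer who uses adult learning "
    "theory and workplace scenarios to explain concepts clearly and practically."
)

MODULE_TOPICS = [
    "Giving constructive feedback to a direct report",
    "Running an effective weekly team stand-up",
]

def build_prompt(persona: str, topic: str) -> str:
    """Combine the stable persona with a module-specific request."""
    return (
        f"{persona}\n\n"
        f"Draft a short e-learning module on: {topic}.\n"
        "Include an introduction, two workplace examples, and a summary."
    )

def send_to_model(prompt: str) -> str:
    """Placeholder for a real model call (a hosted LLM, a local model, etc.)."""
    return f"[model output for a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    for topic in MODULE_TOPICS:
        print(send_to_model(build_prompt(PERSONA, topic)))
```

Nothing more than this kind of prompt composition is needed to get a consistent instructional voice, which is exactly why the output can look ready so quickly.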
And yet, this is exactly where teams can be misled.
Vibe coding is powerful for branding, consistency, and content drafting. But it does not guarantee instructional integrity, performance alignment, cognitive scaffolding, or business impact. That is where learning design becomes essential.
The Movie Set Syndrome in AI-Generated Learning
A vibe-coded course tends to look complete on the surface. You’ll see:
Friendly tone: AI mimics warm, polished writing. It can sound empathetic and instructional even when the underlying concepts are shaky.
Smooth storytelling: Narratives transition well, examples flow, and the module “reads” like it was written by a practiced instructional designer.
Clean visual layout: Authoring tools + AI templates = polished screens by default. It looks professional even if the logic isn’t.
Attractive structure: Introductions, body content, summaries — all laid out neatly. It resembles a complete course.
In short, the course looks like something an experienced learning designer created. But inside the walls of that polished exterior, there are often serious instructional gaps.
This is where the Movie Set Syndrome emerges. Hollywood film sets look like real cities. The buildings, windows, bricks, and streets appear authentic, but behind the facade you find empty frames held up by wooden beams.
Many AI-generated courses work the same way. They look beautiful but lack the structural integrity required for real learning. The gaps inside these courses are not small. They affect learning outcomes in significant ways.
Here’s what’s missing behind the walls:
No measurable learning objectives: AI generates objectives that sound right but don't anchor the learning. Research from Robert Mager – Preparing Instructional Objectives, Merrill – First Principles of Instruction, and Wiggins & McTighe – Understanding by Design shows that unclear objectives produce vague, unfocused instruction. Without measurable objectives, practice becomes guesswork, assessments lose purpose, and learners can't know what success looks like.
No performance alignment: AI describes knowledge, not behavior. It doesn't naturally map content to workplace tasks, contradicting what performance consulting research (Thomas Gilbert – Human Competence, Rummler & Brache – Improving Performance) has emphasized for decades. The result: courses that inform but don't transform.
Missing cognitive scaffolds: Humans learn through scaffolding and structured support, but AI imitates patterns; it doesn't manage cognitive load. This breaks principles from Cognitive Load Theory (John Sweller), Dual Coding Theory (Allan Paivio), and the Worked Examples Effect (Clark, Nguyen & Sweller), leading to content that is too dense or too basic, too linear, and too disconnected.
Weak assessment logic: AI can produce questions, but not instructional measurement. It struggles with alignment to objectives, varied difficulty, diagnosis of misconceptions, and application-level practice. This contradicts decades of research on assessment quality (Bloom – Mastery Learning, Popham – Classroom Assessment, Roediger & Karpicke – Testing Effect).
No retrieval practice: One of the most robust findings in learning science (Agarwal, Roediger & McDaniel – Retrieval Practice Research) rarely shows up automatically in AI-generated content. Without retrieval, learning decays, retention drops, and application suffers. AI doesn't include retrieval practice unless it is explicitly asked to.
No real-world application: AI outputs often remain theoretical. It doesn't naturally produce rich, workplace-specific scenarios unless heavily guided. This violates Merrill's First Principles, which emphasize demonstration, application, and integration. The result: learning that doesn't transfer.
When all these gaps are combined, the course becomes the instructional equivalent of a film set. Beautiful, coherent, and impressive at first glance, but empty behind the facade.
Why Skilled Learning Professionals Can Vibe-Code and How Others Can Get There
Seasoned instructional designers and learning engineers can work effectively with AI-generated content because they have already internalized the deeper layers of learning science: cognitive load, sequencing, assessment logic, real-world application, performance consulting, and scenario-based design. Their expertise allows them to quickly refine AI output, spot instructional gaps, and transform ideas into experiences that drive behavior and performance. In their hands, AI becomes a strategic accelerator, not just a content generator.
Newer practitioners, SMEs, or teams without deep instructional backgrounds may not recognize these gaps. The clean tone and professional structure of vibe-coded content can be misleading, creating a false sense of readiness. These teams need guidance in how to use AI well. With the right prompts, frameworks, and review models, AI becomes a learning partner that actually helps build instructional judgment over time.
AI does not eliminate the need for instructional expertise. It makes the need for instructional expertise clearer and more urgent. The opportunity for L&D is not to avoid AI but to develop the capability to orchestrate it responsibly and strategically.
AI may help create learning content. Instructional expertise is what transforms it into learning that works.
What This Means for L&D Teams in the Age of AI
As we move deeper into the AI-driven era, one thing is becoming unmistakably clear: AI isn't the threat. Misusing AI is. The danger isn't that machines will replace L&D professionals, but that teams may adopt AI in ways that bypass learning science, dilute instructional quality, or prioritize speed over rigor. When AI becomes a shortcut rather than a strategic partner, we end up with content that appears polished yet lacks the cognitive, behavioral, and performance depth that real learning requires.
L&D is no longer just about developing courses. We are evolving into experience architects — designing learning ecosystems that support development within the workflow. This perspective is aligned with modern workplace learning research from the 70:20:10 Institute and Jane Hart’s work on learning in the flow of work, which highlights the need for learning environments rather than standalone interventions.
We are also stepping more firmly into the identity of performance consultants, a role championed by Thomas Gilbert in Human Competence and strengthened by Rummler & Brache in Improving Performance. Their research consistently shows that performance problems often stem from environmental, process, or system gaps—not from people lacking knowledge. AI cannot diagnose these root causes; L&D can. And this is exactly where our relevance grows.
This is the real takeaway for L&D teams in the age of AI: AI accelerates creation, but only L&D ensures impact.
AI can generate content at remarkable speed, but it does not yet apply cognitive load theory, design for transfer, sequence concepts for mastery, or align capability development with business performance in a deliberate or evidence-based way. These responsibilities still require human judgment, learning science expertise, and strategic context. This is where L&D remains irreplaceable.
So, in this new era, our role is not to compete with AI or fear its capabilities. Our role is to elevate the work only we can do: designing meaningful experiences, engineering learning systems, diagnosing performance barriers, ensuring ethical and effective AI use, and guiding organizations toward capability—not just content.
This isn’t a threat to L&D. It’s our biggest moment of reinvention.
The Question Every Learning Leader Should Ask
The next time someone shares an AI-generated course and says it is ready, pause for a moment. Not to challenge the work, but to elevate the conversation. Look beyond the polish and ask a question that matters for learning and impact:
Is this content simply well-crafted, or is it designed to drive real capability and performance?
The question is not about what AI can do. It is about what L&D can make possible with it.
Aesthetic quality may capture attention. Learning design and performance outcomes are what create value. That is where our contribution matters most. That is where L&D leads the way.
—RK Prasad (@RKPrasad)



