
One of the more revealing realities emerging from enterprise AI adoption is that AI rarely remains confined to the function that first introduces it. A learning team may begin experimenting with AI to accelerate content creation, improve simulations, support personalization, or streamline workflows. Yet very quickly, the conversation expands far beyond learning itself. Questions begin to surface around data privacy, security, infrastructure, governance, compliance, accountability, and workforce implications. At that point, AI adoption stops being a functional initiative and becomes an organizational one.
Across the organizations we have been studying, this transition is becoming increasingly visible. Learning teams are finding themselves working far more closely with IT, legal, security, HR, procurement, and data science functions — not because collaboration is desirable in principle, but because AI systems cut across organizational boundaries in ways traditional learning technologies often did not. What initially appears to be a learning innovation gradually exposes a much broader coordination challenge inside the enterprise.
This article reflects on why AI adoption in workplace learning increasingly requires cross-department collaboration. It argues that successful AI adoption is not simply about choosing the right tools or enabling experimentation. It depends on whether organizations can build the governance structures, decision-making processes, and cross-functional relationships necessary to support AI responsibly and sustainably at scale.
This article draws on ongoing research by CommLab India in collaboration with Lancaster University, exploring how AI is shaping workplace learning across large organizations. The observations here are based on anonymized conversations with learning leaders navigating these changes in practice.
AI Changes the Scope of Learning Decisions
For many years, learning technology decisions could largely remain within the operational boundaries of the L&D function. A learning management system was selected. An authoring platform was introduced. A content library was implemented. While these decisions often involved procurement processes or occasional IT support, the learning function typically retained primary ownership of the initiative. The systems themselves were relatively contained, and their impact, while important, generally remained within the domain of learning delivery and administration.
AI changes that dynamic fundamentally.
According to McKinsey's State of AI 2025 report, 78% of organizations now use AI in at least one business function — up from 72% in early 2024 — and 71% regularly deploy generative AI across marketing, product development, service operations, and IT. This is no longer experimentation. It is organizational infrastructure. And as AI embeds itself across enterprise systems, the nature of learning decisions changes with it.
AI systems do not simply distribute content or automate workflows. They interact with enterprise data, generate outputs, influence decision-making, and increasingly become embedded within broader organizational systems and operational processes. The moment this happens, the conversation expands well beyond what L&D can manage alone.
Questions begin to emerge that the learning function cannot answer in isolation:
Where is organizational data being stored?
Can sensitive information safely enter the system?
How are AI-generated outputs validated?
Who becomes accountable when outputs are inaccurate or biased?
What governance structures are required?
How should AI systems integrate with existing enterprise infrastructure?
What regulatory or compliance implications need to be considered?
At that point, AI adoption stops being a learning initiative in the traditional sense. It becomes a cross-functional organizational issue that requires coordination across multiple domains of expertise.

The Shift: From Functional Ownership to Shared Responsibility
One of the clearest patterns emerging across the organizations we have been studying is that AI adoption tends to redistribute responsibility across functions rather than concentrate it within a single department. This pattern is well-supported by broader enterprise research.
McKinsey finds that the organizations seeing the most value from AI are those creating cross-functional teams to support AI deployment and redesigning core business processes to include AI decision-making. Yet only around one-third of organizations report scaling AI across the enterprise. A significant driver of this "pilot-to-scale gap" is precisely what many L&D teams are experiencing: functional silos, unclear ownership, and misaligned incentives that prevent cross-organizational AI deployment.
A 2025 enterprise AI adoption study by Writer and Workplace Intelligence, surveying 1,600 knowledge workers including 800 C-suite executives, found that 42% of executives say the process of adopting generative AI is "tearing their company apart" — and a key driver is lack of cross-department involvement and coordination. When organizations neglect cross-functional collaboration during AI adoption, fragmentation, power struggles, and inconsistent governance are the predictable result.
AI systems simultaneously touch multiple organizational domains:
Technology infrastructure
Security and risk management
Legal oversight
Workforce capability
Data governance
Operational workflows
Organizational policy
No individual department fully owns all of these dimensions. As a result, organizations increasingly find themselves needing shared governance models and collaborative decision-making structures to manage AI adoption effectively.
| Traditional Learning Technology Decisions | AI-Enabled Learning Decisions |
| --- | --- |
| Primarily L&D-led | Cross-functional ownership |
| Focused on content delivery | Involves governance, data, and oversight |
| Limited operational implications | Broad enterprise impact |
| Technology implementation | Organizational capability issue |
| Tool-centric | Ecosystem-centric |
Why AI Expands Beyond the Learning Function

Why IT Becomes Central to AI Adoption
One of the earliest and most visible collaborations learning teams encounter during AI adoption is with IT. This is not simply because AI tools are technical systems. It is because AI increasingly operates within broader enterprise environments that require infrastructure coordination, governance oversight, and operational integration.
Forrester research underscores the scale of this shift: 91% of global technology decision-makers plan to increase IT spending in the near term, with over half expecting growth to surpass 5% — driven substantially by AI initiatives.
As organizations move beyond isolated experimentation, IT becomes central to questions such as:
Infrastructure compatibility and integration with enterprise systems
Access management and permissions
Scalability across the organization
Enterprise platform governance
Secure experimentation environments
In several organizations we studied, AI adoption only accelerated once enterprise-approved environments became available. Public-facing AI tools created too much uncertainty around data exposure, intellectual property, and governance risk. Secure internal environments, enterprise copilots, or controlled AI platforms created the conditions necessary for broader experimentation.
This significantly changed the role of IT. Rather than functioning purely as a gatekeeper or approval authority, IT increasingly became an enabler responsible for helping create the technical conditions under which AI experimentation could occur safely and sustainably. This aligns with Deloitte's finding that enterprises where senior leadership actively shapes AI governance — including through IT — achieve significantly greater business value than those delegating this work to technical teams alone.
What learning teams increasingly need from IT:
Secure AI environments and infrastructure
Enterprise integration pathways
Access control and identity management
Scalable operational architecture
Platform governance support
Without these foundations, AI experimentation often remains fragmented, informal, or difficult to scale responsibly.
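To make the IT foundations above concrete, here is a minimal, illustrative sketch of the kind of pre-flight check an IT-managed AI gateway might apply before a prompt reaches a model: approved environments, role-based access, and a basic sensitive-data screen. The tool names, roles, and patterns are hypothetical examples, not a description of any specific enterprise platform.

```python
# Illustrative sketch: a pre-flight check an IT-managed AI gateway might run.
# All tool names, roles, and patterns below are hypothetical.
import re

APPROVED_TOOLS = {"enterprise-copilot", "internal-llm"}    # enterprise-approved environments
ROLE_ACCESS = {                                            # access control per role
    "ld_designer": {"enterprise-copilot"},
    "data_scientist": {"enterprise-copilot", "internal-llm"},
}
SENSITIVE_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. SSN-like strings

def preflight(role: str, tool: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI interaction."""
    if tool not in APPROVED_TOOLS:
        return False, f"{tool} is not an enterprise-approved environment"
    if tool not in ROLE_ACCESS.get(role, set()):
        return False, f"role {role!r} lacks access to {tool}"
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return False, "prompt appears to contain sensitive data"
    return True, "ok"
```

A check like this is deliberately simple; real deployments layer identity management, data-loss prevention, and audit logging on top, but the shape of the gatekeeping logic is the same.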

Why Legal and Security Teams Become Part of Learning Decisions
Another major shift is the growing involvement of legal, compliance, and security functions in learning-related AI initiatives. This reflects the reality that AI introduces new forms of organizational uncertainty that traditional learning systems rarely created at the same scale.
The regulatory data here is striking. A May–June 2025 Gartner survey of 360 IT leaders involved in generative AI rollouts found that over 70% identified regulatory compliance as one of their top three challenges — yet only 23% were confident in their organization's ability to manage security and governance components when deploying GenAI tools. Gartner also predicts that by 2030, fragmented AI regulation will quadruple and extend to cover 75% of the world's economies — driving $1 billion in total compliance spend.
These are not distant risks. They are active organizational pressures that learning teams are beginning to feel in real time as their AI initiatives intersect with confidential data, regulated processes, and externally hosted platforms.
Questions related to intellectual property, data handling, privacy, regulatory compliance, auditability, and accountability for AI-generated outputs become significantly more important once AI enters enterprise workflows. In several organizations, AI experimentation initially slowed because governance structures capable of evaluating these risks had not yet matured. This often created tension inside learning teams, particularly when promising use cases encountered delays after early success.
However, over time, many organizations began recognizing that the central issue was not whether AI should be used at all. The deeper issue was determining how AI could be introduced in ways that aligned with enterprise trust, governance, and accountability requirements.
Gartner further predicts that by 2029, "death by AI" legal claims will have doubled from the previous decade because decision-automation deployments lacked sufficient AI-risk guardrails. This trajectory makes legal and security teams not peripheral reviewers but active participants within the operational ecosystem surrounding AI-enabled learning.

Data Science Is Moving Closer to the Learning Ecosystem
As AI systems become more deeply embedded in workplace learning, another important shift is beginning to emerge: the growing intersection between L&D and data science.
Historically, learning teams have relied heavily on completion rates, participation metrics, and learner satisfaction scores. While useful, these metrics provide only limited visibility into actual capability development or behavioral change. AI-supported learning systems generate far richer forms of data: behavioral interaction patterns, decision-making signals, simulation performance data, adaptation trends, skill gap indicators, and contextual learning insights.
This creates opportunities to understand learning in more sophisticated ways. The 2025 LinkedIn Workplace Learning Report highlights how AI enables learning teams to identify skill gaps using workforce and performance data, personalize learning pathways based on individual skills and career goals, and recommend career opportunities and internal mobility paths. These are capabilities that require analytics infrastructure, not just learning content.
Organizations are beginning to explore how learners make decisions in simulated environments, where employees struggle or hesitate, how capabilities evolve over time, and whether learning experiences influence operational performance. These developments increasingly require collaboration with analytics and data science functions that have traditionally operated outside the learning domain.
Notably, 49% of career development champions in LinkedIn's survey use internal data to track skill gaps — compared to 36% of other organizations — suggesting that data-informed learning is already a distinguishing capability of higher-performing L&D functions.
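As a sketch of what "using internal data to track skill gaps" can look like in practice, the snippet below turns per-skill simulation scores into a simple shortfall indicator against a proficiency target. The skills, scores, and threshold are invented for illustration; real analytics teams would use far richer behavioral data and models.

```python
# Illustrative sketch: deriving a simple skill-gap indicator from
# simulation performance data. All skills, scores, and the target
# threshold are hypothetical examples.
from statistics import mean

# attempt scores (0-1) per skill, e.g. from simulation decision points
attempts = {
    "stakeholder_communication": [0.9, 0.85, 0.95],
    "risk_assessment":           [0.4, 0.55, 0.5],
    "data_interpretation":       [0.7, 0.6, 0.8],
}

TARGET = 0.75  # proficiency target set with the business

def skill_gaps(scores: dict[str, list[float]], target: float = TARGET) -> dict[str, float]:
    """Return the shortfall (target minus mean score) for skills below target."""
    gaps = {}
    for skill, vals in scores.items():
        shortfall = target - mean(vals)
        if shortfall > 0:
            gaps[skill] = round(shortfall, 2)
    return gaps
```

Even a toy indicator like this shows why the collaboration matters: deciding what counts as a "skill signal," where the target comes from, and how the data is governed requires both learning expertise and analytics expertise.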
| Function | Increasing Role in AI-Enabled Learning |
| --- | --- |
| IT | Infrastructure, integration, enterprise AI environments |
| Security | Risk management, data protection, access governance |
| Legal & Compliance | Regulatory review, policy, accountability |
| Data Science | Analytics, behavioral insight, performance modeling |
| HR | Workforce capability, role evolution, change management |
| L&D | Experience design, capability development, orchestration |
How Cross-Functional Collaboration Around AI Is Expanding

HR Is Becoming More Central to AI Conversations
One of the quieter but increasingly important developments is the expanding role of HR within AI adoption discussions.
Initially, many AI conversations begin around productivity and technology enablement. But the LinkedIn Workplace Learning Report 2025 makes clear that AI is also reshaping role expectations, workflows, capability requirements, management practices, and workforce structures. At that point, the conversation naturally evolves beyond technology implementation and into workforce transformation.
The scale of the capability gap is significant. Nearly half of talent development professionals surveyed by LinkedIn — 49% — say their executives are concerned that employees do not have the skills required to deliver on business strategy. Meanwhile, Deloitte's State of AI in the Enterprise report identifies the AI skills gap as the single biggest barrier to AI integration across organizations, with education cited as the top way companies are adjusting their talent strategies.
These findings frame HR not as a support function but as a strategic partner in AI adoption.
Organizations are asking broader questions:
What new capabilities will employees need?
How should managers support AI-enabled work?
How will roles evolve as AI redistributes tasks?
How should career pathways adapt?
What forms of capability-building are required across the enterprise?
These are no longer purely learning questions. They become organizational design questions tied directly to workforce strategy — and they require HR and L&D to work more closely together than many organizations have historically structured them to do.

Why AI Oversight Committees Are Emerging
As AI adoption becomes more distributed across organizations, many enterprises are establishing formal cross-functional governance structures — AI governance councils, oversight committees, responsible AI working groups, and enterprise AI steering groups.
McKinsey's State of AI 2025 confirms this governance imperative: the redesign of workflows has the biggest effect on an organization's ability to see EBIT impact from AI, and AI governance is increasingly a shared leadership responsibility, with an average of two leaders jointly accountable for oversight in many organizations.
Yet governance maturity remains uneven. The 2024 IAPP Governance Survey found that only 28% of organizations have formally defined oversight roles for AI governance — with most still distributing AI governance tasks across compliance, IT, and legal teams without a unified structure. McKinsey's own survey found that just 28% of organizations say their CEO takes direct responsibility for AI governance oversight.
What is particularly significant about the emergence of AI governance committees is what they represent organizationally. They reflect a growing recognition that AI cannot be governed effectively from within a single functional silo. Instead, organizations increasingly need spaces where legal, security, IT, HR, data, business, and learning functions can collectively evaluate use cases, define principles, and coordinate decisions.
A Gartner survey of 360 organizations in Q2 2025 found that organizations that have deployed AI governance platforms are 3.4 times more likely to achieve high effectiveness in AI governance than those that have not.
What these governance groups often help address:
Acceptable AI usage policies
Risk classification models
Approval and escalation pathways
Accountability structures
Governance standards
Alignment across departments and business units
This is not merely administrative coordination. It represents an attempt to create organizational coherence around technologies that inherently operate across multiple systems and domains simultaneously.

The Deeper Shift: AI Requires Ecosystem Governance
Taken together, these developments point toward a much broader transformation. AI adoption increasingly requires what might best be described as ecosystem governance. Organizations are no longer governing isolated tools. They are governing interactions across systems, workflows, functions, people, and decision-making structures.
McKinsey describes this as moving from "doing AI projects" to making AI a new operating baseline — a shift that requires shared accountability, coordinated governance, cross-functional trust, clearer operating principles, and collaborative decision-making models.
This also changes the role of L&D itself. The LinkedIn Workplace Learning Report 2025 captures this evolution directly: L&D leaders are no longer expected simply to produce training programs and manage learning platforms. They are increasingly expected to act as strategic partners, helping organizations build the capabilities required for future growth. In an AI-enabled enterprise, that means operating within — and helping shape — a broader governance ecosystem.

The Risk: Fragmentation Without Coordination
Without intentional cross-functional collaboration, AI adoption often becomes fragmented very quickly. Different departments adopt different tools, establish conflicting policies, operate with inconsistent assumptions, duplicate experimentation efforts, and create overlapping governance structures.
The evidence is consistent across research sources. Deloitte finds that only 34% of organizations are truly reimagining their business through AI — despite widespread adoption — and that just one in five companies has a mature model for governance of autonomous AI agents. McKinsey identifies "operating model inertia" — functional silos, unclear ownership, and misaligned incentives — as one of the most consistent blockers preventing organizations from moving AI from pilot projects to enterprise scale.
The Writer and Workplace Intelligence survey further found that 49% of employees say they have to figure out generative AI on their own, without adequate organizational support or coordination — creating the conditions for exactly the kind of fragmented, shadow adoption that undermines enterprise governance.
The challenge, therefore, is not simply technological adoption. It is organizational alignment.

What Organizations May Need to Do Next
If AI adoption increasingly operates across organizational boundaries, then collaboration must become intentional rather than incidental. Several practical shifts appear increasingly necessary.
Establish cross-functional AI governance groups: Create shared ownership across L&D, IT, legal, HR, security, and analytics. Gartner recommends forming dedicated committees with responsibility for technical oversight, risk and compliance management, and communication and decision reporting.
Define clearer approval and escalation pathways: Reduce uncertainty around decision-making responsibilities. McKinsey's research recommends establishing an approvals matrix by risk tier, pre-approving tools and datasets, logging prompts and outputs, and defining rollback protocols to shorten time-to-value while maintaining compliance.
Develop shared AI operating principles: Align departments around common governance expectations and usage standards. Standards such as ISO/IEC 42001 (the first global AI management system standard) and the NIST AI Risk Management Framework are helping organizations structure governance, measure impact, and mitigate risk throughout the AI lifecycle.
Create enterprise-approved experimentation environments: Enable innovation within secure and governed conditions. Deloitte's research confirms that organizations moving AI from pilots to production require clean, integrated, well-governed data architecture, not the siloed systems most organizations currently operate on.
Strengthen collaboration between learning and analytics teams: Connect learning insight with broader organizational intelligence. The Wharton Human-AI Research and GBK Collective's 2025 AI Adoption Report finds that organizations with the strongest outcomes are those combining AI adoption with disciplined, cross-functional measurement, tracking how AI use translates into business outcomes, not just usage metrics.
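One way to picture the "approvals matrix by risk tier" idea cited above is as a small routing table that maps each tier to its required sign-offs and logs every request. The tiers, use cases, and approver names below are hypothetical, intended only to show the shape of such a mechanism.

```python
# Illustrative sketch of an approvals matrix by risk tier, with request
# logging. Tiers, use cases, and approvers are hypothetical examples.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

# risk tier -> required approval pathway
APPROVALS = {
    "low":    [],                        # pre-approved: proceed, but still log
    "medium": ["line_manager"],          # single sign-off
    "high":   ["ai_council", "legal"],   # cross-functional review
}

def route_request(use_case: str, tier: str) -> list[str]:
    """Return the approvers a use case must pass through, logging the request."""
    approvers = APPROVALS[tier]  # unclassified tiers raise KeyError: escalate
    log.info("use_case=%s tier=%s approvers=%s", use_case, tier, approvers)
    return approvers
```

The value of making the matrix explicit is less the code than the conversation it forces: departments must agree in advance on what counts as high risk and who owns the decision, which is precisely the cross-functional alignment the research points to.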

AI Is Expanding the Boundaries of Learning
One of the most important changes AI introduces into workplace learning is that it expands the boundaries of what learning decisions influence. What once remained largely within the learning function now intersects directly with infrastructure, governance, workforce strategy, analytics, risk management, and enterprise operations.
McKinsey's 2025 research concludes that the true differentiator between AI leaders and laggards is no longer technical access — it is organizational plasticity: the willingness and ability to rewrite workflows, restructure teams, redesign talent architectures, and rebuild governance frameworks around AI. This framing applies directly to L&D.
AI in L&D can no longer be implemented effectively through isolated functional ownership alone. It increasingly requires coordinated collaboration across departments because the systems being introduced affect the organization as a whole. The organizations that adapt most successfully may not necessarily be the ones with the most advanced tools. They may be the ones that learn how to coordinate across functions more effectively, align decision-making more coherently, and build governance models capable of supporting innovation without fragmentation.
Because ultimately, successful AI adoption is not only a question of technological capability. It is a question of organizational alignment.
—RK Prasad (@RKPrasad)