Organizations adopting artificial intelligence often frame transformation as a technological upgrade. Yet research in activity theory and organizational learning suggests that technological change rarely succeeds or fails because of the technology itself. Instead, transformation is driven by contradictions within organizational activity systems. These contradictions are structural tensions between goals, tools, roles, and institutional rules.

Drawing on activity theory and evidence from enterprise learning organizations adopting AI, this article examines the four types of contradictions that drive change: primary, secondary, tertiary, and quaternary contradictions. Understanding these contradictions helps leaders move beyond tool adoption toward systemic organizational learning.

Why AI Change Is Never Just About Technology

Artificial intelligence is transforming workplace learning across industries. Organizations are introducing generative copilots, AI-powered simulations, automated content generation, and adaptive learning systems.

At first glance, these technologies appear to be productivity tools.

However, the introduction of AI often produces unexpected tensions:

  • Employees worry about job displacement despite productivity gains

  • Governance policies slow down experimentation

  • Learning metrics no longer match new performance-based training models

  • Professional identities are challenged as automation reshapes cognitive work

These tensions are not accidental. They are structural contradictions within the organizational system.

Activity theory, particularly the work of Yrjö Engeström (1987, 2001), provides a framework for understanding how contradictions drive organizational change.

Rather than viewing contradictions as problems to eliminate, activity theory treats them as developmental forces that propel systems toward transformation.

When AI enters an organization, contradictions emerge between existing structures and new capabilities. These tensions push organizations to rethink roles, rules, and objectives.

Understanding these contradictions is therefore essential for managing AI-driven transformation.

Contradictions in Activity Systems

In activity theory, work is organized within an activity system, which includes:

  • Subject – the actors performing the work

  • Object – the shared goal or purpose of the activity

  • Tools – technologies and mediating artifacts used in the activity

  • Rules – policies, norms, and regulations governing behavior

  • Community – stakeholders involved in the activity

  • Division of labor – how responsibilities are distributed

When new technologies such as AI are introduced, tensions can arise between these elements.

Engeström identified four types of contradictions that typically appear during system change:

  1. Primary contradictions

  2. Secondary contradictions

  3. Tertiary contradictions

  4. Quaternary contradictions

Each type reveals a different layer of systemic tension.
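The taxonomy above can be made concrete with a small illustrative sketch. This is not part of Engeström's theory itself, only a hypothetical model for thinking through the classification: a tension is recorded between elements of one or more activity systems, and the four contradiction types fall out of where the tension sits.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Element(Enum):
    """The six elements of an activity system."""
    SUBJECT = auto()
    OBJECT = auto()
    TOOLS = auto()
    RULES = auto()
    COMMUNITY = auto()
    DIVISION_OF_LABOR = auto()

@dataclass
class Tension:
    """A tension observed between parts of one or more activity systems."""
    system_a: str          # name of the activity system, e.g. "learning team"
    element_a: Element
    system_b: str
    element_b: Element
    spans_models: bool = False  # True when an old and a new activity model collide

def classify(t: Tension) -> str:
    """Map a tension onto Engeström's four contradiction types."""
    if t.system_a != t.system_b:
        return "quaternary"  # between neighboring activity systems
    if t.spans_models:
        return "tertiary"    # old vs new activity model within one system
    if t.element_a == t.element_b:
        return "primary"     # within a single element
    return "secondary"       # between two elements of one system

# Example: AI tools vs governance rules inside the same learning system
t = Tension("learning", Element.TOOLS, "learning", Element.RULES)
print(classify(t))  # secondary
```

The ordering of the checks reflects the logic of the sections that follow: cross-system misalignment is quaternary regardless of the elements involved, while within a single system the distinction turns on whether the tension sits inside one element, between two elements, or between an old and a new activity model.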

1. Primary Contradictions: Tensions Within a Single Element

Primary contradictions occur within one component of the activity system itself. These tensions often arise when an element must satisfy competing values.

Example: AI Productivity vs Job Security

Across multiple organizations adopting AI in learning design and training development, productivity gains were immediate.

AI tools enabled:

  • rapid script generation

  • automated assessment creation

  • faster course updates

  • scalable simulation environments

However, within the community element, a tension emerged. Employees recognized that increased productivity might imply reduced workforce demand.

The contradiction therefore existed within the community: AI as a productivity enabler vs AI as a potential threat to professional stability.

Even when leaders emphasized that AI would augment rather than replace workers, the underlying tension remained.

Primary contradictions reveal competing logics inside a single system element. If left unresolved, they may produce:

  • anxiety

  • resistance to innovation

  • decreased experimentation

Organizations that openly address these tensions through transparent role redesign and reskilling tend to progress more smoothly.

2. Secondary Contradictions: Tensions Between System Elements

Secondary contradictions occur between two elements of the activity system. These are the most visible contradictions during technological change.

Example: AI Tools vs Governance Rules

In several organizations adopting AI-powered simulations and content generation, legal and compliance teams initially restricted AI tool usage.

Concerns included:

  • intellectual property protection

  • data privacy

  • model reliability

  • regulatory compliance

Learning teams wanted rapid experimentation. Governance teams required extensive review.

This created a contradiction between tools (AI capabilities) and rules (organizational governance frameworks).

The result was a slowdown in experimentation.

Example: AI Tools vs Division of Labor

AI automation also challenged existing job roles.

  • If AI generates content drafts, what becomes the role of the instructional designer?

  • If AI conducts simulation-based role-play, what becomes the role of the manager or trainer?

This created tension between tools and the division of labor.

Secondary contradictions signal that the structure of the activity system must evolve. Resolving them often requires:

  • redesigning governance frameworks

  • redefining job roles

  • updating workflow processes

Organizations that treat these tensions as redesign opportunities rather than obstacles are more likely to scale AI successfully.

3. Tertiary Contradictions: Tensions Between Old and New Activity Models

Tertiary contradictions arise when a new way of working challenges an existing activity model. In AI adoption, these contradictions often appear as cultural resistance.

Example: Course-Based Learning vs Simulation-Based Learning

Many organizations historically measured learning through:

  • course completion

  • knowledge assessments

  • certification exams

However, AI-enabled training increasingly emphasizes:

  • behavioral simulations

  • role-play interactions

  • real-time performance analytics

This introduces a contradiction between the traditional training model and the emerging AI-enabled performance model.

In several organizations studied, leadership continued requesting traditional metrics such as completion rates even after simulation-based training had been introduced. This mismatch created confusion about how learning impact should be evaluated.

Tertiary contradictions represent the transition between two activity systems. They often manifest as:

  • cultural resistance

  • metric misalignment

  • legacy process inertia

Resolving them requires redefining success criteria and aligning organizational expectations with new learning models.

4. Quaternary Contradictions: Tensions Between Neighboring Systems

Quaternary contradictions occur between adjacent activity systems. Organizations rarely operate as a single system. Learning teams interact with HR, IT, compliance, operations, and external vendors.

AI adoption often exposes misalignment between these systems.

Example: Learning Systems vs Enterprise IT Systems

In several organizations, learning teams wanted to adopt advanced AI tools available in the public market. However, IT security policies restricted external tools due to data protection concerns.

This created a contradiction between the learning activity system and the enterprise IT governance system.

Example: Learning Systems vs Operational Performance Systems

In organizations attempting to link AI-enabled simulations to performance metrics, another contradiction appeared. Learning teams focused on capability development. Operations teams focused on productivity metrics.

Without shared measurement frameworks, collaboration became difficult.

Quaternary contradictions reveal that transformation cannot occur within a single department. AI adoption requires cross-system alignment across the organization.

Resolving these contradictions often involves creating new collaboration structures and governance models.

Why Contradictions Drive Organizational Learning

Contradictions may appear disruptive, but they play a critical role in organizational evolution.

According to Engeström’s theory of expansive learning, contradictions force systems to question existing assumptions and develop new activity models.

In the context of AI adoption, contradictions often trigger:

  • role redefinition

  • governance modernization

  • performance measurement reform

  • cross-functional collaboration

Organizations that recognize contradictions as signals for transformation tend to develop more resilient learning systems.

Implications for Leaders Managing AI Transformation

Leaders responsible for AI adoption in workplace learning should shift their focus from technology deployment to system diagnosis.

Instead of asking:

What AI tool should we implement?

They should ask:

  • Where are contradictions emerging in our learning system?

  • Which roles must evolve to accommodate AI capabilities?

  • Which governance frameworks must be redesigned?

  • How must our success metrics change?

These questions move the conversation from tool implementation to systemic transformation.

Conclusion

Artificial intelligence is often described as a disruptive technology. However, disruption rarely originates from the technology itself. The real drivers of transformation are the contradictions that AI exposes within organizational systems.

  • Primary contradictions reveal tensions within communities and professional identities.

  • Secondary contradictions expose misalignments between tools, roles, and rules.

  • Tertiary contradictions challenge existing models of learning and performance.

  • Quaternary contradictions reveal misalignment across organizational systems.

Understanding these four types of contradictions provides leaders with a powerful diagnostic framework. AI adoption is not simply about deploying smarter tools. It is about redesigning the activity systems through which organizations learn, adapt, and perform.

References

  • Engeström, Y. (1987). Learning by Expanding: An Activity-Theoretical Approach to Developmental Research.

  • Engeström, Y. (2001). Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work.

  • Orlikowski, W. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science.

  • Leonardi, P. (2011). When flexible routines meet flexible technologies. MIS Quarterly.

  • Salas, E., et al. (2009). The science of training and development in organizations. Psychological Science in the Public Interest.

  • Bostrom, R., & Heinen, J. (1977). MIS problems and failures: A socio-technical perspective. MIS Quarterly.

  • Trist, E., & Bamforth, K. (1951). Some social and psychological consequences of technological change. Human Relations.

—RK Prasad (@RKPrasad)
