Recently, while evaluating a series of written assessments, I noticed a striking pattern: responses that were linguistically polished but contextually hallucinated. Some even referenced specific case studies that had never been part of the shared curriculum, hinting at the use of AI to generate the submitted answers.
This is the result of what psychology calls demand avoidance (or the law of less work). By bypassing the friction of reflection to secure an answer that was merely likely to be true, these individuals completed the task but failed the learning. They mistook the generation of a plausible substitute for the demonstration of mastery.
The Productivity Paradox: Busyness vs. Strategic Impact
This mirrors a growing risk in the corporate world. While AI is often sold as a tool to externalize thinking, we are seeing a modern version of the Productivity Paradox. Historically, firms adopting transformative technologies experience a short-term dip in productivity. This occurs because the initial phase of adoption is not about saving time; it is about the hidden work of learning to use the tool effectively and reorganizing processes around it.
When professionals under pressure use AI as a shortcut to avoid this dip, they remain busy but lose productivity. They trade the effectiveness of a bespoke, context-aware solution for the mere efficiency of what Niederhoffer and colleagues recently termed workslop: “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
The Strategic Risk: Commoditized Thinking
When the principle of demand avoidance becomes the default organizational culture, the short-term dip in productivity is not actually overcome. It is simply hidden behind a wall of automated output. This leads to a fundamental shift toward Commoditized Thinking. If an organization’s primary insights are generated by the same logic as its competitors, it eventually erodes its unique strategic DNA through two channels:
- Strategic Convergence: Solutions begin to drift toward the statistical average. Mechanized convergence pushes GenAI users toward homogeneous outcomes for similar tasks. At the firm level, this produces a standardized excellence that ignores company-specific constraints and stakeholder nuances, making a strategy indistinguishable from its competitors’.
- Cognitive Atrophy: The risk is not just organizational, but biological. As Roxin highlights: “From a neurological standpoint, widespread use of this AI carries the risk of overall cognitive atrophy and loss of brain plasticity.” When the labor of reflection is consistently bypassed, the neural pathways required to innovate, catch errors, or navigate high-stakes crises begin to weaken. We are not just externalizing a task; we are hollowing out the biological capacity for expertise itself. From an organizational perspective, the company loses the very expertise it used to pride itself on.
The Path Forward: AI as an Accelerator for Divergence
The goal, however, is not to retreat from the technology, but to master its dual role. When used with intent, the technology transcends the trap of the average. As Wang and colleagues suggest, AI can play a “dual role in strategic decision-making: as an accelerator for differentiated innovation and an optimizer for resource allocation.”
The true strategic opportunity lies in using AI to explore a wider possibility space rather than settling for the first plausible answer. By automating the routine, we free the human mind to engage in the high-friction, high-value work of synthesis and creative divergence: the very processes that build, rather than erode, cognitive plasticity.
Beyond the Proxy
Digital maturity in 2026 is not about how much we can externalize to AI; it is about knowing what must remain internal. Real competitive advantage is found in the friction of thinking: the parts of the process that cannot be commoditized. As Khan points out, now that artificial intelligence is abundant, the ultimate scarcity becomes the human. The question for leadership is no longer how to use the tool, but how to identify, protect, and reward the thinker.
References & Further Reading
Burnham, K. (2025, July 9). The ‘Productivity Paradox’ of AI Adoption in Manufacturing Firms. MIT Sloan: Ideas Made to Matter. https://mitsloan.mit.edu/ideas-made-to-matter/productivity-paradox-ai-adoption-manufacturing-firms
Khan, R. (2025, August 12). The Networked Mind: Humans And The AI Bubble. Forbes. https://www.forbes.com/councils/forbesbusinesscouncil/2025/12/08/ai-economics-and-the-beauty-of-bubbles-mirrors-of-the-mind-in-the-age-of-excess/
Kool, W., McGuire, J. T., Rosen, Z. B., & Botvinick, M. M. (2010). Decision Making and the Avoidance of Cognitive Demand. Journal of Experimental Psychology. General, 139(4), 665–682. https://doi.org/10.1037/a0020198
Lee, H. P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025, April). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI conference on human factors in computing systems (pp. 1-22). https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
Niederhoffer, K., Rosen Kellerman, G., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025, September 22). AI-Generated ‘Workslop’ Is Destroying Productivity. Harvard Business Review. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
Roxin, I. (2025, July 3). Generative AI: The Risk of Cognitive Atrophy. Polytechnique insights. https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/
Wang, L., Chen, T., Wang, X., & Cheng, M. (2025). The Impact of Artificial Intelligence on Corporate Strategic Decision-Making: Convergence or Differentiation? Proceedings of the 2025 International Conference on Digital Economy and Intelligent Computing, 187–191. https://doi.org/10.1145/3746972.3747003