Why People Create AI “Workslop” and How to Stop It

Executive Summary
As the pressure to integrate generative AI intensifies, workplaces are battling a new phenomenon known as “workslop.” This term describes low-effort, AI-generated work that appears polished at first glance but is fundamentally inaccurate or vacuous, effectively offloading cognitive labor onto the recipient. “Workslop” is more than a productivity drain; it is a corrosive force that undermines workplace trust, respect, and overall morale.

1. The Nature of “Workslop” and the Trust Gap
Research indicates that 41% of employees have received “workslop” that hindered their performance, and over half admit to sending subpar AI-generated content to colleagues.

  • Relational Erosion: Receiving an AI-generated performance review or jargon-heavy research summary often leaves employees feeling undervalued or “gaslit.” This breeds resentment and leads to a rapid decline in the sender’s perceived intelligence and trustworthiness.

  • A Symptom of Management Failure: “Workslop” typically emerges from vague top-down AI mandates (e.g., “use AI everywhere”) combined with a workforce that is psychologically depleted and overburdened by shifting roles.

2. The Drivers of AI “Workslop”
The data reveals that creating “workslop” is rarely about individual laziness; rather, it is a systemic response to situational pressures:

  • Cognitive Depletion: Foundational mindsets for performance—such as focus and strategic planning—have declined by 2-6% since 2020. Under the pressure to “do more with less,” employees use AI performatively to demonstrate compliance with innovation directives.

  • Declining Institutional Trust: As trust in employers reaches unprecedented lows, employees are less inclined to invest the human effort required to refine AI outputs, leading them to offload unverified work onto their peers.

3. Mitigating “Workslop”: A Systemic Response
To effectively combat “workslop,” leaders must invest in a three-tiered organizational response:

  • Culture: Rebuild psychological safety so that team members feel comfortable admitting uncertainty and asking for feedback. High levels of team trust can reduce the production of “workslop” by up to 61%.

  • Practice: Replace blanket AI mandates with clear specifications of what high-quality AI output looks like for specific missions. Establish review processes that reinforce, rather than bypass, human judgment.

  • Accountability: Introduce specialized roles, such as “Forward-Deployed AI Collaboration Architects,” who can tailor AI integrations to human motivations and specific workflow frictions.

Source: https://hbr.org/2026/01/why-people-create-ai-workslop-and-how-to-stop-it?ab=HP-latest-text-6
