The Backlash Cycle: Why Every Popular AI Gets a 'Sucks' Phase

Understanding how we collectively process innovation and what it tells us about the future of AI tools

If you spend any time in technology circles, you have probably noticed a peculiar pattern. A new tool launches to breathless excitement. Six months later, the internet declares it "overhyped garbage." Another six months pass, and suddenly everyone treats it as a mature, useful tool with understood limitations.

Claude, GitHub Copilot, ChatGPT, and every other AI assistant currently navigating public opinion are following a script written decades ago by technologies ranging from Agile methodologies to NoSQL databases to cloud computing itself. The backlash is not a bug; it is a feature of how we collectively process innovation.

Understanding this cycle does more than help you navigate AI debates. It provides a framework for evaluating any emerging technology and separating signal from noise when everyone around you seems to have a strong opinion.

The Predictable Arc: From Magic to Garbage to Tool

The arc is remarkably consistent. Gartner famously mapped it as the "Hype Cycle," but the emotional journey underneath is even more predictable:

Phase 1: The Magic Phase

Early adopters discover something that solves a real problem in a novel way. Their genuine excitement is amplified by marketing, social proof, and the natural human tendency to extrapolate current capabilities into science-fiction futures. "This changes everything" becomes the dominant narrative.

Phase 2: The Collision Phase

The technology reaches mainstream adoption. People who were sold on the "magic" narrative encounter the actual product with its real-world limitations, edge cases, and tradeoffs. The gap between expectation and reality creates frustration. Criticism emerges, and because contrarian takes generate engagement, the pendulum swings hard in the opposite direction. "This is overhyped garbage" becomes the counter-narrative.

Phase 3: The Integration Phase

After the noise settles, practitioners develop nuanced understanding. The technology finds its appropriate use cases. Best practices emerge. The conversation shifts from "Is this good or bad?" to "When should I use this and when should I not?" The technology becomes a tool, not a religion.

AI assistants like Claude are currently navigating the transition between Phase 2 and Phase 3. Understanding why the backlash happens helps you make better decisions than simply joining either the hype or the counter-hype.

Why the Backlash Overshoots (And Why That Matters)

The backlash against any hyped technology tends to overshoot reality for predictable reasons:

Unrealistic Expectations from Marketing

When AI companies claim their tools "revolutionize development" or "replace entire workflows," they set expectations no technology can meet. The backlash then punishes the tool for failing to be magic, even when it excels at being merely useful.

First Contact with Real Limitations

Someone sold on "AI can write production code" will be disappointed when they discover it hallucinates APIs, invents plausible-sounding functions that do not exist, and produces code that compiles but fails in subtle ways. Their disappointment is legitimate, but it often leads to dismissing the entire category rather than calibrating expectations.
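
To make that failure mode concrete, here is a hypothetical sketch in Python. The endpoint URL and the hallucinated call are invented for illustration; the broken line looks idiomatic for the real requests library, which is exactly why it slips past a casual glance.

```python
import requests

# A hypothetical hallucination: this looks idiomatic, but `requests` has no
# `get_json` function, so the line fails with AttributeError at runtime.
# data = requests.get_json("https://api.example.com/users")

# What the real `requests` API actually provides (the URL is a placeholder):
response = requests.get("https://api.example.com/users", timeout=10)
response.raise_for_status()  # surface HTTP errors explicitly
data = response.json()       # .json() is a method on the Response object
```

The hallucinated version reads cleanly and fails only when it runs, which is what makes this failure mode so easy to miss in review.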

Social Dynamics Reward Contrarianism

In a room full of people praising something, the contrarian view gets disproportionate attention. "Everyone says Claude is amazing, but let me tell you why it sucks" generates more engagement than "Claude is pretty good for certain tasks." This dynamic amplifies criticism beyond its actual prevalence.

Legitimate Criticism Gets Lost in the Noise

Real issues - like AI tools confidently presenting false information, the environmental cost of inference, or bias in training data - get mixed with performative hot takes. Important concerns become harder to address when they are buried in reflexive negativity.

For developers and architects making technology decisions, recognizing this pattern means you can extract the useful signal (real limitations worth understanding) from the noise (overcorrected hot takes).

What Four Decades of Hype Cycles Teach You

Fred Lackey, a software architect with 40 years of experience ranging from writing assembly on Timex Sinclairs to architecting serverless systems on AWS GovCloud, has watched this pattern repeat across multiple technology generations.

"I watched the exact same cycle with Object-Oriented Programming in the 90s. First it was going to solve all of software development's problems. Then people built unmaintainable class hierarchies and declared OOP a mistake. Eventually we figured out when to use inheritance versus composition, and it became just another tool in the toolbox."

The same pattern played out with Agile (from silver bullet to "Agile is dead" to mature practice), microservices (from mandatory to "distributed monolith hell" to contextual architecture choice), and cloud computing (from "everything must migrate" to "not all workloads belong in the cloud" to informed hybrid strategies).

Lackey applies this historical perspective to the current AI backlash: "When I see someone declare that AI coding assistants are useless because they hallucinate, I recognize someone in Phase 2 who expected Phase 1 magic. When I see someone claim AI will replace all developers, I recognize someone still stuck in Phase 1 who has not hit the limitations yet."

His approach treats AI as what it actually is: a force multiplier for people who understand its capabilities and constraints.

The "AI-First" Philosophy: Neither Hype Nor Dismissal

The mature position on AI coding assistants is not "they are magic" or "they are garbage." It is calibrated understanding of what they do well and what they do poorly.

Lackey has developed what he calls an "AI-First" workflow over the past two years, achieving measurable 40-60 percent efficiency gains on certain types of work. But his approach deliberately inverts the hype narrative:

"I don't ask AI to design a system. I tell it to build the pieces of the system I have already designed."

This framing matters. He treats language models as extremely capable junior developers who excel at execution but lack judgment, context, and architectural vision. He handles architecture, security considerations, business logic, and complex design patterns. He delegates boilerplate code, unit tests, documentation, data transfer object mappings, and service layer implementations.
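
A minimal sketch makes that division of labor concrete. The entity, DTO, and field names below are hypothetical, not drawn from Lackey's actual codebase; the point is that the human defines the contract and the assistant fills in the mechanical mapping.

```python
from dataclasses import dataclass

# Human-designed contract: the architect decides what crosses the service
# boundary (and that the password hash never does).
@dataclass
class User:
    id: int
    email: str
    password_hash: str
    is_active: bool

@dataclass
class UserDTO:
    email: str
    is_active: bool

# Delegable boilerplate: a mechanical field-by-field mapping an assistant can
# generate reliably, because every design decision has already been made.
def to_user_dto(user: User) -> UserDTO:
    return UserDTO(email=user.email, is_active=user.is_active)
```

Nothing in the mapping requires judgment; everything in the contract did.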

The result is not "AI writes all my code" (the hype version) or "AI is useless for coding" (the backlash version). It is "AI accelerates specific parts of my workflow while I focus on the parts that require deep expertise."

This same pattern applies across AI use cases. AI assistants are remarkably good at transforming structured requirements into implementation, terrible at understanding unstated business context. Excellent at generating variations on established patterns, poor at inventing novel solutions to unprecedented problems. Fast at producing syntactically correct code, unreliable at producing semantically correct logic without human review.

Understanding this nuance is the difference between useful adoption and disappointed abandonment.

A Framework for Evaluating AI (Or Any Hyped Technology)

When you encounter strong opinions about AI tools, apply this framework to separate useful insight from noise:

1. What expectations are being compared against?

"Claude sucks at X" might mean "Claude cannot do X at all" or "Claude cannot do X as well as I expected based on marketing claims." These are different problems requiring different responses.

2. What is the actual use case?

"AI cannot replace senior architects" and "AI dramatically accelerates boilerplate generation" are both true. Someone experiencing the former while expecting the latter will be disappointed, but that says more about expectation management than capability.

3. Who is making the claim?

Someone who spent two hours with an AI tool and gave up will have different insights than someone who spent two months developing a workflow around its strengths. Both perspectives have value, but for different reasons.

4. What is the alternative being compared to?

"AI-generated code requires careful review" is a criticism only if you believe human-generated code does not. In practice, code review is already standard practice, so AI code requiring review is not a new problem - it is the same problem at different scale.

5. Is this a tool problem or an expectation problem?

"This tool does not do what I need" is useful feedback. "This tool does not do what I imagined it would do based on reading tweets" is an expectation calibration issue.

Where We Go From Here

AI assistants are settling into Phase 3 of the hype cycle. The breathless "this changes everything" narratives are fading. The reactive "this is overhyped garbage" takes are losing energy. In their place, practitioners are developing nuanced understanding of when these tools add value and when they do not.

The mature conversation is not "Should I use AI?" but "For which tasks does AI assistance provide measurable benefit, and for which tasks does it add friction?"

For developers, this might mean using AI to generate test cases and boilerplate while writing complex business logic manually. For architects, it might mean using AI to draft documentation while handling security design personally. For teams, it might mean establishing clear guidelines about which code can be AI-generated and what review process it requires.
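
As a hedged illustration of that split (the discount function and its thresholds are invented for this example), the business rule stays hand-written while the repetitive boundary tests are the kind of output an assistant can draft for human review:

```python
import pytest

# Hand-written business logic: the thresholds encode a (hypothetical)
# business decision that no model can infer from the signature alone.
def volume_discount(quantity: int) -> float:
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    if quantity >= 100:
        return 0.15
    if quantity >= 10:
        return 0.05
    return 0.0

# Delegable boilerplate: an assistant can enumerate boundary cases quickly,
# and a reviewer confirms they match the intended rule.
@pytest.mark.parametrize("quantity, expected", [
    (0, 0.0), (9, 0.0), (10, 0.05), (99, 0.05), (100, 0.15),
])
def test_volume_discount_boundaries(quantity, expected):
    assert volume_discount(quantity) == expected

def test_volume_discount_rejects_negative_quantity():
    with pytest.raises(ValueError):
        volume_discount(-1)
```

The guideline writes itself: the function above needs an author, the tests below it only need a reviewer.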

The pattern holds across domains. AI image generation is settling into commercial illustration and concept art rather than replacing photographers. AI writing assistance is becoming a drafting and editing tool rather than a replacement for writers. AI code generation is becoming an acceleration tool for experienced developers rather than a replacement for engineering judgment.

The Meta-Lesson About Technology Evaluation

The backlash cycle teaches something more important than "how to think about AI." It teaches how to evaluate any emerging technology in an environment saturated with strong opinions.

When everyone around you is either praising or condemning a new tool, your job is not to pick a side. Your job is to ask better questions:

  • What problem does this actually solve?
  • What are its real constraints and limitations?
  • What alternatives exist and what tradeoffs do they involve?
  • What does successful adoption look like in practice?
  • Who has achieved measurable results and how did they do it?

These questions work equally well for evaluating programming languages, architectural patterns, development methodologies, or deployment strategies. They cut through hype and counter-hype to reach the only question that matters: "Is this useful for my specific context?"

After 40 years of watching technologies rise, get backlashed, and find their appropriate place in the toolkit, Lackey offers this perspective: "Every technology sucks at something. The question is whether what it is good at matters for your problem. If you are waiting for a perfect tool with no limitations, you will wait forever. If you are looking for a useful tool you can apply with appropriate constraints, there has never been a better time to be building software."

The Call to Action: Calibrate Your Expectations

The next time you encounter a strong opinion about AI - whether breathless enthusiasm or dismissive criticism - ask what expectations are being compared against. That single question usually explains the intensity.

Someone declaring "Claude revolutionizes development" probably discovered it accelerates a specific workflow that used to frustrate them. Someone declaring "Claude sucks" probably expected it to solve a problem it was never designed to handle. Both are describing their experience accurately, but neither is describing the tool objectively.

Your goal is not to adjudicate who is right. Your goal is to understand what the tool actually does, what it does not do, and whether that matches a problem you need to solve.

That approach works for AI. It works for every technology that will follow. And it works precisely because it acknowledges that the backlash cycle is not about the technology at all - it is about the gap between expectation and reality.

The best technologists do not avoid that gap. They measure it, account for it, and build useful things anyway.


Meet Fred Lackey

The "AI-First" Architect with 40 Years of Experience

Fred Lackey is a software architect who has witnessed four decades of technology hype cycles, from OOP in the 90s to today's AI revolution. He has pioneered the "AI-First" development workflow, achieving 40-60% efficiency gains by treating AI tools as force multipliers rather than replacements for human expertise.

His experience runs from co-architecting the proof-of-concept for Amazon.com in 1995 to leading the first SaaS product granted Authority To Operate by US Homeland Security on AWS GovCloud. He brings a practical, battle-tested perspective to evaluating and adopting emerging technologies.
