Opti-Morons and the Death of Critical Thought
I’m tired. Not the kind of tired that a good night’s sleep or a weekend off the grid can fix. It’s a deeper, more pervasive exhaustion—the fatigue of living in a culture of relentless, performative positivity. In the tech world, we’re told to “crush it,” to “move fast,” and to embrace every new “game-changer” with uncritical enthusiasm. If you’re not a believer, you’re a “naysayer” or, worse, a “blocker” of progress.
This is the tyranny of positivity. It’s an environment where doubt is treated as a character flaw and criticism is seen as a lack of vision. But as I’ve learned from three decades of building systems—from early Austrian educational networks to LoRa meshes in the Australian bush—unexamined progress isn’t progress at all. It’s just momentum without a map.

Defining the “Opti-Moron”
We need a name for the mindset that fuels this exhaustion. I call it the “Opti-Moron.”
To be clear: I’m not talking about healthy optimism. We need optimism to build things. I’m an optimist every time I plant a tree or commit code to an open-source project. I believe we can build a better food system and a more resilient community.
The Opti-Moron, however, suffers from a different condition. This is optimism as an ideology—a blind faith in technology as a universal fix-all. It’s the mindset that adopts every new AI tool without asking who owns the data, what assumptions are baked into the model, or whose job is being “disrupted” (read: destroyed) in the process. For the Opti-Moron, ethics are just a “speed bump” on the way to the future.
We see this narrative pushed by the high priests of Silicon Valley, like Marc Andreessen, who frame technology as an inevitable force for good that must be shielded from the “interference” of regulation. In this world, markets are the only valid governance, and the speed of adoption is the only metric that matters.
The Economic Engine of Hype
Why is this mindset so pervasive? Because it’s economically enforced.
Venture capital doesn’t fund nuance. It funds certainty. If you’re a founder, you can’t walk into a pitch meeting and say, “We have a tool that’s 80% effective and requires careful governance to avoid algorithmic bias.” You have to promise the world. You have to be an Opti-Moron to get the cheque.
Algorithms amplify this. A measured critique of AI bias gets 10 views; a thread about “10 AI Tools to Replace Your Entire Marketing Team” gets 10,000. The hype cycle is a feedback loop that rewards bold claims and punishes critical thought. We’ve built a system where being right is less profitable than being loud and positive.
Take a look at the “state of technology” panels at major trade shows like CES or the speculative “Hard Takeoff” narratives flooding YouTube. These aren’t just discussions; they are products. The presenters and creators make their living from the “content” of hype. They sell a version of the future that is shiny, inevitable, and entirely devoid of structural friction. They are the classic “hype merchants,” profiting from the very uncertainty they claim to resolve.
When “Crushing It” Crushes the Community
This isn’t just a corporate or policy issue; it’s personal. I was recently part of a Discord community focused on wireless networks whose admin was the textbook definition of this type.
On a personal level, I actually quite liked the guy. He had good motives, he was genuinely funny, and he brought a lot of energy to the space. But he was obsessed with “crushing it.” He wanted to extract every idea and every hour of help he could from the community—often brilliant people volunteering their time—while never questioning his own self-serving shilling.
Everyone in that space knew the project he was pushing (and drawing ‘grants’ from) was dodgy, yet he refused to entertain any critical feedback. To him, any doubt was “negativity” that slowed his momentum. In the end, the community blew up. The trust was gone. What could have been a genuine hub of shared knowledge was hollowed out by a mindset that valued the appearance of success over the reality of technical integrity. It was a stark lesson: toxic optimism doesn’t just mislead; it destroys the social capital required for genuine progress.
The Neo-Luddite Trap
The opposite of the Opti-Moron isn’t better. We’re seeing a rise in reactionary rejection—a modern neo-Luddite reflex that wants to burn the servers and go back to a pre-digital idyll.
It’s important to remember that the original Luddites weren’t anti-tech; they were anti-exploitation. They didn’t hate the looms; they hated the way the looms were being used to break their communities and steal their agency. Modern blanket rejection of AI or digital tools is just another way to stop thinking. By refusing to engage, we lose our ability to shape the technology.
Both extremes—blind adoption and blind rejection—are forms of abdication. Both avoid the hard work of responsibility.
The Missing Middle: Critical Engagement
The problem is not optimism or skepticism. It is the absence of governance.
We need to reclaim the “missing middle”—a space for critical engagement where doubt is a feature, not a bug. This means selective adoption based on “fitness for purpose” rather than hype. It means asking uncomfortable questions before we integrate a “black box” into our lives or our businesses.
Governance isn’t about stopping progress. It’s about ensuring progress serves people, not just corporate incentives. Without accountable governance, technology simply reflects the values of its owners. In the case of current AI models, those values are market concentration and data extraction.
Case Studies in Hype vs. Reality
We’ve seen the results of ungoverned adoption before.
- AI Bias: We have countless examples of facial recognition systems failing people of colour because the training data was a reflection of the developers’ own blind spots.
- Platform Monopolies: We watched as “connecting the world” turned into the commodification of our attention and the destruction of local journalism.
- Corporate Capture: We’ve seen brilliant open-source projects—the true digital commons—captured by companies that extract the value while giving nothing back to the contributors.
In each case, the “Opti-Moron” narrative of inevitable progress was used to silence those who pointed out the structural risks.
Toward a Framework for Responsible Optimism
So, how do we resist? We need a set of heuristics for adopting new technology. Before you integrate a new tool—especially one powered by AI—ask:
- Who benefits? Is the primary value going to the user, the community, or a distant corporation?
- Who is excluded? Does this tool require resources (power, bandwidth, data) that are unavailable to the people who need it most?
- What assumptions are built in? What world-view is this algorithm projecting?
- Can I turn it off? If the company disappears tomorrow, does the tool still work? (See my post on future bricks).
Reclaiming the Right to Think
Criticism is not negativity. Doubt is not a lack of vision. They are forms of intellectual responsibility.
We are living through a fundamental change in how we interact with information and each other. The stakes are too high to leave the steering wheel to the hype-peddlers and the Opti-Morons. We need to build a future that is resilient, open, and governed by the communities it serves.
Progress without reflection is just a faster way to get lost. It’s time we slowed down enough to think.
If you value critical takes on technology and the push for open-source sovereignty, consider supporting my work. Every contribution helps keep this blog independent and free from corporate hype.