This piece started from a simple observation: every generation has the same panic about new media, but we never ask why the panic is so predictable.
The Australia ban is just the latest iteration. Before TikTok it was video games, before that MTV, before that TV. We keep regulating the technology without examining the incentive structure underneath.
What interested me was the recursive problem—the very capacities you'd need to see this pattern (sustained attention, ability to connect dots over time) are what get degraded by the pattern itself.
I'm not arguing for or against the ban. I'm arguing we're having the wrong conversation. And that might not be an accident.
Curious what patterns others are seeing that I missed.
Many IT departments aim for SpaceX-level process engineering—elaborate architecture reviews, multi-year transformation roadmaps, and platform teams—despite serving relatively small user bases. The result is slow decision-making, meeting overload, redundant tools, and organizational friction that slows delivery, even when the technical work itself is solid.
The alternative might be a “drone-first” approach: simple, standardized processes that prioritize speed, clarity, and measurable outcomes. Focus on quick wins, eliminate unnecessary approvals and redundant teams, and adopt a meeting rhythm that tracks decisions and follow-ups in the very next session. It’s less about frameworks and more about efficient, adaptive execution.
Is a drone-first mindset the right way to streamline internal IT teams, or are there trade-offs we’re overlooking? Curious what others have tried.
Yeah, this is a sharp take: the SQL analogy nails how over-specifying prompts kills creativity and pushes outputs toward the median. What's missing, though, is the emotional side of prompting. It's not just about keeping things ambiguous; it's about feeding the model something alive enough that it can reflect something real back. Mix that technical precision with human messiness and you start getting insight, not just casting a wider net.
The AI slop epidemic has a simple cause: people are asking LLMs to create instead of using them to amplify. When you prompt "write me a poem about loss," you get generic output because there's no complexity to work with. Garbage in, garbage out.

But when I fed Claude my raw, messy 33-post Bluesky poem and asked it to unpack what I'd written, something different happened. Like rubber duck debugging, the act of articulating my fragmented ideas to the LLM forced me to see patterns I'd missed, contradictions I'd avoided, emotional layers I couldn't access alone. The LLM didn't create anything; it amplified what was already there by giving me a structured way to externalize and examine my own thinking. The more entropy (complexity, density, messiness) I provided, the more useful the output became.

LLMs aren't steam engines that create energy from nothing; they're amplifiers that can only magnify what you feed them. If your AI output is slop, check your input first. The breakthrough isn't in the model; it's in learning to articulate your problem densely enough that the solution emerges in the telling.