When the Crowd Isn't Actually There
What happens when participation becomes the input we automate rather than the output we cultivate?
Evidently a few too many of us watched Field of Dreams in our formative years and then became community builders, because nearly all of us know the founding myth of community management: if you build it, they will come.
Supposedly, at least. And if they do show up, they’ll do something (play baseball!?!): ask questions, share wins, argue about the right way to onboard a new hire, post the occasional meme that somehow perfectly captures a Tuesday.
Participation is the whole darn point. It’s what separates a community from a mailing list. But what happens when participation becomes the input we automate rather than the output we cultivate?
Answer: We get communities that look busy and feel empty.
You’ve almost certainly seen it on LinkedIn. Maybe you’ve stopped posting there because of it (I flirt with the idea weekly). Our feeds have a particular slimy texture now - a dude in a tech bro uniform sharing a “hot take” that is aggressively tepid, followed by seventeen replies that all say some version of “this is so important, thanks for sharing.” Nobody is actually talking to each other. They’re performing engagement at each other. The replies don’t reference the post in any meaningful way. The post doesn’t respond to the replies. It’s participation as theater, automated or half-automated, optimized for the appearance of a thriving network, all for the sake of reach and influence.
Reddit has a different flavor of the same problem. There are entire subreddits now seeded with AI-generated “stories” - first-person confessionals, AITA posts, dramatic workplace sagas - written not because someone had an experience worth sharing, but because the format reliably generates upvotes, which means karma, which means reach. The karma system, originally designed to surface quality, has become the thing being gamed. You end up with a front page full of structurally perfect content that nobody actually lived.
It almost makes me miss the good ol’ days of social media. (Of course one could argue the 2009-2010 Farmville craze on Facebook was also not actually lived, unless y’all are farmers and I didn’t know about it.)
So what is actually bothering us community-minded folks so fiercely? Well, simply put: the failure mode isn’t fake content. It’s the erosion of legible signal.
What dies first isn’t authenticity in some abstract sense. It’s usefulness. When you can’t tell if the ten replies praising your question are written by people who’ve actually wrestled with the problem, you stop trusting the answers. When the top post in a community is there because it was optimized to be there rather than because it resonated, you lose the ability to read the room. The crowd stops telling you anything. Even worse is when YOU are throwing content into the ether… and don’t care about trust, usefulness, or reading the room.
And this matters for community practitioners because the behavior doesn’t stay on LinkedIn and Reddit. It migrates and normalizes like a bad, highly contagious virus (this is what epidemiologists mean by R₀ > 1… and we don’t want it to be).
But why? If we don’t want to see it ourselves, why the heck are we emulating it?
First, because the people who are members of your community are already living in these environments. They’re learning - unconsciously, habitually - that participation can be low-effort, performative, and consequence-free. They’re importing those norms. Your thoughtfully structured discussion forum starts getting replies that are the community equivalent of a thumbs-up emoji rendered as a paragraph.
Second, because automation tools are getting better and cheaper, and community platforms are not immune. AI writing assistants can help members “respond faster”. The gap between “a tool that helps me articulate my thought” and “a tool that posts on my behalf while I do something else” is smaller than we’d like to pretend. (I say this as someone who has absolutely let an AI tool smooth out a reply I was too tired to write well. I’m not outside this problem.)
Third - and this is the one I think about most - automated participation creates a specific kind of ghost town: one that still has lights on. Vanity metrics stay healthy. Post counts, reply counts, maybe even DAUs look fine. But the qualitative signal - you know, the sense of whether people are actually in it - quietly degrades. By the time the metrics catch up, the community has already hollowed out.
What does this mean for how you manage?
It means presence detection matters more than participation counting. Are people responding to each other, or just posting into the void? Threaded conversation depth, direct replies, members who appear in someone else’s post rather than only their own… these tell you more than total reply volume.
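One way to make that concrete is to compute conversational structure from the raw posts instead of counting totals. Here's a minimal Python sketch, assuming a hypothetical flat export where each post carries `id`, `author`, and `parent_id` fields (these names are my invention, not any platform's API):

```python
from collections import defaultdict

def presence_metrics(posts):
    """Compute interaction signals from a flat list of posts.

    Each post is a dict with hypothetical fields:
      id, author, parent_id (None for top-level posts).
    """
    by_id = {p["id"]: p for p in posts}
    children = defaultdict(list)
    for p in posts:
        if p["parent_id"] is not None:
            children[p["parent_id"]].append(p["id"])

    # Thread depth: length of the longest reply chain under a post.
    # Depth 1 means a post with no replies at all.
    def depth(post_id):
        return 1 + max((depth(k) for k in children[post_id]), default=0)

    top_level = [p for p in posts if p["parent_id"] is None]
    max_depth = max((depth(p["id"]) for p in top_level), default=0)

    # Cross-talk: authors who replied under someone ELSE's top-level
    # post, rather than only appearing in their own threads.
    def root_author(post_id):
        p = by_id[post_id]
        while p["parent_id"] is not None:
            p = by_id[p["parent_id"]]
        return p["author"]

    cross_talk = {p["author"] for p in posts
                  if p["parent_id"] is not None
                  and p["author"] != root_author(p["id"])}

    return {"posts": len(posts),
            "max_thread_depth": max_depth,
            "cross_talk_authors": len(cross_talk)}
```

A forum where `posts` climbs but `max_thread_depth` hovers at 1 and `cross_talk_authors` stays near zero is the lights-on ghost town: everyone posting into the void, nobody appearing in anyone else's thread.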
It means friction (in the right places) is protective. A small barrier to posting - a format requirement, a prompt that asks for specificity, a norm that replies should reference what they’re replying to - filters out the low-effort automated churn without suppressing genuine participation. Not all friction is bad friction.
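As a sketch of what friction in the right places could look like in software - the word-count threshold and the crude overlap heuristic are invented for illustration, not a real platform's moderation API:

```python
MIN_WORDS = 12  # assumed threshold; tune to your community's norms

def friction_check(reply_text, parent_text):
    """Lightweight, visible friction applied before a reply is accepted.

    Returns a list of prompts to show the poster; an empty list means
    the reply clears the bar. Heuristics here are deliberately crude.
    """
    prompts = []
    words = reply_text.split()
    if len(words) < MIN_WORDS:
        prompts.append("Can you say a bit more? Short replies read as drive-bys.")

    # Ask the reply to reference what it's replying to: check for any
    # content-word overlap with the parent post (words over 4 letters).
    parent_words = {w.lower().strip(".,!?") for w in parent_text.split()
                    if len(w) > 4}
    reply_words = {w.lower().strip(".,!?") for w in words}
    if parent_words and not (parent_words & reply_words):
        prompts.append("Which part of the post are you responding to?")
    return prompts
```

The point isn't that this heuristic is smart (it isn't); it's that even a dumb, visible speed bump raises the cost of "this is so important, thanks for sharing" without raising the cost of an actual answer.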
And it means you should probably get more deliberate about modeling what good participation looks like. Not as a policy document. As behavior… yours, your team’s, your community champions’. Humans still imitate humans. For now, anyway.
LinkedIn and Reddit aren’t cautionary tales about someone else’s problem. They’re early readings on a trend that’s coming for every community that touches the internet, which is all of them.
The crowd hasn’t disappeared. But you’re going to need to get better at knowing when it’s actually there.


I just love how you think
> Not all friction is bad friction.
In case you haven't seen it, here is a link to an academic paper published in January 2026 titled "The case against efficiency: friction in social media" that you might find interesting:
https://pmc.ncbi.nlm.nih.gov/articles/PMC12827046/
The authors present a state space representation of friction with three axes:
- Agency Reducing ↔ Agency Enhancing
- Visible ↔ Invisible
- Content-specific ↔ Content-agnostic
For example:
- "Prompting user to agree before posting disputed content" is categorized as Agency Enhancing, Visible, and Content-specific.
- A "'Circuit breaker' slowing the spread of content" is categorized as Agency Reducing, Invisible, and Content-agnostic.
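One lightweight way to hold those three axes in your head (or your tooling) is as a small typed vocabulary - the names below are my own paraphrase, not the paper's code:

```python
from dataclasses import dataclass
from enum import Enum

class Agency(Enum):
    REDUCING = "agency-reducing"
    ENHANCING = "agency-enhancing"

class Visibility(Enum):
    VISIBLE = "visible"
    INVISIBLE = "invisible"

class Scope(Enum):
    CONTENT_SPECIFIC = "content-specific"
    CONTENT_AGNOSTIC = "content-agnostic"

@dataclass(frozen=True)
class Friction:
    """A friction intervention, tagged along the paper's three axes."""
    name: str
    agency: Agency
    visibility: Visibility
    scope: Scope

# The paper's two examples, encoded as points in the state space.
confirm_prompt = Friction(
    "Prompting user to agree before posting disputed content",
    Agency.ENHANCING, Visibility.VISIBLE, Scope.CONTENT_SPECIFIC)

circuit_breaker = Friction(
    "'Circuit breaker' slowing the spread of content",
    Agency.REDUCING, Visibility.INVISIBLE, Scope.CONTENT_AGNOSTIC)
```

Tagging your own community's barriers this way makes it easy to notice if everything you've deployed clusters in one corner of the space.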
I found the paper useful for a couple of reasons:
- It provides evidence from several domains of how small design decisions can change context.
- The dimensions of friction might help me think about productive friction with a bit more clarity.