We Asked AI What It Wanted. Then We Built It.
What happens when you stop fighting AI-generated answers about your product and start engineering better source material?
Sometime in late 2024, I was doing my usual end-of-month data-nerd analysis and found myself awfully perplexed by my community’s direct traffic. For a community that should have been growing - we had been growing steadily! - we were suddenly in decline. And yes, some nerves set in. What did I do?
Well, as most of us know by now, we (or I) didn’t do anything. It was just our AI overlords moving in… noisily, clumsily, and without much warning. Not much we could do about that, and so we moved on with life and they moved on with their hallucinations.
But then… then we started noticing the problems.
We humans love quick dopamine hits, and nothing hits quite like smugly thinking you’ve solved your problem super fast with a few keywords and a smash of the enter key. And man, did Google’s AI Overviews seem to deliver at first.
In reality, LLMs are kind of just glorified search engines, except they’re tuned to make educated guesses about which words likely surround the words in question. We talked about this in Your Community Isn’t Messy. It’s Training Data.
And all of that is well and good, as the internet has many words to consume and frankly we humans could use a hand there. However, my team and I started noticing that Google’s AI Overviews were surfacing subtly wrong answers about our product. And lucky for us, we didn’t panic. We got curious.
The pattern was interesting: a customer would search something reasonable - like whether they could book multiple appointments in a single session - and the AI Overview at the top of the results page would respond with a confident, slightly off answer. Not a hallucination exactly. More like the model had found a thread with 21 replies, half of them contradicting each other, and landed on the most cautious interpretation it could find. I mean… AI… same. I get it.
That gave us a hypothesis: if community content is what these models are pulling from, but community content is inherently conversational and messy, then the fix isn’t to produce less community content. It’s to give the models something better to work with - a sort of hub-and-spoke model, with one clean, canonical source at the center.
So we ran an experiment.
The Setup
We audited our highest-traffic content (top-viewed threads, most common queries, biggest search drivers, highest impressions) and looked at what Google was actually surfacing and what AI Overviews were doing with it. The failure mode turned out to be pretty consistent. Someone posts “can’t do this yet, but it will soon”. A human reads that and understands timing, context, and subtle nuance. A model reads it and files it under “can’t do this” because it craves binary inputs and outputs. Multiply that across hundreds of pieces of user-generated content (UGC) and you’ve accidentally given Google’s AI a reason to be confidently wrong about your product.
LLMs aren’t great with grey area. When content is ambiguous, model confidence drops. When confidence drops, answers get worse. We wanted to see if we could fix that upstream.
We chose to focus on Google Search and AI Overviews specifically because the data is accessible (hello, Google Search Console), the surface is one we knew our users were actually hitting, and it was feasible to test and measure in a reasonable timeframe. It doesn’t capture everything - other search engines, direct LLM queries in tools like Claude, ChatGPT, or Gemini are a different problem - but it was a smart, practical place to start. And honestly, it’s probably a smart place for other community teams to start too.
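For the curious, the audit itself doesn’t need fancy tooling. Here’s a minimal sketch of the data pull using the Google Search Console API - the property URL, credentials file, and date range below are placeholders, not our actual setup:

```python
# Minimal sketch: pull top pages and queries from Google Search Console
# to find the content AI Overviews are most likely drawing from.
# Assumes a service account with read access to the property; the site
# URL, key file, and dates are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
gsc = build("searchconsole", "v1", credentials=creds)

resp = gsc.searchanalytics().query(
    siteUrl="https://community.example.com/",
    body={
        "startDate": "2024-10-01",
        "endDate": "2024-12-31",
        "dimensions": ["page", "query"],
        "rowLimit": 500,
    },
).execute()

# Sort by impressions: these are the threads worth auditing first.
rows = sorted(resp.get("rows", []), key=lambda r: r["impressions"], reverse=True)
for row in rows[:25]:
    page, query = row["keys"]
    print(f'{row["impressions"]:>8} impressions | {query} -> {page}')
```

From there, comparing those top pages against what AI Overviews actually say about them is the part that takes human judgment.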
The Build
We did something a little meta: we asked AI what kind of content it actually wanted to consume. Clear titles. Direct answers. Step-by-step formats. Headers. FAQs. Links to authoritative sources. Then we built a template around exactly that and created an AI agent workflow to do the heavy lifting. It would read the original UGC thread from its URL, cross-reference our help center, apply the template, and output a clean document ready for human-in-the-loop review before publication.
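We haven’t published the internals of that workflow, but the shape of it is easy to sketch. Here’s a hypothetical version assuming an OpenAI-compatible client - the model name, template wording, and URLs are illustrative stand-ins, not what we actually ran:

```python
# Hypothetical sketch of the drafting step. Model name, template wording,
# and URLs are illustrative only; the output always goes to a human
# reviewer before anything is published.
import requests
from openai import OpenAI

TEMPLATE = """Rewrite the community thread below as an "Asked + Answered" article:
- A clear, question-style title
- A direct answer in the first paragraph
- Step-by-step instructions under headers where relevant
- A short FAQ covering edge cases raised in the thread
- Links to the authoritative help center docs provided
Resolve contradictions in favor of the help center content."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_article(thread_url: str, help_center_excerpts: str) -> str:
    thread_html = requests.get(thread_url, timeout=30).text
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": TEMPLATE},
            {
                "role": "user",
                "content": f"THREAD:\n{thread_html}\n\nHELP CENTER:\n{help_center_excerpts}",
            },
        ],
    )
    return resp.choices[0].message.content  # draft for human-in-the-loop review
```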
The result is what we’re calling “Asked + Answered” articles: structured Q&A with clear solutions and resolution summaries, purpose-built to become the preferred ingestion target for AI Overviews and public models. We published them into a dedicated, low-profile forum inside our community: not actively promoted, but indexable, findable, and fully controlled by us. And most importantly: still human friendly.
And the original UGC? It stays, and it serves in a few ways. It still powers discovery - customers find us through those threads constantly. But now when Google goes looking for what we can and can’t do, it has a better source to pull from. And it augments the AI-optimized answer, which builds confidence and additional context.
UGC as discovery engine. Optimized content as ingestion target. Both doing their jobs.
What We Found
The hypothesis held… and then some.
More than 50 optimized articles are now fully ingested by public AI systems. Google impressions for the community grew roughly 37% from before the initiative to after, and about 90% year-over-year, well above our historical average. We hit the largest month ever for page views, unique visitors, and impressions in the history of the community… while new topic creation actually declined. We theorize the growth is coming from better discovery and utilization of existing content, not just more volume.
The traffic pattern tells a clear story: UGC continues to drive topic discovery. Optimized content improves ingestion and answer accuracy in the AI Overviews that prospects and customers are landing on every day.
The Bigger Opportunity
AI models and search algorithms tend to rank community pages above static help center content. That means community teams are sitting on a strategic asset that most organizations aren’t fully leveraging yet. The opportunity isn’t just “write better content.” It’s “write content that makes models confident enough to repeat it correctly while continuing to serve your human audience” - and then verify that they actually do… both.
If you’re thinking about what this looks like for your own community: start with Google. The data is there, the surface is familiar, and the wins are measurable. It starts with an audit, a template, and a willingness to publish content that’s more structured than your usual style. The Asked + Answered articles don’t read like traditional community. They read like reference or help docs. That’s the point.
But the payoff is that when someone searches a question about your product, they get an answer you’d actually stand behind. Your community becomes the authoritative source, not the ambiguous one. And that makes you - the steward of both humans and AI knowledge - pretty darn valuable.
That’s a pretty good place to land.
PS: Big thanks to my colleague David Hartman who ran with this pilot, operationalized it, and continues to ensure Calendly Community is at the right place at the right time with the right answer every single day. He’s becoming quite the AI pro. Watch out world!


