Should AI Features Be Shipped Before the Use Cases Are?
The arms race isn't about who ships AI first. It's about what use case they are shipping it for.
AI Arms Race
Since the AI arms race began in earnest, product teams have been forced into a choice that feels binary: ship fast or someone else will ship it first. The pressure is real. Competitors are moving fast. Investors are watching. And in this environment it's tough to find a company that isn't working AI into its product.
But here's what that environment tends to obscure: releasing an AI-centric feature early and incrementally figuring out how to make it valuable is risky, and it can hurt long-term adoption.
The AI arms race in SaaS has kick-started a sense of panic across the industry, and the side effect of panic is speed. In the rush, one of the most valuable product principles is being quietly sidelined: what is the use case for the AI feature we are building?
Take Rovo AI, Atlassian's in-product AI built around the Atlassian ecosystem, which has been widely available for about a year now. Atlassian positions it as "a new app that helps you turn information into action," yet it simply showed up in the Atlassian/Jira UI with no clear prompt for what to do with it.
Fast-forward to today: rather than anchoring Rovo to a specific purpose, Atlassian promotes it alongside a library of use cases, putting the burden on users to find the one that fits their own context. That approach inverts the conventional wisdom that products should be tailored to customers' existing needs.
The Cost of Making Customers Find Their Own Use Case
That approach exposes a deeper problem: the onus of AI features like Rovo is placed entirely on the user. You need to BYOU: Bring Your Own Use Case. This shifts the anxiety of AI disruption off of Atlassian's product leadership and onto the users of Atlassian products, imparting a sense of: we've added AI to our products, our job is done, now it's your turn to find value in it.

That isn't unheard of. It amounts to in-market product discovery, which, if iterated on correctly, can create an early lead for new entrants. But the cost of being too early is that users write off your feature before it matures. Iteration can be an effective product strategy, but you have to build trust and value early enough that users stick around to see the product get better.
The Real Arms Race Nobody Is Winning Yet
Here is the thing most product conversations about AI get wrong: the arms race was never about who could ship AI. It's 2026. Everybody has shipped products with AI in them. The technology is accessible, the APIs are cheap, the integrations are table stakes. The question of "do you have AI" has already been answered across the entire industry.
The arms race now is about who has actually figured out what to do with it.
Use case definition is the new moat. Not model quality. Not integration depth. Not the number of AI features in your changelog. The product teams that are ahead right now are the ones that can answer a specific question: what job is the user trying to accomplish, and where does this AI capability fit into that job?
While OpenAI rushed ChatGPT to market, Anthropic was quietly honing Claude's ability to write high-quality code, and its models came to dominate coding benchmarks. The team at Anthropic had a use case, and they doubled down on nailing it. Rather than becoming a jack-of-all-trades like ChatGPT, Claude has quietly become the most trusted option for agentic coding, to the point that Anthropic has had to curb heavy usage with rate limits.
The Cost of Being Too Early
None of this is to say "don't ship until you find the perfect use case." That's not the argument. But consumer sentiment is hard to shape, and the cost of being too early is high: users are prepared to write off new AI features quickly. When a feature becomes available, users try it out, and if they aren't impressed, getting them to come back once the feature has found its purpose is tough.
The next time an AI feature is on the roadmap, before sprint planning, before the go-to-market deck, before the launch announcement is written, ask this:
What is the user trying to do, and how does this make that job meaningfully easier?
If the honest answer is "we're not sure yet, but we need something out there," that's not a shipping decision. That's a research problem. And the cost of getting that wrong — in churn, in damaged trust, in features users permanently write off — is almost certainly higher than the cost of taking another few weeks to find the answer.
The winners in AI won't be the companies that shipped first. They'll be the ones that shipped with focus. Right now, that bar is low enough that clearing it is a genuine competitive advantage.
Don't waste it.