The Tension is the Work
Lessons in balancing iteration and invention in the AI era
One of the strange luxuries, and persistent challenges, of working in Google Labs is that you’re always living in the future. Or at least trying to.
We’re building Jules in two time zones: now and not yet. On one hand, we have a real product in the world. People are using it. That means we owe them clarity, stability, and thoughtful iteration. Meet them where they are. Respect their time. Don’t get in their way.
On the other hand, we’re inventing something that didn’t really exist before: new interfaces, new workflows, new instincts. That means we have to see around corners. Not just one step ahead, but ten. Otherwise we risk showing up too late. Or worse: too early, with a tool that no one’s ready for and no use case to anchor it.
This is the paradox of Google Labs: be visionary, but also extremely grounded. Be fast, but not reckless. Ship with taste. Iterate with intention.
And because we’re building in AI, the timeline compression is real. Sometimes you go from one step ahead to ten in a day. A new model drops, or a behavior shift happens in a weekend, and suddenly what felt niche becomes default. That’s the game. Blink and you miss it.
The hard part isn’t that AI is moving fast. It’s that the team has to move fast while holding a much higher bar. One of the internal challenges I see over and over is this: we underplay how hard things really are. It’s “easy” to ship a working product. It’s hard to ship something well-integrated, with good taste, and deep respect for devex. Anyone can add AI to a product these days. It’s much harder to subtract everything unnecessary and leave behind something usable, powerful, and precise.
We’re trying to do this with Jules.
The AI part is the catalyst, but most of the work we’re doing isn’t really about AI. It’s about unlocking new developer workflows that weren’t previously possible, while balancing the necessary workflows that make the tool useful. It’s about enabling that creative control loop where the human still feels like the maestro, not a passive recipient of autocomplete in an IDE. Or just there to click the “enter” button in a CLI. It’s about tools that fade into the background instead of getting in your way. That’s the vision. Getting there is messy, nonlinear, and often thankless.
We’re working our asses off anyway.
There’s a video clip I come back to a lot when thinking about this dynamic: this one. The part that sticks:
“We need to support that team. They’re all working their butts off. Mistakes will be made… but it’s so much better than where we were. And I think we’re going to get there.”
That’s how it feels right now to work on the AIDA team in Google Labs. We’re not just building AI tools. We’re trying to build ones with taste and vision: tools that understand developers, show up at the right moments, remove the pain, and ask the right questions. Tools that don’t just generate code, but create momentum and unlock new, powerful workflows.
So yes, we’re living one step ahead. And ten steps ahead. And some days it feels like a gamble. But I’ll take that over building something irrelevant or late.
We’re getting there.
Really enjoyed this, Kathy. Thanks for sharing!
I think for the first time in history, user intent isn’t inferred from click-streams but spoken straight to the product in natural language. From this perspective, one thing I’ve noticed is how older users take to AI: they value its infinite patience, letting it repeat an answer, go slower, and clarify as many times as needed. Younger users are the opposite: short attention spans, zero tolerance for friction, and they often underspecify what they want, so the AI “fails” them faster.
That’s also why I love the way you have built Jules: begin with intent-clarity, map a plan, then generate. It feels like you’re teaching the user and the model to slow down just enough to get something valuable.
How do you see today’s kids adapting to AI experiences? Will the impatience curve keep steepening, or will new patterns pull them in?