How to use Google's new Product Announcements at work
🧠 Knowledge Series #68: Use Vids for Roadmaps, Firebase for mini apps, AI agents for idea generation and more.
🔒The Knowledge Series is available for paid subscribers. Get full ongoing access to 60+ explainers and tutorials to grow your technical knowledge at work. New guides added every month.
This week, Google held its annual Cloud Next conference, where it revealed so many product announcements and changes to existing tools like Google Sheets and Docs that it’s difficult to unpack it all.
One thing is pretty clear, though: long gone are the days when Google was the butt of AI jokes. The company now sits firmly at the top of some of the most important benchmark leaderboards with Gemini 2.5 Pro and video generation models like Veo 2. ChatGPT may be winning the consumer chatbot war, but Google is not only proving itself a force in model building; as AI models become commoditised, it is also positioning itself as a world-class leader in building valuable products on top of those models.
Based on this week’s impressive event, it feels like Google is back.
In this Knowledge Series, we’re going to focus on the product announcements from the past few days that are most relevant to product teams - along with practical ways you can use them at work, from crafting roadmap updates and automating workflows to generating new ideas with AI agents.
Coming up: how to use the new product announcements at work, including:
Google Vids for Product Roadmaps and internal updates
Firebase Studio for building mini apps
Workspace Flows for stakeholder updates and automation
Conversational Analytics for getting the data you need and building beautiful interactive reports to share with colleagues
New ways to use Google Sheets for data forecasting and analysis
New Idea Generation agents in Agentspace for complex problem solving and generating hundreds of innovative ideas
Google Vids for Product Roadmaps and other internal updates
I’ve got to admit, despite being a pretty hardcore Google Workspace user, I’ve never really dabbled much in Google Vids. But after this week’s announcements I thought I’d check it out to see if it was worth using. I’m now a little embarrassed I hadn’t tried it sooner. To be fair, it’s not a product you’re likely to need every day, but for times when you want to bring artifacts to life in something that isn’t just a dull deck, Vids is excellent.
The announcement made this week is that Google Vids is going to support Veo 2, Google’s advanced video generation model. If you’ve not used Veo 2 before, it’s definitely worth experimenting with: it now regularly ranks at the top of the video generation leaderboards.
Its integration into Vids means you’ll be able to embed generative AI clips directly into your videos, which is ideal for scenarios where you want to celebrate a team’s success by including them in dynamic video content, or embed UI from your product and bring it to life.
Example - How to use Veo in a Roadmap update Vid
In this example, let’s imagine we have a team photograph of people who worked on a specific project together. Using Veo and Vids, you’d be able to bring that still photo to life as a short clip of the team working together.
We could start with a static image like this (AI-generated in ChatGPT, just to give us something to work with as an example):
And then transform it into a short animated video:
The final step would be to add this short animated clip to our interactive product roadmap review created in Google Vids.
Veo 2 isn’t available natively in Vids just yet; it was announced this week and will roll out across Workspace apps over time. In the meantime, you can practise transforming images into videos and adding them directly into Google Vids for any upcoming work artifacts that would otherwise be decks or Miro boards but could benefit from a little more life.
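If you’d rather not wait to experiment, the same image-to-video step can already be done programmatically via the Gemini API. Here’s a minimal sketch, assuming the google-genai Python SDK and the veo-2.0-generate-001 model ID (model names, parameters and availability may differ by the time you read this, so check the current docs; the photo, prompt and output filenames are just placeholders):

```python
import time

from google import genai
from google.genai import types

# Assumes an API key is set in the environment (or pass api_key=... explicitly).
client = genai.Client()

# Kick off an image-to-video job: the still team photo plus a text prompt
# describing the motion we want Veo to add.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # model ID at the time of writing; check current docs
    prompt="The team laughs and celebrates around the table, handheld camera, natural light",
    image=types.Image(
        image_bytes=open("team_photo.png", "rb").read(),
        mime_type="image/png",
    ),
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",
        number_of_videos=1,
    ),
)

# Video generation runs asynchronously, so poll the operation until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download the generated clip, ready to drop into a Google Vids timeline.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("team_clip.mp4")
```

The resulting clip can then be uploaded into a Google Vids project like any other video.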
Firebase Studio for building mini apps and full stack prototypes
Next, let's take a look at a brand new product announced at Google Cloud Next that's been generating a lot of buzz over the past few days and will let you take your "vibe coding" skills one step further.
Unlike the Canvas abilities released a few weeks back, which were well suited to prototypes, Firebase Studio allows users (engineers and non-engineers alike) to build not just prototypes, but full-stack apps.
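To make "full stack" a little more concrete: behind the generated front end, these mini apps typically lean on Firebase services such as Cloud Firestore for their data. Firebase Studio scaffolds its own project code for you, so treat the following as a purely illustrative sketch of that data layer, written with the official firebase_admin Python SDK (the "feedback" collection and its fields are hypothetical):

```python
# Illustrative only: a tiny data-layer slice for a feedback "mini app",
# using the official firebase_admin SDK and Cloud Firestore.
import firebase_admin
from firebase_admin import firestore

# Uses Application Default Credentials, e.g. when running inside a
# Firebase / Google Cloud project.
firebase_admin.initialize_app()
db = firestore.client()


def save_feedback(user: str, message: str) -> str:
    """Store one piece of product feedback and return the new document ID."""
    _, doc_ref = db.collection("feedback").add(
        {"user": user, "message": message, "status": "new"}
    )
    return doc_ref.id


if __name__ == "__main__":
    print(save_feedback("pm@example.com", "Love the new roadmap view"))
```

The point is less the specific code than the fact that Firebase Studio wires this kind of backend plumbing up for you while you describe the app in plain language.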
One user has described it as “basically a free alternative to Cursor, Bolt or v0, directly in the browser.” But can it really compete with those other apps?
Before we dig into some examples together, here’s a snapshot of some of Firebase Studio’s core abilities and how these compare to other products on the market: