Google just gave Gemini the ability to actually go do things — not just answer questions about them. The March 2026 Pixel Drop introduced multi-step agentic tasks: you tell Gemini what you need, it opens the right app, fills in the details, and completes the order in the background while you keep using your phone. You don’t switch apps, you don’t tap through menus, you just move on.
The feature is live now in beta for Pixel 10 and coming to the Samsung Galaxy S26 series. According to 9to5Google, supported apps at launch include Uber, DoorDash, and Grubhub — rideshare and food delivery being the obvious starting point since they’re structured, repeatable, and high-frequency. You can view Gemini’s progress or kill the task at any point.
A long press on the side button summons Gemini and hands off the task. From there, it’s hands-free — Gemini navigates the app, makes the selections, and confirms the action. According to TechCrunch, Google is calling this a beta precisely because the system still needs supervision — but the direction is unmistakable.
This Is What Siri Was Supposed to Be
Apple has been promising agentic Siri features since WWDC 2024, with delays, quiet descoping, and six-month push-backs ever since. According to The Verge, Google and Samsung have now shipped the thing Apple couldn’t. The gap isn’t in marketing — it’s in execution.
The hard part of agentic phone AI isn’t the language model — it’s the plumbing: knowing which UI elements to tap, handling dynamic app layouts, and recovering gracefully when something changes mid-flow. Google has been building that infrastructure since Gemini replaced Google Assistant entirely in early 2026.
The Gemini rollout also includes Magic Cue — a feature that reads context from your conversations and proactively surfaces suggestions without being asked. If you’re texting friends about where to eat, Gemini will pop up a restaurant list based on your preferences and the conversation. It’s the kind of ambient intelligence that sounds annoying in a demo and ends up being the feature you miss when it’s gone.
Why This Matters Beyond Ordering Pizza
The pizza order is the demo. The actual implication is that AI on your phone is shifting from a conversational tool into an execution layer. As we’ve covered before, Anthropic’s labor market research showed AI is already automating a measurable chunk of white-collar tasks on desktops — the same dynamic is now reaching your pocket.
The difference with phones is scale. There are billions of Android devices in the world, and Gemini is now the default assistant on all of them. Starting with food delivery and rides is a wedge — the apps and task types that follow will expand as fast as Google can validate them.
Google says more apps and more task types are coming. What they haven’t said is how fast, or where the line is. That’s the part worth watching.