LAS VEGAS: Google has launched two new AI research agents via the Gemini API, marking the most significant upgrade to its autonomous research capabilities since the product launched. Deep Research and Deep Research Max are now available in public preview for paid Gemini API tiers. Both are built on Gemini 3.1 Pro and were announced at Google Cloud Next 2026 in Las Vegas.
The two agents serve different use cases. Deep Research is optimised for speed and designed for interactive, user-facing applications where low latency matters. Deep Research Max sits at the opposite end of the trade-off: it runs asynchronously, spending extended compute cycles reasoning iteratively, consulting more sources, and refining its output before returning a comprehensive final report.
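The asynchronous mode follows the familiar long-running-job pattern: start a job, poll for status, collect the report when it completes. A minimal sketch of that client-side loop, using an in-memory stand-in for the service (the `start_research`/`get_status` names and job shape are hypothetical, not Google's actual API):

```python
import time

class FakeResearchService:
    """In-memory stand-in for an async research service; a real client
    would replace these methods with HTTP calls to the API."""

    def __init__(self):
        self._jobs = {}

    def start_research(self, query):
        # Kick off a long-running job and return its handle immediately.
        job_id = f"job-{len(self._jobs)}"
        self._jobs[job_id] = {"query": query, "polls": 0}
        return job_id

    def get_status(self, job_id):
        # Simulate a job that finishes after a few polls.
        job = self._jobs[job_id]
        job["polls"] += 1
        if job["polls"] >= 3:
            return {"state": "done", "report": f"Findings for: {job['query']}"}
        return {"state": "running"}

def run_to_completion(service, query, poll_interval=0.01):
    """Start an async research job and poll until the report is ready."""
    job_id = service.start_research(query)
    while True:
        status = service.get_status(job_id)
        if status["state"] == "done":
            return status["report"]
        time.sleep(poll_interval)

report = run_to_completion(FakeResearchService(), "EV battery supply chain")
```

In production the poll interval would be seconds or minutes rather than milliseconds, or replaced entirely by a webhook, since Max jobs are designed to run for extended periods.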
According to Google's internal benchmarks, Max identifies critical nuances that the standard agent misses. Both agents support multimodal inputs, including PDFs, CSVs, images, audio, and video.
The most significant new capability is MCP support. For the first time, developers can combine open web data with proprietary enterprise information in a single API call. A financial analyst running Deep Research Max can pull from Google Search, a private data repository, FactSet, S&P Global, and PitchBook simultaneously, all governed by a single configuration. That collapses what previously required significant custom engineering into a setup step. Reports include native charts and infographics generated inline.
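The "single configuration" idea can be illustrated with a request payload that lists open web search alongside MCP-connected proprietary sources as tools of the same call. Every field name, endpoint, and the `build_research_request` helper below are hypothetical, sketched only to show the shape of the idea rather than Google's actual schema:

```python
# Hypothetical payload builder: one research call mixing open-web search
# with MCP-connected proprietary data sources. Illustrative only.
def build_research_request(query, mcp_servers, enable_web_search=True):
    tools = []
    if enable_web_search:
        tools.append({"type": "web_search"})  # open web, e.g. Google Search
    for server in mcp_servers:
        # Each MCP server exposes one proprietary source to the agent.
        tools.append({"type": "mcp", "server_url": server})
    return {
        "agent": "deep-research-max",
        "query": query,
        "tools": tools,
        "output": {"format": "report", "inline_charts": True},
    }

request = build_research_request(
    "Q3 outlook for semiconductor capex",
    mcp_servers=[
        "https://mcp.example.com/factset",    # placeholder endpoints
        "https://mcp.example.com/pitchbook",
    ],
)
```

The point of the pattern is that governance lives in one place: adding or revoking a data source means editing the `tools` list, not rewriting custom plumbing for each provider.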
Google launched Deep Research as a consumer feature in December 2024, initially powered by Gemini 1.5 Pro. It reached developers via API in December 2025. Today’s release is the enterprise pivot, moving the product from research assistant to infrastructure for regulated industries including finance, life sciences, and market intelligence.
Expansion to Google Cloud customers is planned for the coming weeks. The launch competes directly with OpenAI’s research agents in ChatGPT and Exa’s Deep Max API.
