After being one of the first companies to roll out a Deep Research feature at the end of last year, Google is now making that same tool available to everyone. Starting today, Gemini users can try Deep Research for free in more than 45 languages — no Gemini Advanced subscription necessary. For the uninitiated, Deep Research allows you to ask Gemini to create comprehensive but easy-to-read reports on complex topics.
Compared to, say, Google’s new AI Mode, Deep Research works slower than your typical chatbot, and that’s by design. Gemini will first create a research plan before it begins searching the web for information that may be relevant to your prompt. When Google first announced Deep Research, it was powered by the company’s powerful but expensive Gemini 1.5 Pro model. With today’s expansion, Google has upgraded Deep Research to run on its new Gemini 2.0 Flash Thinking Experimental model — that’s a mouthful of a name that just means it’s a chain-of-thought system that can break problems down into a series of intermediate steps.
“This enhances Gemini’s capabilities across all research stages — from planning and searching to reasoning, analyzing and reporting — creating higher-quality, multi-page reports that are more detailed and insightful,” Google says of the upgrade.
If Deep Research sounds familiar, it’s because a variety of chatbots now offer the feature, including ChatGPT. Google, however, has been ahead of the curve. Not only was it one of the first to offer the tool, but it’s now also making it widely available to all of its users ahead of competitors like OpenAI.
Separately, Google today announced the rollout of a new experimental feature it calls Gemini with personalization. The same Flash Thinking model that is allowing the company to bring Deep Research to more people will also let Gemini inform its responses with information from the Google apps and services you use.
“With your permission, Gemini can now tailor its responses based on your past searches, saving you time and delivering more precise answers,” says Google. In the coming months, Gemini will be able to pull context from additional Google services, including Photos and YouTube. “This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you.”
To enable the feature, select “Personalization (experimental)” from the model drop-down menu in the Gemini Apps interface. Google explains that Gemini will only draw on your Search history when it determines that information may be useful. A banner with a link will allow you to easily turn off the feature if you find it invasive. Gemini and Gemini Advanced users can begin using this feature on the web starting today, with mobile availability to follow.