Welcome to the Gemini API Cookbook
This cookbook provides a structured learning path for using the Gemini API, focusing on hands-on tutorials and practical examples.
For comprehensive API documentation, visit ai.google.dev.
Gemini 3: For the most recent updates on our latest generation, please check the Get Started and thinking guides, which include migration guides.
> Nano-Banana Pro: Go bananas with our new flagship image generation model, now with thinking, search grounding, and 4K image generation. Get started here with a ton of examples!
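As a quick taste, here is a minimal sketch of image generation with the google-genai Python SDK (the model name is an assumption; check the guide for the current identifier):

```python
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Ask an image-capable Gemini model for a picture; the model name is an assumption.
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents="A photorealistic banana astronaut on the moon, 4K",
)

# Image bytes come back as inline_data parts, alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data:
        with open("banana.png", "wb") as f:
            f.write(part.inline_data.data)
    elif part.text:
        print(part.text)
```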
Navigating the Cookbook
This cookbook is organized into two main categories:
- Quick Starts: Step-by-step tutorials covering introductory topics and specific API features.
- Examples: Practical use cases demonstrating how to combine multiple features.
We also showcase Demos in separate repositories, illustrating end-to-end applications of the Gemini API.
What's New?
Here are the recent additions and updates to the Gemini API and the Cookbook:
- Nano-Banana Pro: Use Gemini's native image generation capabilities to edit images with high consistency or generate visual stories. Now with thinking, search grounding, and 4K output!
- File Search: Discover how to ground generations in your own data with a hosted RAG system in the File Search quickstart.
- Grounding with Google Maps: Get started using factual geographical data from Google Maps in your apps! See the Google Maps section of the Grounding Guide.
- Veo 3.1: Get started with our video generation model using this Veo guide, covering image-to-video and video extension!

- Gemini Robotics-ER 1.5: Learn about this new Gemini model designed for spatial understanding and reasoning in robotics applications.

- Lyria and TTS: Get started with podcast and music generation using the TTS and Lyria RealTime models.
- Grounding: Discover the different ways to ground Gemini's answers, using tools ranging from Google Search to YouTube, URLs, and the new Maps grounding tool (a short code sketch follows this list).
- Batch API: Use the Batch API to send large volumes of non-time-sensitive requests to the model and get up to a 90% discount.
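As a taste of the grounding tools above, here is a minimal sketch of grounding with Google Search using the google-genai Python SDK (the model name is an assumption; see the Grounding quickstart for the full walkthrough):

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Enable the Google Search tool so the model can ground its answer in fresh results.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name; any current Gemini model works
    contents="Who won the most recent Formula 1 Grand Prix?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)

print(response.text)
# Sources used for grounding (search queries, URLs) are attached to the candidate.
print(response.candidates[0].grounding_metadata)
```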
1. Quick Starts
The quickstarts section contains step-by-step tutorials to get you started with Gemini and learn about its specific features.
To begin, you'll need a Gemini API key, which you can create for free in Google AI Studio.
We recommend starting with the following:
- Get started: An introduction to Gemini models and the Gemini API, covering basic prompting and multimodal input.
Then, explore the other quickstarts tutorials to learn about individual features:
- Grounding: Use Google Search for grounded responses.
- Code execution: Generate and run Python code to solve complex tasks and even output graphs.
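For instance, here is a minimal sketch of the code execution tool with the google-genai Python SDK (the model name is an assumption; the Code execution quickstart covers the details):

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Let the model write and run Python in a sandbox to work out the answer.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name
    contents="What is the sum of the first 50 prime numbers? Generate and run code to check.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

# The response interleaves text, the generated code, and its execution result.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    if part.executable_code:
        print(part.executable_code.code)
    if part.code_execution_result:
        print(part.code_execution_result.output)
```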
2. Examples (Practical Use Cases)
These examples demonstrate how to combine multiple Gemini API features or 3rd-party tools to build more complex applications.
- Browser as a tool: Use a web browser for live and internal (intranet) web interactions.
- Plotting and mapping Live: Mix the Live API and Code execution to solve complex tasks live.
- 3D Spatial understanding: Use Gemini's 3D spatial abilities to understand 3D scenes.
- Gradio and Live API: Use Gradio to deploy your own instance of the Live API.
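To give an idea of the kind of prompt the spatial understanding example builds on, here is a rough, purely illustrative sketch (the model name, image file, and JSON fields are assumptions; the notebook shows the real prompt and parsing):

```python
from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

prompt = (
    "Detect the main objects in this image and return JSON with a 'label' "
    "and an approximate 3D bounding box for each one."
)

# PIL images can be passed directly alongside text in the contents list.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name
    contents=[Image.open("room.jpg"), prompt],
)
print(response.text)
```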
3. Demos (End-to-End Applications)
These fully functional, end-to-end applications showcase the power of Gemini in real-world scenarios.
- Gemini CLI: Open-source AI agent that brings the power of Gemini directly into your terminal
- Gemini API quickstart: Python Flask App running with the Google AI Gemini API, designed to get you started building with Gemini's multi-modal capabilities
Official SDKs
The Gemini API is a REST API. You can call it directly using tools like curl (see the REST examples or the great Postman workspace), or use one of our official SDKs.
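For example, a minimal call with the Python SDK (google-genai) looks roughly like this (a sketch only; the model name is an assumption):

```python
# pip install google-genai
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name; any current Gemini model works
    contents="Write a haiku about bananas.",
)
print(response.text)
```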
Get Help
Ask a question on the Google AI Developer Forum.
The Gemini API on Google Cloud Vertex AI
For enterprise developers, the Gemini API is also available on Google Cloud Vertex AI. See this repo for examples.
Contributing
Contributions are welcome! See CONTRIBUTING.md for details.
Thank you for developing with the Gemini API! We're excited to see what you create.