Google’s DeepMind team has introduced Gemini Robotics On-Device, a new version of its vision-language-action model that runs entirely on robots without needing an internet connection. Announced in a blog post, this compact AI is designed to handle a wide range of tasks with two-armed robots, delivering both dexterity and flexibility in real-world settings.
Building on the original Gemini Robotics model launched in March, the on-device edition can interpret natural-language instructions—much like ChatGPT—and directly control a robot’s movements. Google highlights that it requires only minimal computing power, making it well suited to situations where low latency is critical or where connectivity is unreliable or unavailable.
In demonstrations, Gemini Robotics On-Device has performed complex chores such as folding clothes, unzipping bags, and assembling industrial belts. Google reports that the model matches the performance of its cloud-based counterpart on many benchmarks and even surpasses other local AI systems when faced with multi-step instructions or brand-new tasks.
Originally trained on ALOHA robots, the model has been successfully adapted to other platforms, including the bi-arm Franka FR3 and the Apollo humanoid robot. On the Franka FR3, it handled unfamiliar objects and industrial assembly tasks; on Apollo, it manipulated new items in general household scenarios.
Developers interested in experimenting with Gemini Robotics On-Device can access it via Google’s SDK. The advance comes as more tech companies explore AI for robotics—NVIDIA showcased its GR00T N1 humanoid model at GTC 2025, and Hugging Face is working on an open-source robot AI of its own.
With Gemini Robotics On-Device, Google aims to bring smarter, more capable robots to settings ranging from factories to homes—without relying on constant cloud connectivity.