Last week, Google DeepMind announced Genie 2, a new video model that generates plausible, consistent, playable 3D environments based on a prompt image.
DeepMind claims that Genie 2 has a slew of new and exciting emergent capabilities, such as modeling lighting and reflections, and that it can even generate playable worlds from real-world images. Within these generated worlds, the model can also create animated characters that can act as embodied agents for training purposes. The characters can interact with the world by doing things like popping balloons, opening doors, and even engaging with non-playable characters.
The DeepMind team seems hopeful that these AI-generated video games will be a helpful step in training agents. “Genie 2 shows the potential of foundational world models for creating diverse 3D environments and accelerating agent research,” they wrote in a recent post. Video games are a useful tool in AI research: they are interactive, present a unique blend of challenges, and serve as safe playgrounds to train, test, and measure agents that may end up in products used in the real world.