The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist. According to their paper on the research, the AI can be played for up to 20 seconds while retaining all the features of the original, such as scores, ammunition levels and map layouts. Players can attack enemies, open doors and interact with the environment as usual.
After this period, the model begins to run out of memory and the illusion falls apart.
I really hope this doesn’t catch on. Games are already horrifically inefficient; imagine if we started making them like this and a 4090 becomes the minimum system requirement for goddamn DOOM.
Thinking quickly, Generative AI constructs a playable version of Doom, using only some string, a squirrel, and a playable version of Doom.
It’s cool but it’s more or less just a party trick.
Regardless of the technology, isn’t this essentially creating a facsimile of a game that already exists? So the tech isn’t really about creating a new game, it’s about replicating something that already exists in a fairly inefficient manner. That doesn’t really help you create something new; I’m not going to be able to come up with an idea for a new game, throw it at this AI, and get something playable out of it.
That and the fact it “can be played for up to 20 seconds” before “the model begins to run out of memory” seems like, I don’t know, a fairly major roadblock?
It’s just a research paper, not a product. It’s about discovering and learning new possible methods and applications.
That’s a fair point actually, I’m looking at it through a product lens, not a research one.
“Playable” nah. “Interactive” yes.
That seems to be the case.
An AI-generated recreation of the classic computer game Doom can be played normally despite having no computer code or graphics.
After this period, the model begins to run out of memory and the illusion falls apart.
Why are we lying about this? Just because it happens in the AI “black box” doesn’t mean it’s not producing some kind of code in the background to make this work. They even admit that it “runs out of memory.” Huh, last I checked, you’d need to be running code to use memory. The AI itself is made of code! No computer code or graphics, my ass.
The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist.
Always a good look. /s
I mean, yes, technically you build and run AI models using code. The point is there is no code defining the game logic or graphical rendering. It’s all statistical predictions of what should happen next in a game of Doom by a neural network. The entirety of the game itself is learned weights within the model. Nobody coded any part of the actual game. No code was generated to run the game. It’s entirely represented within the model.
Imagine you are shown what Doom looks like, are told what the player does, and then you draw the frames of what you think it should look like. While your brain is a computation device, you aren’t explicitly running a program. You are guessing what the drawings should look like based on previous games of Doom that you have watched.
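To make that concrete, here’s a toy sketch of the autoregressive loop being described. This is not Google’s code (GameNGen is a diffusion model trained on gameplay footage; everything here, including the function names, is hypothetical): the point is just that the whole “game” reduces to a function from (recent frames, player action) to the next frame, with no game logic anywhere.

```python
# Toy sketch of frame-by-frame generation. The model never runs game
# rules; it only guesses the next picture from the last few pictures
# and the player's input. All names here are made up for illustration.
import numpy as np

HISTORY = 4   # how many past frames the "model" gets to see
H, W = 8, 8   # a tiny stand-in "screen"

def predict_next_frame(frames, action, rng):
    """Stand-in for the neural network: blends the recent frames with
    the action plus a little noise. A real model would be learned."""
    mixed = np.mean(frames, axis=0)
    noise = rng.normal(0.0, 0.01, size=(H, W))
    return np.clip(mixed + 0.1 * action + noise, 0.0, 1.0)

def play(actions, seed=0):
    rng = np.random.default_rng(seed)
    frames = [np.zeros((H, W)) for _ in range(HISTORY)]
    for a in actions:
        # The only "state" is the recent frame history itself --
        # which is also why the illusion degrades once events fall
        # outside that fixed window.
        frames.append(predict_next_frame(frames[-HISTORY:], a, rng))
    return frames

out = play([1, 0, 1, 1])
```

Note that nothing in `play` knows about doors, ammo, or scores; if those appear consistent, it’s only because the predictor learned to draw them that way.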
This would be like playing DnD where you see a painting, describe what you would do next as if you were in the painting, and then an artist paints the next scene for you.
The artist isn’t rolling dice, following the rule book, or using any actual game elements; they are just painting based on the last painting and your description of what happens next.
It’s an incredibly novel approach, if obviously a toy problem.
This is just a pile of garbage. Jim Sterling’s breakdown is the most complete argument. But this is just a plain ol’ bag of shit.