At its GTC AI show in San Jose, California, earlier this month, graphics-chip maker Nvidia issued a flood of partnerships and announcements for the AI products and platforms it builds. Meanwhile, in San Francisco, behind closed doors adjacent to the Game Developers Conference, Nvidia was showing game makers and media how its AI technologies could enhance future video games.
Last year, Nvidia's GDC 2024 showcase included a hands-on demo in which I held a pseudo-conversation with an AI-driven non-playable character, or NPC. It answered what I typed with reasonably contextual responses (though not as naturally as scripted dialogue). AI could also radically modernize old games for a contemporary graphical look.
This year, at GDC 2025, Nvidia once again invited industry members and press into a hotel near the Moscone Center, where the conference was held. In a large room lined with computer rigs housing its latest GeForce RTX 5070, 5080 and 5090 GPUs, the company showed off more of what gamers can expect: generative AI that remakes old games, gives animators new options and keeps evolving NPC interactions.
Nvidia also demonstrated how DLSS 4, the latest AI graphics rendering technology for its newest GPU family, improves image quality, ray tracing and frame rates in modern games, which is what affects gamers day to day, even if those efforts are more routine for Nvidia than its other experiments. While some of these advances rely on studios implementing the new technology in their games, others can be tried out right now.
Animate with text prompts
Nvidia detailed a new tool that generates character model animations based on text prompts, a bit like if you could use ChatGPT inside iMovie to get game characters to act out scripted sequences. The goal? Saving developers time. With this tool, programming a sequence could go from a task of hours to a task of minutes.
The tool, called Body Motion, can be plugged into many digital content creation platforms; John Malaska, the Nvidia senior product manager who ran my demo, used Autodesk Maya. To start the demonstration, Malaska set up a sample situation in which he wanted a character to jump over a box, land and move forward. On the scene's timeline, he selected moments for each of those three actions and wrote text prompts to have the software animate them. Then it was time to refine the result.
To fine-tune the animation, he used Body Motion to generate four different variations of the character's jump and picked the one he wanted. (Malaska said all the animations are generated from licensed motion capture data.) He then specified exactly where he wanted the character to land, and chose where it should end up. Body Motion simulated all the frames in between those carefully selected motion pivot points, and boom: an animated segment.
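To make the idea of filling in frames between pivot points concrete, here's a minimal toy sketch. It is not Body Motion itself: the real tool generates the in-betweens from licensed motion capture data, whereas this stand-in just linearly interpolates a couple of made-up joint values between hand-picked pivot moments.

```python
# Toy sketch only: fill in animation frames between hand-picked pivot poses.
# Body Motion derives its in-betweens from motion capture data; simple linear
# interpolation stands in for that here, and the poses below are made up.
def fill_in_frames(pivots, fps=30):
    """pivots: list of (time_seconds, pose) pairs, pose being a dict of joint -> value."""
    frames = []
    for (t0, pose0), (t1, pose1) in zip(pivots, pivots[1:]):
        steps = max(1, int((t1 - t0) * fps))
        for i in range(steps):
            alpha = i / steps
            frames.append({joint: (1 - alpha) * pose0[joint] + alpha * pose1[joint]
                           for joint in pose0})
    frames.append(pivots[-1][1])  # include the final pivot pose itself
    return frames

# Example: a jump described by three pivot moments (takeoff, apex over the box, landing).
pivots = [
    (0.0, {"hip_height": 1.0, "forward": 0.0}),
    (0.4, {"hip_height": 1.6, "forward": 0.5}),
    (0.8, {"hip_height": 1.0, "forward": 1.0}),
]
print(len(fill_in_frames(pivots)), "frames generated from the pivot points")
```

A generative tool like Body Motion replaces the straight-line interpolation step with motion synthesized from captured human movement, which is what makes the result look natural rather than robotic.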
In the next part of the presentation, Malaska's character had to go down a set of stairs and past a fountain. Using text prompts and timeline markers, he edited the sequence so the character sneaked its way around the courtyard fixtures.
“We are excited about it,” Malaska said. “This will really help people speed up their workflow.”
He pointed to the scenario of a developer receiving an animation, wanting it to play out slightly differently and sending it back to the animators for edits. That back-and-forth gets even more time-consuming if the animation is based on actual motion capture, and if the game requires that level of fidelity, getting mocap actors back in to re-record can take days, weeks or months. Adjusting the animation with Body Motion, which draws on a database of motion capture data, can sidestep all of that.
I'd be remiss not to wonder about motion capture artists and whether Body Motion could be used to sidestep their work. Charitably, the tool is best leveraged to create rough animated sequences and virtual storyboards before bringing in professional artists to motion capture the final scenes. But like any tool, it all depends on who's using it.
Body Motion is planned for release later in 2025 under an Nvidia Enterprise license.
Another stab at recreating Half-Life 2 with RTX Remix
At last year's GDC, I saw a remake of Half-Life 2 built with RTX Remix, Nvidia's platform for modders to breathe new life into old games. Nvidia's latest revival of Valve's classic has since been released to the public as a free demo, which gamers can download on Steam to check out for themselves. What I saw in Nvidia's press room was ultimately a technical demo (rather than a full game), but it still shows what RTX Remix can do to bring old games up to modern graphics expectations.
Last year's RTX Remix Half-Life 2 demo focused on how flat wall textures can be updated with depth effects to look like, say, rough cobblestone, and that's still the case. Look at a wall and "the bricks seem to jut out because they use parallax occlusion mapping," said Nyle Usmani, senior product manager for RTX Remix. But this year's demo was more about lighting interactions, down to simulating shadows cast through glass onto the dial of an air meter.
Usmani walked me through all the lighting and fire effects, which modernized some of the more dreary parts of Half-Life 2's fallen Ravenholm area. But the most striking application came in the section where the game's iconic headcrab enemies attack, when Usmani paused and pointed out how backlight filtered through the fleshy parts of the grotesque pseudo-zombies, making them glow a translucent red, much like what happens when you hold your fingers in front of a flashlight. To coincide with GDC, Nvidia released this effect in a software development kit called subsurface scattering, so game developers can start using it.
RTX Remix has other tricks that Usmani pointed out, like the neural shaders in the platform's latest version, which featured in the Half-Life 2 demo. Essentially, he explained, a set of neural networks trains live on the game data as you play and tailors the indirect lighting to what the player sees, making areas look more like they would in real life. In one example, he toggled between the older and newer RTX Remix versions to show light properly filtering into a broken-down garage. Better still, it pushed the frame rate up to 100 frames per second, from 87.
"Traditionally, we would trace a ray and bounce it many times to illuminate a room," Usmani said. "Now we trace a ray and bounce it only two times, then terminate it, and the AI infers the multiple bounces after that. With enough frames, it almost feels like it's calculating an infinite amount of bounces, so we're able to get more accuracy [and get] more performance."
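As a rough illustration of that idea (and not Nvidia's actual renderer), the sketch below traces only a couple of real bounces per ray and then asks a stand-in "learned cache" for the remaining indirect light. The scene, materials and cache here are invented placeholders.

```python
# Toy, self-contained sketch of "trace two bounces, let AI infer the rest".
# ToyScene and ToyRadianceCache are invented stand-ins, not Nvidia's code.

class Hit:
    def __init__(self, emission, albedo):
        self.emission = emission  # light emitted at the hit point
        self.albedo = albedo      # fraction of light carried on to the next bounce

class ToyScene:
    """Stand-in scene: every ray hits a dim, grey surface."""
    def intersect(self):
        return Hit(emission=0.05, albedo=0.6)

class ToyRadianceCache:
    """Stand-in for a trained network predicting the remaining indirect light."""
    def estimate(self):
        return 0.3  # a real cache would condition on position, direction, materials...

def shade(scene, cache, real_bounces=2):
    radiance, throughput = 0.0, 1.0
    for _ in range(real_bounces):           # only a couple of *real* traced bounces
        hit = scene.intersect()
        radiance += throughput * hit.emission
        throughput *= hit.albedo
    # Instead of tracing many more bounces, ask the cache for the rest.
    return radiance + throughput * cache.estimate()

print(shade(ToyScene(), ToyRadianceCache()))  # toy per-pixel radiance estimate
```

The performance win Usmani describes comes from that early termination: two traced bounces plus one cheap inference call stand in for what would otherwise be many expensive ray-scene intersections per pixel.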
Still, I saw the demo running on an RTX 5070 GPU, which retails for $550, and the demo requires at least an RTX 3060 Ti, which is unfortunate for owners of older graphics cards. "It's purely because path tracing is very expensive; I mean, this is the future, basically the most advanced of the most advanced path tracing," Usmani said.
Nvidia ACE uses AI to help NPCs think
Last year's NPC AI demo showed how non-player characters can uniquely respond to a player, but this year's Nvidia ACE tech demo showed how players can plant new ideas in NPCs that change their behavior and the lives going on around them.
The GPU maker demonstrated the technology plugged into inZOI, a Sims-like game in which players look after NPCs that have behaviors of their own. In an upcoming update, however, players will be able to switch on Smart Zoi, which uses Nvidia ACE to insert thoughts directly into the Zois (the game's characters) under their supervision, and then watch how they respond. The thoughts can't defy a Zoi's own traits, explained Nvidia GeForce technical marketing analyst Wynne Riawan, so they end up sending the Zois in directions that make sense for their character.
"So, by encouraging them with, 'I want to make people feel better about their day,' it encourages them to talk to more Zois around them," Riawan said. "The key word is they'll try: They can still fail. They're just like humans."
Riawan inserted a thought into a Zoi's head: "What if I'm just an AI in a simulation?" The poor Zoi freaked out but still ran to the public bathroom to brush her teeth, which apparently fit her traits: She evidently prioritizes dental hygiene.
The NPC responses to the thoughts players insert are powered by a small language model (large language models can range from 1 billion to well over 30 billion parameters, with higher counts allowing for more nuanced responses). The in-game model is based on the 8-billion-parameter Mistral NeMo Minitron model, shrunk down so it can be used by older and less powerful GPUs.
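For a sense of what calling a model in that family looks like, here's a minimal sketch using the publicly available 8-billion-parameter checkpoint via Hugging Face Transformers. It is only an illustration: the in-game model is a further-shrunk version, and the prompt format and trait handling below are assumptions, not the game's actual integration.

```python
# Sketch only: drafting an NPC reaction to a planted thought with a small,
# distilled language model. Not inZOI's or Nvidia ACE's real pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Mistral-NeMo-Minitron-8B-Base"  # public checkpoint; the game uses a smaller variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Hypothetical prompt: the character's traits plus the thought the player planted.
prompt = (
    "Character traits: tidy, anxious, cares about dental hygiene.\n"
    "Planted thought: What if I'm just an AI in a simulation?\n"
    "In one sentence, what does the character do next?\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```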
"We do intentionally squeeze the model down into a smaller model so that more people can use it," Riawan said.
The Nvidia ACE tech runs on the player's GPU: Krafton, the publisher behind inZOI, recommends a minimum spec of an Nvidia RTX 3060 with 8GB of video memory to use the feature, Riawan said. Krafton gave Nvidia a "budget" of VRAM to work within so the graphics card keeps enough resources free to render the graphics, hence the need to keep the model's parameter count down.
Nvidia is still discussing internally how, or whether, to unlock the ability to use larger-parameter models for players with more powerful GPUs. Players may be able to tell the difference, as NPCs would "react better to their surroundings through a larger model and more dynamically in their responses," Riawan said. "For now, the focus is mainly on their thoughts and feelings."
Starting March 28, an early-access version of the Smart Zoi feature will be available to all users for free. Nvidia sees it as a stepping stone toward what could one day be truly dynamic NPCs.
"If you have an MMORPG with Nvidia ACE in it, NPCs wouldn't be stagnant and just keep repeating the same dialogue; they could be more dynamic and generate their own responses based on your reputation or something like that," Riawan said.