How does NVIDIA’s DLSS 5 work?
The neural rendering model behind DLSS 5 is, in plain terms, an AI trained to understand and improve images. It takes the game’s basic frame (color data plus motion information from the game engine) and infers what’s actually in the scene: Is this surface human skin? Hair? Fabric? Is the light coming from the front, from behind, or from an overcast sky? It then adds realistic touches such as skin that glows slightly from within (subsurface scattering, the way real ears light up against the sun), fabric with a soft sheen, and hair that catches light naturally. Crucially, it keeps everything consistent between frames (no flickering) and anchored to the game’s original 3D world, so it doesn’t invent random detail. In effect, the AI “paints” better lighting and materials on top of the game’s picture in real time, producing a far more lifelike result than traditional methods.
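NVIDIA has not published the internals of this model, but the description above maps onto a familiar pattern: a network consumes the engine’s color and motion buffers, reprojects the previous frame’s output for temporal stability, and blends in its enhancement. Here is a minimal, purely illustrative Python sketch of that per-frame loop; every name in it (enhance_net, reproject, and so on) is a hypothetical stand-in, not NVIDIA’s API.

```python
import numpy as np

def reproject(prev_output, motion_vectors):
    """Warp last frame's enhanced output to the current frame using the
    engine's motion vectors, so detail stays put as the camera moves.
    (Hypothetical helper; real temporal reprojection also handles
    disocclusions and depth tests.)"""
    h, w, _ = prev_output.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs - motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip((ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    return prev_output[src_y, src_x]

def neural_enhance_frame(color, motion_vectors, prev_output, enhance_net,
                         blend=0.9):
    """One conceptual DLSS-5-style pass: enhance the raw frame, then blend
    with reprojected history to suppress frame-to-frame flicker."""
    enhanced = enhance_net(color)  # stand-in for the trained model
    history = reproject(prev_output, motion_vectors)
    return blend * enhanced + (1.0 - blend) * history
```

The key design point the article describes is visible here: the enhancement is tied to buffers the engine already produces (color, motion vectors), which is what keeps the result anchored to the game’s real 3D scene rather than hallucinated.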
The flow per frame looks like this: the game renders a normal frame (possibly with simplified lighting and materials, because it has to be fast) and sends that frame’s color data plus motion information to the DLSS 5 AI. The AI analyzes it once per frame and adds photoreal upgrades (better light bounce, skin glow, fabric detail, and so on), all anchored to the game’s 3D model so the result stays accurate and stable. The output is a much prettier, more realistic image at up to 4K, and it runs smoothly because the heavy AI work is handled efficiently on RTX GPUs (especially the RTX 50-series). Developers can also tune it (intensity, colors, mask areas) so it fits their game’s style, for example so faces don’t end up looking weird.
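The press material mentions those developer controls but not how they are exposed, so the following is an assumption-laden sketch of what such tuning knobs could look like: a per-material intensity plus exclusion masks that limit where the enhancement applies. None of these names come from NVIDIA.

```python
from dataclasses import dataclass, field

@dataclass
class NeuralShadingConfig:
    """Hypothetical per-title tuning for an AI enhancement pass."""
    global_intensity: float = 1.0  # 0.0 = off, 1.0 = full effect
    material_intensity: dict = field(default_factory=lambda: {
        "skin": 0.6,    # dial back subsurface glow so faces stay natural
        "fabric": 1.0,
        "hair": 0.9,
    })
    excluded_masks: list = field(default_factory=list)  # e.g. UI, skybox

def effective_intensity(cfg: NeuralShadingConfig, material: str) -> float:
    """How strongly the enhancement applies to a given material."""
    return cfg.global_intensity * cfg.material_intensity.get(material, 1.0)

cfg = NeuralShadingConfig()
print(effective_intensity(cfg, "skin"))  # 0.6
```

A per-material dial like this is one plausible way to get the behavior the article describes, letting an art team soften the effect exactly where it risks clashing with their style.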
How is DLSS 5 different from traditional Ray Tracing?
Ray tracing simulates individual paths of light, which makes it extremely demanding on hardware. NVIDIA notes that a single photoreal frame in a movie can take hours to render, yet a game must produce one in milliseconds. DLSS 5 acts as a shortcut: rather than calculating every ray of light, it uses generative AI to predict and draw what those photoreal pixels should look like based on its deep training. Jensen Huang described this as the “GPT moment for graphics,” where the AI is no longer just making an image sharper but is actively “reinventing” how the final pixels are created.
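To see why brute-force ray tracing is so expensive, it helps to count rays. The workload numbers below (samples per pixel, bounce count, GPU throughput) are illustrative assumptions, not NVIDIA figures, but they show the scale a naive path tracer faces at 4K:

```python
# Back-of-envelope cost of naively path tracing one 4K frame.
# All workload numbers here are illustrative assumptions.
width, height = 3840, 2160
samples_per_pixel = 64   # assumed; film renderers often use far more
bounces = 4              # assumed light bounces per sample

rays_per_frame = width * height * samples_per_pixel * bounces
print(f"{rays_per_frame / 1e9:.1f} billion rays per frame")  # ~2.1 billion

gpu_rays_per_second = 100e9  # assumed hardware ray throughput
frame_time_ms = rays_per_frame / gpu_rays_per_second * 1000
print(f"~{frame_time_ms:.0f} ms just for ray traversal")     # ~21 ms
```

Even with generous assumptions, ray traversal alone overshoots a 60 FPS budget before any shading, denoising, or game logic runs, which is exactly the gap the AI shortcut is meant to close.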
NVIDIA DLSS 5 and Hollywood-level VFX
“Hollywood VFX level,” as stated in NVIDIA’s press release, means game graphics that look as realistic and detailed as the CGI in big Hollywood productions (think Marvel films or Pixar animation). In movies, those super-realistic scenes take minutes or hours per frame to render on powerful computers, because they run enormously complex calculations for perfect lighting, shadows, skin glow, fabric sheen, and so on. A game has only about 16 milliseconds per frame to look good and run smoothly at 60 FPS, so real-time graphics have always fallen short of that movie-quality look. NVIDIA says DLSS 5 closes that gap with AI, giving real-time games lighting, materials, and detail that feel “photoreal” like Hollywood VFX without slowing the game down.
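The 16 millisecond figure falls straight out of the target frame rate, and comparing it against offline render times shows how wide the gap really is. The one-hour-per-frame offline figure below is an assumption in the range NVIDIA’s “hours per frame” claim implies:

```python
# Frame-time budgets at common targets, versus an offline film render.
for fps in (30, 60, 120):
    print(f"{fps} FPS -> {1000 / fps:.1f} ms per frame")
# 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 120 FPS -> 8.3 ms

offline_seconds_per_frame = 3600     # assumed: one hour per film frame
realtime_seconds_per_frame = 1 / 60  # ~16.7 ms budget at 60 FPS
speedup = offline_seconds_per_frame / realtime_seconds_per_frame
print(f"~{speedup:,.0f}x faster than offline rendering")  # ~216,000x
```

A roughly five-orders-of-magnitude speedup is not something faster hardware alone delivers, which is why NVIDIA frames the AI prediction approach as the way to borrow the offline look inside a real-time budget.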
Check NVIDIA GeForce’s video on DLSS 5.
Which NVIDIA GeForce RTX series will support the DLSS 5 update?
The press release notes that the GeForce RTX 5090 features the path-tracing and neural-shader hardware required to push this technology to its limit in 2025, but DLSS 5 is built on NVIDIA’s existing Streamline framework. That suggests the newest RTX 50-series will see the greatest benefit, while the technology is designed to integrate with the standard DLSS pipeline used by current RTX cards. The system runs at up to 4K resolution, so the AI enhancements don’t come at the cost of the smooth, interactive performance gamers expect.
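How support tiers would actually be detected isn’t spelled out in the press release. As a purely hypothetical sketch (none of these function or tier names are NVIDIA’s, and the real Streamline SDK is a C++ library with its own API), an integration might query the GPU generation and pick a quality tier like this:

```python
# Hypothetical capability check for a tiered DLSS 5 rollout.
# The architecture names are real NVIDIA GPU generations; everything
# else (the tiers, the function) is invented for illustration.

TIERS = {
    "Blackwell": "full",         # RTX 50-series: full neural shading path
    "Ada Lovelace": "standard",  # RTX 40-series: standard DLSS pipeline
    "Ampere": "standard",        # RTX 30-series
    "Turing": "basic",           # RTX 20-series
}

def select_neural_shading_tier(gpu_architecture: str) -> str:
    """Pick an enhancement tier; unknown or older GPUs get no AI pass."""
    return TIERS.get(gpu_architecture, "unsupported")

print(select_neural_shading_tier("Blackwell"))  # full
print(select_neural_shading_tier("Pascal"))     # unsupported
```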
DLSS 5 confirmed games
NVIDIA has secured support from some of the industry’s largest publishers, including Bethesda, CAPCOM, and Ubisoft. Major confirmed titles include Starfield, Hogwarts Legacy, Assassin’s Creed Shadows, and Resident Evil Requiem. Other upcoming games, such as Black State, Phantom Blade Zero, and The Elder Scrolls IV: Oblivion Remastered, are also listed as early adopters. Developers from CAPCOM and Vantage Studios said the technology lets them build “cinematic and deeply believable” worlds that were previously held back by the traditional limits of real-time rendering.