Motion blur in video games is usually a whole lot less accurate at what it’s trying to approximate than averaging four frame-generation frames would be, although generating four frames would also be a lot slower to compute than the approximations games normally use for motion blur.
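A minimal sketch of what that averaging would look like, assuming you already have the generated intermediate frames plus the newly rendered one as float buffers (the names here are just for illustration):

```python
import numpy as np

def blur_by_averaging(frames):
    """Crude motion blur: box-filter the frames covering one display interval.

    `frames` is a list of HxWx3 float arrays, e.g. three generated
    (interpolated) frames plus the newly rendered one.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# blurred = blur_by_averaging([gen_a, gen_b, gen_c, rendered])
```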
Yes, motion blur in video games is just an approximation and usually has a lot of visible failure cases (disocclusion artifacts, blurred shadows, rotational blur sometimes). It obviously can’t recreate the effect of a fast-blinking light moving across the screen during a frame. The better implementations can be a pretty good approximation, but the only real way to ‘do it properly’ is to render multiple sub-frames per shown frame or to render stochastically (not really practical with rasterization, and it obviously introduces noise). Perfect motion blur would be the average of an infinite number of frames over the time between the previous displayed frame and the current one. With path tracing you can do the rendering stochastically, and you need a denoiser anyway, so you can actually get very accurate motion blur: as the number of samples approaches infinity, the image approaches the correct one.
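In equation form, with $\Delta t$ the time between the previous displayed frame and the current one, and $L(x, t)$ the image the renderer would produce at instant $t$, the ‘correct’ blurred pixel is

$$ I(x) = \frac{1}{\Delta t}\int_{t_0}^{t_0+\Delta t} L(x,t)\,dt \;\approx\; \frac{1}{N}\sum_{i=1}^{N} L(x,t_i), \qquad t_i \sim \mathcal{U}(t_0,\, t_0+\Delta t). $$

A path tracer gets this almost for free: give each path sample a random time $t_i$ within the interval, and the same Monte Carlo estimator that converges to the correct lighting also converges to the correct blur.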
Some academics and NVIDIA researchers recently coauthored a paper on optimizing path tracing to apply ReSTIR (a technique for reusing samples across neighboring pixels and across frames) to scenes with motion blur, and the results look very good (obviously still very noisy; I guess NVIDIA would want to train another ray reconstruction model for it). Apparently it’s also better than normal ReSTIR or Area ReSTIR when there isn’t motion blur. It relies on a lot of approximations too, so probably not quite unbiased path tracing quality if allowed to converge, but I don’t really know.
https://research.nvidia.com/labs/rtr/publication/liu2025splatting/
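For intuition, the core building block the ReSTIR family rests on is weighted reservoir sampling; here’s a minimal sketch of it (not the paper’s algorithm, and the names are my own, just to show what ‘reusing samples’ means mechanically):

```python
import random

class Reservoir:
    """Weighted reservoir sampling: stream in candidates one at a time and keep
    a single sample, chosen with probability proportional to its weight,
    using O(1) state per pixel."""
    def __init__(self):
        self.sample = None   # the currently kept candidate (e.g. a light sample)
        self.w_sum = 0.0     # running sum of all candidate weights seen so far
        self.count = 0       # number of candidates streamed in

    def update(self, candidate, weight):
        self.count += 1
        self.w_sum += weight
        # Replace the kept sample with probability weight / w_sum.
        if weight > 0 and random.random() * self.w_sum < weight:
            self.sample = candidate
```

Spatial and temporal reuse in ReSTIR then amounts to feeding a neighboring pixel’s (or the previous frame’s) reservoir back into this update as one more weighted candidate, which is roughly how a few samples per pixel end up behaving like many.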
But that probably won’t be coming to games for a while, so we’re stuck with either increasing framerates to produce blur naturally (through real or ‘fake’ frames), or approximating blur in a more fake way.