• Blue_Morpho@lemmy.world · 5 points · 9 months ago

    An AI-generated VR world would be a single map environment, generated the same way a game builds a level while you wait at a loading screen when it starts or when you move to an entirely new map.
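    A minimal sketch of that idea, with a hypothetical `generate_world` stub standing in for a text-to-3D model call: generation happens once behind the loading screen, and every frame afterward just reads the cached map.

```python
import random

def generate_world(prompt, seed):
    # Stand-in for a hypothetical text-to-3D model call; here we just
    # build a deterministic 4x4 grid of "asset" labels from the seed.
    rng = random.Random(seed)
    return [[rng.choice(["tree", "rock", "grass"]) for _ in range(4)]
            for _ in range(4)]

# Generation happens once, behind the loading screen...
world = generate_world("a forest clearing", seed=42)

# ...then every frame just renders from the same cached map;
# nothing is regenerated per frame.
def render_frame(world, camera_pos):
    x, y = camera_pos
    return world[y][x]  # trivially "render" the tile under the camera

print(render_frame(world, (0, 0)))
print(render_frame(world, (1, 0)))  # moving the camera reuses the map
```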

    A text-to-3D game asset AI wouldn't regenerate a new 3D world every frame, in the same way that, if you wanted a cat moved one pixel, you wouldn't ask an AI to draw a picture of an orange cat and then ask it to draw another picture of an orange cat shifted one pixel to the left. The result would be a totally different picture.
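    The cat analogy can be made concrete with a toy sampler (a stand-in for a real text-to-image model): re-prompting draws from a fresh random seed and gives an unrelated result, while "moving the cat" is just a deterministic transform of the image you already have.

```python
import random

def draw_cat(prompt, seed):
    # Stand-in for a text-to-image sampler: the output depends on the
    # random seed, so two independent runs disagree.
    rng = random.Random(seed)
    return [rng.randint(0, 255) for _ in range(8)]  # toy 1x8 "image"

a = draw_cat("an orange cat", seed=1)
b = draw_cat("an orange cat", seed=2)  # re-prompting: a different cat

shifted = a[1:] + a[:1]  # moving the cat: a deterministic shift of the
                         # SAME image, with all its content preserved

print(a != b)                        # regeneration changed the picture
print(sorted(shifted) == sorted(a))  # shifting only rearranged it
```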

    • Toribor@corndog.social · 3 points · edited · 9 months ago

      I think we’re talking about different kinds of implementations.

      One is an AI-generated ‘video’ that is interactive, generating new frames continuously to simulate a 3D space you can move around in. That seems pretty hard to accomplish for the reasons you’re describing: these models are not particularly stable or consistent between frames. The software has no understanding of physical rules, just of how a scene might look based on its training data.

      Another and probably more plausible approach is likely to come from the same frame generation technology in use today with things like DLSS and FSR. I’m imagining a sort of post-processing that can draw details on top of traditional 3d geometry. You could classically render a simple scene and allow ai to draw on top of the geometry in realtime to sort of fake higher levels of detail. This is already possible, but it seems reasonable to imagine that these tools could get more creative and turn a simple blocky undetailed 3d model into a photo-realistic object. Still insanely computationally expensive but grounding the AI with classic rendering to stabilize it’s output could be really interesting.