Following on from the success of the Steam Deck, Valve is creating its very own ecosystem of products. The Steam Frame, Steam Machine, and Steam Controller are all set to launch in the new year. We’ve tried each of them, and here’s what you need to know.

“From the Frame to the Controller to the Machine, we’re a fairly small industrial design team here, and we really made sure it felt like a family of devices, even to the slightest detail,” Clement Gallois, a designer at Valve, tells me during a recent visit to Valve HQ. “How it feels, the buttons, how they react… everything belongs and works together kind of seamlessly.”

For more detail, make sure to check out our in-depth stories linked below:


Steam Frame: Valve’s new wireless VR headset

Steam Machine: Compact living room gaming box

Steam Controller: A controller to replace your mouse


Valve’s official video announcement.


So uh, ahem.

Yes.

Valve can indeed count to three.

  • sp3ctr4l@lemmy.dbzer0.com (OP):

    Yes, on the Frame, the VR headset: it uses FEX.

    FEX is an x86/ARM translation (emulation) layer, and then there’s also Proton, a translation layer for Windows games.
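    Roughly, the layering looks something like this. This is just my own conceptual sketch of which layer does what, not anything Valve has published:

    ```python
    # Conceptual sketch only -- which translation layer sits where when a
    # Windows x86-64 game runs on the Frame's ARM hardware.
    LAYERS = [
        "game.exe           (Windows, x86-64 binary)",
        "Proton / Wine      (Windows APIs -> Linux APIs, D3D -> Vulkan)",
        "FEX                (x86-64 instructions -> ARM64 instructions)",
        "SteamOS on ARM SoC (native execution)",
    ]
    for depth, layer in enumerate(LAYERS):
        print("  " * depth + layer)
    ```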

    Interestingly, it seems there may be, or could potentially be in the future, some realtime compute sharing going on if you have both a Machine and a Frame.

    Because you can Steam Link stream from the Machine to the Frame, or run some games on the Frame itself, which in hardware terms is roughly a very fancy smartphone.

    It’s via a dedicated WiFi 7 streaming dongle, and Gamers Nexus just said that’s roughly 10ms of lag, 20ms in the worst-case scenario.
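    To put rough numbers on that (my own math, using the latency figures above and the refresh rates mentioned later in this thread):

    ```python
    # How the quoted ~10 ms / ~20 ms link latency compares to a frame budget.
    for hz in (90, 120):
        frame_time_ms = 1000 / hz
        for link_ms in (10, 20):
            print(f"{hz} Hz: {frame_time_ms:.1f} ms per frame; "
                  f"{link_ms} ms of link lag is ~{link_ms / frame_time_ms:.1f} frames")
    ```

    In other words, roughly one frame behind at 90 Hz in the typical case, and about two in the worst case.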

    So you could theoretically make a specific multithreading mode that takes advantage of a Machine + Frame setup.

    No clue if that actually exists or not, but it does at least seem possible.

    • poVoq@slrpnk.net:

      No, they could compile Steam for native ARM and launch the x86 games via FEX from within.

      Remains to be seen which way they go, but I see no big reason why they would not do that.
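      Roughly like this, as a minimal sketch. The loader name is just FEX’s generic userspace interpreter, and the wiring is my guess, not whatever Valve actually does:

      ```python
      import subprocess

      # Sketch: a native ARM64 Steam client spawning an x86-64 game binary
      # through FEX's loader. Binary name and arguments are assumptions.
      FEX_LOADER = "FEXInterpreter"

      def launch_x86_game(game_path: str) -> subprocess.Popen:
          # The ARM-native client keeps handling overlay, input, updates, etc.;
          # only the game process itself goes through the translation layer.
          return subprocess.Popen([FEX_LOADER, game_path])
      ```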

      • sp3ctr4l@lemmy.dbzer0.com (OP):

        I mean, yeah, but I’m thinking of a distributed-compute type of model, like you see in scalable server-rack deployments for what we used to call supercomputers.

        If the latency is around 10ms, that’s easily low enough that you could, say, split off a chunk of the game’s total workload (maybe a separate physics thread): run the whole game from the x86 Steam Machine, use Steam Link as the communication layer, and send x86 work to the Frame, which then ‘solves’ it via the FEX emulation layer. The Frame wouldn’t do any other part of rendering the game; it would just accept player inputs and receive graphical render data.

        Physics runs at only 60fps, while the rest of the game runs at 90 or 120 or whatever.

        The Steam Machine is the master/coordinator and the Frame is the slave/subject, with various game processes dedicated specifically to its compute hardware. The Steam Machine is then potentially able to get/use more compute, assuming synchronization stays stable, which would mean an overall experienced performance gain: more fps or higher quality settings than a Steam Machine alone.
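        To make the shape of that concrete: the sketch below is pure speculation on my part, and nothing like it is confirmed to exist. A plain TCP socket stands in for whatever transport Steam Link actually uses, the host/port are made up, and the “physics” step is a stub.

        ```python
        import json
        import socket

        FRAME_ADDR = ("frame.local", 27500)   # placeholder host/port

        def machine_side(world_state: dict) -> dict:
            """Runs on the Machine: ship one physics tick's inputs to the
            Frame, wait for the solved state, then merge it back in."""
            with socket.create_connection(FRAME_ADDR) as link:
                link.sendall(json.dumps(world_state).encode() + b"\n")
                solved = json.loads(link.makefile().readline())
            world_state.update(solved)
            return world_state

        def frame_side(listen_port: int = 27500) -> None:
            """Runs on the Frame: accept work, 'solve' it, send it back."""
            with socket.create_server(("", listen_port)) as srv:
                conn, _ = srv.accept()
                with conn, conn.makefile("rwb") as stream:
                    for line in stream:
                        state = json.loads(line)
                        state["physics_solved"] = True   # stand-in for real work
                        stream.write(json.dumps(state).encode() + b"\n")
                        stream.flush()
        ```

        In practice you’d overlap that round trip with local rendering instead of blocking on it, which is exactly where the ~10ms link latency budget from above matters.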


        They are already kind of doing this via what they are calling Foveated Rendering.

        Basically, the Frame eye-tracks you and uses that data to prioritize which parts of the overall scene get rendered at which detail level.

        I.e., the edges of your vision don’t actually need the same level of render resolution as the center, because human eyes literally lose detail away from the center of the visual field.
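        As a toy illustration of that prioritization (the thresholds and scales are made up; the real system drives the GPU’s shading rate from the headset’s eye-tracking data):

        ```python
        # Toy foveated-rendering rule: render detail drops with angular
        # distance from where the eye is currently looking.
        def resolution_scale(degrees_from_gaze: float) -> float:
            if degrees_from_gaze < 10:     # fovea: full detail
                return 1.0
            if degrees_from_gaze < 30:     # near periphery: half detail
                return 0.5
            return 0.25                    # far periphery: quarter detail
        ```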

        So, they already have a built-in system that has the Frame and the Machine rapidly exchanging fairly low-level data, as far as a game render pipeline goes.

        I’m saying: use whatever that transport buffer is, and make a mode where you could shunt more data through it, into a distributed, sort of two-computer version of multithreading.

        How is that really any different from a game with a multiplayer client/server model?

        Like, all Source games are basically structured as multiplayer games, with the server doing the world and the client doing the player. When you play a single-player Source game, you’re basically just running both the client and the server at the same time, on your one machine, without any actual networking.
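        That single-player case is basically a listen server: one process runs both the “server” tick (world simulation) and the “client” tick (input and presentation), and the networking collapses into a function call. A bare-bones sketch of that shape, as my own illustration rather than anything from Valve’s code:

        ```python
        class Server:
            """The 'world' half: simulates the game state each tick."""
            def __init__(self):
                self.world = {"tick": 0}

            def simulate(self, player_input: dict) -> dict:
                self.world["tick"] += 1
                self.world["last_input"] = player_input
                return self.world            # snapshot handed to the client

        class Client:
            """The 'player' half: samples input and presents snapshots."""
            def read_input(self) -> dict:
                return {"move": "forward"}   # stand-in for real input sampling

            def present(self, snapshot: dict) -> None:
                print(f"render tick {snapshot['tick']}")

        def single_player_loop(frames: int = 3) -> None:
            server, client = Server(), Client()   # both live in one process
            for _ in range(frames):
                client.present(server.simulate(client.read_input()))

        single_player_loop()
        ```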