Following on from the success of the Steam Deck, Valve is creating its very own ecosystem of products. The Steam Frame, Steam Machine, and Steam Controller are all set to launch in the new year. We’ve tried all three, and here’s what you need to know about each one.
“From the Frame to the Controller to the Machine, we’re a fairly small industrial design team here, and we really made sure it felt like a family of devices, even to the slightest detail,” Clement Gallois, a designer at Valve, tells me during a recent visit to Valve HQ. “How it feels, the buttons, how they react… everything belongs and works together kind of seamlessly.”
For more detail, make sure to check out our in-depth stories linked below:
Steam Frame: Valve’s new wireless VR headset
Steam Machine: Compact living room gaming box
Steam Controller: A controller to replace your mouse
Valve’s official video announcement.
So uh, ahem.
Yes.
Valve can indeed count to three.

I mean, yeah, but… I’m thinking of, like, a distributed-compute model, the kind you see in scalable server-rack deployments for what we used to call supercomputers.
If the latency is ~10 ms, that’s easily low enough that you could, say, split off a chunk of the total game render pipeline, maybe a separate physics thread. Run the whole game on the x86 Steam Machine, use Steam Link as the communication layer, and send that x86 work to the Frame, which ‘solves’ it via the FEX emulation layer. The Frame doesn’t do any other part of rendering the game; it just accepts player inputs and receives the graphical render data.
Physics runs at only 60 fps; the rest of the game runs at 90 or 120 or whatever.
The Steam Machine is the master/coordinator and the Frame is the slave/subject: the Frame gets various game processes dedicated specifically to its compute hardware, and the Steam Machine is then potentially able to get/use more compute, assuming synchronization stays stable. That means an overall experienced performance gain: more fps or higher quality settings than just using a Steam Machine on its own.
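For what it’s worth, here’s a very rough sketch of the kind of split I mean. Everything in it is made up for illustration (the device roles, the tick rates, using multiprocessing as a stand-in for the Steam Link hop); it just shows a lower-rate offloaded tick feeding a faster main loop, not anything Valve has announced:

```python
# Sketch only: a slow, offloaded "physics" tick on one device feeding a
# faster main loop on another. multiprocessing.Pipe stands in for the
# network transport; all names and rates are invented for illustration.
import multiprocessing as mp
import time

PHYSICS_HZ = 60      # offloaded work runs at a lower, fixed tick rate
RENDER_HZ = 120      # main loop keeps its own, higher rate

def frame_worker(conn):
    """Pretend 'Frame' side: integrates a trivial physics state at 60 Hz."""
    pos, vel = 0.0, 1.0
    dt = 1.0 / PHYSICS_HZ
    while True:
        if conn.poll():                 # the 'Machine' can push impulses
            msg = conn.recv()
            if msg == "stop":
                return
            vel += msg                  # apply an impulse from the Machine
        pos += vel * dt                 # the "offloaded" physics step
        conn.send(pos)                  # ship the result back
        time.sleep(dt)

def machine_loop(seconds=1.0):
    """Pretend 'Machine' side: renders at 120 Hz using the latest physics."""
    parent, child = mp.Pipe()
    worker = mp.Process(target=frame_worker, args=(child,), daemon=True)
    worker.start()

    latest_pos = 0.0
    frames = 0
    end = time.time() + seconds
    while time.time() < end:
        while parent.poll():            # drain: keep only the newest state
            latest_pos = parent.recv()
        # ... "render" using latest_pos; here we just count frames ...
        frames += 1
        time.sleep(1.0 / RENDER_HZ)

    parent.send("stop")
    worker.join(timeout=1.0)
    print(f"rendered {frames} frames against latest physics pos={latest_pos:.2f}")

if __name__ == "__main__":
    machine_loop()
```

The point being that the main loop only ever reads the newest physics state it has, so a ~10 ms transport delay shows up as slightly stale physics rather than a stalled frame.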
They are already kind of doing this via what they’re calling foveated rendering.
Basically, the Frame eye-tracks you, and it uses that data to prioritize which parts of the overall scene are rendered at what detail level.
I.e., the edges of your vision don’t actually need the same render resolution as the center, because human eyes literally lose detail away from the center of your gaze.
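As a toy illustration of that idea (the falloff radii and the quarter-resolution floor are made up, this isn’t the Frame’s actual pipeline), think of it as a render-resolution scale that drops off with distance from wherever the eye tracker says you’re looking:

```python
# Sketch of the foveation idea: resolution scale falls off with distance
# from the tracked gaze point. Constants are invented for illustration.
import math

def foveated_scale(px, py, gaze_x, gaze_y, inner_radius=0.15, outer_radius=0.45):
    """Return a render-resolution scale in [0.25, 1.0] for a screen point.

    Coordinates are normalised (0..1). Inside inner_radius around the gaze
    point you render at full resolution; beyond outer_radius you drop to a
    quarter; in between you blend linearly.
    """
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= inner_radius:
        return 1.0
    if d >= outer_radius:
        return 0.25
    t = (d - inner_radius) / (outer_radius - inner_radius)
    return 1.0 - t * 0.75

# Example with the gaze at the centre of the view:
print(foveated_scale(0.5, 0.5, 0.5, 0.5))   # 1.0  (full detail at the fovea)
print(foveated_scale(0.9, 0.9, 0.5, 0.5))   # 0.25 (periphery, low detail)
```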
So they already have a built-in system that showcases the Frame and the Machine rapidly exchanging fairly low-level data, as far as a game render pipeline goes.
I’m saying: use whatever that transport buffer is and make a mode where you could shunt more data through it, into a distributed, sort of two-computer version of multithreading.
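Conceptually that’s just message multiplexing: tag what goes through the channel so gaze updates and offloaded work results can ride the same buffer. The tags and payloads below are invented for illustration; this is not Steam Link’s actual protocol:

```python
# Sketch of multiplexing different kinds of traffic over one channel by
# tagging each message. Tags and payloads are made up for illustration.
import json

def pack(tag, payload):
    """Serialise one tagged message for the shared channel."""
    return (json.dumps({"tag": tag, "payload": payload}) + "\n").encode()

def unpack(stream):
    """Split a received byte stream back into tagged messages."""
    for line in stream.decode().splitlines():
        msg = json.loads(line)
        yield msg["tag"], msg["payload"]

# One buffer carrying both kinds of traffic:
buffer = pack("gaze", {"x": 0.52, "y": 0.47}) + pack("physics", {"pos": 3.1})

for tag, payload in unpack(buffer):
    if tag == "gaze":
        print("update foveation centre:", payload)
    elif tag == "physics":
        print("apply offloaded physics result:", payload)
```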
How is that really different than a game with a multiplayer client/server model?
Like, all Source games are basically structured as multiplayer games, with the server doing the world and the client doing the player… when you play a single-player Source game, you’re basically just running both the client and the server at the same time, on your one machine, without any actual networking.
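A bare-bones sketch of that shape (not actual Source code, just the structure I’m describing): the server owns the world, the client owns the player, and single player wires the two together directly in one process instead of over a network:

```python
# Sketch of a client/server split running in a single process.
# Class and field names are invented for illustration.
class Server:
    """Owns world state and advances it each tick."""
    def __init__(self):
        self.world_time = 0.0

    def tick(self, dt, player_input):
        self.world_time += dt
        # world simulation would react to player_input here
        return {"world_time": self.world_time, "ack": player_input}

class Client:
    """Owns the local player: samples input, consumes server snapshots."""
    def __init__(self):
        self.last_snapshot = None

    def sample_input(self):
        return {"move_forward": True}

    def receive(self, snapshot):
        self.last_snapshot = snapshot

def single_player_frame(server, client, dt=1 / 60):
    """In single player, client and server just call each other directly."""
    cmd = client.sample_input()          # would be sent over the network in MP
    snapshot = server.tick(dt, cmd)      # would run on a remote host in MP
    client.receive(snapshot)             # would arrive over the network in MP

server, client = Server(), Client()
for _ in range(3):
    single_player_frame(server, client)
print(client.last_snapshot)
```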