

They mean launching the actual Steam client from a terminal, not just the game.


I want to say Sonarr has a regex renaming feature, but I may just be making that up, as I’m not looking at my instance right now. Any pattern-based renaming would be best done during the download phase, so the metadata of each service stays clean.
Failing that, if you have a predictable list of release group strings you want removed from filenames, a one-liner with sed or similar would take care of it. You’d break the known file locations for any service tracking them, of course, but they’ll eventually be reindexed.
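If you’d rather script it than run a one-off sed, here’s a minimal sketch in Python; the tag list and library root are made-up placeholders, so adjust both to your setup:

```python
from pathlib import Path

# Made-up example tags; replace with the release-group strings in your library
TAGS = ["-RARBG", "-YIFY", "[EVO]"]
ROOT = Path("/media/tv")  # assumed library root

for path in ROOT.rglob("*"):
    if not path.is_file():
        continue
    name = path.name
    for tag in TAGS:
        name = name.replace(tag, "")
    if name != path.name:
        path.rename(path.with_name(name))  # same directory, cleaned filename
```

Dry-run it first by printing the old/new names instead of renaming, so one bad tag doesn’t mangle half your library.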


Seems kind of pricey for that specific unit, but it should work well for just hosting simple services.


That’s probably some libinput confusion.
Try adding Sunshine as a non-Steam game under Steam, launch it, then see if it maps properly that way.


What kernel are you on? (You can check with uname -r.) 3.15+ has full support for these controllers, so it should work flawlessly.


Get a replacement. I know folks who have just gotten bad units. In general, I feel like their QA is a bit lacking, but if you get a good one, it should work pretty flawlessly.


What might simplify your thinking about this is a concept called “Semantic Versioning”.
You have a big codebase with all kinds of features, and at some point you cut a release, tying a version number to a specific point in time. That way, when a regression happens, you can tell where it was introduced and address it.
Proton is versioned for exactly this reason. They keep all the old versions available for users because they know not every point release will work for every game, and there will be regressions.
This lets users identify a stable, working version of Proton for a specific game and stick to it. If you upgrade to a newer release for some reason and hit a problem, you can always go back to the previous working version and know for certain it will work without issues.
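If it helps to see the mechanics, the core of the scheme is that MAJOR.MINOR.PATCH fields compare as numbers, field by field, not as text. A tiny Python illustration (the version strings are made up):

```python
# Semver fields compare numerically, field by field.
def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

assert parse("1.10.0") > parse("1.9.4")   # 10 > 9, so 1.10.0 is newer
assert "1.10.0" < "1.9.4"                 # naive string comparison gets it backwards
```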
For your specific scenario, just check ProtonDB and see if people have posted tweaks and config combos for the game in question. It’s a great resource for exactly this reason.


It’s a dumbass AI-powered recommendation engine with an awful GUI. That’s about it.
As for whether it’s malicious, that’s really your call.


He will sue and win. There was no misconduct. He’ll also be reinstated with back pay once these clowns are gone anyway.
So fucking stupid, and a waste of OUR tax dollars.


It starts with the hardware. Tuning your CPU/memory frequency settings was a good start, but that matters less if you’re running giant (or redundant) PSUs, more drives than you need, and a pile of peripherals.
Get a cheap outlet monitor so you can track your power draw at the wall. I just got the cheap Emporia ones; I’m sure there are more reputable options out there.
Don’t go crazy with your networking gear if you don’t need it. PoE switches draw tons of power even when idle, and a 24-port switch is a big draw if you’re only using 3 of the ports.
Consider a power-efficient NAS box for backend storage and a low-power mini PC for frontend serving, instead of one power-hungry machine running all your network apps.
You can dive deeper into any of these angles, but those are the basics.
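To put numbers on why idle draw matters, here’s a quick back-of-envelope in Python; the 60 W draw and $0.15/kWh rate are assumptions, so plug in your own meter reading and utility rate:

```python
# Rough annual electricity cost from average draw at the wall.
watts = 60.0   # assumed average draw; read yours off the outlet monitor
rate = 0.15    # assumed $/kWh; check your utility bill

kwh_per_year = watts / 1000 * 24 * 365   # ~526 kWh at a constant 60 W
print(f"~{kwh_per_year:.0f} kWh/yr  ->  ${kwh_per_year * rate:.2f}/yr")
```

By that math, shaving 20 idle watts saves roughly $26 a year, which adds up fast across switches, drives, and peripherals.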


Gotta get that oven temp as hot as it will go, or you won’t brown that crust without brushing it with oil, and you’ll end up with dense dough on the edges.


I’m solely talking about the Heading Photo, not the contents.


Trump’s banking on these shitheads being his Lil’ Army


These are some of the stupidest choices in tattoo and location I have ever seen in my life. I truly hope this is AI 🤣


Well, there’s AppImage or Flatpak. Either one of those works.


I think you’re missing the point, or not understanding it.
What you’re talking about is just running a model on consumer hardware with a GUI. We’ve been running models like that for a decade. Llama is just a simplified framework that makes LLMs accessible to end users.
The article is essentially describing a map-reduce system for model workloads across a number of machines: it batches the token work, distributes it across a cluster, then combines the results into a coherent response.
They aren’t talking about just running models the way you’re describing.
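To make the scatter/gather shape concrete, here’s a toy sketch in Python. The shard data and the run_shard worker are made up, and a thread pool stands in for a real cluster; none of this is actual inference code:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for per-node model work; a real system would run a
# model shard on each machine, not uppercase strings.
def run_shard(shard):
    return [token.upper() for token in shard]

# Pre-batched "token work", one shard per node (made-up data).
shards = [["the", "cat"], ["sat", "on"], ["the", "mat"]]

# Map: distribute the shards across workers (standing in for machines).
with ThreadPoolExecutor(max_workers=len(shards)) as pool:
    partials = list(pool.map(run_shard, shards))

# Reduce: combine the partial results into one response.
response = [token for part in partials for token in part]
print(response)  # ['THE', 'CAT', 'SAT', 'ON', 'THE', 'MAT']
```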