(If you haven’t seen it before, MangoHUD is the box at the top left, the gears are your video game)
Unless you’re incredibly rich, a terrorist, otherwise wanted by an intelligence service, or going to DEFCON, the chances of you being targeted with a Spectre-like attack are almost zero. I don’t think there has been a single large-scale attack using this family of exploits.
These are spook exploits, not steal your credit card and bitcoins exploits.


I certainly agree that appeasing Trump isn’t going to work.
A thread about death threats on these members isn’t the proper context to bring up that kind of argument.
This situation isn’t a result of them voting on the CR so that comment is, at best, bad taste and at worst a toxic attempt at sowing division.


Yeah, you do want more contextual intelligence than an 8B for this.
Oh yeah, I’m sure. I may peek at it this weekend. I’m trying to decide if Santa is going to bring me a new graphics card, so I need to see what the price:performance curve looks like.
Massive understatement!
I think I stopped actively using image generation a little bit after LoRAs and IP Adapters were invented. I was trying to edit a video (random meme gif) to change the people in the meme to have the faces of my family, but it was very hard to keep consistency between frames. Now that generated video exists, it seems like someone has solved that problem.


I don’t know the details of the bills, but it isn’t uncommon for people in contested seats to be allowed by the party to vote in a poll-favorable way when their vote won’t change the outcome.
Simply put, counting ‘Times voted with Trump’ doesn’t say much that is useful and can be misleading, especially in the context of a post about death threats to politicians.
Social media can have a very us-or-them mentality. If you’re not 100% in lockstep with the group, then you’re an enemy to be scorned and attacked. I read that comment as ‘Yeah, they’re getting death threats, but they voted with Trump so they deserved it (<insert Nazi bar comment>)’.


And in any case, who is destroying them (who are the weavers)?
Out of work Programmers and Furry Hentai artists


Thanks for the recommendation, I’ll look into GLM Air, I haven’t looked into the current state of the art for self-hosting in a while.
I just use this model to translate natural language into JSON commands for my home automation system. I probably don’t need a reasoning model, but it doesn’t need to be super quick. A typical query uses very few tokens (like 3-4 keys in JSON).
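For what it’s worth, the JSON side is simple enough to sanity-check in a few lines. This is a minimal sketch of that validation step with made-up action/device names, not my actual schema:

```python
import json

# Hypothetical set of commands the automation system accepts.
VALID_ACTIONS = {"turn_on", "turn_off", "set_brightness"}

SYSTEM_PROMPT = (
    "Translate the user's request into JSON with keys "
    '"action", "device", and optionally "value". Respond with JSON only.'
)

def parse_command(model_output: str) -> dict:
    """Validate the model's JSON before acting on it."""
    cmd = json.loads(model_output)
    if cmd.get("action") not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {cmd.get('action')}")
    if "device" not in cmd:
        raise ValueError("missing device")
    return cmd
```

Since the model only has to emit 3-4 keys, even a small local model rarely gets the structure wrong, and the validation catches it when it does.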
The next project will be some kind of agent. A ‘go and Google this and summarize the results’ agent at first. I haven’t messed around much with MCP Servers or Agents (other than for coding). The image models I’m using are probably pretty dated too, they’re all variants of SDXL and I stopped messing with ComfyUI before video generation was possible locally, so I gotta grab another few hundred GB of models.
It’s a lot to keep up with.😮💨
Ah.
GloriousEggroll maintains a version of Proton that includes all of Valve’s Proton updates plus additional community patches (to fix odd issues) and protonfixes (a database of scripts that apply the weird game-specific configuration you sometimes need). I use it exclusively because I’ve had issues with regular Proton. I think the default Proton versions are still version 9 (or there’s an experimental Wine 10 version). GE-Proton10 uses Wine 10, so it has support for native Wayland and HDR.
Your distro probably has protonup-qt in its repo. It’s a GUI for downloading and installing various community Proton versions. Just run it, click ‘Install new version’, grab the latest GE-Proton10-(28?), restart Steam, and it’ll show up in the list of Proton versions you can pick.
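If you’d rather skip the GUI, a manual install is just unpacking the release tarball into Steam’s compatibilitytools.d folder. A rough sketch (the version number is only an example, check the releases page for the latest, and your Steam path may differ on non-standard installs):

```shell
# Pick a release from github.com/GloriousEggroll/proton-ge-custom
VER=GE-Proton10-4
mkdir -p ~/.steam/root/compatibilitytools.d
curl -L "https://github.com/GloriousEggroll/proton-ge-custom/releases/download/$VER/$VER.tar.gz" \
  | tar -xz -C ~/.steam/root/compatibilitytools.d
# Restart Steam, then pick it under a game's Properties → Compatibility.
```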


I’m not exactly sure what point you’re trying to make?
She’s received death threats.
…but she voted with Trump
Therefore: ???
For how good the game looks, it runs amazingly.
Are you using GE-Proton?
I had stability issues with Proton and Proton Experimental.
It works fine.
Arch
The setup was install it and press play.
I did add the ENVs for HDR.
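For reference, the launch options I mean look like this (set per-game under Properties → Launch Options; the env var names are the ones GE-Proton documents, so double-check them against your version):

```shell
PROTON_ENABLE_WAYLAND=1 PROTON_ENABLE_HDR=1 %command%
```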
If anyone knows an easier way to manage all of my Steam games’ Wine versions and command-line arguments, lmk


How are they getting Linus to agree to such a thing?


Until Trump let Elon erase decades of soft-power gains with his trashy chainsaw, then imposed unilateral tariffs on our trading partners and threatened to invade allies.
Every EU country has taken the initiative to move away from the US; ignoring US banking sanctions and fining American tech companies for violating EU law are not unlikely.


It’s okay to be scared; still do the right thing and make jokes about it.


We as a country are not ready for that conversation, yet.


Datacenters are the modern industrial looms if we’re using that metaphor.
They’re the machines that create profit for people who are unconcerned with the damage they do to the population.


They’re overestimating the costs. 4x H100 GPUs and 512GB of DDR4 will run the full DeepSeek-R1 model; that’s about $100k of GPU and $7k of RAM. It’s not something you’re going to have in your homelab (for a few years at least), but it’s well within the budget of a hobbyist group or moderately sized local business.
Since it’s an open-weights model, people have created quantized and distilled versions of it. Quantization stores each parameter in fewer bits and distillation produces models with far fewer parameters, so either way the RAM requirements are a lot lower.
You can run quantized versions of DeepSeek-R1 locally. I’m running deepseek-r1-0528-qwen3-8b on a machine with an NVIDIA 3080 12GB and 64GB RAM. Unless you pay for an AI service and are using their flagship models, it’s pretty indistinguishable from the full model.
If you’re coding or doing other tasks that push AI, it’ll stumble more often, but for a ‘ChatGPT’-style interaction you couldn’t tell the difference between it and ChatGPT.
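If anyone wants to try it, with ollama it’s a couple of commands (the model tag here is an example; check the ollama model library for the current names and sizes that fit your VRAM):

```shell
# Downloads the weights, then drops you into an interactive chat.
ollama pull deepseek-r1:8b
ollama run deepseek-r1:8b
```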
Thanks a ton, saves me having to navigate the slopped-up search results (‘AI’ as a search term has been SEO’d to death and back a few times)
That system has the 3080 12GB and 64GB RAM but I have another 2 slots so I could go up to 128GB. I don’t doubt that there’s a GLM quant model that’ll work.
Is ollama for hosting the models and LM Studio for chatbot work still the way to go? Doesn’t seem like there’s much to improve in that area once there’s software that does the thing.