

Mongo is appalled!
Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast



Free shipping to send him away? I’ll pay that subscription 👍


You want political toilet paper?


So that’s why sales were up.
Vegan Linux users can compile their own protein from source.
Their purity level is so high that they can kill -9 anyone wearing a leather belt with just a glance.


Not at this point, no. Not unless you know how to set up and manage Docker images and have a GPU with at least 16GB of VRAM.
Also, if you’re not using Linux, forget it. All the AI stuff anyone would want to run is a HUGE pain in the ass on Windows. The folks developing these models and the tools to use them are all running Linux, both on their servers and on their desktops, and it’s obvious once you start reading the README.md for most of these projects.
Some will have instructions for Windows but they’ll either be absolutely enormous or they’ll hand-wave away the actual complexity: “These instructions assume you know the basics of advanced rocket science and quantum mechanics.”
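For anyone curious what “setting up and managing Docker images” actually looks like, here’s roughly the happy path on Linux. This assumes you’re using Ollama’s official image and already have Docker plus the NVIDIA Container Toolkit installed; the model name is just an example, pick one that fits your VRAM:

```shell
# Start the Ollama server container with GPU access
# (requires the NVIDIA Container Toolkit on the host).
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# Download a model and chat with it inside the container.
docker exec -it ollama ollama run llama3
```

On Windows, this is exactly where the “advanced rocket science” usually starts: WSL2, GPU passthrough, driver mismatches, and so on.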


It depends on the size of the content on the page. As long as it’s small enough to be contained within the context window, it should do a good job.
But that’s all irrelevant since the point of the summary is just to give you a general idea of what’s on the page. You’ll still get the actual title and whatnot.
Using an LLM to search on your behalf is like using grep to filter out unwanted nonsense. You don’t use it like, “I’m feeling lucky” and pray for answers. You still need to go and open the pages in the results to get at what you want.


AI models aren’t trained on anything “stolen”. When you steal something, the original owner doesn’t have it anymore. That’s not being pedantic, it’s the truth.
Also, if you actually understand how AI training works, you wouldn’t even use this sort of analogy in the first place. It’s so wrong it’s like describing a Flintstones car and saying that’s how automobiles work.
Let’s say you wrote a book and I used it as part of my AI model (LLM) training set. As my code processes your novel, token-by-token (not word-by-word!), it’ll increase or decrease a floating point value by something like 0.001. That’s it. That’s all that’s happening.
To a layman that makes no sense whatsoever, but it’s the truth. How can a huge list of floating point values be used to generate semi-intelligent text? That’s the actually really fucking complicated part.
Before you can even use a model you need to tokenize the prompt, then run an inference step; the input gets processed a zillion ways before the weights in that .safetensors file (which is the AI model) get used at all.
When an AI model is outputting text, it’s using a random number generator in conjunction with a token prediction algorithm that’s based on the floating point values inside the model. It doesn’t even “copy” anything. It’s literally built on the back of an RNG!
If an LLM successfully copies something via its model, that’s just random chance. The more copies of something that went into its training, the higher the chance of it happening (and that’s considered a bug, not a feature).
There’s also a problem that can occur on the opposite end: When a single set of tokens gets associated with just one tiny bit of the training set. That’s how you can get it to output the same thing relatively consistently when given the same prompt (associated with that set of tokens). This is also considered a bug (it’s a form of overfitting, or memorization) and AI researchers are always trying to find ways to prevent it from happening.
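To make the RNG point concrete, here’s a minimal sketch (not any real model’s code, and with made-up toy numbers) of the final step of text generation: the model only produces floating point scores for each possible next token, and a weighted random draw picks one.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, seed=None):
    """Pick the next token id from raw model scores (logits).

    The model itself only produces these floating point scores;
    the actual choice of token is made here, by a random draw.
    """
    rng = random.Random(seed)
    # Scale scores by temperature, then softmax them into probabilities.
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw -- this is the RNG at the heart of generation.
    r = rng.random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1

# Toy vocabulary and made-up logits (not from any real model):
vocab = ["cat", "dog", "the", "ran"]
choice = sample_next_token([2.0, 1.5, 0.3, -1.0], temperature=0.8, seed=42)
```

Lower the temperature and the draw concentrates on the highest-scoring token; raise it and the output gets more random. Either way, there is no lookup into the training data happening at this step.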


No it can’t do that. It’s an LLM, it can only generate the next word in a sequence.
Your knowledge is out of date, friend. These days you can configure an LLM to run tools like curl, nmap, ping, or even write and then execute shell scripts and Python (in a sandbox, for security).
Some tools that help you manage the models come preconfigured to make it easy for them to search the web on your behalf. I wouldn’t be surprised if a whole ecosystem of AI tools just for searching the web emerges soon.
What Mozilla is implementing in Firefox will likely start with cloud-based services but eventually it’ll just be using local models, running on your PC. Then all those specialized AI search tools will become less popular as Firefox’s built-in features end up being “good enough”.
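The mechanism behind “LLMs running tools” is less magical than it sounds: the model emits a structured request like `ping example.com`, and a wrapper program decides whether to execute it and feeds the output back. Here’s a toy sketch of that dispatch step; the tool name and the harmless echo stand-in are mine, not from any particular framework:

```python
import subprocess

# Whitelist of tools the model may call. The "ping" here is a harmless
# echo stand-in so the sketch runs anywhere; a real setup would invoke
# the actual binary inside a sandbox.
TOOLS = {
    "ping": lambda host: subprocess.run(
        ["echo", f"PING {host}: reply received"],
        capture_output=True, text=True,
    ).stdout.strip(),
}

def handle_tool_call(request: str) -> str:
    """Parse a model-emitted request like 'ping example.com' and
    dispatch it, refusing anything not on the whitelist."""
    name, _, arg = request.partition(" ")
    if name not in TOOLS:
        return f"error: tool {name!r} not allowed"
    # In a real loop the result gets appended to the conversation so
    # the model can read it and decide what to do next.
    return TOOLS[name](arg)
```

So `handle_tool_call("ping example.com")` runs the whitelisted tool, while something like `handle_tool_call("rm -rf /")` gets refused outright. That whitelist-plus-sandbox pattern is the “security” part.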


Have you tried using an LLM configured to search the Internet for you? It’s amazing!
Normal search: Loads of useless results, ads, links that are hidden ads, scams, and maybe on like the 3rd page you’ll find what you’re looking for.
AI search: It makes calls out to Google and DDG (or any other search engines you want) simultaneously, checks the content on each page to verify relevancy, then returns a list of URLs that are precisely what you want with summaries of each that it just generated on the fly (meaning: They’re up to date).
You can even do advanced stuff like, “find me ten songs on YouTube related to breakups and use this other site to convert those URLs to .ogg files and put them in my downloads folder.”
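Under the hood, that kind of search is just fan-out plus filtering. A rough sketch with stubbed-out engines (the real thing would call actual search APIs and use the LLM itself for the relevancy check and the summaries; everything here is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub backends standing in for real Google/DDG API calls.
def search_google(query):
    return [("https://example.com/guide", f"a useful guide about {query}"),
            ("https://example.com/spam", "BUY NOW!!! limited offer")]

def search_ddg(query):
    return [("https://example.com/howto", f"in-depth howto on {query}")]

def ai_search(query, engines=(search_google, search_ddg)):
    """Query every engine simultaneously, then keep only results whose
    page text looks relevant. A plain keyword check stands in for the
    LLM's relevancy judgment here."""
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda engine: engine(query), engines)
    hits = [hit for batch in batches for hit in batch]
    return [(url, text) for url, text in hits if query in text]
```

The spam result gets dropped before you ever see it, which is the whole appeal over wading through pages of ads yourself.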
Local, FOSS AI running on your own damned PC is fucking awesome. I seriously don’t understand all the hate. It’s the technology everyone’s always wanted and it gets better every day.


It’s called dogfooding and it’s what you’re supposed to do to improve your product.


Total market share is irrelevant. What matters is the total number of users.
If you make a product and there’s a million people on a platform who could buy it, the costs to port that product (and support it) need to be low for it to be worthwhile.
If the total number of people on that platform increases to 10 million, the cost to port/support becomes a minuscule expense rather than a difficult decision.
When you reach 100 million there’s no excuse. There’s a lot of money to be made!
For reference, the current estimated amount of desktop Linux users globally is somewhere between 60-80 million. In English-speaking countries, the total is around 19-20 million.
It’s actually a lot more complicated than this, but you get the general idea: There’s a threshold where any given software company (including games) is throwing money away by not supporting Linux.
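As a back-of-the-envelope illustration of that threshold (every number here is made up for the example):

```python
def break_even_users(port_cost, support_cost_per_year, years,
                     price, conversion_rate):
    """Platform users needed before a port pays for itself."""
    total_cost = port_cost + support_cost_per_year * years
    revenue_per_user = price * conversion_rate
    return total_cost / revenue_per_user

# Hypothetical: a $200k port, $50k/year of support for 3 years,
# a $30 game, and 1% of platform users actually buying it.
needed = break_even_users(200_000, 50_000, 3, 30, 0.01)
# With these made-up numbers the threshold lands around 1.17 million
# users: a tough call at 1M users, easy money at 10M.
```

Tweak any of those inputs and the threshold moves, but the shape of the argument stays the same: past some user count, not porting is leaving money on the table.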
Also keep in mind that even if Linux had 50% market share, globally, Tim Sweeney would still not allow Epic to support it. I bet he’d rather start selling their own consoles that run Windows instead!


One thing is for certain: Microsoft will not stop using Copilot to develop their software in-house.
You’re wrong, but I think you’ll be OK with that because the reality of the situation is actually hilarious:
https://www.theverge.com/tech/865689/microsoft-claude-code-anthropic-partnership-notepad
“Turns out Copilot sucks so let’s just use our competitor’s superior product but that’s no reason we can’t keep foisting the inferior garbage on the masses!”


Random chance gambling. All throughout history there have been many statistical anomalies and if I joined them in such a competition, I’d have just as good a chance at winning as anyone else.


Everyone that said they “dropped a bomb in the toilet” is just a poser. This guy is the real deal.


I did a ton of research for a book about the neurological differences (the kind controlled by hormones like oxytocin) between men and women… Men and women are equally emotional, but testosterone is literally a “chemical brake” on tear production, and crying is a stress reliever (it reduces cortisol).
What this means is that if you have a room full of people and the women are crying because something awful happened, it’s likely that the men would also be crying if they didn’t have testosterone cutting it off. The same emotions are there; they just aren’t as visible (external) in the men.
That’s just the most obvious, easy thing to explain (everyone understands tears), but there’s so, so much more to this kind of thing. Understanding these sorts of differences would be greatly beneficial to society. If only there were some area of study for this kind of thing…


Wait until you see Gen Alpha’s spending on alcohol!


Yeah it’s a common thought: An afterlife where people gather before going on to the next.
Usually, people think that the quality of your options for the next life will be based on whatever criteria they themselves thought mattered most in life. Someone who went out of their way to be nice will believe that it will be based on how nice you were, whereas someone who spent their life accumulating money/power will assume it’s based on that.
For all we know, though, your “afterlife score” could be based on how many different sorts of food you tried, how many buttons you pressed, how far you traveled from where you were born, etc.
I actually have a novel idea about this concept: Dude dies and gets the red carpet treatment in the afterlife. He’s very happy about it, but he doesn’t understand… He never got married and spent most of his life doing data entry and courtroom stenography.
Turns out, he got the high score in “button pressing.” He’s at the top of the leaderboard and this qualifies him for all sorts of “premium” reincarnation options. Not only that, but the gods intend to put his talents to use right away on “pressing issues.”


Comcast—in the top ten of the shittiest companies of all time that no one wants to have to deal with—is surprised that their “new” deal of, “be slightly less villainous, and expect all our problems to go away” isn’t working.
Careful! Ghost in the shell!