

Claude Code wrote an OpenCV Python app to remove every other word from the Declaration of Independence.
Not really, but you wondered
Punch them in the fucking face and 300 kick their asses out the door
Stop fucking demanding and start fucking DOING YOUR FUCKING JOBS
Murderbot does not approve.
Welcome to January. I thought something new happened
Sure, when reality is sane. Not with Shitler at the helm.
Burn them all
So the new RuneScape was just a cash grab? “Reducing complexity” by reducing headcount just means management incompetence. It also doesn’t make you more agile; proper guidance makes you more agile.
This is a load of shit.
What are you talking about? RAG is a method you use. It only has the limitations you design in. Your datastore can be whatever you want it to be. The LLM performs a tool use YOU define. RAG isn’t one thing: you can build a RAG system out of flat files or a huge vector datastore, and you decide how much data gets returned to the context window. Python and ChromaDB easily scale to gigabytes on consumer hardware, completely suitable for local RAG.
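Since this comes up a lot, here’s a minimal sketch of what I mean. Everything specific (the ./docs folder, the collection name, the query) is made up for illustration; it just assumes a local ChromaDB install.

```python
# Minimal local RAG sketch: flat files in, ChromaDB as the vector store,
# and YOU decide how much retrieved text goes into the context window.
# Paths, collection name, and the query are placeholders.
import pathlib
import chromadb

client = chromadb.PersistentClient(path="./rag_store")  # on-disk store; consumer hardware is fine
collection = client.get_or_create_collection("notes")

# Ingest: every .txt file becomes one document (chunking strategy is up to you).
for f in pathlib.Path("./docs").glob("*.txt"):
    collection.add(ids=[f.name], documents=[f.read_text()])

# Retrieve: n_results controls how much data comes back for the context window.
hits = collection.query(query_texts=["what did I write about ntfy?"], n_results=3)
context = "\n\n".join(hits["documents"][0])

# Hand `context` to whatever LLM tool use you define; the prompt shape is yours too.
prompt = f"Answer from the notes below.\n\n{context}\n\nQuestion: what did I write about ntfy?"
print(prompt)
```

Swap PersistentClient for an in-memory Client, or the .txt glob for whatever datastore you like; every piece is yours to define.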
2nd term same as the 1st.
Utterly fuck shitter alt bluesky. Fuck them sideways and down and dead. Fuck them dead.
What an ugly cunt.
I’ve tried this AI therapist thing, and it’s awful. It’s OK for helping you work out what you’re thinking, but abysmal at analyzing you. I got some structured timelines back from it that I USED in therapy, but AI is a dangerous alternative to human therapy.
My $.02 anyway.
I’ve been using ntfy for years without issue. Curious how it’s failing you.
What a great idea. I never think to reach for automations.
You can use it in the US, you just have to use AltStore Classic and make sure it’s running on a Mac on your network for regular check-ins. I think it has a rolling 7-day expiration.
Nailed it. I’ve tried taking notification contexts and seeing just how hard it is. Their foundation model, I think, is a 4-bit quantized, 3-billion-parameter model.
So I loaded up llama, phi, and picoLLM to run some unscientific tests. Honestly, they had way better results than I expected. Phi and llama both handled notification summaries great (I modeled the context window myself; nothing official). I have no idea WTF AFM is doing, but it’s awful.
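Not my exact harness, but roughly the kind of unscientific test I mean, assuming local models served through Ollama; the model tags and the sample notifications are placeholders, not what I actually ran.

```python
# Rough sketch of a notification-summary comparison against local models via Ollama's HTTP API.
# Model tags and the fake notifications below are placeholders for illustration.
import requests

NOTIFICATIONS = [
    "Messages: Jimmy: are we still on for 6?",
    "Calendar: Dentist tomorrow 9:00 AM",
    "ntfy: backup job finished with 0 errors",
]

PROMPT = (
    "Summarize these phone notifications in one short sentence each, "
    "most important first:\n" + "\n".join(NOTIFICATIONS)
)

for model in ["llama3.2:3b", "phi3:mini"]:  # roughly the same size class as a 3B on-device model
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": model, "messages": [{"role": "user", "content": PROMPT}], "stream": False},
        timeout=120,
    )
    print(f"--- {model} ---")
    print(resp.json()["message"]["content"])
```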
I was just telling little Jimmy about being a miner in the actinide mines…