both are a bad comparison for ICE, which is much more like the SA: low-level thugs used to intimidate the opposition and steal elections, to be discarded and replaced the moment they’re no longer needed
this is how I ended up finding out there are now much better rips available for many of the shows I burned to DVD in the mid-2000s


I’m not autistic, and neither are my friends, but we are nerds who generally work in tech or adjacent fields. Most of us watch anime and play video games, and I feel like it’s more accepted in general society now to do so as an adult. So in general I’m not sure those interests mark you as being young for your age anymore

that’s quite possible, yes

I don’t think he was talking about spending 5 minutes on a message, just that it’s not actually going to take you much longer to send what you want to say in a single message rather than 3 separate messages with one of them being a correction. So it wouldn’t make much of a difference to the sender, but it’s much nicer for the receiver.


can confirm, found the first episode disgusting enough that I haven’t felt like watching any others


my only complaint with it is the same complaint I have with all kpop: it’s autotuned to hell and back. I suppose many people like how it sounds, but it’s not for me
apparently they’re retro cool


German licorice is fine too. Personally I’m more of a salmiak licorice person, but krakelingen are also tasty
I would sooner say ‘krakelingen zijn mijn nummer 1’, but I don’t speak much Dutch these days, so maybe that’s an anglicism


what kind of licorice? sweet (zoethout)? salmiak? double-salted?


I don’t see the options, are they the same in the EU?

America has concentrated far too much power in the presidency if this is something Trump can just decide to do


… if they treat you the way you want to be treated.
I mean, refried comes from the Spanish refrito, which just means ‘fried well’. A dish that is called refrito doesn’t necessarily involve cooking ingredients twice. Although in this case you do boil the beans first and then fry them, the name doesn’t actually mean you fry the beans twice, as it sounds like it does


3 students share an apartment; 2 of them study a lot, but the third spends most of his nights partying. The 2 studious housemates decide to pull a prank on him, and one night when he comes home they are waiting for him next to the bedroom door wearing white sheets. One of the friends says ‘welcome, friend, I am Peter!’. The other says ‘welcome, friend, I am Paul’. The drunk housemate looks at them and says ‘Colleagues! Would you mind stepping aside? I am Lazarus!’
yeah, that doesn’t translate… in Dutch, the names refer to St Peter and St Paul and both end in -us as well: Petrus and Paulus. Also, ‘being Lazarus’ means being very drunk.
I’m not arguing that AI won’t get better, I’m arguing that the exponential improvements in AI that OP was expecting are mostly wishful thinking.
they could stick to old data only, but then how do you keep growing the dataset at the rate we’ve seen recently? That is where a lot of the (diminishing) improvements of the last few years have come from.
and it is not at all clear how to apply reinforcement learning to more generic tasks like chatbots, which lack the clear scoring system that both chess and StarCraft have.
the problem is that AIs are trained on programs that humans have written. At best, the LLM architectures they create will be similar to the state of the art that humans have reached at that point.
however, even more important than the architecture of an AI model is the training data it is trained on. If we start including AI-generated programs in this data, we will quickly observe model collapse: model performance tends to get worse as more AI-generated data is included in the training data.
rather than AIs generating ever smarter new AIs, the more likely result is that we can’t scrape new quality datasets, as they’ve all been contaminated with LLM-generated data that will only reduce model performance
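to illustrate the feedback loop I mean, here’s a minimal toy sketch (just a statistics analogy, not anyone’s actual training pipeline): fit a simple Gaussian “model” to data, then train each next generation only on samples drawn from the previous generation’s model. The fitted spread drifts toward zero over generations, i.e. the diversity of the original data is lost.

```python
import random
import statistics

def collapse_demo(generations=1000, n=20, seed=0):
    """Fit a Gaussian to data, then retrain each generation only on
    samples drawn from the previous generation's fitted model."""
    rng = random.Random(seed)
    # generation 0: "real", human-made data
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    spreads = [statistics.stdev(data)]
    for _ in range(generations):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        # the next generation trains only on model-generated data
        data = [rng.gauss(mu, sigma) for _ in range(n)]
        spreads.append(statistics.stdev(data))
    return spreads

spreads = collapse_demo()
print(f"spread of generation 0: {spreads[0]:.3f}")
print(f"spread of final generation: {spreads[-1]:.3f}")
```

the numbers here collapse because each refit slightly underestimates and randomly perturbs the spread, and the errors compound; real model collapse is analogous in that the tails of the data distribution disappear first.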
why is it very likely to do that? We have no evidence to believe this is true at all, and several decades of slow, plodding AI research suggest that real improvement comes incrementally, as in other research areas.
to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums


kiwis? what do New Zealanders have to do with it? I feel like something is flying very far over my head
no, he’s referring to the C-suite: CEO, CTO, CFO, etc.