
they were still a party to the NPT the entire time, and they didn't meet the obligations of that treaty for over 20 years. the NPT also mandates inspections by the IAEA
say what you want, they can't unbomb Fordow, so any discussion about the future iranian nuclear program is pointless
ah yes, the completely civilian nuclear program that for the last month-plus was busy using up 5% enriched uranium (the upper range for normal power reactors) and 20% (usual in research reactors) to produce 60% enriched uranium (which has no civilian applications)
neither. the same kind of propaganda activity aimed at the iranian domestic audience as after the assassination of qasem soleimani. like in 2020, it's pretty obvious that trump will take this offramp
but also, this time the international response is wildly different
in 2020: the UK and Qatar called for deescalation, the Saudis said they stood with Iraq, and Turkey offered diplomatic support for deescalation
today: the Saudis, UAE, Qatar (obviously), Bahrain, Kuwait (the countries directly impacted), but also Oman, Jordan, Egypt, Morocco, Lebanon, Turkey, Yemen (Aden), and the Palestinian Authority all condemned the Iranian strikes
today, iranian proxies are also much weaker, and we already know how incompetent their air defences were
80000 hours is the same cultists from lesswrong/EA who believe the singularity is coming any time now, and they're also the core of the people trying to build their imagined machine god at openai and anthropic
it's all very much expected. verbose nonsense is their speciality, and they were doing it long before chatbots were a thing
bro tried to recruit jihadists on roblox (failed) and now screenshots of it all are a matter of public record 💀
that was known almost a decade before that: https://en.wikipedia.org/wiki/Nth_Country_Experiment
it also helps if your air defense network doesn't collapse immediately, because it turns out that in order to guard those nukes you also need a capable conventional military
Yeah, who else? Nuking Dresden at that point would have been useless
you don't have to choose a side, and you can wish everyone involved a very nice visit to the hague
either that, or nukes would have been first used in the korean war instead. imo it's a good thing that nukes were first used against the most cartoonishly evil fascist state imaginable at that point
it’s like they purposefully try to think as little as possible
looking forward to the day when the random datacenter they outsourced their thinking to burns down
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines
no
not yet at least, but this might change soon
i think you've got it backwards. the very same people (and their money) who were deep into crypto moved on to the next buzzword, which turns out to be AI. this includes altman and zucc for starters, but there are more
maybe it's because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
the gist:
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app. [Amazon UK, Amazon US]
Here’s Eyal’s “Hook Model”:
First, the trigger is what gets you in, e.g. you see a chatbot prompt and it suggests you type in a question.
Second is the action: you do ask the bot a question.
Third is the reward, and it's got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer, but sometimes you love the answer! Eyal says: "Feedback loops are all around us, but predictable ones don't create desire." Intermittent rewards are the key tool to create an addiction.
Fourth is the investment: the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they've put in.
Then the user loops back to the beginning. The user will be more likely to follow an external trigger, or they'll come to your site themselves, looking for the dopamine rush from that variable reward.
Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.
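the "variable reward" step above is just intermittent reinforcement, the same schedule a slot machine runs on. here's a toy sketch of the idea (all names are illustrative, not from Eyal's book or any real library): two payout schedules with the exact same average payout, one predictable, one intermittent. the intermittent one is the one that hooks people.

```python
import random

def fixed_reward(pulls: int) -> list[int]:
    """Predictable schedule: every pull pays exactly 1."""
    return [1 for _ in range(pulls)]

def variable_reward(pulls: int, p: float = 0.25, seed: int = 0) -> list[int]:
    """Intermittent schedule: each pull pays 4 with probability p, else 0.
    Expected value per pull is p * 4 == 1.0, identical to the fixed schedule."""
    rng = random.Random(seed)
    return [4 if rng.random() < p else 0 for _ in range(pulls)]

fixed = fixed_reward(1000)
varied = variable_reward(1000)

# same long-run payout, wildly different felt experience:
print(sum(fixed) / len(fixed))    # exactly 1.0
print(sum(varied) / len(varied))  # hovers around 1.0
```

same expected value, but the unpredictable one is what "creates desire" in Eyal's framing: you keep pulling to find out whether this answer is the good one.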
chatbot users are also attracted to their terminally sycophantic and agreeable responses; some users form parasocial relationships with motherfucking spicy autocomplete; and chatbots were marketed to management types as a kind of futuristic status symbol: if you don't use it you'll fall behind, and then you'll all see. people get a mix of gambling addiction/fomo/parasocial relationship/being dupes of a multibillion-dollar advertising scheme, and that's why they get so unserious about their chatbot use
separately, the core of openai and anthropic (and probably some other companies) is made up of cultists who want to build a machine god, but that's an entirely different rabbit hole
like with any other bubble, the money for it won't last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, they might wipe out all of these drivel-making machines for good
for a slightly earlier instance of this, there's also real-time bidding
taking a couple of steps back and looking at the bigger picture (something that, judging by the tone of your post, you might never have done in your entire life): people want to automate things they don't want to do. nobody wants to make elaborate spam that will evade detection, but if you can automate it, somebody will use it that way. this is why spam, ads, certain kinds of propaganda, and deepfakes are among the big actual use cases of genai that likely won't go away (isn't the future bright?)
this is tied to another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful for black propaganda would require a co-conspirator who has both the ability to do it and the correct political slant, who will shut up about it, and who has good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also covers things like nonconsensual porn, for which genai has lowered the barriers
then, again: people automate things they don't want to do. there are people who do like coding. there are also Idea Men butchering codebases trying to vibecode, who don't want to code and have no inclination for or understanding of coding, what it takes, or what the result should even look like. it might not be a coincidence that llms mostly charmed the managerial class, who then push chatbots to automate away things they don't like or understand and would otherwise have to pay people for, all while the chatbot will never say sacrilegious things like "no" or "your idea is physically impossible" or "there is no reason for any of this". people who don't like coding vibecode. people who don't like painting generate images. people who don't like understanding things cram text through chatbots to summarize it. maybe you don't see a problem with this, but that's entirely a you problem
this leads to three further points. first, chatbots let you offload all your "thinking" to saltman & co for the low, low price of selling them your thoughts. this makes cheating exceedingly easy in some cases, something schools have to adjust to, while destroying any ability to learn for the students who use them this way. second, in production, chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won't repeat the mistake; the chatbot will fall into the same trap again the moment you run out of context window. third, this hits all the major causes of burnout at once, and maybe the senior leaves. then what? there's no junior to promote in their place, because the junior was replaced by a chatbot.
all of this comes before you even notice little things like the multibillion-dollar stock bubble tied to openai, or power demands the size of a mid-sized european country's, or whatever monstrosities palantir is cooking, and a couple of other things i'm surely forgetting right now
and also
Is the backlash due to media narratives about AI replacing software engineers?
it's you getting swept up in the outsized ad campaign for the most bloated startup in history, not a "backlash in media". what you see as "backlash" is everyone else who isn't parroting openai's marketing brochure
While I don’t defend them,
are you suure
e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you, because Computer Says No (pay no attention to the oligarch behind the curtain)
e2: this is also partially a side effect of silicon valley running out of ideas. after crypto crashed and burned, then the metaverse crashed and burned, all these people (the same people who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can't imagine anything else that will bring them the promised infinite growth. and their having money in the first place is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing, because vcs have somehow unlearned how to make money