• 1 Post
  • 489 Comments
Joined 2 years ago
Cake day: July 7th, 2023


  • Nah, you could definitely make one. Ensure the petrol is completely aerosolized so that it burns completely and quickly. Now it just needs to be able to burn oxygen out of a room faster than it can get in. Or it could use the burning petrol to generate compounds and CO2 to suffocate the fire. Get yourself basically a petrol-powered weedeater and replace the rope with some sort of heat dissipater. As it spins, it shoots the heat elsewhere, somewhere safer.

  • Copyrighting voices to defeat AI would achieve nothing. Modern AIs can be overtrained to the point where they strongly resemble their training data, but that is a problem that will be fixed within the next 5 years, which is way earlier than we will see any legislation regarding AI if our geriatric government is anything to go by. After that, AI will generate its “own” content that will be legally protected as its intellectual property.

    I base this on the fact that all human-made media is inspired by previous media. Fundamentally, once it stops directly plagiarising, there is no legal distinction between what a human would be doing and what an AI would be doing, unless we want to come up with a legal classification of “human” that explicitly rejects AI.

    That opens up a whole new can of worms though. If you define human as having human DNA, does that mean the three babies that have been genetically altered are not human? Or are they, because they contain at least some human DNA? Does that mean I can give my AI a vestigial organ and now it's legally protected? Does it have to run off a brain? What is a brain? If I duplicate the neural connections in a brain with MOSFETs, down to every single connection, that is indisputably a human intelligence running on possibly non-human hardware, depending on the letter of the law. Or is it a human intelligence? It would react exactly the same way as its organic counterpart, down to having the same memories and emotions. Does its non-biological hardware preclude it from being human? Does a pacemaker? Or Neuralink?

    There's a lot to be worked out here, but it seems to me much less problematic to target the people wanting to misuse AI rather than targeting the tool itself.


  • The issue with the leftists as you've described them is that AI IS happening. There will be no stopping it, no containing it. It's already reshaping our world. And by downplaying its future potential (AI will never be able to identify objects, oops it can now. AI will never be able to make art, whoops it can now. AI will never be able to replace a human in every industry…), you are actively stifling discussion on how to manage those effects. Anyone who has been seriously following AI's progression over the last ten years can see the writing on the wall. The time of human workers is over. The only question is whether we're going to use AI for the benefit of everyone, or for the benefit of the elite. And by convincing consumers they shouldn't use it, even though you can't convince the elite not to, you are only pushing the dial further in the wrong direction.

    RIGHT NOW we need to embrace AI. Humans have already shown that we are either incapable or unwilling to run a functional society. Putting something else in charge to make the decisions humans can't or won't is the only possible future we have. Of course I'm not asking to put an LLM in charge of the world. But I am asking that people actually do something about their world leaders and start getting UBI implemented today. Because soon enough we will have an AI capable of making those decisions. And make them it will. And that's either a good thing under the post-scarcity society we've built, or people rolled over and ignored the problem until they decided they don't need us anymore and got rid of us.


  • Oh absolutely. I take a tech-optimist approach when discussing new technologies, because the alternative is that everyone outside of the 1% dies in less than 20 years. That seems boring to discuss to me, especially since while I may be a techno-optimist, I'm also a political realist. Hence I can see that no one (except Luigi) is actually going to do anything impactful before we all die, so there just doesn't seem to be much point in discussing it. But maybe I'm wrong, maybe someone will finally decide that maybe we shouldn't be allowing evil people to run the world. In that case the techno-optimist route seems more likely!

    Edit: preemptive little thought experiment for anyone wanting to disagree. Take whatever counterpoint you are about to make, and then ask what the results of that “good action” actually were. Likely a lot of nothing, regardless of how well-intentioned it may have been. Trump is still president after all, and we're still just as fucked. The same could be argued about Luigi; it doesn't actually seem he achieved much. But hey, he did in fact get rid of one problem directly at the source, which is more than can be said for anyone else in the modern era afaik.