The age rating tells you who can use the app, not how long it’s been up.
YouTube will actually take action and has done so in most instances. I won’t say they’re the fastest, but they do kick people off the platform if they deem them high-risk.
I don’t understand the comments suggesting this is “guilty by proxy”. These platforms have algorithms designed to keep you engaged, and through their callousness they have allowed extremist content to remain visible.
Are we going to ignore all the anti-vaxxer groups who fuelled vaccine hesitancy, which resulted in long-dead diseases making a resurgence?
To call Facebook anything less than complicit in the rise of extremist ideologies and conspiratorial beliefs is extremely short-sighted.
“But Freedom of Speech!”
If that speech causes harm, like convincing a teenager that walking into a grocery store and gunning people down is a good idea, you don’t deserve to have that speech. Sorry, you’ve violated the social contract and those people’s blood is on your hands.
Whenever some dipshit responds to me with “you’re talking about AGI, this is AI”, my only reply is fuck right off.
I’ve done this dance already and I’m tired of their watered-down attempts at bringing human complexity down to a level that makes their chatbots seem smart.
I don’t need a theory for this; you’re being highly reductive by focusing on a few features of human communication.
What research? These bots aren’t that complicated beyond an optimisation algorithm. Regardless of the tasks you give one, it can’t evolve beyond what it is.
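To spell out what I mean by “an optimisation algorithm”, here’s a toy sketch of my own (plain gradient descent on made-up data, nothing to do with GPT’s actual training code):

```python
import numpy as np

# Toy "optimisation algorithm": plain gradient descent on a fixed loss.
# Nothing in this loop lets the model pick new goals or "evolve";
# it only nudges weights towards lower error on the data it was given.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                          # step towards lower loss, nothing more

print(w)  # ends up near true_w; the "learning" is curve fitting
```

Scale that idea up by a few billion parameters and you’re in GPT territory, but the loop itself never changes.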
There’s no way these chatbots are capable of evolving into Ultron. That’s like saying a toaster is capable of nuclear fusion.
Sounds like a great car! It does seem like something’s wrong with the battery, so a replacement is in order.
From the replies I’ve been getting, I think so.
My mum’s 2019 Toyota Yaris has to have its engine run every few days or the battery dies from just sitting on the driveway. It could be a faulty battery, but considering the car isn’t even that old and has barely done 30k miles, it’s not doing so great. I discovered yesterday that my EV charges better after I’ve driven it around and the battery’s warmed up a bit. The car goes a bit haywire when you cold-start it, so it seems like it needs some prep time before a drive.
The problem isn’t the misinformation itself, it’s the rate at which misinformation is produced. Generative models lower the barrier to entry, so anyone in their living room can make deepfakes of your favourite politician. The blame on AI isn’t for creating misinformation, it’s for making the situation worse.
The most comprehensive installation guide, bar none, is the one on the ArchWiki. I used the ArchWiki well before I started using Arch Linux; it’s just that good.
How it goes about constructing sentences has no bearing on whether the phrases it reproduces are plagiarism. Plagiarism doesn’t care about probability of occurrence; it looks at how closely one work resembles another, and the more similar they are, the more likely it is to be plagiarised.
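To illustrate (a toy check I made up, not how any real detector works), resemblance can be measured without caring how the text was produced:

```python
# Toy resemblance check: plagiarism cares about overlap between works,
# not about the process that generated the text.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / max(len(ga | gb), 1)  # Jaccard overlap of 3-grams

source = "the quick brown fox jumps over the lazy dog by the river bank"
output = "a quick brown fox jumps over the lazy dog by the old river"
print(similarity(source, output))  # high overlap flags it, whoever (or whatever) wrote it
```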
You can only escape a plagiarism claim by proving that you didn’t copy intentionally, or by citing your sources.
GPT can do neither: it has to learn from the sources in order to learn the probabilities of phrases being constructed together, and it doesn’t cite those sources. So in my eyes, if it’s found to be plagiarising, it has no defence.
The reason GPT is different from those examples (not all of them, but I’m not going into that) is that in those cases the malicious action is on the part of the user. With GPT, it gives you an output that it has plagiarised. The user can take that output and submit it as their own, which is further plagiarism, but that doesn’t absolve GPT. The problem is that GPT doesn’t cite its sources, which would be very helpful for understanding where the information comes from and for fact-checking it.
There’s a bit more nuance to your example. The company is liable for building a tool that allows plagiarism to happen. That’s not down to how people are using it; that’s just what the tool does.
I’m not sure what you mean by this. Information has always been free if you look hard enough. With the advent of the internet, you’re able to connect with people who possess this information and you’re likely to find it for free on YouTube or other websites.
Copyright exists to protect against plagiarism or theft (in an ideal world). I understand the frustration that comes with archaic laws, and that updates to laws move at a glacial pace; however, the death of copyright would harm more people than you’re expecting.
Piracy has existed as long as the internet has. Companies have complained ceaselessly about lost profits, but now that LLMs have come along, they’re fine with piracy as long as it’s masked behind a glorified search algorithm. They’re fine with cutting jobs and replacing them with an LLM that produces lower-quality output at significantly cheaper rates.
I get that part, but I think what gets taken more seriously is how “human” the responses seem, which is a testament to how good the model is. But that’s set dressing when GPT has been known to give incorrect, outdated or contradictory answers. Not always, but unless you know what kind of answer to expect, you have to verify what it’s telling you, which means you’ll spend half the time fact-checking the LLM.
You can’t feed it perceptions any more than you can feed me your perceptions. You give it text, and the quality of the output is determined by how the LLM has been trained to understand that text. If by feeding it perceptions you mean what it’s trained on, I have to remind you that the reality GPT is trained on is the one dictated by the internet, with all of its biases. The internet is not a reflection of reality; it’s how many people escape from reality and share information. It’s highly subject to survivorship bias. If the information doesn’t appear on the internet, GPT is unaware of it.
To give an example: if GPT gives you a bad output and you tell it that it’s a bad output, it will apologise. This seems smart, but it isn’t really. It doesn’t actually feel remorse; it’s producing a statistically likely response based on what it’s parsed from your text.
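A quick sketch of what that looks like (the transcript is made up and the model name is just an example), using the OpenAI Python client:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Hypothetical transcript: the model gave a wrong answer and gets told off.
messages = [
    {"role": "user", "content": "What year did the Berlin Wall fall?"},
    {"role": "assistant", "content": "The Berlin Wall fell in 1991."},  # wrong
    {"role": "user", "content": "That's a bad answer, it's wrong."},
]

# The model only sees this transcript as text. Given "it's wrong" in the
# context, an apology is simply the most likely continuation it learned
# from its training data; there is no remorse behind it.
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)  # typically opens with an apology
```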
Even the Wayback Machine has limits to what is available.