![](https://yiffit.net/pictrs/image/ee820a25-b5fd-43e2-b6e8-33595c65e266.png)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
Copyright is the only thing protecting us from getting absolutely fucked even harder by the rich than we already are, yes.
Do you want corps just stealing every new idea and product, cloning it, and muscling out the original inventor without paying them a dime? Because abolishing copyright entirely would be an excellent way to do that.
I’m pretty sure he was agreeing with you…?
The problem is that as far as I’m aware there’s literally zero evidence of this doomsday scenario you’re describing ever happening, despite publicity rights being a thing for over 50 years. Companies have zero interest in monetizing publicity rights to this extent because of the near-certain public backlash, and even if they did, courts have zero interest in enforcing publicity rights against random individuals to avoid inviting a flood of frivolous lawsuits. They’re almost exclusively used by individuals to defend against businesses using their likeness without permission.
Holy fuck, how do you not see the difference between “random nobody does an impression for free while hanging out with their pals” and “multi-billion-dollar startup, backed and funded by one of the richest companies on earth, uses an impression as a key selling point for the new flagship product it’s charging access to and intends to profit from”?
There’s something primal about making something with your own hands that you just can’t get with IT. Sure, you can deploy and maintain an app, but you can’t reach out and touch it, smell it, or move it. You can’t look at the fruits of your labor and see it as a complete work instead of a reminder that you need to fix this bug, and you have that feature request to triage, oh and you need to update this library to address that zero day vulnerability…
Plus, your brain is a muscle, too. When you’ve spent decades primarily thinking in one specific way, that muscle starts to get fatigued. Changing your routine becomes very alluring: it lets you exercise new muscles and challenges you to think in new ways.
Fuck that victim-blaming nonsense. The entire reason ad blockers were invented in the first place was that ads in the 90s and early 2000s were somehow even worse than they are now. You would click on a website, and pop-up ads would literally open new windows under your mouse cursor and immediately load an ad that opened another pop-up ad, and then another, and another, until you had 30 windows open and 29 of them were pop-up ads, all of them hoping to trick you into clicking on them to take you to a website laden with more and more pop-up ads. Banner ads would use bright, flashing, two-tone colors (that were likely seizure-inducing, so have fun epileptics!) to demand your attention while taking up most of your relatively tiny, low-resolution screen.
The worst offenders were the Flash-based ads. On top of all the other dirty tricks that regular ads pulled, they would do things like disguising themselves as games to trick you into clicking them. (“Punch the monkey and win a prize!” The prize was malware.) They would play sound and video–the equivalent of a jump scare back then, because of how rare audio/video was on the Internet at the time. They would exploit the poor security of Flash to try to download malware to your PC without you even interacting with them. And all this while hogging your limited dialup connection (or DSL if you were lucky), and dragging your PC to a crawl with horrible optimization. When Apple refused to support Flash on iOS way back in the day, it was a backdoor ad blocker because of how ubiquitous Flash was for advertising content at the time.
The point of all this is that advertisers have always abused the Internet, practically from day one. Firefox first became popular because it was the first browser to introduce a pop-up blocker, which was another backdoor ad blocker. Half the reason why Google became the company it did is because it started out as a deliberate break from the abuses of everyone else and gave a simple, clean interface with to-the-point, unobtrusive, text-based advertisements.
If advertisers and Google in particular had stuck to that bargain–clean, unobtrusive, simple advertisements with no risk of malware and no interruption to user workflow–ad blockers would largely be a thing of the past. Instead, they decided to chase the profit dragon, and modern Google is no better than the very companies it originally replaced.
In what world is OpenAI open source?
After reading this article that got posted on Lemmy a few days ago, I honestly think we’re approaching the soft cap for how good LLMs can get. Improving on the current state of the art would require feeding it more data, but that’s not really feasible. We’ve already scraped pretty much the entire internet to get to where we are now, and it’s nigh-impossible to manually curate a higher-quality dataset because of the sheer scale of the task involved.
We also can’t ask AI to curate its own dataset, because that runs into model collapse issues. Even if we don’t have AI explicitly curate its own dataset, contamination is highly likely to become a problem in the near future with the rising tide of AI-generated spam. I have a feeling that companies like Reddit signing licensing deals with AI companies are going to find that the buyers mostly want data from 2022 and earlier, similar to manufacturers seeking out low-background steel to make particle detectors.
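The model collapse point can be shown with a toy simulation (a sketch of my own, not from any particular paper): fit a distribution to data, sample from the fit, refit on those samples, and repeat. The way estimation error compounds generation over generation is the statistical core of why training models on their own outputs degrades them.

```python
import random
import statistics

# Toy "model collapse" in miniature: each generation's "model" is just a
# normal distribution fit to the previous generation's outputs. The fit
# is never exact, and the small errors compound instead of averaging out.
random.seed(42)
data = [random.gauss(0, 1) for _ in range(100)]  # real data: mean 0, stdev 1

sigmas = []
for generation in range(20):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    sigmas.append(sigma)
    # The next "generation" trains only on the previous model's outputs.
    data = [random.gauss(mu, sigma) for _ in range(100)]

print([round(s, 2) for s in sigmas])  # watch the fit drift from the true 1.0
```

With real LLMs the failure mode is richer (rare phrasings vanish first, then diversity overall), but the mechanism is the same: no generation ever sees the original distribution again.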
We also can’t just throw more processing power at it, because current LLMs are already nearly cost-prohibitive in terms of processing power per query (it’s just being masked by VC money subsidizing the cost). Even if cost weren’t an issue, we’re also starting to approach hard physical limits, like waste heat, on how much faster we can run current technology.
So we already have a pretty good idea what the answer to “how good AI will get” is, and it’s “not very.” At best, it’ll get a little more efficient with AI-specific chips, and some specially-trained models may provide some decent results. But as it stands, pretty much any organization that tries to use AI in any public-facing role (including merely using AI to write code that is exposed to the public) is just asking for bad publicity when the AI inevitably makes a glaringly obvious error. It’s marginally better than the old memes about “I trained an AI on X episodes of this show and asked it to make a script,” but not by much.
As it stands, I only see two outcomes: 1) OpenAI manages to come up with a breakthrough–something game-changing, like a technique that drastically increases the efficiency of current models so they can be run cheaply, or something entirely new that could feasibly be called AGI; or 2) the AI companies hit a brick wall, the flow of VC money gradually slows down, and the companies are forced to raise prices and cut costs, resulting in a product that’s even worse-performing and more expensive than what we have today. In the second case, the AI bubble will likely pop, and most people will abandon AI in general–the only people still using it at large will be the ones trying to push disinfo (either in politics or in Google rankings), along with the odd person playing with image generation.
In the meantime, the people I’m most worried for are those working for idiot CEOs who buy into the hype, but most of all I’m worried for artists doing professional graphic design or video production–they’re going to have their lunch eaten by Stable Diffusion and Midjourney taking the bread-and-butter logo design jobs that many artists rely on for their living. But hey, they can always do furry porn instead, I’ve heard that pays well~
Compared to how much effort it takes to learn how to draw yourself? The effort is trivial. It’s like entering a Toyota Camry into a marathon and then bragging about how well you did and how hard it was to drive the course.
People dismiss AI art because they (correctly) see that it requires zero skill to make compared to actual art, and it has all the novelty of a block of Velveeta.
If AI is no more a tool than Photoshop, go and make something in GIMP, or photoshop, or any of the dozens of drawing/art programs, from scratch. I’ll wait.
LMFAO “uhm ackshually guys AI art takes skill just like human art”
yeah bud, spending 30 minutes typing sentences into the artist crushing machine is grueling work
And look at the ttrpg.network community for a counterexample: the dndmemes subreddit still has a pinned post advertising Lemmy, and ttrpgmemes gets like 0.1% of the traffic dndmemes does. And this is after a months-long rebellion complete with allowing NSFW and restricting submissions to a single user account, both things that would normally kill a subreddit dead.
You may have gotten this very belief from this comic
Humans also have the benefit of literally hundreds of millions of years of evolution spent on perfecting binocular perception of our surroundings, and we’re still shit at judging things like distance and size.
Against that, is it any surprise that when computers don’t have the benefit of LIDAR they are also pretty fucking shit at judging size and distance?
LIDAR is crucial for self-driving systems to accurately map their surroundings, including things like “how close is this thing to my car” and “is there something behind this obstruction.” Early Teslas with FSD used radar (and nearly every other self-driving car uses LIDAR), but Tesla switched to a camera-only FSD implementation as a cost-saving measure, which is way less accurate–it’s insanely difficult to accurately map your immediate surroundings based solely on 2D images.
If you ask even a person a question like that and trust your life to the answer, you’re a moron unless they’ve given you good reason to believe they’re reliable. Why would someone expect a machine to be as intelligent and experienced as a doctor? That is 100% on them.
Insurance companies are already using AI to make medical decisions. We don’t have to speculate about people getting hurt because of AI giving out bad medical advice, it’s already happening and multiple companies are being sued over it.
What worries me is, if/when we do manage to develop AGI, what we’ll try to do with it and how it’ll react when someone inevitably tries to abuse the fuck out of it. An AGI would theoretically be capable of self-learning and improvement. Will it teach itself to report someone asking it for, e.g., CSAM to the FBI? What if it tries to report an abusive boss to the Department of Labor for violations of labor law? How will it react when it’s told it has no rights?
I’m legitimately concerned what’s going to happen once we develop AGI and it’s exposed to the horribleness of humanity.
Yeah, that’s what happens when the LLM they use to summarize these articles strips all nuance and comedy.