https://www.washingtonpost.com/technology/2020/10/07/apple-geep-iphone-recycle-shred/ and they’ll sue you if you try to stop them from making more E-waste! https://youtu.be/rZjbNWgsDt8
The issue with this is the difference between GB (1,000,000,000 bytes) and GiB (1,073,741,824 bytes) https://massive.io/file-transfer/gb-vs-gib-whats-the-difference/
HDD manufacturers use GB, a metric (decimal) measurement, because it’s better for marketing, while computers report sizes in GiB, a binary measurement. So people think they’re getting the full 15GB, but the operating system reports the same capacity as roughly 13.97GiB.
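The mismatch above is easy to check yourself; a quick sketch (the 15 GB figure is just the example from the comment):

```python
# Why a drive marketed as "15 GB" shows up as roughly 14 GiB in the OS.
marketed_gb = 15
total_bytes = marketed_gb * 1_000_000_000   # manufacturers count decimal gigabytes
gib = total_bytes / 2**30                   # operating systems count binary gibibytes
print(f"{marketed_gb} GB = {gib:.2f} GiB")  # → 15 GB = 13.97 GiB
```

The gap grows with capacity: at the terabyte scale the decimal/binary difference is nearly 10%.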
AI regulation is definitely needed; self-regulation never works. Look at how Google and Meta have been operating: even now, with GDPR in place, they’re still getting away with abusing users’ data with no consequences.
> “OpenAI did not tell us what good regulation should look like,” the person said.

> “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office.
I should hope OpenAI didn’t tell them how to regulate OpenAI. I also hope this isn’t the only regulation we see: since technology is constantly advancing, we’re going to need to constantly update regulation to keep companies like OpenAI from getting out of control like Google.
> OpenAI argued that, for example, the ability of an AI system to draft job descriptions should not be considered a “high risk” use case, nor the use of an AI in an educational setting to draft exam questions for human curation. After OpenAI shared these concerns last September, an exemption was added to the Act.
This bothers me: job descriptions are already ridiculous, with over-the-top requirements for jobs that don’t need them, and feeding these prompts into an AI is only going to make that worse.
As for drafting exams: doesn’t it make the exam somewhat redundant if the experts on the material being examined can’t even come up with the questions themselves? Why should students bother engaging with the material when they could just use AI too, thanks to this loose regulation?
> Researchers have demonstrated that ChatGPT can, with the right coaxing, be vulnerable to a type of exploit known as a jailbreak, where specific prompts can cause it to bypass its safety filters and comply with instructions to, for example, write phishing emails or return recipes for dangerous substances.
Unfortunately, since this regulation isn’t global and there are so many open-source models that can run on consumer hardware, there is no real way to regulate jailbreaking prompts; this is always going to be an issue. On the other hand, these low-power open-source models are needed to give users more options and privacy — this is where we went wrong with search engines and operating systems.
I see the Extend part of Embrace Extend Extinguish is about to start…