
  • AI regulation is definitely needed; self-regulation never works. Look at how Google and Meta have been operating: even now, with GDPR in place, they’re still getting away with abusing users’ data with no consequences.

    “OpenAI did not tell us what good regulation should look like,” the person said.

    “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office.

    I should hope OpenAI didn’t tell them how to regulate OpenAI, and I really hope this isn’t the only regulation we see. Since technology is constantly advancing, we’re going to need to keep updating regulation to stop companies like OpenAI from getting out of control the way Google has.

    “OpenAI argued that, for example, the ability of an AI system to draft job descriptions should not be considered a ‘high risk’ use case, nor the use of an AI in an educational setting to draft exam questions for human curation. After OpenAI shared these concerns last September, an exemption was added to the Act.”

    This bothers me. Job descriptions are already ridiculous, with over-the-top requirements for jobs that don’t need them; feeding these prompts into AI is only going to make that worse.

    As for drafting exams: if the experts on the material being examined can’t even come up with questions and problems, doesn’t that start to make the exams redundant? Why should students bother engaging with the material when they could just use AI themselves, thanks to this loose regulation?

    “Researchers have demonstrated that ChatGPT can, with the right coaxing, be vulnerable to a type of exploit known as a jailbreak, where specific prompts can cause it to bypass its safety filters and comply with instructions to, for example, write phishing emails or return recipes for dangerous substances.”

    Unfortunately, since this regulation isn’t global and there are so many open-source models that can run on consumer hardware, there is no real way to regulate jailbreaking prompts; this is always going to be an issue. On the other hand, these low-power open-source models are needed to give users more options and privacy. This is where we went wrong with search engines and operating systems.