• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: July 10th, 2023

  • Everyone is shitting on AI, so let’s put a reality check on it.

    Presuming things keep developing as they are rn, AIs are really good at producing high-quality images and low-quality clips. They are capable of producing low-quality music clips. They are capable of text-to-speech while simulating specific voices, and of converting one voice into another while maintaining pitch and characteristics. They are able to create medium-length texts.

    AIs are currently unable to create longer low-quality videos or shorter high-quality clips. They are unable to create songs. They are unable to create isolated sound design or synthesize voices from scratch. They are unable to create cohesive publications or story scripts.

    From what I can tell, there are no indications that AI will replace most creative jobs. The only jobs it will replace are the ones that don’t require creativity but do require a lot of repetitive processes. That is a plus in my book.

    I think the hype around AI is overblown, but I think the dystopian outlook on AI is just as overblown.

    AI is just a tool. It can be just as good or as bad as whoever is using it, and we already have that with the internet itself. I don’t think this will completely change our everyday life in a big way; just a few more annoyances in one part of our lives, and a few less in another.

    That said, even if it’s not world-changing, it will be important to get to know how it works, more out of convenience than out of necessity.

    (please correct me if any information is wrong)






  • I still feel like, if we want to grow faster organically, we need to natively support more “discovery functions”: things you can toggle off, like a recommendation screen for example. We can build and adapt the algorithm for it as open source, so no one has to worry that we collect data.

    We’re running into the Linux vs. Windows problem, where you can technically do more stuff and have more control over your account on Lemmy, but you need to be familiar with the fediverse before joining just to understand how to use Lemmy. That’s a big problem for any potential new user.





  • I said this on the last repost as well.

    Obviously there are reasons the film studios want that, but actually getting information because you suspect someone crimes a bit too hard online is really tough. Your evidence must be watertight to get a subpoena, and until then you can run into a plethora of different issues thanks to airtight GDPR rules that apply to US companies as well (they were updated to be even stricter with newer compliance laws last year).

    Actually, there’s a good chance that sharing data or IPs without a subpoena could be devastating not only to any potential legal case, but also to Reddit. They will never do this, because they stand to gain nothing from it as is, and if they wanna go IPO they can’t pull such shaky moves rn.

    Obligatory IANAL, if you need legal advice, ask a lawyer because they need all your context and they will know the ins and outs of their field.



  • That’s not how this works. They operate internationally, and GDPR would hit them like a brick if they did that.

    I would assume they have some deals with law enforcement to transmit data under narrow circumstances.

    “I’m honestly asking what the impact on the users is from this breach.”

    Well, if you signed up there and did an ancestry inquiry, those hackers can now, without a doubt, link you to your ancestry. They might be able to doxx famous people, and in the wrong hands this could lead to stalking and even more dangerous situations. Basically, everyone who signed up there has lost their privacy and has their sensitive data at the mercy of a criminal.

    This is different. This is a breach, and if you’re a company taking care of such sensitive data, it’s your job to do the best you can to protect it. If they really do blame this on the users, they are in for a class action and a hefty fine from the EU, especially now that the EU has established even more guidelines for companies regarding the handling of sensitive data. This will hurt in some regard.






  • Because the training, and therefore the datasets, are an important part of the work with AI. A lot of ppl are therefore arguing that the ppl who provided the data (e.g. artists) should get a cut of the revenue, a flat fee, or something similar as compensation. Looking at a picture is deemed fine in our society, but copying it and using it for something else is seen more critically.

    Btw, I am totally with you regarding the need to not hinder progress, but at the end of the day, we need to think about both the future prospects and the morality.

    There was something about labels being forced to pay a cut of the revenue to all bigger artists for every CD they sold. I can’t remember exactly what it was, but something like that could maybe be of use here as well.


  • “scam bot operators will just use stolen credit cards -”

    And that’s not true. Yes, there will be a small portion that does it, but this is where the idea is pretty smart.

    Handing over your credit card information is a practical hurdle, but also a legal risk.

    There’s a bunch of companies and people who will stop using bots just because they can’t implement it, don’t want to implement it, or don’t have the time. Also, don’t forget: if one person runs 10,000 active bots, that means providing credit card information 10,000 times, but also paying $120,000 per year (see the rough sketch at the end of this comment). If you wanna do it legally, this shit is expensive, and probably not worth it for a lot of ppl.

    And there’s also a bunch of ppl who are weighing the risk of being exposed for fake credit cards, and they’ll stop using bots because they are not willing to commit fraud.

    I get that this will turn off even more users and it’s obviously a bad PR move, but you can’t deny that it is quite effective for the things he says he wants to achieve.
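
    To make that arithmetic concrete, here’s a rough back-of-the-envelope sketch in Python. The per-account fee is my own assumption, reverse-engineered from the 10,000 bots / $120,000-per-year figures above, not an official price.

    ```python
    # Back-of-the-envelope cost of running a "verified" bot farm.
    # The fee below is an assumption derived from the figures in this comment
    # (10,000 bots ~= $120,000/year), not an official price.

    FEE_PER_ACCOUNT_PER_YEAR = 12.00  # assumed USD per verified account per year
    NUM_BOTS = 10_000                 # size of the hypothetical bot farm

    yearly_cost = NUM_BOTS * FEE_PER_ACCOUNT_PER_YEAR
    print(f"Verifying {NUM_BOTS:,} bot accounts: ~${yearly_cost:,.0f} per year")
    # -> Verifying 10,000 bot accounts: ~$120,000 per year
    ```

    The exact number matters less than the fact that the cost, and the paper trail of payment details, scales linearly with the size of the bot farm.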