• gmtom@lemmy.world · +69 / -24 · 8 days ago

    I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

    This work has already saved thousands of people’s lives.

    But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now running moral purity tests on it and dick-measuring to see who has the loudest, most extreme hatred of AI.

    • redwattlebird@lemmings.world · +1 / -6 · 7 days ago

      And that AI has been trained on data that was stolen, taking away the livelihoods of thousands more. Further, the environmental destruction has the capacity to harm millions more.

      The benefits aren’t lost on me; it can be used to better society. However, the lack of policy around it, especially the American judicial system’s pandering to corporations, is the crux here. For me, at least.

      • gmtom@lemmy.world · +4 · 7 days ago

        No. I’m also part of the ethics committee at my work, and since we use people’s medical data as our training sets, nine-tenths of our time goes to making sure that data is collected ethically and with very specific consent.

        • redwattlebird@lemmings.world · +1 / -1 · 7 days ago

          I’m fine with that. My issue is primarily theft and permissions, and the way your committee runs things should be the absolute baseline of how models gather data. Keep up the great work; I hope this practice becomes mainstream.

    • Corelli_III@midwest.social · +5 / -7 · 7 days ago

      nobody is trashing visual machine learning used to assist in medical diagnostics

      cool strawman though, i like his little hat

      • gmtom@lemmy.world · +5 / -1 · 7 days ago

        No, when you literally say “Fuck AI, no exceptions” you are very, very explicitly covering all AI in that statement.

        • Corelli_III@midwest.social · +3 / -2 · 7 days ago

          what do you think visual machine learning applied to medical diagnostics is exactly

          does it count as “ai” if i could teach an 11th grader how to build it, because it’s essentially statistically filtering legos

          don’t lose the thread sportschampion

          • gmtom@lemmy.world · +2 · 6 days ago

            Well, most of my colleagues have PhDs or MDs, so good luck teaching an 11th grader to do it.

        • starman2112@sh.itjust.works · +2 / -6 · 7 days ago

          It’s almost like it isn’t the “training on a large data set” part that people hate about generative AI

          ICBMs and rocket ships both burn fuel to send a payload to a destination. Why does NASA get to send tons of satellites to space, but I’m the asshole when I nuke Europe??? They both utilize the same technology!

            • starman2112@sh.itjust.works · +1 / -4 · 7 days ago

              Nope, all generative AI is bad, no exceptions. Something that uses the same kind of technology but doesn’t try to imitate a human with artistic or linguistic output isn’t the kind of AI we’re talking about.

      • brucethemoose@lemmy.world · +33 / -15 · edited · 8 days ago

        Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted about below.

        Corporate enshittification is what’s demonic. When you say “fuck AI,” you should really mean “fuck Sam Altman.”

        • monotremata@lemmy.ca · +29 / -6 · 8 days ago

          I mean, not really? Maybe they’re both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That’s a pretty significant difference.

          • KeenFlame@feddit.nu · +13 · 7 days ago

            No, really. Deep learning, transformers, etc. were discoveries that allowed for all of the above. Just because corporate VC shitheads drag their musty balls through the latest boom, abusing the piss out of it and making it uncool, does not mean the technology is a useless scam.

            • ILikeTraaaains@lemmy.world · +8 · 7 days ago

              This.

              I recently attended a congress about technology applied to healthcare.

              There were works that improved diagnosis and interventions with AI; generative models were mainly used to produce synthetic data for training.

              However there were also other works that left a bad aftertaste in my mouth, like replacing human interaction between the patient and a specialist with a chatbot in charge of explaining the procedure and answering questions to the patient. Some saw privacy laws as a hindrance and wanted to use any kind of private data.

              Both are GenAI: one improves lives, the other improves profits.

            • monotremata@lemmy.ca · +3 / -1 · 7 days ago

              Yeah, that’s not what I was disagreeing with. You’re right about that; I’m on record saying that capitalism is our first superintelligence, and it’s already misaligned. I’m just saying that it isn’t really meaningless to object to generative AI. Sure, the edges of the category are blurry, but all the LLMs, diffusion-based image generators, and video generators were unethically trained on massive bodies of stolen data. Seriously, it’s a pet peeve of mine when people talk about AI as though the architecture is the only significant element, when getting good training data is like 90% of the challenge. Seen in that light, there’s a pretty significant distinction between the AI people are objecting to and the AI people aren’t objecting to, and I don’t think it’s a matter of “a meaningless buzzword.”

              • KeenFlame@feddit.nu · +1 · 6 days ago

                I totally understand that; I just disagree with that pet peeve of yours on a fundamental level. The data is the content, and speaking about it as if the data is the technology itself is like talking about clothes in general as being useful or not. It’s meaningless, especially if you don’t know about or acknowledge the different types of apparel and their uses. It’s obviously not general knowledge, but it would be like bickering about whether underwear is a great idea or not; it’s totally up to the individual whether they want to wear it, even if being butt naked in public is illegal. If the framework is irrelevant, then the immediate problem isn’t generative AI, especially not the perfectly ethical open-source models.

          • AdrianTheFrog@lemmy.world · +7 / -1 · 8 days ago

            I think DLSS/FSR/XeSS are a good example of something that is clearly ethical and also clearly generative AI. Can’t really think of many others lol

        • AeonFelis@lemmy.world · +6 / -8 · edited · 7 days ago

          Generative AI is a meaningless buzzword for the same underlying technology

          What? An AI that can “detect respiratory illnesses in X-rays and MRI scans” is not generative. It does not generate anything. It’s a discriminative AI. Sure, the theories behind these technologies have many things in common, but I wouldn’t call them “the same underlying technology”.
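A toy numpy sketch of what “discriminative” means here (all weights, shapes, and numbers are invented for illustration; nothing is from any real product): the model maps an image to a distribution over labels, and no new pixels come out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in weights for a "trained" encoder and a classifier head.
W_ENC = rng.standard_normal((64, 16))  # 64 pixels -> 16 features
W_CLS = rng.standard_normal((16, 3))   # 16 features -> 3 diagnosis classes

def classify(image):
    """Discriminative model: image -> p(diagnosis | image).
    Output is a probability distribution over labels, not an image."""
    feats = np.tanh(image.ravel() @ W_ENC)
    logits = feats @ W_CLS
    p = np.exp(logits - logits.max())   # numerically stable softmax
    return p / p.sum()

scan = rng.standard_normal((8, 8))      # toy 8x8 "X-ray"
probs = classify(scan)                  # shape (3,): three class probabilities
```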

          • gmtom@lemmy.world · +3 · 7 days ago

            It is literally the exact same technology. If I wanted to, I could turn our X-ray product into an image generator in less than a day.
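A toy numpy sketch of that claim (hypothetical stand-in, not the actual stack): one shared encoder can feed either a classifier head or a decoder head, so swapping heads turns a “discriminative” product into a (toy) generator without touching the backbone.

```python
import numpy as np

rng = np.random.default_rng(1)
W_ENC = rng.standard_normal((64, 16))   # shared "pretrained" encoder
W_CLS = rng.standard_normal((16, 3))    # classifier head: diagnosis
W_DEC = rng.standard_normal((16, 64))   # decoder head: image synthesis

def encode(image):
    """Shared backbone: 64 pixels -> 16 features."""
    return np.tanh(image.ravel() @ W_ENC)

def diagnose(image):
    """Discriminative output: three class probabilities."""
    logits = encode(image) @ W_CLS
    p = np.exp(logits - logits.max())
    return p / p.sum()

def generate(image):
    """'Generative' output from the same encoder: an 8x8 image."""
    return (encode(image) @ W_DEC).reshape(8, 8)

scan = rng.standard_normal((8, 8))      # toy input scan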

            • AeonFelis@lemmy.world
              link
              fedilink
              arrow-up
              1
              arrow-down
              4
              ·
              7 days ago

              Because they are both computers and you can install different (GPU-bound) software on them?

              It’s true that generative AI is uses discriminative models behind the scenes, but the layer needed on top of that is enough to classify it as a different technology.

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        9
        arrow-down
        6
        ·
        7 days ago
        1. Except clearly some people do. This post is very specifically saying ALL AI is bad and there is no exceptions.

        2. Generative AI isnt a well defined concept and a lot of the tech we use is indistinguishable on a technical level from “Generstive AI”

        • starman2112@sh.itjust.works
          link
          fedilink
          arrow-up
          10
          arrow-down
          7
          ·
          edit-2
          7 days ago
          1. sephirAmy explicitly said generative AI

          2. Give me an example, and watch me distinguish it from the kind of generative AI sephirAmy is talking about

    • brucethemoose@lemmy.world
      link
      fedilink
      arrow-up
      20
      arrow-down
      4
      ·
      edit-2
      8 days ago

      All this is being stoked by OpenAI, Anthropic and such.

      They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is ”dangerous.”

      For what they’re really scared of is awareness of locally runnable, ethical, and independent task specific tools like yours. That doesn’t make them any money. Stirring up “fuck AI” does, because that’s a battle they know they can win.

    • ysjet@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      18
      ·
      8 days ago

      Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        18
        arrow-down
        4
        ·
        7 days ago

        We actually do use Generative Pre-trained Transformers as the base for a lot of our tech. So yes they are GPTs.

        And even if they werent GPTs this is a post saying all AI is bad and how there is literally no exceptions to that.

        • ysjet@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          5
          ·
          edit-2
          7 days ago

          Again with the conflation. They clearly mean GPTs and LLMs from the context they provide, they just don’t have another name for it, mostly because people like you like to pretend that AI is shit like chatGPT when it benefits you, and regular machine learning is AI when it benefits you.

          And no, GPTs are not needed, nor used, as a base for most of the useful tech, because anyone with any sense in this industry knows that good models and carefully curated training data gets you more accurate, reliable results than large amounts of shit data.

          • gmtom@lemmy.world
            link
            fedilink
            arrow-up
            4
            arrow-down
            1
            ·
            7 days ago

            Our whole tech stack is built off of GPTs. They are just a tool, use it badly and you grt AI slop, use it well and you can save peoples lives.