I was in a group chat where the bro copied and pasted my question into ChatGPT, took a screenshot, and pasted it into the chat.

As a joke, I said that Gemini disagreed with the answer. He asked what it said.

I made up an answer and then said I did another fact check with Claude, and ran it through “blast processing” servers to “fact check with a million sources”.

He replies that ChatGPT 5.1 is getting unreliable even on the $200-a-month plan, and that he’s considering switching to a smarter agent.

Guys - it’s not funny anymore.

  • Noxy@pawb.social · +17 · 4 days ago

    fuckin Sega Genesis genius over here!

    I both hate and love that some folks are that checked out from doing their own thinking

    • Silic0n_Alph4@lemmy.world · +39 · 5 days ago

      It’s not that they’re stupid - it’s that they’re completely unscrupulous and unethical. If you gave up on being a decent human being then I’m sure you’d think up loads of ways of making those megabucks. How about slavery! Hmm, too on the nose. How about we insert ourselves as rent-seeking middlemen in a stable industry, taking a fee from customers and only giving tasks to the workers who generate enough profit for us. It’s not slavery, it’s just the gig economy! And when we say “people are happier not being employees under our system” it won’t be because we’re stupid, it’ll be because we no longer care about inconvenient truths that get in the way of our profits.

      Please don’t do any of this, kind lemming.

      • Alpha71@lemmy.world · +3 · 4 days ago

        The best way to make money would be to get into “spirituality” by selling chakra-aligning stickers or some such. Or even better, getting into hi-fi audio and selling empty boxes you can plug into a wall to “harmonize” the electrical waves of the wiring in the house. 😂

        PS. These were actual products being sold.

        • Silic0n_Alph4@lemmy.world · +1 · 3 days ago

          Ok, how about Chakra-aligned 5G-blocking hifi MaxFi audio cables? It’s a niche demographic, but it’d be like shooting tinfoil-hat-wearing fish in a barrel.

    • kboos1@lemmy.world · +38 · 5 days ago

      In my experience they’re really good at blending in and going with the flow, so in companies they come across as like-minded team players. If you want to move up and make more money, you just need to become a yes-man and stop making waves.

      • JackbyDev@programming.dev · +3 · 3 days ago

        Lol, Klarna for ChatGPT.

        I think the more likely answer is (hopefully) they’re talking out of their ass. But who knows. Somebody is paying for that $200 tier. At least one person has to be.

  • mech@feddit.org · +81 · 5 days ago

    He replies that ChatGPT 5.1 is getting unreliable

    But he still pastes its output without even reading it first.

  • mrgoosmoos@lemmy.ca · +36 · 5 days ago

    The one thing I appreciate about people pasting screenshots of AI answers is the notice that it’s an AI answer, so I can ignore it. I hate it when people copy-paste the text and you start reading it only to realize they’ve wasted your time by giving you shit you didn’t consent to.

    • laranis@lemmy.zip · +7 · 4 days ago

      I had someone do this to me in a professional setting “I asked ChatGPT and it gave me a good answer. I’ll forward it to you so you can read it.”

      Are you shitting me? If you want to use an LLM to start your thought process or start your research, fine. That’s probably the best use case for LLMs. But don’t claim you did something valuable and then pass the task on to me to do because you couldn’t be bothered to assimilate, internalize, and contextualize the information during the meeting where that was the whole friggin’ purpose.

    • ByteOnBikes@discuss.online (OP) · +11 · 5 days ago

      That’s a really good perspective. I shouldn’t get angry; I should see it as the red flag that it is. Thank you.

  • Victoria@lemmy.blahaj.zone · +20 · 5 days ago

    You need to be more creative. Say something like “I asked Gemini to fuck my wife for me because I don’t have time to do it myself”

  • paequ2@lemmy.today · +10 · 5 days ago

    …

    I’ve 💯 had similar experiences… We’re cooked. 💀

    • SCmSTR@lemmy.blahaj.zone · +2 · 3 days ago

      Nononono… That money goes toward a similarly made-up business model that justifies, to the people buying and selling the stock, handing over even more money to buy all the RAM and HDDs. So it’s not being wasted; it’s actively perpetuating the bullshit machine that’s fucking us all, which is either going to make a massive breakthrough and change life as we know it or, much more likely, just crash the entire economy and require the biggest bailout in human history to soften the blow.

  • Lee Duna@lemmy.nz · +7/-1 · 5 days ago

    I often see this in chat groups; unfortunately, it comes from Linux chat groups.

    AI bros are similar to anti-vaccine or MAGA bros: short-sighted. AI is not for us; it’s merely a tool for the rich to enrich themselves.

  • Not_mikey@lemmy.dbzer0.com · +8/-18 · 5 days ago

    Was the original answer wrong though? Because otherwise it seems like you’re the one lying and making stuff up, not the AI.

    This seems to show more that people can bullshit just as much as AI can and that you’re in fact the unreliable source, not the AI.

    • cmhe@lemmy.world · +8 · edited · 4 days ago

      From the context, it seems to be a question whose answer isn’t easily verifiable, which makes it unsuitable for LLMs: verifying the answer would take more work than researching and answering it yourself.

      People using AI should know that, so making fun of them is fine.

      That the discussion devolved into arguing whether a certain LLM’s first, second, or third answer is wrong, true, or truer, whether the prompt needs to be modified, or whether another model is superior at answering these kinds of questions is exactly the useless, distracting discussion that keeps people from talking about the actual answer to the question at hand, and the reason people get less efficient when using LLMs.

      • ThomasWilliams@lemmy.world · +1/-1 · 4 days ago

        From the context, it seems to be a question whose answer isn’t easily verifiable, which makes it unsuitable for LLMs

        Have you ever used a chatbot? The fact that an answer is unverifiable doesn’t stop them from answering at all.

        • cmhe@lemmy.world · +1 · 3 days ago

          Have you ever used a chatbot? The fact that an answer is unverifiable doesn’t stop them from answering at all.

          Yes, I’ve used chatbots. And yes, I know that they always manage to generate answers full of conviction even when they’re wrong. I never said otherwise.

          My point is that the person using a chatbot/LLM needs to be able to easily verify whether a generated reply is right or wrong; otherwise there isn’t much sense in using an LLM, because they could have just researched the answer directly instead.

      • Not_mikey@lemmy.dbzer0.com · +1/-3 · edited · 4 days ago

        Because verifying the answer would take more work than researching and answering it yourself.

        Not necessarily; again, it depends on whether it’s right or wrong. If it’s right, it gives you a lead to research; if it’s wrong, you’re just wasting your time following a dead end.

        You can also just ask the AI for its sources. If the AI is agentic and not just a simple LLM, it’s probably already doing a web search and “reading” articles and papers on the topic to synthesize the answer. If it doesn’t give those links with the output, you can usually just ask and it will embed them, making verification easier since you can read them yourself.

        That the discussion devolved…

        Yeah, but it seems OP was the one who devolved it. They could have just said they think the AI is wrong for x reasons and continued the discussion; instead they made up a fake model with a fake answer, which inevitably leads to a discussion of why the different models contradict each other. And again, this is over a contradiction that probably doesn’t exist.

        This would be like if someone said their doctor told them to do something, and I lied and said my doctor told me to do the opposite, and the discussion turned to which doctor was more trustworthy. Then imagine saying that, even though I lied, the fact that we wasted our time discussing the trustworthiness of doctors means we shouldn’t consult them at all, since it takes too much time arguing over my made-up scenarios.

        • Shanmugha@lemmy.world · +3 · 4 days ago

          Not necessarily; again, it depends on whether it’s right or wrong

          Lol. I’ll try to make it nice and easy: how is this going to be fucking discovered, again?

        • cmhe@lemmy.world · +2 · edited · 4 days ago

          No! Why should the one who asked a person and received a generated answer spend more effort validating it than the person who simply passed the question to an LLM? They could have just replied: “I don’t know. But I can ask an LLM, if you like.”

          Answering someone with unsolicited LLM-generated blather is a sign of disrespect.

          And sure, the LLM can generate lots of different sources. Even more than actually exist: references to other sites that were generated by LLMs, or written by human authors who used LLMs, or researchers who wrote their papers with LLMs, or that refer to other authors who used LLMs, and so on and so forth.

          The LLM, or those LLM “agents”, cannot go outside, do experiments, interview witnesses, or do proper research. All they do is look for hearsay and confidently generate a string of nice-sounding, maybe even convincing words.

          • Not_mikey@lemmy.dbzer0.com · +1 · 4 days ago

            Why should the one who asked a person and received a generated answer spend more effort validating it

            Because they’re the one asking the question. You can flip this around and ask why the person should put time into researching and answering the question for OP at all. If they have no obligation, then an AI answer, if it’s right, is better than no answer, since it gives OP some leads to research. OP can always just ignore the AI answer if they don’t trust it; they don’t have to validate it.

            Answering someone with unsolicited LLM-generated blather is a sign of disrespect.

            Fair enough, but etiquette on AI is new and not universal. We don’t know that the person meant to disrespect OP. The mature thing to do would be for OP to say that they felt disrespected by that response instead of pretending like it’s fine and reinforcing the behavior which will lead to that person continuing to do it.

            It’d be like if someone used a new or obscure slur: the right thing to do is inform them that it’s offensive, not pretend it’s fine and start using it in the conversation to fuck with them. If they keep using it after you inform them, then yeah, fuck them, but make sure you’re not normalizing it yourself too.

            Meanwhile, lying to someone to fuck with them and making stuff up is universally known to be disrespectful. OP was intentionally disrespectful; the other person may not have been.

            The rest of your comment seems to be an argument against getting answers from the Internet in general these days. A person doing research is just as likely as an agent to come across bogus LLM content, and a person also isn’t getting actual real world data when they are researching on the Internet.

            • cmhe@lemmy.world · +1 · edited · 3 days ago

              Because they’re the one asking the question. You can flip this around and ask why the person should put time into researching and answering the question for OP at all.

              Because they were asked. They don’t need to spend time if they don’t want to. They can just say “I don’t know”, or, if they want to be more helpful, “I don’t know, but I can ask an LLM, if you’d like.” If someone answers, they should at least have something of their own to say.

              If they have no obligation, then an AI answer, if it’s right, is better than no answer, since it gives OP some leads to research. OP can always just ignore the AI answer if they don’t trust it; they don’t have to validate it.

              No, it’s not better. They were asked, not the LLM.

              Do you always think like that? Like… if some newspaper started printing LLM-generated slop articles, would you say, “It is the responsibility of the reader to research whether anything in it is true”? No, it’s not! A civilization is built on trust, and if you start eroding that kind of trust, people will distrust each other more and more.

              People who assert something should be able to defend it. Otherwise we are in a post-fact/post-truth era.

              Fair enough, but etiquette on AI is new and not universal. We don’t know that the person meant to disrespect OP. The mature thing to do would be for OP to say that they felt disrespected by that response instead of pretending like it’s fine and reinforcing the behavior which will lead to that person continuing to do it.

              That really depends on the history and relationship between them. We don’t know that, so I will not assume anything. They could have had previous talks where they stated that they don’t like LLM generated replies. But this is beside the point.

              All I assumed is that they likely have not agreed to an LLM reply beforehand.

              It’d be like if someone used a new or obscure slur: the right thing to do is inform them that it’s offensive, not pretend it’s fine and start using it in the conversation to fuck with them. If they keep using it after you inform them, then yeah, fuck them, but make sure you’re not normalizing it yourself too.

              No, this is a false comparison. If I want to talk to a person and ask for their input on a matter, I want their input, not them relaying what their relatives or friends say. I think this is just normal etiquette and a normal social assumption. This was true even before LLMs were a thing.

              The rest of your comment seems to be an argument against getting answers from the Internet in general these days. A person doing research is just as likely as an agent to come across bogus LLM content, and a person also isn’t getting actual real world data when they are researching on the Internet.

              Well… no… My point is that LLMs (or these “agents” you keep going on about, which are just LLMs that populate their context with content from random internet searches) are making research generally more difficult. But it is still possible. Nowadays you can no longer trust the goodwill of researchers; you have to get to the bottom of it yourself: looking up statistics, doing your own experiments, etc. A person is generally superior to any LLM agent, because they can do that. People in a specific field understand the underlying rules and don’t just produce strings of words that they make up as they go. People can research the reputation of certain internet sites, and look further and deeper.

              But I do hope that people will become more aware of these issues and learn the limits of LLMs, so that they know they cannot rely on them. I really wish the copy-and-pasting of LLM-generated content without validating and correcting it would stop.

              • Not_mikey@lemmy.dbzer0.com · +1 · 3 days ago

                If some newspaper started printing LLM-generated slop articles, would you say, “It is the responsibility of the reader to research whether anything in it is true”? No, it’s not! A civilization is built on trust, and if you start eroding that kind of trust, people will distrust each other more and more.

                As long as the newspaper clearly says it’s written by an LLM, that’s fine with me. I can either completely ignore it or take it with a grain of salt. Truth is built on trust, but trust should be a spectrum: you should never fully believe or fully dismiss something based on its source. There are some sources you can trust more than others, but there should always be some doubt. I have a fair amount of trust in LLMs because, in my experience, most of the time they are correct. I’d trust them more than something printed in Breitbart but less than something printed in The New York Times, and even with the Times I watch out for anything that seems off.

                You, along with most of this sub, seem to have zero trust in LLMs, which is fine; believe what you want. I’m not going to argue with you on that, because I won’t be able to change your mind, just as you won’t be able to change Trump’s mind on The New York Times. I just want you to know that there are people who do trust LLMs and do think their responses are valuable and can be true.

                If I want to talk to a person and ask for their input on a matter, I want their input, not them relaying what their relatives or friends say. I think this is just normal etiquette and a normal social assumption. This was true even before LLMs were a thing.

                I don’t think this is universal. That may be your expectation, but assuming it’s not something private or sensitive, I’d be fine with my friend asking a third party. If I texted a group chat that I’m having car trouble and asked if anyone knows what’s wrong, I would not be offended if one of my friends texted back that their uncle’s a mechanic and said to try x. I would be offended if that person lied about it coming from their uncle, or lied about their uncle being a mechanic, but in this case the person was very clear about the source of the information and its “credentials”. Part of the reason I might ask someone something is that, if they don’t know the answer, they may know someone who does and can forward the question on.

                Nowadays you can no longer trust the goodwill of researchers; you have to get to the bottom of it yourself: looking up statistics, doing your own experiments, etc. A person is generally superior to any LLM agent, because they can do that. People in a specific field understand the underlying rules and don’t just produce strings of words that they make up as they go. People can research the reputation of certain internet sites, and look further and deeper.

                I don’t think this is true for every person; maybe for experts, but an AI agent is probably just as good as a layman at doing online research. Yes, if you can ask an expert in the field to do the research for you, they will be better than an AI agent, but that’s rarely an option. Most of the time it’s going to be you by yourself, or, if you’re lucky, a friend with some general knowledge of the area, googling something and looking through the top 3-5 links to synthesize an answer. An AI agent can do that just as well and may have more “knowledge” of the area than the person. ChatGPT knows more about, say, the country of Bhutan than your average person; probably not as much as a Bhutanese person, but you probably don’t know a Bhutanese person you can ask. It can even research the sources themselves, or use a tool that rates the trustworthiness of a source, to decide which side of a contradiction is true.

    • GreenKnight23@lemmy.world · +1 · 4 days ago

      I’m tired of pointing out the flaws in AI supporters’ logic. So this is the point where I tell you how flawed your argument is, you get bent, and then fuck off.