• Syntha@sh.itjust.works
    7 months ago

    AI is not prone to hallucinations; LLMs are. I doubt Amazon is building a chatbot to optimise packaging.

    • Ashelyn@lemmy.blahaj.zone
      7 months ago

      I mean, AI is used in fraud detection pretty often; when it hits a false positive (which happens frequently at a population level), is that not a hallucination of some sort? Obviously LLMs can go much further off the rails because their output is readable text, but any machine learning model will occasionally spit out really bad guesses that almost any person could have bettered. (To be fair, humans are highly capable of really bad guesses too.)
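
      As a rough illustration of that population-level effect, here's a minimal sketch with made-up numbers (the 99% specificity and the transaction count are assumptions, not figures from any real fraud system):

      ```python
      # Sketch: a classifier that is 99% specific (only 1% of legitimate
      # transactions are wrongly flagged) still produces thousands of
      # false positives once it scores millions of transactions.
      import random

      random.seed(0)

      SPECIFICITY = 0.99            # assumed, purely illustrative
      legit_transactions = 1_000_000

      false_positives = sum(
          1 for _ in range(legit_transactions)
          if random.random() > SPECIFICITY   # model wrongly flags a legit purchase
      )

      print(f"{false_positives} legitimate transactions flagged as fraud")
      # ~10,000 false positives, even though the model is "right" 99% of the time
      ```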

      • Womble@lemmy.world
        7 months ago

        No, false positives and false negatives are not hallucinations. Otherwise something like a blood test, which doesn't involve any ML, would also be "hallucinating", which strips the term of all meaning.

        • Ashelyn@lemmy.blahaj.zone
          7 months ago

          That’s fair. I think fundamentally a false positive/negative isn’t that much different. Pretty much all tests, especially those dealing with real-world conditions, are heuristic, as are all LLMs by the necessity of their design. “Hallucination” is a pretty specific term given to AI as an attempt to assign agency to a system that doesn’t actually have any (implying it’s crazy and making stuff up, rather than a black box with deterministic inputs and outputs spitting out something factually wrong in a format similar to what it was trained on). I feel like any tool with the property “you can’t trust this to be entirely accurate” deserves an umbrella term that covers both ways of producing inaccurate info under certain conditions.

          I suppose the difference is that AI is a lot more likely to go off at random, whereas a blood test is likelier to give repeated false positives for the same person because of their unique biology? There’s also the fact that most medical tests represent a true/false dichotomy or a lookup table, whereas an LLM works with the entire bounds of language.
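
          To put rough, purely illustrative numbers on that difference in output spaces (the vocabulary size and response length below are just typical orders of magnitude, not from any specific model):

          ```python
          # Back-of-the-envelope sketch: a diagnostic test picks from 2 outcomes,
          # while a language model picks a token from a large vocabulary at every
          # step, so its space of possible wrong answers is astronomically larger.
          vocab_size = 50_000        # assumed order of magnitude for an LLM vocabulary
          response_length = 100      # tokens in a short answer

          test_outcomes = 2
          llm_outcomes = vocab_size ** response_length

          print(f"blood test outcomes: {test_outcomes}")
          print(f"possible 100-token responses: ~10^{len(str(llm_outcomes)) - 1}")
          ```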

          Would an AI clustering algorithm (K-means, for instance) giving an inaccurate diagnosis be a false positive/negative or a hallucination? These models can be tuned along a sliding scale, and I feel like there’s definitely an area where the line gets pretty blurry.
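
          As a toy illustration of that blurriness, here is just the assignment step of a K-means-style model on synthetic data, with hypothetical “diagnosis” labels: a borderline point lands in the wrong cluster through ordinary distance math, no language generation involved.

          ```python
          # Toy sketch (synthetic data, hypothetical labels): a point that truly
          # belongs to the "healthy" group but sits closer to the other centroid
          # gets assigned there. Call it a false positive or a hallucination?
          import numpy as np

          centroids = np.array([[0.0, 0.0],    # cluster 0: "healthy"
                                [10.0, 0.0]])  # cluster 1: "condition X"

          # A patient whose true label is "healthy" but whose measurements
          # drift toward the other cluster.
          patient = np.array([5.5, 0.0])

          distances = np.linalg.norm(centroids - patient, axis=1)
          assigned = int(np.argmin(distances))

          print(f"distances: {distances}, assigned cluster: {assigned}")
          # assigned cluster: 1 -> an inaccurate "diagnosis" from a plain
          # distance-based rule
          ```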

    • Llewellyn@lemm.ee
      7 months ago

      What do you consider to be an AI?
      And do you consider any of the existing systems to be one?

      • Syntha@sh.itjust.works
        7 months ago

        When I use “AI”, I’m using computer science terminology. Artificial intelligence is a subfield of CS; in that sense, any model that comes out of that field is, by definition, AI.

        • Llewellyn@lemm.ee
          7 months ago

          Then it’s strange that you’re separating AI and LLMs, because in CS an LLM is a type of artificial intelligence.

          • Syntha@sh.itjust.works
            7 months ago

            Some AI, namely LLMs, can hallucinate, but not AI in general. I just had a bit of fun with how I worded it; I guess I should’ve expected someone to get annoyingly nitpicky about it.

              • Syntha@sh.itjust.works
                7 months ago

                I don’t think I was technically wrong; I do think you can write it that way if you want to be a bit facetious, but I’m not a native speaker, so maybe not.

    • polygon6121@lemmy.world
      7 months ago

      AI in general is definitely prone to hallucinations. It’s just seen more often in LLMs because they’re more widely used by the public. It is definitely a problem with all AI.

        • polygon6121@lemmy.world
          7 months ago

          Text-to-video, automated driving, object detection, language translation. I might be misusing the term; you could argue that the word describes what LLMs commonly do and that’s where it derives from. You could also argue that AI is sometimes correct and humans have trouble identifying the correct answer. But in my mind it’s much the same, just different applications. A car completely missing an approaching firetruck or an LLM spewing out wrong statements is the same to me.

          • Syntha@sh.itjust.works
            7 months ago

            Yeah, well, it’s not the same. Models are wrong all the time; why use a different term at all when it’s just “being wrong”?

            • polygon6121@lemmy.world
              7 months ago

              The model makes decisions thinking it is right, but for whatever reason it can’t see a firetruck or a stop sign, or it misidentifies the object… you know, almost like how a hallucinating human would perceive something from external sensory input that isn’t there.

              I don’t mind giving it another term, but “being wrong” is misleading. You are correct, though, in the sense that it depends on each given case…
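
              For what it’s worth, here is a rough sketch of how a detector ends up “not seeing” a firetruck (the scores, labels, and threshold are made up, not from any real driving stack): anything below the confidence cutoff is silently dropped.

              ```python
              # Sketch: detections below the confidence threshold are discarded,
              # which downstream reads as "there is no firetruck here".
              from dataclasses import dataclass

              @dataclass
              class Detection:
                  label: str
                  confidence: float

              raw_detections = [
                  Detection("car", 0.91),
                  Detection("firetruck", 0.42),   # occluded, odd angle, glare...
                  Detection("pedestrian", 0.88),
              ]

              CONFIDENCE_THRESHOLD = 0.5          # assumed cutoff

              kept = [d for d in raw_detections if d.confidence >= CONFIDENCE_THRESHOLD]
              print([d.label for d in kept])      # ['car', 'pedestrian'] -- firetruck gone
              ```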

              • Syntha@sh.itjust.works
                7 months ago

                No, the model isn’t “thinking”; no model in use today has anything resembling an internal cognitive process. It is making a prediction. A COVID test is predicting whether or not you have the COVID-19 virus inside you. If its prediction contradicts your biological state, it is wrong. If an object recognition algorithm does not predict there being a firetruck, how is that not being wrong in the same way?
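
                To make the comparison concrete (toy values only, both outcomes chosen for illustration): the two cases can be scored the same way, as predictions checked against ground truth.

                ```python
                # Sketch: a binary test result and a detector's "no firetruck"
                # output are both just predictions compared against reality.
                def is_wrong(predicted, actual):
                    return predicted != actual

                covid_test      = is_wrong(predicted=False, actual=True)   # false negative
                firetruck_check = is_wrong(predicted=False, actual=True)   # detector saw nothing

                print(covid_test, firetruck_check)   # True True -- wrong in the same sense
                ```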