
  • While truly defining pretty much any aspect of human intelligence is functionally impossible with our current understanding of the mind, we can create some very usable “good enough” working definitions for these purposes.

    At a basic level, “reasoning” would be the act of drawing logical conclusions from available data. And that’s not what these models do. They mimic reasoning, by mimicking human communication. Humans communicate (and developed a lot of specialized language with which to communicate) the process by which we reason, and so LLMs can basically replicate the appearance of reasoning by replicating the language around it.

    The way you can tell that they’re not actually reasoning is simple: their conclusions often bear no actual connection to the facts. There’s an example I linked elsewhere where the new model is asked to list states with W in their name. It does a bunch of preamble where it spells out very clearly what the requirements and process are: assemble a list of all states, then check each name for the presence of the letter W.

    And then it includes North Dakota, South Dakota, North Carolina and South Carolina in the list.
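    The procedure the model describes for itself is trivial to express in code, which is what makes the failure so telling. A minimal sketch in Python (using a handful of state names for illustration):

    ```python
    # The model's own stated procedure: list the states,
    # then check each name for the letter W.
    states = [
        "Washington", "Wisconsin", "Wyoming", "West Virginia",
        "North Dakota", "South Dakota", "North Carolina", "South Carolina",
    ]

    with_w = [s for s in states if "w" in s.lower()]
    print(with_w)  # the Dakotas and Carolinas are correctly excluded
    ```

    A system actually executing the procedure it describes could not produce the wrong list; one mimicking the *language* of the procedure can.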

    Any human being capable of reasoning would absolutely understand that that was wrong, if they were taking the time to carefully and systematically work through the problem in that way. The AI does not, because all this apparent “thinking” is a smoke show. They’re machines built to give the appearance of intelligence, nothing more.

    When real AGI, or even something approaching it, actually becomes a thing, I will be extremely excited. But this is just snake oil being sold as medicine. You’re not required to buy into their bullshit just to prove you’re not a technophobe.



  • Noted. I’ll have to play around with that sometime.

    Despite my obvious stance as an AI skeptic, I have no problem with putting it to use in places where it can be used effectively (and ethically). I’ve just found that in practice, those uses are vanishingly few. I’m not on some noble quest to rid the world of computers, I just don’t like being sold overhyped crap.

    I’m also hesitant to try to rebuild any part of my workflow around the current generation of these tools, when they obviously aren’t going to exist in a few years, or will exist but at an exorbitant price. The cost to run genAI is far, far higher than any entity (even Microsoft) has any willingness to sustain long term. We’re in the “give it away or make it super cheap to get everyone bought in” phase right now, but the enshittification will come hard and fast on this one, much sooner than anyone thinks. OpenAI are literally burning billions just in compute right now. It’s unsustainable. Short of some kind of magical innovation that brings those compute costs down a hundred or thousand fold, this isn’t going to stick around.



  • More and more advanced tools for automation are an important part of creating a post-scarcity future. If we can combine that with tearing down our current economic system - which inherently requires and thus has to manufacture scarcity - we can uplift our species in ways we can currently only imagine.

    But this ain’t it bud. If I ask you for water and you hand me a glass of warm piss, I’m not “against drinking water” for refusing to gulp it down.

    This isn’t AI. It isn’t - meaningfully and usefully - any form of automation at all. A bunch of conmen slapped the letters “AI” on the side of their bottle of piss and you’re drinking it down like it’s grandma’s peach tea.

    The people calling out the fundamental flaws with these products aren’t doing so because we hate the entire concept of automation, any more than someone exposing a snake-oil salesman hates medicine. What we hate is being lied to. The current state of this technology is bullshit and hype. It is not fit for human consumption (other than recreationally) and the money being pumped into it could be put to far better uses. OpenAI may have lofty goals, but they have utterly failed at achieving them, and right now any true desire to create AGI has been totally subsumed by the need to keep pumping out slightly better looking versions of the same polished turd in order to convince investors to keep paying for their staggeringly high hosting costs.








  • The plagiarism engine effect is exactly what you need for a good programming tool. Most problems you’re ever going to encounter are solved and GenAI becomes a very complex code autocomplete.

    In theory, yes, although having actually tried using genAI as a programming tool, the actual results are deeply lacklustre. It sort of works, under the right circumstances, but only if you already know enough to confidently do the job yourself, at which point the value in having an AI do it for you, and then having to check the AI’s work for any of a million possible fuck ups, seems limited at best.


  • The reasonable in-between is despising without presently fearing.

    GenAI is a plagiarism engine. That’s really not something that can be defended. But as a means of automating away the jobs of writers it has proven itself to be so deeply deficient that there’s very little to fear at this time.

    The arrival of these tools has, however, served as a wake-up call to groups like the screenwriters guild, and I’m very glad that they’re getting proper rules in place now before these tools become “good enough” to start producing the kind of low-grade verbal slurry that Hollywood will happily accept.




  • The problem is that even the specific things they’re good at, they don’t do well enough to justify spending actual money on. And when I say “actual money”, I’m not talking about the hilariously discounted prices AI companies are offering in an effort to capture an audience.

    A bot that can do a job reasonably well, but still needs a human to check its work is, from an employment perspective, still an employee, just now with some very expensive helper software. And because of the inherent unreliability of LLMs, a problem that many top figures in the industry are finally admitting may never be solved, they will always need a human to check their work. And that human has to be competent enough to do the job without the AI, in order to figure out where and how it went wrong.

    GenAI was supposed to put us all out of work, and maybe one day it will, but the current state of the technology isn’t remotely close to being good enough to do that. It turns out that while bots can very effectively look and sound like humans, they’re not remotely capable of thinking like humans, and that actually matters when your chatbot starts promising customers discounts that don’t actually exist, to name one real example. What was treated as being the last ten percent is actually looking more and more like ninety-nine percent of the work in terms of creating something that can effectively replace a human being.

    (As an aside, I can’t help but feel that a big part of this epic faceplant arises from Silicon Valley fully ingesting the bullshit notion of “unskilled labour”. Turns out working the drive thru at McDonald’s is a more complicated job than people think, including McDonald’s themselves. We’ve so undervalued the skills of vast swathes of our population that we were easily deluded into thinking they could all be replaced by simple machines. While some of those tasks certainly can, and will, be automated, there are some human elements - especially in conflict resolution - that are really hard to replace)


  • “What are the chances…”

    Approximately 100%.

    That doesn’t mean that the slide will absolutely continue. There may be some fresh injection of hype that will push investor confidence back up, but right now the wind is definitely going out of the sails.

    The core issue, as the Goldman Sachs report notes, is that AI is currently being valued as a trillion dollar industry, but it has not remotely demonstrated the ability to solve a trillion dollar problem.

    No one selling AI tools is able to demonstrate with confidence that they can be made reliable enough, or cheap enough, to truly replace the human element, and without that they will only ever be fun curiosities.

    And that “cheap enough” part is critical. It is not only that GenAI is deeply unreliable, but also that it costs a truly staggering amount of money to operate (OpenAI are burning something like $10 billion a year). What’s the point in replacing an employee you pay $10 an hour to handle customer service issues with a bot that costs $5 for every reply it generates?
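    Taking those illustrative figures at face value, the arithmetic is stark. A quick sketch (the replies-per-hour rate is an assumed workload, not a figure from any vendor):

    ```python
    # Hypothetical comparison using the figures above:
    # a $10/hour human agent vs. a bot costing $5 per generated reply.
    human_hourly = 10.00        # dollars per hour
    bot_cost_per_reply = 5.00   # dollars per reply
    replies_per_hour = 6        # assumption: one customer reply every 10 minutes

    bot_hourly = bot_cost_per_reply * replies_per_hour
    print(f"human: ${human_hourly:.2f}/hr, bot: ${bot_hourly:.2f}/hr")
    ```

    Under these assumptions the bot costs several times more per hour than the human it is supposed to replace, before counting the human who still has to check its output.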