- cross-posted to:
- [email protected]
- [email protected]
Archive Link: https://web.archive.org/web/20240330224149/https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
This is fascinating. I’ve certainly seen AI hallucinating things like imaginary functions in GDScript. Admittedly, it does it a lot more with GPT-3 than with GPT-4 on a subscription, which is consistent with the training data each has access to, but I’m sure the problem applies to plenty of other use cases that haven’t had the benefit of more recent documentation.
I suppose it’s not surprising that a number of larger entities have been falling prey to this, since they keep trying to jam AI into production pipelines where it’s incapable of doing the job. Pretty clever vulnerability to find, though.
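The package half of this is at least easy to sanity-check yourself. Here’s a rough sketch (Python; PyPI’s JSON API genuinely returns 404 for names that don’t exist, but the script itself is just an illustration, not a vetted tool) that checks whether the package names an LLM hands you are real before you install anything:

```python
import sys
import urllib.error
import urllib.request

# PyPI's JSON API: returns package metadata, or 404 if the name isn't registered.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # not registered -- possibly a hallucinated name
            return False
        raise  # some other HTTP problem; don't silently treat it as missing

if __name__ == "__main__":
    # Pass whatever package names the LLM suggested on the command line.
    for pkg in sys.argv[1:]:
        verdict = "exists" if package_exists(pkg) else "NOT FOUND -- possible hallucination"
        print(f"{pkg}: {verdict}")
```

Of course, existence alone proves nothing, because the whole attack is that someone registers the hallucinated name before you check. You’d also want to look at the package’s age, download counts, and maintainers before trusting it.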
Ultimately, this is probably a good thing for human coders, imo. The more LLMs demonstrate that they’re not effective without robust human intervention, the better.
So… some organizations will train devs to watch for this, and others will outright ban LLMs. I know which kind mine will be. Time to reconsider my job, and possibly my place of work…
I think there will be a market for “corporate compliant and secure!!” LLMs. “Pay us gobs of money so you don’t get ‘hacked’ by dumb LLM users”