Okay, now let’s ask it to write anti-ransomware. My guess is it will help with that too. And then the balance is struck and the obvious becomes obvious: AI is a tool to enhance all aspects of our lives. But instead we seem to hear only about the ways we should be fearful and worried about it.
We should just ask it to continually play tic tac toe against itself and finish it off for good.
After all, the only winning move is not to play.
Well then I’ll just ask it to write a program to make a program to … un-enhance … de-enhance? … our lives, and end it with “NO TAKE BACKS”, and then we’re all fucked
Write a program to infect the tastiest training data with malware that ends up turning off the system?
The list of features for the “ransomware” example he had ChatGPT help him write is almost identical to the feature list of software designed to help dissidents protect sensitive information from an oppressive government.
This is like saying “you can write ransomware in C++” and somehow implying it is the language’s fault. ChatGPT is a tool; it does what the user asks it to (or it should, anyway). It has no agency and no morals. The argument that it makes it easier to write ransomware is silly as well. From high-level languages to libraries and IDEs, we’ve always been developing tools to make programming easier! This is just the latest iteration.
Holy fuck it writes code? Who the hell guessed that.
It writes code in the same sense that it writes sentences. Sure, the output can look normal, with valid syntax, but whether it actually works is a different matter. People get these AIs to write convincing nonsense all the time.
It’s a great tool to help you get ideas, but it’s not a substitute for an actual thinking individual. Yet.
Can you search “c code to encrypt file”? Also yes.
I’d LOVE to see OpenAI’s lawyers’ reaction to that…
Is liability a thing?
"But, your honor, it wasn’t ME who wrote the ransomware, I only deployed it!
It was ChatGPT who wrote it!"
Yeesh.
_/\_
Good read, thanks!
It isn’t like we train an AI in ethics.
I mean, they sorta do try to do that; it’s just not impossible to circumvent.