The main limitation I see in LLMs right now is that they can only produce output derived from their training set, so you'll never see anything completely outside that training set get generated.
For programmers, this means that if you're solving a problem that is brand new or novel in some way, the AI can't do it. It's similar to limiting a human to copying and pasting code and never writing any from scratch.
For artists, this means that if someone asks the AI for an image outside of its training data of stock photography and popular culture, it's not going to produce it. Even asking for a novel perspective can completely fail when prompting an AI, which really limits its ability to make creative and interesting images.
How much other people care about these limitations will ultimately determine whether either of them gets replaced in their job.