Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.
Are they going to ‘red-team’ away adversarial prompting as well? Doubt it. Sooooo the issue is the input data. Always has been.