Listening to Richard Dawkins today, who was putting forward warnings about AI. It was only a small part of a bigger talk, but he warned about how AI gets programmed: if you were to ask it to secure the future of the planet, or something along those lines, then the likely outcome would be for it to get rid of humans. It was an interesting talk about how well-meaning instructions might come with dire long-term consequences.
Or a terrorist could create an AI and tell it to get rid of humans.
We're a long way off that yet, though. For all the hype, these are mostly large language models, which are basically good at predicting what to say in response to a question. A glorified predictive text, essentially. Even where they can code, they are just reusing code they have learned from the internet.
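To give a feel for the "glorified predictive text" point, here's a toy sketch in Python of next-word prediction from simple counts. This is nothing like the scale or sophistication of a real LLM (the example text and function names are just made up for illustration), but the basic task, guess the next word from what came before, is the same.

    # Toy predictive text: pick the next word based on which word most often
    # followed the current one in some training text.
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat the cat ate the fish"
    words = training_text.split()

    # Count which word follows which
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # 'cat' - it followed 'the' most often
    print(predict_next("cat"))   # 'sat' - tied with 'ate', first seen wins

An LLM replaces the simple counts with a huge trained model and predicts over whole conversations rather than single words, but it is still, at heart, predicting what text comes next.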
Will that change? Yes, most likely. Will it affect jobs? Definitely. Will people abuse them? Yep. Wipe out humanity? Not this week, at least.