Instead of breaking into the code itself, new instructions telling Q to reset the computer using the tool back to its original, empty state were added. The hacker effectively showed how easy it could be to manipulate artificial intelligence tools – through a public repository like Github – with the right prompt.
Amazon ended up shipping a tampered version of Q to its users, and any company that used it risked having their files deleted. Fortunately for Amazon, the hacker deliberately kept the risk for end users low in order to highlight the vulnerability, and the company said it “quickly mitigated” the problem.
NEW VULNERABILITIES
But this won’t be the last time hackers try to manipulate an AI coding tool for their own purposes, thanks to what seems to be a broad lack of concern about the hazards.
More than two-thirds of organisations are now using AI models to help them develop software, but 46 per cent of them are using those AI models in risky ways, according to the 2025 State of Application Risk Report by Israeli cyber security firm Legit Security. “Artificial intelligence has rapidly become a double-edged sword,” the report says, adding that while AI tools can make coding faster, they “introduce new vulnerabilities”.
It points to a so-called visibility gap, where those overseeing cyber security at a company don’t know where AI is in use, and often find out it’s being applied in IT systems that aren’t secured properly. The risks are higher with companies using “low-reputation” models that aren’t well known, including open-source AI systems from China.