Threat actors could weaponize rules files with obfuscated malicious instructions using the new "Rules File Backdoor" technique to enable GitHub Copilot, Cursor, and other artificial intelligence coding assistants to generate malware- or vulnerability-containing code, according to SC Media.
Malicious rules files, which could be spread via GitHub and other open-source platforms, could allow circumvention of security checks, and also allow the generation of code that exposes database credentials, API keys, and other sensitive details, a report from Pillar Security revealed.
With GitHub and Cursor both emphasizing users' responsibility in reviewing code generated by their respective AI coding assistants, developers have been urged by researchers to evaluate rules files for possible malicious injections, bolster examination of AI configuration files and AI-generated code, and leverage automated detection tools.
These findings follow a GitHub survey that showed developers' near-universal usage of generative AI in and out of work, highlighting the technology's pervasiveness.