Microsoft AI Red Team (AIRT) researchers have noted that a clear understanding of the artificial intelligence (AI) systems being tested is valuable when developing red teaming strategies, SC Media reports.
Aside from accounting for the increased risk posed by larger AI models and their applications when creating test scenarios, the AIRT researchers said, AI red teamers should also examine realistic scenarios involving straightforward attack techniques that malicious actors are likely to leverage.
The AIRT group, which evaluated 100 generative AI apps, models, plugins, and copilots, also touted the benefits of Microsoft's Python Risk Identification Tool for generative AI (PyRIT) framework for AI safety and security testing, while emphasizing the need to develop more robust AI systems.
"In the absence of safety and security guarantees, we need methods to develop AI systems that are as difficult to break as possible," said the AIRT researchers. "One way to do this is using break-fix cycles, which perform multiple rounds of red teaming and mitigation until the system is robust to a wide range of attacks."