Bugcrowd has announced the availability of AI Bias Assessments as part of its AI Safety and Security Solutions portfolio on the Bugcrowd Platform.
AI Bias Assessment taps the power of the crowd to help enterprises and government agencies adopt Large Language Model (LLM) applications safely, efficiently, and confidently.
LLM applications run on algorithmic models that are trained on huge sets of data. Even when that training data is curated by humans, which it often is not, the application can easily reflect “data bias” caused by stereotypes, prejudices, exclusionary language, and a range of other possible biases from the training data. Such biases can lead the model to behave in potentially unintended and harmful ways, adding considerable risk and unpredictability to LLM adoption.
Some examples of potential flaws include Representation Bias (disproportionate representation or omission of certain groups in the training data), Pre-Existing Bias (biases stemming from historical or societal prejudices present in the training data), and Algorithmic Processing Bias (biases introduced through the processing and interpretation of data by AI algorithms).
The public sector is urgently affected by this growing risk. As of March 2024, the US Government mandated its agencies to conform with AI safety guidelines – including the detection of data bias.
That mandate extends to Federal contractors later in 2024.
This problem requires a new approach to security because traditional security scanners and penetration tests are unable to detect such bias.
Bugcrowd AI Bias Assessments are private, reward-for-results engagements on the Bugcrowd Platform that activate trusted, third-party security researchers (aka a “crowd”) to identify and prioritize data bias flaws in LLM applications.
Participants are paid based on the successful demonstration of impact, with more impactful findings earning higher payments.
The Bugcrowd Platform’s AI-driven approach to researcher sourcing and activation, known as CrowdMatchTM, allows it to build and optimize crowds with virtually any skill set, to meet virtually any risk reduction goal, including security testing and beyond.
“Bugcrowd’s work with customers like the US DoD’s Chief Digital and Artificial Intelligence Office (CDAO), along with our partner ConductorAI, has become a crucial proving ground for AI detection by unleashing the crowd for identifying data bias flaws,” said Dave Gerry, CEO of Bugcrowd. “We’re eager to share the lessons we’ve learned with other customers facing similar challenges.”
“ConductorAI’s partnership with Bugcrowd for the AI Bias Assessment program has been highly successful. By leveraging ConductorAI’s AI audit expertise and Bugcrowd’s crowdsourced security platform, we led the first public adversarial testing of LLM systems for bias on behalf of the DoD. This collaboration has set a solid foundation for future bias bounties, showcasing our steadfast commitment to ethical AI,” said Zach Long, Founder, ConductorAI.
“As the leading crowdsourced security platform provider, Bugcrowd is uniquely positioned to meet the new and evolving challenges of AI Bias Assessment, just as we’ve met the emergent security challenges of previous technology waves such as mobile, automotive, cloud computing, crypto, and APIs,” said Casey Ellis, Founder and Chief Strategy Officer of Bugcrowd.