With AI Still Evolving, Software Industry Group BSA Unveils A Framework to Build AI Trust
As enterprise and government uses of AI technologies expand daily, concerns about AI's intrusion into people's everyday lives continue to grow, prompting deep reflection on how AI bias should be identified and prevented.
To help address these growing concerns for enterprise AI users, the software industry trade group BSA The Software Alliance has released a report, Confronting Bias: BSA’s Framework to Build Trust in AI, which offers a flexible, knowledge-based strategy that organizations across industries can use to deal with AI bias and related issues.
The 32-page report is designed as a credible starting point as critical discussions about AI bias risk management are held within organizations, Christian Troncoso, the senior director of policy for BSA and an attorney, told EnterpriseAI.
“There is a lot of interest out there in better understanding how companies are operationalizing their commitments to AI ethics, particularly as it relates to the growing concerns that AI may exacerbate existing biases,” said Troncoso. “I've spent the better part of a year [talking] with our member companies and their legal compliance teams and more importantly, with their AI product development teams, to really understand how they are addressing these issues. There is a real sense that there is a hunger out there in the policy-making community to understand these things.”
With that need in mind, BSA decided to create and distribute the free framework to BSA members and to non-members to help organizations mitigate the risk of bias in their AI products and services, he said.
“The practices that are reflected in the framework are an amalgamation of what our member companies are doing,” said Troncoso. “It reflects the best practices that we see across the industry for AI and service development. The issue of AI bias in general is that there is not one entity that has sole responsibility for addressing these issues,” leaving companies with questions about where and how to start.
“Most companies are not developing their AI totally in-house,” he said. “Oftentimes, they are sourcing data from one firm, using components from another, and bringing them all together to create their systems or they are working with companies like BSA member companies to acquire or license AI products and services that they can then customize using their own data and for their own use cases.”
The new BSA framework can help in these situations because it reflects and identifies the best practices that extend throughout the AI lifecycle, said Troncoso. Information and best practices were gathered from a wide range of organizations that are already doing these evaluations and processes and then synthesized into the framework, which users can change or adapt as they see fit.
“There is a deep body of research out there on these issues,” said Troncoso. “I am an attorney. I am not a data scientist, but these issues are sort of socio-technical in nature. It is not just the data science community that has equities here. We need to be having conversations that span across companies and across disciplines, and that is another thing that you will see reflected in the framework.”
The AI framework sets out recommended corporate governance structures, processes and safeguards that are needed to implement and support an effective AI risk management program, while also identifying existing best practices, technical tools and other resources that can be used to mitigate specific AI bias risks that can emerge throughout an AI system’s lifecycle.
By taking such steps, organizations can help ensure that the AI they use avoids systematic bias and fosters transparency, accountability and trust among users and the public, according to BSA.
The framework arrives just after the European Union proposed strict regulations to govern AI, with similar discussions emerging in the U.S. and other nations, according to BSA. In response, BSA is also calling for the U.S. government to draft legislation and regulations that mitigate the risk of bias in high-risk uses of AI, while encouraging both government and the private sector to use its new AI Risk Framework as a guide.
The issue of bias in AI arises because AI systems can unjustifiably yield less favorable or harmful outcomes based on a person’s demographic characteristics, according to BSA. The framework is designed as an assurance-based accountability mechanism that AI developers and deployers can use to organize and establish roles, responsibilities and expectations for internal processes, as well as a training, awareness and education platform.
“Tremendous advances in artificial intelligence are quickly transforming expectations about how the technology may reshape the world and prompting important conversations about equity,” the framework states. “While AI can be a force for good, there is a growing recognition that it can also perpetuate (or even exacerbate) existing social biases in ways that may systematically disadvantage members of historically marginalized communities. As AI is integrated into business processes that can have enormous impacts on people’s lives, there is a critical need to ensure that organizations are designing and deploying these systems in ways that account for the potential risks of unintended bias.”
“Publishing this framework is really just the first step in a longer journey to generate conversations around these issues and encourage companies – whether they adopt the framework outright or use it as a resource – to evaluate what they are currently doing and assess where there might be gaps,” said Troncoso. “The framework encompasses essentially a playbook that companies can use to evaluate their AI product and development lifecycles. It also highlights the key corporate governance safeguards that are important to have in place to really oversee an effective AI risk management program.”
The comprehensive AI regulations proposed by the EU in April will take a couple of years to become law there, said Troncoso, but U.S. companies need to start thinking about the consequences now if they expect to do business with EU member nations in the future.
“It is ultimately going to require companies to have in place risk management processes, such as those that our framework sets out,” he said.
Disseminating the new AI framework will be a priority for BSA in the months ahead, he said.
“When you get reports out like this, it is really just the start of a longer journey,” said Troncoso. “There is still so much more to do, but we are really thrilled to get it out there.”