AI Governance: How Blockchain Can Build Accountability and Trust
A recent article by McKinsey & Co. perhaps best sums up the current zeitgeist on corporate environmental, social and governance (ESG) initiatives: “Although valid questions have been raised about ESG, the need for companies to understand and address their externalities is likely to become essential to maintaining their social license.” Artificial intelligence (AI) technology is a juggernaut of an externality, increasingly influencing how social license is granted – or not. Fortunately, amid widespread public mistrust of AI, using blockchain technology for AI governance can go a long way toward helping companies build public trust in their responsible use of the technology and, in turn, maintain their social license.
Using blockchain for AI governance
Simply put, using blockchain technology to immutably record all the decisions made about an AI or machine learning (ML) model is a major step toward transparency, a critical precursor to trust. This use of blockchain allows auditability, as well, to further help establish trust. These tenets are at the heart of an AI governance model built around a corporate AI and model development standard, and enforced by blockchain technology.
Developing an AI decisioning model is a complex process that comprises myriad incremental decisions. These include the model’s variables, model design, algorithms, training and test data utilized, selection of features, the model’s raw latent features, ethics testing and stability testing. It also includes the scientists who built different portions of the variable sets, participated in model creation, and performed model testing. Captured on the blockchain, the complete record of these decisions provides the visibility required to effectively govern models internally against corporate-defined standards, ascribe accountability and satisfy impending regulatory requirements.
Steps to codify accountability
Before blockchain became a buzzword, I began implementing an analytic model management approach in my data science organization. In 2010 I instituted a development process centered on an analytic tracking document (ATD). The ATD detailed model design, variable sets, assigned scientists, training and testing data, success criteria and ethics/robustness testing. It breaks down the entire development process into three or more agile sprints, with formal reviews and approvals at each stage of fulfillment.
I have since made blockchain the linchpin of the ATD; it’s the mechanism used to codify analytic and ML model development by associating a chain of entities, work tasks and requirements with each individual model, including testing and validation checks. Blockchain technology essentially records an immutable instance of the contract between my data scientists, managers and me that describes:
- What the model is
- The model’s objectives
- How we’d build that model, including the prescribed ML algorithm
- Areas that the model must improve upon, for example, a 30% improvement in card-not-present (CNP) credit card fraud detection at the transaction level
- The degrees of freedom the scientists have to solve the problem, and those which they don’t
- Re-use of trusted and validated variable and model code snippets
- Training and test data requirements
- Ethical AI procedures and tests
- Robustness and stability tests
- Specific model testing and model validation checklists
- The specific analytic scientists assigned to build the variables and models and train them, and those who will validate code, confirm results, and test the model variables and model output
- Specific success criteria for the model and specific customer segments
- Specific analytic sprints, tasks and scientists assigned, and formal sprint reviews/approvals of requirements met.
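The commitments listed above can be sketched as a single hashed record. This is a minimal illustration, not FICO’s actual ATD schema; the field names and values are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ATDRecord:
    """Hypothetical sketch of the kinds of commitments an ATD captures."""
    model_name: str
    objective: str
    algorithm: str
    success_criteria: str
    assigned_scientists: list
    required_tests: list = field(default_factory=list)
    approvals: list = field(default_factory=list)

    def digest(self) -> str:
        # Canonical JSON (sorted keys) so the same contract always
        # yields the same SHA-256 hash when anchored to a blockchain
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

atd = ATDRecord(
    model_name="cnp_fraud_model_v3",
    objective="30% improvement in CNP fraud detection at transaction level",
    algorithm="gradient-boosted trees",
    success_criteria="lift measured on holdout data by customer segment",
    assigned_scientists=["scientist_a", "scientist_b"],
    required_tests=["ethics", "stability", "robustness"],
)
```

Because the digest is computed over a canonical serialization, any later change to the signed contract produces a different hash, which is what makes the recorded agreement tamper-evident.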
As you can see, the ATD informs a set of very specific requirements that are linked to the corporate model development AI standard. Once we’ve all negotiated our roles, responsibilities, timelines and requirements of the build, everyone on the team signs the ATD as a contract. It becomes the document by which we define the entire Agile model development process.
With individuals assigned to each requirement, the team then assesses existing collateral, typically pieces of previously validated variable code and models. Some analytic variables have been approved in the past, others will be adjusted, and still others will be new. The blockchain then records, each time a variable is used in this model, whether its code was adopted from code stores, written new or changed; who did the work; which tests were performed; which modeling manager approved it; and my sign-off.
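Conceptually, each of these variable-level decisions becomes an entry that commits to the one before it. The sketch below is a toy append-only hash chain, not a production blockchain, and the actor and action names are invented for illustration:

```python
import hashlib
import json

class DecisionLedger:
    """Toy append-only hash chain: each entry commits to the previous
    entry's hash, so altering any recorded decision breaks the hashes
    of every entry that follows it."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, detail, approved_by):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "actor": actor,
            "action": action,
            "detail": detail,
            "approved_by": approved_by,
            "prev": prev_hash,
        }
        # Canonical JSON keeps the hash reproducible for auditors
        entry_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": entry_hash})
        return entry_hash

ledger = DecisionLedger()
ledger.record("scientist_a", "reuse_variable",
              "velocity_7d adopted from approved code store", "manager_x")
ledger.record("scientist_b", "new_variable",
              "merchant_risk_score written new, unit tests passed", "manager_x")
```

A real deployment would anchor these hashes on a distributed ledger rather than an in-memory list, but the linkage principle is the same.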
Importantly, the blockchain provides a trail of decision-making. It shows whether a variable is acceptable, whether it introduces bias into the model, and whether it is utilized properly. The blockchain produces not just a checklist of positive outcomes; it records the entire journey of building these models, including missteps, corrections and improvements.
This approach affords a high level of confidence that no one has added a variable to the model that performs poorly or introduces some form of bias. It ensures that no one used an incorrect field in their data specification or changed validated variables without permission and validation. Without the critical review process afforded by the ATD, now made auditable by blockchain, my data science organization could inadvertently introduce a model with errors, particularly as these models and their associated algorithms become increasingly complex.
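The audit step this enables can be sketched as an integrity check an auditor might run over such a chain. This assumes entries shaped as dictionaries carrying a `prev` link and a `hash` field, as in a simple hash-chained ledger; the helper and field names here are hypothetical:

```python
import hashlib
import json

def make_entry(prev_hash, **fields):
    # Commit the entry body, including the previous hash, to SHA-256
    body = {**fields, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(entries):
    """Recompute every hash and check the linkage between entries.
    Returns True only if no recorded decision has been altered."""
    prev = "0" * 64
    for entry in entries:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body.get("prev") != prev:
            return False  # linkage broken: entries reordered or removed
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False  # body altered after it was recorded
        prev = entry["hash"]
    return True

chain = [make_entry("0" * 64, actor="scientist_a", action="add_variable")]
chain.append(make_entry(chain[0]["hash"], actor="manager_x", action="approve_variable"))
```

Any edit to a recorded entry, or any attempt to reorder or delete one, causes `verify_chain` to fail, which is precisely the tamper-evidence that makes the development record auditable.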
Furthermore, the blockchain system ensures that corporate-mandated ethics and stability tests are performed, reviewed and approved. The process captures and codifies which relationships must be monitored once the model is in production, to meet Responsible AI standards.
Transparent development journeys result in less bias
Overlaying the corporate model development standard on the blockchain gives the analytic model its own entity, life, structure and description. Model development becomes a rigorously organized process; detailed documentation can be produced to ensure that all elements have gone through the proper review and the model’s decisions are free of bias. These steps are revisited while the model is in production, providing an essential monitoring framework for the operational phase of AI model governance; the recorded assets inform the observability and monitoring requirements necessary to maintain trust in the model’s decisioning once it is in use.
In sum, blockchain allows complex AI models to become transparent and auditable. These are critical factors in making AI technology accountable and trustworthy – an essential step in building AI governance systems that can renew, instead of erode, companies’ social license.
Scott Zoldi is chief analytics officer at FICO responsible for the analytic development of FICO's product and technology solutions. While at FICO, Scott has been responsible for authoring more than 110 analytic patents, with 71 granted and 46 pending. Scott is actively involved in the development of new analytic products and Big Data analytics applications, many of which leverage new streaming analytic innovations such as adaptive analytics, collaborative profiling and self-calibrating analytics. Scott is most recently focused on the applications of streaming self-learning analytics for real-time detection of cyber security attacks. Scott serves on two boards of directors, Software San Diego and Cyber Center of Excellence. Scott received his Ph.D. in theoretical and computational physics from Duke University.