Tuesday, March 19, 2024

To Solve AI Ethics Gaps, We Must First Acknowledge The Problem Exists 

Companies are adopting AI solutions at unprecedented rates, but ethical worries continue to dog the rollouts. While there are no established standards for AI ethics, a common set of guidelines is beginning to emerge to help bridge the gap between ethical principles and AI implementations. Unfortunately, a general hesitancy to even discuss the problem could slow efforts to find a solution.

As the AI Ethics Chief for Boston Consulting Group, Steve Mills talks with a lot of companies about their ethical concerns and their ethics programs. Companies are not slowing their AI rollouts over ethics concerns at this time, Mills says, but they are grappling with the issue and searching for the best way to develop AI systems without violating ethical principles.

[Photo: Steve Mills of Boston Consulting Group]

“What we continue seeing here is this gap, what we started calling this the responsible AI gap, that gap from principle to action,” Mills says. “They want to do the right thing, but no one really knows how. There is no clear roadmap or framework of this is how you build an AI ethics program, or a responsible AI program. Folks just don’t know.”

As a management consulting firm, Boston Consulting Group is well positioned to help companies with this problem. Mills and his BCG colleagues have helped companies develop AI ethics programs, and out of that experience they recently distilled a general framework that others can use to get started.

It has six parts:

      1. Empower Responsible AI Leadership – Appoint a leader who will take responsibility and give her a team;
      2. Develop principles, policies, and training – These are the core principles that will guide AI development;
      3. Establish human and AI governance – The system for reviewing adherence to principles and for participants to voice concerns;
      4. Conduct Responsible AI reviews – Build or buy a tool to conduct reviews of AI systems at scale (a rough sketch of one such check follows this list);
      5. Integrate tools and methods – Directly imbue ethical AI considerations into the AI tools and tech;
      6. Build and test a response plan – The system for responding to lapses in principles, tested before it is needed.

You can read more about BCG’s six-part plan here.
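BCG’s plan is organizational rather than technical, and it does not prescribe any particular tooling. Purely as an illustrative sketch of the kind of automated check that steps 4 and 5 gesture at, here is a minimal fairness gate in Python. Everything in it (the demographic-parity metric, the 0.1 threshold, and the function names) is an assumption for illustration, not part of BCG’s framework.

    # Hypothetical sketch of one check a Responsible AI review tool might
    # run at scale (step 4), wired directly into the ML workflow (step 5).
    # BCG's framework does not prescribe this metric or this code.

    from typing import Sequence

    def demographic_parity_gap(preds: Sequence[int], groups: Sequence[str]) -> float:
        """Largest difference in positive-prediction rate between any two groups."""
        counts: dict[str, tuple[int, int]] = {}
        for pred, group in zip(preds, groups):
            total, positives = counts.get(group, (0, 0))
            counts[group] = (total + 1, positives + pred)
        rates = [positives / total for total, positives in counts.values()]
        return max(rates) - min(rates)

    def review_gate(preds: Sequence[int], groups: Sequence[str],
                    threshold: float = 0.1) -> bool:
        """Return True if the model passes this single fairness check."""
        gap = demographic_parity_gap(preds, groups)
        print(f"Demographic parity gap: {gap:.2f} (threshold {threshold})")
        return gap <= threshold

    if __name__ == "__main__":
        # Toy predictions for two demographic groups.
        preds = [1, 1, 0, 1, 0, 0, 0, 1]
        groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
        print("Pass" if review_gate(preds, groups) else "Flag for human review")

A real review program would combine many such checks with human sign-off; the point of a gate like this is only to route borderline models to the governance process described in step 3, not to replace it.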

The most important thing a company can do to get started is to appoint somebody to be responsible for the AI ethics program, Mills says. That person can come from inside the company or outside of it, he says. Either way, he or she will need to be able to drive the vision and strategy of ethics while also understanding the technology. Finding such a person will not be easy (AI ethicists are scarce enough, let alone executives who can take on this role).

“Ultimately, you’re going to need a team. You’re not going to be successful with just one person,” Mills says. “You need a wide diversity of skill sets. You need bundled into that group the strategists, the technologists, the ethicists, marketing–all of it bundled together. Ultimately, this is really about driving a culture change.”

To read the rest of this story, go to our sister site, Datanami.

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.
