Advanced Computing in the Age of AI | Tuesday, July 23, 2024

Four Steps to Ensure GenAI Safety and Ethics 

Atena Reyhani is Chief Product Officer at ContractPodAi

With the deployment of generative artificial intelligence (GenAI) happening at a rapid pace, organizations of all sizes are tasked with navigating the challenges around implementation, especially regarding ethics and accuracy.

Corporate leaders must establish clear guidelines and guardrails for GenAI that encourage responsible usage and prevent unintended consequences. Product teams and business leaders, meanwhile, should secure a path for their organization’s digital transformation journey, building and employing AI solutions across the enterprise in an ethical and transparent fashion.

With that in mind, here are four areas companies can focus on to keep ethics and safety at the forefront of their GenAI implementation:

Leveraging Specialized LLMs

Public, one-size-fits-all models only go so far when it comes to securing verticalized use cases. This is why organizations deploying GenAI should focus on specialized enterprise large language models (LLMs) and vertical solutions. Vertical solutions add industry-relevant frameworks, customer-specific rules, and information that enhance precision around a business’s needs.

By using specialized models and vertical solutions, leaders can ensure AI outputs are relevant to the business and its objectives, specific to the industry, and backed by guardrails for essential accuracy, privacy, and security.

Specialized LLMs can play a vital role in GenAI rollout

Having a guardrails-first mindset and strong governance mechanisms for the responsible use of AI helps protect companies against AI misuse and misleading content on one hand and breaches and cyber threats on the other.
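In practice, a guardrails-first mindset often means automated checks that run on every model output before it reaches a user. The Python sketch below is a minimal, hypothetical illustration; the patterns and policy names are assumptions for the example, not any specific product’s implementation.

```python
import re

# Hypothetical guardrail policies: screen model output for sensitive
# patterns before it is returned to the user. Patterns are illustrative.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def apply_guardrails(output: str) -> tuple[str, list[str]]:
    """Redact sensitive spans and report which policies fired."""
    violations = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(output):
            violations.append(name)
            output = pattern.sub("[REDACTED]", output)
    return output, violations

safe, flags = apply_guardrails("Contact SSN 123-45-6789 for details.")
# safe no longer contains the SSN; flags records the "ssn" policy firing
```

A real deployment would layer such pattern checks with model-based classifiers and logging, but the shape is the same: every output passes through policy checks before release.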

Furthermore, if you’re leveraging a vendor’s GenAI solution, the vendor’s values and practices must align with your organization’s. For example, asking the vendor what data their models are trained on, what guardrails they have in place, and how they ensure security and ethical usage will help you narrow down the right GenAI vendor to work with.

Raising AI Awareness with Training Initiatives

When GenAI is adopted and implemented in a company, leaders should launch employee training initiatives that help employees keep up with the technology, support them in understanding how it is, and is not, to be used, and reinforce the human-in-the-loop approach to vetting GenAI outputs.

This is all in an effort to make employees comfortable with the introduction of GenAI and its application to their processes. Training helps them understand the technology’s possibilities as much as its limitations, and fosters literacy, engagement, and trust around the technology throughout the company. By continuing to educate people on GenAI usage and offering ongoing training, leaders can build on that awareness and trust, increasing comfort with AI technology and decreasing the chances of its misuse.

Furthermore, training people is only half the equation: GenAI output is only as accurate as the data behind it. Organizations must ensure the data feeding into the GenAI is vetted and cleansed.

Ensuring Strict Data Privacy Enterprise-Wide

In data-sensitive industries—like legal, banking, and healthcare—inputting vast amounts of personal information into publicly accessible AI systems represents a substantial security risk to individuals.

If an organization is leveraging a vendor’s GenAI solution, it should verify that the vendor fully controls the data its LLMs are trained on and that customer data isn’t used for model training. Customer data will be processed by the AI, but for privacy purposes, the AI must not learn from or retain it.

Data privacy is of the utmost importance in GenAI deployment

To mitigate privacy risk, then, leaders must implement added safety measures and robust data governance practices—across the enterprise—around how this data is collected and retained.

This involves factoring in privacy considerations during AI design to limit unnecessary data exposure later on; putting strict limits on how long data is stored, to prevent the retention of personal information over long periods and to reduce the chances of it being exposed in breaches; and anonymizing and aggregating data—removing identifiable information from datasets and combining individual data points into larger datasets—to shield people’s identities and personal details.
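As a simple illustration of the anonymize-and-aggregate step, the hypothetical Python sketch below drops direct identifiers from records and rolls the remaining data points up into per-group totals; the field names and records are assumptions made for the example.

```python
from collections import Counter

# Hypothetical usage records; "name" and "email" are direct identifiers.
records = [
    {"name": "Ana", "email": "ana@example.com", "department": "Legal", "queries": 12},
    {"name": "Ben", "email": "ben@example.com", "department": "Legal", "queries": 7},
    {"name": "Cy",  "email": "cy@example.com",  "department": "Finance", "queries": 4},
]

IDENTIFIERS = {"name", "email"}

def anonymize(record: dict) -> dict:
    """Remove direct identifiers before the record is stored or analyzed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

def aggregate(records: list[dict]) -> dict:
    """Combine individual data points into per-department totals."""
    totals = Counter()
    for r in records:
        totals[r["department"]] += r["queries"]
    return dict(totals)

anonymized = [anonymize(r) for r in records]
totals = aggregate(anonymized)
# totals -> {"Legal": 19, "Finance": 4}
```

Production pipelines would go further (pseudonymization, k-anonymity checks, differential privacy), but the principle is the same: strip identifiers early and work with aggregates wherever possible.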

Take a “Human in The Loop” Approach

As mentioned above, AI systems can sometimes generate inaccurate or unexpected outputs—or what are known as “hallucinations.” These occurrences give rise to the need for leaders to supervise and evaluate the quality of AI responses with increasing regularity.

Companies can start by dedicating resources to monitor AI systems to improve their quality and, therefore, their trustworthiness. Think of overseeing the technology as watching over a child’s behavioral development: the quality of the oversight the AI or child is exposed to directly impacts their output and behavior respectively.

This means fostering fair and balanced AI outputs by constantly exposing AI models to diverse, unbiased, and wholly accurate data.
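One way to operationalize the human-in-the-loop approach described above is a review gate: outputs below a confidence threshold are held for a person to vet instead of being released automatically. The Python sketch below is a minimal illustration under assumed names and thresholds, not a reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Hypothetical human-in-the-loop gate: low-confidence AI outputs
    are queued for human review instead of being auto-released."""
    threshold: float = 0.8
    pending: list = field(default_factory=list)

    def route(self, output: str, confidence: float) -> str:
        if confidence >= self.threshold:
            return output  # confident enough to release automatically
        self.pending.append(output)  # held for a human reviewer
        return "[pending human review]"

queue = ReviewQueue(threshold=0.8)
auto = queue.route("The contract renews on June 1.", confidence=0.95)
held = queue.route("Clause 7 may permit early termination.", confidence=0.55)
# auto is released as-is; held is queued and replaced with a placeholder
```

The threshold itself is a governance decision: lowering it releases more outputs unreviewed, raising it sends more work to human reviewers.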

The Future of Generative AI

At the end of the day, the increasing advancement and regulation of GenAI call for corporate leaders to establish accurate, secure, and trusted forms of the technology. It takes time to increase AI awareness with internal training initiatives, adopt and implement specialized large language models, guarantee strict data privacy across the enterprise, and watch over and adjust the very latest AI systems.

But by embracing the unique opportunity to take the above measures—and many others—corporate leaders can advance corporate innovation that is not only ethically sound and socially responsible but also operationally advantageous for their business.


Atena Reyhani is Chief Product Officer at ContractPodAi. Her responsibilities include leading the product vision, product strategy, and roadmap. She leads the product team and works in close collaboration with the rest of the leadership team across the organization to formulate and execute the product vision. Prior to joining ContractPodAi, Atena led various cross-functional teams to develop products in Higher Education, Lottery & Gaming industries. Her educational background is a blend of computer science and business, and her areas of focus include brain-computer interfaces and AI-based business transformation.

EnterpriseAI