AI and privacy frameworks: Some assembly required
Now is the time to make sure your organization is prepared for the risks of AI. Read on to learn how to build an AI governance program with privacy and security in mind, understand the risks, and implement safeguards to protect your organization.
Historically in the United States, privacy regulations were developed in reaction to specific events: consumers felt their trust was violated, and the government responded to prevent that from happening again in the future.
All data pertaining to a person warrants protection, and regulators have acted with increasing urgency to enact consumer data protection laws. That urgency has translated to swift action: AI technology is changing rapidly, and without ethical and responsible governance, those changes put personal data at risk.
The AI landscape and regulation
For the past 20 years, we’ve used the same core AI components in different forms: primarily machine learning models embedded in technologies designed to improve workflow efficiency, and in data or behavioral analytics. In the last few years, however, we have seen exponential growth in AI development with the emergence of large language models (LLMs) and generative AI.
The viral popularity of these AI tools has prompted a global call to action on privacy and protecting personal data. Regulators released papers and proposed legislation, and governments around the world issued guidance on how to regulate this space. The White House released the Blueprint for an AI Bill of Rights and an executive order on AI, and the EU introduced the AI Act. In short, the rise of new AI technologies has governments, regulatory bodies, and professional organizations reacting faster than ever before.
Best practices to build an AI governance program today
As a compliance professional, it can be daunting to keep an organization’s standards in check when piles of documents and numerous frameworks each prescribe a different method for doing so. Furthermore, despite the rise of AI and the large task of implementing safeguards, few compliance and privacy teams have expanded enough to review and respond to all this information.
So, where to start?
The internal team
A strong AI governance program must include diverse stakeholders. A logical first step is to establish an AI committee: a cross-functional brain trust of team members from various departments. AI may be used in many areas of an organization (HR, data science, legal, third-party risk assessment, security/IT, engineering), so you need representative team members involved to inform the committee on their department’s specific AI risks and how to navigate them.
Once the AI committee is established, start asking questions to determine how your program should function within the organization, such as:
- Does your organization have reason to consider AI governance?
- Does your organization use AI tools?
- Does your organization develop AI tools?
- Do your vendors use or provide services with AI tools?
Reflecting on these questions will help define how narrow or broad the scope of your program should be, and how to map out which areas your organization should review.
Next, identify the risks inherent in the ways your organization uses AI. Consider whether all teams should be able to use AI-enabled solutions, or whether this should be limited to certain teams or team members.
If you do build tools that utilize AI, be sure to include the developers building those products on your AI committee. Their insider perspective on the potential risks and options for mitigating those risks is invaluable.
External resources
Even with a robust AI committee, program, and management plan, third-party reviews and certifications are the next step in maturity. Just as you would hire an inspector to ensure a house is built properly, you may want to bring in a third-party auditor with AI expertise to review the overarching program, provide feedback, and offer expert advice on how to resolve any issues they find.
There are also free resources to help your organization build a privacy framework. The National Institute of Standards and Technology (NIST) has put together a great resource: the AI Risk Management Framework (AI RMF). The framework consolidates proposed legislation and guidance released by governing bodies across the globe to help organizations understand the governance process and what must be considered in the AI space.
The AI RMF is at version 1.0, with feedback still being accepted. Nevertheless, the framework can be very helpful to compliance teams without AI expertise, and even for teams that have it, a consolidated framework is a good benchmarking exercise. Another great resource is the IAPP AI Governance Center, which offers content, networking opportunities, and thought leadership.
Vendor considerations
Third-party vendors are adding generative AI-enabled features and products at lightning speed, sometimes without explicit notice. This presents risks that privacy, security, and third-party risk management teams must account for in their AI governance programs. But what if there are no AI experts on your team? How do you cut through the confusion?
A good first step is to look at what your team is already doing, what you know well, and how you can leverage existing protocols. How are you currently assessing vendors in your third-party risk management program? Gathering information on which systems and data vendors access, and how they use that information, is usually part of the process, and the same information is important in AI governance. How data are processed and secured is another overlapping governance area.
Take time to evaluate a product’s use case: whether it accesses highly sensitive data, such as biometric identifiers, or is involved in high-risk automated decision-making, such as hiring and firing. Then have your AI committee compile a list of questions concerning these uses, the risks they can present, the elements of human oversight, and how a third party can mitigate those risks. Many of the questions about third-party AI programs will be similar to the questions you are already asking. Your AI committee can then examine what new risks third-party AI introduces to determine what steps should be added to the process.
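To make those questions actionable, some teams encode them as a weighted checklist so vendors can be scored and ranked for deeper review. Below is a minimal sketch of that idea in Python; the questions, weights, and field names are hypothetical illustrations, not a prescribed standard.

```python
# Minimal sketch of a weighted vendor AI questionnaire. All questions and
# weights below are hypothetical examples for illustration only.
from dataclasses import dataclass

@dataclass
class VendorAIQuestion:
    question: str       # what the committee asks the vendor
    risk_weight: int    # 1 (low impact) to 5 (high impact)
    risky_answer: bool  # the yes/no answer that raises risk

CHECKLIST = [
    VendorAIQuestion("Does the product process biometric identifiers?", 5, True),
    VendorAIQuestion("Is the AI involved in automated hiring decisions?", 5, True),
    VendorAIQuestion("Is a human reviewer required before AI output is acted on?", 4, False),
    VendorAIQuestion("Can customer data be excluded from model training?", 3, False),
]

def score_vendor(answers: dict[str, bool]) -> int:
    """Sum the weights of risk-raising answers; higher scores mean the
    vendor warrants a deeper review by the AI committee."""
    score = 0
    for q in CHECKLIST:
        answer = answers.get(q.question)
        if answer is not None and answer == q.risky_answer:
            score += q.risk_weight
    return score

# Example: a vendor that processes biometrics and has no required human review.
print(score_vendor({
    "Does the product process biometric identifiers?": True,
    "Is a human reviewer required before AI output is acted on?": False,
}))  # 9
```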
Where AI regulation is going
With the amount of proposed legislation and circulating guidance on AI governance, regulatory oversight is on the horizon. Taking a lesson from the enactment of the General Data Protection Regulation (GDPR), it is advisable to act now instead of waiting until a law is passed to start building a program. While some laws have a ramp-up period, others are effective immediately, so it is vital to proactively determine what resources and processes you might need to comply with new AI privacy regulations.
You should also review some of the proposed legislation for the international regions you do business in to ensure your program meets those potential standards. Canada and Europe, for example, are frequently publishing content on AI governance, providing a great opportunity for teams doing business there to be proactive.
This preparation is crucial to the software development lifecycle as well. Review pending legislation alongside how your technology plans to use personal data to avoid future compliance issues. For example, if you receive a deletion request but cannot delete the data because an AI model is learning from it, regulators could force the deletion of the algorithm itself due to improper collection and use of data.
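One way to avoid that scenario is to track training-data lineage from the start, so a deletion request immediately reveals which models learned from the affected records. Here is a minimal sketch of that idea, assuming you log data-subject identifiers per training run; the class, identifiers, and in-memory storage are illustrative placeholders, not a specific product’s API.

```python
# Minimal sketch: map each data subject to the model versions trained on
# their data, so deletion requests can be traced to affected models.
from collections import defaultdict

class TrainingLineage:
    def __init__(self) -> None:
        # data-subject ID -> model versions trained on that subject's data
        self._subject_to_models = defaultdict(set)

    def record_training_run(self, model_version: str, subject_ids: list[str]) -> None:
        """Call when a model is (re)trained, before the run is promoted."""
        for subject_id in subject_ids:
            self._subject_to_models[subject_id].add(model_version)

    def models_affected_by_deletion(self, subject_id: str) -> set[str]:
        """A deletion request flags every model that learned from this
        subject, so the team can retrain or retire those versions."""
        return self._subject_to_models.get(subject_id, set())

lineage = TrainingLineage()
lineage.record_training_run("resume-screener-v3", ["subj-001", "subj-002"])
print(lineage.models_affected_by_deletion("subj-001"))  # {'resume-screener-v3'}
```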
Understanding the risks of AI
AI has the potential to handle a multitude of tasks for us, which is certainly appealing. That appeal, along with the high likelihood of widespread adoption, underlines the need to consider how AI risks and regulations might impact how your organization does business. A deliberate strategy is the key to success.
Bias is a particularly pressing AI risk that must be considered before implementing an AI-powered tool. The data fed into an AI model shapes the decisions it makes, so biased or unrepresentative data produces biased outputs. You must monitor AI decisions closely to ensure this kind of inadvertent bias is not causing harm.
For example, consider hiring tools that utilize AI to review resumes and save time for HR teams by narrowing down an applicant pool to the most qualified candidates. If problematic data is fed into the system, and there is no human oversight on how the tool determines which resumes to advance for review, the hiring process can become negatively biased, shutting out potential star employees and introducing legal risk. It is also important to take care in choosing the right individuals to provide oversight for AI tools, as even well-meaning humans can be biased.
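Human oversight can be supported by simple statistical checks. One widely used heuristic is the “four-fifths rule,” which flags a screening tool when any group’s selection rate falls below 80% of the highest group’s rate. The sketch below shows the arithmetic; the group labels and counts are invented, and a flag is a prompt for human review, not a legal conclusion.

```python
# Minimal sketch of a disparate-impact check using the four-fifths rule.
# Group names and counts are invented for illustration.
def four_fifths_check(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> list[str]:
    """outcomes maps group -> (candidates_advanced, candidates_screened).
    Returns the groups whose selection rate falls below `threshold` times
    the best-performing group's rate: a signal for review, not a verdict."""
    rates = {group: advanced / screened
             for group, (advanced, screened) in outcomes.items()}
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Example: the screener advanced 50/200 resumes from group A (25%) but only
# 15/180 from group B (~8.3%); 8.3% < 80% of 25%, so group B is flagged.
print(four_fifths_check({"group_a": (50, 200), "group_b": (15, 180)}))
# ['group_b']
```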
Human oversight is key
It is important to accept that the biases present in all human beings can show up in the AI technologies we create. While a process involving AI might start off fair, it can quickly and unintentionally become biased and do more harm than good. But conscientious humans can also evaluate the functions and output of AI technology to improve how it fulfills its purpose. This oversight is critical no matter what the technology is used for, even though humans themselves are imperfect.
We should never forget that AI is fallible. It can present fiction as fact, make inappropriate statements, and even “hallucinate.” You need human oversight to recognize the problem when an LLM explains the history of bears in space.
Human oversight is also crucial for determining whether and why an AI technology is not performing as it should, and for planning what to do if that happens. Even if you feel completely able to manage the risks of AI, you still need parameters in place to ensure that everyone understands when and how to use it safely and appropriately.
Make sure AI works for you
AI may seem daunting, and many people outright fear it. In order to quell that fear and use AI responsibly, your organization must take time to review how AI affects you, when it can be used, how it is helpful, and how to keep it safe.
AI technology has introduced new and varied risks, but there are ways to mitigate them and protect your organization so everyone can enjoy AI’s many benefits.
Learn more with our on-demand webinar, AI Governance and Privacy: Some Assembly Required.