Governance And Guardrails Help Credit Unions Navigate AI

How Members Cooperative focuses on structure, oversight, and clear expectations to ensure AI supports, not undermines, long-term strategy.
Simone Suri, Chief Administrative Officer & General Counsel, Members Cooperative Credit Union

Most people wouldn’t take a road trip without putting on their seatbelt first. Simone Suri wants credit unions to take the same approach with their AI journeys.

Suri is chief administrative officer and general counsel at Members Cooperative Credit Union ($1.2B, Duluth, MN). The cooperative started using artificial intelligence in earnest about a year ago, but leadership didn’t just hand over access and let staff loose. Rather, it grants access on a case-by-case basis to employees who request it. Employees with approval may use only the credit union’s licensed Microsoft Copilot, and they cannot submit member data into the tool.

“We don’t want people going to ChatGPT because we have no idea what’s going to happen to that data,” Suri says. “The idea is to use a tool where the data will remain safe. We want our data in a more controlled environment.”

Members Cooperative’s approach to AI revolves around a robust internal policy that provides guardrails for usage. That starts with a risk assessment and written documentation that details for all users when, where, and how they can use AI. Such documentation protects the credit union, staff, and members alike. Although most employees are excited about using AI, Suri says, most people don’t have a technology background. Having a policy like this in place is crucial for helping them understand the opportunities they can leverage as well as what to avoid.

“Anytime new technology and data is involved, we need to take a second and understand not only the benefits but also the risks and how we navigate those risks so we can get the benefits without compromising the security of our data,” Suri says. “This has to be a no-compromise situation.”

The Use Cases

Members Cooperative selected Copilot because of its security, data protections, and integration capabilities with the credit union’s existing tools, Suri says. Although the credit union does not allow member data in AI — which largely rules out use cases for member-facing staff — Suri says there are many ways AI drives back-office efficiencies. But even then, there are guardrails. Associates must disclose when they use AI, and human oversight is required, given AI’s propensity to make mistakes.

“You can’t assume everything coming out of an AI tool is like punching in two times two on a calculator and the answer is always four,” Suri says. “You really need to look at those results and validate them. We’ve had situations where information hasn’t been accurate. Most recently, I found that people like to put legal questions into AI. Again, you have to think about where AI is generating its answers. The information isn’t coming from an attorney on the other side of the wires answering your question. It’s pulling from all of these different databases, some of which are outdated or old or just might be illegitimate. At the same time, AI may be pulling from sources that are amazing and incredibly accurate, but you still need that human oversight.”

So what are the use cases at Members Cooperative? Like other credit unions, it is still figuring that out. But Suri says inputting existing policies into a licensed AI tool can help improve those policies, whether by making them more concise or uncovering gaps. Similarly, AI could conduct market research or even provide a starting point to draft an AI usage policy. One key, she notes, is the prompts that go into AI. The better the prompts, the better the outcomes.

“Each organization needs to figure out where its comfort level is,” she says. “Some studies have shown we’re not getting the efficiencies we think we’re getting.”

Best Practice: AI tools are everywhere — even Google’s first search result is frequently an AI summary. Suri suggests blocking access to public AI sites on all credit union-owned computers. Doing so encourages employees to use credit union-licensed services and steers them away from potentially less secure tools. Plus, she adds, many credit unions are moving in that direction.

The Balancing Act

Suri acknowledges there’s a balancing act between empowering employees and exercising caution. The key, she says, is education. Leaders must ensure organizations are discussing the risks and advantages of AI, identifying use cases, and recognizing how those use cases might vary by department. What’s most important is to keep the conversation going.

CU QUICK FACTS

Members Cooperative Credit Union

HQ: Duluth, MN
ASSETS: $1.2B
MEMBERS: 58,793
BRANCHES: 12
EMPLOYEES: 205
NET WORTH: 10.2%
ROA: 0.57%

“The biggest problem you’re seeing in credit union land today is we know AI is there and people are excited but not talking about how to use it safely,” she says. “Or we talk about a risk once and then not again for six months.”

Risk assessments and robust governance around AI can help alleviate the concern in departments like risk and IT, in part because that kind of documentation provides clear, objective guidance on do’s and don’ts.

“It allows you to recognize risks and contemplate what you’re willing to accept,” Suri says. “What risks are appropriate given the risk appetite of the credit union and which are not? Going through the process in a methodical way like a risk assessment makes it less personal. But you’ve got to have enough knowledge on both sides to have those tough conversations and work through those risks.”

In addition to setting clear standards for employees, Suri suggests examining all vendor contracts to better understand how those providers use AI. That can be a challenge, especially if the credit union’s point of contact does not have all the answers. If that’s the case, keep digging.

“You’ve got to ask questions about what type of AI tools your vendor is using, what data is going in, and identify associated risks, including compliance and operational risk,” she says. “Those are hard questions for a sales rep to answer, so you usually need to get other folks involved in those conversations.”

Suri adds that memorializing AI disclosure requirements in the contract can help the credit union if vendors change their practices later on.

“Third parties are a significant source of FI breaches,” she says. “If they start putting member data or other proprietary information into those tools, we’re vulnerable.”

The Lessons

Suri admits that even though Members Cooperative has a robust AI plan in place, it still has work to do.

“You could line up 10 employees, ask each one about their comfort level and what AI can do for them, and every single person will have a completely different answer,” she says.

But rather than pushing employees to use AI more, Suri says that varied level of comfort — which is likely common at many credit unions — exemplifies the need for a thoughtful approach, including a risk assessment and well-formulated governance framework.

The big lesson might not be to shy away from AI, but to be thoughtful about how, when, and where it’s deployed — and always back up usage with human oversight.

“I’m finding more and more folks are getting comfortable with AI and pushing the boundaries, which is great,” Suri says. “It’s good to play with things and challenge yourself. But be cautious along that journey because we are seeing incorrect results. Not because AI has become less accurate, but because people are getting more comfortable and pushing the boundaries of what AI can provide.”

April 6, 2026