What Makes Good AI Policy? Start With Curiosity And Accountability.

As credit unions move from experimentation to adoption, leaders offer firsthand insight into what separates weak policies from ones that actually work.

Top-Level Takeaways

  • Strong AI policies balance control with enablement, not restriction.
  • Risk-tiering and human oversight show up in every mature approach.
  • Training and governance matter as much as the technology itself.

Credit unions are no longer asking whether to adopt artificial intelligence. They’re figuring out how to govern it before it governs them, and that shift is showing up clearly in how they write and enforce policies.

Across institutions, leaders describe the same tension in slightly different ways.

“AI adoption is not optional for high-performing organizations, but it must be done responsibly and intentionally,” says Paul Donahue, senior vice president of collections and information security at CEFCU ($8.3B, Peoria, IL).

At Sunward Federal Credit Union ($4.6B, Albuquerque, NM), Dennis Wood, vice president of innovation, says the credit union designed its AI rules to serve as handrails, not handcuffs, guiding employees without slowing progress.

“Our policy is meant to educate and guide employees on proper usage of AI, whether used in their daily jobs or by vendor partners,” Wood says. “We’ve also created a security policy that only allows employees to use AI resources once they have completed an annual training, vetted by our AI and data governance council.”

Best Practices For The Best Policies

A review of multiple real-world AI policies uncovers a consistent structure. Although language and formatting vary, the strongest policies converge around a handful of practical components that define how the credit union uses, governs, and monitors AI across the organization.

Some of the must-have components that emerged include:

  • Clear acceptable use rules and defined tool access.
  • Data security, privacy, and classification standards.
  • Human oversight and accountability requirements.
  • Risk tiering and governance processes.
  • Vendor oversight and third-party accountability.
  • Employee training and AI literacy expectations.

Managers say these elements appear repeatedly across policies not because they’re theoretical best practices, but because they address real operational risks that credit unions are already encountering.

Guardrails, Not Roadblocks, For Acceptable Use

Paul Donahue, SVP of Collections and Information Security, CEFCU

Strong policies start with clarity around what employees can and cannot do. This includes approved tools, prohibited activities, and practical examples of acceptable use that remove ambiguity.

One policy explicitly limits usage to “Permitted AI Tools … vetted and approved by the AI Leadership Committee,” while prohibiting any unapproved tools without prior review. That clarity matters because these tools, many of them free, have become so accessible. Leaders say it is not optional given employee enthusiasm for tools like ChatGPT.

“That enthusiasm reinforces why strong training and clear policies are essential,” says Donahue at CEFCU. “Training gives employees confidence to use AI responsibly, and policies provide practical, repeatable guardrails.”

Donahue notes that clear guidance builds confidence, but it doesn’t eliminate risk. That’s where technical controls come in as a necessary backstop.

“Technical restrictions remain an important layer of protection,” the SVP says. “Controls such as blocking unapproved external AI tools help reduce exposure, but the goal is to continually enhance technical safeguards without preventing employees from leveraging AI productively.”

He says what’s worked best is combining clear, board-backed governance, human-in-the-loop approval requirements, and a pragmatic low-risk certification pathway.

Responsible AI And Data Discipline

Dennis Wood, VP of Innovation, Sunward FCU

If there is one area where policies require exacting detail, it is in data protection. Across documents, strict rules around sensitive data, personally identifiable information (PII), and confidential information are non-negotiable.

One credit union’s policy prohibits entering PII such as member numbers, credit card numbers, and other sensitive data into AI tools, whereas another policy establishes entire categories of restricted data that AI systems cannot access at all. These controls reflect a shared understanding that the biggest AI risk is not the model itself, but how data flows through it.

That concern shows up consistently in how leaders talk about training. At Sunward, Wood says effective programs must include “best practices to protect PII/NPPI data” alongside risks like bias and hallucinations, reinforcing that responsible AI starts with data discipline.

Humans Own The Outcome

Kayvee Kondapalli, EVP & CIO, Greater Texas FCU

Another universal element is the requirement for human oversight. No policy reviewed for this article allows AI to operate without accountability, especially in member-facing or decision-making contexts.

One policy states clearly that employees “may not make important decisions based solely on … AI Tool output.” Another emphasizes that AI must “augment human capabilities … not replace or undermine them.”

These are operational safeguards, not theoretical ones, and in practice, that means AI decisions still have a human owner.

“AI should augment, not replace, accountable decision-makers,” says Kayvee Kondapalli, executive vice president and chief information officer at Greater Texas Credit Union ($980.0M, Austin, TX). The executive adds that at his credit union, every system must have a defined human owner responsible for validation, monitoring, and escalation.

Risk Tiers And Governance In AI

A clear pattern across policies is the use of risk-tiering. Rather than treating all AI the same, institutions categorize use cases based on impact, complexity, and exposure.

At Greater Texas, governance distinguishes between assistive, decision-support, and member-impacting AI, with increasing levels of scrutiny as risk rises. Others apply a similar model, requiring executive-level approval for high-risk, member-impacting use cases.

Leaders say this structure works best when it is grounded in real use cases.

Doug True, President & CEO, FORUM Credit Union

Doug True, CEO of FORUM Credit Union ($2.3B, Fishers, IN), says his shop’s approach is intentionally practical.

“Our AI governance framework is custom fit for our use cases and our partners involved with developing solutions,” he says. “That is supported by a cross-functional work team who meet regularly to share best practices in usage and governance.”

For FORUM, building real-world clarity into its AI strategy has translated into faster adoption without losing oversight.

“Driving processes and governance centered on specific use cases has been a logical and efficient practice for us,” True continues, noting that the approach allows the organization to balance speed with oversight while scaling adoption.

Vendors Are Part Of The Policy, Not Outside It

AI risk does not stop at internal tools. Policies consistently extend governance to vendors, requiring transparency, due diligence, and ongoing monitoring.

One policy makes this explicit: third-party providers “remain fully accountable for the security, compliance, accuracy, and outcomes of AI-enabled solutions,” and their use of AI does not transfer risk away from the credit union. Other policies reinforce this with requirements for model validation, testing access, and contractual controls.

Leaders say this is becoming more urgent as vendor capabilities evolve quickly.

“Many service providers … have adopted AI capabilities in a mad rush,” says Wood at Sunward. Such a pace makes it critical that credit unions continuously evaluate and reassess vendor risk rather than treat it as a one-time exercise.

“Your stakeholders will thank you for this,” Wood says.

Training Is The Policy In Action

Even the most detailed policy fails without employee understanding. That’s why training and AI literacy show up as core components across both policies and interviews on the subject.

At Sunward, employees cannot access AI tools until they complete required training, reinforcing accountability at the point of use. CEFCU similarly emphasizes “recurring training” as essential, particularly as adoption accelerates organically across teams.

Leaders consistently stress that training is what turns policy into practice. Technical controls can only go so far, making education and clear expectations the most effective way to scale responsible AI use across the organization.

The Bottom Line? The Real Standard Is Balance.

What stands out across these policies is not just what they include, but how they balance competing priorities. They’re intended to be structured enough to manage risk, but flexible enough to allow innovation.

That balance is also tied to mission.

“As a member-owned, not-for-profit cooperative, we owe it to our members to leverage tools like artificial intelligence to deliver on our value proposition,” says True at FORUM. “We are on a prudent path to take advantage of artificial intelligence while at the same time protecting the cooperative.”

Kondapalli at Greater Texas reinforces that broader takeaway, noting governance must be “principles-based, risk-calibrated, and continuously refined,” rather than static in a rapidly evolving environment.

In the end, a good AI policy is not defined by how much it restricts, but by how well it enables. The credit unions getting this right are not necessarily the ones moving the fastest — they’re the shops building governance that can keep up.

“The most effective AI policies are not fear-based,” Kondapalli says. “They balance protection with empowerment.”

April 6, 2026