Artificial intelligence is disrupting financial services faster than the internet did.
Generative AI tools for both member-facing applications and back-office processes are in the hands of employees right now, raising the stakes for governance, compliance, and smart use policies that don’t hinder innovation or competitiveness. That means today’s credit union leaders must balance moving quickly to unlock AI’s value with putting enough governance and guardrails in place to mitigate risk.
Credit unions across Illinois, Indiana, Texas, Washington, and beyond are putting AI to work while building real-world strategies to govern it as they go. Read on to learn about their AI use cases and dive into how they’re approaching governance issues.
Enjoy reading all of the insights across this two-part series, or click to skip to insights from: BCU, CEFCU, FORUM Credit Union, Greater Texas FCU, University FCU, WSECU.
Clear, Simple Guidelines

John Sahagian has been with BCU ($6.2B, Vernon Hills, IL) for 25 years. He became the suburban Chicago shop’s vice president and chief data officer in July 2018.
Sahagian says BCU is actively integrating gen AI within existing platforms for departments like HR, marketing, and software development. These tools, often provided through partnerships, enhance efficiency and align with AI roadmaps from trusted vendors.
BCU also has invested heavily in Salesforce and Microsoft platforms, both of which offer powerful generative AI tools within secure frameworks. Additionally, the credit union is providing AI training and resources to ensure employees can work creatively and effectively alongside machine intelligence.
What steps has your credit union taken to establish clear, responsible AI governance and policy frameworks, and how are you ensuring ethical and compliant adoption across departments?
John Sahagian: Gen AI clearly holds massive potential, but it also brings entirely new risks. Instead of shutting everything down, we chose to embrace the opportunity and quickly rolled out a clear, simple AI acceptable use standard.
This guideline spelled out the do’s and don’ts in plain language and helped people understand the risks involved. Gen AI tools are accessible to everyone, which makes them both a strength and a challenge.
How are you identifying and addressing “shadow AI” use within your organization, and what safeguards are in place to manage risks?
JS: Our security team has been very proactive in scanning for, and even blocking, unauthorized AI activity. We don’t do this to discourage AI use but rather to ensure every tool in use has been reviewed.
Furthermore, we make available to all employees permitted gen AI tools that operate inside our security framework and keep prompts and responses protected. So, anyone who wants to experiment with AI absolutely can within the permitted tools.
What role have your executives and the board played in shaping your AI governance strategy, and how do you communicate its importance across the enterprise?
JS: As soon as ChatGPT hit the scene, it was apparent these new AI models and tools would be game changers. Our board gave us a dual mandate of, “there’s new risks here, you better be careful,” and “there’s a lot of value here, you better not lose pace!”
We’re fortunate our board members see where this is going and are as enthusiastic about AI progress as they are about AI defense. We provide them with quarterly updates on the progress of our AI roadmap.
Communication is absolutely essential. This thing we’re trying to govern is constantly changing and moving, so it can feel overwhelming to start building policies and standards. A limited few in your organization will likely read through your AI governance standard, but it’s important every employee knows you have one.
Empowering Employees To Leverage AI Responsibly

Tammie Fletcher has been vice president of HR at CEFCU ($8.1B, Peoria, IL) for the past three years. She has been with the central Illinois cooperative since 1989, starting her career in marketing.
Fletcher says CEFCU formed an internal team led by C-level executives to develop AI guidelines and a policy framework that focus on enabling responsible use of gen AI as well as identifying current use cases and paving the way for future capabilities.
The team identified more than 60 AI use cases at the outset, many already embedded in existing software. These range from basic machine learning applications to advanced gen AI functionalities across the credit union. Employees can also use external generative AI tools like ChatGPT and internal tools like Microsoft Copilot Chat.
What steps has your credit union taken to establish clear, responsible AI governance and policy frameworks, and how are you ensuring ethical and compliant adoption across departments?
Tammie Fletcher: Our cross-functional team created a comprehensive AI policy that defines CEFCU’s approach to responsible AI adoption, explains why we use it, and sets guardrails for development and deployment.
We also launched a generative AI acceptable use policy that sets clear, practical rules for ethical and secure AI usage. Both policies are now official corporate policies, recently approved by the CEFCU board.
We’re finalizing a strategic roadmap under the guidance of our chief officers to ensure sustainable and impactful implementation.
How are you identifying and addressing “shadow AI” use within your organization, and what safeguards are in place to manage risks?
TF: We conducted a comprehensive survey across departments to identify existing AI applications. All employees will be required to complete detailed training to ensure they understand the restrictions on using AI and how to leverage approved tools for secure internal use, helping with tasks such as document writing, content generation, meeting minutes, data analysis and trends, and more.
There will also be technical restrictions placed on access to unapproved AI applications.
What role have your executives and the board played in shaping your AI governance strategy, and how do you communicate its importance across the enterprise?
TF: Our executive leadership has been instrumental in shaping and guiding our AI strategy, ensuring alignment with CEFCU’s mission. They work closely with the AI team they formed to provide ongoing feedback.
We will ensure alignment across the credit union through ongoing training, transparent communication about AI initiatives, and strong leadership support. AI governance is essential to maintaining our members’ trust and ensuring our use of AI technology remains compliant with regulations, internal policies, and ethical standards while staying aligned with CEFCU’s mission and vision.
Our approach empowers employees to leverage AI responsibly to enhance their work while keeping human judgment and fact-checking in all decision-making processes.
Weekly Recaps For Today And Tomorrow

Doug True began his career with FORUM Credit Union ($2.3B, Fishers, IN) as a management trainee in 1988. He was named the Indianapolis-area credit union’s CEO in November 2011.
True says FORUM Credit Union is applying AI across multiple departments, including indirect lending, where AI helps review auto loan contracts for accuracy and compliance. In commercial services, the credit union uses AI to summarize property appraisals efficiently. In marketing, AI tools generate copy suggestions, and the fraud department uses AI to detect patterns relevant to suspicious activity report (SAR) filings. Additionally, robotic process automation is streamlining internal audit processes on large data sets.
What steps has your credit union taken to establish clear, responsible AI governance and policy frameworks, and how are you ensuring ethical and compliant adoption across departments?
Doug True: Our executive team meets regularly to discuss AI, we’ve established a cross-functional team, and we may make a new hire in 2026. This position would help us document governance of AI tools, document usage to avoid duplication of effort, and ensure we’re leveraging existing tools before purchasing new ones.
How are you identifying and addressing “shadow AI” use within your organization, and what safeguards are in place to manage risks?
DT: Our technology team has controls in place for the use of AI tools. We’re actively surveying via technology and social engineering.
What role have your executives and the board played in shaping your AI governance strategy, and how do you communicate its importance across the enterprise?
DT: Governance is happening among our executive team as well as the cross-functional team of employees across the credit union who currently use AI tools. We regularly discuss developments in the AI space at our executive team and board meetings.
We publish a recap each week for our volunteers on what we’re working on at the credit union. This recap often includes how we’re using AI today and how we plan to use it in the future.
AI governance is vital to the protection of member data and intellectual property. We develop our internet banking and mobile app platform internally, so it’s critically important we protect the intellectual property contained in that code set.
Interviews have been edited and condensed.