Artificial intelligence is disrupting financial services even faster than the internet did.
Generative AI tools for both member-facing applications and back-office processes are in the hands of employees right now, raising the stakes for governance, compliance, and smart use policies that don’t hinder innovation or competitiveness. That means today’s credit union leaders must balance moving quickly to unlock AI’s value with putting enough governance and guardrails in place to mitigate risk.
Credit unions across Illinois, Indiana, Texas, Washington, and beyond are putting AI to work while building real-world strategies to govern it as they go. Read on to learn about their AI use cases and dive into how they’re approaching governance issues.
This two-part series features insights from BCU, CEFCU, FORUM Credit Union, Greater Texas FCU, University FCU, and WSECU.
Regular AI Ideation Sessions

Kayvee Kondapalli has been CIO of Greater Texas Federal Credit Union ($957.3M, Austin, TX) for the past six years. He has nearly 25 years of credit union technology experience.
Kondapalli says Greater Texas has begun testing AI applications, including Microsoft and Google chatbots, although nothing is yet live. The credit union has partnered with a vendor to deploy an AI-based website chatbot and a contact center agent to assist members more effectively.
Staff members are already using tools like ChatGPT and Microsoft Copilot to streamline tasks such as document creation, data analysis, and decision-making. The veteran technologist says his shop has also launched ideation sessions with management to identify future use cases and ensure compliance with AI policies.
What steps has your credit union taken to establish clear, responsible AI governance and policy frameworks, and how are you ensuring ethical and compliant adoption across departments?
Kayvee Kondapalli: We have a set of AI use guidelines. All employees have been trained and must participate in monthly AI courses to keep current with tech changes and our policies. Our senior management team discusses this topic frequently, weighing pros and cons every time a new tool is requested or talked about on the internet.
Greater Texas understands the benefits of AI, yet we're careful about trusting and adopting it. We've bolstered content filtering to block generative AI sites except those approved, and requests for access are reviewed by IT leadership, our cybersecurity officer, the CIO, and our technology steering committee as needed before we give the green light.
We regularly evaluate AI use cases in the credit union and financial services industry through reading online articles and participating in virtual and in-person generative AI-specific events. We also hold regular AI ideation sessions with middle management to explore new ways to possibly use the technology.
For example, we currently have a line of business tinkering with developing a chatbot of sorts to aid with a recurring task, and another department is testing an interactive report development tool.
How are you identifying and addressing “shadow AI” use within your organization, and what safeguards are in place to manage risks?
KK: We are committed to using AI safely and ethically. Employees are thoroughly trained in our AI policies and receive ongoing education about generative AI and which tools are approved for use within the credit union.
We use content filtering monitors to govern the use of approved generative AI tools. And to stay ahead of shadow use, we have regular open discussions within the executive team to explore new ways each department could use AI to improve efficiency.
What role have your executives and the board played in shaping your AI governance strategy, and how do you communicate its importance across the enterprise?
KK: As ChatGPT began picking up steam, we saw what was coming and wanted to start leading with education and governance in this area before it became commonplace in the workplace.
Our cybersecurity officer collaborated with the head of marketing and together they developed a set of AI use guidelines. These were presented to the technology steering committee, made up of mostly senior management, including our CEO. These guidelines are now an official part of our employee handbook.
Given the newness, exponential evolution, and rapid adoption of AI, we felt it was critical to be on the leading edge of governing how AI is used in our credit union. AI is almost like the internet born again; the technology has that profound an impact.
AI As A Strategic Asset

John Orton joined University Federal Credit Union ($4.2B, Austin, TX) as vice president of enterprise risk management in February 2022. There, he oversees the fraud, collections, legal, facilities, and compliance areas.
Orton says UFCU is embedding AI into its digital strategy to become more data-driven and member-focused, using advanced analytics to personalize experiences and generate actionable insights. He says such tools help predict member needs and improve service delivery across all platforms.
UFCU is piloting AI-driven solutions that automate operations, support employee decision-making, and improve service efficiency. An ongoing focus is expanding AI use responsibly through innovation and strategic partnerships.
What steps has your credit union taken to establish clear, responsible AI governance and policy frameworks, and how are you ensuring ethical and compliant adoption across departments?
John Orton: UFCU is among the early credit unions to formalize an AI policy, reflecting our proactive stance on responsible innovation and data stewardship. We regularly review our internal framework to ensure alignment with industry best practices and regulatory expectations. We designed that framework to guide ethical use of AI in ways that protect member trust and organizational integrity.
We’re advancing our data and AI strategy by building a modern, scalable data platform and fostering a culture of responsible innovation. We strive to empower employees with the tools and training needed to leverage data and AI for personalized member service and operational efficiency.
By automating routine tasks and streamlining processes, our goal is to enable teams to focus on delivering meaningful experiences. Our strategy is guided by continuous improvement, transparency, and a commitment to measurable impact for members and the organization.
How are you identifying and addressing “shadow AI” use within your organization, and what safeguards are in place to manage risks?
JO: UFCU prioritizes education and clear communication to guide ethical AI adoption. We have controls in place to protect member data and prevent unauthorized sharing, and we are continuously evaluating our governance framework to address emerging risks.
As our AI maturity grows, we plan to enhance our monitoring capabilities to ensure compliance and support responsible innovation across all departments. We’re committed to continuous improvement as the AI landscape evolves.
What role have your executives and the board played in shaping your AI governance strategy, and how do you communicate its importance across the enterprise?
JO: UFCU’s senior leadership and board have set a bold vision to use data and AI as strategic assets in our shift to a member-centric, digital-first organization. Their support, along with our cross-functional AI committee, ensures our approach aligns with our mission to deliver personalized, proactive member experiences and empower employees with actionable insights.
AI governance is key to responsible innovation and long-term success. We ensure every initiative aligns with our values, regulatory standards, and ethical commitments. We’re building a culture of data stewardship and continuous learning, equipping employees to use AI tools that automate routine tasks, boost efficiency, and deepen member engagement.
Through education, clear policies, and leadership support, we aim to help teams use data and AI to drive operational excellence and personalized service.
Just Another Technology

Shawn Dunn is vice president of data and analytics at Washington State Employees Credit Union ($5.1B, Olympia, WA). He joined WSECU in June 2024 and has 15 years of experience in credit union business processes and intelligence.
Dunn says AI adoption at WSECU is guided by member service and organizational benefit, with efforts centered on quickly accessing actionable insights. The credit union is enhancing existing platforms and preparing to grow through future vendor collaborations.
Education is also a major priority, with WSECU training staff members on AI tools, use cases, and best practices. According to Dunn, the credit union's most significant rollout so far is Microsoft Copilot, which integrates with Office tools to accelerate strategic decision-making through gen AI-driven insights.
What steps has your credit union taken to establish clear, responsible AI governance and policy frameworks, and how are you ensuring ethical and compliant adoption across departments?
Shawn Dunn: We began with policy, values, and buy-in from the board and senior leadership. In 2024, we formed an AI guidance group made up of leaders from data, IT, and compliance.
One of the group’s first efforts was publishing an organizational AI usage policy with clear guidelines on acceptable use. We also developed communication plans, training opportunities, and a strategy for managing AI technologies.
A key belief we’ve embraced is that AI is just another technology. We already have strong internal processes for evaluating and managing tech, so there’s no need to over-engineer new governance frameworks.
Our top priority now is team readiness. Without it, AI adoption will falter. We've built a clear communication plan that includes leadership vision, training, and success stories to normalize AI at WSECU and increase our team's impact.
At the same time, we’re exploring partnerships where AI supports business objectives. Staying focused on tools that truly serve members and staff helps us avoid chasing the next shiny object that doesn’t move us forward.
How are you identifying and addressing “shadow AI” use within your organization, and what safeguards are in place to manage risks?
SD: Managing sensitive data is foundational in financial services. Our AI acceptable use policy is a great place to start for our team. We’ve also had discussions with leaders across the organization to ensure that we continue to follow established guidelines for onboarding and using new technologies.
I've talked to some peers who decided to block tools like Copilot outright, and that likely increases risk inadvertently. Your teams know the value of these tools, and if you don't provide them in a controlled manner, they'll find ways to use them in a potentially more irresponsible fashion.
What role have your executives and the board played in shaping your AI governance strategy, and how do you communicate its importance across the enterprise?
SD: Like any successful initiative, you need buy-in and alignment at the top to gain employee confidence and adoption. WSECU’s senior leaders have been highly engaged since the onset of our AI efforts. In addition to representation on the AI guidance group, senior leadership is integral to communicating the vision of how AI elevates our efforts and improves the member experience.
They’re also sharing their own AI learning journeys, mirroring for the entire staff that we’re all learning together how to use these tools. Everything ties back to our organizational capabilities and those key strategic objectives established in the business plan.
AI governance is not just a compliance exercise; it’s a strategic requirement. I encourage my peers to find governance practices already implemented in their own organizations. There’s no need to create redundant frameworks to manage a new capability like AI. The focus should be on layering in additional considerations within established governance practices, such as how you map, measure, and monitor the impacts of AI-based tools.
Interviews have been edited and condensed.