How Does BCU Put AI Policy Into Practice?

When it comes to AI, governance matters as much as innovation. Learn how BCU built an AI practice that prioritizes data integrity, risk management, and real-world decision-making.

John Sahagian has spent more than two decades at BCU ($6.2B, Vernon Hills, IL), working across a range of roles before stepping into his current position as chief data officer in 2018. During his 26-year tenure with the cooperative, he has held responsibilities in member services, loan sales, collections, marketing, and member intelligence.

Today, Sahagian leads BCU’s digital transformation and credit union AI strategy. His work focuses on advancing machine learning, strengthening data analytics capabilities, and modernizing the organization’s corporate technology infrastructure.

“Ultimately, it’s my job to make sure BCU manages and trusts its data, understands what that data is telling us, and can take meaningful action based on that intelligence,” he says.

As the leader responsible for BCU’s data strategy, Sahagian sees AI as both an opportunity and an obligation.

I grow increasingly convinced of an inevitable future in which our workforce will be both human and machine, leveraging what each does best to the benefit of our members.

John Sahagian, Chief Data Officer, BCU

How is BCU using AI, particularly generative AI? How much effort is it putting toward growing that use?

John Sahagian: We’re focusing on three areas when it comes to gen AI. First, we’re empowering teams across BCU to safely use AI-driven features many of our existing tools and partners offer. We’ve approved dozens of these efficiency-generating features spanning many areas of the credit union, including HR, marketing, and software development. Many of our key partners have exciting AI roadmaps that we hope to benefit from.

Second, we’ve invested in using the Salesforce and Microsoft platforms in a big way. They, of course, have made massive investments into powerful gen AI capabilities within strong trust frameworks. We believe we’re well-positioned to leverage these capabilities to be a leader in creating next-level member experience and value.

And third, we believe it’s important that employees can prepare themselves for and continue to thrive in the new age of AI. So, we give them access to training and powerful AI tools, which they’re using in creative ways to build efficiency in their respective roles.

I grow increasingly convinced of an inevitable future in which our workforce will be both human and machine, leveraging what each does best to the benefit of our members. We want to make sure our employees are ready for and able to thrive in this exciting future.


What steps has BCU taken to create clear, responsible AI governance and policy frameworks?

JS: Gen AI clearly holds massive potential, but it also brings entirely new risks. It would have been far easier to lock everything down and not allow any AI use, but we decided that would risk stifling an unprecedented opportunity for innovation.

Instead of shutting everything down, we chose to embrace the opportunity and quickly rolled out a clear, simple AI acceptable use standard following the ChatGPT release. This super-short, common-sense guideline spelled out the do's and don'ts in plain language and helped people understand the new tools and risks involved. Gen AI tools are accessible to everyone, which makes them both a strength and a challenge.

What came next?

JS: Shortly thereafter, we pulled together an AI leadership committee, with representation from security, legal, business, digital, data, and IT. This group developed and adopted AI guiding principles, an AI governance standard, and a fast-track process by which business teams could seek permission to use AI-based features and systems.

Although this governance framework continues to evolve, it's already essential in giving us the confidence and discipline to shift from playing AI defense to going on AI offense.

What AI initiatives are currently underway at BCU, and what progress are you seeing?

JS: Like everyone else, we’re still learning and taking it one step at a time.

We’re building some simple internal AI bots for a variety of tasks. For example, we have a bot that generates high-quality marketing content in our brand voice, mimics the style of past marketing content, and is guided by several key policies and regulations. Our marketing team is finding this to be a huge time saver for communication development.

Second is a bot that generates SQL code from plain-language prompts and is built to work with our data model. Analysts across BCU who aren't comfortable writing queries from scratch love it, and even our experienced SQL developers use it to work more efficiently.
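A bot like this typically works by grounding the model in the organization's schema before each request, plus a cheap safety check on whatever SQL comes back. The sketch below illustrates that pattern only; the table names, rules, and function names are hypothetical, and the interview does not describe BCU's actual implementation.

```python
# Illustrative sketch of a plain-language-to-SQL helper.
# Schema, rules, and names are assumptions, not BCU's data model.

SCHEMA = """
members(member_id, joined_on, branch_id)
loans(loan_id, member_id, product, balance, opened_on)
"""

RULES = (
    "Generate a single read-only SELECT statement. "
    "Never produce INSERT, UPDATE, DELETE, or DDL. "
    "Use only the tables and columns listed in the schema."
)

def build_sql_prompt(question: str) -> str:
    """Combine the analyst's question with the schema and rules
    so the model answers in-context instead of guessing tables."""
    return (
        "You translate questions into SQL.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Rules: {RULES}\n"
        f"Question: {question}\n"
        "SQL:"
    )

def looks_read_only(sql: str) -> bool:
    """Cheap post-check before any generated query is executed."""
    banned = ("insert", "update", "delete", "drop", "alter", "truncate")
    lowered = sql.lower()
    return lowered.lstrip().startswith("select") and not any(
        word in lowered for word in banned
    )
```

The post-check matters as much as the prompt: even a well-grounded model can emit a destructive statement, so generated SQL should be screened (and ideally run under a read-only database role) before execution.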

What project do you think is particularly interesting and promising?

JS: Probably the most exciting gen AI use case we’re pursuing today is building autonomous AI agents that interact directly with our members. In fact, our members are already interacting with LLM-based agents via outbound calls and our web chat experience for select topics.

CU QUICK FACTS

BCU

HQ: Vernon Hills, IL
ASSETS: $6.3B
MEMBERS: 368,252
BRANCHES: 47
EMPLOYEES: 770
NET WORTH: 10.2%
ROA: 0.72%

The initial results are impressive, and we’re excited to extend these scalable agents to sales use cases and across more channels like voice and SMS.

Now, effective monitoring and controls are essential to ensure agents are providing high-quality experiences, not getting confused, and not wandering outside the guardrails we put in place. These AI agents need to be onboarded, trained, monitored, and performance managed just like any human agent does.
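One common way to implement that kind of monitoring is a per-turn review gate: every agent response is checked against the topics the agent is mandated to handle and escalated to a human when it shows signs of confusion. The topic list, phrases, and outcomes below are illustrative assumptions, not BCU's production controls.

```python
# Sketch of a post-response guardrail for a member-facing AI agent.
# Topics, trigger phrases, and outcomes are hypothetical examples.

APPROVED_TOPICS = {"card activation", "branch hours", "payment due date"}
ESCALATION_PHRASES = ("i'm not sure", "i cannot help", "legal advice")

def review_agent_turn(topic: str, reply: str) -> str:
    """Return 'allow', 'escalate', or 'block' for one agent turn."""
    if topic not in APPROVED_TOPICS:
        return "block"      # outside the agent's mandate
    lowered = reply.lower()
    if any(phrase in lowered for phrase in ESCALATION_PHRASES):
        return "escalate"   # hand off to a human agent
    return "allow"
```

Logging each verdict also gives the agent the kind of performance record a human agent would have, which supports the onboard/train/monitor cycle described above.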

How are you addressing the “shadow use” of AI and ensuring ethical, compliant adoption across departments?

JS: Our security team has been very proactive in setting up scanning for unauthorized AI usage and even blocking unauthorized AI activity. We don’t do this to discourage AI use but to ensure all tools have been reviewed.

Furthermore, we make gen AI tools that operate inside our security framework available to all employees, ensuring prompts and responses are protected. So, anyone who wants to experiment and use AI absolutely can within these permitted tools.

What role has senior and executive leadership and board engagement played in shaping your AI policies?

JS: As soon as ChatGPT hit the scene, it was apparent these new AI models and tools would be game changers. Our board gave us a dual mandate of, "there are new risks here, you better be careful," and "there's a lot of value here, you better not lose pace!"

We’re fortunate our board members see where this is going and are as enthusiastic about AI progress as they are about AI defense. We provide them with quarterly updates on the progress of our AI roadmap.

Our leadership team has also been extremely involved in our advancement of AI. Our lending, operations, and compliance leaders are all asking how AI will help us grow, elevate our member experience, build a more scalable operating model, and make better, faster decisions.

When our business leaders are the ones asking these questions and shaping our AI roadmap, our chances of success go through the roof.

How do you communicate the importance of AI governance across the enterprise?

JS: This thing we’re trying to govern is constantly changing and moving, and it can feel overwhelming to start building policies and standards. But it’s essential that employees, partners, and regulators have confidence that someone in the credit union is following what’s going on and is planning for its control and utilization.

A limited few in your organization will likely read through your AI governance standard, but it’s important that every employee knows you have one.

This interview has been edited and condensed.

March 27, 2026