Each new technology to emerge presents new risks, and artificial intelligence (AI) is no different. But dealing with those threats is a broad responsibility that extends well beyond just the credit union’s IT staff.
As credit unions increasingly leverage AI for tasks such as fraud detection, loan decisioning, and much more, risk managers face the need for proactive measures to address compliance, bias, and cybersecurity threats.
Below, six credit union leaders describe how they’re developing governance frameworks and procedures even as they explore and deploy new use cases. Their goal: to enhance operational efficiency while addressing the vulnerabilities that can emerge if the technology isn’t properly managed.
Here’s what they say:
GREATER TEXAS FCU
Sid Burkins is a vice president and risk mitigation officer at Greater Texas FCU ($949.8M, Austin, TX) and has been working in risk and loss mitigation for the past five years.
What AI applications are in use at Greater Texas?
Sid Burkins: We’re currently using Zest AI for loan decisioning and TruValidate for identity verification.
What do you see as the biggest risks around artificial intelligence?
SB: Artificial intelligence is a double-edged sword. AI is a great tool that can be used to detect fraud, however, it can also be used to perpetuate fraud.
Fraudsters can manipulate AI algorithms and models to generate realistic-looking fake data, such as videos, images, documents, emails, names, phone numbers, voice cloning, and addresses. These realistic fakes can then be sold for profit or used in the creation of fake and malicious accounts on unsuspecting platforms, and in many instances can be used to pass through a company’s identity verification and security procedures.
Credit unions can counter the rising threat of AI in multiple ways, for example with SMS alerts and pop-ups when members access apps and online accounts. But awareness training for staff and members will remain a key component in mitigating exposure to AI-enabled fraud.
What are you doing about those risks?
SB: We’re investing in staff development and training, and in AI and machine-learning technologies designed to help identify and assess commonly encountered fraud types, which is critical to developing a comprehensive view of the fraud landscape.
GROW FINANCIAL FCU
Chase Clelland joined Grow Financial FCU ($3.5B, Tampa, FL) in 2013 and has been SVP for risk management since 2022.
What AI applications are in use at your credit union?
Chase Clelland: We’re in the infancy of applying AI at our credit union. We’ve opened up use of Bard/Gemini, ChatGPT, QuillBot, and Copilot and other Microsoft products. We’re also evaluating several others, including AbleAI and Pienso.
What do you see as the biggest risks around artificial intelligence?
CC: As I alluded to above, exposure of personally identifiable information (PII) is the biggest risk. We’re seeing new ways emerge to use, for instance, ChatGPT with security controls on the back end that engage with a private cloud. That could be a great way to run large language models (LLMs) in a safe environment, but it’s only been out for a few months and it’s still unproven.
As our credit union’s risk officer, that’s a focal point of my due diligence around LLMs. So is cost modeling. Anytime you get involved with tokenization of data, those minuscule basis points on a dollar really add up.
And then there’s the risk of malicious attacks. There already have been hundreds of instances of malicious code discovered embedded in downloaded LLMs. The bad actors are inserting themselves into these new technologies.
It reminds me of when Google first emerged. AI is going to naturally affect the way we work, but we’ve got to make those new technologies safe as we embrace them. We’re trying to balance the scales of justice here, if you will.
What are you doing about those risks?
CC: We’ve formed a cross-functional AI governance team, adopted a governance charter and policy, and we’re now in the process of ideating use cases to bring these tools to a greater population around Grow. Making sure we don’t have account name structures or anything else that could help identify a member is at the crux of what we’re doing, and we’re making sure that whatever tools we use are either on premises or in our cloud instance. Nothing goes external.
Right now, about 50 of our nearly 600 people are playing around with and have access to the tools. That provides us with the opportunity for testing and learning. We also think of our governance charter as containing four pillars with a group focusing on each: training, ideation and use cases, security and safety, and compliance and ethics.
DUPACO COMMUNITY CREDIT UNION
Todd Link is chief risk officer at Dupaco Community Credit Union ($3.2B, Dubuque, IA) and has led Dupaco’s risk management program for the past 10 years.
What AI applications are in use at your credit union?
Todd Link: We use AI functions much like many other credit unions do. That includes fraud monitoring and detection, automated phone assistance, and conversational AI, which will soon support member inquiries in foreign languages.
Certainly, there is tremendous opportunity for all credit unions to use AI to better understand and serve members with tailored offers, appointment scheduling, etc. Another area of growth is in generative AI, where applications like ChatGPT and Microsoft Copilot can be used to ideate or assist in content development. It’s an exciting time to leverage this technology for basic account inquiries, freeing our team members to spend more time with members on their complex financial well-being questions and needs.
What do you see as the biggest risks around artificial intelligence?
TL: As with any technology, the risks include noncompliance with regulations, programming bias that must be guarded against, and inaccuracy in any information the AI provides.
Another consideration is privacy and protecting all private and proprietary information. We work in a very precise and trust-driven industry. It’s incumbent on us to ensure that any tool we utilize to serve members has high accuracy and service reliability. AI also needs to fit within organizational strategy as well as business plans.
What are you doing about those risks?
TL: I believe a great place to start is to have a good AI policy in place, as well as procedures around any technology deployment. Use-case risk assessments also provide value to ensure the technology fits within the credit union risk profile and the existing business plans.
GOLDEN 1 CREDIT UNION
Jay Tkachuk joined Golden 1 Credit Union ($21.1B, Sacramento, CA) in August 2022 as the cooperative’s EVP and chief digital officer.
What AI applications are in use at your credit union?
Jay Tkachuk: At Golden 1 Credit Union, we use machine learning for loan decisioning, various flavors of analysis, and other tasks. We use natural-language processing and understanding for call analysis, and we’re currently experimenting with generative AI for productivity augmentation.
We continue to look for new ways to leverage these technologies for competitive advantage, and thus must be prudent, pragmatic, and methodical in how we compete in this landscape.
What do you see as the biggest risks around artificial intelligence?
JT: With machine learning platforms specifically, the greatest risks are the built-in yet unknown biases. These technologies are very complex, and those who truly understand their inner workings and can extract the maximal value from them will be highly valued assets for financial institutions looking to continue evolving these models.
What are you doing about those risks?
JT: We’re investing in internal resources with hands-on experience with such technologies, pursuing a pragmatic, focused approach to enable a narrow, well-defined set of capabilities. As we learn through this process, we’ll expand the AI applications as the member and operational needs evolve.
UNIVERSITY FCU
John Orton is vice president of enterprise risk management at University FCU ($4.0B, Austin, TX). He joined UFCU two years ago after 15 years as a CFO at another large Texas cooperative.
What AI applications are in use at your credit union?
John Orton: We currently use Azure Machine Learning, Microsoft Copilot and Teams, ChatGPT, Base AI, and many more. Most of our security tools use AI, for example, as do our digital account-opening and commercial LOS and digital banking platforms.
What do you see as the biggest risks around artificial intelligence?
JO: On offense, we could lag other financial institutions and fail to realize the benefits of AI in better understanding and solving the needs of our members/customers. We could also miss out on efficiency and effectiveness opportunities to do our work better, faster, and cheaper, and in turn miss out on growth opportunities.
On defense, there are much-increased risks around cybersecurity and fraud, bias in decision-making, and legal copyright protections.
What are you doing about those risks?
JO: We’re striving to be an unbiased and supportive resource for decision-making in these areas, and to make sure we are not all gas and no brakes on AI, or vice versa. On offense, we’re looking at deploying AI chatbot solutions to answer member questions better and more quickly. On defense, we’re actively looking at AI-powered solutions to better ferret out fraudulent transactions in real time.
WRIGHT-PATT CREDIT UNION
Jen Ogden is chief risk officer at Wright-Patt Credit Union ($8.3B, Beavercreek, OH). She joined the Dayton-based credit union in 2010 and has been in her current role since September 2022.
What AI applications are in use at your credit union?
Jen Ogden: The credit union recently deployed conversational AI through chatbot technology on our website. Members can interact with “Patty,” our virtual assistant, to ask questions or easily locate information anytime. This not only enhances the member experience, but those interactions also help us understand how members prefer to engage in the digital channel.
Behind the scenes, we’ve also piloted generative AI to power internal reference material for our partner-employees. We’re excited about this opportunity as well because we believe it can enhance efficiency and help partner-employees quickly locate information as they serve members.
What do you see as the biggest risks around artificial intelligence?
JO: Data security, compliance, and algorithmic bias are at the forefront of AI-related risks the credit union monitors. This is especially important as AI technology becomes more prevalent in vendor solutions and the tools we use.
Fraud and cybersecurity risks are also top of mind as language models become easier for bad actors to use and exploit. More broadly, AI technology is evolving quickly, and open-source AI is expected to accelerate that pace. The velocity of change itself will present risk as credit unions and the industry work to update controls and framework standards.
What are you doing about those risks?
JO: The credit union has established internal practices governing the use of AI, and we manage risk through a series of procedures, risk assessments, model validations, and our enterprise security program. We’re also working to enhance our model and vendor risk management functions to support the changing environment.
While AI introduces new risks, managing those risks effectively also provides new opportunities to positively impact our credit union and our members.
These interviews were edited and condensed.