
Fighting AI with AI:
How Lenders Can Stay on the Offense Against Generative Fraud

Artificial intelligence has transformed what’s possible in mortgage fraud, and not in a good way. The same technology powering productivity tools and automation across the industry is now being weaponized to fabricate documents, clone identities, and intercept transactions at a scale and sophistication that legacy detection systems were never built to handle.
The challenge for lenders is no longer just identifying fraud. It’s keeping pace with an adversary that is evolving faster than traditional controls can respond, and now, it’s also about meeting the expectations of regulators who are watching closely.

The Regulatory Stakes Just Got Higher

On April 13, 2026, Fannie Mae released new AI governance guidelines that take effect August 6 for all sellers and servicers of loans it guarantees, joining Freddie Mac, which implemented similar requirements in March. The message from the GSEs is unambiguous: AI use in mortgage lending must be safe, legal, ethical, and aligned with agency expectations.
The new Fannie Mae requirements are sweeping. Lenders must establish internal AI governance policies, designate an oversight official responsible for annual compliance reviews, deliver regular AI training to loan officers and relevant staff, and demonstrate active risk management aligned with their own tolerance levels. Fannie Mae’s guidance puts it plainly: “The pace of innovation brings heightened responsibility. As AI/ML models grow more complex and more deeply embedded in critical processes, seller/servicers must ensure these technologies are deployed safely, legally, ethically and in alignment with Fannie Mae’s expectations.”
Perhaps most significantly for the vendor community, Fannie Mae explicitly states that lenders will be held responsible for noncompliant AI use by their subcontractors and vendors and mandates appropriate supervision of those providers. This is not a future risk. It is a current compliance obligation with a hard effective date, and lenders who have not yet taken stock of how AI is being used across their operations, including by the partners and vendors they rely on, are already behind.

The AI Fraud Threat Is Already Here

The regulatory urgency reflects a fraud environment that has already reached alarming levels. In a recent Indecomm webinar, “QC as an Early Warning System,” featuring GenWay’s Chief Risk and Compliance Officer Alicia Gazotti, fifty-one mortgage QC professionals were asked to identify their top risk concerns for 2026. The results were striking.
AI-generated document fraud ranked as the single highest concern by a wide margin, with traditional mortgage fraud and AI synthetic identity fraud close behind. Regulatory compliance and rising delinquency rates rounded out the top five. For those of us who have been watching the industry for a while, the results were not surprising, but they were clarifying. AI-driven fraud is not an emerging concern on the horizon. It is the defining risk of right now.
Generative AI could drive U.S. fraud losses to $40 billion by 2027, growing at 32% annually. According to Cotality’s 2026 data, 1 in 118 mortgage applications already shows fraud indicators. Behind those numbers is a new generation of AI-driven fraud that every lender needs to understand.
Generative AI can now produce documents that are virtually indistinguishable from legitimate ones, defeating human reviewers and legacy fraud systems alike.
As Alicia Gazotti, Chief Risk and Compliance Officer at GenWay Home Mortgage, put it during a recent Indecomm Fireside Chat:
“AI-generated documents – they have the ability to produce pay stubs, W-2s, bank statements, all sorts of documentation, and it’s really become more difficult for humans to detect nowadays. Agencies are finding different ways to use their AI to detect fraud and using it as a countermeasure to combat mortgage fraud. So they’re a little bit ahead of the game.”
Fraudsters are pairing these fabricated documents with shell LLCs and fictitious business entities to create a seemingly credible paper trail, making income fraud harder than ever to catch at the origination stage.
Beyond documents, AI is being used to clone audio and video calls, impersonate borrowers and lender staff, and construct synthetic identities by blending real and fabricated personal data. These impersonations are then used to authorize wire transfers, bypass identity verification, and misdirect funds, often before anyone realizes something is wrong. Deepfake technology has made AI-assisted wire fraud one of the most financially devastating threats in real estate transactions today, with funds that are not secured within the first 24 hours unlikely to ever be recovered.
Agencies have taken notice. They are now deploying their own AI post-closing to detect document fraud, which means lenders whose QC programs are not built to catch AI-generated fraud pre-funding face not just financial losses, but growing repurchase risk and the very compliance exposure the new GSE guidelines are designed to address.
Brian Margulies, Vice President of Operations at Indecomm Global Services, sees it firsthand:
“The old days, you could kind of tell if someone was cut and pasting: the numbers don’t add up, the font’s crooked, you could eyeball some of those. Now, with what can be generated, having a robust QC program, particularly re-verifying information on the front end during origination, is critical.”

Staying on the Offense: IDXGenius and AuditGenius

Indecomm’s Genius product suite was built with AI at its core, designed to detect what human review and legacy systems miss and to keep pace with a fraud landscape that does not stand still. Critically, Indecomm’s AI tools are developed and deployed with governance, transparency, and lender accountability in mind, aligned with the kind of trustworthy AI frameworks Fannie Mae now requires.
In the QC fireside chat, Gazotti captured the urgency well:
“I think at some point, the traditional vendor tools are going to become a little more obsolete. We need to think a little bit more outside of the box, because it’s going to start to get adopted by other agencies.”
IDXGenius | ai brings AI-powered identity and document verification to the front end of the loan process. By analyzing document metadata, flagging template inconsistencies, and cross-referencing data points that AI-generated fakes frequently get wrong, IDXGenius | ai helps lenders intercept fraudulent submissions before they move deeper into the pipeline. It is a proactive layer of defense at the point where AI fraud is most effectively stopped, before origination.
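To make the idea of metadata-based screening concrete, the kind of check described above can be sketched as a simple rules pass over extracted document metadata. The field names, producer watchlist, and red-flag rules below are illustrative assumptions for this article, not IDXGenius | ai's actual detection logic:

```python
# Illustrative sketch only: a rules-based screen over extracted document
# metadata. Field names and red-flag rules are hypothetical assumptions,
# not the actual logic of any vendor product.
from datetime import datetime

# Hypothetical watchlist of PDF "Producer" strings associated with
# generative tooling rather than legitimate payroll or banking software.
SUSPECT_PRODUCERS = {"unknown-generator", "html-to-pdf-bot"}

def metadata_flags(meta: dict) -> list[str]:
    """Return a list of red flags found in one document's metadata."""
    flags = []
    created = meta.get("created")
    modified = meta.get("modified")
    # A modification timestamp earlier than the creation timestamp is
    # internally inconsistent and worth a human look.
    if created and modified and modified < created:
        flags.append("modified-before-created")
    producer = meta.get("producer", "")
    if producer.lower() in SUSPECT_PRODUCERS:
        flags.append("suspect-producer")
    if not producer:
        flags.append("missing-producer")
    # A bank statement or pay stub whose stated period postdates the file's
    # own creation time is another common fabrication tell.
    stated = meta.get("stated_statement_date")
    if stated and created and created < stated:
        flags.append("document-predates-stated-period")
    return flags

# Example: a file "modified" four days before it was created, with no
# producer string, trips two flags and gets routed to manual review.
suspect = {"created": datetime(2026, 1, 5), "modified": datetime(2026, 1, 1)}
print(metadata_flags(suspect))
```

A production system would layer many more signals (template geometry, font embedding, pixel-level artifacts) on top of simple rules like these; the point of the sketch is that metadata inconsistencies are cheap to check at intake, before a document moves deeper into the pipeline.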
AuditGenius brings that same AI-driven intelligence to the QC process, enabling lenders to identify patterns, surface anomalies, and conduct more consistent and defensible reviews across their loan portfolio. As agencies increasingly use AI to scrutinize loans post-closing and as Fannie Mae’s new guidelines demand documented and auditable AI governance, AuditGenius helps lenders align their internal QC posture with that same level of rigor. Findings are documented, trends are visible, and the organization is positioned to demonstrate the kind of thoughtful, evidence-based oversight regulators now expect.
Together, these tools reflect a simple but essential principle: you cannot fight AI fraud with manual processes, and you cannot satisfy AI governance requirements with informal ones.

Building an AI-Ready Fraud Defense

Technology is the foundation, but a complete AI fraud defense requires the right organizational posture around it. Under the new Fannie Mae guidelines, lenders must designate an internal AI overseer and conduct annual reviews of their AI policies, which means AI governance can no longer be delegated informally or addressed reactively. It needs a named owner, a documented framework, and a regular review cycle.
QC calibration needs to account for AI-specific anomalies such as metadata irregularities, font inconsistencies, and recently formed LLCs used to fabricate employment, and detection checklists should be updated continuously as fraud techniques evolve. Access controls matter too. AI-powered impersonation is most effective when authentication is weak, and multi-factor authentication and biometric verification add friction that synthetic identities and deepfakes cannot easily overcome.
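As a minimal illustration of one such calibration rule, a QC checklist item for recently formed employer entities can be expressed as a dated threshold check. The 180-day window and field names below are assumptions for illustration, not a prescribed or agency-mandated threshold:

```python
# Hypothetical QC calibration rule: flag income documentation tied to an
# employer entity registered shortly before the loan application.
# The lookback window is an illustrative assumption; lenders would tune
# it to their own risk tolerance.
from datetime import date, timedelta

RECENT_ENTITY_WINDOW = timedelta(days=180)  # assumed tolerance

def employer_entity_flag(llc_registered: date, application_date: date) -> bool:
    """True if the employer entity was formed within the lookback window."""
    return application_date - llc_registered <= RECENT_ENTITY_WINDOW

# Example: an LLC registered two months before the application is flagged
# for re-verification; a six-year-old entity passes this particular check.
print(employer_entity_flag(date(2026, 1, 1), date(2026, 3, 1)))
print(employer_entity_flag(date(2020, 1, 1), date(2026, 3, 1)))
```

Rules like this are not verdicts; a flag simply routes the file to front-end re-verification of employment, which is where fabricated shell-entity income is most effectively caught.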
Perhaps most importantly, fraud awareness needs to be embedded into the organization’s culture. Fannie Mae specifically requires regular communication and training on AI policies for all personnel who work with these technologies. Staff who understand how AI fraud works and how their organization’s AI tools are governed are a meaningful line of defense. That same expectation now formally extends to vendors: lenders are responsible for ensuring their QC and technology partners are operating within compliant, trustworthy AI frameworks, making vendor selection a governance decision as much as an operational one.

What Lenders Should Ask Their Vendors

With lenders now formally accountable for how their vendors use AI, it is worth having an open conversation with your QC and technology partners about where they stand. Simple questions go a long way: What AI tools are part of your process? How do you stay current with GSE policy changes? How do you handle it when something does not work as expected?
The goal is not to audit your vendors but to make sure you are working with partners who are thinking about these questions the same way you are. The right QC partner should be able to speak openly about how their tools work, how they are kept current, and how their practices support your agency readiness. That kind of transparency is what a good partnership looks like, and increasingly, it is what the agencies expect to see.

The Bottom Line

Generative AI has permanently changed the fraud threat landscape, and the GSEs have responded with governance requirements that make AI oversight a formal compliance obligation. The lenders best positioned to protect their portfolios and their agency relationships are those who treat AI fraud defense and AI governance as two sides of the same strategic priority.
IDXGenius and AuditGenius were built for exactly this environment: AI-powered tools, transparently governed, designed to keep lenders on the offense. In the fight against AI-powered fraud, the best defense is a smarter, more accountable offense.
Sources: Fannie Mae AI Governance Guidelines (April 2026); Cotality (2026); Indecomm fraud trend analysis