
Artificial intelligence used to be the stuff of sci-fi movies. We could imagine it as a concept, and maybe accept that it would eventually become reality, but most people didn’t anticipate, as recently as a decade ago, that it would be part of our daily lives today.
AI is part of your daily life, whether you realize it or not.
Here are just a few examples of when AI is making decisions that affect you—often without your knowledge:
- Healthcare triage. AI systems decide which patients get flagged as high-risk, how quickly you’re seen, and what treatments are suggested. Radiology scans, pathology slides, and even mental health screenings are filtered through AI before a doctor weighs in.
- Financial decisions. AI models are determining whether you get approved for a loan, what interest rate you’re offered, what insurance premium you pay, and even whether your résumé is seen by a human recruiter. Many people who get rejected for credit or jobs never even realize that an algorithm made the decision.
- Pricing of goods and services. Dynamic pricing algorithms adjust what you pay for flights, rideshares, hotel rooms, and even groceries and retail products. Two people looking at the same product at the same time can see different prices based on their browsing history, location, and predicted willingness to pay.
- Fraud and security screening. AI controls whether your credit card transaction clears, whether your email lands in someone’s inbox or a spam folder, and whether you get flagged at an airport for increased security. These AI-driven decisions happen invisibly.
- Infrastructure. Believe it or not, AI often manages the power grid, timing of traffic lights, water treatment optimization, and supply chain logistics.
- Voice and text communication. Every time autocomplete helps you finish a text, makes a grammar suggestion, or offers a smart reply, AI is shaping how you express yourself. Over time, this nudges language patterns and can even flatten individual writing styles across millions of people.
- Content and information filtering. The articles, videos, and posts you see on social media, news apps, and even search engines are heavily curated by AI recommendations. AI is quietly shaping your worldview, your political opinions, and maybe even your mood, not only through what it shows you, but through what it doesn’t.
The common thread here is that many of these systems operate without disclosure, so people experience the effects of AI decisions without ever knowing a decision was made.
But in the legal system—specifically, in the courtroom—decisions are everything. The entire outcome of a legal case, which could be life-changing for the involved parties, hinges on a judge’s decisions about what evidence may be admitted and on the jury’s perceptions of how the evidence is presented and what it proves.
Once upon a time, the saying went, “a picture is worth a thousand words.” Jurors could accept that if they saw something in a photograph or video, it must be exactly what happened. Countless cases turned on photo or video evidence, which was considered the gold standard for truth.
So, now, long-held Federal Rules of Evidence have to be applied to videos, photos, and other media used as evidence. The problem is that what used to be reliable can now be made into a “deepfake”... and the jury might not be able to spot it.
What is a deepfake?
A deepfake is synthetic media—typically video, audio, or images—that is created using deep learning techniques (which is why it’s aptly named—it’s a blend of “deep learning” and “fake”) to convincingly depict someone saying or doing something they never actually said or did.
The most common form involves swapping one person’s face onto another person’s body in video, but deepfakes can also include cloned voices, fabricated photos, and real-time facial manipulation in live video calls.
The technology typically relies on generative adversarial networks (GANs) or similar architectures. A GAN pairs two neural networks: one generates fake content while the other evaluates how realistic it appears. The two improve against each other iteratively until the output is difficult to distinguish from real footage.
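To make that adversarial loop concrete, here is a minimal, hedged sketch in Python using PyTorch on toy one-dimensional data. It is not how production deepfake models are built (those are far larger and operate on images or audio), but it shows the generator-versus-discriminator dynamic described above.

```python
# Minimal GAN sketch on toy 1-D data (illustrative only).
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise vectors to fake "samples"
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs a logit scoring how "real" a sample looks
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0        # "real" data: N(4, 1)
    fake = G(torch.randn(64, latent_dim))  # the generator's forgeries

    # 1) Train the discriminator to separate real from fake
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

After enough iterations, the generator’s outputs cluster around the “real” distribution, which is exactly the dynamic that lets deepfake models produce footage the rest of us struggle to flag.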
Deepfakes originated around 2017, when online communities began using this method to insert celebrities’ faces into explicit videos without consent. Since then, the technology has become more accessible and its applications have expanded. Sometimes they are harmless and fun, as in film production, satire, or accessibility tools. But they are often harmful, used for political disinformation, fraud, non-consensual pornography, or identity theft.
Deepfakes are particularly concerning compared to older forms of media manipulation because they are cheap to create, highly realistic (and therefore hard to detect), and scalable. A person with a consumer-grade computer and a couple of reference images or audio clips can produce convincing fakes that would previously have required a professional studio.
Here’s an example: a TikTok account shares deepfake videos featuring a likeness of actor Tom Cruise. If you’re familiar with Cruise, you might think you’re seeing his face and hearing his voice, but you aren’t. Shown side by side with authentic footage, only one of the videos is a deepfake.
Can you spot which is real and which isn’t?
Deepfakes and the legal profession
AI has advanced to the point that a fabricated video, audio clip, or image is nearly indistinguishable from an authentic recording. This is no longer a speculative concern for lawyers; it’s an active, measurable, and accelerating threat to reliable evidence. There were an estimated eight million deepfake files online in 2025, growing at an annual rate of roughly 900%.
In September 2025, in one of the first known instances of its kind, a deepfake led to the dismissal of a civil case in Alameda County, California. The judge dismissed the case after determining that videotaped witness testimony was deliberately fabricated.
Rule 707 (in development)
The federal Advisory Committee on Evidence Rules has proposed Rule 707, which would apply expert-witness reliability standards to machine-generated evidence. Louisiana has already enacted the nation’s first state-level AI evidence verification framework.
Deepfake statistics
Exponential growth in volume and sophistication
Cybersecurity firm DeepStrike reported that the number of deepfake files online grew from 500,000 in 2023 to an estimated eight million in 2025. Another company tracked deepfake incidents, reporting an increase from 42 in 2023 to 150 in 2024, a 257% jump. In the first quarter of 2025 alone, 179 incidents were logged. By the second quarter of that year, a single company tracked 487 discrete incidents; that figure rose to 2,031 in the third quarter.
| Issue | By the numbers | Source |
|---|---|---|
| Deepfake files online (2023 vs. 2025) | 500,000 → 8 million (~900% annual growth) | DeepStrike 2025 |
| Tracked incidents (2023 → Q1 2025) | 42 → 179 (Q1 2025 exceeded all of 2024) | Keepnet / Resemble AI |
| Average business loss per incident (2024) | $500,000 (up to $680,000 for enterprises) | Eftsure / Business.com |
| U.S. AI-facilitated fraud projection | $12.3B (2023) → $40B (2027), 32% CAGR (compound annual growth rate) | Deloitte |
| Human detection accuracy (video) | 24.5% | iScience / SQ Magazine |
| ID verification failures linked to deepfakes | 1 in 20 (5%) | Veriff 2025 |
| Companies with anti-deepfake protocols | 13% | Programs.com 2026 |
| Deepfakes' share of all fraud attacks (2025) | 6.5%, up 2,137% since 2022 | Signicat / Keepnet |
| Insurance professionals concerned | 80% concerned, only 22% use media validation | Attestiv Survey |
| States with deepfake legislation (mid-2025) | 46 states | Ballotpedia |
University at Buffalo professor Siwei Lyu is a leading deepfake researcher. He reports that voice cloning has crossed what he calls an ‘indistinguishable threshold’: a few seconds of audio can now suffice to produce a convincing clone, complete with natural intonation, pauses, and breathing sounds.
Human observers can identify high-quality deepfake videos only 24.5% of the time, which is statistically worse than flipping a coin.
Financial and fraud impact
As you can see in the table above, the financial consequences are staggering. In 2024, businesses lost an average of nearly half a million dollars per deepfake-related incident, and larger enterprises lost even more. The Deloitte Center for Financial Services projects that U.S. fraud losses facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027. By 2026, it’s predicted that 30% of enterprises will no longer consider standalone identity verification and authentication solutions reliable.
How deepfakes threaten dashcam and surveillance evidence
Dashcam and surveillance footage have long been treated as among the most objective forms of evidence in personal injury, auto accident, premises liability, and criminal cases. Unfortunately, that assumption is now outdated. AI tools can seamlessly alter timestamps, license plates, vehicle positions, weather conditions, and the identities of individuals captured on camera. In one example from outside the U.S., a U.K. insurance law firm reported discovering that CCTV evidence displaying the date, time, and vehicle registration had been manipulated using AI to support a fabricated accident claim.
In fact, European motor insurers disclosed in April 2025 that diffusion models had been used in a fraud scheme to inject artificial scratches and cracks into photos of bumpers, inflating average insurance payouts by approximately £13,000 per claim (about $17,345). And thanks to consumer video-generation tools from companies like OpenAI (Sora 2) and Google (Veo 3), nearly anyone can now produce realistic footage, including simulated bodycam and surveillance-style video, in minutes.
Chain of custody issues
Official footage, like video from a police officer’s body-worn camera, is forensically controlled to preserve authenticity because it must follow a chain-of-custody protocol. Dashcams and private surveillance recordings, however, have no standardized verification (such as cryptographic signing, centralized storage, or verified metadata) and no formal chain of custody. They enter the court proceeding through a consumer device with no standardized preservation system, which makes them especially vulnerable to undetected manipulation.
The ‘deepfake defense’
We’ve established that an undetected deepfake can be a big problem in court.
But there’s another way deepfakes can cause trouble: when a party challenges authentic evidence by claiming it’s a deepfake. In one wrongful death lawsuit, Huang v. Tesla, Tesla counsel refused to admit the authenticity of a video that showed Elon Musk making statements about Autopilot safety, arguing it was possible the video could be a deepfake. The court did not allow this tactic and warned that it’s a slippery slope; accepting it would suggest that any famous person could avoid accountability by claiming a recording or image is a deepfake.
An expert who studies media, technology, and democracy worries that if this trend continues, jurors might begin to discount the authenticity of footage that deserves their attention.
If trust in video evidence erodes, it undermines the factual record at the core of the judicial process.
Insurance bad faith and deepfakes
The insurance industry faces two challenges related to deepfakes.
- Fraudulent claims exposure. There have been concerns about policyholders and third-party claimants submitting AI-manipulated evidence, including doctored damage photos, fabricated medical consultations, and altered surveillance footage. These inflate claims or support entirely fictitious incidents, and this type of deepfake abuse could add millions of dollars in operational losses for insurance carriers.
- Bad faith exposure. If an insurer denies a legitimate claim based on unsubstantiated suspicions that evidence was AI-generated, it could face a bad faith lawsuit. A carrier that rejects a valid dashcam recording simply because deepfake technology exists, without conducting forensic analysis or consulting an expert, risks significant bad faith liability and regulatory scrutiny.
While 80% of insurance industry professionals in a recent survey expressed concern about the potential for deepfakes affecting outcomes, only 22% had any form of media validation or fraud prevention systems in place for digital evidence.
This gap between awareness and action creates exactly the type of unreasonable claims-handling that could trigger bad-faith scrutiny.
What’s next for insurance coverage and deepfakes?
Since about 2024, insurance carriers have been rewriting policy language to address AI-generated content. A standard cyber insurance policy now typically excludes deepfake fraud losses from traditional coverage, requiring the policyholder to purchase separate deepfake coverage. Carriers are also beginning to conduct increasingly rigorous investigations into whether insureds maintained the required authentication controls and whether evidence was preserved correctly.
How the law is adapting evidence rules to AI
Federal Rules of Evidence: Proposed Rule 707
The U.S. Judicial Conference Advisory Committee on Evidence Rules is actively developing a new framework to address AI-generated evidence. In May 2025, the Committee voted 8-1 in favor of seeking public comment on a new rule. That August, Proposed Rule 707 was released for public comment through February 16, 2026. Rule 707 would govern machine-generated evidence by applying reliability standards that are analogous to those used for expert witness testimony under the Daubert standard.

Rule 901 amendment
A parallel proposal to amend Rule 901 would shift the burden of proof when it comes to deepfakes. The party that challenges the evidence as a deepfake would need to demonstrate a basis for the claim. Then, the burden would shift to the party presenting the material to prove its authenticity by a preponderance of the evidence. This is a higher standard than the current baseline sufficiency requirement.
Rule 707 would apply only to evidence that the presenter has acknowledged as AI-generated, not to evidence whose authenticity is disputed. This would limit its effectiveness against covert deepfakes.
Louisiana state laws for AI evidence
Louisiana is the first state to establish laws for AI-generated evidence. The state passed Act No. 250 (HB 178), effective August 1, 2025. This law mandates that attorneys use “reasonable diligence” to determine whether the evidence they present was generated or materially altered by AI; they are responsible both for the evidence they uncover and for that submitted by their clients. One Fifth Circuit Court of Appeals judge noted that the courts cannot manage this alone: attorneys must ask probing questions about each piece of digital evidence their clients provide.
Bar associations advocating for rule changes
Lawmakers and scholars agree that the current federal rules are insufficient for the “deepfake era.” One law school professor proposed that the authenticity determination should shift from the jury to the judge, because deepfake detection requires technical expertise that’s beyond a typical juror’s ability to assess. Others have suggested requiring parties who allege that evidence is a deepfake to substantiate the claims with evidence in order to prevent frivolous “deepfake defense” assertions.
What’s next for authenticating deepfakes in the courtroom?
1. Content provenance standards
The Coalition for Content Provenance and Authenticity (C2PA) has developed an open technical standard called Content Credentials that can cryptographically bind provenance data to digital files. In plain language, it creates a “manifest” that makes any tampering evident, recording the creation history, edit actions, and capture-device details of a digital asset. Major tech companies such as Microsoft, Adobe, Google, Meta, OpenAI, Sony, and Intel are members of the coalition, and the C2PA Conformance Program officially launched in June 2025.
In 2025, the National Security Agency (NSA) issued a Cybersecurity Information Sheet that incorporated the coalition’s system, along with detection, education, and policy measures. Evidence captured on a C2PA-enabled device, whether a smartphone, professional camera, or security system, will carry embedded cryptographic provenance that can be verified at any point in the chain of custody.
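To illustrate the hash-binding idea in miniature, here is a hedged Python sketch. It is emphatically not the real C2PA manifest format (actual Content Credentials use certificate-based signatures and a standardized container); the file name and manifest fields below are hypothetical.

```python
# Conceptual sketch of hash-binding provenance, in the spirit of C2PA
# Content Credentials. NOT the real C2PA format: real manifests use
# certificate-based signatures and a standardized binary container.
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of the asset's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_manifest(path: str, history: list[str]) -> dict:
    """Bind the asset's digest to its recorded creation/edit history."""
    return {"asset_sha256": file_digest(path), "history": history}

def verify(path: str, manifest: dict) -> bool:
    """Re-hash the asset; any single-bit change breaks the binding."""
    return file_digest(path) == manifest["asset_sha256"]

# Hypothetical usage: a capture device would record this at creation
manifest = make_manifest("bodycam_clip.mp4", ["captured 2025-06-01"])
print("still authentic:", verify("bodycam_clip.mp4", manifest))
```

In the real standard, the manifest itself is signed by the device’s credential, so an attacker can’t simply regenerate the hash after tampering.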
2. AI detection and forensic analysis
Digital forensic experts use machine learning techniques that include artifact detection, frame-by-frame analysis, blink analysis, luminance gradient analysis, and pixel error analysis to determine whether media was altered or fabricated.
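As a toy illustration of just one of those signals, luminance gradient analysis, here is a hedged NumPy sketch. Spliced or synthesized regions often exhibit gradient statistics that differ from the rest of the frame; real forensic tools are far more sophisticated than this.

```python
# Toy luminance gradient analysis: flag image blocks whose gradient
# statistics deviate sharply from the rest of the frame.
import numpy as np

def luminance_gradient_map(frame_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel gradient magnitude of the luminance (luma) channel."""
    luma = frame_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])  # BT.601
    gy, gx = np.gradient(luma)
    return np.hypot(gx, gy)

def flag_anomalous_blocks(grad: np.ndarray, block: int = 32, z: float = 3.0):
    """Return (row, col) of blocks whose mean gradient is an outlier."""
    h, w = grad.shape
    stats = [(i, j, grad[i:i + block, j:j + block].mean())
             for i in range(0, h - block + 1, block)
             for j in range(0, w - block + 1, block)]
    vals = np.array([m for _, _, m in stats])
    mu, sigma = vals.mean(), vals.std()
    return [(i, j) for i, j, m in stats if abs(m - mu) > z * sigma]

# Toy usage on a random stand-in frame
frame = np.random.randint(0, 256, (256, 256, 3)).astype(np.float32)
print(flag_anomalous_blocks(luminance_gradient_map(frame)))
```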
Examining multiple data sources simultaneously, such as the audio and video tracks together, is called multimodal analysis. It’s particularly effective against deepfakes that combine synthetic audio and video.
Experts anticipate that the market for deepfake and synthetic media detection tools will reach $10 billion by 2030. Even so, detection isn’t a complete solution. Many detection models suffer a 45-50% performance drop when faced with deepfakes made by new techniques they haven’t previously encountered. And although audio deepfake detectors can typically achieve close to 90% accuracy in controlled conditions, that accuracy drops significantly in adversarial settings. Detection must be combined with provenance-based authentication.
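To make the multimodal idea concrete, here is a hedged sketch of late fusion: scoring each modality independently and combining the scores. The two scoring functions are crude stand-ins for real trained detectors, and the weights are hypothetical.

```python
# Hedged sketch of late-fusion multimodal detection. The per-modality
# scorers below are dummy heuristics standing in for trained models.
import numpy as np

def video_detector_score(frames: np.ndarray) -> float:
    """Stand-in: probability the video track is synthetic."""
    return float(np.clip(frames.std() / 128.0, 0.0, 1.0))

def audio_detector_score(waveform: np.ndarray) -> float:
    """Stand-in: probability the audio track is synthetic."""
    return float(np.clip(np.abs(waveform).mean() * 2.0, 0.0, 1.0))

def fused_score(frames, waveform, w_video=0.6, w_audio=0.4) -> float:
    """Weighted late fusion of the per-modality scores.

    A production system would also test cross-modal consistency,
    e.g. whether lip movement matches the audio track.
    """
    return (w_video * video_detector_score(frames)
            + w_audio * audio_detector_score(waveform))

# Toy usage with random stand-in data
frames = np.random.randint(0, 256, (30, 64, 64, 3)).astype(np.float32)
waveform = 0.1 * np.random.randn(16000).astype(np.float32)
print(f"fused synthetic-probability: {fused_score(frames, waveform):.2f}")
```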
3. Preservation protocols
Attorneys, insurers, and investigators must adopt rigorous evidence-preservation techniques to address the deepfake threat.
There are a few ways they could be doing this (a short hash-verification sketch follows the list):
- Capturing original device metadata at the point of collection
- Maintaining bit-for-bit forensic copies with cryptographic hash verification (SHA-256 or equivalent)
- Documenting a complete chain of custody from device to courtroom
- Retaining original recording devices when possible for forensic examination
- Establishing relationships with qualified digital forensic experts before litigation begins
- Implementing C2PA-compatible capture tools where available
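Here is a minimal sketch of the bit-for-bit hash verification step from the list above, assuming Python and hypothetical file names.

```python
# Minimal SHA-256 verification of a forensic copy against the
# original recording. File names here are hypothetical.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large videos don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

original_digest = sha256_of("dashcam_original.mp4")  # hashed at collection
copy_digest = sha256_of("dashcam_forensic_copy.mp4")

# Record both digests in the chain-of-custody log. A mismatch means
# the copy is not bit-for-bit identical to the original.
assert original_digest == copy_digest, "Copy does not match original!"
```

Logging the digest at the point of collection, and re-verifying it before trial, gives opposing counsel and the court a concrete, reproducible authenticity check.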
Most importantly, there’s a clear message for the legal profession: The evidentiary assumptions of the analog age cannot survive the synthetic age. Being proactive about adapting to deepfake technology and detection isn’t optional—it’s an ethical obligation.
Deepfake evidence authentication checklist
A lawyer’s quick-reference guide
The checklist is organized into five phases; the full action items and requirements for each phase appear in the infographic below.

- Phase I: Intake and initial assessment
- Phase II: Chain of custody and preservation
- Phase III: Technical verification
- Phase IV: Expert and legal preparation
- Phase V: Courtroom presentation
![Deepfake Evidence Authentication Checklist [infographic]](https://www.enjuris.com/wp-content/uploads/2026/04/infographic-the-deepfake-evidence-authentication-checklist.jpg)
