Attestiv Video Deepfake Detection Adds Context Analysis, Using AI to Further Uncover Deepfake Threats

LEHI, Utah, Feb. 27, 2025 (GLOBE NEWSWIRE) — Emerging technologies like artificial intelligence can significantly improve business efficiency. At the same time, AI makes it easier to generate deepfakes that commit fraud and support crime. AI-generated deepfakes are increasingly used to cheat consumers and businesses, costing more than $12 billion in 2023, with losses expected to rise to $40 billion by 2027. To help organizations combat deepfakes, Attestiv has upgraded its video deepfake detection platform with new Context Analysis capabilities so anyone can identify deepfake video threats before they lead to losses or harm.

Attestiv has added new Context Analysis features to Attestiv Video deepfake detection, using generative AI to identify digitally altered video content and uncover potential malicious deepfake scams. The new features examine a video file's context, including its metadata, descriptions, and transcript, to detect signs of modification that indicate deepfakes or malicious content.
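Attestiv has not published the internals of Context Analysis; as a purely illustrative sketch, one simple context check of this kind is scanning a video's metadata fields for signatures of known AI generation or editing tools. The signature list and function below are hypothetical, not Attestiv's implementation.

```python
# Illustrative sketch only -- not Attestiv's actual Context Analysis.
# Scans a video's metadata fields for strings associated with common
# AI generation/editing tools (hypothetical signature list).
GENERATOR_SIGNATURES = ("stable diffusion", "runway", "synthesia", "deepfacelab")

def flag_suspicious_metadata(metadata: dict) -> list:
    """Return the names of metadata fields whose values mention a known tool."""
    hits = []
    for field, value in metadata.items():
        text = str(value).lower()
        if any(sig in text for sig in GENERATOR_SIGNATURES):
            hits.append(field)
    return hits

# Example: metadata extracted from a suspect clip
sample = {"encoder": "Lavf58.76.100", "comment": "made with DeepFaceLab"}
print(flag_suspicious_metadata(sample))  # -> ['comment']
```

A real system would combine many such signals (metadata, description, transcript) rather than rely on any single field, since metadata is trivial to strip or forge.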

The threat landscape for AI-powered deepfakes continues to expand, with incidents rising 700% in 2023. Cybercriminals use generative AI to create fictitious social media posts for social engineering, spear phishing, and confidence and investment fraud. They are also increasingly targeting businesses, creating phony celebrity endorsements or deepfake content that impersonates company executives, law enforcement, and other authority figures to commit fraud. The new Context Analysis in Attestiv Video helps quickly assess the validity of any video, providing a summary of its authenticity at a glance.

“Attestiv represents a valuable tool in our arsenal to detect manipulated videos, particularly those created or edited using generative AI,” said Steven Kline, founder of Pixel Analysis LLC, a digital media forensics company based in Connecticut.

“As the deepfake threat landscape expands, we continue to level the playing field with new capabilities to defend against deepfakes,” said Nicos Vekiarides, CEO of Attestiv. “Our new Context Analysis adds generative AI technology to better uncover deepfakes. We believe everyone should have access to tools to protect themselves from deepfakes, so we offer Attestiv Video with Context Analysis for consumers and businesses, starting at no cost.”

Attestiv Video Deepfake Detection is available as a free, entry-level solution, enabling free scans of up to five videos per month. Those who need more scans and faster scan times can upgrade to Attestiv Video's premium tier, which offers enhanced scan fidelity, advanced analysis settings, and higher scan queue priority. Businesses can likewise upgrade to business or enterprise plans, which offer even more features and dedicated or regional deployments that include APIs.

For more information, visit www.attestiv.com.

About Attestiv

Attestiv offers the industry's first cloud-scale fraud protection platform for videos, photos, and documents, serving the insurance, financial services, cybersecurity, news, and media sectors. Utilizing patented AI analysis and tamper-proofing technology, Attestiv enables protection against media tampering, alteration, and generative AI, ensuring the highest standards of trust for your business. For more information, please visit https://attestiv.com.

Media contact:

Len Fernandes
Firecracker PR
len@firecrackerpr.com
1-888-317-4687

Photos accompanying this announcement are available at:

https://www.globenewswire.com/NewsRoom/AttachmentNg/53e529ec-0ec2-4e16-a0a1-db99ca43015a

https://www.globenewswire.com/NewsRoom/AttachmentNg/517dcfee-7c50-4589-b4ed-3f73b3e5267a


Photo captions:

Contextual Analysis Feature

Video Authenticity Analysis: Scam ad using Elon Musk's face