Companies Develop AI Tools to Detect Deepfakes in Hiring and Security
Security firms are creating AI-based systems to detect deepfakes used in job applications and scams. These tools analyze audio and video to verify identities amid rising incidents of fraud. Experts note that detection is becoming harder as deepfake capabilities advance.
Rising Use of Deepfakes in Fraud
Companies are facing an increase in fake job applicants using deepfake technology, according to representatives from security firms.
Pindrop Security has reported cases where individuals use altered voices and faces to secure multiple positions within the same organization. Representatives from Pindrop stated that some customers discovered the same person hired three times under different identities. These deepfakes often involve people modifying their own features rather than fully AI-generated personas.
A previous detection method involved asking individuals to hold three fingers in front of their face, but current AI models can now render this convincingly. Representatives from Pindrop noted that such alterations are now imperceptible to the naked eye.
Detection Technologies and Challenges
Firms like Pindrop Security and Reality Defender are developing software to counter these threats.
Pindrop's system analyzes audio and video during meetings, such as on Zoom, to confirm participants' identities. Users consent to the collection of voice recordings, face scans, and IP addresses, which are retained for up to 90 days.
Key Facts Claimed
- Deepfake job fraud: the same person hired multiple times under altered identities
- Detection retention: voice and face data kept for up to 90 days
- Scam evolution: targets now include all employee levels
- Trust boundaries: sight and hearing no longer reliable for verification
Story Timeline
- Recent: Fraudsters targeted a publicly traded company by compiling employee data for deepfake scams. (Source: The Verge)
- Ongoing: Companies like Pindrop report fake job applicants using deepfakes to secure multiple roles. (Source: The Verge)
- Past years: Law enforcement has warned of deepfake kidnapping scams involving convincing voice imitations. (Source: The Verge)
Potential Impact
1. Large companies could face financial losses from undetected deepfake infiltrations.
2. Businesses may increase adoption of AI detection tools to prevent hiring fraud.
3. Scammers might adapt tactics to evade new detection technologies.
4. Consumer awareness of deepfake threats could rise, prompting demand for personal security solutions.