Abstract
Once limited to entertainment and disinformation, deepfakes are now extending into the financial sector, where voice and facial impersonations exploit biometric authentication systems to facilitate fraudulent transactions. This evolution exposes gaps in existing legal and regulatory frameworks, raising critical questions about consumer protection and institutional safeguards. This Note argues for a reconceptualization of deepfake harms as both a privacy and a financial security issue. It examines the illusion of consent generated by synthetic impersonation and the insufficiency of existing statutory protections. The Note then surveys the patchwork of federal, state, and international laws governing data privacy and artificial media, highlighting the gaps that allow biometric exploitation to persist. Finally, this Note proposes a two-pronged solution: first, enactment of comprehensive federal privacy legislation with biometric protections that incorporates robust notice-and-choice standards and coordinated enforcement between state and federal authorities; and second, implementation of a public-private key infrastructure within financial institutions to authenticate identity and guard against deepfake interference.
Recommended Citation
Hazel Fernandez, Face Card Declined: The Deepfake Threat to Biometric Security in Financial Systems, 83 Wash. & Lee L. Rev. Online 38 (2025), https://scholarlycommons.law.wlu.edu/wlulr-online/vol83/iss1/2
Included in
Banking and Finance Law Commons, Computer Law Commons, Privacy Law Commons, Science and Technology Law Commons