Access Now:

Iris scanning. Voice recognition. Brain implants. Biometric technologies are no longer simply the stuff of science fiction or spy movies; they are here, and they are harming the most marginalized people. A new Access Now publication, written by Xiaowei Wang and Shazeda Ahmed from UCLA’s Center on Race and Digital Justice and the ELISAVA School of Design and Engineering, explores how AI-based biometric systems are being used to classify, categorize, and control our bodies, perpetuating discrimination in the process.

Put simply, biometric data is information about physical or behavioral characteristics that are generally unique to an individual. Physical biometric data might include someone’s facial features, fingerprints, or iris patterns, while behavioral biometric data may include gait, signature, or voice patterns. All of this data can be fed into biometric systems, which use artificial intelligence (AI), i.e. “machine learning” algorithms, to make predictions about people. For example, such algorithms can extract a biometric template from an image of a person’s fingerprint and match it against others in a database to identify that person. Biometric systems range in complexity and sophistication, from widely available “low-end” tech such as video cameras and voice recorders, to more restricted “high-end” tools that can capture and read brain wave data, for instance.
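As a rough illustration of that template-matching step, the sketch below treats a biometric template as a simple numeric feature vector and performs one-to-many identification against a small database using a similarity threshold. The feature extraction, identity names, and threshold are hypothetical stand-ins for illustration, not the method of any real biometric product.

```python
import numpy as np

# Illustrative sketch only: real systems use specialized feature extractors
# (e.g. fingerprint minutiae or deep neural networks); here a "template" is
# just a normalized feature vector derived from raw pixel values.

def extract_template(image):
    """Stand-in feature extractor: flatten the image and normalize it."""
    vec = np.asarray(image, dtype=float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def identify(probe_template, enrolled_templates, threshold=0.9):
    """One-to-many identification: return the enrolled identity whose
    template is most similar to the probe, if similarity clears the threshold."""
    best_name, best_score = None, -1.0
    for name, template in enrolled_templates.items():
        score = float(probe_template @ template)  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy usage: enrol two synthetic "fingerprint images", then re-present one of them.
rng = np.random.default_rng(0)
image_a, image_b = rng.random((32, 32)), rng.random((32, 32))
enrolled = {
    "person_a": extract_template(image_a),
    "person_b": extract_template(image_b),
}
print(identify(extract_template(image_a), enrolled))  # matches "person_a"
```

The key point the sketch captures is that identification is a statistical comparison against a database, governed by a threshold: set it too loosely and the system returns false matches, which is one way such systems can misidentify people.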
