Where you’ve been and what you’ve done is no longer secret, not even from strangers. Facial recognition technology—once the preserve of law enforcement and intelligence agencies—has gone public.

The technology that underpins facial recognition has not changed drastically over the last few years. But access to data, and the willingness to use it, certainly has. Clearview AI is a commercial facial recognition tool available to law enforcement agencies around the world. The US company relies on a collection of 3bn+ images of faces scraped from the web to generate matches. It has seen a spike in use in the wake of the Capitol Hill riot as police seek rioters who engaged in criminal acts.

Similar tools that use the power of web-sourced datasets are also available to the public. Polish service PimEyes matches uploaded faces against a database of 900m+ scraped images and shows where on the web a face has appeared. Russian search engine Yandex uses facial recognition in its image search. FindClone and Search4Faces allow face-based searches of Russian social media site VK.

While data protection regulations in Europe and the United States may result in steep fines or closure for some of these services, others will likely rise from their ashes. This has been the case for other fringe and outright illegal services such as music and film piracy sites and darknet markets.

The ethical and legal status of these tools may be debated for some time, but the fact that they are publicly available has significant implications.

⚡ Impact

A person’s face has become a key to unlocking their history and private life.

Anyone can use facial recognition tools to discover information considered to be private. Images of a person may enable the discovery of information such as:

  • The person’s name
  • LGBTQIA status (as described by Netzpolitik.org)
  • Social, religious and political affiliations
  • Disabilities and illnesses
  • Workplace and home addresses

Individual impact: this technology exposes individuals to a range of potential harms. These include reputation damage, persecution, and physical and psychological harm.

Business impact: employers must consider how employees are exposed to risks and take steps to prevent or mitigate these.

⚠ 🏢 Business risks

  • Failing duty of care – exposing employees to harm by requiring them to have headshots online or by placing them in the public eye
  • Reputation risk – having employees exposed for personal histories and choices deemed to be unpalatable (not a new risk but now more easily triggered)
  • Security risk – tools that allow unprecedented insight into employees' lives increase security risks (makes identity theft, fraud and blackmail easier)

🏃 Actions for individuals

  • Audit your digital footprint – to identify what images of you are online
  • Try to have compromising images removed – but be careful not to trigger the Streisand Effect and increase your visibility via your actions

🚨 Actions for businesses

  • Provide support – for employees who may be exposed to harm due to the nature of their work
  • Check policies – for how your business deals with reputation damage related to employee actions; a clear social media policy may be a good place to start
  • Update recruitment processes – consider how potential employees should be screened, particularly when recruiting for high-profile roles
  • Avoid facial recognition tools – for your recruitment process due to the potential for abuse, discrimination, reputation damage and legal action

This is a re-post of my newsletter Dispersed Knowledge. You can read the original briefing.