Australia orders the biggest Data Subject Rights deletion ever - Clarip Privacy Blog

Australia orders the biggest Data Subject Rights deletion ever


In what is likely the biggest Data Subject Rights (DSR) deletion request ever, Australia’s national privacy regulator has ordered Clearview AI to destroy all images and facial templates belonging to individuals living in Australia.  Clearview AI is a company focused on facial recognition technologies.

Australia’s national privacy regulator, the Office of the Australian Information Commissioner, determined that Clearview AI had violated the privacy rights of residents of Australia by collecting their biometric information (facial images and templates) from various sources, including Facebook, Instagram, LinkedIn, and other websites.  Clearview AI would have a very difficult time denying its aggregation of facial data, at least to Australian regulators, as both state and federal agencies in the country utilize Clearview AI’s facial recognition technology.  The Australian Federal Police as well as state police forces had run more than 1,000 searches using Clearview AI data.

For its part, Clearview AI states that it hasn’t done anything wrong.  Mark Love, the company’s attorney, says, “Clearview AI has not violated any law nor has it interfered with the privacy of Australians.”  He went on to say, “Clearview AI does not do business in Australia, does not have any Australian users.”

Setting aside the issue of whether Clearview AI has violated any law, there is definitely some nuance to Mark Love’s statement that the company hasn’t interfered with the privacy of Australians.  There are two big pieces to unpack here.  First, is the aggregation of publicly available data about an individual an interference with their privacy?  Second, is the mapping of someone’s facial characteristics an invasion of their privacy?

The first question gets at Mark Love’s likely explanation for how Clearview AI is not interfering with the privacy of Australians.  He could simply say that all of the data that they gather about individuals is public information.  They simply aggregate information about individuals, but the information itself is already out in the public.

While it may be true that all of the information that Clearview AI gathers about people is in the public domain, it’s not necessarily the case that aggregating that data into digital dossiers about people doesn’t interfere with their privacy.  So, what have the courts said?  So far, case law is on Clearview AI’s side.  In Australia, it isn’t even clear whether a tort of invasion of privacy exists; the existence of such a right hasn’t been a holding in any Australian federal case with precedential value.

In other jurisdictions, such as the US, there is a standard described as the expectation of privacy.  The standard applies to Fourth Amendment searches and seizures and entitles individuals to privacy in places where they would subjectively expect to have privacy, such as their own homes, public restrooms, hotel rooms, or phone booths.  Even using the expectation of privacy standard in a completely different context and in a different country altogether, Clearview AI probably has a solid defense: there simply isn’t an expectation of privacy in things that people post publicly online under their own names.

I’ll describe this first instance as the digital dossier.  Clearview AI has merely gathered public information into one place.  They’ve created a digital file for each of many different people.

The second instance is the question of whether creating a digital map of someone’s facial characteristics is an interference with their privacy.  The first instance consisted of merely gathering information.  The second involves the creation of something else from the information collected.

It is important to preface this whole next discussion with the following statement.  I don’t know the details of how Clearview AI’s facial recognition technology works.  I can only speculate.  Accordingly, here are my primary relevant assumptions about how their technology works:

  1. There is some kind of data file that is used to represent key aspects of facial architecture. (Hereinafter referred to as the ‘map’.)
  2. The map is sophisticated enough that humans are unable to glean the map from looking at images of someone’s face.

Once again, I don’t know how Clearview AI’s technology works; they may not use any kind of facial map.  It is also an assumption on my end that, for the technology to be effective for facial recognition, this ‘map’ needs to be more sophisticated at identifying facial details than human brains are.
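To make the assumption concrete, here is a minimal, purely illustrative Python sketch of what such a ‘map’ might look like: a facial template reduced to a vector of numbers, with matching done by comparing vectors.  Everything here — the vector values, the cosine-similarity comparison, and the match threshold — is a hypothetical stand-in, not a description of Clearview AI’s actual system.

```python
import math

# Hypothetical sketch: a facial 'map' as a numeric embedding vector.
# The assumption is that a system reduces each face image to a vector of
# features, so two images of the same person yield nearby vectors.

def cosine_similarity(a, b):
    """Compare two templates; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(template_a, template_b, threshold=0.9):
    """Declare a match when similarity clears a tuned threshold."""
    return cosine_similarity(template_a, template_b) >= threshold

# Toy templates: two photos of the same person vs. a different person.
person_a_photo1 = [0.12, 0.80, 0.55, 0.31]
person_a_photo2 = [0.10, 0.78, 0.57, 0.30]
person_b_photo  = [0.90, 0.05, 0.20, 0.70]

print(is_match(person_a_photo1, person_a_photo2))  # True  (nearby vectors)
print(is_match(person_a_photo1, person_b_photo))   # False (distant vectors)
```

The key point for the privacy analysis is that the vector itself is new, derived data: a human looking at the public photos could never read these numbers off a face, which is exactly what my second assumption above posits.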

However, assuming that the two statements are true, Clearview AI may very well be interfering with the privacy of Australians.  Based on my assumptions, Clearview AI would be gathering data about people, and then creating something new about these same people that is not in the public domain.

Because the issue at hand is privacy, it matters whether Clearview AI merely uses this private information or shares it.  If the company simply uses the information internally, it is on firmer legal ground than if it shares that same private information about individuals with third parties, such as law enforcement.  Public disclosure of private facts is considered a tort in some jurisdictions.

Overall, Clearview AI is probably on firm legal ground.   Simply gathering publicly available information is probably permissible under Australian law.  Generating additional private information about an individual is also probably permissible under Australian law as long as that information is not disclosed publicly.

All that being said, laws can always be amended or newly drafted.  Clearview AI has attracted the attention of Australian regulators, and at this point it is probably in the company’s best interest to appease them.

Sensitive personal information, which includes data like biometric scans, is particularly important in the eyes of regulators and privacy-conscious citizens.  Companies that process this category of data should exercise an abundance of caution, and any such company needs to know where that data resides and how it is processed.  That’s where data mapping or data risk intelligence can come in handy.  Clarip offers fully automated data mapping and can also perform data risk intelligence scans to give your organization a full picture of the flows of sensitive personal information in your digital domain and whether or not it is at risk.  Clarip also provides clients with fully automated, end-to-end data subject access request fulfillment as well as vendor and consent management.  Visit us at www.clarip.com or call us at 1-888-252-5653 to learn more!
