
AI is watching you!

  • Writer: BSLB
  • Mar 15, 2024
  • 5 min read

On 26 February 2024, the suspected terrorist Daniela Klette, who had been wanted for more than 30 years, was arrested by the police in Berlin, Germany. Daniela Klette is said to have been a member of the left-wing extremist Red Army Faction (RAF), which was responsible for at least 33 murders and attacks in Germany between 1970 and 1998. Klette went into hiding after the dissolution of the RAF and was one of the most wanted terrorists in Germany and Europe.


In November 2023, the police reportedly received a crucial tip about the whereabouts of the suspected terrorist. Journalists had tried to track down Klette for a German podcast. To do so, they worked together with a journalist from the Bellingcat network, who used “Pimeyes”, a reverse image search engine based on facial recognition. After a picture of a person is uploaded, the software returns publicly accessible images of that person from across the internet. To do this, it uses AI to automatically extract biometric data from publicly accessible images and compares it with the uploaded picture. After an alleged 30-minute search, the journalist was able to find several publicly accessible pictures of Klette. The police have not publicly confirmed that this journalistic investigation led to the arrest. Nevertheless, it appears that the “Pimeyes” software and a handful of previously public images of Klette allowed journalists to end a search that had lasted 30 years.


In general, biometric identification is defined as the process of verifying a person’s identity based on physical characteristics such as the face, fingerprint, iris or voice. Facial recognition identifies a human face using biometric technology: facial features are mapped, e.g. from a photograph or a video, and the resulting information is analysed and compared with a database of known faces to identify a match.
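The matching step described above can be sketched in a few lines. Assuming a trained model has already turned each face into a numeric embedding, identification reduces to a nearest-neighbour search by similarity against the database. The names, vectors and threshold below are purely illustrative, not the actual method used by Pimeyes or any specific vendor:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two face embeddings: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, database, threshold=0.8):
    """Return the best-matching identity, or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, emb in database.items():
        score = cosine_similarity(query_embedding, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy "database" of embeddings (in practice, produced by a neural network
# from the publicly accessible images the system has scraped).
db = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob": np.array([0.1, 0.8, 0.4]),
}
query = np.array([0.88, 0.12, 0.31])  # embedding extracted from an uploaded photo
print(identify(query, db))  # → alice
```

The threshold is the policy-critical parameter: set too low, it produces the kind of false matches discussed later in this post; set too high, it misses genuine matches.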


AI already enhances existing identification capabilities through machine learning algorithms that analyse biometric data from various sources, such as cameras and passports at airports. This speeds up the boarding and check-in process, which might even become obsolete in the future. At several airports in the US, including John F. Kennedy Airport in New York, travellers can already use their faces instead of boarding passes at the bag drop and the security check. In China, 86 percent of international airports use biometric technology. At Beijing’s airport, one of China’s largest, travellers can even pay at duty-free shops using facial recognition systems. In the EU, facial recognition systems are not yet widespread but are nevertheless used at several airports. For example, Frankfurt am Main airport in Germany has installed biometric systems across all terminals to facilitate check-in, security and boarding.


In conclusion, the utilization of biometric data and AI in the law enforcement and travel sectors presents a remarkable advancement in efficiency and safety. So, in theory, everything is looking good, and the future will be much better because of biometric AI. But how true is that statement: are we moving in the right direction?


Indeed, biometric AI systems do have multiple downsides. Generally speaking, a distinction can be made between three risk areas: use by governments, use by private parties, and cybersecurity in relation to the accumulated data.


Those systems could be deployed for mass surveillance, enabling governments to track citizens’ movements, activities, and behaviours without their knowledge or consent. Biometric AI initially deployed for specific purposes like security or law enforcement could be repurposed for other uses once governments have access to it. Additionally, facial recognition systems may lead to a rising risk of surveillance and social scoring as seen in China.


The systems as they currently stand are imperfect, although strong improvements can be expected in the future. Nevertheless, in several cases, particularly in the US, individuals were wrongfully arrested in public after systems incorrectly matched their pictures with alleged suspects.


Companies may exploit biometric data for targeted advertising, profiling or other commercial purposes without individuals’ consent, leading to even more intrusive marketing tactics and a loss of autonomy. The technology may also make it much easier to identify health data or other extremely sensitive data of data subjects, which can be used against individuals, especially after possible data breaches.


In a world where cybersecurity threats are increasing and data breaches appear nearly unavoidable, data-sensitive activities such as facial recognition pose a prominent level of risk. Biometric databases are lucrative targets for cybercriminals seeking to steal sensitive information for financial gain (e.g. through blackmail) or identity fraud. Once biometric data has been stolen, it is almost impossible for law enforcement to return it or to revoke rights of use in relation to it. Hackers could use this data to open bank accounts in the victim’s name, which could lead to large-scale consequences.


It seems clear that biometric AI does not only have advantages. For this reason, it is important to initiate international or supranational regulation to reduce the potential risks for humankind and to set rules for the suppliers and users of this technology.


In order to deal with the risks of these far-reaching technologies, the European Commission proposed the AI Act in 2021, and the European Parliament adopted it on 13 March 2024. According to the EU, the AI Act shall ensure safety and compliance with fundamental rights while boosting innovation. The European Commissioner for the Internal Market, Thierry Breton, posted on X that the “EU is regulating as little as possible but as much as needed.” The regulation follows a risk-based approach, which means that high-risk uses of AI will face the most regulation.


According to the AI Act (recital 39), any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, except in the context of the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, should continue to comply with all requirements of Article 10 of the Law Enforcement Directive (Directive 2016/680). For purposes other than law enforcement, the processing of biometric data (sensitive personal data) is prohibited under Article 9(1) GDPR (Regulation 2016/679), subject to several exceptions in Article 9(2) GDPR. These exceptions include, among others, the data subject’s explicit consent to the processing, cases where processing is necessary for reasons of public interest in the area of public health, and cases where processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity.


Nevertheless, Article 9(4) GDPR grants the member states further discretion in the regulation of facial recognition systems: they may maintain or introduce further conditions, including limitations, regarding the processing of genetic data and biometric data.


Regarding the regulation of biometric AI, the AI Act, in addition to the GDPR, might be a first step in the right direction, because it finally provides a broad set of rules that capture a wide range of AI-related capabilities. It acknowledges that biometric AI comes with many high risks, but it also sees the opportunities for law enforcement and welfare. In conclusion, one can state:


Firstly, the AI Act cannot solve the problem of international coordination of regulations. One can hope that the AI Act will become a blueprint and thereby set a standard level of AI regulation. Secondly, it is questionable whether SMEs within the EU can afford to comply, and are able to comply, with this regulation. As the Regulation is complex, special consultancy services might be needed, which could lead to disproportionately high costs and thereby disadvantage SMEs.


Thirdly, the AI Act does not provide a framework for cybersecurity measures, but only refers to the general GDPR rules on the processing of biometric data. In our opinion, sensitive data must in the future be mandatorily encrypted so that, in the event of theft, it is automatically unusable. Biometric data, once collected, must be protected at all costs, since it contains very sensitive information, and the implementation of these protective measures must be properly enforced.
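The encryption-at-rest requirement argued for above can be illustrated with a minimal sketch: the database stores only ciphertext, and the key lives elsewhere, so a stolen copy of the database reveals nothing. The stream cipher below is a toy built from SHA-256 purely for illustration; a real system would use a vetted authenticated scheme such as AES-GCM, with the key held separately, e.g. in a hardware security module:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream derived from SHA-256 (illustration only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh random nonce per record
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)        # kept outside the biometric database
template = b"biometric-template-bytes"
stored = encrypt(key, template)      # what the database actually holds
assert stored[16:] != template       # a stolen record is unusable without the key
assert decrypt(key, stored) == template
```

The design point is exactly the one made above: because biometric data cannot be revoked like a password, the only acceptable failure mode for a breached database is ciphertext that is worthless on its own.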


CC: Niklas J. Wagner and Lukas B. Schaefgen

 
 
 

© 2035 by Bocconi Students for Law and Business.
