Adversarial patterns and a short explanation of neural networks

An essay about computer vision as an apparatus of mass surveillance and the tools to protect your privacy

Artificial intelligence, or more precisely computer vision and its high-accuracy subfield of biometrics, is used as a tool for mass surveillance of the population in technologically advanced and totalitarian-leaning parts of the world. Facial biometric recognition in particular is used extensively, and, as we know, most controversially by the Chinese government, especially since it is tied to a social credit system that affects quality of life based on individual behavior. Activists, however, have already begun developing tools to counteract this. In this initial phase of the project, I explore so-called “adversarial patches”: seemingly simple images that confuse artificial intelligence, either making it recognize something entirely different, such as a banana, or making it freeze and fail to register the person as an object worthy of its computational power.

But before we can discuss biometrics and artificial intelligence in the context of mass surveillance and the fight against it, we must first understand how deep learning (DL) works and the algorithms behind artificial neural networks (NNs).

The term artificial intelligence, or AI, was coined by Professor John McCarthy in 1956 to provide a neutral name for a rapidly developing field of research. With this neutral terminology, McCarthy aimed to avoid privileging any single direction of development within the study of thinking or intelligent machines, which encompassed disciplines such as cybernetics, automata theory, and complex data processing. As defined by McCarthy, AI rests on the conjecture that every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it.

Deep learning belongs to the much larger family of machine learning techniques: a data-analysis approach that enables computers to learn from experience. It focuses on loosely mimicking the functioning of the human brain to produce increasingly accurate predictions, and it powers numerous automation and analytical tasks without human supervision. What distinguishes deep learning from other algorithms is its use of neural networks, which are meant to emulate the behavior of human neural pathways. A standard neural network consists of interconnected processing units called neurons or nodes, each producing a sequence of real-valued activations. Input neurons are activated by sensors that perceive the environment, while the others are activated through weighted connections from previously active neurons; some neurons can also act back on the environment by triggering various actions. Each neuron in a network thus computes something like a small linear regression model, composed of inputs, weights, and an output. Once the inputs are set, the so-called “weights” determine what the program attends to and recognizes. The output is passed through an activation function: if the value exceeds the neuron’s threshold, it is transmitted to the next layer; otherwise it is not passed on. Neural networks must be trained on data before they function properly, and they are widely used in speech and image recognition. A familiar example is Google’s search algorithm.
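The weighted-sum-and-threshold behavior described above can be sketched in a few lines of code. This is a minimal illustration of a single artificial neuron, not any particular library’s implementation; the weights, bias, and threshold are arbitrary illustrative values.

```python
def neuron(inputs, weights, bias, threshold=0.0):
    """A single artificial neuron: weighted sum of inputs plus a bias.
    If the activation exceeds the threshold, the value is passed on
    to the next layer; otherwise nothing (0.0) is transmitted."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation if activation > threshold else 0.0

# Two-input neuron with hand-picked (illustrative) weights:
print(neuron([1.0, 0.5], weights=[0.5, -0.25], bias=0.125))   # fires: prints 0.5
print(neuron([0.25, 1.0], weights=[0.5, -0.25], bias=0.125))  # below threshold: prints 0.0
```

In a real network, many such neurons are stacked in layers, and training consists of adjusting the weights so the network’s outputs match the training data.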

Artificial intelligence is used in practice for many purposes, from digital assistants and autonomous vehicles to complex business processes and, ultimately, biometric facial recognition. Politically and socially, however, the field can become complicated. It is known, for example, that the Slovenian police used facial recognition algorithms on video footage of last year’s widespread protests and issued fines based on the results. While this is still not as controversial as what China is implementing by combining multiple systems into a powerful apparatus of control and repression, the fact that several European countries, including Slovenia, Austria, Finland, and France, already use the technology, with eight more planning to adopt it, points to a worrying trend.

While European (including Slovenian) pilot projects are relatively mild compared to those in the East, that is no reason to be less cautious, especially since the use of artificial intelligence sits in a constitutional gray area. In Brussels, for example, facial recognition was introduced at the airport without the knowledge of the relevant security services. In Rotterdam, one neighborhood ran a ‘robber-free’ project using smart streetlights to detect suspicious activity, and similar systems later appeared in cities such as Nice (where plans to introduce biometric recognition in secondary schools were ruled illegal by a court), Berlin, Hamburg, Mannheim, and others. Fortunately, the algorithms have since been switched off, but the cameras and microphones remain in place, which suggests they could be reactivated once the systems are more developed. In October last year, the European Green Party submitted an initiative for a general ban on biometric recognition in all public spaces.

Now, if mass surveillance of the population were introduced throughout Europe and the world, how could we fight it? It seems somewhat naive to believe that corporations would not have a hand in it, as concerns have already been raised about lobbying by digital security firms at the European level. For this reason, besides actively promoting fair and humane use, we also need systems to protect our own privacy in case things go wrong. One of the primary concerns with facial recognition technology is its potential for discriminatory outcomes: AI algorithms are trained on datasets that may not represent the diversity of the population, leading to biased results. The accuracy of facial recognition systems has been found to vary significantly across demographics, with higher error rates for women, people of color, and individuals with non-binary gender expressions. This bias can lead to false identifications, wrongful arrests, and the perpetuation of systemic discrimination.

This is where the so-called ‘adversarial patches’ come into play, which, in my research, aim to confuse the algorithm into producing incorrect results. Unfortunately, the technique has already been put to dangerous and controversial use: one group discovered that by placing simple black squares on a STOP sign, they could trick Tesla’s algorithm into reading it as a sign limiting the speed to 40.

However, that is not the kind of use I am talking about. Facial biometric recognition has improved significantly in recent years, and at this point a face can be recognized even when the person is wearing a mask, which makes protecting oneself from mass surveillance all the more sensible. For this purpose, as the study I draw on has shown, special patterns can be printed on masks or applied to them as stickers. These adversarial patches work by generating noise around a specific object, confusing the artificial intelligence so that it either fails to recognize the object in its field of vision or recognizes something completely different. The study used a banana as an example, placing a generated toaster patch next to it: as soon as the patch appears beside the banana, the algorithm is no longer able to recognize the banana for what it is. Instead, it fixates on the patch and confidently asserts that the photo contains a toaster, completely overlooking the banana.
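The attack itself has two parts: an optimization step that trains the patch pattern (by maximizing the classifier’s confidence in a target class such as “toaster” across many images, positions, and rotations), and a deployment step that simply pastes the patch into the scene. The sketch below shows only the second, trivial part, with plain Python lists standing in for pixel arrays; `apply_patch` and all the values here are illustrative, not the study’s actual code.

```python
def apply_patch(image, patch, top, left):
    """Overlay a patch onto a copy of an image (a list of pixel rows)
    at position (top, left). A trained adversarial patch dominates the
    classifier's attention regardless of what else is in the frame."""
    out = [row[:] for row in image]  # copy so the original is untouched
    for i, patch_row in enumerate(patch):
        for j, value in enumerate(patch_row):
            out[top + i][left + j] = value
    return out

image = [[0] * 8 for _ in range(8)]    # stand-in 8x8 grayscale "photo"
patch = [[255] * 3 for _ in range(3)]  # stand-in 3x3 "toaster" patch
patched = apply_patch(image, patch, top=1, left=1)
```

Because the patch is optimized to be maximally salient under many transformations, it keeps working when printed, stuck on a mask, and viewed from varying angles, which is exactly what makes it useful against camera-based surveillance.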

Another approach is the development of privacy-preserving technologies that aim to protect individuals’ identities while still allowing for necessary surveillance tasks. For example, privacy-enhancing techniques such as differential privacy or secure multi-party computation can be employed to analyze data while preserving the anonymity of individuals.
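As a concrete illustration of one of these techniques, differential privacy typically works by adding calibrated random noise to aggregate queries, so that the released statistic reveals almost nothing about any single individual. The sketch below implements the classic Laplace mechanism for a counting query; the data, the predicate, and the epsilon value are all illustrative choices, not recommendations.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise
    with scale 1/epsilon (the sensitivity of a counting query is 1).
    Smaller epsilon means more noise and stronger privacy."""
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) noise by inverse-CDF from a uniform.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

ages = [23, 35, 41, 19, 52, 38]  # illustrative data
noisy = dp_count(ages, lambda a: a >= 30, epsilon=0.5)
```

An analyst querying many such statistics sees useful aggregates, while any individual can plausibly deny being in the dataset at all; production systems use hardened libraries rather than a hand-rolled sampler like this one.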

Furthermore, the legal and regulatory frameworks surrounding the use of facial recognition technology are evolving. Some jurisdictions have implemented restrictions on its use, requiring explicit consent, clear purposes, and transparent policies for data handling. Others have called for outright bans on certain applications of the technology. These measures aim to strike a balance between utilizing AI for security purposes and safeguarding individuals’ privacy and civil liberties.

The discussion surrounding facial recognition technology and mass surveillance is complex and multifaceted. It involves considerations of ethics, human rights, and the responsible use of AI. As society continues to grapple with these issues, it is crucial to have an open and inclusive dialogue that involves all stakeholders, including policymakers, technology developers, civil society organizations, and the general public.

In addition to the ethical and legal aspects, it is also important to address the potential involvement of corporations in mass surveillance initiatives. There have been concerns about the influence and lobbying power of digital security agencies at the European level. It is crucial to remain vigilant and ensure that the deployment of facial recognition technology is guided by principles of transparency, accountability, and public interest, rather than corporate interests.

To effectively combat mass surveillance, it is necessary to foster a multi-faceted approach. This includes raising awareness about the risks and implications of widespread surveillance, advocating for robust data protection laws and regulations, and supporting the development and implementation of privacy-enhancing technologies.

Education and public discourse play a vital role in shaping the societal response to mass surveillance. By fostering a critical understanding of artificial intelligence, deep learning, and the underlying algorithms, individuals can make informed decisions and actively participate in discussions about the responsible use of technology.

Furthermore, collaboration between researchers, activists, and policymakers is essential for developing effective countermeasures against mass surveillance. This includes the continued exploration of techniques such as adversarial patches, as well as the development of innovative solutions that protect privacy without compromising security.

Ultimately, the fight against mass surveillance requires a collective effort. It involves promoting a human-centric approach to technology, ensuring that AI and biometric systems are designed and used in a manner that respects human rights, individual privacy, and the principles of a democratic society.

