If you’re familiar with Cesare Lombroso’s theory, short women would be categorized as criminals, while men with big noses and thick lips would be automatic sex-offender suspects. Today, AI is helping cops identify people with criminal tendencies on the streets and prompting a quick police response. Muggers are no longer just the runaway kids on the streets of Delhi; they could be the passengers on a motorbike with a grip strong enough to snatch your handbag before you can press an alarm button. In many cities, Artificial Intelligence camera eyes now watch over the dark corners of the street and the straight lines of the highway to alert cops to suspicious behaviour.
Not in all cities, though, so don’t go to an African country and walk through the forbidden streets just because you read it here!
How AI Picked a Drug Trafficker on the Highway
In March 2022, AI made a breakthrough in identifying criminal behaviour among motorists. As a grey Chevrolet sped along the Hutchinson River Parkway, an AI camera flagged the vehicle’s behaviour as suspicious. The police then used AI to search through 1.6 billion license plate records from the past two years.
Surprisingly, the Artificial Intelligence showed that the car, belonging to a man named Zaya, had a travel pattern similar to vehicles used for trafficking drugs. Unaware of what the AI had on him, Zaya pulled over at the request of Westchester PD, and a search confirmed the AI’s suspicion: the car contained a semiautomatic pistol, $34,000 in cash, and 112 grams of crack cocaine. A year later, Zaya pleaded guilty to drug trafficking.
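The kind of pattern search described above can be sketched in a few lines. Everything below is hypothetical: the plate numbers, camera locations, and the “quick out-and-back trip” heuristic are illustrative stand-ins for whatever features the real system used, not the actual Westchester PD method.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical plate reads: (plate, camera location, timestamp).
READS = [
    ("ABC123", "I-95 North", datetime(2022, 3, 1, 2, 10)),
    ("ABC123", "Hutchinson Pkwy", datetime(2022, 3, 1, 3, 5)),
    ("ABC123", "I-95 South", datetime(2022, 3, 1, 5, 40)),
    ("XYZ789", "Main St", datetime(2022, 3, 1, 9, 0)),
]

def flag_out_and_back(reads, window=timedelta(hours=6)):
    """Flag plates seen heading one way and back within a short window --
    a quick-turnaround pattern sometimes associated with drug runs."""
    by_plate = defaultdict(list)
    for plate, loc, ts in reads:
        by_plate[plate].append((ts, loc))
    flagged = set()
    for plate, events in by_plate.items():
        events.sort()  # chronological order per vehicle
        for i, (t1, loc1) in enumerate(events):
            for t2, loc2 in events[i + 1:]:
                if "North" in loc1 and "South" in loc2 and t2 - t1 <= window:
                    flagged.add(plate)
    return flagged
```

Run against the sample reads, `flag_out_and_back(READS)` flags only the plate that went north and came back south within a few hours. A production system would score many such features across billions of reads, but the core idea is the same: search historical records for travel patterns that resemble known criminal behaviour.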
This is not the first time AI has surprised humans with its ability to detect criminal behaviour and support police efforts to bring criminals to book. In July 2023, an AI traffic camera provoked a congress protest after detecting thousands of traffic violations within hours of installation. While many people worry about generative AI, many others will have reason to smile when AI keeps their bags out of the hands of muggers.
If you thought police arriving late to the scene only happens in movies, you’re probably wrong. Even in real life, the cops have a reputation for arriving late. You may wonder if this happens by design.
I have passed through the hands of muggers myself, and I don’t remember the cops offering any tangible help: they arrived late, and by then the snatch thieves had vanished with my wallet, leaving nothing to show.
You have probably encountered snatch theft of some sort. It happens in a flash, and only a camera can tell what happened. On their motorbikes, you never get the chance to see the criminal’s face. However, AI cameras can spot the suspicious behaviour of bike riders approaching a car with an open window at a traffic signal. As the thieves make away with your precious property, you can begin to smile, because you know they’ll be sorry at the next stop, where they’ll land in the hands of cops tipped off by the AI camera.
This is a hope that is beginning to become reality on our streets, and soon cops will have no reason to arrive late at the scene.
Concerns over Privacy in AI Camera Surveillance
Gold, Zaya’s lawyer, weighed in on the matter, citing the privacy concerns that Artificial Intelligence technology puts in the hands of the individual police officer: “This is the spectre of modern surveillance that the Fourth Amendment must guard against. This is the systematic development and deployment of a vast surveillance network that invades society’s reasonable expectations of privacy.”
The use of AI-powered cameras raises significant privacy concerns due to their ability to capture, process, and analyze vast amounts of visual data. These concerns can be grouped into several categories:
- Surveillance and Monitoring
Invasion of Privacy. Artificial Intelligence cameras can potentially capture people’s activities, behaviors, and personal information without their consent, leading to a breach of privacy.
Mass Surveillance. Governments and organizations can use Artificial Intelligence cameras for mass surveillance, which may infringe on individuals’ civil liberties.
- Facial Recognition
Misidentification. Facial recognition technology is not always accurate, and errors in identification can lead to false accusations and privacy violations.
Tracking. The continuous tracking of individuals’ movements through facial recognition systems can be used for profiling and tracking without their knowledge or consent.
- Data Storage and Security
Data Breaches. Storing visual data collected by Artificial Intelligence cameras can lead to data breaches and leaks if not adequately secured, potentially exposing sensitive information.
Data Retention. Prolonged storage of video footage can create a database of historical activities that can be exploited if accessed by unauthorized parties.
- Consent and Notification
Lack of Consent. Individuals may not be aware that they are being recorded or subjected to Artificial Intelligence analysis, leading to a lack of informed consent.
Notification. There may be inadequate notification mechanisms in place to inform people when they are being surveilled by Artificial Intelligence cameras.
- Biometric Data Protection
Biometric Privacy. Facial recognition and other biometric data collected by Artificial Intelligence cameras may not receive the same level of legal protection as other personal data, leading to potential misuse.
- Algorithmic Bias and Discrimination
Bias in AI Algorithms. Artificial Intelligence cameras may exhibit bias, leading to discriminatory outcomes, especially when it comes to facial recognition, which can disproportionately affect certain groups.
- Location Tracking
Geolocation Data. Artificial Intelligence cameras with GPS capabilities can track individuals’ movements, creating a detailed record of their daily routines, which can be exploited or abused.
- Secondary Use of Data
Data Sharing. Data collected by Artificial Intelligence cameras can be shared or sold to third parties without individuals’ knowledge or consent for various purposes, including targeted advertising.
- Deepfakes and Manipulation
Misuse of Artificial Intelligence. The same AI technology used in cameras to enhance images and videos can also be used to create deepfakes, which can deceive and manipulate people.
- Legal and Regulatory Challenges
Lack of Regulation. Many regions lack comprehensive regulations and legal frameworks to govern the use of Artificial Intelligence cameras, leaving individuals vulnerable to privacy violations.
Crimes still go unnoticed or unpunished. However, the use of AI cameras brings new approaches that will likely change police response as well as criminal tendencies.
To address these privacy issues, governments, organizations, and individuals must consider implementing strong data protection laws, ensuring transparency in Artificial Intelligence camera use, obtaining informed consent, regularly auditing Artificial Intelligence algorithms for bias, and establishing robust security measures for data storage. Striking a balance between the benefits of AI-powered cameras and the protection of individual privacy rights is a complex challenge that requires ongoing attention and regulation.
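The “auditing Artificial Intelligence algorithms for bias” recommendation can be made concrete with a small sketch. The group labels and match records below are invented for illustration; a real audit would use a properly sampled benchmark, but the core computation, comparing false-match rates across groups, looks like this:

```python
from collections import defaultdict

# Hypothetical face-match audit records: (group, system said match, true match).
RESULTS = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True),  ("group_b", False, False),
]

def false_match_rates(results):
    """False-match rate per group: the share of true non-matches
    that the system wrongly flagged as matches."""
    stats = defaultdict(lambda: [0, 0])  # group -> [false matches, non-matches]
    for group, predicted, actual in results:
        if not actual:          # only true non-matches can become false matches
            stats[group][1] += 1
            if predicted:
                stats[group][0] += 1
    return {g: fp / n for g, (fp, n) in stats.items() if n}

rates = false_match_rates(RESULTS)
# A large gap between groups signals a disparate error rate worth investigating.
```

In the toy data, one group is wrongly flagged twice as often as the other. That gap is exactly what a regular audit should surface before a system is allowed to tip off officers on the street.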