Police unlawfully storing images of innocent people for facial recognition

The Guardian 08.12.24

It seems legal requirements do not affect the UK police:

'Images of arrested people who were innocent of any crimes are still being stored in a police database that may be used for facial recognition purposes, an official report has warned. In 2012, the high court ruled that keeping the images of people who faced no action or who were charged and then acquitted was unlawful. Despite the ruling, custody images of innocent people are still on the Police national database, which is available to all UK police forces and selected law enforcement agencies. The images can be used for facial recognition checks of potential suspects… Charlie Whelton, policy and campaigns officer at Liberty, said: “It is deeply concerning that people who have never been charged with a crime are finding their sensitive biometric data not only unlawfully retained by police, but used to fuel the unregulated and deeply invasive use of facial recognition technology as well. “The police need to answer as to why they are still holding this highly personal data more than 10 years after the courts said this is against the law. This is even more concerning as police forge ahead with dangerous facial recognition technology that makes our photos as sensitive as our fingerprints.” He called on parliament to “urgently act to regulate the use of this technology”.’

Witnesses say the Israeli army is using facial recognition technology in its assault on north Gaza

Mondoweiss 31.10.24

Stripped of their souls, the occupying army relies on technology to carry out its war crimes:

‘Facial recognition technology used by Israel pulls from a database of information about Palestinians that has been built up over the years, including on Palestinians in the West Bank. One of those databases is called Wolf Pack, which according to Amnesty International, contains extensive information on Palestinians in the West Bank and Gaza, “including where they live, who their family members are, and whether they are wanted for questioning by Israeli authorities.”  In the old city of Hebron in the southern West Bank, Israeli surveillance cameras use a facial recognition system called Red Wolf on Palestinians who pass through checkpoints in the city. “Their face is scanned, without their knowledge or consent, and compared with biometric entries in databases which exclusively contain information about Palestinians,” Amnesty described in a May 2023 report…

Programs like “Lavender,” “The Gospel,” and “Where’s Daddy” have pushed Human Rights Watch to warn against their use of “faulty data and inexact approximations to inform military actions.” Several media exposés have also shown how some of these AI systems loosely identify civilians as targets for assassination or alert the Israeli army to target members of Hamas when they are with their families.  Testimonies gathered by Mondoweiss for this report and in previous reporting confirm that the brutal Israeli invasion in northern Gaza is utilizing these technologies as a means of organizing how it conducts mass arrests, field executions, and ethnic cleansing… Al-Fram said that the army picked people out of the queue using a “laser” pointer affixed to a tank. She described the army shining the laser on the ID cards and calling on people to advance towards the checkpoint, where the soldiers set up a camera.  “The soldiers arrested over 100 men in front of my eyes; they arrested them in front of their wives, and they were beating them, cursing them, and threatening to kill them and their families. Many wives saw their husbands in this situation.”

“The soldiers were telling the women: ‘We will kill you by a sniper bullet, we will run over your skulls with tanks, we will stone you to death, we will make you bleed to death,’” al-Fram continued. “The women were terrified and thought they would be killed.” Then, the soldiers would gather five women at a time and walk them to a security check or a scan of the face or eye. “They arrested two women in front of me from the crowd based on their face scans. People later said they were relatives of people known to be members of armed factions, but they were women. They were carrying children.”  “The soldiers ordered them to give their children to other women. The mothers started to panic like crazy. They looked around frantically for any woman they knew to give their children to,” al-Fram continued. “We would walk towards the face-scanning point in utter terror in our hearts, walking between dozens of tanks and soldiers pointing their weapons at us. And we would stand there for 3 or 5 minutes. They were the worst minutes of my life. A person’s fate was decided based on that scan: either arrest, beating and humiliation, or release them and force them to leave towards the south.”’

UK police organized crime unit seeks new facial recognition software

Biometric Update 04.10.24

It’s interesting that the article doesn’t mention FR’s poor identification accuracy:

‘From January to the end of August this year, London’s Metropolitan Police deployed the technology 117 times. In comparison, the force used facial recognition only 32 times between 2020 and 2023, according to data compiled by members of the London City Hall Greens party. Almost 771,000 people have had their faces scanned over almost five years while the most targeted areas were Croydon and Westminster. The deployments lasted more than 716 hours, according to the analysis reported by The Financial Times. The Met Police ramped up its use of facial recognition this summer during the anti-immigration riots that swept through parts of the UK, instigated by the Southport stabbing attack. The police force, however, has also been met with a lawsuit over a case of misidentification unrelated to the riots. Digital rights group Big Brother Watch argues that the case is the “tip of the iceberg” of people falsely accused after being misidentified by live facial recognition.’

UK lawmakers debate facial recognition as a solution for retail crime

Biometric Update 04.10.24

The infamous Pegasus name is becoming ever more deeply mired in intrusive spy technology:

‘To fight the scourge of shoplifting, the UK’s largest retailers agreed last year to fund a biometric police operation named Project Pegasus to the tune of £600,000 (US$752,000). The project has so far identified more than 150 individuals linked to organized retail crime and facilitated more than 23 arrests of so-called “high harm” offenders… The Pegasus initiative is a part of OPAL, a national intelligence unit focused on serious organized acquisitive crime (SOAC). Last year, former policing minister Chris Philp also floated plans to double their use of facial recognition checks against the Police National Database and expand the number of images that can be compared by drawing on other databases, including the passport and immigration database.’

The Dutch Data Protection Authority is considering whether Clearview’s directors can be held personally responsible

The Verge 03.10.24

Clearview should be buried under fines:

‘Clearview AI has been hit with its largest fine yet under Europe’s General Data Protection Regulation (GDPR) by a Dutch regulator. The Dutch Data Protection Authority, or Dutch DPA, announced a fine of 30.5 million euros, or about $33.7 million. The American facial recognition company, which built a database of images scraped from social media platforms, has been the target of regulators around the world for alleged privacy violations. It’s previously faced fines from the UK, Australia, France, and Italy and been forced to delete data on those countries’ residents… In a statement, Clearview’s chief legal officer, Jack Mulcaire, said that the company “does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR.” Mulcaire said that the decision is “unlawful, devoid of due process and is unenforceable.”’

Big tech firms profit from disorder. Don’t let them use these riots to push for more surveillance

The Guardian 07.08.24

A very opaque system of politics keeps the masses in check:

‘Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this, nor is there any statutory legal basis for it. People are placed on “watchlists” with no statutory criteria. These lists typically include victims and vulnerable people alongside “people of interest” and convicted criminals. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. And there is no specific law governing how the police, or private companies such as retailers, are authorised to use this technology… Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs… Total societal surveillance is a dangerous and poor substitute for intelligence-led community cooperation and policing. VIP lanes and cosy chats with billionaires at Bletchley Park must be replaced by parliamentary process and primary legislation. Not only would this protect our liberties under the law, but it could also go some way towards rebuilding people’s trust in politics.’

Starmer’s live facial recognition plan would usher in national ID, campaigners say

The Guardian 02.08.24

A lazy move by the new government, or is it doing Tony Blair’s bidding?

‘Civil liberties campaigners have said that a proposal made by Keir Starmer on Thursday to expand the use of live facial recognition technology would amount to the effective introduction of a national ID card system based on people’s faces. Silkie Carlo, the director of Big Brother Watch, said it was ironic the new prime minister was suggesting a greater use of facial matching on the same day that an EU-wide law largely banning real-time surveillance technology came into force’

Indian police adopt facial recognition despite risk of massive data breaches

Biometric Update 06.05.24

As authoritarianism rises, so does the use of flawed FR:

‘Tamil Nadu police’s facial recognition system was first deployed in 2021. It uses biometrics software developed by the CDAC (Centre for Development of Advanced Computing) Kolkata. Intended to be used by police officers on patrol who might need to verify information about a potential suspect, the system has been criticized for giving too much allowance to police in determining who warrants a face scan, since there are no formal criteria for identifying someone as a suspect.’

Pistols drawn as UK surveillance state duels with rights groups

Biometric Update 30.04.24

Something I’ve been warning about for some years now:

‘Retail environments are also proving to be a major arena in the facial recognition debate. The government recently pledged £55 million to expand facial recognition systems across major retail outlets in England and Wales in an effort to fight shoplifting – what Silkie Carlo, director of Big Brother Watch, calls an “abysmal waste of public money on dangerously authoritarian and inaccurate facial recognition surveillance.”

“Criminals should be brought to justice – but papering over the cracks of broken policing with Orwellian tech is not a solution,” Carlo says. “Live facial recognition may be commonplace in China but these Government plans put the UK completely out of sync with the rest of the democratic world.”’

How Israel uses facial-recognition systems in Gaza and beyond

The Guardian 19.04.24

When a land is being occupied, the population’s innate privacy rights are severely trampled upon:

'In an Amnesty International report on Israel’s use of facial recognition last year, the rights group documented security forces’ extensive gathering of Palestinian biometric data without their consent. Israeli authorities have used facial recognition to build a huge database of Palestinians that is then used to restrict freedom of movement and carry out mass surveillance, according to the report… There’s a slew of facial-recognition tools that the state of Israel has experimented with in the occupied Palestinian territories for the better part of the last decade. We’re looking at tools by the names of Red Wolf, Blue Wolf and Wolf Pack… What’s been particularly chilling about the system has been hearing the stories about individuals who haven’t been able to even come back into their own communities as a result of not being recognized by the algorithm. Also hearing soldiers speaking about the fact that now they were doubting whether they should let a person that they know very well pass through a checkpoint, because the computer was telling them not to. They were finding that increasingly they had a tendency of thinking of Palestinians as numbers that had either green, yellow or red lights associated with them on a computer screen… Even if you know that these systems are incredibly inaccurate, the fact that your livelihood might depend on following a strict algorithmic prescription means that you’re more likely to follow it. It has tremendously problematic outcomes and means that there is a void in terms of accountability.’

Making a Killing?

Somo 16.04.24

Palantir is involved in FR and in the collection of data on Palestinians murdered by Israel:

‘Mass surveillance and facial recognition have been applied in Gaza by the Israeli military “cataloging the faces of Palestinians without their knowledge or consent, according to Israeli intelligence officers, military officials and soldiers.” The New York Times has reported that technology provided by Israeli company Corsight is run by Israel’s military (cyber-) intelligence unit 8200, which was tasked with creating a “hit list.” American data analytics company Palantir Technologies, specialised in defence and intelligence services, has also confirmed that it provides services to the Israeli Ministry of Defence to support “war-related missions.” Palantir’s CEO has stated that the company’s services are responsible for “most of the targeting” in the war in Ukraine, and Time Magazine has reported that such services can identify a military target and prompt an attack within three minutes – it is plausible that similar technology is being used by Israeli armed forces in Gaza.’

Leisure centres scrap biometric systems to keep tabs on staff amid UK data watchdog clampdown

The Guardian 16.04.24

It seems that FR is being liberally used in the UK:

‘In February, the Information Commissioner’s Office (ICO) ordered a Serco subsidiary to stop using biometrics to monitor the attendance of staff at leisure centres it operates and also issued more stringent guidance on the use of facial recognition and fingerprint scanning. The ICO found that the biometric data of more than 2,000 employees had been unlawfully processed at 38 centres managed by Serco Leisure to check their attendance using facial recognition technology and in two cases via fingerprint scanning systems… Several companies supply biometric technology to help monitor staff attendance. Ian Hogg, the chief executive of Shopworks, which supplied the technology to Serco and supplies biometrics to about 40 or 50 companies, said he had been contacting clients in retail, leisure, hospitality and care homes and factories after the ICO decision… In March, an Uber Eats driver received a financial settlement, after allegations that facial recognition checks required to access his work app were racially discriminatory, which led to him being unable to access the Uber Eats app to secure work.’

Co-op supermarkets given AI tech and 200 secure tills after 44% rise in crime

The Guardian 07.02.24

Pretty soon, Pegasus will not only have total access to our health databases but will also use FR to nail shopping customers. Will Pegasus be granted China-like surveillance of the population?

‘The plan also involves the more controversial Project Pegasus under which 10 of the country’s biggest retailers, including Marks & Spencer, Boots and Primark, are handing over CCTV images to the police, to be run through databases using facial recognition technology in an effort to identify prolific or potentially dangerous individuals. Some experts argue that technology such as self-checkouts and the display of expensive goods on shelves, rather than behind counters served by staff, have contributed to the problems.'

Facial recognition cameras in supermarkets ‘targeted at poor areas’ in England

The Guardian 27.01.24

The article is misleading. FR is pervasive in all big chain supermarkets in the UK:

‘Increasingly used by police and private firms, live facial recognition operates in real-time to compare camera feeds with faces on a ­predetermined watchlist, to identify people of interest. Each time a match is found, the system generates an alert… On Friday, the House of Lords Justice and Home Affairs committee wrote to the home secretary, James Cleverly, calling on him to urgently address concerns about live facial ­recognition use by police, which it said lacked “clear legal foundation”. The committee said there were “no rigorous standards or systems of regulation” in place for monitoring the technology’s use and “no consistency in approaches to training” among police forces. “The committee accepts that live facial recognition may be a valuable tool in apprehending criminals, but it is deeply concerned that its use is being expanded without proper ­scrutiny and accountability,” it said.’

UK police have been secretly using passport database for facial recognition for 3 years

Biometric Update 08.01.24

With neither checks nor balances between the public and the police, of course these manoeuvres would be enacted:

‘Police in the UK have been using the country’s passport holder database to conduct facial recognition searches without public disclosure, a new investigation has revealed, sparking fears over privacy. The secretive practice has been going on since 2019, according to records obtained by The Telegraph and Liberty Investigates. The facial recognition searches were conducted even though policing minister Chris Philp did not mention the possibility of using the passport database until October 2023. The majority of the biometric searches were conducted during the first nine months of last year when police authorities used the technology to trawl through passport images more than 300 times. The database holds records of 46 million British passport holders. Forces have also carried out searches of the UK immigration database, which holds information on foreign nationals, the investigation showed.’

UK police use of facial recognition probed by lawmakers

Biometric Update 14.12.23

The police in the UK and elsewhere will always favour FR technology and will positively encourage it:

‘In a new paper published in the Modern Law Review, Daragh Murray, senior lecturer at Queen Mary University London School of Law, argues that legal courts should acknowledge the intrusiveness of live facial recognition compared to other police surveillance methods. The law should also establish an explicit statutory basis for the technology. “Understanding how a surveillance-related chilling effect impacts on human rights protections is complex, and the police use of Real-time Facial Recognition technology presents a challenge unlike any previously addressed by human rights law,” Murray writes.’

UK police plan nationwide rollout of NEC facial recognition on mobile phones

Biometric Update 20.11.23

The road to a police state for the UK seems to be unhindered:

‘Police in the United Kingdom are planning to identify criminals by taking their photos with mobile phones and running them through a database with facial recognition software. The app, known as Operator Initiated Facial Recognition (OIFR), is already being trialed by three UK police forces, with plans for a nationwide rollout next year. By May 2024, UK police plan to increase the use of retrospective facial recognition to identify people by 100 percent, and to set up a roadmap for OIFR on a national level… Policing officials have been promising to expand the use of live facial recognition while police in Scotland recently released data showing it has tripled the use of retrospective facial recognition over the last five years. In September, the police launched Project Pegasus, a US$752,000 police operation supported by British retailers to match CCTV images of shoplifters with those in a national police database.’

How Chinese firm linked to repression of Uyghurs aids Israeli surveillance in West Bank

The Guardian 11.11.23

Whether or not it’s Chinese tech is totally irrelevant: FR is a much-loved tool of authoritarian regimes:

‘In the occupied Palestinian territories, there are cameras everywhere. In Silwan, in occupied East Jerusalem, residents say cameras were installed by Israeli police up and down their streets, peering into their homes. One resident named Sara said she and her family “could be detected as if the cameras were just in our house … we couldn’t feel at home in our own house and had to be fully dressed all the time.” Surveillance cameras now cover the Damascus Gate, the main entrance into the old city of Jerusalem and one of the only public areas for Palestinians to gather socially and hold demonstrations. It’s at that gate that “Palestinians are being watched and assessed at all times”, according to an Amnesty International report, Automated Apartheid. These cameras have created a chilling effect on not just the ability to protest but also on the daily lives of Palestinians who live under occupation, according to Amnesty investigators. The organization had previously concluded that Israel has established a system of apartheid against Palestinians.'

UK police minister calls for more live facial recognition

Biometric Update 30.10.23

This accelerated push for FR in the UK clearly demonstrates the rise of a police state doctrine:

'In a letter to police chiefs published on Sunday, the country’s Policing Minister Chris Philp said that the UK police should use live recognition more widely to quickly identify suspects and deter crime. “AI technology is a powerful tool for good, with huge opportunities to advance policing and cut crime,” says Philp. The call came after police caught three at-large suspects using live facial recognition technology during last month’s Arsenal vs. Tottenham football match in London, including one wanted for sexual offenses.’

Major UK retailers urged to quit ‘authoritarian’ police facial recognition strategy

The Guardian 28.10.23

Retailers, small and large, have been using the tech for a few years now. I wonder if they have special deals with AI companies for training purposes, or am I being too paranoid?

‘A coalition of 14 human rights groups has written to the main retailers – also including Marks & Spencer, the Co-op, Next, Boots and Primark – saying that their participation in a new government-backed scheme that relies heavily on facial recognition technology to combat shoplifting will “amplify existing inequalities in the criminal justice system”. The letter, from Liberty, Amnesty International and Big Brother Watch, among others, questions the unchecked rollout of a technology that has provoked fierce criticism over its impact on privacy and human rights at a time when the European Union is seeking to ban the technology in public spaces through proposed legislation.’

Facial recognition will ‘transform investigative work,’ says UK’s top cop

Biometric Update 13.09.23

The UK is well and truly on the road to becoming a police state, with US tech in tow:

‘The United Kingdom’s most senior police officer believes that facial recognition will transform investigative work the same way DNA testing did 30 years ago. The London Metropolitan Police Commissioner Sir Mark Rowley hailed the technology while speaking at an event marking his first year in office, The Guardian reports... “It can absolutely be as intrusive as DNA, which is why it’s so concerning that the Met is using it to scan hundreds of thousands of innocent Londoners, often with dangerously inaccurate results,” says Silkie Carlo, director of advocacy group Big Brother Watch. Warnings have also been coming from official watchdogs. In March, Biometrics and Surveillance Camera Commissioner (OBSCC) Fraser Sampson highlighted that many police forces in the UK are falling short of the regulatory and ethical obligations of deploying facial recognition. Despite criticism, the police announced this month a new US$752,000 police operation to match CCTV images of shoplifters with those in a national police database. The plan, named Project Pegasus, is financed by 10 large supermarkets and retailers in the UK.’

Home Office secretly backs facial recognition technology to curb shoplifting

The Guardian 29.07.23

FR has been used in all major and corner shops for a few years now:

‘Home Office officials have drawn up secret plans to lobby the independent privacy regulator in an attempt to push the rollout of controversial facial recognition technology into high street shops and supermarkets, internal government minutes seen by the Observer reveal. The covert strategy was agreed during a closed-door meeting on 8 March between policing minister Chris Philp, senior Home Office officials and the private firm Facewatch, whose facial recognition cameras have provoked fierce opposition after being installed in shops.’

Privacy group challenges Ryanair's use of facial recognition

Reuters 27.07.23

It’s a new low for this horrid company:

'Digital rights group NOYB on Thursday filed a complaint against Ryanair (RYA.I), alleging that the airline is violating customers' data protection rights by using facial recognition to verify their identity when booking through online travel agents. NOYB, led by Austrian privacy activist Max Schrems, filed the complaint with Spain's data protection agency on behalf of a complainant who booked a Ryanair flight through the Spanish-based online travel agency eDreams ODIGEO.’

French Senate votes in favor of public facial recognition pilot

Biometric Update 14.06.23

France is veering into a totalitarian system:

'The French Senate voted on Monday to adopt a draft law on testing facial recognition technology in public spaces. The law will allow judicial investigators and intelligence services to use remote biometric identification in public for three years. The draft law was adopted amid opposition from human rights organizations and certain politicians with 226 votes in favor and 177 votes against from left-wing senators, Le Monde reports. In May, the Senate tried to quell public concerns by promising to set limits on the use of the technology and “prevent a surveillance society.” The new draft specifies that real-time facial recognition use in public will be limited to tracking down terrorists by intelligence services, child abductions and particularly serious crimes. In the latter two cases, judicial investigators will need to seek out authorization from the Prime Minister, prosecutor or examining judge which will only be valid for 48 hours. Retrospective use of facial recognition, i.e. on recorded videos, will be authorized for terrorism and serious crime investigations by the prosecutor or examining judge. Authorization in cases of terrorism will last for one month… A ban on real-time facial recognition in the EU’s draft AI Act does not include carve-outs for terrorism or missing persons searches, setting up a possible clash between French and EU law.’

Turkish minister sued after showcasing gov’t facial recognition app

Biometric Update 29.05.23

A minister in trouble after revealing the government’s far-reaching FR tech:

'Turkey’s minister of interior is facing a data privacy lawsuit after going on air to show off a state-developed mobile app with facial recognition capable of identifying each resident of the country… Soylu showcased the app named KIM (“who” in English) during a video interview on YouTube with Shiftdelete.net, an online culture publisher in Turkey. In the video, Soylu is seen taking a photo of the show’s host Hakki Alkan with the KIM app. In a matter of seconds, the software displays Alkan’s full name and several headshots. During the show, the minister also stated that he had learned the name of a woman and examined her data after taking her photo in a television studio… The Progressive Lawyers Association said that authorities generally process personal data for reasons such as public order and public safety. However, regardless of whether the data is processed by public authorities or other entities, they must comply with the limitations outlined by relevant regulations, they added.’

NGOs critical of biometrics technology excluded from process to draft EU AI treaty

Biometric Update 23.01.23

Eliminating and censoring criticism is the purview of authoritarian societies:

‘Some civil society organizations which have vocally opposed the use of biometrics technologies such as facial recognition say they have been excluded from a process to draft an international artificial intelligence (AI) treaty, a project undertaken by the Council of Europe (CoE).  The AI treaty being worked on is a draft legislative framework which will act as the basis for the development, deployment and use of AI-based technologies in line with the CoE’s standards on human rights, democracy and the rule of law.  Reporting by Euractiv indicates that the exclusion was proposed by the United States government and affects organizations including Algorithm Watch, Fair Trials, Homo Digitalis and the Conference of International Non-Governmental Organizations.  As an example, Algorithm Watch last year joined a campaign of around 177 civil society organizations to call for an outright ban on biometric recognition technologies, saying they are an enabler of discriminatory surveillance which is an infringement on human rights.  Euractiv reports that a decision was reached by the CoE to the effect that the drafting process will happen behind closed doors and the civil society organizations will only be brought in later in the process to give their comments on the already drafted document. Also, the organizations will not see the final text of the document before it is adopted by the CoE plenary.’

The facial recognition firm mining YOUR data (VIDEO)

Big Brother Watch 17.11.22

Seriously invasive tech from PimEyes, which must have been inspired by Clearview AI, happening on UK streets right now.

Police in China can track protests by enabling ‘alarms’ on Hikvision software

The Guardian 29.12.22

FR tech widely deployed in the UK is being used to monitor crowds in China:

‘Chinese police can set up “alarms” for various protest activities using a software platform provided by Hikvision, a major Chinese camera and surveillance manufacturer, the Guardian has learned. Descriptions of protest activity listed among the “alarms” include “gathering crowds to disrupt order in public places”, “unlawful assembly, procession, demonstration” and threats to “petition”…  While Hikvision is best known for its camera equipment, the company has joined other players in developing and providing centralized platforms for police and other law enforcement to maintain, manage, analyze and respond to information collected through the many cameras set up across China. Hikvision pitches its cloud platform, called Infovision IoT, as a means to “provide intelligent public security decision-making and services” for police in order to alleviate “uneven allocation of resources, heavy workload, inability to share data”, according to the company’s website...  At least nine alarm types are protest-related, according to a translation of the Hikvision technical guide: “gathering crowds to attack state organs”, “gathering crowds to disrupt the order of the unit”, “gathering crowds to disrupt order in public places”, “gathering crowds to disrupt traffic order”, “gathering crowds to disrupt order on public transport”, “gathering crowds obstructing the normal running of vehicles”, “crowd looting”, “unlawful assembly, procession, demonstration” and a “threat to petition”.’

Eufy doorbell cameras uploading facial recognition data to the cloud without consent

Biometric Update 30.11.22

Not sure how big Eufy’s market share is, but the findings are wholly unsurprising:

‘If you use Eufy security cameras, they may be sending biometric information to the cloud despite claims to “keep data local,” according to a report on 9to5Google, which details an ongoing back-and-forth between the Anker brand and a security professional on social media.  Last week, on Twitter, a security researcher named Paul Moore posted proof that EufyCam’s Video Doorbell Dual cameras were uploading unencrypted facial recognition data and identity information to Eufy’s cloud servers, without permission, even when the device’s cloud functionality was disabled.  “You have some serious questions to answer, Eufy,” wrote Moore, whose profile lists him as an Information Security Consultant who “defeats behavioral biometrics.” In a series of tweets, he revealed that footage from the company’s cameras could be accessed using simple audio software, without any encryption or authentication measures. “You can remotely start a stream and watch Eufy cameras live using VLC,” Moore tweeted.’

Information commissioner warns firms over ‘emotional analysis’ technologies

The Guardian 25.10.22

Finally, someone with a sensible head on their shoulders is speaking out:

'It’s the first time the regulator has issued a blanket warning on the ineffectiveness of a new technology, said Stephen Bonner, the deputy commissioner, but one that is justified by the harm that could be caused if companies made meaningful decisions based on meaningless data…  Simply using emotional analysis technology isn’t a problem per se, Bonner said – but treating it as anything more than entertainment is…  In spring 2023, the regulator will be publishing guidance on how to use biometric technologies, including facial, fingerprint and voice recognition. The area is particularly sensitive, since “biometric data is unique to an individual and is difficult or impossible to change should it ever be lost, stolen or inappropriately used”.’

Texas sues Google for allegedly capturing biometric data of millions without consent

Reuters 20.10.22

Rampant abuse of the law by Google:

'Texas has filed a lawsuit against Alphabet's (GOOGL.O) Google for allegedly collecting biometric data of millions of Texans without obtaining proper consent, the attorney general's office said in a statement on Thursday.  The complaint says that companies operating in Texas have been barred for more than a decade from collecting people's faces, voices or other biometric data without advanced, informed consent.  "In blatant defiance of that law, Google has, since at least 2015, collected biometric data from innumerable Texans and used their faces and their voices to serve Google’s commercial ends," the complaint said. "Indeed, all across the state, everyday Texans have become unwitting cash cows being milked by Google for profits.”  The collection occurred through products like Google Photos, Google Assistant, and Nest Hub Max, the statement said.  Google said it would fight the lawsuit, saying that users of the services had the option to turn off the biometric collection feature.’

Imagine the Web pointed cameras back at you. That’s what the metaverse will be like

Biometric Update 14.10.22

Biometrics is the new portal through which tech companies will own your face, voice and emotions:

‘Every (legless) step of the way to life in the metaverse is going to require the recording of one biometric identifier or another. Logging on, buying and selling anything, dating, attending business meetings – if passwords are not being typed, biometrics will fill the void.  It is true that there might be trusted entities that vouch for one’s identity.  Maybe people in the metaverse will have digitally represented payment and ID cards, for instance. Or Apple will transplant its model for alleviating its customers of their need to use most passwords.  But in every case, anyone hoping to have a full life while wearing opaque goggles while sitting in a coffee shop, will have to surrender their voice, face, iris and/or fingerprint to someone else. And there is no reason to stop there. Heartbeats, breath, ear canal structure, anything biometric can and someday will be ID currency.  It is fascinating to see venerable tech culture and business magazine Wired write with a note of alarm about how Facebook’s father organization Meta Platforms plans to put five cameras in a future headset…  

What is fascinating about Wired’s big take here is that it is the only magazine in the world purpose-built at the inception of the internet age. No one has been fawning over/warning about cyberspace as long as its editors have (and in more insufferable fonts and colors).  Now there are misgivings.  Wired’s not alone. Time, one of the United States’ oldest surviving news and culture publications, reports qualms, too.  And The Information, launched in 2013 to dig out deep tech industry stories, this month published a piece noting that Apple is expected to launch its line of virtual reality headsets and content next year with iris scanners for ID verification.’

Convenience store spy cameras face legal challenge

BBC 27.07.22

Look up at any supermarket’s ceiling and you will discover that ALL of them have riddled their stores with FR:

'The Southern Co-Op chain is facing a legal challenge to its use of facial recognition technology to cut crime…  Big Brother Watch has challenged the legality of the system in a submission to the ICO, shared with the BBC.  The group says the biometric scans are "Orwellian in the extreme”.  "The supermarket is adding customers to secret watch-lists with no due process, meaning shoppers can be spied on, blacklisted across multiple stores and denied food shopping despite being entirely innocent," said Big Brother Watch's director Silkie Carlo.  "This is a deeply unethical and a frankly chilling way for any business to behave."'

London police paying public to test live facial recognition during live operation

Biometric Update 15.07.22

The Met keeps failing at FR:

‘London Metropolitan Police have been paying members of the public £50 (US$59) to take part in live facial recognition testing in central London for research and equality purposes during live operations using the technology where arrests were made for people wanted on drugs and violence charges…  The reason for the testing is to “help the Metropolitan Police Service fulfill its Public Sector Equality Duties regarding the uses of facial recognition.” The Metropolitan Police Service is currently in special measures following a series of scandals involving misogyny and racism...  “As well as scanning thousands of members of the public, the Met police used actors and children as young as 14 as subjects to test their facial recognition algorithm. The force were experimenting to ensure that even people wearing masks, hats or glasses can be subjected to a biometric identity check,” writes Big Brother Watch Legal and Policy Officer Madeleine Stone.  “The deployment was another embarrassment for the Met’s highly inaccurate technology. We witnessed several wrongful interventions, including a French exchange student, who was flagged despite being in the country for just a few days. It is critical that parliament urgently gets a grip of this dystopian technology and bans its use.”…  Police say 15,600 people had their biometrics processed during that event. The recent update on police facial recognition use states that “After the testing is complete, the data from the volunteers and the CCTV footage will be kept by the Met and for a longer period than we would normally hold it for. This retention period is currently set at three years and is then subject to review.”  London Assembly Member Zack Polanski tweeted photos of police signage at Oxford Circus which state the data processed of anyone passing through will be kept for three years.’

The world’s biggest surveillance company you’ve never heard of

Technology Review 22.06.22

A surveillance camera system that is ubiquitous in the UK, and elsewhere:

'Its founding team consisted mostly of engineers at China Electronics Technology Group Corporation, a state-owned company that makes electronic products for both civilian and military uses. In 2008, Hikvision transferred 48% of its shares to CETC, making Hikvision officially a subsidiary of a state-owned firm…  But Hikvision has also been aiming to go global since the start. It started registering its name as a trademark in over a hundred countries as early as 2004. In 2010, it was the top digital video recorder provider in the world thanks to its network of surveillance cameras, which use DVRs to record the footage. Overseas sales made up 27% of its $12.42 billion in revenue in 2021.   It’s hard to know exactly how many Hikvision cameras are being used around the world, but research in 2021 by industry research group Top10VPN used the Shodan search engine (which scans the internet for the unique IP addresses used by devices, in this case cameras) to find 4.8 million networks of Hikvision devices in 191 countries outside China…  Each of these detected IP networks can support up to 24 Hikvision cameras, meaning the total numbers of cameras will be even higher. And that is only a conservative estimate, because not all the cameras appear in Shodan scans.  For example, the research found 55,455 Hikvision networks in London. “From my experience of just walking around London, it would probably be several times over that. They're in almost every supermarket,” says Samuel Woodhams, a researcher at Top10VPN who carried out the study.’

Facial recognition technology: how it’s being used in Ukraine and why it’s still so controversial

The Conversation 14.06.22

It seems that nothing much deters this parasitic company from grabbing every photo in sight:

‘Ukraine’s Ministry of Defence has been using Clearview AI facial recognition software since March 2022 to build a case for war crimes and identify the dead – both Russian and Ukrainian. The Ministry of Digital Transformation in Ukraine said it is using Clearview AI technology to give Russians the chance to experience the “true cost of the war”, and to let families know that if they want to find their loved ones’ bodies, they are “welcome to come to Ukraine”.  Ukraine is being given free access to the software. It’s also being used at checkpoints and could help reunite refugees with their families…  Clearview AI’s chief executive Hoan Ton-That said its facial recognition software has allowed Ukrainian law enforcement and government officials to store more than 2 billion images from VKontakte, a Russian social networking service. Hoan said the software can help Ukrainian officials identify dead soldiers more efficiently than fingerprints, and works even if a soldier’s face is damaged.  But there is conflicting evidence about facial recognition software’s effectiveness. According to the US Department of Energy, decomposition of a person’s face can reduce the software’s accuracy. On the other hand, recent scientific research demonstrated results relating to the identification of dead people that were similar to or better than human assessment.  Research suggests fingerprints, dental records and DNA are still the most reliable identification techniques. But they are tools for trained professionals, while facial recognition can be used by non-experts.  Another issue flagged by research is that facial recognition can mistakenly pair two images, or fail to match photos of the same person. In Ukraine, the consequences of any potential error with AI could be disastrous. An innocent civilian could be killed if they are misidentified as a Russian soldier…  

Over the last two years, data protection authorities in Canada, France, Italy, Austria and Greece have all fined, investigated or banned Clearview AI from collecting images of people.  The future of Clearview AI in the UK is uncertain. The worst-case scenario for ordinary people and businesses would be if the UK government fails to take on board the concerns raised in response to its consultation on the Modern Bill of Rights. Liberty has warned of a potential human rights “power grab”.  The best outcome, in my opinion, would be for the UK government to scrap its plans for a Modern Bill of Rights. This would also mean that UK courts should continue to take account of cases from the European Court of Human Rights as case law.  Unless laws governing the use of facial recognition are adopted, police use of this technology risks breaching privacy rights, data protection and equality laws.’

Facial recognition in Europe: what’s happening and how can people be protected?

Biometric Update 17.05.22

Let’s hope that FR in public spaces is outlawed once and for all:

'The European Data Protection Board (EDPB) has put out for public consultation its Guidelines on the Use of Facial Recognition Technology in the Area of Law Enforcement...    It should be noted that the body takes a hard line on the use of facial recognition. The EDPB is an independent body that ensures the application of GDPR across the European Economic Area. Not to be confused with the European Data Protection Supervisor, EDPS, which is the EU’s independent data protection authority focused more on the bloc’s own institutions.  The two bodies work together and in May 2021 called for a “general ban on any use of AI for an automated recognition of human features in publicly accessible spaces” in a joint opinion on the AI Act. They also called for a ban on categorization, emotion analysis and scraping the internet to create facial image databases.’

Draft bill allows Israeli police access to face biometrics from public CCTV

Biometric Update 10.05.22

With most of the world relying on biometrics to ‘prevent’ crimes, it’s a sorry state for the future of innocent citizens:

‘Israel’s Ministerial Committee for Legislation cleared a biometric surveillance bill on Sunday that will enable security services to access information from CCTV and use facial recognition without a warrant, Haaretz reports.  The bill would also provide a legal basis for the currently-deployed ‘Eagle Eye’ system, which is already used to track vehicular movement across the country (and which will be further regulated by the new bill)…  Within the Ministerial Committee for Legislation, only Aliyah and Integration Minister Pnina Tamano-Shata was against the new bill.  “When the police can place biometric cameras in every neighborhood with the wave of a finger, it leads to abuse and excessive enforcement of particular populations,” she said.  The minister also warned against demographic biases commonly associated with facial recognition systems.  Tamano-Shata’s views were countered by Justice Minister Gideon Saar who said that “When it comes to reining in terror, I take the invasion of privacy with a grain of salt. It’s a public space.”’

UK schools introducing biometrics without due care, legal backing, report says

Biometric Update 05.05.22

A truly insidious way to enter the public domain:

‘‘The State of Biometrics 2022: A Review of Policy and Practice in UK Education’ is written by Pippa King and Jen Persson for advocacy group defenddigitalme, with a foreword written by UK Biometrics and Surveillance Camera Commissioner Fraser Sampson.  The organization says children’s rights advocates and lawmakers are concerned about the intrusiveness of biometric data in educational settings.  Sampson begins by asking who benefits from the school biometric deployments, who is performing oversight, whether the trustworthiness of partners like technology providers has been considered, what the motivation for deployment is and why it is being hurried.  “Some – including, surprisingly, the Department for Education – appear to have taken the view that bare compliance with Chapter 2 of the Act is all that is required to ensure the lawful, ethical and accountable use of biometric surveillance in schools,” writes Sampson in the foreword.  “While Chapter 2 addresses one narrow legal issue (that of protecting biometric information of children in schools) and guidance on its practical implementation is vital, I do not believe that this excludes the need to address the many wider technological, legal and societal issues of biometric surveillance generally. If biometric surveillance is to have a legitimate role in places of education, both role and legitimacy will need a much broader approach than this.”…  

UK schools are trialing experimental technologies for attentiveness analysis and allowing mission creep, the report suggests, while regulatory enforcement is absent, as in the case of a Scottish school district instructed by the ICO to stop using facial recognition last October.  The main suppliers in the market are identified as CRB Cunninghams, Gladstone and AMI, all Constellation Software subsidiaries, Iris Software Group’s Biostore, Civica and the related National Retail Systems, Live Register, Live Register’s Vericool, and Synel.  The UK Protection of Freedoms Act 2012 and the current version of the UK’s data protection law are insufficient for protecting the rights of children in schools, according to the grim assessment. To bring the right measure of protection to the situation, the authors urge expanded legislation, stronger oversight, and bringing educational settings under the remit of the Surveillance Camera Commissioner.’

South Africa’s private surveillance machine is fueling a digital apartheid

Technology Review 19.04.22

South Africa’s apartheid credentials make it easy for FR to be embraced:

‘But then fiber coverage expanded, AI capabilities advanced, and companies abroad, seeing an opportunity, began dumping the latest surveillance technologies into the country. The local security industry, forged under the pressures of a high-crime environment, embraced the menu of options.  The effect has been the rapid creation of a centralized, coordinated, entirely privatized mass surveillance operation. Vumacam, the company building the nationwide CCTV network, already has over 6,600 cameras and counting, more than 5,000 of which are concentrated in Joburg. The video footage it takes feeds into security rooms around the country, which then use all manner of AI tools like license plate recognition to track population movement and trace individuals…  Whereas South Africa has just over 1,100 police stations with just over 180,000 staff members, there are 11,372 registered security companies and 564,540 actively employed security guards, more than the police and the military combined.   The imbalance is a remnant of apartheid. In the late 1970s, the ruling National Party deployed police to protect its political interests, controlling widespread unrest in opposition to the government. These duties took precedence over actual police work, leaving an opening for private players…  Vumacam partnered with the Chinese company Hikvision and the Swedish company Axis Communications to provide the hardware while iSentry and Milestone, a popular Denmark-based video surveillance management tool, provided the software. From there, it teamed up with private agencies patrolling wealthier residential areas and erected poles with high-definition cameras where they wanted on top of Johannesburg’s fiber network…  

But absent from the conversation is why the crime exists in the first place. Researchers of industrial societies have repeatedly demonstrated that inequality drives crime. Not only is South Africa the world’s most unequal country, but the gap is deeply racialized, a part of apartheid’s legacy. The latest government reports show that in 2015 half of the country lived in poverty; 93% of those people were Black.  As a result, it’s predominantly white people who have the means to pay for surveillance, and predominantly Black people who end up without a say about being surveilled.  Adding to it all, AI tools like facial recognition and anomaly detection don’t always work, and the consequences aren’t evenly distributed. The likelihood that facial recognition software will make a false identification increases dramatically when footage is recorded outdoors, under uncontrolled conditions, and that risk is much greater for Black people.’

CPC: Criminal Procedure Identification Bill raises fears of surveillance in India

BBC 13.04.22

FR may not be as ubiquitous in India as it is in China, UK and the US, but biometric surveillance is on the rise in an effort to subjugate swathes of the population:

'The Criminal Procedure (Identification) bill, which was passed in parliament last week, makes it compulsory for those arrested or detained to share sensitive data - like iris and retina scans. The police can retain this data for up to 75 years. The bill will now be sent to the president for his assent…  India's current prison law - the Identification of Prisoners Act, 1920 - allows the police to collect only photographs, fingerprint and footprint impressions. But it limits this to those who have been convicted, are out on bail, or those charged with offences punishable with rigorous imprisonment of one year.  The new law, however, massively expands its ambit to include other sensitive information such as fingerprints, retina scans, behavioural attributes - like signatures and handwriting - and other "biological samples”…  It's not uncommon for investigative agencies to mine personal data. Several countries including the US and the UK, collect biometric identifiers - facial features, fingerprints or retina scans - of people who are arrested or convicted.  But unlike the UK and US, India also doesn't have robust systems to investigate alleged police misconduct, Mr Kodali says.  The increasing use of facial recognition technology by governments and law enforcement has become a contentious issue the world over. This is especially true in authoritarian regimes where the data can be used to track citizens…  There is also a certain fear over the government's ability to protect this data in the age of unscrupulous data leaks and hacks. In fact, data leaks involving the personal information of Indian citizens from the world's largest biometric ID database programme, the Aadhaar, have been widely reported in the past.  
"Since the bill allows law-enforcement to gather increasing amounts of data from anyone who is detained, there is nothing in it that would prevent, say, a police officer from detaining a peaceful protester and collecting their data to use against them later," says Mr Sharma.’

Facial recognition planned for UK schools without Biometrics Commissioner consultation

Biometric Update 11.04.22

Get them young, as the UK veers toward a police state:

‘UK Biometrics and Surveillance Camera Commissioner Fraser Sampson says he was not consulted over plans drafted by the Department for Education (DfE) to deploy facial recognition cameras to scan pupils’ biometrics at schools across the country…  This is not the first time Sampson warns against the dangers of mass surveillance. Since his appointment in March 2021, the Biometrics Commissioner has been actively campaigning to make sure that these technologies are deployed lawfully and with safeguards to protect individuals’ privacy.  “Where is the lawful purpose of introducing this clearly intrusive type of technology into a school?” he said in regards to the recent DfE plans.  “How does any of this fit with much wider government obligations on the UN convention on the rights of the child not to be subject to close scrutiny and have the freedom to sit in a classroom without being watched, let alone recorded?”’

China uses AI software to improve its surveillance capabilities 

Reuters 08.04.22

When FR runs on steroids, there’s nowhere to hide, whether you’re a Uighur or not:

‘According to more than 50 publicly available documents examined by Reuters, dozens of entities in China have over the past four years bought such software, known as "one person, one file". The technology improves on existing software, which simply collects data but leaves it to people to organise.  “The system has the ability to learn independently and can optimize the accuracy of file creation as the amount of data increases. (Faces that are) partially blocked, masked, or wearing glasses, and low-resolution portraits can also be archived relatively accurately,” according to a tender published in July by the public security department of Henan, China’s third-largest province by population…  At least four of the tenders said the software should be able to pull information from the individual's social media accounts. Half of the tenders said the software would be used to compile and analyse personal details such as relatives, social circles, vehicle records, marriage status, and shopping habits.’

Clearview AI plans to offer face biometrics services to banks, other private businesses

Biometric Update 04.04.22

Having illegally siphoned billions of images from social media, Clearview has been working with US federal agencies and is now poised to reap untold riches through private businesses:

‘According to Ton-That, the new “consent-based” solution will reportedly not rely on the company’s 20-billion image database, which will remain reserved for law enforcement use.  The global market for biometric identity verification is expected to leap dramatically for the duration of the 2020s.  The news comes amidst an eventful March for Clearview AI, which saw the company fined €20 million by the Italian privacy guarantor, and making its face biometrics service available to the Ukrainian Ministry of Defense to deliver news of the Russian war dead to families.'

Police in NYC, South Wales, Dubai deploy more facial recognition, with public support in US

Biometric Update 18.03.22

Police states around the world embrace FR, with public support according to an extensive survey.  I can only conclude that the respondents have not been fully informed of the tech’s dangerous pitfalls:

'A new survey by Pew Research shows 46 percent of U.S. adults think the “widespread use of facial recognition technology by police to monitor crowds and look for people who may have committed a crime” is a good idea for society. Twenty-seven percent think this would be bad, and 27 percent are unsure, of more than ten thousand surveyed.  This comes as the NYPD brings back its 400-strong Neighborhood Safety Teams to tackle gun crime, this time with uniforms and facial recognition-enabled body cams. The South Wales Police are back to live facial recognition trials in public and the Dubai Police are deploying the first of 400 smart patrol vehicles.’

Exclusive: Ukraine has started using Clearview AI’s facial recognition during war

Reuters 14.03.22

Another tech giant to be used in conflict, one that will add millions more data nuggets to its business:

‘Ukraine's defense ministry on Saturday began using Clearview AI’s facial recognition technology, the company's chief executive told Reuters, after the U.S. startup offered to uncover Russian assailants, combat misinformation and identify the dead.  Ukraine is receiving free access to Clearview AI’s powerful search engine for faces, letting authorities potentially vet people of interest at checkpoints, among other uses, added Lee Wolosky, an adviser to Clearview and former diplomat under U.S. presidents Barack Obama and Joe Biden…  The Clearview founder said his startup had more than 2 billion images from the Russian social media service VKontakte at its disposal, out of a database of over 10 billion photos total…  

A mismatch could lead to civilian deaths, just like unfair arrests have arisen from police use, said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project in New York.  “We’re going to see well-intentioned technology backfiring and harming the very people it’s supposed to help,” he said.  Ton-That said Clearview should never be wielded as the sole source of identification and that he would not want the technology to be used in violation of the Geneva Conventions, which created legal standards for humanitarian treatment during war…  Clearview, which primarily sells to U.S. law enforcement, is fighting lawsuits in the United States accusing it of violating privacy rights by taking images from the web. Clearview contends its data gathering is similar to how Google search works. Still, several countries including the United Kingdom and Australia have deemed its practices illegal.  Cahn described identifying the deceased as probably the least dangerous way to deploy the technology in war, but he said that “once you introduce these systems and the associated databases to a war zone, you have no control over how it will be used and misused.”’

Statewatch warns of worsening ethnic profiling risk in EU as identity systems expand

Biometric Update 02.03.22

It goes without saying that the snooping fervour sweeping global border policy will be targeting people of ‘colour’:

‘Monitoring organization Statewatch warns that new biometric identity controls used by police and immigration authorities in the European Union could see ethnic minority citizens and non-citizens subjected to unwarranted intrusions into their lives. Its new report also warns readers not to get caught up in the new technologies coming online, and to consider the structures and policies behind their use.  ‘Building the Biometric State: Police Powers and Discrimination’ hopes to find ways to hold authorities publicly and politically accountable for what it foresees as increasing discrimination as a result of biometric policy and technology. It calls for a “firewall” between policing and public services.  The report provides an overview and twenty-year timeline of the EU’s plans and actions for biometrics. It states that on the one hand, the bloc has invested at least €290 million (US$322 million) in public research into biometric technologies since 1998. On the other, the biometric boom has been fuelled by “secretive police and policy networks that operate with little or no democratic scrutiny,” allowing the use of biometrics to spread from police stations, to the streets and borders.  It scrutinizes the Common Identity Repository which is being built to contain solely the data on foreign nationals living in the EU. Statewatch fears that the database, plus the bloc’s drive for interoperability for border policies will increase the pressure for biometric registration of foreign nationals and more checks aimed at detecting those without the correct documents.  Skin color could become a “proxy for immigration status” as technology makes identity checks ever easier to conduct… 

“In a context of systemic racism and discrimination and a continued drive by both national governments and EU institutions to identify increasing numbers of foreign nationals in order to deport and/or exclude them from their territory, the attempt to extend and entrench the deployment and use of biometric technologies must be interrogated and challenged, as part of the broader fight against state racism and ethnic profiling, and for racial equality and social justice,” states the report.'

The airport tech helping to prevent delayed flights

BBC 07.02.22

The rise of digital IDs and biometric authentication presages a dystopian future.  The fact that the companies creating such systems are not in the least regulated is extremely troubling:

‘IntellAct says its system is currently being evaluated by a number of airports in Europe, Asia and the Middle East - and that preliminary tests conducted by Israeli carrier, El Al, at the country's Ben Gurion Airport showed it could potentially cut turnaround times by 15%.  Yet, Prof Sandra Wachter, a senior research fellow in AI at Oxford University, says such high-tech staff monitoring systems at airports raises concerns.  "Monitoring how often employees take breaks, how fast they work and incentivising them to work as fast as possible is stressful and dehumanising," she says.  "In fact, a stressful work environment is more likely to lead to mistakes, oversights or accidents. And this is particularly problematic in air traffic where the stakes are so high.”… If this all sounds terribly futuristic and a bit worrying, one US airline has already gone one stage further. Since last year, Delta has been trialling a scheme at Detroit and Atlanta international airports whereby your face is your passport, as Forbes first reported....  You don't have to take out or scan your physical passport. It is a more advanced facial recognition system than the ones many of us have already used at airports' passport control that do require you to scan your physical passport so the system can match your face to your photo…  Yet Prof Wachter says such technology raises important privacy issues that need to be debated.  "The algorithm is quickly checking your passport against a database," says Prof Wachter. "But having a database with very detailed information, including biometric data, causes risks. What is this data used for? Who has access to this data? Will this data be shared with third parties?’

Biometric surveillance: Face-first plunge into dystopia

Al Jazeera 31.01.22

FR adoption is seriously addictive to authoritarian regimes:

'Biometric facial comparison technology is currently “deployed at 205 airports for air entry” and 32 for departure, as well as at 12 seaports and at “virtually all pedestrian and bus processing facilities” along the country’s northern and southern land frontiers. Between June 2017 and November 2021, more than 117 million people got to “say hello” to Big Brother at the US border…  In 2019, NPR reported on the Israeli tech company AnyVision, the developer of both the facial recognition software utilised to identify Palestinians at Israeli military checkpoints in the West Bank as well as the technology deployed in a secret Israeli military surveillance project of West Bank Palestinians. According to NPR, AnyVision – which was at the time receiving funding from Microsoft – “wouldn’t identify its clients but said its technology is installed in hundreds of sites in over 40 countries”. Then-CEO Eylon Eshtein was quoted as follows: “I don’t operate in China. I also don’t sell in Africa or Russia. We only sell systems to democratic countries with proper governments.”  Given the Israeli army’s track record of, like, slaughtering 22 members of one Palestinian family in the Gaza Strip in one fell swoop, it seems democracy and propriety are perhaps overrated.’

The IRS Needs to Stop Using ID.me's Face Recognition, Privacy Experts Warn

Gizmodo 26.01.22

Digital IDs are around the corner.  Heralded by vaccine passports, they are now being used in all walks of life; envy of the PRC’s political system is catching up:

‘Specifically, ID.me told Gizmodo it uses 1:many face recognition when users first enroll in its system to prevent identity theft, which is in addition to the 1:1 check it uses to verify someone’s identity. In other words, ID.me uses 1:1 to make sure you are you, and 1:many to make sure you’re not someone else.  The revelation of ID.me’s use of 1:many face recognition drew immediate criticisms from a wide range of privacy groups. One of those, digital rights nonprofit Fight For the Future, released a statement accusing the company of “lying about the scope of its facial recognition surveillance.” In an emailed statement, Fight for the Future campaign director Caitlin Seeley George said the revelations should make government agencies reconsider their partnerships with ID.me.  “The IRS needs to immediately halt its plan to use facial recognition verification, and all government agencies should end their contracts with ID.me,” Seeley George wrote. “We also think that Congress should investigate how this company was able to win these government contracts and what other lies it might be promoting.”’
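The distinction between the two matching modes is worth spelling out. The sketch below is a toy illustration, not ID.me’s actual pipeline: the embeddings, names and threshold are hypothetical, and real systems compare neural-network face vectors of hundreds of dimensions. It shows why 1:1 (verify a claimed identity) and 1:many (search everyone enrolled) are very different operations with very different privacy implications:

```python
import math

def cosine(a, b):
    """Cosine similarity between two face embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.9  # hypothetical match threshold

def verify_1_to_1(probe, claimed):
    """1:1 — is the probe the person they claim to be? One comparison."""
    return cosine(probe, claimed) >= THRESHOLD

def identify_1_to_many(probe, enrolled):
    """1:many — does the probe match *anyone* enrolled? One comparison per person."""
    return [name for name, emb in enrolled.items()
            if cosine(probe, emb) >= THRESHOLD]

# Toy "embeddings"; real ones come from a face-recognition network.
enrolled = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
probe = [0.88, 0.12, 0.21]  # a new selfie

print(verify_1_to_1(probe, enrolled["alice"]))  # True: probe matches Alice
print(identify_1_to_many(probe, enrolled))      # ['alice']: duplicate found
```

The privacy concern follows directly from the code: 1:1 only needs the claimed person’s template, while 1:many requires holding and scanning the entire enrolled database for every new face.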

Feds' spending on facial recognition tech expands, despite privacy concerns

Cyberscoop 10.01.22

A company that’s been ordered to cease illegal FR collection of the public has now entered into a financial deal with the FBI:

'The contracts demonstrate that despite a growing chorus of concerns from lawmakers, regulators and civil liberties advocates about the dangers of facial recognition technology, federal law enforcement agencies have no interest in rolling back their use of the technologies. Instead, they’re plowing ahead with private partnerships with companies whose databases of photos of private citizens eclipse government databases in scale.’

Clearview AI's 'Search Engine for Faces' Set to Receive Patent

Gizmodo 06.12.21

That’s a middle finger flipped towards the regulatory body:

‘Clearview AI, the notorious facial recognition company which has partnered with over 2,400 law enforcement agencies across the U.S., is about to receive a patent for what it describes as a first of its kind, “search engine for faces.” Politico, which was the first to discover the patent originally filed in August 2020, determined the U.S. Patent and Trademark Office had sent Clearview a notice of allowance last week. That means Clearview essentially has the patent in the bag so long as it pays its administrative fees. And with well over $38 million raised so far in funding according to Crunchbase, paying the bill shouldn’t be a problem… Privacy advocates and researchers oppose the patent and worry it could normalize Clearview’s data scraping practices before lawmakers have a chance to pass meaningful data privacy regulations constraining the technology. “The part that they’re looking to protect is exactly the part that’s the most problematic,” Amnesty International researcher Matt Mahmoudi told Politico. “They are patenting the very part of it that’s in violation of international human rights law.”’

Mass surveillance fuels oppression of Uighurs and Palestinians

Al Jazeera 24.11.21

As we move towards an entrenched dystopian world, there should be a global outcry against this tech:

‘The Israeli military is reportedly using facial recognition to build a massive database of personal information on Palestinians in the occupied West Bank, which includes their photos, family histories and education, and assigns them a security rating. When soldiers, outfitted with an official smartphone Blue Wolf app, scan a Palestinian’s face, the app shows yellow, red or green to indicate whether the person should be detained or allowed to pass. To one of us – a researcher on China for Human Rights Watch – the Israeli Blue Wolf system is eerily familiar. A similar mass surveillance system is in use by the Chinese authorities in Xinjiang, called Integrated Joint Operations Platform (IJOP), which acts as the “brain” behind various sensory systems throughout the region. IJOP is also a big data system, which detects “abnormality” as arbitrarily defined by the authorities… In recent years growing attention has been paid to China’s use and export of mass surveillance. But Chinese companies are not alone. Surveillance technologies have proliferated globally in a legal and regulatory vacuum.’

Israel escalates surveillance of Palestinians with facial recognition program in West Bank

The Independent 09.11.21

FR thrives in the world’s most oppressive country, propped up by superlative global tech:

‘The sweeping surveillance effort utilises a smartphone technology called Blue Wolf and has been in operation for the past two years, as reported by The Washington Post. A former Israeli soldier told the newspaper the technology was like a “Facebook for Palestinians”, as it was able to capture photos of Palestinians and match them with a database. The photos to build the database for Blue Wolf were taken by the Israeli army of Palestinians of varying ages. It is unclear how many photos have been taken for the surveillance programme, but it is understood to be thousands. Soldiers are then able to use an app that will flash different colours to ascertain if a Palestinian is to be arrested, detained or left alone.'

I promised you an update on my FR opposition directed against my local supermarkets (Sainsbury’s); following emails to my local MP, who has since written to the supermarket chain directors, I’m happy to announce that an A4 sign is now prominently displayed at the stores’ entrance. They use the word ‘CCTV’ rather than facial recognition but it’s a step in the right direction. Please use your powers as citizens to encourage transparency by retail giants. Every little action helps.

European Parliament backs ban on remote biometric surveillance

TechCrunch 06.10.21

Excellent move by the European parliament. Hopefully this will set the stage for others to follow:

‘To respect “privacy and human dignity”, MEPs said that EU lawmakers should pass a permanent ban on the automated recognition of individuals in public spaces, saying citizens should only be monitored when suspected of a crime. The parliament has also called for a ban on the use of private facial recognition databases — such as the controversial AI system created by U.S. startup Clearview (also already in use by some police forces in Europe) — and said predictive policing based on behavioural data should also be outlawed… Commenting in a statement, rapporteur Petar Vitanov (S&D, BG) said: “Fundamental rights are unconditional. For the first time ever, we are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results. We are clearly opposed to predictive policing based on the use of AI as well as any processing of biometric data that leads to mass surveillance. This is a huge win for all European citizens.”… The parliament’s resolution also calls for a ban on AI assisting judicial decisions — another highly controversial area where automation is already being applied, with the risk of automation cementing and scaling systemic biases in criminal justice systems.’

The Guardian view on biometric technology in schools: watch closely

The Guardian 18.10.21

Seriously intrusive and totally unnecessary:

‘Last year, the court of appeal ruled that the use of facial recognition technology by police in Wales breached privacy and equality laws. In the US and Sweden, schools have been stopped from using it to monitor attendance or security. Typically, the buyers and sellers of these systems present them as useful tools and nothing more. But as Prof Kate Crawford, the author of a recent book about AI, and other critics have pointed out, the companies at this point are running ahead of democratic debate and decision-making. The challenges of how to regulate and secure consent for the kinds of information gathering that digital technology makes possible are a long way from being answered. And while this is the case it is ethically dubious, to put it mildly, to use children as guinea pigs.’

The covid tech that is intimately tied to China’s surveillance state

Technology Review 11.10.21

Surveillance tech knows no borders:

‘Just a few months later, across town, Amazon—the world’s wealthiest technology company—received a shipment of 1,500 heat-mapping camera systems from the Chinese surveillance company Dahua. Many of these systems, which were collectively worth around $10 million, were to be installed in Amazon warehouses to monitor the heat signatures of employees and alert managers if workers exhibited covid symptoms. Other cameras included in the shipment were distributed to IBM and Chrysler, among other buyers…  In fact, numerous studies have shown that surveillance systems support systemic racism and dehumanization by making targeted populations detainable. The past and current US administrations’ use of the Entity List to halt sales to companies like Dahua and Megvii, while important, is also producing a double standard, punishing Chinese firms for automating racialization while funding American companies to do similar things.   Increasing numbers of US-based companies are attempting to develop their own algorithms to detect racial phenotypes, though through a consumerist approach that is premised on consent. By making automated racialization a form of convenience in marketing things like lipstick, companies like Revlon are hardening the technical scripts that are available to individuals.   As a result, in many ways race continues to be an unthought part of how people interact with the world. Police in the United States and in China think about automated assessment technologies as tools they have to detect potential criminals or terrorists. The algorithms make it appear normal that Black men or Uyghurs are disproportionately detected by these systems. They stop the police, and those they protect, from recognizing that surveillance is always about controlling and disciplining people who do not fit into the vision of those in power. The world, not China alone, has a problem with surveillance.’

London is buying heaps of facial recognition tech

Wired 29.09.21

So glad I took up the FR issue with my MP - it seems she’s fighting against it:

‘The UK’s biggest police force is set to significantly expand its facial recognition capabilities before the end of this year. New technology will enable London’s Metropolitan Police to process historic images from CCTV feeds, social media and other sources in a bid to track down suspects. But critics warn the technology has “eye-watering possibilities for abuse” and may entrench discriminatory policing…  Political support for the use of facial recognition remains contested in the UK, with MPs from Labour, the Liberal Democrats and the Green Party all calling for regulations on the use of the technology. “I’m disappointed to see this latest development in the Met’s use of Retrospective Facial Recognition software,” says Sarah Olney, Liberal Democratic MP for Richmond Park. “It comes despite the widespread concerns as to its accuracy, along with its clear implications on human rights. Better policing ought to start from a foundation of community trust. It’s difficult to see how RFR achieves this.”’

Ten federal agencies are expanding their use of facial recognition

Engadget 06.08.21

There’s a ‘snooping virus’ affecting everyone.  The War on Terror has morphed into the War on Privacy:

‘The Government Accountability Office has revealed in a new report that 10 federal agencies are planning to expand their use of facial recognition. In a survey involving 24 federal agencies on their use of facial recognition technology, the Agriculture, Commerce, Defense, Homeland Security, Health and Human Services, Interior, Justice, State, Treasury and Veterans Affairs departments told GAO that they're planning to use facial recognition in more areas through fiscal year 2023…  As of last year, the system, which can identify people in real time, was reportedly in use by 600 police departments across the US, including the FBI and DHS.’

Serbia’s smart city has become a political flashpoint

Wired 10.08.21

It’s not just ‘dictatorships’ that are using FR tech to prevent crime; its use is also ubiquitous in UK supermarkets.  (Am waiting to hear back from my local MP after she’d offered to write to Sainsbury’s - will keep you posted):

‘By 2019, Serbia had decided to go all in on Chinese technology. The minister of interior at that time, Nebosja Stefanović, and general police director, Vladimir Rebić, announced on TV the installation of almost 1,000 smart cameras for video surveillance with advanced facial and license plate recognition software at 800 locations in Belgrade. The installation, in cooperation with Huawei, came about as part of the Safe City project, Huawei's initiative aimed at preventing and detecting crimes.  The Serbian commissioner for information of public importance and personal data protection, Milan Marinovic, was among the first to sound the alarm. “There is no legal basis for the implementation of the Safe City project”, said Marinovic, pointing out that the “existing [Serbian] laws do not regulate facial recognition and the processing of biometric data”.’

You have a choice: China’s top court empowers people to say ‘no’ to facial recognition use by private businesses

South China Morning Post 29.07.21

Private FR is no good in China unless consent is given, though it is sanctioned by the government in public places:

'China’s top court has made a key judgment related to facial recognition technology, empowering individuals to reject unauthorised facial recognition data collection by commercial entities such as hotels, banks and nightclubs.  The decision, which is included in a directive issued to local courts this week by the People’s Supreme Court, makes it clear that any collection and analysis of facial data by commercial operations must receive the “independent” consent of the individual concerned. If not, the act of using facial recognition technology can be defined as an infringement of personal rights and interests, a civil offence that allows the victim to file a lawsuit and claim compensation.’

Despite controversies and bans, facial recognition startups are flush with VC cash

TechCrunch 26.07.21 

I was horrified to see that my local supermarkets (London) have embraced FR.  Have written to my MP and will be taking it up with rights groups in order to force these retailers to at least post a notice at their entrance specifying that they use this seriously invasive tech:

‘A breakdown of Crunchbase data by FindBiometrics shows a sharp rise in venture funding in facial recognition companies at well over $500 million in 2021 so far, compared to $622 million for all of 2020.  About half of that $500 million comes from one startup alone. Israel-based startup AnyVision raised $235 million at Series C earlier this month from SoftBank’s Vision Fund 2 for its facial recognition technology that’s used in schools, stadiums, casinos and retail stores…  In June, a group of 50 investors with more than $4.5 trillion in assets called on dozens of facial recognition companies, including Amazon, Facebook, Alibaba and Huawei, to build their technologies ethically.  “In some instances, new technologies such as facial recognition technology may also undermine our fundamental rights. Yet this technology is being designed and used in a largely unconstrained way, presenting risks to basic human rights,” the statement read.  It’s not just ethics, but also a matter of trying to future-proof the industry from inevitable further political headwinds. In April, the European Union’s top data protection watchdog called for an end to facial recognition in public spaces across the bloc.  “As mass surveillance expands, technological innovation is outpacing human rights protection. There are growing reports of bans, fines and blacklistings of the use of facial recognition technology. There is a pressing need to consider these questions,” the statement added.’

25 States Are Forcing Face Recognition on People Filing for Unemployment 

Gizmodo 23.07.21

US states seem to be forging ahead with digital IDs through FR:

'Per the ID.me guide, claimants have to set up an ID.me account, with an email address, social security number, photo ID, and a video faceprint. ID.me says that it needs explicit consent before it shares information, so your choice here: do you want your rent money or not? The company says that you “may destroy your ID.me credential and authorized app at any time,” but adds in a footnote that “some data” related to credentials “will be retained after account deletion solely for fraud prevention and government auditing purposes.” ID.me may keep your biometric data for up to seven and a half years after you delete your account. ID.me co-founder Blake Hall told CNN that this is for government agencies, mostly to identify fraud.’

Europe makes the case to ban biometric surveillance

Wired 07.07.21

This may prove to be a futile attempt to tackle a technology that is being used at the EU’s own border control:

‘Both the European Data Protection Supervisor, which acts as the EU’s independent data body, and the European Data Protection Board, which helps countries implement GDPR consistently, have called for a total ban on using AI to automatically recognise people.  “Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places,” the heads of the two bodies, Andrea Jelinek and Wojciech Wiewiórowski, wrote in a joint statement at the end of June. AI shouldn’t be used in public spaces for facial recognition, gait recognition, fingerprints, DNA, voice, keystrokes and other types of biometrics, they said. There should also be a ban on trying to predict people’s ethnicity, gender, political or sexual orientation with AI…  This lack of transparency appears to apply to the EU’s own funding of biometrics. Between September 2016 and August 2019, the EU’s innovation fund, Horizon 2020, backed iBorderCtrl, a project that aimed to use people’s biometrics to help with identification and analyse people’s facial “micro-expressions” to work out if they were lying. Thirteen different companies or research groups were involved in the project that had aims, as its name suggests, of developing the technology for use at the EU’s borders.’

Government watchdog finds little oversight over the use of facial recognition technology by U.S. agencies

Coda 02.07.21

Nothing new here:

‘The report surveyed 42 federal agencies employing law enforcement officials about their use of facial recognition systems from January 2015 through March 2020. Nearly half of those surveyed — 20 — reported using the technology, investigators found. Those included the U.S. Customs and Border Protection, the Drug Enforcement Administration, the U.S. Secret Service, among others, as well as the U.S. Fish and Wildlife Service and the U.S. Postal Inspection Service.   Ten agencies reported using the controversial facial recognition start-up Clearview A.I., which the New York Times called the “secretive company that might end privacy as we know it”…  13 agencies using third-party vendors admitted they did not know which privately-owned facial recognition systems their employees are using — a revelation that Ferguson said “speaks of the dangers of an unregulated landscape where agencies can just get an idea and go with it without anyone watching.”’

Abu Dhabi justifies introducing invasive facial scanning by saying it “detects COVID-19”

Reclaim The Net 28.06.21

I would bet everything that FR tech will stay on long after this Covid scare:

‘Abu Dhabi has begun using facial recognition to “detect COVID-19” at shopping malls and airports. Supposedly, the new technology’s trial on 20,000 people registered “a high degree of effectiveness.”  As reported by state-run media house WAM and the country’s media office, the technology allegedly detects the virus through electromagnetic waves, which ideally change due to the presence of the RNA particles in the body of an infected person…  “The EDE scanning system will be used at shopping malls, as part of testing in some residential areas, and land and air entry points, as part of efforts to enhance precautionary measures and curb the spread of Covid-19 by establishing safe zone,” read a Sunday tweet by the Abu Dhabi Government Media Office.’

New Shazam for Birds Will Identify That Chirping for You

Gizmodo 26.06.21

One good thing that could come out of FR tech:

‘Merlin Bird ID is more than just a sound identification app, though; it’s the result of tens of thousands of bird watchers and citizen scientists submitting over a million avian audio recordings to Cornell’s Macaulay Library through the eBird app in just the past few years. Given the volume of data, Weber and Macaulay Library research engineer Grant Van Horn, plus other members of the Cornell Lab of Ornithology, wondered last summer what it might take to create a birdsong identifying feature of the Merlin Bird ID app.  Sound identification is, in fact, an image recognition problem, Van Horn explained. Caltech and Cornell Tech engineers had already put together an image recognition neural network toolkit for birds using photos from the Macaulay Library to create the Merlin Photo ID feature. Sound ID converts audio into spectrogram images, processes them, and then traditional computer vision tools compare these spectrograms to spectrograms of existing bird recordings.'
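The audio-as-image idea described above can be sketched in a few lines. This is a toy illustration, not Merlin’s actual system: the “bird calls” are synthetic sine tones, the spectrogram is a naive windowed FFT, and the matcher is plain cosine similarity rather than a neural network. It shows the core trick of turning sound identification into image comparison:

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Naive magnitude spectrogram: slide a window, take |FFT| of each frame."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    return np.array([np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames])

def similarity(a, b):
    """Cosine similarity between two flattened spectrogram 'images'."""
    a, b = a.ravel(), b.ravel()
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
call_a = np.sin(2 * np.pi * 1200 * t)         # "species A": 1.2 kHz tone
call_a2 = np.sin(2 * np.pi * 1200 * t + 0.5)  # another recording of species A
call_b = np.sin(2 * np.pi * 3000 * t)         # "species B": 3 kHz tone

reference = {"A": spectrogram(call_a), "B": spectrogram(call_b)}
query = spectrogram(call_a2)
best = max(reference, key=lambda k: similarity(query, reference[k]))
print(best)  # → A
```

Because the spectrogram keeps only magnitude, the phase-shifted second recording still lands on the same frequency bins, so simple image-style comparison finds the right match; a real system replaces the cosine step with a trained vision network.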

Rights groups coalition demands global ban on facial recognition surveillance tech

Coda 07.06.21

I wonder how big an impact this letter will have, or has the sell-by date for worldwide regulation of this tech already passed?:

'More than 175 civil rights groups, activists and researchers from across the world are calling for a global ban on facial recognition and remote biometric systems. An open letter, published today, highlights human rights abuses enabled by the use of surveillance technology in countries such as China, Russia, Myanmar, Argentina, Brazil and the United States.  The document, signed by groups and individuals including Amnesty International and the Internet Freedom Foundation, demands a halt in all public investment in uses of technologies enabling mass surveillance and advocates for their prohibition in all public spaces.  The coalition was convened by the digital rights group Access Now, and the letter was drafted by European Digital Rights, Human Rights Watch, Instituto Brasileiro de Defesa do Consumidor and a number of other organizations. Signatories from Asia, Africa, Europe and Latin America include Big Brother Watch and Privacy International.’

McDonald's Is Being Sued By a Customer Over Its Latest Technology

Eat This, Not That 08.06.21

Acquiring voice-recognition data from McDonald’s customers reveals the absurd AI drive running through most industries:

‘The chain's CEO Chris Kempczinski recently said that the company is testing new voice-recognition technology at several Chicago-area restaurants. Besides eliminating the need for human employees, the AI could improve the speed and accuracy of drive-thru orders…  Using a voice-recognition system to identify repeat customers, which is exactly what McDonald's plans to do with the technology, violates Illinois' Biometric Information Privacy Act. BIPA states that collecting biometric information such as voiceprints, fingerprints, facial scans, handprints, and palm scans requires consent from the parties in question. The voiceprints collected by the AI technology can identify customers' pitch, volume, and other unique qualities. The law also requires McDonald's to make its data retention policies public and clarify how long the information collected will be stored and how it will be used.  Furthermore, the lawsuit alleges that McDonald's connects the unique voice information to license plates to more easily recognize customers at any location they end up going to.’

NYPD under fire as ‘Orwellian’ surveillance system of 15k facial recognition cameras revealed

The Independent 03.06.21

FR multiplies in NY and is concentrated in Black neighbourhoods:

‘A new study by Amnesty International has found that the New York Police Department (NYPD) can track people across three of the city’s five boroughs using facial recognition technology combined with a staggering number of surveillance cameras. The NGO said the scale and power of the police department’s systems give it an “Orwellian” ability to track people across the city – with particularly severe implications for those already targeted by discriminatory policing practices… According to the project’s lead researchers, poorer neighbourhoods of colour are host to some of the most heavily surveilled street intersections. East New York in Brooklyn, which the last census recorded as 54.4 per cent Black, 30 per cent Hispanic and 8.4 per cent white, was top of Amnesty’s ranking of most heavily watched neighbourhoods, with 577 cameras found at street intersections across a relatively small area.’

States push back against use of facial recognition by police

AP 05.05.21

Hopefully, more US states will follow:

‘At least seven states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy. Debate over additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data compiled by the Electronic Privacy Information Center… Complaints about false identifications prompted Amazon, Microsoft and IBM to pause sales of their software to police, though most departments hire lesser-known firms that specialize in police contracts. Wrongful arrests of Black men have gained attention in Detroit and New Jersey after the technology was blamed for mistaking their images for those of others.’

U.S. banks deploy AI to monitor customers, workers amid tech backlash

REUTERS 19.04.21

Banks finally home in on data gathering to ‘understand’ their customers:

‘Several U.S. banks have started deploying camera software that can analyze customer preferences, monitor workers and spot people sleeping near ATMs, even as they remain wary about possible backlash over increased surveillance, more than a dozen banking and technology sources told Reuters… Widespread deployment of such visual AI tools in the heavily regulated banking sector would be a significant step toward their becoming mainstream in corporate America… Civil liberties issues loom large. Critics point to arrests of innocent individuals following faulty facial matches, disproportionate use of the systems to monitor lower-income and non-white communities, and the loss of privacy inherent in ubiquitous surveillance.’

Fears of vaccine exclusion as India uses digital ID, facial recognition

NewsTrust 15.04.21

Highly contested tech used under the guise of boosting vaccination efforts in India:

‘Millions of vulnerable people are at risk of missing out on COVID-19 vaccines as India uses its national digital identity for registration and pilots facial recognition technology at inoculation centres, rights groups and experts said. Amid a surge in coronavirus cases, authorities are testing a facial recognition system based on the Aadhaar ID for authentication in the eastern state of Jharkhand, and plan to roll it out nationwide, a senior official said last week. "Aadhaar-based facial recognition system could soon replace biometric fingerprint or iris scan machines at COVID-19 vaccination centres across the country in order to avoid infections," R.S. Sharma, chief of the National Health Authority, was quoted as telling an online publication. Sharma added later that the system would not be mandatory, but new guidelines indicate that Aadhaar is already the "preferred" mode of identity verification and for vaccination certificates.’

AI-driven CCTV upgrades are coming to the ‘world’s most watched’ streets – will they make Britain safer?

The Conversation 29.03.21

The second most surveilled country in the world, after China, wants to upgrade its snooping measures, even though 1 in 70 cameras are privately owned:

‘Earlier this month, the UK government appointed a new Surveillance Camera Commissioner, who has been tasked with governing the fast-moving world of surveillance cameras. Noticeably, this office has been combined with that of the Biometrics Commissioner – a possible indicator of the direction of travel for the UK’s CCTV ecosystem, which may be set to merge with biometrics and advanced surveillance software.  Still, the UK’s Safer Streets initiative does also look beyond CCTV: funding improved street lighting and increased street patrols. This points to a recognition that CCTV technology is no silver bullet solution for public safety issues – even within the limited scope of urban design.  In this context, and given existing flaws in the UK’s patchy CCTV ecosystem, faith in street surveillance as an effective public safety provision may be misplaced. Real street safety, extending far beyond the reach of CCTV cameras, won’t be achieved by technology – it’ll be achieved by social change.’

Protesters fear they are being tracked by cameras armed with facial recognition technology

News Trust 18.03.21

It’s the logical step for authoritarian governments to take:

‘Protesters in Myanmar fear they are being tracked with Chinese facial recognition technology, as spiralling violence and street surveillance spark fears of a "digital dictatorship" to replace ousted leader Aung San Suu Kyi.  Human rights groups say the use of artificial intelligence (AI) to check on citizens' movements poses a "serious threat" to their liberty.  More than 200 people have been killed since Nobel peace laureate Suu Kyi was overthrown in a Feb. 1 coup, triggering mass protests that security forces have struggled to suppress with increasingly violent tactics.’

Privacy fears as India’s gov’t schools install facial recognition

Al Jazeera 02.03.21

India adopts FR in schools on its road to authoritarianism:

‘The facial recognition systems are being installed without laws to regulate the collection and use of data, which is particularly worrying for children, said Anushka Jain, an associate counsel at the Internet Freedom Foundation, a digital rights group that became aware of the rollout last week.  “CCTV is already a violation of children’s privacy, even though some parents had supported it for the safety of their children … but the use of facial recognition technology is an overreach and is completely unjustified,” Jain said.  “Its use for children is particularly problematic because the accuracy rate is so low – so in the event of a crime, you could have children being misidentified,” she told the Thomson Reuters Foundation.’

Couriers say Uber’s ‘racist’ facial identification tech got them fired

WIRED 01.02.21

FR should be stomped out:

‘Uber Eats couriers say they have been fired because the company’s “racist” facial identification software is incapable of recognising their faces. The system, which Uber describes as a “photo comparison” tool, prompts couriers and drivers to take a photograph of themselves and compares it to a photograph in the company’s database.  Fourteen Uber Eats couriers have shared evidence with WIRED that shows how the technology failed to recognise their faces. They were threatened with termination, had accounts frozen or were permanently fired after selfies they took failed the company’s “Real Time ID Check”. Another was fired after the selfie function refused to work. Trade unions claim this issue has affected countless more Uber Eats couriers across the country, as well as private-hire drivers.’
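The “photo comparison” failures described here are what happens when a threshold-based face matcher meets biased training data. As a generic sketch (this is not Uber’s actual “Real Time ID Check”, and the embeddings are toy values), commercial systems typically reduce each face image to an embedding vector and accept a login when the similarity between the selfie and the reference photo clears a fixed threshold:

```python
# Hypothetical sketch of a threshold-based "photo comparison" check.
# Real systems use 128-512 dimensional embeddings from a neural network;
# the 3-dimensional vectors below are toy values for illustration only.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def id_check(selfie_embedding, reference_embedding, threshold=0.8):
    """Accept the login only if the two face embeddings are close enough."""
    return cosine_similarity(selfie_embedding, reference_embedding) >= threshold

# A selfie close to the stored reference passes; a distant one fails.
same_person = id_check([0.9, 0.1, 0.4], [0.85, 0.15, 0.38])      # True
different_person = id_check([0.9, 0.1, 0.4], [0.1, 0.9, 0.2])    # False
```

A single global threshold tuned on unrepresentative data is one plausible mechanism behind the disparate failure rates couriers report: if the embedding model places darker-skinned faces less consistently, legitimate selfies fall below the cutoff more often.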

'Face control': Russian police go digital against protesters

REUTERS 11.02.21

As in most countries, surveillance first hides behind a grounded imperative, then morphs into free-fall voyeurism:

‘The Moscow mayor's office announced it was rolling out a facial recognition system in the metro to spot wanted criminals in 2018, when Russia hosted the soccer World Cup. There are now surveillance cameras all over Moscow.  "There is still a lot we don't know about the facial recognition system in Moscow," said Kirill Koroteev, a lawyer at human rights group Agora.  He said it was unclear how automated the system was, whether all cameras used it and which databases it used.  "At first they said the system would be used to find lost kids and fugitive convicts," said Sarkis Darbinyan, an Internet freedom advocate. "Then they used it to monitor self-isolation during the pandemic, and now, as expected, to monitor protests and activists.”'

This is how we lost control of our faces 

Technology Review 05.02.21

Consent has become a relic of the past:

‘Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent…  Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge…  Raji says her investigation into the data has made her gravely concerned about deep-learning-based facial recognition.  “It’s so much more dangerous,” she says. “The data requirement forces you to collect incredibly sensitive information about, at minimum, tens of thousands of people. It forces you to violate their privacy. That in itself is a basis of harm. And then we’re hoarding all this information that you can’t control to build something that likely will function in ways you can’t even predict. That’s really the nature of where we’re at.”’

Why your face could be set to replace your bank card

BBC 25.01.21

Payment through FR may soon be a reality in places other than China:

‘"Tech is like a tide," she [Ling - not her real name] says. "There's no way you can swim against it. But I also want to make a stand of some kind, for as long as I'm able to do so”…  If technology in general is indeed a tide, then the rollout of facial recognition payment technology in China is something of a tsunami.  Almost all (98%) of mobile payments in China goes through just two apps - Alipay (owned by ecommerce giant Alibaba) and WeChat Pay - and both are racing to install their facial recognition systems across the country.  Alipay is spending three billion yuan ($420m; £300m) over three years, and according to Chinese state media, 760 million people will be using facial recognition payments by next year.’

The FTC Forced a Misbehaving A.I. Company to Delete Its Algorithm

OneZero 19.01.21

One step further would be to legally prohibit all law-enforcement agencies from accessing facial recognition:

‘This idea led to the tech industry adopting mantras like “data is the new oil,” and kicked off the compilation of gigantic datasets to train more A.I. models. Large and specialized datasets, like billions of images of faces, could be the differentiating factor between a failed algorithm and a successful one. But once the algorithm has been trained on data, that data’s value has been extracted.  The algorithms built on this ill-gotten data were critically important to the future of Paravision, which had just won an Air Force contract for facial recognition tech, according to Wired. Now, the FTC’s mandate of algorithm deletion means the facial recognition developed previously cannot be used in this contract, cutting off any future gains from circumventing user privacy. Paravision did not respond to a request for comment.’

Huawei patent mentions use of Uighur-spotting tech

BBC 13.01.21

Whether they recant their Party’s discrimination policy is neither here nor there.  The Uighurs are being persecuted in China:

‘Huawei's patent was originally filed in July 2018, in conjunction with the Chinese Academy of Sciences.  It describes ways to use deep-learning artificial-intelligence techniques to identify various features of pedestrians photographed or filmed in the street.  It focuses on addressing the fact different body postures - for example whether someone is sitting or standing - can affect accuracy.  But the document also lists attributes by which a person might be targeted, which it says can include "race (Han [China's biggest ethnic group], Uighur)”.'

Facial recognition technology can expose political orientation from naturalistic facial images

Nature 11.01.21

A scientist who has previously worked with Facebook and Faception has just published a paper showing that FR can accurately predict a subject’s political orientation:

‘Apart from identifying individuals, the algorithms can identify individuals’ personal attributes, as some of them are linked with facial appearance. Like humans, facial recognition algorithms can accurately infer gender, age, ethnicity, or emotional state…  Some may doubt whether the accuracies reported here are high enough to cause concern. Yet, our estimates unlikely constitute an upper limit of what is possible. Higher accuracy would likely be enabled by using multiple images per person; using images of a higher resolution; training custom neural networks aimed specifically at political orientation; or including non-facial cues such as hairstyle, clothing, headwear, or image background. Moreover, progress in computer vision and artificial intelligence is unlikely to slow down anytime soon. Finally, even modestly accurate predictions can have tremendous impact when applied to large populations in high-stakes contexts, such as elections. For example, even a crude estimate of an audience’s psychological traits can drastically boost the efficiency of mass persuasion [35]. We hope that scholars, policymakers, engineers, and citizens will take notice.’

France bans use of drones to police protests in Paris

BBC 23.12.20

So much more needs to be done to curtail the pervasive use of a seriously invasive tech:

‘The Council of State ruled there was "serious doubt over the legality" of drones without a prior text authorising and setting out their use. LQDN said the only way the government could legalise drone surveillance now was in providing "impossible proof" that it was absolutely necessary to maintain law and order.  The decision is the second setback in months for Parisian authorities' drone plans. In May, the same court ruled that drones could not be used in the capital to track people in breach of France's strict lockdown rules.’

Is facial recognition too biased to be let loose?

Nature 17.11.20

A bad idea morphing into a worse one:

‘The world’s largest biometric programme, in India, involves using facial recognition to build a giant national ID card system called Aadhaar. Anyone who lives in India can go to an Aadhaar centre and have their picture taken. The system compares the photo with existing records on 1.3 billion people to make sure the applicant hasn’t already registered under a different name. “It’s a mind-boggling system,” says Jain, who has been a consultant for it. “The beauty of it is, it ensures one person has only one ID.” But critics say it turns non-card owners into second-class citizens, and some allege it was used to purge legitimate citizens from voter rolls ahead of elections.  And the most notorious use of biometric technology is the surveillance state set up by the Chinese government in the Xinjiang province, where facial-recognition algorithms are used to help single out and persecute people from religious minorities.  “At this point in history, we need to be a lot more sceptical of claims that you need ever-more-precise forms of public surveillance,” says Kate Crawford, a computer scientist at New York University and co-director of the AI Now Institute. In August 2019, Crawford called for a moratorium on governments’ use of facial-recognition algorithms (K. Crawford Nature 572, 565; 2019).  Meanwhile, having declared its pilot project a success, Scotland Yard announced in January that it would begin to deploy live facial recognition across London.’

The ethical questions that haunt facial-recognition research

Nature 18.11.20

Glad to see that this particularly pervasive tech is getting questioned on ethics:

‘The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research. It’s important to denounce controversial uses of the technology, but that’s not enough, ethicists say. Scientists should also acknowledge the morally dubious foundations of much of the academic work in the field — including studies that have collected enormous data sets of images of people’s faces without consent, many of which helped hone commercial or military surveillance algorithms…  “The AI community suffers from not seeing how its work fits into a long history of science being used to legitimize violence against marginalized people, and to stratify and separate people,” says Chelsea Barabas, who studies algorithmic decision-making at MIT and helped to form the CCT this year. “If you design a facial-recognition algorithm for medical research without thinking about how it could be used by law enforcement, for instance, you’re being negligent,” she says.’

Army Considers Facial Recognition to Monitor Children in Its Care

NextGov 10.11.20

When China installed FR in classrooms, a huge stink erupted.  When the US army deploys FR in children’s centres, hopefully an even bigger one will erupt:

‘The Army is preparing to run a pilot program to explore how commercial facial recognition and artificial intelligence solutions can be integrated into its existing camera systems in a South Carolina-based child development center, or CDC—and ultimately used to assess new approaches for monitoring kids in its care.   A presolicitation synopsis released Friday articulated the Army’s Engineer Research and Development Center-Construction Engineering Research Laboratory, or ERDC-CERL’s intent to work with a contractor that can design and demonstrate facial recognition and analytics technologies to leverage at the Fort Jackson Scales Avenue Child Development Center.’

Dubai introduces facial recognition on public transport

TechExplore 25.10.20

Another country joins the growing fascist global trend that is FR:

‘Dubai is introducing a facial recognition system on public transport to beef up security, officials said Sunday, as the emirate prepares to host the global Expo exhibition.  "This technology has proven its effectiveness to identify suspicious and wanted people," said Obaid al-Hathboor, director of Dubai's Transport Security Department.  The emirate already operates a biometric system using facial recognition at its international airport.  Dubai, which sees itself as a leading "smart city" in the Middle East, has ambitions to become a hub for technology and artificial intelligence.  Both sectors will be on show when it opens the multi-billion-dollar Expo fair.  "We aspire to raise our performance by building on our current capabilities, to ensure a high level of security in metro stations and other transport sectors," said Hathboor.’

Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows

The Intercept 21.10.20

After the Project Maven backlash, Google hides behind a third party to take on US government surveillance work:

‘In 2018, Google faced internal turmoil over a contract with the Pentagon to deploy AI-enhanced drone image recognition solutions; the capability sparked employee concern that Google was becoming embroiled in work that could be used for lethal purposes and other human rights concerns. In response to the controversy, Google ended its involvement with the initiative, known as Project Maven, and established a new set of AI principles to govern future government contracts.  The employees also protested the company’s deceptive claims about the project and attempts to shroud the military work in secrecy. Google’s involvement with Project Maven had been concealed through a third-party contractor known as ECS Federal.  Contracting documents indicate that CBP’s new work with Google is being done through a third-party federal contracting firm, Virginia-based Thundercat Technology. Thundercat is a reseller that bills itself as a premier information technology provider for federal contracts.  The contract was obtained through a FOIA request filed by Tech Inquiry, a new research group that explores technology and corporate power founded by Jack Poulson, a former research scientist at Google who left the company over ethical concerns.’

In Singapore, facial recognition is getting woven into everyday life

NBC 12.10.20

New defences against spoofing with deepfakes and photos, and the tech is not just being deployed in Singapore:

‘Singapore's 4 million people will be able to access government services and more through a new facial verification feature in its national identity program, the country announced in July. Dubbed SingPass Face Verification, the new feature allows users to securely log in to their accounts without the need to remember passwords, and it is meant to be used at public kiosks and on home computers, tablets and mobile phones…  

The new system was jointly developed by iProov, a United Kingdom-based biometric authentication supplier, and Toppan Ecquaria, a Singapore-based digital government service platform provider. Instead of just verifying the face being presented to the camera, iProov's Genuine Presence Assurance technology uses the screen to illuminate a user's face with a cryptographic sequence of colors, which takes less than 7 seconds. It can be used on any device, as long as it has a screen, including web browsers.  The technology claims to be able to prevent logins made with photographs, masks and deepfakes. It has been tested by the U.S. Department of Homeland Security and the U.K. government, as well as the Singapore government. It can also prevent replay attacks, which use a recording of a person's face to authenticate.’
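iProov has not published its implementation, but the “cryptographic sequence of colors” the article describes is a classic challenge-response liveness pattern: the server picks an unpredictable one-time sequence, the screen flashes it onto the user’s face, and the captured video must show matching reflections. A replayed recording or a printed photo cannot anticipate the sequence. A minimal sketch, with the palette and all function names invented for illustration:

```python
# Generic sketch of screen-based challenge-response liveness detection.
# This is NOT iProov's proprietary system; it only illustrates the principle.
import secrets

PALETTE = ["red", "green", "blue", "white", "yellow"]

def issue_challenge(length=8):
    """Server side: generate a one-time, unpredictable colour sequence."""
    return [secrets.choice(PALETTE) for _ in range(length)]

def verify_session(challenge, reflections_detected):
    """Server side: accept only if the colours recovered from the selfie
    video exactly match the sequence issued for this session."""
    return reflections_detected == challenge

challenge = issue_challenge()
live_attempt = list(challenge)       # a live face reflects what was flashed
assert verify_session(challenge, live_attempt)

stale_attempt = list(challenge)      # a replayed recording reflects an old,
stale_attempt[0] = next(c for c in PALETTE if c != stale_attempt[0])  # wrong sequence
assert not verify_session(challenge, stale_attempt)
```

The real system additionally has to recover the reflected colours from video under varying lighting, which is where the hard computer-vision work lies; the sketch only captures the one-time-challenge logic.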

Live facial recognition is tracking kids suspected of being criminal

Technology Review 09.10.20

Abuse of FR tech by governments to be expected:

‘In a national database in Argentina, tens of thousands of entries detail the names, birthdays, and national IDs of people suspected of crimes. The database, known as the Consulta Nacional de Rebeldías y Capturas (National Register of Fugitives and Arrests), or CONARC, began in 2009 as a part of an effort to improve law enforcement for serious crimes.  But there are several things off about CONARC. For one, it’s a plain-text spreadsheet file without password protection, which can be readily found via Google Search and downloaded by anyone. For another, many of the alleged crimes, like petty theft, are not that serious—while others aren’t specified at all.  Most alarming, however, is the age of the youngest alleged offender, identified only as M.G., who is cited for “crimes against persons (malicious)—serious injuries.” M.G. was apparently born on October 17, 2016, which means he’s a week shy of four years old.  Now a new investigation from Human Rights Watch has found that not only are children regularly added to CONARC, but the database also powers a live facial recognition system in Buenos Aires deployed by the city government. This makes the system likely the first known instance of its kind being used to hunt down kids suspected of criminal activity.  “It’s completely outrageous,” says Hye Jung Han, a children’s rights advocate at Human Rights Watch, who led the research.’

Singapore in world first for facial verification

BBC 26.09.20

Digital identity to be established in Singapore:

'The key difference is that verification requires the explicit consent of the user, and the user gets something in return, such as access to their phone or their bank's smartphone app.   Facial recognition technology, by contrast, might scan the face of everyone in a train station, and alert the authorities if a wanted criminal walks past a camera.   "Face recognition has all sorts of social implications. Face verification is extremely benign," said Mr Bud.  Privacy advocates, however, contend that consent is a low threshold when dealing with sensitive biometric data.   "Consent does not work when there is an imbalance of power between controllers and data subjects, such as the one observed in citizen-state relationships," said Ioannis Kouvakas, legal officer with London-based Privacy International.’

DHS Admits Facial Recognition Photos Were Hacked, Released on Dark Web

VICE 24.09.20

Your face and biometrics are for sale:

‘According to the new report, DHS’s biometric database “contains the biometric data repository of more than 250 million people and can process more than 300,000 biometric transactions per day. It is the largest biometric repository in the Federal Government, and DHS shares this repository with the Department of Justice and the Department of Defense…”  The DHS report claimed that Perceptics accessed this data without its knowledge and “later in 2019, DHS experienced a major privacy incident, as the subcontractor’s network was subjected to a malicious cyber attack” by a hacker known as Boris Bullet-Dodger. At the time, DHS denied to Motherboard and others that people’s faces and license plates had ended up on the Dark Web.  But they had…  In May, 2019, Perceptics got a ransom note from Boris Bullet Dodger. “Perceptics received a ransom note via an email from a hacker by the name of ‘Boris Bullet Dodger’ demanding 20 bitcoin within 72 hours,” the report said. “The ransom note stated that, without the bitcoin, stolen data would be uploaded to the dark web. Perceptics did not pay the ransom and the hacker uploaded more than 9,000 unique files to the dark web.”’

Chinese cameras blacklisted by US being used in UK school toilets

The Guardian 21.09.20

A Chinese FR company is being singled out for being Chinese.  When those cameras are being used in UK school toilets, something is irremediably broken in our society:

‘But the deployment of the Hikvision cameras without any formal statement of concern by the British government – public records show they are being used in Kensington and Chelsea, Chelmsford, Guildford council, Coventry council, and Mole Valley council, among others – is evidence of how the British and US governments have diverged in their responses to concerns about Chinese surveillance and human rights abuses… In west Norfolk, public records show the cameras have been installed in toilets in Smithdon high school in Hunstanton, reportedly “to secure the health and personal safety of all students and to prevent vandalism and damage”…  A report in the Intercept last year estimated there were more than 1.2m Hikvision cameras in the UK.’

Portland passes unprecedented ban on facial recognition tech, despite $24,000 Amazon lobbying effort to kill initiative

RT 10.09.20

Hopefully this movement gains traction:

‘Portland lawmakers unanimously passed a sweeping ban on facial recognition technology, becoming the first city to bar both public and private entities from the controversial software and defeating Amazon’s bid to kill the measure.   The Portland City Council adopted the ban in the form of two separate ordinances on Wednesday. The first will block all city agencies, including the police, from using the tech, while the other prohibits private organizations from deploying facial recognition devices in public places…  

Though e-commerce giant Amazon announced a one-year “moratorium” on police use of its facial recognition software in June, calling for governments nationwide to impose “stronger regulations to govern the ethical use” of the tech, the company has fiercely fought Portland’s attempt to do precisely that. Despite its claimed commitment to shelving the technology to await stricter controls, Amazon spent a total of $24,000 on lobbying efforts to fight the city’s ban between last December and June, 30, 2020, public records show…  While Amazon has claimed it is not aware of how many police departments are using Rekognition, according to internal company promotional material obtained by the ACLU in 2018, the firm sees deployment by law enforcement as a “common use case” for the software.’

Border Patrol Has Used Facial Recognition to Scan More Than 16 Million Fliers — and Caught Just 7 Imposters

OneZero 04.09.20

What a useless piece of technology:

'The report also contained some interesting statistics about the effectiveness of the facial recognition programs in catching individuals traveling under false identities. In airports, CBP has scanned more than 16 million passengers arriving in the United States up to May 2020, and stopped a total of seven imposters. At the southern border, facial recognition was also used to scan 4.4 million pedestrians crossing into the United States between September 2018 and December 2019, and stopped 215 imposters… The GAO report reflects the confusing, uneven use of facial recognition technologies at border crossings today. While the number of cameras pointing at our faces continues to grow, we know less and less about why they’re scanning us or what databases they’re matching us against.’
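Those totals invite a base-rate sanity check. The sketch below uses assumed rates (the GAO report publishes only the raw counts, so none of these figures are CBP’s) to show why even a fairly accurate matcher mostly produces false alarms when genuine imposters are rare:

```python
# Back-of-the-envelope base-rate arithmetic. Every rate below is an
# assumption for illustration, not a figure from the GAO report.
travelers = 16_000_000
imposter_rate = 1 / 100_000        # assumed: 1 imposter per 100,000 fliers
true_positive_rate = 0.99          # assumed sensitivity
false_positive_rate = 0.001        # assumed 0.1% false-match rate

imposters = travelers * imposter_rate                          # 160 imposters
caught = imposters * true_positive_rate                        # ~158 caught
false_alarms = (travelers - imposters) * false_positive_rate   # ~16,000 flagged

precision = caught / (caught + false_alarms)
# Under these assumptions, roughly 99 of every 100 people the system
# flags would be innocent travelers, not imposters.
```

The point of the exercise is qualitative: whatever the true rates, screening tens of millions of people for a handful of targets guarantees that flagged travellers are overwhelmingly innocent.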

Faulty Facial Recognition Led to His Arrest—Now He’s Suing

VICE 04.09.20

Hope he wins:

‘“[Police relied] on failed facial recognition technology knowing the science of facial recognition has a substantial error rate among black and brown persons of ethnicity which would lead to the wrongful arrest and incarceration of persons in that ethnic demographic,” the lawsuit states. The case comes after the same police department was exposed by the New York Times  in June for wrongfully arresting 42-year-old Robert Williams after using the algorithmic system.’

British police to trial facial recognition system that detects your mood

TNW 17.08.20

The UK police want to know how you feel:

‘Lincolnshire Police will be able to use the system to search the film for certain moods and facial expressions, the London Times reports. It will also allow cops to find people wearing hats and glasses, or carrying bags and umbrellas… But critics say the system will violate people’s privacy.  “There’s a huge amount of money from the Home Office for this technology and they’re getting themselves into legal trouble, breaching human rights and expanding state surveillance while no one is watching,” said Silkie Carlo, director of civil liberties group Big Brother Watch.  The police are also yet to explain how the system works. Emotion detection AI is estimated to be a $20 billion market but there’s still little scientific evidence that the tech really works. In December 2019, research institute AI Now called for regulators to ban the tech from decisions that impact people’s lives, as the field is “built on markedly shaky foundations.”’

South Wales police lose landmark facial recognition case

The Guardian 11.08.20

Police in South Wales are having a hard time justifying the use of FR:

‘On two key counts the court found that Bridges’ right to privacy under article 8 of the European convention on human rights had been breached, and on another it found that the force failed to properly investigate whether the software exhibited any race or gender bias.  Liberty, which was a party to the case in support of Bridges, said the verdict amounted to a “damning indictment” and called on the force to “end immediately” its use of facial recognition software as a result.  Louise Whitfield, Liberty’s head of legal casework, said: “The implications of this ruling also stretch much further than Wales and send a clear message to forces like the Met that their use of this tech could also now be unlawful and must stop.”’

Jeff Merkley and Bernie Sanders have a plan to protect you from facial recognition

Vox 04.08.20

Moves are under way in the US to protect people from FR.  Hopefully, they will be enshrined in law:

‘Sens. Jeff Merkley and Bernie Sanders are proposing more federal regulation for facial recognition technology. Among other limits, their newly announced National Biometric Information Privacy Act would require private companies and corporations to get written consent from people in order to collect their biometric data. Should companies violate the consumer protection in the law, regular citizens and state attorneys general could sue them…  A staffer familiar with the legislation said one purpose of the law is to increase consumer understanding of the pervasiveness of facial recognition, but they would not comment on how this law would impact specific companies. They explained that the law is modeled after Illinois’s Biometric Information Privacy Act, a powerful piece of state legislation that has cost Facebook $650 million in fines over its facial recognition-enabled tagging, and likely keeps Google from making available a facial recognition feature available on Nest home security cameras in the state. The statute has also kept Clearview from operating in Illinois following a lawsuit filed under that state’s law.'

Meet the computer scientist and activist who got Big Tech to stand down

Fast Company 04.08.20

We need more people like her:

'Her research culminated in two groundbreaking, peer-reviewed studies, published in 2018 and 2019, that revealed how systems from Amazon, IBM, Microsoft, and others were unable to classify darker female faces as accurately as those of white men—effectively shattering the myth of machine neutrality…  Through her nearly four-year-old nonprofit, the Algorithmic Justice League (AJL), she has testified before lawmakers at the federal, state, and local levels about the dangers of using facial recognition technologies with no oversight of how they’re created or deployed.’

Exams that use facial recognition may be 'fair' – but they're also intrusive

The Guardian 22.07.20

Is this for real? Couldn’t they have spaced desks in a huge hall somewhere?

‘As students sit their exams during the pandemic, universities have turned to digital proctoring services. They range from human monitoring via webcams to remote access software enabling the takeover of a student’s browser. Others use artificial intelligence (AI) to flag body language and background noise that might point to cheating… The Bar Standards Board (BSB), the regulatory body for barristers, circulated a briefing sheet regarding the exams. The vendor, Pearson Vue, says the software uses “sophisticated security features such as face-matching technology, ID verification, session monitoring, browser lockdown and recordings”.’

Amazon, Google, Microsoft sued over photos in facial recognition database

CNET 14.07.20

Everyone, it seems, has taken a page out of Clearview’s book:

‘Amazon, Google parent Alphabet and Microsoft used people's photos to train their facial recognition technologies without obtaining the subjects' permission, in violation of an Illinois biometric privacy statute, a trio of federal lawsuits filed Tuesday allege.  The photos in question were part of IBM's Diversity in Faces database, which is designed to advance the study of fairness and accuracy in facial recognition by looking at more than just skin tone, age and gender. The data includes 1 million images of human faces, annotated with tags such as face symmetry, nose length and forehead height.  The two Illinois residents who brought the lawsuits, Steven Vance and Tim Janecyk, say their images were included in that data set without their permission, despite clearly identifying themselves as residents of Illinois. Collection, storage and use of biometric information is illegal in the state without written consent under the Biometric Information Privacy Act, passed by the Illinois legislature in 2008.’

UK and Australian regulators launch probe into Clearview AI

FT 09.07.20

More woes for Clearview as both Australia and the UK probe its illegal manoeuvres:

‘The UK and Australian information commissioners have announced a joint probe into controversial facial recognition company Clearview AI, whose image-scraping tool has been used by hundreds of police forces around the world…  The UK and Australian information commissioners said the joint investigation “highlights the importance of enforcement co-operation in protecting the personal information of Australian and UK citizens in a globalised data environment”.’

Detroit police chief cops to 96-percent facial recognition error rate

Ars Technica 30.06.20

This flawed tech should be scrapped:

‘Research has found that the accuracy of facial recognition software varies by the race of the subject, with Black suspects being identified less often than white ones. And Koebler points out that the DPD's own statistics show the technology being used almost exclusively on Black suspects. According to police data, 68 out of 70 facial recognition searches were done on Black suspects, while two had a race code of "U"—probably short for “unknown."  The ACLU has called on the Detroit Police Department—and other police departments—to stop using facial recognition technology for investigations in light of its high error rate and racially disparate impact. Boston's city council voted to ban the use of facial recognition technology last week.’

A new US bill would ban the police use of facial recognition

Technology Review 26.06.20

Good news.  I hope the UK follows suit:

‘US Democratic lawmakers have introduced a bill that would ban the use of facial recognition technology by federal law enforcement agencies. Specifically, it would make it illegal for any federal agency or official to “acquire, possess, access, or use” biometric surveillance technology in the US. It would also require state and local law enforcement to bring in similar bans in order to receive federal funding. The Facial Recognition and Biometric Technology Moratorium Act was introduced by Senators Ed Markey of Massachusetts and Jeff Merkley of Oregon and Representatives Pramila Jayapal of Washington and Ayanna Pressley of Massachusetts.’

South Wales Police's facial recognition tech 'not legal'

BBC 23.06.20

Let’s hope this time it’s a win for Mr Bridges:

‘South Wales Police has been developing the use of AFR since 2015.  The technology scanned Mr Bridges at a protest in 2018 and while shopping in 2017, leading to him crowdfunding his legal action.  However, the High Court found the force's use of AFR was lawful.  AFR maps faces, then compares results with a list that can include suspects, missing people and persons of interest. It has been used at sporting events and concerts.  Mr Squires has argued there are insufficient safeguards in the current law to protect people from arbitrary use of the technology, or to ensure its use was proportional.  The three-day hearing continues.’

George Floyd: Amazon bans police use of facial recognition tech

BBC 11.06.20

Having thrown free tech at the police, Amazon is now asking them not to use it.  What utter hypocrisy:

‘Amazon said the suspension of law enforcement use of its Rekognition software was to give US lawmakers the opportunity to enact legislation to regulate how the technology is employed.  "We've advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge," Amazon said in a statement.   "We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”… 

Facial recognition technology has been criticised for some time over potential bias, with studies showing that most algorithms are more likely to wrongly identify the faces of black people and other minorities than those of white people.  In the past Amazon has defended Rekognition against charges of bias, while continuing to offer it to law enforcement agencies.’

IBM will no longer offer, develop, or research facial recognition technology

The Verge 08.06.20

Good news from a big player:

‘IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge… “IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”’

Senator wants to know if police are using Clearview to ID protesters

Ars Technica 08.06.20

FR tech is too tempting for law enforcement agencies not to use:

‘Illinois law does regulate the collection and use of individuals' biometric information, and late last month the American Civil Liberties Union filed a lawsuit against Clearview in Illinois alleging violations of that law. "Clearview's business model appears to embody the nightmare scenario... a private company capturing untold quantities of biometric data for purposes of surveillance and tracking without notice to the individuals affected, much less their consent.”  The ACLU suit is not the first Clearview is facing. Vermont Attorney General T.J. Donovan also filed suit against Clearview in March, alleging multiple violations of state law regulating data-broker behavior.’

Protesters are weaponising memes to fight police surveillance

WIRED 05.06.20

May this trend flourish:

‘The recent counter-strikes are a mishmash of internet culture. Twitter searches for #BlueLivesMatter return photo after photo of Squirtle, Smurfs and other blue-faced cartoon characters, rather than the usual pro-police posts. On Sunday, protestors “jammed” Chicago police radio system by playing N.W.A’s “Fuck the Police” and the viral classic “Chocolate Rain.” That same day, K-pop stans (super fans of Korean pop music) responded to the Dallas Police Department’s call for “video of illegal activity from the protests” by flooding their informant app, iWatch Dallas, with a deluge of fancams (videos of performers shot by audience members) – a strategy that was repeated in the days that followed when similar initiatives were launched by police in Kirkland, Washington, and Grand Rapids, Michigan.’

California blocks bill that could’ve led to a facial recognition police-state

TNW 04.06.20

Good news:

‘As images of police brutality flashed across our screens this week, Californian lawmakers were considering a bill that would have expanded facial recognition surveillance across the state.  Yesterday, following a prolonged campaign by a civil rights coalition, the legislators blocked the bill.  The Microsoft-backed bill had been introduced by Assemblyman Ed Chau, who argued it would regulate the use of the tech by commercial and public entities.  But the ACLU warned that it was an “endorsement of invasive surveillance” that would allow law enforcement agencies and tech firms to self-regulate their use of the tech…  The ACLU was joined in opposition by a range of civil rights groups, public health experts, and technology scholars. Among them was Sameena Usman of the Council on American-Islamic Relations, who said in May:  If we let face recognition spread, we will see more deportations, more unjust arrests, and mass violations of civil rights and liberties.’

Signal Adds Face-Blurring Tool for Photos to Protect Protesters From Retaliation

PC Magazine 04.06.20

With surveillance being ramped up during the worldwide George Floyd protests, Signal has come up with a tool:

‘“The latest version of Signal for Android and iOS introduces a new blur feature in the image editor that can help protect the privacy of the people in the photos you share,” Signal announced in a blog post on Wednesday… Signal released the face-blur tool as BuzzFeed reports the US Drug Enforcement Administration has been granted authority to “conduct covert surveillance” on the George Floyd protests to stop suspected criminal threats. Downloads for Signal, which offers end-to-end encryption to protect chats and video calls from snooping, has also skyrocketed in the wake of the protests.  If you’re looking to blur your protest photos on a desktop computer, a software artist named Everest Pipkin has created a web-based tool that can do just that. In addition, Pipkin’s “Image Scrubber” can also remove the metadata from your pictures, which can reveal when the photo was snapped and with what kind of device.’
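
On the technical side, the metadata being scrubbed here lives in well-defined fields of the image file, separate from the pixel data, so a scrubber can drop it without touching the picture itself. As a rough, simplified illustration of the idea (not how Signal or Pipkin's Image Scrubber are actually implemented), the following sketch strips the metadata-carrying ancillary chunks from a PNG file:

```python
import struct

# PNG ancillary chunks that can carry identifying metadata
# (comments, timestamps, embedded EXIF).
METADATA_CHUNKS = {b"tEXt", b"zTXt", b"iTXt", b"tIME", b"eXIf"}

def scrub_png(data: bytes) -> bytes:
    """Return a copy of a PNG with metadata chunks removed."""
    signature = data[:8]
    assert signature == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = [signature], 8
    while pos < len(data):
        # Each chunk: 4-byte length, 4-byte type, payload, 4-byte CRC.
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        chunk = data[pos:pos + 12 + length]
        if ctype not in METADATA_CHUNKS:
            out.append(chunk)  # keep image-critical chunks untouched
        pos += 12 + length
    return b"".join(out)
```

JPEG photos keep comparable details (camera model, GPS position, capture time) in EXIF segments, so a real scrubbing tool has to handle those formats as well.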

Facial Recognition Is Law Enforcement’s Newest Weapon Against Protesters

OneZero 03.06.20

Protests are proving to be testing arenas for various FR technologies used by the police:

‘Police in Seattle, Austin, and Dallas, as well as the FBI, have all asked for video or images that can be used to find violence and destruction during protests over the weekend…  A slew of companies offers police the capabilities to identify and match mugshots to captured imagery. State law enforcement in Michigan, Maryland, Virginia, South Carolina, and Pennsylvania have bought such facial recognition from a company called Dataworks Plus. According to documents obtained by OneZero, Dataworks Plus also works with cities including New York, Chicago, Los Angeles, Miami, Philadelphia, San Diego, and Sacramento.  The Japanese electronics company NEC claims that a third of U.S. law enforcement use its products, including fingerprint and facial recognition technology. It has contracts in more than 20 states, according to documents obtained by OneZero…  Hundreds of state and local police departments in the United States also have access to Clearview AI, according to BuzzFeed News, allowing them to run facial recognition searches against billions of photos scraped from social media. It’s unknown exactly how many people are in Clearview AI’s database, and little is known about how it operates. There has never been an independently verified audit of its accuracy, for example.’

The DEA Has Been Given Permission To Investigate People Protesting George Floyd’s Death

BuzzFeed News 03.06.20

Most agencies are being brought under one umbrella to apprehend suspects:

‘The Drug Enforcement Administration has been granted sweeping new authority to “conduct covert surveillance” and collect intelligence on people participating in protests over the police killing of George Floyd, according to a two-page memorandum obtained by BuzzFeed News…  Attorney General William Barr issued a statement Saturday following a night of widespread and at times violent protests in which he blamed, without providing evidence, “anarchistic and far left extremists, using Antifa-like tactics,” for the unrest. He said the FBI, DEA, US Marshals, and the Bureau of Alcohol, Tobacco, Firearms and Explosives would be “deployed to support local efforts to enforce federal law.”’

California Activists Ramp Up Fight Against Facial-Recognition Technology

WSJ 26.05.20

‘California privacy and civil liberties advocates are mobilizing to thwart a bill backed by Microsoft Corp. that would regulate facial recognition technology and that is working its way through the state legislature…  Speakers from more than a dozen groups, including civil liberties organizations and police unions, expressed their opposition to the bill at the hearing. The only speaker in favor of the bill was a lobbyist from Microsoft.   “In our view, a competitive race to the bottom in an unregulated marketplace benefits no one,” said Ryan Harkins, a senior director of public policy at Microsoft.’ 

Your face mask selfies could be training the next facial recognition tool

CNET 19.05.20

More fodder for FR tech:

‘Your face mask selfies aren't just getting seen by your friends and family -- they're also getting collected by researchers looking to use them to improve facial recognition algorithms. CNET found thousands of face-masked selfies up for grabs in public data sets, with pictures taken directly from Instagram.    The COVID-19 pandemic is causing a surge in people wearing face masks, and facial recognition companies are scrambling to keep up. Face masks cover up a significant portion of what facial recognition needs to identify and detect people -- essentially threatening the future of a multimillion-dollar industry unless the technology can learn to recognize people beyond the coverings.  To do that, they need more masked photos to train their algorithms.’ 

Amazon Is Quietly Fighting Against a Sweeping Facial Recognition Ban in Portland

OneZero 14.05.20

Can’t have a law driving down your profits:

‘Late last year, Amazon spent $12,000 lobbying against a new facial recognition law in Portland, Oregon. The proposed legislation would outright ban the use of the technology by government and private entities, and threaten a range of businesses that sell and use the technology in the city…  Not only would the new legislation prevent the use of facial recognition by government agencies and law enforcement, it would stop private entities such as businesses from using the technology, too. And it would even outlaw city agencies from evaluating facial recognition tech, including systems made available for free.’

Leaked pics from Amazon Ring show potential new surveillance features

Ars Technica 22.04.20

‘Customer demand’ is the justification for rolling out FR:

‘Ring last week distributed a confidential survey to beta testers weighing sentiment and demand for several potential new features in future versions of its software. According to screenshots shared with Ars, potential new features for Ring include options for enabling or disabling the camera both physically and remotely, both visual and audible alarms to ward off "would-be criminals," and potential object, facial, and license plate detection.  Such surveys usually include options a company is considering offering, though not necessarily actively planning to implement. The source who shared the survey with Ars, who asked not to be identified for fear of retaliation, described these options as the "most troubling" of a much larger set of potential features described in the survey… Media reports have for many months been indicating that Ring may integrate facial recognition into its product line. The company does not at this time use any such technology, including Amazon's own Rekognition platform, but in a January 6 letter to Congress (PDF), Amazon left open the possibility for adding it in the future.  "We do frequently innovate based on customer demand," the company said, citing competing products from Google, Tend, Netatmo, Wisenet, and Honeywell that include facial recognition capability.  "If our customers want these features in Ring security cameras, we will release these features only with thoughtful design including privacy, security, and user control, and we will clearly communicate with our customers as we offer new features," Amazon added.’

Coronavirus: Russia uses facial recognition to tackle Covid-19 (Video)

BBC 04.04.20

Russia feared to be going towards the ‘Chinese scenario’:

‘As Russian cities go into lockdown to try to contain coronavirus, Moscow is using the latest technology to keep track of residents.  City officials are using a giant network of tens of thousands of cameras - installed with facial recognition software - which they plan to couple with digital passes on people’s mobile phones. It’s prompted concern about whether such widespread surveillance will ever be rolled back.  Sarah Rainsford explains how the system works in her own Moscow neighbourhood.’

New Facial Recognition Policy Signed Into Law In Washington State

PYMNTS 01.04.20

Encouraging news:

‘Washington’s new law aims to institute transparency and accountability for facial recognition, as well as mandates to safeguard basic civil liberties. Municipal and state authorities can use facial recognition for missing persons, in Amber Alerts and Silver Alerts, and for public safety.  “This balanced approach ensures that facial recognition can be used as a tool to protect the public, but only in ways that respect fundamental rights and serve the public interest,” Smith noted… 

The new law, which will take effect in 2021, can only be used by local and state government agencies if the company providing the facial recognition technology uses an application programming interface (API). Aside from an API, another technology can be used if it can enable “legitimate, independent and reasonable tests” for “accuracy and unfair performance differences across distinct subpopulations.”  The new law also requires government agencies to file regular reports regarding the use of facial recognition technology. Law enforcement needs to obtain a warrant before using it in investigations, unless something is considered an emergency. The bill also establishes a task force to scrutinize how, when and why the technology is used.’
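
The "tests for accuracy and unfair performance differences across distinct subpopulations" that the law demands come down, at their core, to simple arithmetic: run the matcher against a labelled test set, compute an error rate per demographic group, and compare the groups. A minimal sketch of that comparison (the record format and group labels are illustrative assumptions, not anything specified by the Washington statute):

```python
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group, predicted_match, actual_match) records
    from running a face matcher against a labelled test set."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Fraction of wrong decisions per subpopulation.
    return {g: errors[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Gap between the worst- and best-served subpopulations."""
    return max(rates.values()) - min(rates.values())
```

An independent tester would flag a system whose disparity is large even if its overall accuracy looks acceptable.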

For Moscow's quarantined, 100,000 cameras are watching

AFP 24.03.20

FR to monitor citizens during quarantine. And, I’m presuming, after quarantine:

‘The city rolled out the technology just before the epidemic reached Russia, ignoring protests and legal complaints over sophisticated state surveillance…
"Due to stronger data protection laws in Europe, facial recognition has not yet been implemented on a large scale. Russian and Chinese companies have had less legal constraints to gather and use data than their European counterparts," Weber told AFP. Before the coronavirus pandemic, critics warned of the potential for excessive state surveillance reminiscent of the all-seeing "Big Brother" in George Orwell's novel “1984". The fear was that rather than protecting the general public, the cameras would be used to monitor Kremlin opponents and undermine civil liberties. "The security argument is the one always used to justify loss of privacy and personal liberty. That's where the greatest problem and the greatest danger lie," said French cybersecurity researcher and renowned hacker Baptiste Robert.’

Facial recognition is in London. So how should we regulate it?

WIRED 16.03.20

Confusion regarding FR laws:

‘"A lot of the discussions around facial recognition technology assume that 'facial recognition technology' is one thing, one specific device or piece of software with the same set of applications," says Ellen Broad, a senior fellow with the 3A Institute and member of the Australian government's Data Advisory Council. "There are a range of technological applications that fall under the umbrella of facial recognition technology, from face detection settings on digital cameras which help to improve picture focus, through to identity matching.”

There is also the problem of technologies which do not come under the definition of facial recognition, but which fulfil more or less the same purpose and which can be easily substituted for facial recognition. Such technologies allow police to circumvent regulation on particular technologies in a way which obeys the letter of the law but not the spirit of it. In Marbella in Spain, for example, regional legislation bans the use of LFR data without consent; instead, authorities have implemented a surveillance system which conducts 'appearance searches,' which detects unique facial traits, the colour of a person’s clothes, age, shape, gender and hair colour.

If the goal of regulation is to prevent unchecked mass surveillance, the ease with which technologies can be substituted for the same purposes suggests that any future regulation should not be tied too closely to a narrow definition of facial recognition.’

Halt public use of facial recognition tech, says equality watchdog

The Guardian 12.03.20

Scotland Yard defends use of the tech claiming it to be legitimate and protective of citizens:

‘Prof Peter Fussey, an expert on surveillance from Essex University who conducted the only independent review of the Metropolitan police’s public trials on behalf of the force, has found it was verifiably accurate in just 19% of cases…. 

The demands for the technology to be halted add to pressure from civil liberties organisations, including Amnesty International, which has described the Met’s rollout as “putting many human rights at risk, including the rights to privacy, non-discrimination, freedom of expression, association and peaceful assembly”.

Scotland Yard’s legal mandate for using live facial recognition states that the Human Rights Act recognises action in the interests of national security, public safety and the prevention of disorder or crime as legitimate aims.’

Singapore Prepares For Facial Recognition to Eliminate ID Cards

AlBawaba 10.03.20

Singapore opts for FR rather than ID:

'The facial recognition system is a major expansion of the Smart Nation Initiative, which began in 2014 under Prime Minister Lee Hsien Loong and through which the state has built up a biometric database on more than four million Singaporeans over the age of 15… 

By 2025, the government hopes the SingPass app in tandem with its expansive network of facial recognition cameras will eliminate the need for paper checks entirely.   The only thing a person would need to complete a transaction in a shop would be their face and a phone to verify the final amount charged to their profile.     The facial recognition software could also be used to dole out loyalty reward program points to people automatically, even if they forget they’re eligible.  The benefits of such an expansive biometric program, the government contends, is that private corporations won’t have control over sensitive data.’

With painted faces, artists fight facial recognition tech

AP News 08.03.20

Londoners fight back creatively:

‘The technique, developed by artist and researcher Adam Harvey, is aimed at camouflaging against facial detection systems, which turn images of faces into mathematical formulas that can be analyzed by algorithms. CV Dazzle - where CV is short for computer vision - uses cubist-inspired designs to thwart the computer, said Rowlands.  “You’re trying to kind of scramble that by applying these kind of random colors and patterns,” she said. “The most important is having light and dark colors. So we often go for blacks and whites, very contrasting colors, because you’re trying to mess with the shadows and highlights of your face.”’
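
The "mathematical formulas" Harvey's technique targets are typically embedding vectors: the system reduces a face image to a list of numbers and declares a match when a probe vector lies close enough to a stored one. The toy sketch below shows only that final comparison step (the three-number embeddings, names and 0.9 threshold are invented for illustration; real systems use vectors with hundreds of dimensions produced by a neural network):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.9):
    """Return the best-matching identity, or None if no score clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```

Dazzle make-up works upstream of this step: by disrupting the light-and-dark patterns the face detector relies on, it aims to prevent a usable probe vector from being produced at all, or to perturb it so far that no stored entry clears the threshold.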

'The new normal': China's excessive coronavirus public monitoring could be here to stay

The Guardian 09.03.20

Most governments will be aping China soon in the name of ‘protection’.  Coronavirus has become the new 9/11:

‘State authorities, in addition to locking down entire cities, have implemented a myriad of security measures in the name of containing the coronavirus outbreak. From top officials to local community workers, those enforcing the rules repeat the same refrain: this is an “extraordinary time” feichang shiqi, requiring extraordinary measures… 

Some worry current measures will continue in part because citizens are growing accustomed to them. Alex Zhang, 28, who lives in Chengdu, refers to Italian philosopher Giorgio Agamben’s theory on the state of exception, and how measures taken during a state of emergency can be prolonged.  “This type of governance and thinking for dealing with the epidemic can also be used for other issues - like the media, citizen journalists or ethnic conflicts. Because this method has been used before, citizens will accept it. It becomes normal,” he said.’

Even the Machines Are Racist. Facial Recognition Systems Threaten Black Lives.

Truthout 04.03.20

‘Consistent with the ACLU, McIlwain argues that we should be concerned by facial recognition in schools for the same reason we should be concerned about its use by Immigration and Customs Enforcement (ICE). “You make suspects out of people when you utilize facial recognition in a given area, when you surveil the people in a given area,” McIlwain says. While ICE justifies this surveillance under the pretext of law and order, the hypervisibility of BIPOC and profit-driven policing produces police control, and vulnerable people are therefore targeted for arrest, incarceration, deportation. A similar dynamic could play out in schools. For example, law enforcement could use data about a child to target parents suspected of being undocumented workers.’

Apple blocks Clearview AI facial recognition on iPhones after developer violation

CNET 28.02.20

Apple has made its stance clear with Clearview:

‘Apple has blocked customers from using the controversial Clearview AI facial recognition app on iPhones after Apple determined the startup violated its developer agreement and suspended its account. The move is a new blow to the facial recognition startup that also faces lawsuits and challenges from privacy advocates.

Clearview AI had used its developer account to distribute its software to law enforcement customers, an approach that let it bypass Apple's App Store and that violated Apple's requirements, BuzzFeed News reported Friday. That's against Apple's rules, an Apple representative said, so the company disabled Clearview AI's account.’

Secretive face-matching startup has customer list stolen

Ars Technica 26.02.20

The company which steals public images from the internet has had its own data stolen:

‘Clearview notified its customers about the leak today, according to The Daily Beast, which obtained a copy of the notification. The memo says an intruder accessed the list of customers, as well as the number of user accounts those customers set up and the number of searches those accounts have conducted.
"Unfortunately, data breaches are part of life in the 21st century," Tor Ekeland, an attorney for Clearview, told The Daily Beast. "Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”'

Met police chief: facial recognition technology critics are ill-informed

The Guardian 25.02.20

Met police chief dismisses claims that FR is biased:

‘[Cressida] Dick said trials of the technology had led to the arrests of eight criminals who would probably not have been caught otherwise and that it was not for the Met to decide the boundary between privacy and security. But she added: “Speaking as a member of public, I will be frank. In an age of Twitter and Instagram and Facebook, concern about my image and that of my fellow law-abiding citizens passing through LFR [live facial recognition] and not being stored, feels much, much, much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest.”…
LFR critics responded angrily to her speech. BBW accused Dick of ignoring the report by Prof Pete Fussey, who conducted the only independent review on behalf of the Met of its trials, finding it to be verifiably accurate in just 19% of cases.
The civil liberties group said: “It’s unhelpful for the Met to reduce a serious debate on facial recognition to unfounded accusations of ‘fake news’. Dick would do better to acknowledge and engage with the real, serious concerns.”
Hannah Couchman, a policy and campaigns officer at Liberty, said the commissioner’s language was “misleading and dangerous”, maintaining that LFR undermined safety “by handing extraordinary power to the state to control our movements and behaviour”.’

Leaked reports show EU police are planning a pan-European network of facial recognition databases

The Intercept 20.02.20

An internal EU report was leaked to The Intercept showing that the European bloc is considering building a massive FR data scheme:

‘The report, which The Intercept obtained from a European official who is concerned about the network’s development, was circulated among EU and national officials in November 2019. If previous data-sharing arrangements are a guide, the new facial recognition network will likely be connected to similar databases in the U.S., creating what privacy researchers are calling a massive transatlantic consolidation of biometric data… 

“This is concerning on a national level and on a European level, especially as some EU countries veer towards more authoritarian governments,” said Edin Omanovic, advocacy director for Privacy International. Omanovic worries about a pan-European face database being used for “politically motivated surveillance” and not just standard police work. The possibility of pervasive, unjustified, or illegal surveillance is one of many critiques of facial recognition technology. Another is that it is notoriously inaccurate, particularly for people of color.'

Facial recognition doesn't work – but that won't stop it from coming after you

The Independent 12.02.20

A tech that doesn’t work would still be used regardless:

‘It’s not just that this technology wrongly or rightly figures out who you are – it’s what happens to the information that’s collected that you should be concerned about too… 

As Labour’s Chi Onwurah told MPs last month, facial recognition “automates the prejudices of those who design it and the limitations of the data on which it is trained”. That should worry all of us.  This is an issue that spans the world, not just the UK. In the US, a company called Clearview, which is courting clients in law enforcement, is currently facing criticism for claims of accuracy when it comes to its FRT. It claimed that its technology is accurate for “all demographic groups”, despite carrying out tests on just 834 politicians

The European Commission too said it was considering a three- to five-year ban on FRT in public spaces, in order to allow for “a sound methodology for assessing the impacts of this technology and possible risk management measures”.’ 

Nowhere to hide

Chatham House 14.02.20

The longer the search for appropriate regulation drags on, the easier it becomes to legitimise the technology:

‘As countries consider whether to draft new laws to tame this rapidly advancing technology, they are likely to encounter at least three major policy challenges.  The first is trying to put the genie back in the bottle. In a short period of time facial recognition has become commonplace in consumer devices, shifting the so-called Overton window – the range of policies that the public finds acceptable – by normalizing technologies once perceived as potentially dystopian… 

Secondly, fit-for-purpose regulation will need to unpick the conflated technical and legal concepts bundled together under the ‘facial recognition’ umbrella, recognizing which are specific to facial recognition or other biometric technologies and which are related to artificial intelligence capabilities more broadly.

Finally, domestic regulation may be effective in restricting the buying and deployment of facial recognition systems by the public sector, and preventing private companies from using facial recognition on physical premises, but its effectiveness is bound to be hampered by the global trade in these technologies.’

Met police deploy live facial recognition technology

The Guardian 11.02.20

Technology deployed in East London shopping centre:

‘Silkie Carlo, the director of the privacy rights group Big Brother Watch (BBW), stood by the van for much of the day with a placard saying “Stop facial recognition”.  She described the police operation as a tipping point. “If we let this slide, this is going to be the beginning of something much worse. If they are successful in rolling this out and the legal challenges don’t work we will see this on CCTV networks pretty soon.”’

The ACLU Slammed A Facial Recognition Company That Scrapes Photos From Instagram And Facebook

BuzzFeed News 10.02.20

Clearview is a thief as well as a liar:

‘“The report is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands,” ACLU Northern California attorney Jacob Snow told BuzzFeed News, which obtained the document through a public records request.  

Clearview’s announcement that its technology has been vetted using ACLU guidelines is the latest questionable marketing claim made by the Manhattan-based startup, which has amassed a vast repository of biometric data by scraping photos from social media platforms, including Facebook, Instagram, Twitter, YouTube, and LinkedIn. Among those claims, Clearview AI has told prospective clients that its technology was instrumental in several arrests in New York, including one of an individual involved in a Brooklyn bar beating and another of a suspect who allegedly planted fake bombs in a New York City subway station. The NYPD denied using Clearview’s technology in both of these cases.’

Frustration grows in China as face masks compromise facial recognition

QUARTZ 05.02.20

Coronavirus face masks are obfuscating FR in China:

‘Most complaints are about unlocking mobile devices. Apple confirmed to Quartz that an unobstructed view of a user’s eyes, nose, and mouth is needed for FaceID to work properly. Similarly, Huawei says that its efforts to develop a feature that recognizes partially-covered faces have fallen short. “There are too few feature points for the eyes and the head so it’s impossible to ensure security,” explains Huawei vice president Bruce Lee, in a Jan 21 post on Weibo. “We gave up on facial unlock for mask or scarf wearing [users].”’

Moscow Activists Protest Widespread Facial Recognition With Face Paint

The Moscow Times 07.02.20

This is gaining traction:

‘The Telegram-based campaign, called “Sledui” (Follow), advises followers to paint their faces with bright, asymmetric patterns and thick black marks to throw off facial recognition… 

Sledui says its primary goal is to act as a visible symbol of discontent with the city’s facial recognition system, which it says lacks transparency and was implemented without input from the public.

“We do not want to enter the lenses of CCTV cameras without our consent. We do not want new technologies to lead to total control. … To protect ourselves from surveillance and facial recognition for a few minutes, we use makeup — makeup as a symbol of disobedience,” artist and activist Ekaterina Nenasheva wrote on her Facebook page.’

Clearview AI hit with cease-and-desist from Google, Facebook over facial recognition collection

CNET 05.02.20

Google and Facebook have sent Clearview cease-and-desist letters over its scraping of billions of users’ photos. Clearview maintains it has the right to do so. It all smacks of hypocrisy:

‘Critics have called the app a threat to individuals' civil liberties, but Clearview CEO and founder Hoan Ton-That sees things differently. In an interview with correspondent Errol Barnett on CBS This Morning airing Wednesday, Ton-That compared his company's widespread collection of people's photos to Google's search engine. "Google can pull in information from all different websites," Ton-That said. "So if it's public, you know, and it's out there, it could be inside Google search engine, it can be inside ours as well." Google disagreed with the comparison, calling it misleading and noting several differences between its search engine and Clearview AI. The tech giant argued that Clearview is not a public search engine and gathers data without people's consent while websites have always been able to request not to be found on Google.’

Hiding in plain sight: activists don camouflage to beat Met surveillance

The Guardian 01.02.20

We will see a lot more of camouflage tech in the coming months:

‘The Met faces challenges to its facial recognition plans. Big Brother Watch is bringing a crowdfunded legal challenge against it and the home secretary, according to Griff Ferris, the organisation’s legal officer. 

“Live facial recognition is a mass surveillance tool which scans thousands of innocent people in a public space, subjecting them to a biometric identity check, much like taking a fingerprint. People in the UK are being scanned, misidentified and wrongly stopped by police as a result.”’

Moscow rolls out live facial recognition system with an app to alert police

The Verge 30.01.20

‘The use of live facial recognition has become a controversial issue. A report earlier this year from The New York Times shed light on a company named Clearview AI, which secretly scraped 3 billion photos from social networks in order to sell facial recognition services to US law enforcement. Scientific studies have also repeatedly found that top facial recognition systems, like those sold by Amazon, display racial and gender biases.  Experts are so worried about the implications of rushing the deployment of facial recognition that many are calling for a moratorium on the technology. Even big tech companies are worried, with Google backing a temporary ban earlier this month.

NtechLab’s Minin, though, says fears about the technology are “overheated” and that companies like Clearview AI “that really do not care for privacy rights” are giving other firms a bad name.  “When carefully orchestrated, the system is not only harmless to regular people, it helps a lot in catching terrorists, criminals, pedophiles and pickpockets by aiding police to identify them in seconds and locate and capture them in hours instead of days and weeks,” says Minin. “The software itself doesn’t break any laws or do any harm.”’

THE RISE OF SMART CAMERA NETWORKS, AND WHY WE SHOULD BAN THEM

The Intercept 27.01.20

A historical look at the rise of surveillance machines and the growing video analytics industry:

‘Those who do not like new forms of Big Brother surveillance are presently fixated on facial recognition. Yet they have largely ignored the shift to smart camera networks — and the industrial complex driving it. To address the privacy threats of smart camera networks, legislators should ban plug-in surveillance networks and restrict the scope of networked CCTVs beyond the premises of a single site. They should also limit the density of camera and sensor coverage in public. These measures would block the capacity to track people across wide areas and prevent the phenomenon of constantly being watched.

The government should also ban video surveillance analytics in publicly accessible spaces, perhaps with exceptions for rare cases such as the detection of bodies on train tracks. Such a ban would disincentivize mass camera deployments because video analytics is needed to analyze large volumes of footage. Courts should urgently reconsider the scope of the Fourth Amendment and expand our right to privacy in public. Police departments, vendors, and researchers need to disclose and publicize their projects, and engage with academics, journalists, and civil society.

It is clear we have a crisis in the works. We need to move beyond the limited conversation of facial recognition and address the broader world of video surveillance, before it is too late.’

Facial Recognition Is Already Here: These Are The 30+ US Companies Testing The Technology

CBInsights 05.06.19

A CB Insights look at all the FR startups.

Facebook agrees to pay $550 million to end facial recognition tech lawsuit 

ZDNET 30.01.20

‘The $550 million settlement will be used for an all-cash fund to compensate users. However, the District Court presiding over the case must first approve the agreement and figure before the fund can be officially launched.  "This case should serve as a clarion call to companies that consumers care deeply about their privacy rights and, if pushed, will fight for those rights all the way to the Supreme Court and back until they are justly compensated," commented Paul Geller, the head of the consumer protection arm of Robbins Geller.’

Remember FindFace? The Russian Facial Recognition Company Just Turned On A Massive, Multimillion-Dollar Moscow Surveillance System

Forbes 29.01.20

‘Built on several tens of thousands of cameras and what's claimed to be one of the most advanced facial recognition systems on the planet, Moscow has been quietly switching on a massive surveillance project this month.  The software that's helping monitor all those faces is FindFace, the product of NtechLab, a company that some reports claimed would bring "an end to anonymity" with its FindFace app. Launched in the mid-2010s, it allowed users to take a picture of someone and match their face to their social media profiles on Russian site Vkontakte (VK)…  NtechLab CEO Alex Minin claims, in an interview with Forbes, that it’s the biggest “live” facial recognition project in the world, even if there are larger non-real-time deployments. Real-time, live recognition can pick faces out in a crowd and instantly say whether or not they match those in police databases of wanted criminals. London’s Met Police have been testing out a similar system with Japanese provider NEC. Older, “archived” facial recognition is slower as police have to take recorded footage and run it through a facial recognition system to find a match.’

Privacy groups want a federal facial-recognition ban, but it’s a long shot

Fast Company 28.01.20

‘“There’s no safe way for governments to use facial recognition for surveillance purposes,” said Evan Greer of Fight for the Future in an email to Fast Company. “That’s why there’s growing consensus that governments and law enforcement agencies should be banned outright from using this technology.”  “There should also be strict limits on corporate and private use of this technology, and it should not be allowed in public spaces or institutions like colleges, hotels, or airports,” Greer wrote.’

40 groups have called for a US moratorium on facial recognition technology

Technology Review 27.01.20

‘The company, Clearview AI, scraped public photographs from Facebook, YouTube, and other websites to create a database of more than three billion images. Such technology, the letter argues, not only risks being inaccurate for people of color but could be used to “control minority populations and limit dissent.” The letter was signed by organizations including the Electronic Frontier Foundation, Color of Change, Fight for the Future, and the Consumer Federation of America, and sent to the Privacy and Civil Liberties Board, an agency within the executive branch.’

Facial recognition cameras will put us all in an identity parade

The Guardian 27.1.20

New measures are needed to protect the public:

‘As long as the UK lacks a statutory law with a clear and binding code of practice, it simply isn’t ready for the mass deployment of this technology. At the very least, we need to have a genuine public debate. As hard as it may be, democratic governments need to resist the temptation to undermine civil liberties in the name of safety and security. The stakes are far too high.’

London Police to Deploy Facial Recognition Cameras Despite Privacy Concerns and Evidence of High Failure Rate 

TIME 24.1.20

‘“Turning surveillance cameras into identity checkpoints is the stuff of nightmares,” Carlo wrote in TIME last year, in response to public trials of the technology. “For centuries, the U.K. and U.S. have entrenched protections for citizens from arbitrary state interference — we expect the state to identify itself to us, not us to them. We expect state agencies to show a warrant if our privacy is to be invaded. But with live facial recognition, these standards are being surreptitiously occluded under the banner of technological ‘innovation.'”’

Met Police to deploy facial recognition cameras

BBC 24.01.20

The Met to deploy facial recognition: 

‘Big Brother Watch, a privacy campaign group, said the decision represented “an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK”.  Silkie Carlo, the group's director, said: “It flies in the face of the independent review showing the Met's use of facial recognition was likely unlawful, risked harming public rights and was 81% inaccurate.”  Last year, the Met admitted it supplied images for a database carrying out facial recognition scans on a privately owned estate in King's Cross, after initially denying involvement.’

Humana to allow members to connect hundreds of wearables to Go365 wellness program

FierceHealthcare 22.01.20

On inter-connected wearables:

‘Humana is teaming up with health data company Validic to allow its members to connect a slew of wearable devices to the insurer’s Go365 wellness program. Through the partnership, Humana members will be able to connect hundreds of smart devices, ranging from high-tech smartwatches to more simple blood glucose monitors, the companies announced Wednesday. The goal is to grow access to Go365 and thus grow the data the program has available, as members are not restricted to high-end devices.’

Sometimes breaking the law is the 'only moral' choice: Snowden opens up to Ecuador's ex-president Correa (VIDEO)

RT 24.01.20

Snowden talking about whistleblowing:

‘One of the core threats to the rule of law in a society... is the government using secrecy as a shield against democratic accountability. Using secrecy… to excuse themselves from public awareness of what it is exactly that they've been doing.’ 

Swipe left on Big Brother? Tinder adds user-tracking ‘panic button,’ because there’s always a predator somewhere

RT 23.01.20

Tinder wants to track your movements:

‘The Noonlight tracker is one of several “safety” measures Tinder is rolling out that would be a bonanza in the hands of any surveillance state. A new photo verification feature adds a blue checkmark to profiles whose users can upload in real time a selfie matching a random pose requested by the app. Given the online-dating cliche in which the real-life user has 60 pounds and 20 years on the photo, this feature is sure to be popular. Another feature flags “potentially offensive” messages and asks the user if they’re offended, seemingly little more than a cheap way to train a ‘civility-police’ AI. Tinder has over 50 million users globally, making it one of the most popular dating apps in existence. A study conducted earlier this month by Norwegian consumer advocate Forbrukerradet found Tinder to be a virtual sieve for sensitive customer data. The app circulates personal information among 45 Match Group brands and third-party advertisers without asking or notifying the user outside of the privacy policy they agree to upon signing up for the service, in a way Forbrukerradet alleged runs afoul of European GDPR privacy law.’

A US government study confirms most face recognition systems are racist

Technology Review 20.12.19

Racism in facial recognition:

‘NIST shared some high-level results from the study. The main ones: 

  1. For one-to-one matching, most systems had a higher rate of false positive matches for Asian and African-American faces over Caucasian faces, sometimes by a factor of 10 or even 100. In other words, they were more likely to find a match when there wasn’t one. 

  2. This changed for face recognition algorithms developed in Asian countries, which produced very little difference in false positives between Asian and Caucasian faces. 

  3. Algorithms developed in the US were all consistently bad at matching Asian, African-American, and Native American faces. Native Americans suffered the highest false positive rates. 

  4. For one-to-many matching, systems had the worst false positive rates for African-American women, which puts this population at the highest risk for being falsely accused of a crime.’
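What a demographic gap in false positives means can be made concrete with a toy calculation. This is an illustrative sketch only, not NIST's methodology; the trial data and group labels are invented:

```python
from collections import defaultdict

def false_positive_rate(trials):
    # trials: list of (group, is_same_person, system_said_match) tuples.
    # A false positive is a "match" verdict on a pair of different
    # people -- the error NIST found skewed heavily by demographic group.
    fp = defaultdict(int)       # false positives per group
    nonmatch = defaultdict(int)  # non-matching pairs per group
    for group, same, said_match in trials:
        if not same:
            nonmatch[group] += 1
            if said_match:
                fp[group] += 1
    return {g: fp[g] / nonmatch[g] for g in nonmatch}

# Invented example: ten impostor pairs per group, one wrongly accepted
# in Group A, none in Group B -- a 10x disparity like those NIST reported.
trials = (
    [("Group A", False, True)]
    + [("Group A", False, False)] * 9
    + [("Group B", False, False)] * 10
    + [("Group A", True, True)]   # genuine matches don't affect the rate
)
print(false_positive_rate(trials))  # {'Group A': 0.1, 'Group B': 0.0}
```

The point of the sketch is that "a factor of 10 or even 100" refers to this per-group ratio: the same algorithm, at the same threshold, wrongly accepts impostor pairs far more often for some populations than for others.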

‘We are hurtling towards a surveillance state’: the rise of facial recognition technology

The Guardian 05.10.19

Facial recognition in bars in UK:

‘Facewatch HQ is around the corner from Gordon’s, brightly lit and furnished like a tech company. Fisher invites me to approach a fisheye CCTV camera mounted at face height on the office wall; he reassures me that I won’t be entered on to the watchlist. The camera captures a thumbnail photo of my face, which is beamed to an “edge box” (a sophisticated computer) and converted into a string of numbers. My biometric data is then compared with that of the faces on the watchlist. I am not a match: “It has no history of you,” Fisher explains. However, when he walks in front of the camera, his phone pings almost instantly, as his face is matched to a seven-year-old photo that he has saved in a test watchlist.  “If you’re not a subject of interest, we don’t store any images,” Fisher says. “The argument that you walk in front of a facial recognition camera, and it gets stored and you get tracked is just.” He pauses. “It depends who’s using it.”’
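The pipeline Fisher describes (capture, convert to a "string of numbers", compare against a watchlist, discard non-matches) can be sketched roughly as follows. This is a minimal illustration, not Facewatch's actual system; the distance metric, threshold value, and function names are all hypothetical:

```python
import math

THRESHOLD = 0.6  # hypothetical match threshold on template distance

def match_against_watchlist(template, watchlist):
    # Compare a captured biometric template (a fixed-length list of
    # numbers) against every enrolled template; return the first
    # identity within the threshold, or None if nobody matches.
    for person_id, enrolled in watchlist.items():
        if math.dist(template, enrolled) <= THRESHOLD:
            return person_id
    return None

def process_capture(template, watchlist, stored_templates):
    # Facewatch's stated policy: non-matches are discarded
    # ("It has no history of you"); matches are retained and alerted on.
    person = match_against_watchlist(template, watchlist)
    if person is None:
        return None                    # not a subject of interest: discard
    stored_templates.append(template)  # subject of interest: retained
    return person
```

A usage sketch: with `watchlist = {"fisher": [0.2, 0.4]}`, a far-off template such as `[0.9, 0.9]` returns `None` and nothing is stored, while a nearby one such as `[0.21, 0.41]` returns `"fisher"` and is retained. The privacy question in the article is precisely Fisher's pause: whoever operates the system decides what the discard branch really does.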

Shoshana Zuboff: Surveillance capitalism (video)

VPRO 20.12.19

Brilliant interview with the author of ‘The Age of Surveillance Capitalism’.

Privacy fears as India police use facial recognition at rally

Al Jazeera 30.12.19

India’s increasing use of facial technology:

‘Indian authorities have said the technology is needed to bolster a severely under-policed country.  But "its use has strayed from finding missing children to being deployed in peaceful public gatherings" with a complete lack of any oversight or accountability, said Gupta.’

How China Tracks Everyone | VICE on HBO (video)

VICE 23.12.19

Dystopian and present reality in China through facial recognition.

A US government study confirms most face recognition systems are racist

Technology Review 20.12.19

‘The use of face recognition systems is growing rapidly in law enforcement, border control, and other applications throughout society. While several academic studies have previously shown popular commercial systems to be biased on race and gender, NIST’s study is the most comprehensive evaluation to date and confirms these earlier results. The findings call into question whether these systems should continue to be so widely used.’ 

Facial recognition fails on race, government study says

BBC 20.12.19

Failure for facial recognition and identifying black people:

‘"While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied," said Patrick Grother, a Nist computer scientist and the report's primary author.  "While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”’

75,000 people call on Congress to ban facial recognition tech

Daily Dot 16.12.19

Petitions in the US against facial recognition:

‘The signatures on the petitions were from BanFacialRecognition.com, a website set up in July that calls for the ban of facial recognition technology’s use among law enforcement. The call has been endorsed by nearly 40 digital rights and civil rights organizations and groups.’

Push to rein in facial recognition stalls

Politico 16.12.19

‘“A once-promising bipartisan effort to limit the federal government's use of facial recognition technology has come to a halt in the House,” Cristiano Lima reports in a new dispatch for Pros. Democratic and Republican lawmakers on the committee spearheading the push offered two reasons: the death of House Oversight Chairman Elijah Cummings, who championed the push, and the ongoing impeachment battle, which has embroiled several of the committee's top leaders.’

Airport biometrics use grows in Beijing, Rome, San Fran, Tokyo and Delhi as Schiphol spoofed

Biometrics Update 12.12.19

Facial recognition grows at airports and is easily spoofed:

‘The effectiveness of facial recognition technology for sensitive applications like aviation security and payments is being called into question, however, after a team of researchers from artificial intelligence company Kneron was able to successfully spoof the biometric self-boarding system at Schiphol Airport with a photo on a phone screen, as well as several systems in China, according to Fortune.  Kneron employees say they defeated payment systems from AliPay and WeChat with high quality 3D masks, and used images on phone screens to defeat payment and boarding systems at Chinese rail stations.’

REPORT: FACIAL RECOGNITION SHOULD BE BANNED FROM EVERYDAY LIFE

Futurism 13.12.19

A call to ban facial recognition:

‘The report came with a grim warning, MIT Technology Review reports. It argues that because AI tools like facial recognition, and especially emotion-detecting algorithms, can be highly inaccurate — and propagate systematic racial and gender biases — they should be removed from society.’

Bill Would Constrain Some Police Use of Facial-Recognition Tools

NextGov 9.12.19

New bill would require US police to obtain a warrant before using facial recognition to track individuals for more than three days, including via private closed-circuit cameras:  (Lots of links)

‘Police would need a warrant to use facial-recognition tools to track an individual for more than three days under a proposed law that would place the first federal limits on law enforcement’s use of the technology.’

About-face? US border agency scraps plans for mandatory facial recognition scans for American travelers

RT 6.12.19

Mandatory facial recognition scans dropped for American travelers at US airports:

‘Leading the congressional opposition was Senator Ed Markey (D-MA), a longtime critic of federal agencies’ use of facial recognition technology, who slammed the proposal as “disturbing government coercion.” He celebrated the reversal on social media.  “Thanks to our pressure, DHS is reversing course and NOT moving forward with its dystopian facial recognition proposal at US airports,” Markey said in a tweet, adding “we cannot take our right to privacy for granted.” Going forward, Markey also said he intends to bring legislation that would “ban this kind of surveillance” outright.’ 

How Ring Went From ‘Shark Tank’ Reject to America’s Scariest Surveillance Company

VICE 3.12.19

Ring and US police:

‘This amounts to a picture of paralyzing scale: Amazon, one of the three largest publicly-traded companies in the world, owns a company that has been quietly building a privatized surveillance network throughout the United States.’

Microsoft Funds Facial Recognition Technology Secretly Tested on Palestinians

Truthout 30.11.19

Microsoft facial recognition with Israel in Palestine:

‘Most recently, AnyVision, an Israeli facial recognition tech company funded by Microsoft, has been wielding its software to help enforce Israel’s military occupation, using the occupied West Bank to field-test technology it plans to export around the world… In other words, the company carried out its operations in a part of the world where democratic freedoms are not only “at risk,” but nonexistent — and, in doing so, it directly violated Microsoft’s principle of “lawful surveillance.”’

Big Brother is watching: Chinese city with 2.6m cameras is world's most heavily surveilled

The Guardian 2.12.19

Chinese city surveillance:

‘With 2.58m cameras covering 15.35 million people – equal to one camera for every six residents – Chongqing has more surveillance cameras than any other city in the world for its population, beating even Beijing, Shanghai and tech hub Shenzhen.’

Chinese tech firms are shaping UN facial recognition standards, documents show 

TechInAsia 2.12.19

China helping to shape UN facial recognition regulations!:

‘China’s telecommunications equipment maker ZTE, security camera maker Dahua Technology, and state-owned Chinese telecommunication company China Telecom are among those proposing new international standards in the UN’s International Telecommunication Union (ITU) for facial recognition, video monitoring, city, and vehicle surveillance, said the report on Monday.’ 

China wildlife park sued for forcing visitors to submit to facial recognition scan 

The Guardian 4.11.19

Facial Recognition in Chinese park:

‘A Chinese wildlife park has sparked outcry after making visitors submit to facial recognition scanning, with one law professor taking it to court. Guo broadly backed the use of such technology by authorities but also said that the issue needed to be discussed more widely in China. “I think it is OK and, to some extent, necessary for government agencies, especially police departments, to implement this technology, because it helps to maintain public security,” Guo said, according to an interview with Beijing News. “But it’s still worth discussing when it comes to the legitimacy and legality of using the technology.”’

10 Actions That Will Protect People From Facial Recognition Software

Brookings Institution 31.10.19

Brookings Institution sets out Facial Recognition parameters.

Facebook’s new AI tweaks video so you can’t be identified by face recognition tech 

Facebook Paper 2019

A new facial de-identifier has been developed:  

‘This technology could allow for more ethical use of video footage of people for training AI systems, which typically require several examples to learn how to emulate the content they’re fed. By making people’s faces impossible to recognize, these AI systems can be trained without infringing on the test subjects’ privacy.’

Opposing mass surveillance IS patriotic: Edward Snowden opens up about govt spying programs on Joe Rogan show

RT 24.10.19

‘“All this information that used to be ephemeral…now, these things are stored. It doesn’t matter whether you’re doing anything wrong,” Snowden said. “That’s how bulk collection – the government’s euphemism for mass surveillance – works. They simply collect it all in advance in the hope that it will one day become useful.”  Google Street View cars, wireless access points, and seemingly innocent apps are all tools of surveillance, he said, explaining that “there’s an industry that is built on keeping [bulk data collection and surveillance] invisible.”’

How Photos of Your Kids Are Powering Surveillance Technology

NYT October 2019

Kids’ images as fodder for AI:

‘By law, most Americans in the database don’t need to be asked for their permission — but the Papas should have been.   As residents of Illinois, they are protected by one of the strictest state privacy laws on the books: the Biometric Information Privacy Act, a 2008 measure that imposes financial penalties for using an Illinoisan’s fingerprints or face scans without consent.   Those who used the database — companies including Google, Amazon, Mitsubishi Electric, Tencent and SenseTime — appear to have been unaware of the law, and as a result may have huge financial liability, according to several lawyers and law professors familiar with the legislation.’ 

Could Privacy and Security Scandals Scuttle the IoT’s Many Benefits?

Industry Week 10.10.19

Horrid privacy issues with IoT:

‘Companies and individuals can’t afford to abandon the IoT: the benefits and technological inevitability are simply too great. However, the escalation of the US/Russia cyberwar efforts to infiltrate each other’s smart power grids, and their potential for economic ruin and/or shooting wars as a result, plus the growing signs of public distrust due to shoddy privacy protections, mean that Congress and the private sector must work together now to craft flexible, evolving regulatory protections based on the EU ones that will assure security while at the same time not inhibiting innovation.’ 

U.S. expands blacklist to include China's top AI startups ahead of trade talks

Reuters 7.10.19

US blacklists Chinese AI surveillance companies:

‘China said the United States should stop interfering in its affairs. It will continue to take firm and resolute measures to protect its sovereign security, foreign ministry spokesman Geng Shuang told a regular media briefing without elaborating.  Hikvision, with a market value of about $42 billion, calls itself the world’s largest maker of video surveillance gear.  SenseTime, which says it is valued at more than $7.5 billion, is one of the world’s most valuable AI unicorns while Megvii, backed by e-commerce giant Alibaba, is valued at around $4 billion and is preparing an IPO to raise at least $500 million in Hong Kong.’

France plans to use facial recognition to let citizens access government services

Technology Review 3.10.19

‘Singapore is building a facial recognition ID scheme for government services, while India uses iris scans as part of its national Aadhaar identity system. However, France’s government insists that, unlike China’s, its ID system won’t be used to monitor citizens, or integrated into identity databases. It says face scans will be deleted when the enrollment process is over.’ 

Getting a new mobile number in China will involve a facial-recognition test

QUARTZ 3.10.19

Faces to be scanned in China before buying mobile phone:

‘From Dec. 1, people applying for new mobile and data services will have to have their faces scanned by telecom providers, the Ministry of Industry and Information Technology said in a Sept. 27 statement (link in Chinese).’ 

Facial recognition row: police gave King's Cross owner images of seven people

The Guardian 4.10.19

Controversial facial recognition cameras in Kings Cross had been used by UK police:

‘Images of seven people were passed on by local police for use in a facial recognition system at King’s Cross in London in an agreement that was struck in secret, the details of which have been made public for the first time.’

CHINA INVENTS SUPER SURVEILLANCE CAMERA THAT CAN SPOT SOMEONE FROM CROWD OF THOUSANDS

Independent 2.10.19

Camera facial recognition amongst thousands in China:

‘Last year, the country began introducing gait recognition technology that uses artificial intelligence to recognise people from up to 50 metres away just by the movement of their walk.  Another initiative uses dove-like drones to monitor crowds from the sky.  The so-called “spy bird” programme uses flocks of robotic birds equipped with high-resolution cameras in order to secretly surveil people on the ground.’  

Google reportedly targeted people with 'dark skin' to improve facial recognition

The Guardian 3.10.19

‘“Facial recognition software has bias baked into its coding, and has primarily been used to control our movements and decide who belongs and who doesn’t, in public and private spaces,” he added. “This technology is dangerous – especially for black people – and that’s why Color Of Change is mobilizing for complete legislative bans on facial recognition across the country. We don’t need more tech experiments. We need government regulation to stop the unfettered growth of this technology.”’

Normal Intrusions: Globalising AI Surveillance

Off-Guardian 27.09.19

‘“As these technologies become more embedded in governance and politics, the window for change will narrow.”  The window, in many instances, has not so much narrowed as closed, as it did decades ago.’

New surveillance tech means you'll never be anonymous again

WIRED 16.09.19

More than just facial recognition:

‘"Fundamentally, we need to think about democracy-by-design principles," Michael says. "We just can’t keep throwing technologies at problems without a clear roadmap ahead of their need and application. We need to assess the impact of these new technologies. There needs to be bidirectional communication with the public.”’ 

Smile-to-pay: Chinese shoppers turn to facial payment technology

The Guardian 4.09.19

Facial technology for payments in China with twisted justification:

‘“The facial recognition technology helps to protect our privacy,” explains IFuree engineer Li Dongliang.  “In the traditional way, it’s very dangerous to enter the password if someone stands beside you. Now we can complete the payment with our faces, which helps us secure our account,” he insists.’ 

Police use of facial recognition is legal, Cardiff high court rules

The Guardian 4.09.19

‘Although the mass surveillance system interferes with the privacy rights of those scanned by security cameras, two judges have concluded, it is not illegal.  The legal decision came on the same day the mayor of London, Sadiq Khan, acknowledged that the Metropolitan police had participated in the deployment of facial recognition software at the King’s Cross development in central London between 2016 and 2018, sharing some images with the property company running the scheme.’ 

King's Cross face recognition 'last used in 2018'

BBC 2.09.19

Surveillance cameras dodging the issue of legality:

‘According to a statement on its website, the two cameras were operational between May 2016 and March 2018 and the data gathered was "regularly deleted". The King's Cross partnership also denied any data had been shared commercially.  It had used it to help the Metropolitan and British Transport Police "prevent and detect crime in the neighbourhood", it said.  But both forces told BBC News they were unaware of any police involvement.  It said it had since shelved further work on the technology and "has no plans to reintroduce any form of FRT [facial-recognition technology] at the King's Cross estate”.’

THE U.S. BORDER PATROL AND AN ISRAELI MILITARY CONTRACTOR ARE PUTTING A NATIVE AMERICAN RESERVATION UNDER “PERSISTENT SURVEILLANCE”

The Intercept 25.08.19

‘Elbit Systems has frequently touted a major advantage over these competitors: the fact that its products are “field-proven” on Palestinians. The company built surveillance sensors for Israel’s separation barrier through the West Bank, which has been deemed illegal under international law, as well as around the Gaza Strip and on the northern border with Lebanon and Syria… In the process of opposing the towers, Tohono O’odham people have developed common cause with other communities struggling against colonization and border walls. David is among numerous activists from the U.S. and Mexican borderlands who joined a delegation to the West Bank in 2017, convened by Stop the Wall, to build relationships and learn about the impacts of Elbit’s surveillance systems.  “I don’t feel safe with them taking over my community, especially if you look at what’s going on in Palestine — they’re bringing the same thing right over here to this land,” she says. “The U.S. government is going to be able to surveil basically anybody on the nation”.’

Halt the use of facial-recognition technology until it is regulated

Nature 27.08.19

‘Facial-recognition technology is not ready for this kind of deployment, nor are governments ready to keep it from causing harm. Stronger regulatory safeguards are urgently needed, and so is a wider public debate about the impact it is already having. Comprehensive legislation must guarantee restrictions on its use, as well as transparency, due process and other basic rights. Until those safeguards are in place, we need a moratorium on the use of this technology in public spaces… These tools are dangerous when they fail and harmful when they work. We need legal guard rails for all biometric surveillance systems, particularly as they improve in accuracy and invasiveness. Accordingly, the AI Now Institute that I co-founded at New York University has crafted four principles for a protective framework.’