SPVM refuses to admit to using face recognition
The SPVM is refusing to admit whether it’s using facial recognition software.
I don’t know why people are fussing. The software and the tech exist, so they will be used. If government, police or commerce claim not to be using them, they will be lying. We simply have to accept that it is now part of public life.
Ian 09:13 on 2020-02-18 Permalink
It’s not so much that they refuse to admit but also that they refuse to deny…
I saw an article pretty recently that mentioned that some SPVM officers were using Clearview without authorization and the organization told them to stop it… but with this article it now seems like maybe they are just using different software and don’t want to admit it?
Anyway, Radio-Canada made this same statement in this article from last month, so I guess people aren’t all that worked up about whether the cops are using facial recognition tech or not if La Presse is publishing this as “news” a full month later.
The Service de police de la Ville de Montréal (SPVM) says for its part that it cannot “confirm or deny that the SPVM uses any facial recognition technology,” noting in passing that it “always makes sure to conduct its operations and investigations in compliance with the laws in force.”
Kevin 10:03 on 2020-02-18 Permalink
Ad companies are now using facial recognition technology to deliver custom messages through electronic billboards at Central Station.
Which is a guarantee I will never, ever, buy those products.
Tim S. 10:54 on 2020-02-18 Permalink
It’s because of stuff like this that I’m trying to convince my kid’s school that posting all kinds of pictures of children on public social media accounts is a bad idea. It’s proving a bit difficult to get through to them, unfortunately.
Kevin 13:31 on 2020-02-18 Permalink
Tim S.
They need your permission to take your kid’s photo and to post it. I get a notice every year.
Evan 13:44 on 2020-02-18 Permalink
I travel a lot, and these days wherever there’s a crowd there seem to be people taking video with minicams or phones for social media. The concept of permission or privacy doesn’t really exist anymore; it’s the times we live in.
Tim S. 19:58 on 2020-02-18 Permalink
Kevin: I get the notice. I fill it out every year. The school posted a bunch of pictures anyway. Even though they took them down from the public feed, there’s probably no way to remove the pictures from the servers and whatever third parties they were passed on to. Trying to explain this to the administration has proved a challenge.
Chris 22:51 on 2020-02-18 Permalink
>The software and the tech exists, so it will be used.
I see that meme a lot, and there’s truth to it, but things are not so absolute.
The tech also exists to fire small metal projectiles from great distances and anonymously murder someone. Yet this is done very rarely (in our country). Why? Laws. Laws are useful and largely work. Our police also have the tech to just disappear people, yet it (almost) never happens. So it can be with facial recognition. We could ban it, and its use could become very rare. (Not saying we should/will, just that we could, and it could work.)
Kate 23:19 on 2020-02-18 Permalink
We’re amassing so much surveillance video that it was only a matter of time before we came up with A.I. methods of sorting through it.
The problem is that sometimes we’ll have a “good” reason to use it, e.g. the current story about the man who attacked the woman on Saturday, mentioned a couple of posts after this one. Police have given out a video still of the presumed assailant with a clear image of his face. If software can put a name to the guy and help the police round him up, I think most people would say go for it.
The problem begins when the A.I. is allowed to create huge dossiers on all of us and our comings and goings and our habits. Most of this material will never be seen by human eyes, but the A.I. will flag up anomalies. What if the A.I. can find people acting shiftily because they may be planning crimes or terrorism? But what if the “crime” is a demonstration against an injustice? What if the A.I. can spot a married person having an affair? What if your boss is warned that you’re looking for a new job? Or the A.I. sees you going into bars, so you’re ordered into mandatory rehab?
Any change of personal routines or habits could be flagged up as suspicious.
We know that supposedly secure entities like banks and government have had leak after leak of sensitive data, so let’s not kid ourselves that the valuable files of anomalous human activities won’t leak out or be sold on the black market. How much would you pay for your wife not to find out you’ve been seeing a stripper on the side?
We have to learn to live with this.
Robert H 12:47 on 2020-02-19 Permalink
My reservations exactly. In this world, there’s no such thing as an unmixed blessing.