Cow-nter Surveillance: A DIY Guide to Messing with Surveillance Systems

As part of our year-long, four-issue series about the uses of analog media and DIY arts as activist tools, we’ve brought on cybersecurity expert and multimedia artist Kate Bertash to share some tips and tricks for creators like you. In this first edition, Bertash explains how to respond to surveillance systems in the world around you using seamless patterns, image manipulation, make-up and more.

A DIY GUIDE TO
MESSING WITH SURVEILLANCE SYSTEMS
USING FABULOUS PATTERNS
(AND MORE!)  

Image recognition systems are everywhere, it’s true. It is tempting to think of these artificial intelligence systems as magical, all-knowing black boxes right out of science fiction. But that’s what the people who make (and employ) surveillance tech want you to believe. In reality, computers aren’t that smart, and they’re much worse at seeing than people are.

That’s because computers must be trained to see. For an AI to recognize an image of a cow, it first has to be fed hundreds, sometimes thousands of photos of other cows. These pictures help the computer learn which features of a picture, on average, suggest that it is depicting a cow. You guessed it — white fur with black spots is one of them.

But imagine we clip the cow right out of the pasture and paste it against a blue sky, floating among clouds. Many image recognition systems would respond, “I know! It’s a plane!” Having seen picture after picture of cows standing in grassy fields, the AI has come to treat proximity to grass as a key feature of cow-ness.
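
To make that concrete, here’s a minimal sketch of quizzing a pretrained classifier about a photo. It assumes Python with PyTorch and torchvision installed; the ResNet-50 network is a generic ImageNet-trained stand-in for “an image recognition system,” not any particular surveillance product, and the filename is a hypothetical test photo.

```python
# A minimal sketch of quizzing a pretrained classifier, assuming
# PyTorch and torchvision are installed. ResNet-50 stands in for
# "an image recognition system"; "cow_in_sky.jpg" is hypothetical.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # the resize/crop/normalize this model expects
img = Image.open("cow_in_sky.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = torch.nn.functional.softmax(model(batch)[0], dim=0)

# Print the five labels the network is most confident about.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {p.item():.1%}")
```

Feed it your cow-against-the-sky collage and watch how the labels wobble away from “cow.”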

These so-called “errors of association” are super common, including in the systems sold to police and defence agencies. Of course, the public doesn’t hear about these mistakes often – the people who hawk these things prefer to vastly overstate their accuracy.

This is just one example of clumsy computing, but it’s instructive. For artists and activists opposed to ubiquitous surveillance, these innate flaws offer a way to push back against invasive tech. Mistaken identity and junk data are easy to produce, and we can exploit them in the name of disruption or protest. Here’s a general guide to doing just that.

 

Atlas of Surveillance

IDENTIFY YOUR INTENT:

There are two core objectives that ground anti-surveillance fashion and art: the defensive stance of blocking image recognition, or the offensive stance of messing with a system, often by overloading it with false positives. Blocking aims to make it difficult for the system to “see” its target at all: wear something that confuses the algorithm, or obscure the camera’s eye, and it won’t know how to make a match. Overloading an image recognition system with hits can be a satisfying form of protest in its own right. Piling junk data into these systems makes them less useful and more expensive to run and clean. And it’s pretty easy to drum up these adversarial images once you’ve played around a bit.

Step 1

To choose a system to mess with, you’ll want to find out who is watching and how. You likely won’t have to look far. At this point, police routinely run facial recognition on videos of protesters, and many police cars carry an Automated License Plate Reader (ALPR).

If you’re in the U.S., check out the Atlas of Surveillance from the Electronic Frontier Foundation, a database collecting information about local surveillance practices across the country. Canadians may have to dig a bit more, but check out OpenCanada.org or CIPPIC.

Step 2

It’s time to play around! To start, fire up any app with facial detection features. If you’re not comfortable with code, never fear: there are plenty of no-code ways to do this. You might even start with Instagram or Snapchat filters. (If you do like code, there’s a minimal detection script after the list below.) Now, see how the app reacts to different things. Try various angles, lighting, props, or accessories to unearth some of the algorithm’s logic, and its weaknesses. It won’t take long for you to figure out what kind of thing blocks or over-stimulates these technologies. Other creative ways to test image recognition systems:

//  Grab some masks or bandanas and cover different parts of your face. Notice which colours, patterns, or placements disrupt facial detection in the app you’re using.

//  Hold squares of patterned paper or other shapes over different parts of your face, and see where facial detection breaks.

//  Test out your “base face” versus makeup or other face coverings at DragVsAI.org.

// Point the iDetection app at different objects in your home, at your pets, at yourself with a mask or costume on.
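
Here’s that promised script: a minimal sketch of a face-detection testbed using OpenCV’s classic Haar-cascade detector. This is just one of many detectors you could poke at, and the filenames are hypothetical placeholders.

```python
# A quick face-detection testbed, assuming Python with OpenCV
# (pip install opencv-python). The Haar cascade it loads ships
# with OpenCV; "my_face.jpg" is a hypothetical test photo.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("my_face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")

# Draw boxes so you can see exactly what the detector latched onto.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```

Re-run it on photos of yourself in different masks, patterns, and lighting, and watch how the face count changes.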

You can also feed images into image recognition software like iDetection, using some of your own photos or these shortcuts to pile up pictures:

// Search for open data sets online, like “license plate data set” or “masked faces.”

// Try various search terms on Google Image Search or Pinterest, then capture many files at once using a browser extension such as “Download All Images.”

// Thispersondoesnotexist.com is a site of AI-generated faces, a great way to get sample faces that belong to no one.

It benefits all of us to be cautious about collecting pictures and uploading them to apps. Put people’s privacy first.
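
In that spirit, here’s a minimal sketch for stockpiling faces that belong to no one. It assumes Python with the requests library, and that the site still serves a freshly generated image at its root URL, which may change.

```python
# A sketch of collecting privacy-safe sample faces, assuming Python
# with requests installed and that the site still returns a freshly
# generated face image at its root URL (an assumption worth checking).
import requests

for i in range(5):
    resp = requests.get("https://thispersondoesnotexist.com",
                        headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    with open(f"fake_face_{i}.jpg", "wb") as f:
        f.write(resp.content)
```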

PREP & TEST YOUR PATTERNS:

Once you know which images are consistently read by your system, you can test the system’s tolerance by toying with them in image editing software. This could mean shifting the picture’s orientation, changing its size, adding or subtracting image components, changing colours, blowing out the contrast, or desaturating the image. Now, hold up your phone with your tester app pointed at the modified pictures and see what happens.
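
If you’d rather batch-produce those variants than click through menus, here’s a minimal sketch assuming Python with the Pillow imaging library; the filenames are placeholders for an image your app already recognizes.

```python
# A sketch of generating edited variants for tolerance testing,
# assuming Python with Pillow (pip install Pillow). Swap in an
# image that your tester app reliably recognizes.
from PIL import Image, ImageEnhance, ImageOps

original = Image.open("recognized.jpg").convert("RGB")

variants = {
    "rotated": original.rotate(15, expand=True),
    "half_size": original.resize((original.width // 2,
                                  original.height // 2)),
    "mirrored": ImageOps.mirror(original),
    "high_contrast": ImageEnhance.Contrast(original).enhance(2.5),
    "desaturated": ImageEnhance.Color(original).enhance(0.1),
}

for name, img in variants.items():
    img.save(f"variant_{name}.jpg")  # then point your tester app at each
```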

Here’s an example. I wanted to create an attractive pattern that still triggered the ALPR systems, so I aimed to identify which elements I could get away with removing.

I made a bunch of different license plate edits. Some had their spacing changed, others had components removed, and I even tried some different fonts. Running each one through OpenALPR showed which edits still read as plates.
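
If you want to script that edit-and-test loop, here’s a rough sketch. It assumes the open source OpenALPR command-line tool is installed (the “alpr” binary on your PATH) and a hypothetical folder of edited plate images.

```python
# A sketch of scripting plate tests, assuming the open source
# OpenALPR CLI is installed. "plate_edits/" is a hypothetical
# folder of edited plate images; -c picks the country, -j asks
# for JSON output.
import glob
import json
import subprocess

for path in sorted(glob.glob("plate_edits/*.jpg")):
    out = subprocess.run(["alpr", "-c", "us", "-j", path],
                         capture_output=True, text=True)
    hits = json.loads(out.stdout).get("results", [])
    if hits:
        best = hits[0]
        print(f"{path}: read as {best['plate']} "
              f"({best['confidence']:.0f}% confidence)")
    else:
        print(f"{path}: not recognized as a plate")
```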

This edit-and-test process works for any kind of image, including faces. Make note of “cue” attributes that disproportionately affect the AI’s ability to understand an image, and roll with them.

No one knows for sure what computers are looking at when they decide two pictures show the same thing. As a result, mistakes caused by overvaluing the wrong features are more common than we realize:

Examples of image classification errors from Wolfram Research’s Image Identification Project

 

FROM PICTURES TO PATTERNS

Tips to adapt adversarial images into patterns:

// Borrow styles and colours from your favourite brand.

// Consider all-over print and wrap-around patterns, so they work at multiple angles.

// Work in high-DPI print files, as your prints may need to be large to work from a distance (there’s a quick sizing calculation after this list).

// It’s OK to make aesthetic trade-offs if they will invite more people to actually wear the result. For example, if simplifying your design gets you better colours, that’s better than nobody wearing it at all.
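
On the DPI point, the arithmetic is worth spelling out, since an underscaled file turns crisp adversarial detail into mush. A tiny sketch, where the 300 and 150 DPI targets are common print conventions rather than hard rules:

```python
# Quick print-size arithmetic: pixels needed = inches x DPI.
# 300 DPI is a common print target; 150 DPI is a looser target
# often used for large banners viewed from a distance.
def pixels_needed(print_inches: float, dpi: int = 300) -> int:
    return round(print_inches * dpi)

print(pixels_needed(24))       # 7200 px across for a 24" print at 300 DPI
print(pixels_needed(36, 150))  # 5400 px across for a 36" banner at 150 DPI
```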

 

MANUFACTURE YOUR OWN MATERIALS

Now comes the real test: will these patterns work in the real world once they’re made into clothing? I’ve found it easy to prototype anti-surveillance garments quickly using “drop ship” manufacturers, where items are made to order and shipped directly from the manufacturer.

These setups are lower risk: items are printed on demand, so there’s no minimum order. Most importantly, this brings down both the cost of your investment and the retail price of your goods, and more people will take a chance on experimental fashion if they can afford it. That said, do your research on different print-on-demand companies to find out about their labour practices and politics, and make sure they line up with yours. Some of these companies, domestic and offshore, rely on exploitative labour or other shady practices, but others are more transparent about how things get made.

Of course, there’s always DIY! You can definitely go analog with printing, and many designs are easily translated to handmade techniques. Silkscreen, stencils, knitting, beading, and painted murals can all produce patterns that these recognition systems will read. You can also incorporate adversarial patterning tactics into designs for events, murals, billboards, and more.

I hope these tips show you that your artistic eye is keen enough to outsmart and disrupt surveillance systems, and that doing so can serve as a powerful form of protest. Our creative non-compliance is one of the greatest assets we have in pushing back on the surveillance state. And if we’re lucky, we might all look just a little more stylish while doing it.

 

Kate Bertash works at the intersection of tech, privacy, art and organizing. This is the first of four mini-guides for artists and DIYers about developing and executing homegrown counter-surveillance and digital security practices. She is @KateRoseBee on Twitter, or check out her website at katebertash.com.