Author: Aaron Kling

Problem Space

The cellular phone has revolutionized communication for individuals worldwide, and the smartphone has pushed computing power into a shape that fits in a person’s pocket. The internet has linked everyone with a connection, and the internet of things has extended that connection beyond people to the devices around them. Oversight, recording, and history have reached a point where anyone can have their story featured in the record. Yet, with these benefits, we cannot overlook the potential consequences of a world where connection is guaranteed. With a camera-equipped phone in the hand of 94% of Americans, extending to 44% of the global population, a perhaps involuntary pipeline has been established between state and citizen (Bank My Cell, 2020; Pew Research Center, 2019). No matter the society, no matter the type of governance, information is now easily collected and centralized by those with the resources, technology, and desire to do so. For some, this can be seen as a step toward safety and transparency; for others, it could spell an era of discrimination, incrimination, and persecution.

Year by year, total surveillance ceases to be a work of science fiction. Governments around the globe have already used improved communications technology to monitor their populaces, and the same can be said of the United States.

Democracies are struggling to balance security interests with civil liberties protections. In the United States, increasing numbers of cities have adopted advanced surveillance systems. A 2016 investigation by Axios’s Kim Hart revealed, for example, that the Baltimore police had secretly deployed aerial drones to carry out daily surveillance over the city’s residents (Feldstein, 2019).

From its inception in the 1960s, facial recognition has progressed from a nascent field to one that many organizations eye with increasing interest. “According to NIST’s Patrick Grother […] the rapid advance of machine-learning tools has effectively revolutionized the industry […] about 25 developers have algorithms that outperform the most accurate one we reported in 2014” (Boutin, 2018). Governments with the power to identify one face out of many can certainly use such a program to curtail criminal activity, but consider also the system’s power to discriminate and to isolate individuals from larger crowds, such as protestors or dissidents. In Beijing, advancements in smart-city and oversight technology have enabled greater governmental oversight during protests. For example, “if more than three protests occur in one town within a certain period, the new system could alert administrators, who could then send more police to that area or make other policy adjustments to maintain stability” (Privacy International, 2017). In Uganda, the spread of closed-circuit television (CCTV) cameras has led opposition parties to fear for their privacy and right to demonstrate, leading members to state: “The CCTV project is just a tool to track us, hunt us and persecute us” (Biryabarema, 2019).

Historically, those targeted by surveillance technology are often minorities within a population. Even when managed by algorithms or artificial intelligence, the larger systems at work within security networks prefer to target the “usual suspects” already prone to suspicion and scrutiny. In Great Britain, “camera operators have been found to focus disproportionately on people of color […] Black people were between one-and-a-half and two-and-a-half times more likely to be surveilled than one would expect from their presence in the population” (ACLU, 2002). Likewise in Britain, the success rate of facial recognition has at times been abysmal, with an 81% false positive rate (a circumstance where a test says something exists when it does not) when scanning crowds for potential threats (Manthorpe & Martin, 2019). The digital feedstock, the millions of images that feed the development of a neural-net program, is in turn created by fallible individuals with their own biases. Here in the US, reports from congressional hearings “found that commercial computer models struggled most when it came to recognizing women with darker skin. IBM’s system was incorrect for 34.7 per cent of the time when it came to identifying black women” (Quach, 2019). When a database is populated with predominantly black or white faces, this can skew results in testing. Though facial recognition is indeed more powerful than ever, “Black females … is the demographic that usually gives the highest FMR [False Match Rate]” (Simonite, 2019).

The body of evidence is staggering: technology’s advancement has not been proven to end our human biases and divisions. Surveillance technology remains, at its core, a tool that can be used by anyone, for good or ill. Yet, when given the choice, governments consistently gravitate towards oppressive policies. Even now, with the global outbreak of SARS-CoV-2, governments continue to utilize their technology to erode personal privacy and target populations. Experts urge that “any surveillance must be proportionate, limited to what is strictly necessary. The minimum amount of private data should be collected, […] surveillance powers should have a sunset clause, expiring after a specific period to ensure they don’t continue indefinitely” (Gallagher, 2020). The situation only stands to become more fraught as technology continues its march forward.

What is Misprint?

Misprint is a narrative about an individual whose prior political activity has flagged their account and person as potentially hazardous. Because of this renegotiated relationship with society, the character faces obstructions to both personal freedom and upward mobility. The objective of Misprint is very simple: get to work, though the narrative gradually ramps up how difficult this task is over time. With a nebulously defined ticking clock and the threat of joblessness floating over the player’s head, they must navigate twisting metro routes, random stop-and-searches, and a hostile surveillance state. By placing the player in the shoes of a disenfranchised population, the intent is to allow them a chance at empathy with individuals who may find themselves in similar positions. Through this empathy, the hope is that the player will think more carefully about government or private-sector attempts to erode privacy laws and reinforce digital control.

Misprint takes the form of an online, browser-based narrative created with the “Twine” program. Twine is a node-based storytelling platform with some very light coding/logic elements intended to make a story interactive. A reader looks over options and selects the one they prefer, choosing how they progress through the narrative and reducing the linearity often associated with the medium. Think of the “choose your own adventure” novels you might have read as a child. I selected Twine because it can store information about a user’s choices, which benefits my intended project.
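As a rough illustration of the state-tracking that makes this possible, here is a minimal sketch in Twee notation using Twine’s default Harlowe story format. The passage names and variables are my own placeholders for illustration, not taken from the actual game:

```
:: Start
(set: $flagged to true)
(set: $searches to 0)
You need to get to work. [[Take the metro->Metro]]

:: Metro
(if: $flagged)[
A scanner pings as you pass the turnstile.
(set: $searches to it + 1)
Guards wave you aside for a stop-and-search.
](else:)[
You slip through the crowd unnoticed.
]
[[Keep moving->Work]]
```

Because variables like `$flagged` and `$searches` persist between passages, later passages can branch on them, letting earlier choices quietly shape which options the player is even offered.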


Misprint owes a lot to an indie game called Papers, Please, in which you play a border security agent stamping passports amidst Eastern European-styled brutalism and rampant terror attacks. It’s about nationhood, constantly changing rules, the odd power dynamic between desperation and domination, and stamps. During research, the game Nanopesos stuck out, as it is also about surviving in a hostile situation. Nanopesos is more economically minded, however, while Misprint is focused on oppression. Still, both games are as much about what you can’t do as what you can. From the other side of the lens, we have the games Orwell and Floor 13, which really have to be mentioned in the same breath. As narrative experiences, these games explore what having power over others does to your sense of morality as a player, though Floor 13 is much more ruthless than Orwell, what with all the enhanced interrogation and slander operations. Finally, Hacknet is a game about exploring the world through a command line, and it, alongside Barcelona, inspired the metro station and the sense of “feeling your way around a series of tubes.”

All of these games have something in common: the power of an interactive experience to move you from where and who you are to another place and self. By guiding a person along a path and offering them the ability to choose, you influence how they view the world you’ve built. Begin taking away those choices, and the designer may influence how the player views choice itself. That is ultimately what Misprint is about: a person’s relationship to choice, and the idea that some populations simply lack the freedom to choose without living in fear of consequences. Cheery stuff, right?

Why Misprint?

I love games; I honestly can’t get enough of interactive mediums and how they tell stories. It can be difficult to visualize what some people go through, so Misprint is an attempt to get players to relate to difficult experiences through frustration, some humor, and a creeping sense of despair. Misprint isn’t about being a hero, or even particularly special. As an experience, it puts you in the shoes of someone very normal who has to struggle against an ever-changing rule-book. I chose Twine because I’ve tried programming games in the past, and it hasn’t particularly worked out for me! What I can do, at least a little, is write, so it was a no-brainer in that way. I was told once that the best way to use your voice is to play to your strengths, so hopefully Misprint speaks to that.


Growing technological interconnection is an opportunity for everyone, but that opportunity unfairly benefits governments and corporations over individuals. I made a Twine game called Misprint about just that, where you play as a person trying to get to the only job they are allowed, given the “mistakes” they’ve made in their past. It was inspired by a lot of small games with nonstandard interfaces and difficult outlooks on the world. Thank you for reading!

This project came out of the course COM 367 Multimedia Production & Digital Culture at North Carolina State University in fall 2020, taught by Dr. Noura Howell. More posts from our class:

Gender Gap in Pro Sports: Jonathan Hudson and Tommy Delaunay

Toxic Task Force (Content Moderation): Madison Neeley, Ashley Mullins and Alex Koonce

Candid Curly Collaborative: Marissa McHugh & Sandra Garcia

Sexism in Television: Madison Mallory, Chloe Campbell, Jenaye Gaudreau, & Greer Gorra

#NoWomanLagBehind — TJ & Lucas

Works Cited

What’s Wrong with Public Video Surveillance? (2002, March). Retrieved November 12, 2020, from ACLU.

Smart Cities: Utopian Vision, Dystopian Reality. (2017, October 31). Retrieved November 12, 2020, from Privacy International.

Mobile Fact Sheet. (2019, June 12). Retrieved November 12, 2020, from Pew Research Center.

How Many Smartphones Are in the World? (2020). Retrieved November 12, 2020, from Bank My Cell.

Biryabarema, E. (2019, August 15). Uganda’s cash-strapped cops spend $126 million on CCTV from Huawei. Retrieved November 12, 2020, from Reuters.

Boutin, C. (2018, November 30). NIST Evaluation Shows Advance in Face Recognition Software’s Capabilities. Retrieved November 12, 2020, from NIST.

Feldstein, S. (2019). The Global Expansion of AI Surveillance. Carnegie Endowment for International Peace.

Gallagher, R. (2020, June 2). Surveillance Technology Will Only Get More Intense After Covid. Retrieved November 12, 2020, from Bloomberg.

Manthorpe, R., & Martin, A. J. (2019, July 4). 81% of ‘suspects’ flagged by Met’s police facial recognition technology innocent, independent report says. Retrieved November 12, 2020, from Sky News.

Quach, K. (2019, May 22). We listened to more than 3 hours of US Congress testimony on facial recognition so you didn’t have to go through it. Retrieved November 25, 2019, from The Register.

Simonite, T. (2019, July 22). The Best Algorithms Struggle to Recognize Black Faces Equally. Retrieved November 12, 2020, from Wired.