The documentary Coded Bias, directed by Shalini Kantayya, premiered in January 2020 at the Sundance Film Festival. It features several researchers and activists working with different types of artificial intelligence algorithms and examines the impact those algorithms have on our lives. The core message of Coded Bias is that “machine-learning algorithms intended to avoid prejudice are only as unbiased as the humans and historical data programming them” (Kantayya), and this has a profound impact on our lives. Several researchers are featured in the movie, among them Joy Buolamwini, Deborah Raji, Meredith Broussard, Cathy O’Neil, Zeynep Tufekci, Safiya Noble, Timnit Gebru, Virginia Eubanks, and Silkie Carlo, but the narrative is anchored by Joy Buolamwini’s journey: from her excitement at working with AI at MIT’s Media Lab, to her discovery of the biases in such algorithms, to her emergence as an activist for equitable and accountable AI. By narrating Dr. Buolamwini’s story as a hero’s journey, the director is using Logos: the viewer sees the logical progression of Dr. Buolamwini’s career and is able to arrive at the same conclusions that she did. As is evident in the list of figures featured in the documentary, the director also armed the film with a great deal of Ethos. The main message is emphasized throughout the movie, to the point that some viewers might feel overwhelmed.

The documentary starts with the computer scientist Dr. Joy Buolamwini describing the moment she realized the facial recognition algorithms she was using could not detect her face because she is a Black woman. The visual is compelling: she covers her face with a white mask and the software recognizes her; she removes the mask and the computer cannot “see” her anymore. Four minutes later, a mathematician explains why biased algorithms are dangerous, their impact on our everyday lives, and how powerless individuals are to contest decisions made by AI systems used by corporations and government agencies.
Four more minutes and we are watching a watchdog group in the UK fighting the police’s use of facial recognition programs. They report that about 80% of the matches identified by the facial recognition system used by the police are incorrect. After five more minutes, a professor explains the basis of these algorithms: they became viable only because humanity has now accumulated enough data in digital format to train them, and the historical data used to predict future behavior tend to perpetuate historic injustices. This is the pace of the movie, one scene after another hammering home its message. Other notable scenes include a community activist comparing facial recognition systems to the branding of Jews during WWII or the microchipping of pets, and a very touching scene with an award-winning teacher in Texas who was misclassified and negatively impacted by an AI system used to evaluate teachers’ performance. One of the challenges of the documentary is to convey very sophisticated scientific concepts to laypeople without oversimplifying them, while remaining relevant to viewers already knowledgeable about the subject.
In the opening scene, the audience is introduced to Dr. Buolamwini, a computer scientist, describing how she discovered the bias in facial recognition programs. The director couldn’t have asked for a more perfect name for a hero! The scene starts with her entering the MIT Media Lab building and walking through its impressive halls and stairways until she reaches her office; the audience walks with her on the journey. While she is walking, she is shown at the bottom of the screen, very small, moving upwards. As she moves upwards, the camera starts to zoom in on her, and the audience sees her importance growing. It sets the audience up to see her as David about to face Goliath. Her office is a small space, full of books, gadgets, and whiteboards covered in equations and diagrams, which establishes her reputation as a scientist. Because her project involves an augmented reality mirror, the director juxtaposes the mirror with the scientist’s image, one on each side of the screen, showing a stark comparison: the “augmented” reality is a cruel distortion of reality. Here she is, a reputable scientist at a prestigious college, yet the mirror cannot see her unless she covers her face with a white mask. The scene ends with the white mask on the table in the center of the screen, drawing the audience’s attention to the disparity, and right beside the mask we can see the first word of the title of her paper: “Unmasking…”. It is at once powerful and poetic, and it is the hook that grabs the viewers’ curiosity about what will be revealed.
After the opening scenes, the viewer is presented with contrasting views of artificial intelligence: the entertainment industry’s portrayal of amazing robots with superhuman capabilities versus what is achievable today, which researchers call narrow AI (Coded Bias) and which is pure math. This sets up the viewer to meet mathematician Dr. Cathy O’Neil, author of the book “Weapons of Math Destruction”, in which she explains how algorithms reinforce existing inequalities, creating a vicious cycle of discrimination. Her book attracted the attention of the documentary’s main “hero”, Dr. Buolamwini, and inspired her to become an activist for AI accountability. Dr. O’Neil is introduced by her voice alone, speaking in the background while Dr. Buolamwini reads her book, as if she were giving a lecture. As the camera follows Dr. O’Neil into an interview, still explaining the thesis of her book, her image is shown in constant movement, going from one side of the screen to the other, interacting with people; then the camera zooms out to show her in a lecture hall packed with students. The main concepts she explains are written as subtitles superimposed on the image, as if the audience were indeed watching one of her lectures. Logos is at play here: the main points are in the superimposed text, and the viewer can follow the mathematician’s rationale. Her blue hair and colorful clothes against a black background bring a whimsical atmosphere, making her not a cold, distant math professor but an affable, motherly figure who wants to protect the audience from harm. The motherly image is reinforced later when she is shown with her children. She is on the viewers’ side, and she is one of them.
Pathos is at play when the viewer is taken to London, where a group of activists is protesting the police’s use of flawed facial recognition systems. The viewer gets the sense of watching a revolutionary cell preparing for war, even if only a rhetorical one, with the intent to create laws protecting citizens against the invasion of privacy such technologies enable. This segment might make viewers uneasy as it moves from the hopeful view of Dr. Joy Buolamwini to the dark predictions of Silkie Carlo, the director of the NGO Big Brother Watch. To juxtapose the Hollywood idea of AI against the reality of AI, the movie uses a lot of superimposed text with special animations and sound effects, mirroring the augmented-reality aesthetics of sci-fi movies like The Terminator and Minority Report. The superimposed text shows the decisions made by the algorithms in real time, and the audience starts getting the sense that “Big Brother is watching you”.
The documentary builds to a crescendo, showing how pervasive AI algorithms are in everyone’s lives today. It compares the use of smart surveillance in China with its use in Western countries. In the former, AI is used to control citizens and ensure they align with the government’s vision of the common good; in the latter, it is used for consumerism. One might disagree and say that the two are the same: invading the privacy of individuals in order to control them. Unfortunately, Coded Bias doesn’t explore this topic in more depth.
Another poignant scene involves the Houston school district using the Educational Value-Added Assessment System, or EVAAS, to fire underperforming teachers (“Education Visualization and Analytics Solution | SAS EVAAS for K-12”). It shows teacher Daniel Santos on the left side of the screen in his classroom, picking up his awards one by one until they fill the screen. As he recounts the excellent performance reviews he has received year after year, the evaluation produced by EVAAS is superimposed on the right side of the screen, showing that he is one of the underperforming teachers. When Mr. Santos says “this algorithm came back and classified me as a bad teacher”, his voice is full of emotion. If the viewers are not heartbroken at this point, maybe they have already been taken over by evil AI algorithms themselves. Pathos is used to great effect in this sequence.
The revolution goes on in the documentary, scene by scene compounding the arguments and evidence reinforcing its message. The audience is now prepared for the “battle scene”: the congressional hearing where the hero, Dr. Buolamwini, will present her findings. The audience learns that Congress decided to act only after the ACLU pointed out that Amazon’s facial recognition tool incorrectly matched 28 of its members with criminal mugshots (Snow). There is some irony in the parade of Congress members’ smiling faces from political ads being classified as criminals. Congress holds a public hearing to decide whether such a tool can be used by law enforcement. In this segment, several of the other figures from the film converge on Washington, DC. The film shows the line forming to enter the Congress building, and the viewer can’t help but compare this scene with the opening scene of the hero entering MIT’s building. The opening scene was the hero facing an Imperial Star Destroyer, and now the rebels are battling the Death Star! The director carefully highlights two prominent members of Congress during the hearing, Alexandria Ocasio-Cortez and Jim Jordan, from opposing parties, both agreeing that AI has flaws and should be regulated. It brings home the film’s point that this is an issue that affects all of us.
In the end, there is a sense of joy and mission accomplished, but also the recognition that this was just one major battle; the war is not over. Perhaps inspired by the fact that Dr. Buolamwini is also a poet, the director composed the documentary with extreme poetic sensibility. From the scene compositions to the music and visual effects to the analogies, metaphors, and references to other movies, everything seems to exude poetry. It is fitting that the movie ends with a poem by Dr. Buolamwini herself.
By framing Dr. Buolamwini’s story as a hero’s journey and using visual effects reminiscent of well-known sci-fi movies, the director makes a complex subject accessible, establishing a connection between the audience’s common knowledge and the new concepts introduced in the movie.
One might argue that the documentary doesn’t cover counterarguments, or that it has biases of its own. There are hints in the movie that the parties with opposing views were simply not interested in participating in the discussion; or perhaps the asymmetry of power between big tech corporations and individuals gives the director license to amplify the voice of the voiceless. Others might argue that the explanations of the algorithms are oversimplified. Still, the documentary does a good job of tackling complex subjects in an accessible and appealing way in less than two hours, without boring the experts who watch it. It is an important subject. Everyone should watch it, talk about it, beware, and “join the light side of the force”.
“Education Visualization and Analytics Solution | SAS EVAAS for K-12.” SAS, SAS Institute Inc., 2022, https://www.sas.com/en_us/software/evaas.html.
Kantayya, Shalini R, director. Coded Bias. PBS, 7th Empire Media, 2020, https://www.pbs.org/independentlens/documentaries/coded-bias/. Accessed 4 Nov. 2022.
Snow, Jacob. “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots | News and Commentary.” American Civil Liberties Union, 29 Aug. 2022, www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28.