This article originated as part of an academic exercise. It explores, in basic terms, how artificial intelligence, through algorithmic bias, is becoming a problem for people who fall outside the norm: many women, minority racial groups, creative thinkers and independent investigative journalists. – Eve Lorgen 4/24/2018
Introduction
If knowledge is power, then those who control the flow of information, and hence knowledge, can hold power over others. With the rapid advance of digital technologies, we are in a paradigm-changing era of global communications. With these advances also emerge technological means for the control of information. Artificial intelligence, and specifically the use of computer algorithms, can automate information filtering, resulting in a new and more powerful form of censorship. Enthusiastic confidence in the data science that now dominates our lives may be premature. Although there are many valid and successful uses of algorithms, this essay will show how algorithms can be biased, racist and a source of unfair censorship. Examples ranging from simple facial recognition software to the more complex data filtering systems operating within social media will demonstrate that algorithms can be discriminatory and unfair. Moreover, these biases and forms of censorship operate largely without our full understanding and awareness. The complexity with which algorithmic bias and censorship occur makes it more challenging for journalism to maintain its role as the fourth estate. Countermeasures to this dilemma are offered in light of these observations.
Algorithmic Bias
A simple definition of an algorithm is a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer (Algorithm, n.d.). Algorithms are used in a multitude of applications, such as resume filtering, insurance and credit applications, personality tests, college exams, search engines like Google, and facial recognition software. According to Cathy O'Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, algorithms are often used when critical decisions are made in our lives, such as college exams, insurance applications, jobs or mortgage loans. People can be sorted as winners or losers despite the unfairness of such algorithmic decisions. For example, because black males historically died earlier than white males, insurance companies charged them more for life insurance until regulations changed the racist policy (O'Neil, 2018). In another instance, a job-search algorithm tuned to predict success in engineering would filter out women, because the parameters for success in that field were drawn from a historically male-dominated workplace (Rao, 2016). Algorithms are unfair if blindly applied, and they tend to maintain the status quo because they learn from historical data. In essence, the programming itself is biased. O'Neil explains how and why the use of algorithms is really about power: "… the data scientists in those companies are told to follow the data, to focus on accuracy. Think about what that means. Because we all have bias, it means they could be codifying sexism or any other kind of bigotry" (O'Neil, 2018). She advises us that "the era of blind faith in big data must end" (O'Neil, 2018). This blindness is ironically exemplified in Joy Buolamwini's TED talk, in which she recounts how, while she was working with social robots, the facial recognition software was unable to recognize her face because she is black. The original code did not take dark skin color into consideration. She explains, "If the training sets used to create the coding software are not diverse enough, the program will exclude you" (Buolamwini, 2016).
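To make this status-quo effect concrete, below is a minimal, purely hypothetical sketch in Python. The records, field names and screening rule are invented for illustration; no real hiring system works exactly this way. It only shows how a rule derived from biased historical data can reproduce past discrimination without any explicitly sexist instruction appearing in the code, as O'Neil and Rao describe.

```python
# Illustrative sketch only (not any real hiring system): a toy screening
# rule derived from biased historical data. All names, fields and records
# are invented for demonstration.

# Historical "successful engineer" records from a male-dominated era.
historical_hires = [
    {"gender": "male", "school": "State Tech"},
    {"gender": "male", "school": "State Tech"},
    {"gender": "male", "school": "City College"},
    {"gender": "female", "school": "State Tech"},
]

# "Learn" which trait value correlates most with past success, without
# any explicitly sexist rule being written anywhere.
genders = [r["gender"] for r in historical_hires]
common_traits = {"gender": max(set(genders), key=genders.count)}

def matches_history(candidate):
    """Score a candidate by similarity to the historical winners."""
    return all(candidate.get(k) == v for k, v in common_traits.items())

applicants = [
    {"name": "A", "gender": "female", "school": "State Tech"},
    {"name": "B", "gender": "male", "school": "City College"},
]

# The filter quietly reproduces the historical pattern: applicant A is
# screened out even though no rule ever names discrimination as a goal.
shortlist = [a["name"] for a in applicants if matches_history(a)]
print(shortlist)  # -> ['B']
```

The point of the sketch is that the bias lives in the training data rather than in any rule a programmer would recognize as discriminatory.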
A racist error occurred with Hewlett-Packard computer cameras in 2009 when "they could not track faces of black people in common lighting conditions" (Sandvig, Hamilton, Karahalios, & Langbort, 2016, p. 1). This created a backlash of public relations problems and a demand for more ethics in algorithms. Machine learning is part of facial recognition software, but it is apparently not fail-safe. It can increase the likelihood of racist errors; for example, "if this machine-learning algorithm is giving weight to race—without even having a defined category for race—then the algorithm has a strong potential for racist consequences without employing any explicitly racist rules…" (Sandvig et al., 2016). This can be tricky, because the ethics of a system is judged by how the algorithm learns from history or from its previous operators. To counteract these potential biases, Sandvig et al. suggest creating algorithmic testing modules that assess consequence, virtue and norms. They also call for a new kind of scholar who can bridge the gap between the sociological and the technical. Such a scholar is MIT graduate student Joy Buolamwini (2016). She created the Algorithmic Justice League so that biases can be reported and training sets designed to test for ethics. She advocates for more transparency about what is coded, how it is coded, who codes it and why. These questions become more significant when we consider the ongoing reports of "questionable" data filtering in the media.
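The following toy sketch (again hypothetical, with invented numbers standing in for image statistics rather than any real detection code) illustrates the training-set diversity point: if the data a detector is tuned on excludes a group, members of that group fall outside what the system treats as a face.

```python
# Toy illustration only (no real face-detection code): a "detector"
# tuned on a non-diverse training set. The numbers stand in for
# brightness statistics and are invented for demonstration.

# Training set: brightness/contrast features drawn only from
# light-skinned faces photographed under studio lighting.
training_faces = [(0.82, 0.40), (0.78, 0.35), (0.85, 0.42)]

def detector_range(samples):
    """Derive a crude 'face-like' brightness range from the training data."""
    brightness = [b for b, _ in samples]
    return min(brightness) - 0.1, max(brightness) + 0.1

low, high = detector_range(training_faces)

def detects_face(brightness):
    # A face is "recognized" only if it resembles the training distribution.
    return low <= brightness <= high

print(detects_face(0.80))  # light-skinned face in good light -> True
print(detects_face(0.35))  # darker-skinned face, same lighting -> False
```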
Censorship and Unfairness in the Media
Censorship can be more easily understood through the five filters of propaganda elaborated in Manufacturing Consent: A Propaganda Model by Edward S. Herman and Noam Chomsky (Herman & Chomsky, 1988). We can see the first filter—size, ownership and profit—operating in the media as a result of digital giants like Google, Facebook and Amazon. Since 2001, newspaper publishing industry profits have fallen by as much as seventy percent, while profits for Google, Amazon and Facebook have soared. According to New York Times contributor Jonathan Taplin, "Billions of dollars have been reallocated from creators of content to owners of monopoly platforms. All content creators [who are] dependent on advertising must negotiate with Google or Facebook as aggregator, the sole lifeline between themselves and the vast internet cloud" (Taplin, 2017). With the upsurge in fake news filtering agendas, digital monopolies like Facebook and Google are on the cutting edge of algorithm-based data filtering technologies, yet ethical issues persist with their "black box" opacity tactics (Boyarsky, 2017). The troubling issue with these search engine filtering algorithms is that they are rarely transparent about who or what is really being filtered. Many websites caught on the front lines of these filtering agendas discovered a drastic reduction in web traffic between April and July of 2017, when Google updated its search engine algorithms to filter out access to "low-quality, conspiracy theories and fake news" websites (Damon, 2017). In July of 2017, many "left-wing" and alternative websites saw sharp declines of up to 70 percent in their web traffic (Damon, 2017). Alternative media websites like Freeman TV, for example, were hit in this filtering campaign (personal communication with Freeman, December 29, 2017). The World Socialist Web Site was also singled out; according to Damon, this is deliberately done to "[assail] political news and opinion web sites that challenge official government and corporate narratives" (Damon, 2017). Such censorship appears to fall into the fourth filter category of "flak and enforcers" in Herman and Chomsky's propaganda model. In addition, there is an insidious quality to the filtering technology: many of those targeted for censorship are unaware of it until they notice a loss of internet visibility, often within a short period of time. David Icke, a conspiracy researcher in the United Kingdom, reports a form of social media censorship called "ghost banning" (Icke, 2018). He quotes former UK ambassador Craig Murray, a recent target of ghost banning: "I never heard of ghost banning until I was ghost banned from Twitter. [The idea is that] they censor you without you realizing you are censored. People are no longer notified of tweets when posting; it only turns up on the Twitter line of followers who are online at the time of tweeting" (Icke, 2018). The reason Murray believes he is being censored, says Icke, is that he now speaks out about corruption in high places. Icke explains, "The goal is to deny offending web site(s) traffic until it disappears from public view. It has the effect of the tagged web site to be present yet invisible in plain sight but the user is unaware" (Icke, 2018). Alternative news outlets are, according to Icke, prime targets of ghost banning on social media sites like Reddit, Facebook and Twitter.
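As an illustration of how such "present yet invisible" demotion could work in principle, here is a deliberately simplified, hypothetical sketch. It is not Google's or Twitter's actual algorithm; the domains, scores and penalty value are invented for demonstration.

```python
# Hypothetical sketch of ranking demotion, not any real platform's
# algorithm: flagged sites are not deleted, only pushed so far down the
# results that in practice almost nobody sees them.

results = [
    {"url": "mainstream-news.example", "relevance": 0.91},
    {"url": "alt-news.example",        "relevance": 0.95},
    {"url": "official-source.example", "relevance": 0.88},
]

# An opaque, internally maintained list of "low-quality" domains.
flagged = {"alt-news.example"}

DEMOTION = 0.5  # silent penalty applied to flagged domains

def ranked(results):
    def score(r):
        penalty = DEMOTION if r["url"] in flagged else 0.0
        return r["relevance"] - penalty
    return sorted(results, key=score, reverse=True)

for r in ranked(results):
    print(r["url"])
# alt-news.example still "exists" in the index, but it now ranks last,
# and its publisher receives no notification that anything changed.
```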
In November of 2016 an organization called PropOrNot carried out a news filtering sweep intended to identify pro-Russian propaganda (Boyarsky, 2017). Over 200 media outlets, spanning from left to right on the political scale, were blacklisted as "pro-Kremlin," and Truthdig, an independent news site, was unfairly included among them. Boyarsky maintains that "the threat of censorship is especially worrisome now that the search for fake news is becoming automated." He also argues that "determining the difference between muckraking (another term for media's watchdog role) and fake news is too difficult for even the smartest of artificial intelligence" (Boyarsky, 2017). One has to wonder whether the PropOrNot "study" was, in part, an example of the fifth filter of propaganda, the anti-communism ideology, at work.
Countermeasures to Algorithmic Bias and Censorship
With censorship as well as algorithmic bias, the complaint is often a lack of transparency about the coding itself. The complexity of artificial intelligence may be a legitimate reason for some of the opacity surrounding algorithms, but a call for ethical oversight is paramount regardless of the complexity of the programming. Suggested countermeasures are listed below:
- Transparency about who codes, what is coded and why, and about algorithmic testing (Buolamwini, 2016)
- Develop practical algorithmic training modules that test for ethics of virtue, consequence and norms (Sandvig et al., 2016)
- Professional scholars who can bridge the gap between the sociological and the technological, providing ethical oversight (Sandvig et al., 2016)
- Convert monopolies like Google into utilities and reduce the size of other digital monopolies (Taplin, 2017)
- Open discourse and public demand for transparency by digital giants like Google, Amazon, Facebook, Twitter
- Subsidize the publishing and journalism industries so their watchdog role can be maintained
- Blockchain-based, cryptocurrency-funded journalism—to allow funding outside of central financial authorities (Woolf, 2018)
- Filter and assess data through a closer study of networks, their distributed character, cultural and social practices of meaning-making, and media systems, as opposed to content in isolation (Bounegru, Gray, Venturini, & Mauri, 2017, p. 202)
Blockchain cryptocurrency technology (such as Bitcoin) has the ability to "distribute data across a network of participating nodes using cryptographic proof and removes the necessity for information to be controlled by a central authority…" (Woolf, 2018). This could free journalism from its usual dependence on advertising, paywalls or the digital giants, and instead enable "an ecosystem of micropayments, in which everyone pays fractions of a penny for every article they read…" (Woolf, 2018). With the countermeasures listed above, there is a need for more knowledgeable people to be involved in ethical oversight and advocacy for algorithmic transparency. Even so, the enduring question remains of tyrannical control of the flow of information by monopolistic corporate and government entities.
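A greatly simplified sketch of the micropayment idea is shown below. It is not Bitcoin or any real protocol, and the readers, articles and amounts are invented; it only illustrates how a tamper-evident chain of tiny per-article payments can be verified by anyone holding a copy, without a central payment authority.

```python
# Greatly simplified sketch of blockchain micropayments for journalism
# (Woolf, 2018). NOT Bitcoin or any real protocol: it only shows a
# tamper-evident chain of tiny per-article payment records.
import hashlib
import json

def block_hash(block):
    """Hash a payment record together with the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Readers pay fractions of a cent per article; every node holding a copy
# of the chain can check the history without a central payment authority.
payments = [
    {"reader": "alice", "article": "story-1", "amount": 0.003},
    {"reader": "bob",   "article": "story-1", "amount": 0.003},
    {"reader": "alice", "article": "story-2", "amount": 0.002},
]

chain = []
prev_hash = "0" * 64  # genesis value
for p in payments:
    block = {"payment": p, "prev": prev_hash}
    prev_hash = block_hash(block)
    chain.append({"block": block, "hash": prev_hash})

def verify(chain):
    """Check that every block still links to the hash of the one before it."""
    prev = "0" * 64
    for entry in chain:
        if entry["block"]["prev"] != prev or block_hash(entry["block"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

total = sum(entry["block"]["payment"]["amount"] for entry in chain)
print(f"{total:.3f} in micropayments across {len(chain)} blocks")
print(verify(chain))  # -> True; altering any earlier payment makes this False
```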
Conclusion
Artificial intelligence is a pillar of our computer age, and yet it still operates with considerable opacity, to the vexation of those caught in the casualties of algorithmic bias. Uncorrected coding bias, in concert with machine learning, can cause discrimination, as demonstrated in the cases of facial recognition software errors. Other biases occur with job search results, insurance policies and credit applications. Part of the problem is that algorithms learn from historical data that can itself be biased, resulting in a maintenance of the status quo. This translates into unfairness and inequality unless promising technological scholars with the necessary social and emotional intelligence can scrutinize what goes into the code, who writes it and why. Digital giants like Google and Facebook are of great concern for media outlets because of their black box opacity tactics in fake news filtering (Boyarsky, 2017). The rise in the financial profits of Amazon, Facebook and Google was mirrored by great losses for the newspaper publishing industry from 2001 onward. These monopoly platforms impose a kind of propaganda filter—the first filter of size, ownership and profit—on the smaller publishing and media outlets, according to Herman and Chomsky's propaganda model (Herman & Chomsky, 1988). The PropOrNot news filtering sweep for pro-Russian propaganda led to the blacklisting of over 200 websites spanning from left to right on the political scale (Boyarsky, 2017). Although some of the targeted sites may have carried genuine propaganda, many did not. In this instance, one could question whether the fifth filter, "anti-communism," is being waged upon a vulnerable public. Many left-wing, alternative and independent websites were hit especially hard with drastically reduced web traffic when Google rolled out its new search engine filtering algorithms between April and July of 2017. This form of censorship is believed to be an attack upon those with political views and opinions that challenge corporate and official government narratives (Damon, 2017). This kind of censorship fits the fourth filter, "flak and enforcers," in Herman and Chomsky's propaganda model. People are essentially being punished with reduced web traffic in ways of which they are largely unaware until after the fact. An incident of Twitter ghost banning is revealed in David Icke's presentation of former UK ambassador Craig Murray's experience of being censored (Icke, 2018). Icke states that Murray believes he was ghost banned because he publicly spoke out against corruption. This, too, is a "fourth filter" example of propaganda-style censorship directed at a former high-ranking government official.
To avert unfair censorship and algorithmic bias, countermeasures need to be implemented. Demanding transparency and ethical testing of algorithms is one suggestion. Reducing the size of digital monopolies, or converting one such as Google into a utility, has been recommended (Taplin, 2017). Implementing artificial intelligence as an ally rather than a roadblock is advocated by some who propose funding journalism with cryptocurrencies (Woolf, 2018). Open discourse must happen in such a way that journalism's watchdog role is recovered. We must hold the power of knowledge in our own hands before we lose our awareness and our ability to speak out.
References:
1. Algorithm. (n.d.). Google definition. Retrieved from
2. Boyarsky, B. (2017, February 8). Will Facebook's System to Detect Fake News Lead to Censorship? [Blog post]. Retrieved from https://www.truthdig.com/articles/will-facebooks-system-to-detect-fake-news-lead-to-censorship/
3. Buolamwini, J. (2016, November). How I'm fighting bias in algorithms [TEDxBeaconStreet]. Retrieved from https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
4. Damon, A. (2017, July 31). Google's chief search engineer legitimizes new censorship algorithm [Blog post]. Retrieved from https://www.wsws.org/en/articles/2017/07/31/goog-j31.html
5. Herman, E. S., & Chomsky, N. (1988). Manufacturing Consent: A Propaganda Model. Pantheon Books. Retrieved from http://www.thirdworldtraveler.com/Herman/Manufac_Consent_Prop_Model.html
6. Icke, D. (2018, March 30). The Fake News Hoax [Video file]. Retrieved from https://www.youtube.com/watch?v=2T8IIfE0wEk&t=5s
7. Bounegru, L., Gray, J., Venturini, T., & Mauri, M. (2017). A Field Guide to "Fake News" and Other Information Disorders. Retrieved from
8. O'Neil, C. (2018, January 26). Do algorithms perpetuate human bias? [Video file]. Retrieved from https://www.npr.org/2018/01/26/580617998/cathy-oneil-do-algorithms-perpetuate-human-bias
9. Rao, P. (2016, September). Math Is Biased Against Women and the Poor, According to a Former Math Professor. The Cut. Retrieved from https://www.thecut.com/2016/09/cathy-oneils-weapons-of-math-destruction-math-is-biased.html
10. Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2016). Automation, Algorithms, and Politics | When the Algorithm Itself is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication, 10, 19. Retrieved from http://ijoc.org/index.php/ijoc/article/view/6182
11. Taplin, J. (2017, April 22). Is it time to break up Google? The New York Times. Retrieved from https://www.nytimes.com/2017/04/22/opinion/sunday/is-it-time-to-break-up-google.html?_r=0
12. Woolf, N. (2018, February 13). What Could Blockchain Do for Journalism [Blog post]. Retrieved from https://medium.com/s/welcome-to-blockchain/what-could-blockchain-do-for-journalism-dfd054beb197