When people talk about artificial intelligence and machine learning, the conversation is often couched in optimistic terms like “predictive analytics” and “optimization.” But these rosy terms conveniently gloss over the very real ethical issues that can arise with machine learning models, particularly when human biases are unintentionally baked into systems used for mass surveillance, for identifying crime suspects, for deciding who gets a loan or a job, or for determining who gets released from prison.

But as a recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco suggests, it may be useful to borrow the language of anthropology — particularly the strands that examine bureaucracies, states, and power — to describe the broader structural impacts that machine learning algorithms, and AI in general, have on society.

The paper, titled “To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes,” argues that using powerful machine learning tools without critical assessment will likely cause further harm to already marginalized groups of people.

AI’s ‘Abridged Maps’

In particular, the paper’s author, social computing researcher and CADE fellow Ali Alkhatib, draws upon the work of American political scientist and anthropologist James Scott, whose book Seeing Like a State examines the disconnect between bureaucracies’ superficial understanding of the world and the lived experiences of real people. The inaccurate, “abridged maps” created by the “bureaucratic imagination” are what lead to an “administrative ordering of nature and society” — much as AI’s limited understanding of the world leads these systems to make faulty decisions that can be irrevocably life-altering. While such machine learning models may offer convenience in the short term, they can cause harm later on, when they generate and impose an uneven worldview based on these “abridgements.” Worst of all, Alkhatib points out, the absurd decisions based on these incomplete maps often fall hardest on those who can least challenge them.


“Machine learning systems construct computational models of the world and then impose them on us,” explained Alkhatib in a pre-recorded video for the 2021 ACM Conference on Human Factors in Computing Systems (CHI). “Not just navigating the world with a shoddy map, but actively transforming it. These systems become more actively dangerous when they go from ‘making sense of the world’ to ‘making the world make sense,’ when we take all of this data and tell a machine learning system to produce a model that rationalizes all of that data.”

The paper highlights several instances of machine learning gone horribly wrong, such as research showing that facial recognition technology used by law enforcement consistently misidentifies people of color, because the data used to train these systems is often heavily skewed toward white men as the “default.” Anyone who doesn’t fit that standard gets flagged by the model as an anomaly, based on data that never sufficiently represented them in the first place. In addition, such AI systems often fail to take into account context like the historical oppression and “criminalization of Blackness,” or the expansion of the “carceral state” in the 1970s.

“The rules machine learning systems infer from the data have no underlying meaning or reason behind them,” said Alkhatib. “They’re just patterns, without any insight into why Black people are in prison at much higher rates than white people, for instance. There’s no dataset in the world that adequately conveys white supremacy, or slavery, or colonialism. So at best, these systems generate a facsimile of a world with the shadows of history cast on the ground — skewed, flattened and always lacking depth that only living these experiences can bring. These rules are devoid of meaning, and they punish or reward us for fitting into the model they generate — in other words, the world they construct.”


Alkhatib cautions that our growing dependence on algorithms — especially the massive, flawed AI models created by tech giants — may ultimately result in the computational restructuring of society, potentially weakening civil society and paving the way for an authoritarian kind of algocratic governance that could replace today’s bureaucratic regime.

“We shouldn’t allow these systems to have the kind of power that allows them to derail a person’s life in the first place,” added Alkhatib. “We should work to disempower algorithmic systems wherever we can, and do whatever we can to help people escape these systems when they feel mistreated by them.”

To tackle this growing issue, Alkhatib advocates building human review mechanisms into these algorithmic systems so that they don’t end up exacerbating pre-existing social inequalities. He also calls for abolishing such technology wherever possible, as well as refusing to participate in its development.

“We have to constantly be paying attention to the power dynamics between the systems we deploy and the people that system acts upon,” he concluded. “If we’re not careful, it’ll try to force everyone to live in the algorithmic imagination it constructed — to live in their utopia.”

Read more in the paper.
