
Do Algorithms Have Politics?

06 February 2021, Redazione SoloTablet
Even in the hands of the most virtuous product designers adhering to the most robust set of ethical principles, there are inherent features of algorithmic tools that invite certain ways of ordering our world.
Courtesy of Philip Walsh. An article published on his blog Thinking Tech.

"There is no morality attached to technology" — this is the view we will address and criticize below. It remains such a natural, intuitive view. I think part of the problem is an overly narrow understanding of what it is to "have a morality" — as if a piece of technology could hold moral views and act on them.

Of course technology does not "have a morality" in this sense. But does this mean technology is a neutral tool? No. Perhaps a better way to think of it is this: technology is always "value-laden." Technologies invite certain patterns of action, certain habits, certain policies. They open up certain possibilities while foreclosing others. But now I'm getting ahead of myself. Let's get on with the discussion of Langdon Winner's "Do Artifacts Have Politics?" so that we may dwell on its continued relevance.

Ways Of Ordering The World

In 1980, political theorist Langdon Winner published what has become a seminal paper in the philosophy of technology: “Do Artifacts Have Politics?”

Technologies, Winner tells us, are “ways of building order in our world.” Whether arrived at deliberately or inadvertently, the design features of our technologies are bound up with choices that shape the patterns of individual and communal life.

Winner subtly threads the needle between two overly simplistic views of technology: techno-determinism and technology as neutral tool.

Earlier in the 20th century, reeling from the effects of industrialization and world wars, many philosophers and artists came to think of technology as evolving according to its own internal dynamic or “essence,” destining us along an inexorable path. The great (and notorious) German philosopher Martin Heidegger, for example, responded to “The Question Concerning Technology” rather pessimistically, arguing that technology has become an all-encompassing historical force whereby humans come to view everything (including one another) as mere resources to be optimized.

The corrective to this view is the idea that what matters is not the technology itself, but the social, political, and economic systems in which it is embedded. This is probably the common sense view nowadays, and can be described as the “instrumental” or “anthropocentric” view of technology. On this view there’s nothing inherently good or bad about technology. Technology is a neutral tool that can be put to good or bad uses, by good or bad human actors, with good or bad intentions. “Guns don’t kill people….” the saying goes. The only overarching essence of technology is that it is a means to an end (hence the “instrumental”). Human beings are free to choose their values, and thus establish the ends that technology is put to (hence the “anthropocentric”). There is something obviously correct and deeply attractive about this view. At the center of it lies a noble esteem for human freedom and responsibility. We are the authors of our own lives, not the pen (or word processor) with which we write.

While techno-determinism holds an impoverished view of human agency, the instrumental view radically overestimates it. The techno-determinist thinks we are doomed and ends up resigned or cynical. The instrumentalist shrugs, “If a certain technology is so bad, just don’t use it!” Implicit in this dismissal is an overly heroic conception of agency. One must always be master of one’s technological engagements. Think smartphones are like addictive slot machines in our pockets? Just exercise some discipline! Don’t want companies sharing your data? Should have read all of those user agreements!

Winner refuses this simplistic binary. No, technology does not set us along some inexorable destiny. And yes, humans have genuine agency and exercise control over their technical creations. But technology, insofar as it becomes embedded in larger socio-economic-political systems, certainly acquires inertia. Technology might be a means to certain ends, but that doesn’t mean it’s “merely” a means. The very fact of the availability of certain means alters the landscape of human life, inviting certain patterns of behavior while foreclosing others. Guns might not kill people, but they sure do make point-and-click-death-at-a-distance more readily available.

Technologies might not make certain behaviors logically necessary, but they can structure our choice environments such that they strongly invite certain behaviors, making them all but practically necessary.

The key to a responsible ethics of technology, according to Winner, is close attention to the “seemingly innocuous design features” in various technological systems that “actually mask social choices of profound significance.”

These design decisions typically do not take the form of deliberate machinations for the sake of clearly defined political or social goals, although they certainly can and have taken that form in some notable instances. To illustrate, Winner discusses the seminal urban planner Robert Moses, “master builder” of roads, parks, and bridges for New York from the 1920s to the 1970s. According to Robert Caro’s biography, Moses intentionally designed the overpasses along the parkways of Long Island to be only nine feet high, specifically to keep the twelve-foot-tall buses – and the lower-income riders and racial minorities they carried – from accessing Jones Beach. Here we have a clear-cut case of the design and implementation of a technology (highway design, bridges, etc.) as a way of exercising power. (Moses’ legacy is a complex and contested affair, but for the sake of argument we can entertain the example to illustrate an idea.)

More interesting, however, and less easily dismissed, is the sense in which technology has inherently political features. Again, not in the sense of “destining” us, but in the sense that it strongly invites or is highly compatible with certain political structures. Winner’s example here is nuclear power. Safely and effectively administering a nuclear power plant requires “a techno-scientific-industrial-military elite.” In other words, nuclear power invites authoritarian governance. Now, this does not mean that only authoritarian societies can effectively adopt nuclear power. Clearly that is not true. What we are talking about here is how the nuclear power plant itself is run. A nuclear power plant requires an authoritative and centralized administrative structure. The knowledge required is too highly specialized, the risk involved too high, for open democratic debate about daily operations. It just does not seem practically possible that a nuclear power facility could be some sort of egalitarian worker co-op, as nice as that might sound.

An important question follows from this: if a certain kind of authoritarian structure is a necessary internal feature of technologies like nuclear power (or, say, a high speed rail network), what is the relationship of such internal features to the external political structures outside the power plant? In other words, to what extent will the requisite authoritarian power structure of a safe and effective nuclear plant bleed over into the political structures of the society in which it is embedded? What is the overall relationship between the internal functioning of the technical system and the larger social, political, and economic structures in which it is embedded?

An Algorithm Cannot Be Neutral

Winner’s analyses prove remarkably prescient when considering the ethics of algorithms and big data, but with a twist: rather than worrying about internal features of the technology bleeding over into society at large, it seems that most of the discussion has focused on pre-existing social injustices “infecting” our supposedly neutral algorithmic tools. And while structural injustice and the problem of dirty data are certainly pressing concerns, we must also avoid the naiveté of the instrumental view, i.e. the view that our algorithms are mere neutral tools and the only problems that could possibly arise come from without.  Even in the hands of the most virtuous product designers adhering to the most robust set of ethical principles, there are inherent features of algorithmic tools that invite certain ways of ordering our world. More detailed analyses of specific algorithmic design features will have to wait for future posts, but let me conclude with two brief points on this matter.

What is the “product” that algorithmic tools produce?

We use algorithms (and the big data sets they feed upon) to “derive insight” and produce “actionable intelligence.” So in a word, the product of this technology is knowledge. Algorithms are ways of knowing. But we run into trouble when we think this peculiar “way of knowing” is just passively registering what is already out there, having no effect on the domain it targets. Take the much-discussed predictive policing software PredPol. As discussed in an episode of Barry Lam’s excellent Hi-Phi Nation, even with privacy protections in place and an explicit commitment to eschew racial categorization (although race can still end up being tracked by proxy), this technology creates a data-hungry beast. By adopting the technology, we implicitly commit ourselves to more and better data capture. This incentivizes more surveillance of the population (which usually means the population of specific neighborhoods).
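To make that feedback dynamic concrete, here is a minimal sketch in Python, assuming a deliberately crude model of my own devising rather than anything PredPol actually does: a single patrol is dispatched each day to whichever neighborhood has the most recorded incidents, and an incident only enters the dataset if a patrol is present to record it.

```python
# Toy model of a predictive-policing feedback loop (illustrative only, not
# PredPol's actual algorithm). Both neighborhoods have the same true incident
# rate, but neighborhood A starts out over-represented in the records.
import random

random.seed(0)

true_rate = {"A": 0.5, "B": 0.5}   # identical chance of an incident each day
recorded = {"A": 3, "B": 1}        # historical records are biased toward A

for day in range(365):
    # "Predict" the hotspot from the existing data and send the patrol there.
    hotspot = max(recorded, key=recorded.get)

    # Incidents can occur in either neighborhood, but they only enter the
    # dataset when the patrol is present to observe and record them.
    for hood, rate in true_rate.items():
        if hood == hotspot and random.random() < rate:
            recorded[hood] += 1

print(recorded)  # A's count keeps growing; B stays frozen at its initial count
```

Both neighborhoods behave identically, yet the year-end data “confirm” that A is the hotspot, because the system only recorded incidents where its own predictions sent it to look. That is the sense in which the tool does not passively register what is already out there.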

As we gather more data on a population, data-hungry technologies further incentivize increased integration of that data. This fact has led philosopher Evan Selinger and legal scholar Woodrow Hartzog to call for an outright ban on facial recognition technology.

Effective regulation of this technology, they argue, is a naïve dream of the instrumental view. Is it coherent to think of facial recognition tech as a neutral tool, equally useful for stopping the bad guys as it is for totalitarian control? Of course it is, but you can make the same argument for landmines. The point is that a technology like this makes certain practices so easy to implement, so commodiously available, that we would be fools to count on the better angels of our nature to effectively regulate and constrain their use. Databases of faces and a physical infrastructure of cameras are already in place. Facial recognition software merely needs to be plugged into this existing infrastructure, creating a tool so powerful that we are simply better off without such a “mere means” being available.

These considerations do not warrant a call for a ban on all algorithmic technologies. That would be a gross overreaction, and obviously not even feasible at this point. What they do call for, at minimum, is a definitive rejection of the instrumental view, which has become entrenched as common sense at this point. As algorithmic systems become increasingly embedded in our lives, it is imperative that we come to grips with the fact that we are not “optimizing” an existing way of life; we are forging an entirely new one.

 
