Tech Philosophers Explain The Bigger Issues With Digital Platforms, And Some Ways Forward
POSTED BY S. ABBAS RAZA
by Filippo Santoni de Sio, along with Marianna Capasso, Rockwell F. Clancy, Matthew Dennis, Juan Manuel Durán, Georgy Ishmaev, Olya Kudina, Jonne Maas, Lavinia Marin, Giorgia Pozzi, Martin Sand, Jeroen van den Hoven, Herman Veluwenkamp
Abstract: This article, written by the Digital Philosophy Group of TU Delft, is inspired by the Netflix documentary The Social Dilemma. It is not a review of the show, but rather uses it as a lead into a wide-ranging philosophical piece on the ethics of digital technologies. The underlying idea is that the documentary fails to give an impression of how deep the ethical and social problems of our digital societies in the 21st century actually are, and that it does not do sufficient justice to the existing approaches to rethinking digital technologies. The article is written, we hope, in an accessible and captivating style. In the first part (“the problems”), we explain some major issues with digital technologies: why massive data collection is a problem not only for privacy but also for democracy (“nothing to hide, a lot to lose”); what kind of knowledge AI produces (“what does the Big Brother really know”) and whether it is okay to use this knowledge in sensitive social domains (“the risks of artificial judgement”); why we cannot cultivate digital well-being individually (“with a little help from my friends”); and how digital tech may make persons less responsible and create a “digital Nuremberg”. In the second part (“the way forward”) we outline some of the existing philosophical approaches to rethinking digital technologies: design for values, comprehensive engineering, meaningful human control, new engineering education, and a global digital culture.
The Social Dilemma, the bigger picture
As citizens of the world in the 21st century, we should all be grateful to the digital critics who speak up in Netflix’s The Social Dilemma. Tristan Harris, Shoshana Zuboff and the others have once again put the spotlight on the incredible power of the US social network companies. However, the one-and-a-half-hour documentary and the experts it features tell only a small piece of a bigger story. They focus on the power of Google/Facebook/Twitter/Instagram and on the problems of addiction and manipulation, and only briefly touch upon the broader ethical, political and economic dimensions of digital technologies. So, if you found the documentary depressing or shocking, it is important to realise that things are not as bad as they look – in many ways they are much worse. But there is also good news. Whereas the last ten minutes of the documentary give a vague impression that something can be done about it, it falls short of doing justice to some existing constructive proposals that have been put forward to rethink digital systems. It also fails to give an impression of how deep the ethical and social problems of our digital societies in the 21st century actually are. Here is a modest attempt at filling these two gaps: giving a richer sense of the problems we are facing and proposing some general ways to address them.
Nothing to hide, a lot to lose
Let’s start with the bad news. One big merit of The Social Dilemma is showing once more that the big issue with Google, Facebook & co is not simply privacy, it’s power. The talk of privacy seems to point to a problem of users’ control over private data: companies or governments should not check what I am doing on my computer. This may not appear to be a big problem: many feel they have “nothing to hide”, and may even have a lot to gain by trading their boring personal data for far more exciting services. So let people be free to share their data, if they want. But the collection of personal data generates a different kind of harm when it takes the form of collecting big data about big groups. You as an individual are not so important, but companies, governments or criminals may have bigger plans. For instance, they may want to influence elections, as some political actors may have done in 2016 with the Brexit referendum and the US election, using the massive amount of user data collected via an apparently innocuous Facebook app and then sold via the infamous consultancy firm Cambridge Analytica. Harvesting data at scale makes us collectively vulnerable in ways that go beyond breaches of individual privacy. This is a new and nasty problem: even when individual rights are formally respected (e.g., via privacy consent forms), the consequences for society may be very harmful. It is a form of ‘accumulative harm’, not unlike what we see with environmental pollution. A couple of plastic bags in the sea is not a problem, but a steady stream of them, day in day out over decades, may destroy the ecosystem. You may think you have “nothing to hide”, but we all have something to lose by allowing big data collections.
The new masters
The Social Dilemma clearly shows how manipulative and addictive social media platforms are, and highlights other big societal risks created by them: political polarisation that makes democratic debate difficult, fake news, and harm to truth. These are all symptoms of a deeper problem. The problem is not what big tech companies actually do, but the very fact that they are in a position to do whatever they want without being subject to any control: economic, legal, political or social. Facebook has remained among the ten most valuable companies despite the Facebook-Cambridge Analytica scandal, Twitter can decide to silence the President of the United States, and Google may decide to leave an entire country without its services. This unaccountable, dominating position of Big Tech is a present-day exemplification of the master-slave dynamic often referred to by so-called neo-republican political philosophers: even if a master never harms his slave and even contributes to his wellbeing, he could choose to do so at any moment. The serious moral problem is being at the mercy of someone who could wield power over you arbitrarily.
In the pre-digital world, few people would have found it acceptable for a handful of private multinational companies to control, say, most roads, shops, restaurants, public buildings, TVs, telephones, hospitals etc., while not even caring to disclose how they run their business or which space they will buy or occupy next, and even exploiting their position to run experiments on people, possibly with the goal of manipulating their behaviour. But this is what is happening now with Big Tech in the less visible digital infrastructures. Not just social media platforms, but also all kinds of e-commerce platforms (Uber, Amazon etc.) leverage their privileged, sometimes quasi-monopolistic, intermediary positions to perpetuate information asymmetries, keeping their business practices opaque and collecting vast amounts of data from their users and customers. This in turn enables systematic manipulation through various personalisation and hyper-personalisation techniques exploiting the best available psychological knowledge. Such practices may be less effective on an individual level than The Social Dilemma suggests, but the sheer scale and systematic character of these practices is the real reason for concern. We do not want to be so crucially dependent on systems over which we have no power and which see us solely as objects for nudging, manipulation and exploitation. This is not only an undesirable scenario, it’s a moral and political dystopia.