Homo sapiens and Homo digitalis

We will not be able to set efficient rules framing development and use of AI ( Dr Emmanuel R. Goffi)

20 February 2023

As Yuval Noah Harari puts it, “when neural networks will be replaced by intelligent software (…) human history will come to an end and a completely new process will begin.” This prediction might prove wrong, but if we reflect on the depth of the current change and the role technology is playing in driving it, we can easily see that we are going through a phase of paradigmatic change. When the new emerges, we might not even be human anymore: some sort of cyborgs, symbionts, artificial intelligences more or less hybridized, powerful, intelligent and able to learn, but no longer human.

If this prospect is plausible, an extensive, precise and incisive reflection on what is happening will be needed more than ever. For this reflection, the topic of artificial intelligence is more compelling than any other, for it clearly exposes the risk and the challenge that all humankind is faced with. This risk has been underestimated, and this challenge has probably been embraced superficially by many. The topic arouses general interest, so it is worth dealing with in depth. Furthermore, this reflection must be carried out by AI experts and advocates who constantly bear in mind that they belong to humankind.

SoloTablet has decided to do so by involving people who have invested in and are working on AI, philosophizing and imagining possible future scenarios.

In this article we publish Carlo Mazzucchelli's interview with Dr Emmanuel R. Goffi, AI ethicist, Director of the Observatory on Ethics & Artificial Intelligence at the Institut Sapiens, and research associate with the Big Data Lab at the Goethe Universität in Frankfurt, Germany.

 

Dear Mr Goffi, thank you for accepting my interview proposal. To start, could you please tell us something about yourself, your organization, your current work and your interest in AI? Do you find reflection on the technological and information era we are experiencing useful? What projects and business ideas are you working on? What goals and objectives do you have? To whom are they addressed, and in which fields?

I am currently the Director of the Observatory on Ethics & Artificial Intelligence at the Institut Sapiens in Paris and a research associate with the Big Data Lab at the Goethe Universität in Frankfurt, Germany. My work consists in questioning the ethical acceptability of the development and use of AI-fitted products. With the advent of AI, as with any other technology, we face a large number of ethical issues that must be addressed in order to make sure that AI fits our expectations and that we are able to limit its risks and maximize its benefits.

Unfortunately, so far, ethical reflections come under what I call cosm-ethics, namely calibrated narratives aimed at reassuring potential consumers through a specific set of words related to ethics. In other words, current reflections on ethics are not based on ethics per se, but on marketing communication. The vocabulary refers to ethics, but the analysis is all about building trust in order to sell AI-fitted products.

This is why my main goal is to bring AI ethics back to philosophical thinking and to open a real debate on the ins and outs of AI. A nuanced debate is necessary to assess as thoroughly as possible the risks and benefits of AI, and to make educated decisions about what we consider acceptable and what we do not.

To that end, I am, for instance, working on the cultural grounding of ethics, trying to demonstrate that AI ethics is culture-related and cannot be imposed by the West through a superficial Western understanding of ethics at best, or through cosm-ethics at worst. The point here is to question philosophical, and thus ethical, perspectives from a cultural standpoint and to see how, through these different lenses, people see their relation to technology at large and to AI specifically.

It is really important, when it comes to AI, to recognize and accept that ethics is plural, for it depends on several factors such as values and their relative importance, religions and other beliefs, history, political systems and so on. Ethics is thus related to a philosophical system of thought that must be understood before deciding what is deemed acceptable or not regarding the development and use of AI.

 

Nowadays, everybody speaks about AI, but people probably do so without a clear understanding of what it is, of its implementations, implications and effects. Information, comprehension and knowledge are not supported by the media either. There is a general inability to distinguish among the various kinds of AI machines: purely reactive machines (Arend Hintze) like Deep Blue or AlphaGo; specialized AI (like the systems programmed to perform specific tasks in automobiles); general AI (AGI, or strong AI), with the capacity to simulate the human mind and elaborate representations of both people and the world; and superior AI (superintelligence), with self-awareness, to the point of bringing about the technological singularity. How would you define AI, and what do you think its current and future state of evolution is? Do you think superintelligence will ever be able to lead us towards the Singularity, as Kurzweil defines it?

To be honest, there is no settled definition of AI, and I do not feel legitimate in providing one.

That is an important point, since what you understand when you hear “AI” determines how you will apprehend it. Definitions are key. We are unable to define intelligence. We are even unable, at least from a philosophical perspective, to define artificial. So trying to define AI is not only pointless, it is a waste of time. Yet, when you want to address AI-related questions, you need to know what you are talking about. So the point is to adjust your answers and analysis to the way the people you are dealing with understand or define AI. Most of the time, people actually see AI as the replication of human cognitive abilities in machines.

So if AI aims at mimicking the capacities of the human brain, it is fair to wonder whether at some point this new kind of intelligence will take over from human beings made of flesh and blood. I do believe that we are moving towards something we can call singularity, or transhumanism, or something else. It is difficult to say precisely when we will reach this point, but we will reach it for sure.

This is something we should discuss right now, to make sure we all agree that it is what we want for the future. The problem is that lots of people are still waving this idea aside, saying that it is pure sci-fi. Yet no one can affirm with 100% certainty that singularity, or whatever name we use, will not occur. So even if the probability is low, and even if it is to occur only in the long run, we have to discuss it now.

 

The beginnings of AI date back to the Fifties, but only now has there been a concerned reaction about the role it might play in the future of humankind. In 2015, several scientists signed an appeal demanding regulation of AI; for some it was just a hypocritical way to clear their conscience. What do you think about it? Are you in favor of free research and implementation of AI, or do you think regulation is better? Don't you think that life would make less sense to human beings if intelligent machines took command away from them? We should be worried about the supremacy and power of machines, but above all about the irrelevance of humankind that could be its aftermath. Or is this simply our fear of the future and of the hybridization characterizing it? According to the philosopher Benasayag, machines are meant to function correctly, while humans are meant both to function (chemical processes) and to exist (live). Human beings cannot simply be subjected to (big) data collection or obey the rules of binary calculation; human beings need complexity, a body, sense, culture, experiences; finally, we need to experience negativity and the lack of knowledge. This is not true for machines and it never will be, or do you think it will? Don't you think that relying completely on machines will lead humankind towards a state of impotence?

I do not think we will ever be able to set efficient rules framing the development and use of AI.

There are too many interests (mainly economic and diplomatic) behind this technology, and stakeholders, private and public, are not really keen on accepting constraints that would deprive them of the financial benefits of AI. There is a fierce AI race out there, and all actors want their share of the AI pie.

That does not mean that I am against regulation. On the contrary, I think regulation is essential. My concern is that we have naively entered into a quest for a universal code of ethics. This is risky for two reasons. First, as I already mentioned, such a code would deny the diversity of ethical perspectives and impose a kind of Western tyranny of AI ethics, which is not desirable. Second, focusing on ethical regulation is a way to avoid legal instruments through what Thomas Metzinger calls “ethics washing”. The problem with ethics is that it is never accompanied by formal sanctions. If you do not stick to the rules, the worst you risk is symbolic negative consequences (which can be heavy), while if you violate legal norms you will face effective penalties. A universal AI ethics code would be both a violation of diversity and pointless, in that it would not be efficient short of a strong regime of sanctions. The solution lies in local regulations monitored at the international level by an autonomous body.

I do not think life would make less or more sense with intelligent machines taking control. The sense of life is a philosophical question that goes beyond machines. For some people, life in the era of Covid constraints does not make any sense. Besides, I feel that people, at least in some parts of the world, are slowly getting used to the presence of AI in their everyday lives. They do not even question the fact that they, that we, might already be losing control over technology. Just consider our relation to cars, our dependency on the Internet and on our cellphones, the way we react to recommendations made by algorithms, the way we respond to flashing red LEDs and other notifications. Machines have not yet taken control over humans, but we are already losing our control over machines. We have become mere cogs in a huge network. And we do not want to admit it. We do not want to see it.

What we are moving toward is a world in which humans and machines will collaborate, will mix into each other. Sci-fi movies are interesting in that sense, showing that the future is not only made of high technology but of a mix of the ancient and the very modern. The same will be true for AI.

To some extent, we can even consider that machines are the future of humanity. That is not foolish, for we are unable to define what it is to be human. Is RoboCop still human? Where does humanness lie? In our minds? Our souls? Our bodies? In that case, in which part of our bodies? Is someone fitted with artificial limbs and an artificial heart still human? These are philosophical questions we have yet to answer.

Technology, in many places, is seen as an instrument aimed at making our lives easier. In that sense we are all falling into impotence. Machines will not make us impotent. We are making machines because we are lazy and hedonistic, to the point where we refuse anything that can be difficult or harmful. We want to benefit from the beauty of life, but we reject all of its difficulties. What is paradoxical is that happiness and sadness go hand in hand: you cannot fully enjoy life if you do not experience difficulties. Our laziness and our quest for happiness will lead us to a world where machines decide for us, where we no longer take risks and responsibilities. Machines are not responsible for that; we are.

 

In his latest book Le cinque leggi bronzee dell'era digitale (The five bronze laws of the digital era), Francesco Varanini develops a critical history of AI, re-reading it through recently published books by authors such as Vinge, Tegmark, Kurzweil, Bostrom, Haraway, Yudkowsky et al. His criticism is addressed to those techno-enthusiasts who celebrate the triumphant advent of AI, showing a technician's utmost interest in the Machine to the detriment of the human being. Apparently, through AI they want to pay tribute to the human being, but they are sterilizing and reducing its power, as well as making man play a submissive and servile role. Which side do you take in this matter? Are you with the technician/expert/technocrat or with the human being, or do you choose the middle ground? Do the power, development and expansion of AI concern you? (E.g., in China the digital system is used to control and keep society under surveillance.)

I do not take sides. I do not feel I have to take sides, actually. I believe the truth, if it exists at all, is somewhere in between techno-enthusiasm and technophobia. It also depends on your own definition of human. If you see machines as the mere continuation of humanness by other means, to paraphrase Clausewitz, then you will not be concerned. If you see them as the end of humanity, you will be concerned. I do not have to judge that. Both stances raise philosophical questions. The problem is mainly that people who take sides quite often do so without any deep reflection on the subject. This inevitably leads to a polarization between pros and cons, which kills the debate.

Nonetheless, the anarchic development and use of AI is somewhat concerning. The case of China is an interesting one, since it rests much more on geopolitical elements than on philosophical ones. China is not the only country using facial recognition. There are many more CCTV cameras per inhabitant in the US than in China, and I do not feel that the way the US is using facial recognition is more acceptable than the way China is using it. Besides, facial recognition must also be put back into a specific cultural context instead of being read through a unique Western lens, as well as into the wider context of the use of personal data for disputable purposes. In that latter case, the US is at least as questionable as China. The main ethical point here is that by focusing on China we do not question our own practices. Interestingly, the social credit system in China was inspired by the monitoring of drivers in the US to adjust their insurance premiums. That shows that we should put our own houses in order before criticizing others.

 

 

In the times of the coronavirus, many people are wondering about disappearing workplaces, while others are celebrating smart working and the technologies that make it possible. In businesses where smart working is not possible, the use of robotics, automation and AI is widespread. The debate on the disappearance of jobs due to technique (the ability to perform) and technology (the use of technique and the knowledge to perform) is not new, but today it has become more urgent than ever. Both manual and cognitive work are being replaced by AI. The automation currently under way is dismantling entire supply chains, as well as economic and organizational models. What do you think about this? Considering the role AI is playing today, do you think it will open up new job opportunities, or will it be the protagonist of the most significant destruction of jobs in history, as many fear? For some, the labor world will be crowded with new workers, such as technicians who shape new machines (software and hardware), make them work properly and see to their maintenance, and again technicians who train other technicians, and other kinds of work associated with the proper functioning of technological machines. Do you think this is likely to happen? And even if it were, it would not be so for many people. Elite groups would be created, but many people would lose their jobs, the one thing humans need to be themselves. Would you like to share with us any thoughts or worries on this point?

Like other technologies before it, AI is leading to the disappearance of some jobs while giving rise to new ones. The problem is less the destruction of work than our inability to adjust to the new environment. Once again, anything that might jeopardize our comfort and our habits is seen as a threat. Besides, it is paradoxical to praise the advent of AI and then complain about its consequences. Interestingly, lots of people are, for very relevant reasons, concerned about the impact of AI on their jobs, while not taking part in the debate around the extensive use of AI and its intrusion into our everyday lives. I am not sure that people working on a supply chain in the automobile industry are concerned about the fact that building autonomous cars will lead to the loss of professional drivers' jobs. But they are certainly worried that their own jobs could be taken over by machines. Individualism and hedonism, along with laziness, lead to very egoistic perspectives on the topic and to a polarization of stances, again killing constructive debate.

We can also consider that if machines are meant to replace humans, this question of job destruction will just be transitory, and at some point it will no longer be relevant, since all jobs will be occupied by machines.

Basically, we need to ask ourselves what kind of society we want for the future. But that requires time and effort, and a strong ability to debate, which means listening to others in order to make a thorough analysis of the situation. We do not all share an enthusiasm for long discussions about the future of humanity. Consequently, most of us are delegating, intentionally or not, consciously or not, our free will, making ourselves the voluntary servants of what we pejoratively call the “elites”, or of people we see as legitimate. We are not mere victims; we are, at least partly, responsible for our fates.

 

AI is a political issue as well. It always has been, but today this is more evident in its use for surveillance and control. Even if little has been said about this, we can all see clearly (simply looking is not enough) what is happening in China, not so much in the implementation of the facial recognition system as in the strategy of using AI for future world control and dominion. Another very important aspect, perhaps enabled by the pervasive control that total data collection makes possible, is people's complicity, their participation in the national strategic project at the price of giving up their own freedom (privacy). Could this be a signal of what might happen tomorrow to the rest of us, in terms of reduced freedom and the cancellation of the democratic systems typical of Western civilization? Or is this an overwrought reaction, unjustified because AI can be developed and governed with different aims and purposes?

First and foremost, I think we should stop focusing on China. Some practices in the Western world are questionable in terms of tracking or massive surveillance, especially in the current context of the pandemic. Second, we should not see the issue in terms of control or dominance over the world. We live in a competitive world; it has always been the case. Empires have always tried to increase their power and to control some aspects of human activity: education, defense and security, economy and finance, trade, transportation, diplomacy and so on.

It is a normal process in the international realm to seek as much power as possible. Here AI is only one way to get power, one element among others. Contrary to what President Putin of Russia stated, whoever becomes the leader in AI will not control the world. Power is a complex concept made of different factors, not only of AI.

Regarding democracy, here too we have to revise our definition of this political regime. Philosophers such as Aristotle and Socrates already warned against democracy and its downward slides. Besides, democracy is a very complex notion to define. Democracy in France is not democracy in the US, which in turn is not democracy in the Democratic Republic of the Congo. There is a myth that must be questioned regarding democracy as the gathering of citizens on the agora in Athens to make decisions about the City. That is not actually how it worked.

Democracy in many places of the Western world, and in France specifically, is already shaken and at risk. Democracy is a myth that has been built up over centuries, and we are slowly discovering its flaws. AI will not bring anything new under the sun of political regimes. It will certainly accelerate what has already been done by, for instance, social networks and new technologies, and on a larger scale by globalization.

 

 

We are inside the digital era, and we live in it as happy sleepwalkers in possession of tools humankind never had the chance to use before. We live in parallel realities, all perceived as real, and we accept technological mediation in every single activity: cognitive, interpersonal, emotional, social, economic and political. The widespread acceptance of this mediation results in an increasing difficulty for humans in comprehending both reality and the world (machines can do it) and in increasing uncertainty. How do you think machines, i.e. AI, could play a different role in replacing man at the centre, in satisfying the human need for community and real relationships, and in overcoming uncertainty?

I like the image of happy sleepwalkers. I do believe that we have lost our appetite for knowledge. We, in some parts of the world, have slowly slipped into a very comfortable “voluntary servitude”, as Étienne de La Boétie wrote. This is the price of hedonism, of the promise of a life without constraints, without difficulties.

The level of demand in schools has been lowered to please students and their parents. The obvious corollary is that people are no longer trained to think for themselves, to question the world they live in, not to mention to question themselves. Rational doubt has been replaced by imposed convictions about the nature of the world and of human beings. It is far more comfortable to learn by heart than to think.

We are feeding people with data they think is information. The problem is that as human beings we are not able to process such an amount of data. So at some point we rely on others to do the job and provide us with one slide and three bullet points explaining the world. That is called silver-bullet thinking: solving complex issues with immediate, simple solutions. It is quite developed in North America, and it is slowly arriving in Europe. But when you wait for others to feed you intellectually, you clearly abandon your free will, and your freedom at large. You open the doors to Foucault's governmentality, namely the control of minds and bodies by public, or even private, organizations.

Machines will exacerbate this tendency. Uncertainty prevails in many domains, and the more complex the world becomes, the stronger uncertainty will be. AI will not help us overcome uncertainty; it will add uncertainty to uncertainty, especially since we will not be able to understand how things work inside the AI black boxes. This is why trustworthiness is the new buzzword. By selling trustworthy AI, we are selling the fact that uncertainty is part of the equation and that we have no choice other than to trust AI. Trust is what remains when knowledge is weak. It is an intellectually comfortable last resort.

Concerning relationships, here again the question should be put into specific contexts. We do not all share the same appetite for social relationships; we do not even understand social relationships the same way, depending on where we live. In some places AI will create social ties; in others it will kill them. But AI is not responsible for that. Technologies have slowly reshaped relationships between humans. AI is a new step, not a revolution.

Would you like to say something to the readers of SoloTablet, perhaps something to read on artificial intelligence? Would you like to suggest related topics to be explored in the future? How would you suggest we contribute to sharing and publicizing this initiative, in which you yourself have been involved?

I would not recommend any specific readings. I would just call for curiosity and open-mindedness. The more we read, the more we learn, and the more we are able to form our own opinions.

One topic that is worth studying more deeply is the importance of culture in the assessment of AI ethics. It is important to put things into perspective and to avoid offering simple, immediate solutions to complex issues. AI ethics is a complex issue that must be addressed thoroughly and as objectively as possible, using all available tools. So far the perspective has been a Western one, with Western concerns, solved by Western solutions that are supposedly universally effective.

This is why I will keep on working on that aspect of AI ethics. I do not want a world where one-track thinking is the norm. I think diversity must be praised, for real.

 
