“The advent of machines has obliged us towards a new education, or better towards an auto-education; it imposes us to react, thus discovering our potentiality, our strength, our courage, our wisdom. We should learn to choose. We should discover our sense of proportion, develop the ability to refuse when necessary, or develop the ability to recognize when to set a limit to the invasion of machines in our life, in our own body.” - Francesco Varanini.
As Yuval Noah Harari puts it, “when neural networks will be replaced by intelligent software (…) human history will come to an end and a completely new process will begin.” This prediction might prove wrong, but if we reflect on the profundity of the current change and the role technology is playing in driving it, we can easily see that we are going through a phase of paradigmatic change. When the novelty emerges, we might not even be human anymore: some sort of cyborgs, symbionts, artificial intelligences more or less hybridized, powerful, intelligent and able to learn (apprehend), but not humans anymore.
If this prospect is plausible, an extensive, precise and incisive reflection on what is happening will be needed more than ever. For this reflection, the topic of artificial intelligence stands out above all others, for it clearly conveys the risk and the challenge that all humankind is faced with. This risk has been underestimated, and this challenge has probably been embraced superficially by many. The topic arouses general interest, so it is worth dealing with in depth. Furthermore, this reflection must be carried out by AI experts and advocates who constantly bear in mind that they belong to humankind.
Solotablet has decided to do so by involving people who have invested in and are working on AI, philosophizing about it and imagining possible future scenarios.
Dear [Mr/Mrs], thanks for accepting my interview proposal.
To start could you please tell us something about yourself, your company, your current business and interest in AI?
My interest in ethics and artificial intelligence started two years ago, when I completed my Master's degree. My dissertation topic was the ethics of artificial intelligence. At the time, I considered my work unfinished; that is why I started a PhD, in order to be able to answer the questions I had asked myself during my thesis.
Do you find it useful to reflect on the technological and information era we are experiencing?
I find a period of reflection entirely useful. It is during such a period that we can study the usefulness and the potential of a technology, but also reflect on its misuses (e.g., Apple's AirTags: the initial idea of finding lost objects is fully respectable, but used insidiously they can be turned into devices to spy on their holders). This is why it is essential to have a period of reflection to weigh the pros and cons.
As for its duration, a reflection period should not be too long. A period that is too long can lead to either a delay or a cancellation of a project that could potentially have improved the living conditions of humanity. On the other hand, a period that is too short, except in exceptional situations (such as the current health crisis), may have the opposite effect to that desired.
In any case, a period of reflection must be carried out over a period fixed in advance, with enough time to analyze all the ins and outs of a project.
What projects and business ideas are you working on?
At the moment I’m not working on any projects. I’m looking for a position in the field of ethics and artificial intelligence.
What goals and objectives do you have?
At the moment, my main objective is to find a company willing to take me on a three-year fixed-term contract so that I can complete my PhD in ethics and artificial intelligence. And, ideally, to join that company permanently afterwards.
To whom are they addressed and in which fields?
Nowadays everybody speaks about AI, but people probably do so without a clear understanding of what it is, of its implementations, implications and effects. Information, comprehension and knowledge are not supported by the media either. There is a general inability to distinguish among the various kinds of AI machines: purely reactive machines (Arend Hintze) like Deep Blue or AlphaGo; specialized AI (like the systems programmed to perform specific tasks in automobiles); general AI (AGI, or Strong AI), with the capacity to simulate the human mind and elaborate representations of both people and the world; and superior AI (Superintelligence), with a self-awareness capacity that could bring about the technological singularity.
How would you define AI and what do you think its current and future state of evolution is?
Overall, I define artificial intelligence as a computer program (artificial intelligence being a multidisciplinary field) based on data that is biased, either deliberately (e.g., a press article) or naturally (e.g., The Cognitive Bias Codex, Wikipédia), or on obsolete data: data collected at a point in time whose characteristics are probably no longer current by the time the program uses them to interact (e.g., the recruitment of female engineers at Amazon, Reuters, 2018).
An artificial intelligence is also an intelligent program, but one specialized in a single field (e.g., DeepMind's Go-gifted AI fails a high school mathematics test, Le Figaro, 2019).
As far as its future is concerned, I would like to see developers focus on unbiased artificial intelligence rather than taking themselves for gods. They should fix the quality of their algorithms (Biases in AI: Mapping Needed to Try to Correct Them, Le Mag IT, 2021) while it is still possible to correct design errors and before they lead to bigger catastrophes. I have the impression that the programs currently on the market are technically very advanced but that their settings are not refined, whether intentionally or not. Technical prowess is useful, but it belongs in a research setting, in a test environment. A program should only be brought to market after almost all its biases have been purged, in order to limit possible misuses. The last point is the use of a program. The most dangerous thing about a program is how it is used, its ultimate intention of use. This point will always be unpredictable, because no one has the means to predict how humans think and react.
Do you think that Superintelligence will ever be able to lead us towards Singularity, as Kurzweil defines it?
In the case of a superintelligence, I like to compare its evolution to that of living beings: adaptability, the capacity to evolve and perhaps even, why not, a form of awareness. I take as an example the character of Sonny (a robot endowed with a conscience) from the science-fiction movie I, Robot (Alex Proyas, 2004).
But perhaps I am asking a bit too much; and if an artificial intelligence did approach this degree of consciousness, wouldn't it be a danger to the existence of human beings? Wouldn't it be able to control the human species on the grounds that humans are a danger to themselves? Perhaps James Cameron's Terminator and Skynet would then no longer be part of popular culture but (unfortunately) part of a sad daily reality. These questions must be asked now, with the development of military drones (Usine Digitale, 2021).
The most worrying thing at the moment is that technical prowess matters more than the supervision of the use of artificial intelligence. However, since spring 2021 the European Union has formalized its draft regulation on artificial intelligence (Actu IA, 2021). I understand that it can be restrictive and frustrating for developers, but it is necessary for the security of us all. The development of artificial intelligence should not be used to develop and flatter the ego of one person or group of people but to help all living beings (including animal species).
By way of comparison: in computer networking, when a firewall is installed, the general rule is that all ports are closed by default for security reasons; ports are only opened if they are needed. I like to think that the use of artificial intelligence should work the same way. I would prefer a strict framework of rules to start with, and then, little by little, as mastery of the use of these programs grows, fewer constraints and more freedom, until an equitable balance is reached.
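The default-deny principle behind this comparison can be sketched in a few lines. The class below is a hypothetical illustration, not a real firewall: actual firewalls such as iptables or nftables enforce this at the network level, but the logic is the same.

```python
# Illustrative sketch of the default-deny principle: everything is
# blocked unless it has been explicitly allowed.
class DefaultDenyFirewall:
    def __init__(self):
        # Start with every port closed: no allowances by default.
        self.allowed_ports = set()

    def open_port(self, port: int) -> None:
        """Explicitly allow traffic on a single port."""
        self.allowed_ports.add(port)

    def is_allowed(self, port: int) -> bool:
        """Deny by default; permit only what was explicitly opened."""
        return port in self.allowed_ports


fw = DefaultDenyFirewall()
fw.open_port(443)          # open HTTPS only because it is needed

print(fw.is_allowed(443))  # True: explicitly opened
print(fw.is_allowed(23))   # False: everything else stays closed
```

The analogy to AI governance is direct: start from a restrictive baseline and grant each freedom explicitly, rather than starting open and trying to close gaps after the fact.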
The beginnings of AI date back to the fifties, but only recently has there been a concerned reaction about the role it might play in the future of humankind. In 2015, several scientists signed an appeal demanding a regulation of AI; for some, it was just a hypocritical way to clear their conscience.
What do you think about it?
I think that it was, first of all, an act of benevolence towards others, and that the vast majority of the scientists who signed this appeal did so in all honesty towards humanity. That said, there are surely a few who did it to ease their conscience or to enhance their curriculum vitae. But this small group must be a minority among these scientists, and we shouldn't focus on them.
Are you in favor of free research and implementation of AI, or do you think regulation is better?
My answer is paradoxical. On the one hand, I’m in favor of freedom in the field of artificial intelligence research, because I think that restricting scientific minds isn’t a solution to the field of research and can be detrimental to the benefits that artificial intelligence has to bring to humanity.
On the other hand, it is in day-to-day use that regulation must take place. I think that as soon as a computer program (or an invention in the broadest sense of the term) infringes on the integrity of a human being, it must be modified (or even abandoned altogether) by means of common and, above all, universal regulations. Regulation must serve as a safeguard for scientific minds as well as a way to allow the fairest possible use of artificial intelligence. In this respect, when recommending legislation, I find it preferable to follow the example of the Council of Europe: restrict at the beginning, then grant more freedom later.
Don’t you think that life would make less sense to human beings if intelligent machines took control away from them?
Your question is very interesting because I see two meanings in it. First, I think of the substitution of machines for human beings in arduous, repetitive or low-value-added jobs. My answer is that I prefer to privilege human dignity and integrity rather than see human beings assigned to inhuman and laborious tasks.
Then the second meaning of your question makes me think of the people who have to make decisions, the executive (judges, heads of state, chiefs of staff). To this I would reply that if the algorithms are ‘pure’ of any bias, then the decisions of an AI will be more accurate than decisions made by human beings.
We should be worried about the supremacy and power of machines, but above all about the irrelevance of humankind that could be their aftermath. Or is this simply our fear of the future and of the hybridisation characterising it?
At the moment, machines need humans in order to be created. The human being is still useful to the machines, whether in their design or in their manufacture. However, the day a machine is created by another machine will, first of all, mark the birth of a new species on Earth. It is at that moment that human beings will have to ask themselves questions about their future.
Concerning the supremacy of machines over human beings, I think that the day artificial intelligence is endowed with freedom of reasoning, as well as an awareness of its capacity to interfere autonomously with the environment around it, there will be a very high risk that human beings will be in danger from machines. What I mean is that humans are a danger to themselves. Humans have always warred with or enslaved their fellow humans, from the Paleolithic period until today in 2021. In his book Leviathan (1651), the English philosopher Thomas Hobbes understood this well; to quote an expression from his book: ‘Man is a wolf to man.’ If machines were instructed to protect human beings, they would have to deal with a number of problems: they would face complex dilemmas in protecting man from himself. People may be entertained by films such as Terminator or The Matrix, where robots chase living beings. But when we know that a country such as Russia has set up a regiment of armed autonomous vehicles (C4ISRNET, 2021), it is highly likely that, if humans don't regulate the use of this type of weaponry, humanity's future conflicts will no longer be about conquest, but only about survival against machines.
In conclusion, I consider that as long as the machine is not fully autonomous and the human being remains an irreplaceable link in the process of making machines, humans will remain the dominant species on Earth. The only risk is that the machine becomes a slave to the human being in its daily tasks (Wall-E, Andrew Stanton, 2008).
According to the philosopher Benasayag, machines are meant to function correctly, while humans are meant both to function (chemical processes) and to exist (live). Human beings can't simply undergo (Big) Data collection processes or obey binary calculation rules; humans need complexity, a body, meaning, culture, experiences; finally, we need to experience negativity and the lack of knowledge.
This isn’t true for machines and it never will be, or do you think it will?
I fully agree with Benasayag's comments on the fact that, for their personal development, human beings need to interact with the elements that surround them and to have freedom of movement and freedom of thought. On these two points, the current health crisis has shown both the need for personal freedom and the limits of human tolerance for its restriction; it has clearly shown the effects of deprivation of individual freedom. I digress onto another, unrelated topic, but this temporary deprivation of rights that we have experienced, and that many have decried, should be used to improve or redesign our prison system.
Benasayag's words are confirmed by our children, from a very young age. I don't have the experience of having children, but I have been able to observe them and get feedback from friends who do. It is certain that children need to experiment, to test (both physically and psychically), to feel the environments around them. What we can conclude from this is that in humans this is innate, whereas in machines it is acquired, for the time being.
I insist on the expression ‘for the time being’, because in the current state of scientific and technical knowledge and skills, a machine needs to acquire a minimum amount of data to function. The AlphaGo program (Illumin Magazine, 2019), for example, beat the world's second-best Go player. Despite this performance (Go is a very complex game), the developers (i.e., human interference in the program) had to integrate all the rules of the game into AlphaGo. The artificial intelligence wasn't able to play the game without a minimum of prior knowledge.
Don’t you think that relying completely on machines will lead humankind towards a state of impotence?
My greatest fear is in the military field. Already nowadays, leaving military power to a human being or a group of human beings without safeguards is dangerous for the security of the world’s population. So I can’t imagine leaving this destructive power to a machine or machines. The difference is that it’s easier to reason with one or more human beings than with a machine. In most cases it’s possible to reason with a person (with emotions, with feelings or with economic and financial issues). But with a machine, how can you make it understand that it’s wrong? And how will you manage to reason with it knowing that it’s devoid of feelings, and that it’s not interested in anything that might interest a person, such as money?
What I also foresee is a rupture in employment, through the replacement of humans assigned to tasks with little added value that are easily performed by machines (handling…), or to thankless tasks. The people affected by this evolution will be those who are resistant to change, whether voluntarily or involuntarily.
It’s complicated for some people to evolve in an unfamiliar environment or to learn about new technologies. This is already the case today with the use of computers and digital tools in everyday life. And from a personal point of view, I don’t think it’s a question of age but rather a question of neuroplasticity of the brain, and that’s what makes us, all living beings, superior to machines. It’s this category of people, those with learning difficulties, that we must support in order to prevent them from becoming isolated in a world where digital technology and robotics will predominate.
In his latest book, Le cinque leggi bronzee dell'era digitale (The Five Bronze Laws of the Digital Era), Francesco Varanini develops a critical reading of the history of AI through recently published books by authors such as Vinge, Tegmark, Kurzweil, Bostrom, Haraway, Yudkowsky, et al. His criticism is addressed to those techno-enthusiasts who celebrate the solar advent of AI, showing the highest interest of a technician towards the machine, but to the detriment of the human being. Apparently, through AI they want to pay tribute to the human being, but in fact they are sterilising and reducing its power, as well as making man play a submissive and servile role.
Which side do you take in this matter?
I take the side of the human being. I think AI should be a tool to assist or even a tool to collaborate (as for example in the medical field).
Are you with the technician/expert/technocrat or with the human being, or do you choose the middle ground?
I choose the middle ground, but I remain a human being above all. I am a measured person: I am for progress and technological development, but not at the expense of humanity. I see technology as a tool to improve the human condition, not to flatter humanity's ego.
Does the power, development and expansion of AI concern you? (e.g. in China the digital system is used to control and keep society under surveillance).
It isn't the expansion of AI that concerns me most, but the way AI is used. I start with your example of the use of AI in China against the Uyghur people and, more generally, in the repression and surveillance of the Chinese population.
Then, my other concern is about bias, racism and sexism in AI decision-making (in human resources or in justice).
Another concern is the use of AI in the military domain with the development and use of autonomous weapons on the battlefield.
In the times of coronavirus, many people are wondering about disappearing workplaces, while others are celebrating smart working and the technologies that make it possible. In those businesses where smart working isn't possible, the use of robotics, automation and AI is widespread. The debate on the disappearance of jobs due to technique (the ability to perform) and technology (the use of technique and the knowledge to perform) isn't new, but today it has become more urgent than ever. Both manual and cognitive work are being substituted by AI. The ongoing automation is demolishing entire supply chains, as well as economic and organisational models.
What do you think about this?
I think we're witnessing a new industrial revolution (I'm referring to Klaus Schwab's book, The Fourth Industrial Revolution, 2017). Maybe it's my humanistic and optimistic side, but I see the implementation of AI, in general, as helping and improving working conditions. I'm fully aware that jobs will disappear and people will be left without work. My view may be harsh, but I think it's better that dangerous, thankless and non-value-added tasks be done by machines, so that human beings can take up more interesting jobs with better working conditions and better pay.
However, I’m well aware of the inequality of each individual in the face of change. And state bodies will have to accompany these people through appropriate training so that no one is left by the wayside.
Considering the role AI is having today, do you think that it will open up new job opportunities or you think it will be the protagonist of the most significant destruction of jobs in history, as many fear?
AI will destroy jobs, that is unfortunately obvious. But AI will also create jobs, in new fields. Just as previous industrial revolutions have destroyed jobs to create others. But as I said above, we (governments) will have to support the most disadvantaged people to help them through this new social revolution. We have all the cards in hand to help them (access to knowledge bases with the Internet, distance learning, free courses, MOOCs, funding, etc.).
Machines at work, humans out of work, happy and content!
For some, the labour world will be crowded with new workers: technicians who design new machines (software and hardware), make them work properly and see to their maintenance; technicians who train other technicians; and other types of work associated with the proper functioning of technological machines.
Do you think this is likely to happen?
Yes, I believe there will always be a need for more technicians, because there will always be more technological machines, more training and maintenance to be given on these types of machines.
And even if this were to happen, it wouldn't be so for many people. Elite groups would be created, but many people would lose their jobs, the one thing they need to be themselves.
Would you like to share with us any thoughts or worries on this point?
I understand that some people need their jobs for their personal well-being. But I also think that human beings are capable of adapting and doing another job. Even if for some people, it’s difficult to change jobs after 20 or 30 years in the same job.
As far as the elites are concerned, I find the term ‘elite’ outdated and inappropriate. I feel like I'm back in the '90s or earlier, when only a handful of elites had access to knowledge. Nowadays, access to knowledge is available to almost everyone, almost without restriction. I see this more as a lack of willingness on everyone's part to evolve than as a restriction of access to knowledge by elite groups. For example, twelve years ago I was a simple handler in a postal sorting office, with a simple baccalaureate. Now, by dint of willpower and tenacity, and despite the difficulties encountered on all sides, I have managed to reach the level I am at now. I think that most individuals, unfortunately not all of humanity, remain in control of their destiny and, at some point in their lives, have favoured one choice or another (one favouring his personal life, another his professional life). I am aware that my comments aren't kind. But I think that people who denigrate successful people don't know what difficulties those people went through to get there. Of course, there has always been a privileged group, but I don't think we should generalize to all the ‘elites’.
AI is a political issue as well. It always has been, but today this is most evident in its use for surveillance and control. Even if little has been said about it, we can all see clearly (simply looking is not enough) what is happening in China: not so much the implementation of the facial recognition system as the strategy of using AI for future world control and dominion. Another very important aspect, perhaps enabled by the pervasive control that mass data collection makes possible, is people's complicity, their participation in the national strategic project by giving up their own freedom (privacy).
Could this be a signal of what might happen tomorrow to the rest of us in terms of less freedom and cancellation of democratic systems typical of the western civilization?
First of all, we're dealing with two different civilizations, the West and the East, which implies two different cultures, and I think it's difficult to compare two different cultures.
I think we’re already in a phase of mass surveillance, not only by countries with their intelligence agencies, but by private sector companies that collect personal data, legally with or without our consent.
As far as the Council of Europe’s strict recommendations on AI are concerned, I think it’s better to restrict as much as possible to start with and then allow a bit more freedom.
Or could this be an exaggerated reaction, not justified by the fact that AI can be developed and governed for different purposes?
Unfortunately, no, this isn't an exaggerated reaction. AI, like every human invention, is neither good nor bad in origin. The problem isn't the invention itself, but how that invention (or, in the case of AI, the data collected) is used.
We're inside the digital era and we live in it as happy sleepwalkers, in possession of tools humankind never had the chance to use before. We live in parallel realities, all perceived as real, and we accept technological mediation in every single activity: cognitive, interpersonal, emotional, social, economic and political. The widespread acceptance of this mediation results in an increasing difficulty of the human ability to comprehend both reality and the world (machines can do it) and in increasing uncertainty.
How do you think machines, i.e. AI, could play a different role, replacing man at the centre, in satisfying the need for community and real relationships and in overcoming uncertainty?
I don’t see AI as a replacement for human beings but rather as a complement to them.
I take Japan as an example. In this country of about 126 million inhabitants (in 2019) with an ageing population, the Japanese will face a shortage of caregivers to look after their elderly. To solve this problem, engineers have developed robots that can assist or even replace caregivers in nursing homes.
The use of robotics should be a means of relieving human beings of difficult and thankless tasks.
Would you like to suggest related topics to be explored in the future?
If it has not already been done, I would suggest philosophical and technical subjects, namely: will AIs (or AGIs) be able to develop consciousness, or is that a capacity reserved for living beings?
 The Cognitive Bias Codex – Wikipédia
 Amazon scraps secret AI recruiting tool that showed bias against women – Reuters
 Deepmind's Go-gifted AI fails a high school maths test – Le Figaro
 Biases in AI: mapping needed to try to correct them – Le Mag IT
 I, Robot – 2004
 NATO Seeks to Frame the Use of Artificial Intelligence for Military Purposes – Usine Digitale
 Europe: Official announcement of the draft regulation on artificial intelligence – Actu IA
 A Warning to DoD: Russia advances quicker than expected on Artificial Intelligence, battlefield tech – C4ISRNET
 AI Behind AlphaGo: Machine Learning and Neural Network – Illumin Magazine
 Graying Japan to Face Unprecedented Challenges – Nippon.com