Homo sapiens and Homo digitalis

Humans are complex beings (Prantik Mukherjee)


07 January 2022 - The sapiens

“The advent of machines has obliged us towards a new education, or rather towards a self-education; it forces us to react, and thus to discover our potential, our strength, our courage, our wisdom. We must learn to choose. We must discover our sense of proportion, develop the ability to refuse when necessary, the ability to recognize when to set a limit to the invasion of machines into our lives, into our own bodies.” - Francesco Varanini.

As Yuval Noah Harari puts it, “when neural networks are replaced by intelligent software (…) human history will come to an end and a completely new process will begin.” This prediction may prove wrong, but if we reflect on the depth of the current change and on the role technology is playing in shaping it, we can easily see that we are going through a phase of paradigmatic change. When the new emerges, we might not even be human anymore: some sort of cyborg symbionts, artificial intelligences more or less hybridized, powerful, intelligent and able to learn, but no longer human.

If this prospect is plausible, an extensive, precise and incisive reflection on what is happening is needed more than ever. For this reflection, the topic of artificial intelligence stands out above all others, for it clearly conveys the risk and the challenge that all of humankind faces. The risk has been underestimated, and the challenge has probably been embraced superficially by many. The topic arouses general interest, so it is worth dealing with in depth. Furthermore, this reflection must be carried out by AI experts and advocates while constantly bearing in mind that they belong to humankind.

SoloTablet has decided to do so by involving people who have invested in and are working on AI, reflecting on it and imagining possible future scenarios.

In this article we propose Carlo Mazzucchelli's interview with Prantik Mukherjee (Senior Executive, Data Privacy, NIIT Ltd.).

Dear [Mr/Mrs], thank you for accepting my interview proposal. To start, could you please tell us something about yourself, your company, your current business and your interest in AI? Do you find reflection on the technological era and on the information we are experiencing useful? What projects and business ideas are you working on? What goals and objectives do you have? To whom are they addressed, and in which fields?

I am a Data Privacy Lawyer working for the MNC “NIIT Ltd., India”.

AI is the next big thing for technology lawyers like us. I developed my interest in AI back in my law school days and even took a certification course to enhance my knowledge of the field. The AI Bill that has been introduced in the European Union is a step towards legal compliance for AI and will attract many people from the legal field to practice in this area.

I find the reflection on the technological era and on the information we are experiencing quite useful, as it covers a range of ongoing matters related to the field of technology.

I work on privacy audits and legal compliance for our organization, and I aspire to become a prominent name in the technology law field, specializing in, but not limited to, data privacy, fintech and AI.

Nowadays everybody speaks about AI, but people probably do so without a clear understanding of what it is, of its implementations, implications and effects. Information, comprehension and knowledge are not well supported by the media either. There is a general inability to distinguish among the various kinds of AI: purely reactive machines (Arend Hintze) like Deep Blue or AlphaGo; specialized AI (like the systems programmed to perform specific tasks in automobiles); general AI (AGI, or strong AI), with the capacity to simulate the human mind and to elaborate representations of both people and the world; and superior AI (superintelligence), with self-awareness to the point of determining the technological singularity. How would you define AI, and what do you think its current and future state of evolution is? Do you think superintelligence will ever be able to lead us towards the Singularity, as Kurzweil defines it?

Artificial intelligence can be defined as the ability of machines to demonstrate intelligence on a par with human intelligence.

Mankind has been able to achieve artificial general intelligence, but I think that in another 25 years we will be able to achieve artificial superintelligence and will also see the emergence of sentient AIs. A lot of manual tasks are being handled by AI right now, and in my opinion many unskilled, semi-skilled and skilled tasks will be handled by AI in the future.

I agree with Kurzweil's statement to some extent: with the emergence of sentient AIs, that could become quite a possibility. It all depends on how humans are going to handle it and on how the regulatory and supervisory bodies are going to act in case of a crisis.

Every new technology comes with its own set of pros and cons. So, to minimize the cons, we have to set standards, adhere to regulations and conduct periodic audits to keep it under control.

 

The beginnings of AI date back to the Fifties, but only now has there been a concerned reaction about the role it might play in the future of humankind. In 2015 several scientists signed an appeal demanding regulation of AI; for some it was just a hypocritical way to clear their conscience. What do you think about it? Are you in favor of free research and implementation of AI, or do you think regulation is better? Don't you think life would make less sense to human beings if intelligent machines took command away from them? We should be worried about the supremacy and power of machines, but above all about the irrelevance of humankind that could follow. Or is this simply our fear of the future and of the hybridization that characterizes it? According to the philosopher Benasayag, machines are meant to function correctly, while humans are meant both to function (chemical processes) and to exist (live). Human beings cannot simply undergo the (big) data collection process or obey the rules of binary calculation; human beings need complexity, a body, sense, culture, experiences; finally, we need to experience negativity and lack of knowledge. This is not true for machines and it never will be, or do you think it will? Don't you think that relying completely on machines will lead humankind towards a state of impotence?

I think free research and regulation can go hand in hand. It basically depends on how the authorities are going to handle AI as a technology.

The main purpose behind developing new technologies is to make human lives easier, and AIs were developed with that aim. As I said earlier, every new technology comes with its own pros and cons; we have to find solutions to eradicate or minimize the cons.

Humans are complex beings. We are not just about performing tasks and processing information; we have feelings, emotions, empathy for each other and so on. AIs have not yet achieved the level of advancement where they can show this kind of complexity the way human beings do. If we are able to develop sentient AIs in the coming decades, then that might become a reality.

Humans have already started relying on machines for various tasks, and AI is slowly becoming part of our daily lives. In my opinion, instead of seeing AI as a possible cause of human impotence, we can see it as a boon for the future. Also, as I keep emphasizing, regulation can minimize the ill effects of AI and foster a symbiotic relationship between machines and humans.

 

In his latest book Le cinque leggi bronzee dell'era digitale (The five bronze laws of the digital era), Francesco Varanini develops a critical reading of the history of AI through recently published books by authors such as Vinge, Tegmark, Kurzweil, Bostrom, Haraway, Yudkowsky et al. His criticism is addressed to those techno-enthusiasts who celebrate the triumphant advent of AI, showing a technician's utmost interest in the Machine, but to the detriment of the human being. Apparently, through AI they want to pay tribute to the human being, but they end up sterilizing and reducing its power, making man play a submissive and servile role. Which side do you take in this matter? Are you with the technician/expert/technocrat or with the human being, or do you choose the middle ground? Do the power, development and expansion of AI concern you? (e.g. in China the digital system is used to control and keep society under surveillance).

I will take a middle approach here. As I said earlier, technological advancement is required for the wellbeing of mankind, but too much of anything, or giving a free hand to any technology, can pose risks. AI is actually helping scientists and researchers solve complex problems; it has even helped archaeologists decipher an Egyptian tablet that the human mind was finding difficult to comprehend and decipher.

The real threat might come from the ability of AIs to program themselves. If AIs self-program to such an extent that they go beyond human control, then humans have to be ready beforehand and come up with a solution to the problem, such as a kill switch for the AI, or develop a bug that will disintegrate its system if a risk arises.
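To make the kill-switch idea above concrete, here is a minimal, purely illustrative sketch (not from the interview): a supervisory watchdog monitors a running AI process against a risk threshold and halts it when the threshold is crossed. The names (AIWorker, risk_score) and the 0.9 threshold are hypothetical assumptions.

```python
import random
import threading
import time

RISK_THRESHOLD = 0.9  # hypothetical limit above which the system is halted


class AIWorker(threading.Thread):
    """Toy stand-in for an autonomous AI process (illustrative only)."""

    def __init__(self) -> None:
        super().__init__(daemon=True)
        self.stop_event = threading.Event()  # the "kill switch"

    def risk_score(self) -> float:
        # Placeholder for a real, externally audited risk/behaviour metric.
        return random.random()

    def run(self) -> None:
        # Keep "working" until the kill switch is tripped.
        while not self.stop_event.is_set():
            time.sleep(0.1)
        print("AI worker halted by kill switch.")


def watchdog(worker: AIWorker, checks: int = 50) -> None:
    """Supervisory loop: trips the kill switch if risk exceeds the threshold."""
    for _ in range(checks):
        if worker.risk_score() > RISK_THRESHOLD:
            worker.stop_event.set()  # human-defined hard stop
            return
        time.sleep(0.1)


if __name__ == "__main__":
    worker = AIWorker()
    worker.start()
    watchdog(worker)
    worker.stop_event.set()  # ensure shutdown even if no risk was flagged
    worker.join(timeout=1)
```

The key design point is that the stop condition lives outside the monitored process, so the worker cannot "decide" to ignore it.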

The power, development and expansion of AI concern me to some extent. Involving AI directly in state affairs might pose a risk to a nation's security: there is a high risk of AI manipulation by enemy nations, or the AI might wrongly leak citizens' personal or sensitive personal data. So keeping it under checks and balances is a requirement, especially when it is involved in state affairs.


 

In the times of Coronavirus, many people are wondering about disappearing workplaces, while others celebrate smart working and the technologies that make it possible. In the businesses where smart working is not possible, the use of robotics, automation and AI is widespread. The debate on the disappearance of jobs due to technique (the ability to perform) and technology (the use of technique and the knowledge to perform) is not new, but today it has become more urgent than ever. Both manual and cognitive work are being substituted by AI. The automation currently under way is demolishing entire supply chains, as well as economic and organizational models. What do you think about this? Considering the role AI is playing today, do you think it will open up new job opportunities, or will it be the protagonist of the most significant destruction of jobs in history, as many fear? For some, the labor world will be crowded with new workers: technicians who shape the new machines (software and hardware), make them work properly and see to their maintenance; technicians who train other technicians; and other types of work associated with the proper functioning of technological machines. Do you think this is likely to happen? And even if it were, it would not be so for many people: elite groups would be created, but many would lose their job, the one thing people need to be themselves. Would you like to share any thoughts or worries on this point?

AI will impact jobs in the semi-skilled and unskilled areas. It will come at a cost, affecting a section of society, but it will also open up new avenues, and demand for new sets of skills will rise.

I think many people will remember, or at least know, that before the invention of ATMs, bank tellers were employed to perform that task. But when ATMs came into the picture, they proved to be much more effective and so easily replaced the manual labor.

So my point is that it all depends on how effective the machines are in comparison with human labor. Even if a few job opportunities become obsolete, it will open new avenues for the current and coming generations.

 

AI is a political issue as well. It always has been, but today it is even more so because of its use for surveillance and control. Even if little has been said about this, we can all see clearly (simply looking is not enough) what is happening in China: not so much the implementation of the facial recognition system as the strategy of using AI for future world control and dominion. Another very important aspect, perhaps enabled by the pervasive control that total data collection makes possible, is people's complicity, their participation in the national strategic project by giving up their own freedom (privacy). Could this be a signal of what might happen tomorrow to the rest of us, in terms of less freedom and the cancellation of the democratic systems typical of western civilization? Or is this an overwrought reaction, unjustified because AI can be developed and governed with different aims and purposes?

As I discussed earlier, involving AI in politics can pose risks to national security and to the privacy rights of citizens if it is not kept in check.

The European AI Bill has addressed the issue of AI posing a threat to citizens' personal data, and the GDPR might cover the same ground in the near future too. The AI Act and the GDPR might work hand in hand when dealing with privacy issues arising from AI.

Common people do not have much knowledge about the workings of AI or of any complex technology, so they often give out their personal data without understanding the repercussions. This can only be tackled if basic technology-related awareness is spread among people.

We are inside the digital era and we live in it like happy sleepwalkers, in possession of tools humankind never had the chance to use before. We live in parallel realities, all perceived as real, and we accept technological mediation in every single activity: cognitive, interpersonal, emotional, social, economic and political. The widespread acceptance of this mediation results in an increasing difficulty for humans to comprehend both reality and the world (machines can do it) and in increasing uncertainty. What role, other than replacing man at the centre, do you think machines, i.e. AI, could play in satisfying people's need for community and real relationships and in overcoming uncertainty?

As technological advancement takes place, AI is being programmed to become a companion for humans.

A few years back a Hollywood sci-fi film came out, titled "Her", which clearly portrayed how AIs can become companions to humans. AIs are becoming more adaptive to their environment, and I think that in the near future, when we achieve artificial superintelligence, we will witness growing personal relationships between man and machine.

As technology advances, humans are finding it harder to bond with others, since in today's world bonds are created on the basis of material possessions or a person's outward appearance. So AI can step in and become a companion for those who find it difficult to bond with people.

And talking about overcoming uncertainty, AI is already doing that. As I said earlier, AI is self-programming and becoming more adaptive to the environment; hence it assesses the environment and helps in overcoming uncertainty. One such example is robo-advisers, a class of financial advisers that provide financial advice and investment management online with moderate to minimal human intervention. Basically, they assess the financial environment and give humans the best investment solution with little or no human intervention in the middle.
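As a purely illustrative sketch of the kind of rule-based logic a very simple robo-adviser might apply (not a description of any real product or of the interviewee's work), here is a toy allocation function; the questionnaire scoring, the "110 minus age" heuristic and the asset classes are all hypothetical assumptions.

```python
from dataclasses import dataclass


@dataclass
class InvestorProfile:
    """Minimal investor profile (hypothetical fields)."""
    age: int
    risk_tolerance: int  # questionnaire score from 1 (low) to 10 (high)


def suggest_allocation(profile: InvestorProfile) -> dict:
    """Toy rule-based split between stocks, bonds and cash.

    Uses a classic "110 minus age" heuristic for equity exposure,
    nudged up or down by the investor's risk tolerance. Illustrative only.
    """
    equity = (110 - profile.age) + (profile.risk_tolerance - 5) * 4
    equity = max(0, min(100, equity))      # clamp to a valid percentage
    bonds = max(0, 90 - equity)            # fill most of the remainder with bonds
    cash = 100 - equity - bonds            # keep the rest as a cash buffer
    return {"stocks_%": equity, "bonds_%": bonds, "cash_%": cash}


if __name__ == "__main__":
    print(suggest_allocation(InvestorProfile(age=35, risk_tolerance=7)))
    print(suggest_allocation(InvestorProfile(age=60, risk_tolerance=3)))
```

Real robo-advisers layer market data, rebalancing and regulatory constraints on top of rules like these, but the basic pattern is the same: profile in, allocation out, with little human intervention.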

 

Would you like to say something to the readers of SoloTablet, for instance something to read on artificial intelligence? Would you like to suggest related topics to be explored in the future? What would you suggest to help share and publicize this initiative, in which you yourself have been involved?

AI is a rapidly evolving field, so one should keep oneself updated by continuously reading and consuming various materials online. Readers can go through books like "Introduction to Artificial Intelligence" by Philip C. Jackson and "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig. They can read the MIT Technology Review and follow YouTube channels like "ColdFusion" or "Vsauce" to get a thorough understanding of AI.

Moreover, I would like to suggest related topics to be explored in the future. Being part of the privacy fraternity, I would suggest that more contributions and publicity be devoted to the area of data privacy and AI, because it has become a crucial topic to explore and research, so that we can find ways to enjoy the benefits of AI without risking our personal data.


