Homo sapiens e Homo digitalis

Machines are unable to learn based on real-life experience (Muneeb Sikander)


04 October 2021 | The sapiens

“The advent of machines has obliged us towards a new education, or rather towards a self-education; it compels us to react, and thus to discover our potential, our strength, our courage, our wisdom. We should learn to choose. We should discover our sense of proportion, develop the ability to refuse when necessary, and the ability to recognize when to set a limit to the invasion of machines into our life, into our own body.” – Francesco Varanini

As Yuval Noah Harari puts it, “when neural networks will be replaced by intelligent software (…) human history will come to an end and a completely new process will begin.” This prediction might prove wrong, but if we reflect on the profoundness of the current change and on the role technology is playing in determining it, we can easily see that we are going through a phase of paradigmatic change. When this novelty emerges, we might not even be human anymore: some sort of cyborg, symbiont, a more or less hybridized artificial intelligence, powerful, intelligent and able to learn (apprehend), but no longer human.

If this prospect is plausible, an extensive, precise and incisive reflection on what is happening will be needed more than ever. Within this reflection, the topic of artificial intelligence is more relevant than any other, for it clearly suggests the risk and the challenge that all humankind is faced with. This risk has been underestimated, and this challenge has probably been embraced superficially by many. The topic arouses general interest, so it is worth dealing with it in depth. Furthermore, this reflection must be carried out by AI experts and advocates while constantly bearing in mind that they belong to humankind.

SoloTablet has decided to do so by involving people who have invested in and are working on AI, philosophizing and imagining possible future scenarios.

In this article we propose Carlo Mazzucchelli's interview with Muneeb Sikander, Economist & Strategic Planning Expert | LSE Startup Mentor | AI Innovation.


 

Dear Mr. Muneeb Sikander, thanks for accepting my interview proposal. To start, could you please tell us something about yourself, your company, your current business and your interest in AI? Do you find reflection on the technological and information era we are experiencing useful? What projects and business ideas are you working on? What goals and objectives do you have? To whom are they addressed and in which fields?

Thank you for inviting me to share my thoughts in this interview Carlo.

After finishing my higher education in the United Kingdom at the LSE back in 2015, I began my career as an economist and a strategy/innovation consultant. However, it was in 2020 that I first began to develop a strong interest in Artificial Intelligence (AI), after beginning to experiment with the practical applications of AI and Machine Learning (ML) in the sphere of economics and financial markets.

Over time, as I developed a more avid interest in the subject, I began to take a more proactive role in trying to shape the direction of global AI policymaking as well as the practical application of ML and AI. This soon led me to make the acquaintance of Dr. Emmanuel Goffi (We will not be able to set efficient rules framing development and use of AI), with whom I began to interact, and it resulted in my becoming one of the founding members of the Global AI Ethics Institute (GAEI), where I currently serve as an Executive Board Member.

Recently, I have also joined the International Group of Artificial Intelligence (IGOAI) as an advisory board member and their country adviser for Pakistan. 

 

Nowadays, everybody speaks about AI, but people probably do it without a clear understanding of what it is, of its implementations, implications and effects. Information, comprehension and knowledge are not supported by the media either. There is a general inability to distinguish among the various kinds of AI machines, such as purely reactive machines (Arend Hintze) like Deep Blue or AlphaGo, specialized AI (like the systems programmed to perform specific tasks in automobiles), general AI (AGI or Strong AI) with the capacity to simulate the human mind and elaborate representations of both people and the world, and superior AI (Superintelligence) with a self-awareness capacity to the point of determining the technological singularity. How would you define AI, and what do you think its current and future state of evolution is? Do you think that Superintelligence will ever be able to lead us towards the Singularity, as Kurzweil defines it?

I tend to look at this question through the lens of economics, whereby AI can be viewed as a General Purpose Technology (GPT). The way I personally distinguish weak and strong AI is the method outlined by Wisskirchen et al. With weak AI, “the computer is merely an instrument for investigating cognitive processes; the computer simulates intelligence.” Strong AI, on the other hand, entails “the processes where computers are intellectual, self-learning processes.”

Computers can understand through the right software/programming and can optimize their behaviour based on their former behaviour and their experience. Strong AI includes automatic networking with other machines, which leads to a dramatic scaling effect. The most notable economic disciplines of AI are deep learning, robotization, dematerialization, the gig economy and self-driving cars, among others.

With respect to the evolution of AI, I believe the question of whether machines can think is no more interesting than the question of whether a submarine can swim. It is humans who decide where AI should be applied. As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership. Finally, I am not as keen on the idea of singularity as Kurzweil defines it. 

For me, a significant factor missing from any form of artificial intelligence is the inability of machines to learn based on real-life experience. Diversity of life experience is the single most powerful characteristic of being human, and it enhances how we think, how we learn, our ideas and our ability to innovate. Machines exist in a homogeneous ecosystem, which is fine for solving known challenges; however, even Artificial General Intelligence will never challenge humanity in its ability to acquire the knowledge, creativity and foresight needed to meet the challenges of the unknown.

 

 

Don’t you think that life would make less sense to human beings if intelligent machines took command away from them? We should be worried about the supremacy and power of machines, but above all about the irrelevance of humankind that could be its aftermath. Or is this simply our fear of the future and of the hybridization characterizing it? According to the philosopher Benasayag, machines are meant to function correctly, while humans are meant both to function (chemical processes) and to exist (live). Human beings cannot simply undergo the (Big) Data collection process or obey binary calculation rules; human beings need complexity, a body, sense, culture, experiences, and finally we need to experience negativity and lack of knowledge. This is not true for machines and it never will be, or do you think it will? Don’t you think that relying completely on machines will lead humankind towards a state of impotence?

I believe that machines and humans both have a role to play, and I do not necessarily believe the two to be at odds, as each has a comparative advantage. The economist Paul Samuelson once put the idea quite well, although he was referring not to technology but to international trade: “Thousands of important and intelligent men have never been able to grasp the concept of comparative advantage or believe in it, even after it was explained to them.” Let me share an example of how this would apply in corporate strategic decision making.

Legacy vs. Zero Assumption

Given that strategy is human-driven, analytics are typically brought in only after a strategy is already being executed. In doing so, the option to refine the strategy using a data-driven approach is only available at the next implementation. By following this legacy approach, firms have difficulty creating value.

Data science is applied inherently to already known information or processes, which creates an “optimization trap.” This approach does not attempt to create value in any novel way from multiple signals before setting the strategy, producing uncompetitive products at an inflated cost, both monetary and temporal.

A domain analysis, on the other hand, starts with zero assumptions about what the core focus or priority topics should be. Instead of using the legacy approach, one ought to use machines to find signals within open-source and alternative data to anchor the strategic priorities within the domain in question.

Taking this approach leads to less bias, faster intelligence, and more precision than experience alone ever could. Combined with human expertise, the technique produces unrivalled outputs, as humans alone lack the requisite processing power to consider all the relevant variables, which machines can easily compute.
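To make the contrast concrete, here is a minimal, purely illustrative Python sketch of what a zero-assumption scan of open-source text signals could look like; the sample documents, the TF-IDF scoring and the surfaced terms are hypothetical stand-ins introduced for illustration, not Sikander's actual tooling or data.

    # Illustrative only: a "zero-assumption" scan of open-source text signals.
    # No priority topics are predefined; candidate themes are surfaced purely
    # from the data, and a human expert then interprets and ranks them.
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical stand-ins for open-source / alternative data
    # (news snippets, filings, forum posts, patents, etc.).
    documents = [
        "Chip shortages are delaying electric vehicle production schedules.",
        "Regulators propose new disclosure rules for algorithmic credit scoring.",
        "Battery recycling startups attract record venture funding this quarter.",
        "Survey shows consumers distrust automated loan approval decisions.",
    ]

    # Score terms by TF-IDF so the strongest signals rise to the top,
    # without any prior assumption about which themes matter.
    vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    tfidf = vectorizer.fit_transform(documents)
    scores = tfidf.sum(axis=0).A1          # aggregate signal strength per term
    terms = vectorizer.get_feature_names_out()

    # Top candidate themes become inputs for human judgment, not the strategy itself.
    for term, score in sorted(zip(terms, scores), key=lambda t: -t[1])[:10]:
        print(f"{term:30s} {score:.2f}")

In practice the corpus would be far larger and the signal extraction more sophisticated, but the design choice is the same: let the data propose the candidate priorities and keep the human expert in the loop to judge them.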

 

We are inside the digital era and we live in it like happy sleepwalkers, in possession of tools humankind never had the chance to use before. We live in parallel realities, all perceived as real, and we accept technological mediation in every single activity, whether cognitive, interpersonal, emotional, social, economic or political. The widespread acceptance of this mediation results in an increasing difficulty in the human ability to comprehend both reality and the world (machines can do it) and in increasing uncertainty. How do you think machines, i.e. AI, could play a different role than replacing man at the centre, in satisfying the human need for community, real relationships and overcoming uncertainty?

In an interconnected world, problems, opportunities, and trends are latent. Large firms exist within the confines of a system that is influenced by established competition or disruption — both known and unknown. There are too many signals, noise, and information for anyone to process, let alone understand. 

This intelligence gap has led to disastrous consequences for our political systems in recent years, but it's just as detrimental to corporate decision-making in VUCA business environments, creating angst, confusion, and retreat to the known.  

The only solution to these issues is to create a robust machine intelligence ecosystem that can be applied to multiple problem sets and anchor the core of an enterprise strategy. It is possible to do in hours what once took months, at a higher level of precision and at a fraction of the cost, if not for free (think Google Trends; a minimal sketch follows the list below). The catch?

  • Companies need to promote unorthodox insights that are divergent from legacy belief systems.
  • Business leaders must be comfortable with ambiguity, iterating forward with lightly defined outcomes while being guided by insights found along the way. No one can predict the future.
  • They must understand that there are drastic consequences for indecision, lack of creativity, and lack of technical excellence.
  • Business leaders who retreat to familiar legacy routines in complex and volatile scenarios will find their programs ineffective.
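As a hedged illustration of the Google Trends point made before the list, the sketch below uses the unofficial pytrends wrapper (a third-party Python library; its availability, the example keywords and live network access are all assumptions) to pull free interest-over-time signals for a few candidate topics and rank them as a cheap first pass before any strategy is set.

    # Illustrative sketch: free first-pass signals from Google Trends via the
    # unofficial pytrends library (assumed installed; requires network access).
    from pytrends.request import TrendReq

    # Hypothetical candidate topics to scan before anchoring a strategy.
    keywords = ["battery recycling", "algorithmic credit scoring", "gig economy"]

    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(kw_list=keywords, timeframe="today 12-m")

    # interest_over_time() returns a DataFrame indexed by date, with one
    # column per keyword plus an 'isPartial' flag for the current period.
    trends = pytrends.interest_over_time()

    # Rank topics by average search interest over the past year: a cheap,
    # fast signal handed to human experts, not a finished strategy.
    print(trends[keywords].mean().sort_values(ascending=False))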

Avoiding irrelevance requires a convergence of in-depth technical understanding of machine learning, systems thinking, and intellectual value creation, embedded as the anchor of strategies and operations, a rare thing among the managerial elite in legacy organizations. In most corporations, especially old ones, the characteristics that would make an AI program truly great are disincentivized or systematically rooted out.

Thinking with machines fills those gaps and helps managers become fluent and pragmatic at embedding AI at the core of all decisions and strategy. The future belongs to the principled, creative, and curious. Machines will amplify, not replace, those who embrace these practices.

 

 

The beginnings of AI date back to the Fifties, but only recently has there been a concerned reaction about the role it might play in the future of humankind. In 2015, several scientists signed an appeal demanding regulation of AI; for some, it was just a hypocritical way to clear their conscience. What do you think about it? Are you in favor of free research and implementation of AI, or do you think regulation is better?

I most certainly believe there is a need for nations to have a sense of direction towards the development of AI. This is because any technology such as AI shall have substantial impacts on the economy with respect to productivity, growth, inequality, market power, innovation, and employment. Hence, for me, the idea of not having a national AI policy, or any regulation at all to govern the development of AI or its use, is not very intelligent behavior.

As Fyodor Dostoyevsky once wrote in Crime and Punishment, “It takes something more than intelligence to act intelligently.” Countries around the world in particular need to wake up, and their governments must immediately begin to design and formulate regulatory structures, or “rules of the game,” to ensure the responsible use of Artificial Intelligence. Machines are non-living things, but they can still absorb human attitudes.

For instance, algorithms now decide your credit score, which patients receive medical care, and which children enter foster care. For low-income individuals, this hidden web of automated systems and algorithmic bias can trap them in poverty. The responsibility to draft such regulation is far more pressing for developing countries in the backdrop of the Covid-19 crisis, which has already resulted in a record number of their populations being pushed into poverty.

 

In his latest book Le cinque leggi bronzee dell’era digitale (The five bronze laws of the digital era), Francesco Varanini develops a critical sense of the history of AI, re-reading it through recently published books written by authors such as Vinge, Tegmark, Kurzweil, Bostrom, Haraway, Yudkowsky et al. His criticism is addressed to those techno-enthusiasts who celebrate the solar advent of AI, showing the highest interest of a technician towards the Machine, but to the detriment of the human being. Apparently, through AI they want to pay tribute to the human being, but they are sterilizing and reducing its power, as well as making man play a submissive and servile role. Which side do you take in this matter? Are you with the technician/expert/technocrat or with the human being, or do you choose the middle ground? Does the power, development and expansion of AI concern you? (e.g., in China the digital system is used to control and keep society under surveillance.)

I tend to be highly critical of views held by techno-enthusiasts, as I firmly believe in the idea of existence before essence. I would cite two quotes to justify my reasoning for holding such an opinion on the matter.

“Technological progress is like an axe in the hands of a pathological criminal. The human spirit must prevail over technology.” – Albert Einstein

“We feel that even if all possible scientific questions be answered, the problems of life have still not been touched at all. Of course there is then no question left, and just this is the answer. The solution of the problem of life is seen in the vanishing of this problem.” – Ludwig Wittgenstein

 

In the times of the Coronavirus, many people are wondering about disappearing workplaces, while others are celebrating smart working and the technologies that make it possible. In those businesses where smart working is not possible, the use of robotics, automation and AI is widespread. The debate on the disappearance of jobs/workplaces due to technique (the ability to perform) and technology (the use of technique and the knowledge to perform) is not new, but today it has become more urgent than ever. Both manual and cognitive work are being substituted by AI. The current ongoing automations are demolishing entire supply chains, as well as economic and organizational models. What do you think about this? Considering the role AI is playing today, do you think that it will open up new job opportunities, or do you think it will be the protagonist of the most significant destruction of jobs in history, as many fear? For some, the labor world will be crowded with new workers/employees, such as technicians who shape new machines (software and hardware), make them work properly and see to their maintenance, and again technicians who train other technicians, and other types of work associated with the proper functioning of technological machines. Do you think this is likely to happen? And even if it were, it would not be so for many people. Elite groups would be created, but many people would lose their job, the only thing men need to be themselves. Would you like to share with us any thoughts or worries on this point?

This is a question which deeply concerns me at a personal and a professional level as an economist. The issue of employment is one where we can learn from previous technological revolutions. For example, the former Bank of England chief economist Andy Haldane rightly pointed out that the original ‘Luddites’ in the 19th century had a justified grievance. They suffered severe job losses, and it took the span of a generation for enough jobs to be created to overtake the ones lost. It is a reminder that the introduction of new technologies affects people asymmetrically, with some suffering while others benefit.

To realise the opportunities of the future we need to acknowledge this and prepare sufficient safety nets, such as well-funded adult education initiatives.

For education policy, Trajtenberg (2018) highlights three types of skills that are likely to be needed as AI diffuses: analytical and creative thinking, interpersonal communication, and emotional control. Agrawal, Gans, and Goldfarb (2018) emphasize the role of human judgment, defined as the ability to identify what to do with a prediction (e.g., determine the payoff function in the context of decision making). Judgment is the skill of knowing the objectives of an organization and translating that into data that can be collected. Similarly, Francois (2018) emphasizes the skill of telling the machines what to optimize.

The issue is also discussed in detail by Dr. Carl Benedikt Frey, Director of the Future of Work Programme at the University of Oxford, in his book The Technology Trap. Frey’s point about the importance of education should send alarm bells ringing around the world. In the so-called race between technology and education, current budget cuts amid a general slowdown in human capital accumulation will present workers (both present and future) with even greater troubles ahead. His book is an excellent and unique reminder that the future of work will depend on policy choices made in the present.

AI is a political issue as well. It always has been, but today this is more evident due to its use for surveillance and control. Even if little has been said about this, we can all see clearly (simply looking at it is not enough) what is happening in China. Not so much for the implementation of the facial recognition system as for the strategy of using AI for future world control/dominion. Another very important aspect, perhaps enabled by the pervasive control made possible by data, is people's complicity: their participation in the national strategic project while giving up their own freedom (privacy). Could this be a signal of what might happen tomorrow to the rest of us, in terms of less freedom and the cancellation of the democratic systems typical of Western civilization? Or could this be an exaggerated reaction, given that AI can somehow be developed and governed with different aims and purposes?

The parallel movement toward universal surveillance in Eastern and Western societies reveals a remarkable convergence in thinking. The criticism of each other by either the East or the West is meaningless on the issue of digital surveillance, as both happen to be two sides of the same coin. Mass surveillance has been explained in economic terms as a requirement of contemporary capitalism; credit scoring is necessary if loans are to be allocated efficiently, for example. I tend to be a realist in this situation and see little or only a constrained role for human agency to make any impact on this matter. As one of my favorite economists once wrote:

"However, whether favorable or unfavorable, value judgments about capitalist performance are of little interest. For mankind is not free to choose. This is not only because the mass of people are not in a position to compare alternatives rationally and always accept what they are being told. There is a much deeper reason for it. Things economic and social move by their own momentum and the ensuing situations compel individuals and groups to behave in certain ways whatever they may wish to do—not indeed by destroying their freedom of choice but by shaping the choosing mentalities and by narrowing the list of possibilities from which to choose."― Joseph Alois Schumpeter, Capitalism, Socialism, and Democracy 

 

Would you like to say something to the readers of SoloTablet, something to read on Artificial Intelligence for instance? Would you like to suggest related topics to be explored in the future? What would you suggest to help share and publicize this initiative in which you yourself have been involved?

I enjoyed the format and the wide variety of questions that you have integrated as part of the interview, which gives the reader a broad perspective on the topic of AI. I would like to end by once again expressing my gratitude for inviting me to share my thoughts with you and the readership of SoloTablet.

 

