This article was co-authored by Alexandre Martin, Lee Schlenker and Emma Pitzalis
The activities of this department are a cornerstone of any administration or company. It is responsible for defining the duties and profile of a position, and for selecting the ideal candidate from a shortlist. But that is not all: this is where the term “human resources” comes into its own, as distinct from mere “personnel administration”. Beyond these initial tasks, the Human Resources (HR) department must manage many other aspects of staff life.
In an employee’s career cycle, the Human Resources department has several important tasks. They fall into four main categories: personnel administration; resource management (recruitment, training, staff remuneration); management of communication tools; and improvement of working conditions. All of these activities are time-consuming and labor-intensive, yet they remain essential to administrations and companies alike.
This evolution in the role of human resources has been accompanied by rapid growth in the capabilities and functionality of artificial intelligence (AI). In recent years, AI has progressed far beyond the chess-playing programs of the 2000s: today it can detect tumours and create paintings or videos.
AI can also converse with humans through conversational agents (or chatbots). One example is Owlie [1], a chatbot capable of providing psychological support to employees.
As another example, the company OpenAI [2] has been allowing users to test its conversational agent ChatGPT since the end of November 2022. ChatGPT can hold a dialogue with humans, but it can also generate code for computer programs [3] and write letters or text messages [4].
The use of these tools could reduce companies’ operating costs (fewer administrative assistants, smaller IT teams…). The technology’s other strong point would be its potential to alleviate the current talent shortage.
In HR, AI can be used to automate time-consuming, repetitive tasks that do not add value for HR professionals, such as writing job descriptions, processing applications, conducting initial interviews and developing personalized career paths for employees.
In addition to these recruitment and training management tasks, the use of conversational agents will make it possible to answer employees’ questions about human resources issues.
The use of algorithms in human resources opens up new opportunities to relieve this department’s workload, and also to improve employees’ quality of life by increasing the department’s availability.
According to a study by researchers [5], automation can multiply efficiency by a factor of three and reduce costs by around 70%. Beyond cost reduction, it allows HR teams to focus on what matters most: the human relationship between HR professionals and employees.
In this light, AI could allow very small enterprises (VSEs) and SMEs to bring back in-house HR activities (such as recruitment management) that have been outsourced to external providers.
If the use of AI brings its share of advantages, it also brings its share of disadvantages. Biases can be built in during the design phase, whether intentionally or not, as can biased selection criteria. A well-known example is Amazon’s experimental recruiting tool [6], which learned to favor male candidates over equally qualified female candidates.
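One simple, widely used way to surface this kind of bias is the “four-fifths rule”, a disparate-impact heuristic that compares selection rates between groups. The sketch below is illustrative only; the screening data is entirely hypothetical:

```python
# Minimal sketch: checking screening outcomes against the "four-fifths
# rule" (lowest group selection rate should be >= 80% of the highest).
# The candidate data here is invented for illustration.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes):
    """True if the lowest selection rate is at least 80% of the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical screening results: 40/50 men selected vs 15/50 women.
screening = [("M", True)] * 40 + [("M", False)] * 10 + \
            [("F", True)] * 15 + [("F", False)] * 35

print(selection_rates(screening))    # {'M': 0.8, 'F': 0.3}
print(passes_four_fifths(screening)) # False: clear disparate impact
```

A check like this does not explain *why* a model is biased, but it makes the disparity visible before the tool is deployed.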
Another drawback is the quality of the data. Are they recent? Are they representative of needs and reality?
The implementation of AI in HR departments is supposed to restore a sense of human relations within companies. On the contrary, doesn’t this automation risk having the opposite effect by dehumanizing employees?
While the use of AI in human resources brings benefits in terms of productivity, workforce streamlining and lower administrative costs, it also has significant drawbacks.
Consider companies such as Amazon and Uber, which use AI to manage staff schedules [7], but which also rely on decision-making algorithms [8] to carry out controversial layoffs.
Another point is the use of AI as an analytical tool: some programs analyze candidates’ body movements and correlate them with their answers to build a psychological portrait. This technological advance lets companies intrude on candidates’ private lives.
Finally, as AI takes on decision-making and management functions, what place will humans keep in the organization of the company? A Chinese company has already replaced its CEO with an AI [9].
A related question follows: how can we imagine the place of the human being in a company where automation keeps expanding?
All these questions can be summed up in a few words:
- Can we imagine a total replacement of HR functions?
- Is the use of AI in HR a source of profit or a source of problems for companies?
However, some of these questions are more specific to the human relationship in this new working environment.
- What is the place of the human being in an automated company?
- Will AI be able to replace the notion of human free will?
- Or would AI be able to enhance the potential of human thinking?
- With the advent of algorithms in management, will humans be in servitude to AI, or will AI become a new collaborator?
To try to answer these questions, we will present concrete cases from companies located on the North American continent and on the European continent.
——————
Information technologies are never applied objectively, because their use influences the way users see their company and their market. Since the various applications of artificial intelligence are no exception to this rule, it is important to understand how AI conditions our vision of management and to adopt or modify this transformation.
Let’s take a closer look at the issues of personal data protection, implicit bias, agentivity and managerial responsibility in relation to human resource management.
The protection of personal data, or even privacy, is at the center of several legislative initiatives, such as the General Data Protection Regulation [10] (GDPR) and the Artificial Intelligence Act [11].
Privacy refers to an individual’s ability to determine when, how and for what purpose their personal data is used. Jurisprudence is based on the principle that each individual is the sole owner of his or her personal data and holds the rights of access, rectification, erasure and portability. The issue of data ownership is more controversial in companies, however, where the boundary between private and organizational property has been the subject of its own case law.
The example of absence management illustrates this issue in companies. Absenteeism management aims to reduce absenteeism, which disrupts the company’s activities, by acting on the organization of work and the quality of the employment relationship.
The causes of absenteeism include the vagaries of private life, dissatisfaction at work, the balance between work and private life, and mental health problems in the company.
Current approaches attempt to predict absenteeism by processing data on the location of work, gender, age, occupation and history of hours worked.
These predictions can be greatly improved by using employee behavioral data from social media and external company databases. But this improvement comes at a price: the line between employees’ personal and professional lives is blurred or even erased.
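As an illustration of how such a prediction might look in its crudest form, here is a naive risk score over a few tabular features. The feature names and weights are invented for this sketch and do not come from any real HR system:

```python
# Illustrative sketch only: a naive absenteeism risk score of the kind
# such predictive tools might compute. All weights are hypothetical.

WEIGHTS = {
    "commute_minutes": 0.004,    # longer commutes -> slightly higher risk
    "overtime_hours": 0.01,      # sustained overtime -> higher risk
    "absences_last_year": 0.05,  # past absences as the strongest signal
}

def absenteeism_risk(employee):
    """Return a risk score in [0, 1] from a dict of numeric features."""
    raw = sum(WEIGHTS[k] * employee.get(k, 0) for k in WEIGHTS)
    return min(1.0, raw)

e = {"commute_minutes": 60, "overtime_hours": 20, "absences_last_year": 4}
print(round(absenteeism_risk(e), 2))  # 0.64
```

Even this toy version shows the privacy problem: every feature added (social media activity, location history…) raises the score’s accuracy at the cost of deeper intrusion into the employee’s life.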
Implicit bias is another level of concern in the use of AI. Bias can be defined as a quality of an object, idea or event that cannot be deduced directly from the data itself.
Behind visions of shareholder value, customer value or customer lifetime value are assumptions about how the market works, how we view data and what “better” means. While bias is neither good nor bad in and of itself, it can distort our ability to make “objective” decisions and lead to incorrect conclusions. The advertised objectivity of algorithms masks the unconscious biases of their programmers.
Let’s take the case of personality tests used in executive recruitment. In France, according to APEC, one out of two executives has taken at least one test when they were hired, and in 90% of cases it is their personality that has been tested to predict their performance.
The tool asks relevant questions and compares the answers with a large amount of historical data to create typical profiles.
Using artificial intelligence, it is possible to correlate the candidate’s answers with their body movements during the interview. Machine learning offers the opportunity to learn more about a candidate’s personality based on their regular posts on social networks. The danger lies in their interpretation by the company, as these tests were originally intended only for the occupational psychology community. Moreover, the relationship between personality profiles and business performance is highly questionable, as personality is a poor predictor of success and performance.
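The profile-matching step itself can be sketched very simply: represent a candidate’s answers as a vector and return the nearest “typical profile”. The profiles and answer scales below are invented for illustration:

```python
# Hedged sketch of profile matching: nearest typical profile by Euclidean
# distance. Profile names, dimensions and values are invented examples.
import math

PROFILES = {
    "analytical": [5, 2, 1],
    "social":     [1, 5, 4],
    "creative":   [2, 3, 5],
}

def closest_profile(answers):
    """Return the profile whose vector is nearest the candidate's answers."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROFILES, key=lambda p: dist(PROFILES[p], answers))

print(closest_profile([4, 2, 2]))  # analytical
```

The simplicity is the point: once a candidate is reduced to a vector, the “portrait” is just the label of the nearest centroid, whatever nuance the real person carries.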
Human agency, or agentivity, refers to an individual’s ability to act in relation to his or her environment. It is a uniquely human characteristic—the ability to exercise control over one’s own processes of thought, commitment and action. Freedom of action in turn depends on the ability to distinguish the best from the worst, to develop an objective view of the world around us.
However, the desire to automate work is at odds with freedom of action: the price of the low friction of digital life is the loss of context and control over these processes. The blind dependency of employees and management on digital tools will only get worse as automated algorithms become more complex.
The business of lead scoring can serve as an example. Lead scoring is a practice that provides insight into lead management based on demographic and behavioral data. Current computer applications that incorporate artificial intelligence match or exceed the efficiency and processing capabilities of humans for this specific task. These algorithms make these decisions for us in a way that maximizes the efficiency (in the narrow sense) of the process and maximizes the profitability of the tool rather than the employee. There is a real danger: the use of these tools can lead to the devaluation of the employee’s own knowledge and contribute to the problems of deskilling and career development.
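In its simplest form, lead scoring is just a weighted checklist over demographic and behavioral signals with a qualification threshold. The rules and point values below are hypothetical:

```python
# Minimal lead-scoring sketch: points from demographic and behavioral
# signals, with a threshold deciding which leads go to sales.
# All rules and point values are invented for illustration.

RULES = [
    ("job_title_is_decision_maker", 30),
    ("company_size_over_200",       20),
    ("visited_pricing_page",        25),
    ("opened_last_newsletter",      10),
]

def score_lead(lead):
    """Sum the points of every rule the lead satisfies."""
    return sum(points for key, points in RULES if lead.get(key))

def is_qualified(lead, threshold=50):
    return score_lead(lead) >= threshold

lead = {"job_title_is_decision_maker": True, "visited_pricing_page": True}
print(score_lead(lead))    # 55
print(is_qualified(lead))  # True
```

The deskilling risk is visible even here: once the rule table exists, the salesperson’s own judgment about which leads matter is no longer consulted.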
Finally, the use of artificial intelligence tools raises the question of managerial responsibility: to what extent is a manager held responsible for decisions made in an “automated” way by the tools entrusted to him? Automated decision-making exploits algorithms based on derived profiles, as well as assumptions about our way of thinking. The interpretability of these “black boxes” is a real challenge—because humans, even those who design them, cannot understand how variables are combined to make predictions.
As recent examples of allegedly “unfair” dismissals at Amazon, IBM and Uber have shown, responsibility for automated decision-making is a major accountability issue. IBM has developed an AI-based HR program to predict employee turnover, which has led to a one-third reduction in the size of its HR department since its introduction. Amazon’s performance management system, which uses AI to automatically track employee productivity rates, reportedly led the company to lay off hundreds of employees in the US in less than a year. Uber was taken to court in the Netherlands last year for firing employees “solely on the basis of automated monitoring processing.”
In summary, artificial intelligence can produce remarkable results in observable decision environments involving a limited number of actors, where actions are discrete and outcomes are specified by known or predictable rules. The systems that underpin today’s global financial markets, corporations, militaries, police forces and medical, energy and industrial operations all depend on some form of networked AI. However, the trade-off between business and society is more complex: value is created in dynamic multi-agent environments where the rules and variables are often unknown and/or difficult to predict.
If employees view AI investments negatively, they will not trust them and therefore not use them, even if AI vendors fix the algorithms and improve the processes. As a result, leaders will be reluctant to integrate AI-generated insights into the management of the organization, and it will take longer to see returns on their AI investments. The biggest barrier to AI success is AI adoption, and the biggest barrier to AI adoption is the nature of human intelligence.
——————
The benefits, but also the risks, of using AI must encourage us to think critically. It is not a question of rejecting the use of these new technologies, but of making conscious choices about them.
Intuitively, we feel that bringing people together to perform a task requires the finesse of judgment that the machine does not have. Yet, paradoxically, it is quite possible that AI will be more efficient than a recruiter at finding the right profile.
The problem is not so much using AI as using the CV, i.e., sorting the applications according to mathematical criteria. For example, even before the arrival of AI in the recruitment market, HR was eliminating people whose CVs did not meet the required criteria (years of experience, experience in a similar position) without taking the time to meet them or talk to them on the phone.
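That kind of mechanical CV sorting can be expressed in a few lines, which is precisely what makes it so tempting to automate. The thresholds and field names here are invented:

```python
# Sketch of a mechanical CV filter as described above: applications are
# rejected purely on numeric criteria, before anyone speaks to the
# candidate. Requirements and fields are hypothetical.

REQUIREMENTS = {"years_experience": 5, "similar_positions": 1}

def passes_filter(cv):
    """Keep a CV only if every numeric requirement is met."""
    return all(cv.get(k, 0) >= v for k, v in REQUIREMENTS.items())

applications = [
    {"name": "A", "years_experience": 7, "similar_positions": 2},
    {"name": "B", "years_experience": 4, "similar_positions": 3},
]
shortlist = [cv["name"] for cv in applications if passes_filter(cv)]
print(shortlist)  # ['A']
```

Candidate B is eliminated by a single number, with no phone call and no conversation: exactly the statistical shortcut the paragraph above describes, with or without AI.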
In fact, we have already given up trusting our intuition in favor of a statistical approach to reality. If AI is maliciously replacing humans, it is primarily because humans have maliciously substituted statistics for symbolism.
Since the advent of quantum physics, we have been trapped in a functionalist relationship to the world, believing that what works is necessarily true. We have changed the definition of what we call “true.”
In the ancient, humanistic approach, we called “true” whatever allowed us to move towards a revelation, that is, any knowledge that could help humanity tormented by its passions and the misery of earthly existence.
Today, truth is characterized by its potential for immediate effectiveness. What is true is not what we understand or what we feel. What is true is what is immediately effective.
Thus, the use of AI in HR processes seems obvious to many entrepreneurs because it saves time and money, at least in the short term.
By rushing to the tools that allow us to save resources instantly, we deprive ourselves of the magical power of the word.
Language is miraculous in the sense that it enables us:
- To make things appear that did not exist before, such as a new relationship with a candidate we take the time to call when nothing compels us to do so.
- To evoke a feeling other than the one the situation predisposed us to. For example, the respect shown to an employee whom we receive in person to discuss their redundancy.
But talking involves taking a risk that most of us would rather avoid before taking any action at all. For an HR professional, talking to a colleague carries the risk of disagreement or pain, of being confronted with a potentially painful situation.
But we must take that risk if we hope to become a better person: honest, fair and resilient.
By arguing that using the machine will give us time to be more human, we are moving closer and closer to a robotic state without even knowing it. We are so preoccupied with our goals, such as making the perfect hire and avoiding the financial loss of a failed one, that we prefer to delegate the task to a machine whose inner workings we don’t know. This risk aversion reduces our ability to think outside the box, to try new things, to be creative and to learn from our mistakes.
We are increasingly putting ourselves at the service of machines that we believe will make our lives easier. And while the functioning of algorithms is getting closer to human intelligence [12], human thinking is getting closer to that of a computer.
After all, it is by exposing ourselves to the risk inherent in human relationships that we best protect ourselves, others, and the human qualities that bind us together and define our species (especially human agency).
The wisest attitude is probably to accept the extreme tension between the chemical and spiritual poles of existence. Not to reject the use of AI in HR processes, while at the same time allowing oneself the opportunity to confront the reality of relationships.
————————————————————————————————————
[1] Falala-Séchet, C., Antoine, L. & Thiriez, I. (2020). Owlie, un chatbot de soutien psychologique : pourquoi, pour qui ? L’information psychiatrique, 96, 659-666.
https://www.cairn.info/revue-l-information-psychiatrique-2020-8-page-659.htm
[2] ChatGPT: Optimizing Language Models for Dialogue, OpenAI. https://openai.com/blog/chatgpt/
[3] Heiko Hotz, I Used ChatGPT to Create an Entire AI Application on AWS, Towards Data Science. Published December 2, 2022.
https://towardsdatascience.com/i-used-chatgpt-to-create-an-entire-ai-application-on-aws-5b90e34c3d50
[4] Pascal Wassmer, ChatGPT : la nouvelle étape de l’intelligence artificielle, Radio Télévision Suisse. Published December 11, 2022.
[5] Jia Q., Guo Y., Li R., Li Y.R., & Chen Y.W. (2018). A Conceptual Artificial Intelligence Application Framework in Human Resource Management. In Proceedings of The 18th International Conference on Electronic Business (pp. 106–114). ICEB, Guilin, China, December 2-6.
[6] Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, Reuters. Published October 11, 2018.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
[7] Jamie Medwell, When the Algorithm Is Your Boss, Tribune. Published January 30, 2022.
[8] Jamie Medwell, When the Algorithm Is Your Boss, Tribune. Published January 30, 2022.
[9] Rédaction Radio France, Chine : une femme robot pilotée par une IA devient PDG d'une entreprise de plusieurs milliers de salariés, France InfoTV. Published September 28, 2022.
[10] Regulation (Eu) 2016/679 Of The European Parliament And Of The Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&qid=1676282201303&from=FR
[11] Regulation Of The European Parliament And Of The Council – Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN
[12] Jean Rohmer, Comprendre l’intelligence artificielle symbolique, The Conversation France. Published November 20, 2018.
https://theconversation.com/comprendre-lintelligence-artificielle-symbolique-104033