Much has been said about the potential impact of artificial intelligence (AI) and automation on jobs and the future of work. A common view is that many occupations are ‘at risk’ from AI, potentially leading to large-scale job losses. Our research shows that while there are risks, there are at least as many opportunities to increase the number and quality of jobs. But AI and automation are unquestionably having a major impact on work. Managing this change needs a proper people strategy, and people professionals should play a central part in it.

This factsheet describes some of the main forms of artificial intelligence and automation that are having an impact on the world of work. It looks at the ethical implications of using these technologies in the workplace, and considers the role of people professionals in shaping a human future of work.

Artificial intelligence (AI) and automation are often labelled emerging technologies: their use has the capability to alter business and economic models. In our review The impact of artificial intelligence, robotics and automation technologies on work, we identify three forms of emerging technology:

  • Artificial intelligence has been defined as the development of computers to engage in human-like thought processes, such as learning, reasoning and self-correction. Others have defined AI as a ‘broad suite of technologies that can match or surpass human capabilities, particularly those involving cognition.’ All these definitions highlight the role of AI in modelling human behaviour and thought, but stop short of talking about using AI technologies to build other smart technologies. AI includes machine learning and cognitive computing.

  • Robots, including service robots, robot-assisted procedures, and robotic process automation (RPA). Service robots provide assistance to a human to complete a physical task.

  • Automation is the performance of tasks by machines (often computers) rather than human operators, often to increase efficiency and reduce variability.

Our People and machines survey of UK employers shows that improving quality and efficiency are the most common drivers of investment in AI and automation. There’s good evidence that such performance benefits can be realised in practice, although it’s worth noting that most of this evidence comes from the healthcare and transport sectors. Evidence from the transport sector shows, for example, that:

  • An automated decision support system for air traffic controllers, advising them on optimal solutions in real time, increased their performance and accuracy without increasing their workload.

  • A comparison of a realistic rail-signalling automation model and experienced human rail signal operators found that as automation increased, the perceived workload of human operators, both mentally and physically, decreased and the consistency of performance increased.

In some cases, conflict in human–tech interactions – when a machine and a person would make different decisions about which action to pursue – can have a negative impact on performance. One reason for this is the human instinct to resolve the conflict rather than consider alternative courses of action. The French military illustrated this in an experiment in which humans used a robot to identify a target. When the robot ran low on battery and was programmed to return to base to recharge, the human operator overruled it to focus on completing the task in hand, despite the risk. Such decisions by people to override robots may lead to serious negative outcomes, for example jeopardising human safety on aircraft flights. However, overriding technology may sometimes be exactly what is needed, as shown in numerous examples of satellite navigation guiding motorists into dangerous situations.

Workers’ attitudes and behaviours in relation to technology will be influenced by the trust they have in it. One way to build trust is to involve workers whose jobs will be affected by AI and automation early in the design and implementation of the technology. This helps address any concerns workers have and reduces the chance of glitches and unintended consequences when new technologies are introduced.

The impact of AI and automation on work is a hotly debated topic. While some predict potentially large-scale job losses, others are more modest in their estimates. However, there are two major limitations to such analyses. First, they tend to treat jobs being affected by AI and automation purely as a ‘risk’ for that occupation, not recognising that by augmenting jobs, new technology can enhance them rather than make people redundant. Second, while predictions about the future are important, they are necessarily speculative, so we should also look at how employers are already making use of AI and automation and what impacts these are actually having in practice.

Our People and machines research explores this gap by investigating how and why employers have invested in AI and automation in the last five years, and what the results have been for workers and their organisations. It provides evidence that while there are risks for some jobs, in general AI and automation are creating slightly more jobs than they are destroying. Moreover, on average, workers whose jobs are affected by AI and automation tend to see an increase in their autonomy (i.e. they are more empowered to make decisions), have more interesting and skilled work and, in line with this, be better paid.

For many employees, AI and automation can free them up from routine, administrative or low-skilled tasks to create more value for their organisations and do more rewarding jobs. An example from the healthcare sector is of an automated dispensing system in a UK hospital. Here, automation reduced the amount of time pharmacists spent on technical tasks in the dispensary and increased the time they could spend on wards with patients.

The ethical implications of using AI and automation in the workplace, and the impact they have on people, must be considered seriously. It’s crucial that HR teams and other people professionals act as ‘critical friends’ to the main stakeholders to ensure technology strategies are people-focussed.

There's ongoing debate around who is responsible for the actions of intelligent machines: is it the human co-worker, even if the machine is significantly more intelligent? The people who built the system? Or the organisation that uses it? Currently, these are grey areas, but they need to be addressed by government regulation and legislation.

There are also ongoing concerns around the use of data generated by new workplace technologies and its security. Intelligent systems gather and store immense amounts of people data. Our Workplace technology: the employee experience research shows that the workforce has real concerns about how new technologies may infringe on their privacy and rights, for example through workplace monitoring and surveillance. Almost 9 in 10 UK workers expect monitoring to increase, but three-quarters believe that monitoring damages trust between workers and their employers.

Other ethical concerns focus on:

  • Humanoid robots that imitate the behaviours and mannerisms of humans, especially in situations involving vulnerable people who may have difficulty in determining whether they are interacting with a robot or a human.

  • Robot rights – the idea that intelligent robots should have the same rights as animals.

  • AI systems becoming more intelligent than humans – potentially creating their own successor systems, which may be able to self-modify their goals, acquiring a level of autonomy. Some scientists warn of the danger of losing control over such machines (for example, highly intelligent drones or lethal autonomous weapons) in the future.

Legal and policy-making approaches have traditionally been reactive rather than proactive and holistic. Some have called for machines to be designed with a moral status. The EU has proposed legislation to allow it ‘to fully exploit the economic potential of robotics and artificial intelligence’, while simultaneously guaranteeing a ‘standard level of safety and security’. The UK needs a similar approach and should develop a framework for the safe use of these technologies.

Although the impact of AI and automation on work and jobs is more positive than many make out, there are nonetheless risks. It's also clear from our People and machines research that these impacts – both positive and negative – are far greater than those employers see from other types of new technology. These major changes need to be managed effectively to maximise opportunities for more value-added jobs, help workers manage any risks to their jobs, and reduce the risks to performance from conflict in human–tech interactions. To do this, employers need to integrate their technology strategies with well-developed people strategies.

This presents a clear challenge to HR and people professionals. They need to keep up to date with the rapid developments in this field, rely on robust evidence, and proactively engage with critical organisational stakeholders to shape a people-focussed technology strategy. They need to act as ‘critical friends’ and be a sounding board in times of technological transformation. The challenge for HR professionals is to balance the needs and expectations of their organisation and its employees, and to ensure that any use of technology benefits both. Additionally, HR professionals need to be aware of how technology is affecting their own profession and upskill themselves to add value (see the section below on HR and L&D technology).

People professionals should also ask themselves:
  • What’s the evidence for the impact of technology on the world of work and on the profession?
  • What impact might this have on your organisation specifically?
  • Do you have the knowledge and insights to make evidence-based decisions around implementing such technologies?
  • What’s being considered when making these decisions in your organisation? For instance, are people factors being given the same consideration as efficiency measures?
  • Does your organisation have a strategy for accessing the skills needed to work with AI and automation?
    • If yes, how could this be further developed in consultation with employees?
    • If no, is a skills audit planned to identify the gaps?
  • Are you able to communicate these messages to critical organisational stakeholders?

Listen to our podcast HR tech revolution: friend or foe? in which we chat with three experts about how technology is affecting work and working lives, and the new opportunities it presents to HR and L&D.

Advances in technology present new opportunities for HR and L&D professionals. HR systems are becoming increasingly sophisticated, allowing more automated reporting, self-service options for employees and connections with other business systems. For L&D practitioners, digital learning platforms make collaboration and knowledge sharing across a dispersed workforce easier than ever.

To date, HR functions have been relatively unlikely to see applications of AI and automation. Our People and machines research shows that just 14% of employers who had invested in AI and automation had applied them to HR functions, compared with 44% to operations and 28% to IT processes.

Practitioners need to consider how best to use AI and automation as these technologies continue to develop. This includes the ethical considerations highlighted in the previous section, but also using technology in ways that align with business and individual needs, rather than implementing it for technology’s sake.

In our report, The future of technology and learning, we looked at the research evidence and key considerations for learning technologies. The report explored the digital tools L&D practitioners use now and plan to use in future, and found a gap between strategic ambition and practice. To combat this, it emphasised the need to apply what we know about offline learning to digital strategy.


CognitionX – the AI advice platform

Future Work Centre

Institute for the Future of Work

IPPR – The Progressive Policy Think Tank – jobs and skills

The Oxford Martin Programme on Technology and Employment 

Books and reports

ACAS (2019) New technology and the world of work: the winners and the losers. London: Acas.

LAWRENCE, M., ROBERTS, C. and KING, L. (2017) Managing automation: employment, inequality and ethics in the digital age. London: IPPR, The Progressive Policy Think Tank.

SWANN, A. (2018) The human workplace: people-centred organizational development. London: Kogan Page.

TUC (2018) I’ll be watching you: a report on workplace monitoring. London: TUC.

WEATHERBURN, M. (2017) Don’t believe the hype: work, robots, history. Resolution Foundation.

WILLIS TOWERS WATSON. (2018) The future of work: debunking myths and navigating new realities. Willis Towers Watson.

Visit the CIPD and Kogan Page Bookshop to see all our priced publications currently in print.

Journal articles

CHURCHILL, F. (2020) Use of automation must play to human strengths and flaws, says Fry. People Management (online). 12 June.

DAUGHERTY, P.R., WILSON, H.J. and CHOWDHURY, R. (2019) Using artificial intelligence to promote diversity. MIT Sloan Management Review. Vol 80, No 2, Winter. Reviewed in In a Nutshell.

DAVENPORT, T.H. and RONANKI, R. (2018) Artificial intelligence for the real world. Harvard Business Review. Vol 96, No 1, pp108-116. 

HOWLETT, E. (2019) Employers need diverse AI teams to guard against unethical use of technology. People Management (online). 9 August.

HOWLETT, E. (2019) Most workers feel automation has ‘made work life better’. People Management (online). 3 September.

TOWERS-CLARK, C. (2018) How does HR fit into an artificially intelligent future? People Management (online). 8 February.

CIPD members can use our online journals to find articles from over 300 journal titles relevant to HR.

Members and People Management subscribers can see articles on the People Management website.

This factsheet was last updated by Jonny Gifford and Edward Houghton.


Jonny Gifford: Senior Adviser for Organisational Behaviour

Jonny is the CIPD’s Senior Adviser for Organisational Behaviour. He has had a varied career in researching employment and people management issues, working at the Institute for Employment Studies and Roffey Park Institute before joining the CIPD in 2012. A central focus in his work is applying behavioural science insights to core aspects of people management. Recently he has led programmes of work doing this in the areas of recruitment, reward and performance management. 

Jonny is also committed to helping HR practitioners make better use of evidence to make better decisions. He runs the CIPD Applied Research Conference, which exists to strengthen links between academic research and HR practice. 


Edward Houghton: Head of Research 

Edward Houghton is the Head of Research at the CIPD. Since joining the institute in 2013 he has been responsible for leading the organisation's human capital research work stream, exploring various aspects of human capital management, theory and practice, including the measurement and evaluation of the skills and knowledge of the workforce. He has a particular interest in the role of human capital in driving economic productivity, innovation and corporate social responsibility. Recent publications include “A duty to care? Evidence of the importance of organisational culture to effective governance and leadership” for the Financial Reporting Council’s Culture Coalition, and “A new approach to line manager mental well-being training in banks”, an independent evaluation of the Bank Workers Charity and Mind partnership to deliver mental health awareness training in the UK financial services sector.
