The Trouble with AI in Learning and Development

In helping workers cope with the modern workplace’s rapid pace of change, L&D professionals can (and do) use technology to deliver L&D materials to learners wherever they are, whenever they need to learn or could benefit from performance support.

Yet, while it can help the L&D professional, technology – particularly artificial intelligence (AI) – can also hinder L&D initiatives and strategies.

Writing for FT | IE Business School Corporate Learning Alliance, Catherine Mazy – business writer, blogger and former editor at The Wall Street Journal – explains that AI programs can keep ever-closer tabs on staff performance and potential time-wasting. From an employer’s viewpoint this has obvious appeal, but it is unlikely to be popular – or motivating – for workers. Moreover, opting out isn’t always easy for staff, despite the European Union’s General Data Protection Regulation (GDPR).


AI monitoring

This AI monitoring is based on screen time and, as such, raises questions about the definition of ‘productivity’. Mazy adds that, faced with this increased surveillance, workers could – in return – start to demand time back for working late or at weekends.

According to Mazy, until recently, knowledge workers – typically, white-collar employees – were evaluated by the quality of their ideas rather than the quantity of things they produced. Now, however, AI programs claim to keep tabs on how they do their jobs and when they’re wasting time. This, effectively, puts these people in competition with machines.

While it’s fair to assume that employers check which websites their staff visit, retain email logs as possible evidence in future disciplinary action or client disputes, and record or monitor phone calls for quality assurance purposes, modern software can now:

  • take photos every three to ten minutes via the desktop’s webcam;
  • take screenshots of workstations;
  • track app use;
  • log or count keystrokes;
  • detect keywords, such as ‘football’, ‘shopping’ or ‘résumé’;
  • judge whether email content is gossip or work;
  • use calendar apps to track billable hours;
  • generate productivity, focus or intensity scores for employees; and
  • provide a dashboard to compare employee productivity scores and assess engagement levels.
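To make concrete how crude such scores can be – the field names, figures and weighting below are invented purely for illustration – a ‘productivity’ score may amount to little more than a ratio of easily measured screen activity:

```python
from dataclasses import dataclass

@dataclass
class ActivitySample:
    """One hypothetical monitoring interval for an employee."""
    active_app_minutes: float  # time spent in 'work' applications
    idle_minutes: float        # no keyboard or mouse input

def productivity_score(samples: list[ActivitySample]) -> float:
    """Naive 'productivity' score: share of sampled time spent active.

    Illustrates the article's point: such a score reflects only what
    is easy to measure (screen activity), not the quality of the work.
    """
    total = sum(s.active_app_minutes + s.idle_minutes for s in samples)
    if total == 0:
        return 0.0
    active = sum(s.active_app_minutes for s in samples)
    return round(100 * active / total, 1)

day = [ActivitySample(50, 10), ActivitySample(20, 40)]
print(productivity_score(day))  # 58.3 -- thinking time counts as 'idle'
```

Note that an hour spent thinking through a hard problem away from the keyboard lowers this score, which is exactly the definitional problem discussed below.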

These programs can be hidden among running processes, so people may not know that the data is being collected. Although GDPR gives employees and consumers the right to access the data gathered about them, they must ask for it – which they can do only if they know it is being collected.


Productivity

Similarly, writes Mazy, employees who are expected to work late or at weekends at home haven’t taken to the idea of being tracked around the clock by their company smartphone. There’s a ‘productivity’ bargain to be struck, enabling them to gain some time back in return for the time they spend working at home.

But the whole issue of ‘productivity’ is a potential minefield. Typically, electronic monitoring collects data on those behaviors that are easily monitored – not necessarily the ones that should be monitored.

This data can tell you what an employee has done, but not why or, necessarily, how. That could lead to monitoring – and incentivizing – behavior that turns out to be ineffective or counter-productive. Furthermore, such monitoring ignores all the emotional and preparatory work that contributes to productivity but can’t be monitored or assessed.

Consequently, today’s technology-enabled worker monitoring raises major questions about the real nature of productivity and performance. What’s certain is that relying purely on AI algorithms to answer them is a mistake.



The secret of success

Abdul Kalam, a former President of India (2002–2007), is quoted as saying, “What’s the secret of success? Right decisions. How do you make right decisions? Experience. How do you get experience? Wrong decisions.”

These words serve as a warning to corporate leaders who make decisions based on possibly faulty algorithms. Just as you wouldn’t let someone who has read everything but never performed surgery operate on a member of your family, or feel entirely comfortable relying on a self-driving car, humans need to gain experience of relevant decision making – and continue to exercise those decision-making skills in ‘lifelong learning-by-experience’.

Mazy – again, writing for FT | IE Corporate Learning Alliance – argues that effective decision making depends on a person’s ability to recognize patterns instantly, and not be overwhelmed even amid a flood of choices.

AI can outperform human experts at spotting various cancers because, fed with ever-increasing amounts of data, computers can learn – but only humans can develop new ideas about disease, through conducting research. This isn’t to argue that all decisions must be made by humans. Rather, totally removing humans from even relatively mundane tasks, or relying entirely on AI algorithms, can lead to a loss of the creativity, insight, innovation and intuition that come from research, ‘doing things’ and making decisions.

Algorithms aren’t perfect

The key message is that algorithms aren’t perfect.

Not only can algorithms be as biased (intentionally or unintentionally) as those who program them, but the data they work with can contain hidden biases or features. Moreover, machine learning can create a self-reinforcing model when the cost of a wrong positive decision is higher than that of a wrong negative one: the system learns to reject, and rejected cases generate no feedback that could ever correct it.
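As a rough illustration of that self-reinforcing effect – the groups, numbers and approval rule below are all hypothetical – a model that only observes outcomes for the cases it approves can never correct an unduly low estimate:

```python
import random

random.seed(0)

# Two groups with the same true success rate; the model merely starts
# with an unduly low estimate for group B (e.g. from a small,
# unlucky historical sample).
TRUE_RATE = {"A": 0.7, "B": 0.7}
THRESHOLD = 0.5                            # approve only above this estimate
counts = {"A": [70, 100], "B": [4, 10]}    # [successes, trials] observed so far
estimate = {g: s / n for g, (s, n) in counts.items()}  # A: 0.70, B: 0.40

for _ in range(10_000):
    group = random.choice(["A", "B"])
    if estimate[group] >= THRESHOLD:       # approve and observe the outcome
        success = random.random() < TRUE_RATE[group]
        counts[group][0] += int(success)
        counts[group][1] += 1
        estimate[group] = counts[group][0] / counts[group][1]
    # A rejection produces no outcome data, so a wrong negative
    # decision is never detected: group B's estimate never recovers.

print(estimate)  # A converges towards its true 0.7; B stays frozen at 0.4
```

Because the cost asymmetry makes rejection the ‘safe’ choice, the model never pays the price of testing its own negative judgments – which is precisely what makes it self-reinforcing.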

Even if the data that feeds an algorithm is stripped of typical bias markers, such as race and sex, hidden factors linked to history and society can re-introduce bias. In creating an algorithm, its designers must specify which attributes it must not be biased against, and collect data about those attributes in order to test for bias – even if such monitoring data is kept separate from, say, recruiting data. The only way to be completely unbiased is to toss a coin.
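A toy simulation makes the proxy problem concrete – the ‘postcode’ feature and all figures here are invented for illustration. Even though the protected attribute is never shown to the decision rule, a correlated feature smuggles the bias back in:

```python
import random

random.seed(1)

# Synthetic applicants: 'group' is the protected attribute we strip
# before deciding; 'postcode' is a hypothetical proxy that correlates
# with group for historical and societal reasons.
def make_applicant():
    group = random.choice([0, 1])
    postcode = group if random.random() < 0.9 else 1 - group  # 90% correlated
    return {"group": group, "postcode": postcode}

applicants = [make_applicant() for _ in range(10_000)]

# "Blind" rule: the group is never consulted; the decision rests on
# postcode alone (a stand-in for a learned rule favouring postcode 0).
def blind_decision(applicant):
    return applicant["postcode"] == 0

approved = [0, 0]
totals = [0, 0]
for a in applicants:
    totals[a["group"]] += 1
    approved[a["group"]] += blind_decision(a)

rates = [approved[g] / totals[g] for g in (0, 1)]
for g in (0, 1):
    print(f"group {g}: {rates[g]:.0%} approved")
# Despite never seeing 'group', the rule approves roughly 90% of
# group 0 and only roughly 10% of group 1.
```

This is why stripping the obvious markers is not enough: the designers have to test outcomes against the protected attributes directly, which in turn requires collecting data about them.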

It’s important to remember that getting computers to make decisions for us requires a great many judgment calls.

Which factors we take into account where ‘data’ is concerned is a decision for society, not for computers – even those fueled by AI. Of course, if we take nothing into account, we’re merely making random decisions – but deciding what to take into account isn’t something a computer can do.

About the Author:

For over 20 years, Bob Little has specialized in writing about, and commentating on, corporate learning – especially e-learning – and technology-related subjects. His work has been published in the UK, Continental Europe, the USA, South America and Australia. You can contact Bob via bob.little@boblittlepr.com or visit his blog.