
There’s something undeniably seductive about using AI in the workplace.
It’s fast and compliant; it doesn’t take lunch breaks or call in sick with a suspicious-sounding cough. It doesn’t grumble about the air con, complain about a bad back or eat smelly lunches. On paper, it’s a dream colleague - efficient, scalable, eerily unflappable.
But - and here’s where it gets awkward - the moment we invite machines into decision-making spaces, into hiring, firing, reviewing, surveilling... well, things start to get murky. Not dystopian, necessarily. Just... ethically fidgety.
Let’s start with the basics. AI doesn’t think, at least not in any conscious, sentient way. It learns patterns. From data. And where does that data come from? Us. Gloriously flawed, historically biased, frequently contradictory humans. Which means if we’ve got a track record of, say, favouring a certain demographic for promotion or undervaluing certain types of labour, the AI - being the dutiful mimic it is - will quietly bake that bias into its decision-making. Without even meaning to. That’s the unnerving part. It doesn’t have to be malicious to be damaging.
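That mimicry can be sketched in a few lines of Python. Everything below is hypothetical - the groups, the records, the 50% threshold - but it shows how a pattern-learner turns a historical skew into a rule without anyone intending it:

```python
# Toy illustration with entirely made-up data: a "model" that simply
# learns promotion rates from historical records will reproduce any
# bias in those records. No malice required - just faithful mimicry.
from collections import defaultdict

# Hypothetical promotion history, deliberately skewed between groups.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [promotions, total]
for group, promoted in history:
    counts[group][0] += int(promoted)
    counts[group][1] += 1

def recommend(group):
    """Recommend promotion if the group's historical rate exceeds 50%."""
    promoted, total = counts[group]
    return promoted / total > 0.5

print(recommend("A"))  # True  - the skew in the data has become the rule
print(recommend("B"))  # False
```

A real system is vastly more complicated, but the failure mode is the same: the training data *is* the policy, whether or not anyone wrote it down.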
Then there’s the matter of transparency. If an AI recommends restructuring a team, or flags someone as ‘low-performing’, can anyone explain how it reached that conclusion? Or are we nodding sagely at a black box, grateful it saved us from a difficult conversation? The moment we stop understanding the tools we use, we lose not only control, but accountability. “The system decided” is not an ethical defence. It’s barely a sentence.
Of course, it’s not all doom and ambiguity. There are genuinely helpful, humane applications. AI can spot burnout patterns before people even realise they’re stretched too thin. It can highlight gender pay gaps, flag dodgy hiring practices, or help neurodivergent staff navigate complex workplace systems. Used responsibly, AI can be a kind of mirror - albeit one that occasionally distorts the reflection.
But responsibility is the key word. Who sets the guardrails? Who gets to say what’s fair, what’s just, what’s humane? A well-meaning HR team? A procurement manager choosing between vendors with slick demos and even slicker price tags? The risk isn’t just that we’ll automate bad decisions. It’s that we’ll do so with such confidence - such efficiency - that we forget to question them.
Still, no one’s seriously suggesting we go full Luddite. The AI horse has well and truly bolted. But we do need to stay awake. Ask awkward questions. Build in pause buttons. Maybe even hire a few philosophers - or at least people who know how to say, “Hang on a second…”
Because at the heart of all this, we’re not really talking about machines. We’re talking about people. About values. About the kind of workplace - and world - we’re shaping. AI might help us get there faster. But we should be very, very sure about where ‘there’ actually is.