Leadership

What your caricature from ChatGPT is telling the world

You’ve seen the trend, no doubt. You’ve probably seen people railing against it too — AI is, for whatever reason you wish to pick, from ecological to anti-capitalist, as abhorrent as any previous innovation has ever been. But has anyone considered what it’s really telling us?

“A humorous, illustrated caricature of a man looking worried and thoughtful, with wide eyes and a hand on his chin, sitting at a desk in front of a laptop showing a ChatGPT logo. He is surrounded by exaggerated office posters and documents labelled with phrases like ‘Confidential Marketing Plan,’ ‘Questionable Data Analysis,’ ‘ChatGPT Scripts,’ and ‘Compliance?’. A thought bubble above his head reads ‘Uh-oh… Is this… too much?’, conveying anxiety about whether AI-assisted work use is becoming too obvious or inappropriate.”

Create a caricature of me being concerned that people creating caricatures using you to show things about their job might reveal inappropriate use of ChatGPT (against the rules)


Create a caricature of me based on what you know about me and my job

It’s a natural evolution of the trend a few weeks ago to share an image, created by AI, showing how you work with AI. They’re both, ultimately, a way of coming to terms with how AI is becoming a significant part of our lives.

I was quite pleased when I saw the image ChatGPT created of me and ‘him’ working together. It looked like an exciting place to work, and — flatteringly, of course — I was the enthusiastic, visionary leader. But ‘he’ was looking happy too. Ticking things off, helping us with our collective endeavour.

AI as a robot at a desk with a checklist of items ticked off, with "me" pointing to the sky with a new idea.
How ChatGPT told me I work with him.

Does it say more than you think?

This second wave is perhaps more telling. Because of the inclusion of “and my job”.

The more accurate the image is, the more obvious it is how much you’ve been using AI for work. And that could be concerning.

Some will worry that using AI at work is some kind of cheating. It isn’t, of course. It’s making use of the tools of the day, and ultimately — as long as you’re responsible — there’s no issue with that.

But what concerned me more was whether those people should have been using AI at work. Is corporate data leaking out into the internet through use of AI tools that haven’t been checked out, haven’t been signed off and — one assumes — aren’t set up for enterprise use?

If you can’t beat them, join them

It’s a lesson not for those who are doing it — though no doubt some of those lessons are being learnt, and perhaps jobs lost too — but for those of us leading organisations. Where the speed and efficiency AI can bring makes it ever so attractive, we need to make sure there is an approved solution, that it’s a good one, and that it’s well communicated.

Keeping corporate data safe, keeping the data our organisations are custodians of safe, and the general good governance around both — that’s all super important. But with AI, something is different.

No one uses a CRM in their personal life. No one will — beyond some free-tier tinkering on an educational project — reach for a survey tool. And we’ve all, at some point, put up with a clunky piece of software not because it was the best on offer, but because it was the one governance had approved. The one hosted in the EEA, rather than the US. The one that could fill out the forms.

Generative AI is not like that. It’s a consumer product first, and the intense competition — and money flowing in — means it’s in continual evolution. It’s personal preference, and to an extent brand affinity, that’s driving adoption.

My prediction is that this will change things

As leaders, we’re going to need to change how our organisations approach this. “You’ll have to use the approved product” will need to give way to a simpler process for getting products that can prove their value — and pass the tests — approved faster.

From speaking with friends in other industries, it’s a shift that’s already started in the tech world: development companies onboarding multiple options, encouraging everyone to try them out.

The AI companies will need to adapt too — significantly upping their game on responsible data management, making it easy for companies, even in some of the most regulated sectors, to take up their offer.

And while Copilot is good enough in its own right, we really shouldn’t allow Microsoft’s dominance with 365, Windows and everything else in corporate to carry it to the top by default.

But here’s the bit that matters most.

If we get the governance right — if we make it easier to say yes safely, rather than defaulting to no — we unlock something much bigger than just “approved tools.”

We give people permission to experiment. And experimentation is where the real value lives. Not in the corporate rollout of a single platform, but in the individual who discovers that AI can cut a two-hour task to twenty minutes. The team that finds a better way to brief, draft, or analyse. The person who — because they were trusted to explore — develops a skill set that didn’t exist in your organisation six months ago.

The productivity gains from AI won’t arrive in a single procurement decision. They’ll come from hundreds of small discoveries, made by people who were given the room to find them.

That requires a different kind of leadership. Less gatekeeper, more enabler. Less “wait until it’s approved,” more “let’s find a way to approve it.” It’s a bold shift — but the organisations that make it will be the ones whose people grow fastest, adapt quickest, and find the gains that everyone else is still waiting for permission to look for.