You are not getting ‘left behind’ in your AI exploration

So my view, right now, is this: you’re not “getting left behind” with AI. I don’t find the headlines pushing that view helpful, inspiring or insightful. In moments of what can feel like revolutionary change, there is a swell of getting busier, ‘catching up’, chasing, getting ahead. But that can also cause burnout and overwhelm before we’ve even started. So I’ve been exploring how we can approach AI development with an open curiosity, to understand what it can really do and contribute to the world, rather than join the 100m sprint into the land of ‘Who-Knows-Where’ (which should’ve been a land in the Never Ending Story but wasn’t).

The pace of development is so rapid, so fluid, that the landscape reshapes itself faster than any individual or organisation can master it. It can feel like we are suddenly the Ghosts chasing Pacman who has just eaten a power pill and is now chasing us. It’s frantic, it’s hectic, but is there a plan?

No sooner do we think we are slowly getting to grips with it than a new thing comes along. Today’s cutting-edge tool becomes tomorrow’s outdated feature; today’s AI expert becomes tomorrow’s beginner again. In a world moving this quickly, it’s becoming clear there is no finish line, only cycles of learning, unlearning and relearning.

This isn’t the first time humanity has faced such acceleration. I’ve been going back over history to try to make sense of it and process it. During the Industrial Revolution, people feared steam power, mechanised labour and electrification. But history shows something reassuring: societies adapt. And it wasn’t always those who got involved first who won out in the end. People who adopted new tools at their own pace didn’t fail; they evolved with the change, often finding smarter, more resilient ways to work because they used the advantage of being a follower.

What we can learn from past revolutions, if this is a revolution, is this: meaningful progress isn’t about being first, it’s about being ready, psychologically, ethically and practically. Those who took time to observe, question and adapt often became the most effective innovators.

So your pace with AI isn’t a weakness. It’s wisdom. In a constantly shifting landscape, staying curious and intentional matters far more than trying to outrun the change. And I would also say it’s not just about leaning into the revolution, in this case learning AI as a skill set; it’s about observing the opportunities on the edges of change as well. Remember, the most profitable businesses during the gold rush were the tool stores, not the gold diggers.

So how can we think through this Revolutionary Change calmly?

Navigating the world of AI can feel like standing at the edge of a vast, uncertain sea. With promises of transformation and warnings of unintended consequences, it’s easy to feel overwhelmed or left behind. What if, instead of diving in head-first or freezing in place, we take a more gentle, structured approach? In this blog I’ll share an idea built around acknowledging Change and then working through five steps (Pause → Mess → Play → Try → Restart) that invite curiosity rather than fear, ease rather than overwhelm, and purpose rather than ‘just because’.

Along the way we’ll also consider the tech-adoption curve, which I’ve written about before here, why being a “laggard” is perfectly fine (and sometimes even beneficial), and what AI means for our climate.

You could start your thinking with a reminder of the tech-adoption curve. Where do you stand right now? The curve describes how new technologies are adopted in stages: innovators, early adopters, early majority, late majority, and laggards. And just for clarity - being on the “laggard” side isn’t shameful; it simply means you may adopt later, after the technology has stabilised, been proven, and been more thoughtfully integrated. (I doth protest too much?!).

Acknowledging Change

The first step is recognising that something has changed (or is changing): AI is increasingly accessible, embedding itself into our tools, workflows, even decision-making. But change doesn’t always mean “you must immediately redesign everything”. Rather, it means becoming aware: What has shifted? What new possibilities are in front of me? What might this mean for my work, my goals, my values? And perhaps the more immediate, obvious, but often overlooked question - how can this help what I’m already doing?

Try asking:

  • What aspects of my life or work might AI influence, or is it already influencing, for better or worse?

  • What am I hoping to get from engaging with AI (or from avoiding it)?

  • What am I afraid of: about AI, about getting left behind, about making mistakes, about missing opportunities?

This step primes you for intentionality: you’re not just chasing the new technology because “everyone else is”, you’re asking why. When I sat with these questions for a while I realised that maybe, for me, it was more a really big dose of FOMO than any real threat right now. Even just bringing that to my attention brought more calm and ease.

Using the 5 steps for change to help develop the thinking

  1. Pause

This is a moment of reflection and thinking before action. The goal is to step back and assess before jumping in. AI can tempt us to “just try everything” or “fear falling behind”. Pausing gives you space to ground yourself.

Questions to ask:

  • What are my priorities and values right now? Does “use AI” align with them or potentially conflict with them?

  • What problem or purpose am I trying to address (beyond “just for fun”)?

  • What assumptions do I have about AI (that it will automatically solve things, or that I’m too late to bother)?

A moment of pausing creates space to take things in, synthesise, make sense, find out more, gather data, and build foundations for good idea development and decision-making. We are allowed to think for as much time as we need. There isn’t actually a rush, after all.

2. Mess

After pausing, you move into Mess, the very human, very real moment of noticing what stirs inside you. This is where the emotional clutter shows up: resistance, scepticism, fear, excitement, confusion. “Mess” isn’t about breaking things out there; it’s about understanding what feels unsettled in here.

AI can trigger big feelings: fear of being replaced, worry about doing it “wrong”, frustration at not understanding, or even guilt for not being further ahead. This phase invites you to sit with that discomfort long enough to learn from it. Instead of pushing the feelings away, get curious: Why is this stirring me? What story am I telling myself? What am I afraid might happen? What am I hoping might happen?

Mess is where you acknowledge your reactions without judgment. It’s where you recognise that resistance isn’t a barrier - it’s information. Emotions become clues: What values feel threatened? What habits feel disrupted? What identity feels challenged?

Guiding questions for this phase:

  • What emotions am I noticing as I think about using AI?

  • What exactly am I resisting and why?

  • What fears need naming so they can be worked with, not avoided?

This is the inner learning stage. Before you experiment with AI, you learn about yourself.

3. Play

Now we can shift into Play. This is more creative, less apologetic. You’re not just testing “does this work?”, you’re exploring “how might this work?” Let your curiosity run. Combine tools. Re-imagine processes. Think of new use-cases you hadn’t considered.

Questions for this phase:

  • What could be enabled by AI here? Workflows? Processes? Content ideas? Problem solving?

  • If I had no fear of making a mistake, what would I try?

  • How could I collaborate with AI (rather than just delegate to it)?

Also revisit the ethical, climate and sustainability dimensions of AI in this phase. For example, AI infrastructure consumes significant electricity, water and rare materials (see the UN Environment Programme).

On the other hand, AI is also being used in energy optimisation, emissions tracking and other sustainability tools.

So you might also ask:

  • If I apply AI, what is the environmental cost relative to the benefit?

  • Could I choose or design AI use in a more sustainable way?

  • What values do I want to apply to my use of AI and my work?

4. Try

After Play, where we generate ideas and possibilities, we can move into Try, where we experiment while limiting the risk. This is the experimenting phase: not polished, not production-ready, but moving forward slowly. Try small experiments. See how the AI tool behaves. Let it surprise you.

Ask:

  • What happens if I apply AI here?

  • What surprised me? What didn’t work?

  • What unintended output came from the AI that might point to deeper possibilities (or problems)?

In this phase we try to enjoy the trying. So we drop the resistance, the reluctance and the fear, and we focus on what it might solve, how it might help, the way we are using it, and how it’s OK to make a few mistakes. Because learning happens in imperfection. Because you’ll learn faster (and more honestly) if you allow missteps. For example: pick a small task you’ve been avoiding, or a process you find tedious, feed it into an AI assistant, and see what happens. Don’t expect perfection. Expect insights and data.

5. Restart

Here we are, back at the start. Because now you can move forward with more intention, more ownership, an understanding of how AI can help, and a sense of what you are looking for in new models, products and services. You have reduced the guessing game. The world of AI is dynamic: new models, new affordances and new risks will constantly arrive and prompt new thinking. By pausing, embracing the mess, playing, trying, and then restarting, you keep yourself adaptable but focused on what you are trying to achieve. You are making AI your enabler, rather than letting AI guide you.

In this phase ask:

  • How will I use AI to help solve challenges and enable my work and life?

  • What values do I need to protect and use to guide my thinking as I progress?

  • How do I want to show up when navigating the opportunity AI presents (for example: curious, learner, experimenter)?

Why this five-step loop helps

  • It reduces fear because you’re not trying to be perfect or jumping head-first - you’re intentionally learning.

  • It increases curiosity because you give yourself space to explore and play rather than simply conforming.

  • It frames AI as tool + purpose, not just “fun gadget” or “dangerous black box”. You’re asking “for what purpose am I using it?”

  • It honours your pace. Whether you’re an early adopter or a laggard, this model supports you. The tech adoption curve shows that by the time you engage, the technology (and social context) may be more stable, reducing risk.

  • It builds resilience. When you restart the cycle you adapt, iterate and refine; learning becomes continuous, a skill for life!

It’s perfectly okay to feel behind. On the tech-adoption curve, the so-called “laggards” often bring clarity, realism and depth to innovation. They’ve watched others make mistakes, they’ve absorbed lessons, and when they act, they often act better. So if you’re not the first to adopt AI, you’re not failing; you might just be positioning yourself to do it smarter, or to take a different path.

AI carries climate and environmental implications: energy use, water use, e-waste, and the mining of rare materials. But there is also huge potential for AI to help us manage those very challenges. That duality emphasises why purpose matters: you’re not just playing with novelty, you’re choosing how you engage in a shifting landscape.

By following the loop of Pause → Mess → Play → Try → Restart, you give yourself a structure to engage with AI in a way that’s thoughtful, experimental, aligned. You reduce overwhelm, you increase curiosity, you ground your actions in purpose. Whether you’re dipping your toes, or diving in, this framework gives you permission to move at your pace and to make your relationship with AI your own.

And fun fact - I used AI (as an experiment) to try to write this blog. But I ended up rewriting it and then completely changing the whole format. So my lesson there was, it’s still quicker for me to just write my own blogs, but it’s quite good for getting a few ideas going.

Download the framework as a thinking tool (and buy the book, Another Door Opens, to help you get a deeper understanding of the framework).

Book me to run a workshop for your teams to create space to think about what it means to them. Drop me a note.
