AI transformation is a human-first, not tech-first change

Summary

AI transformation is more about behaviour change than technological change. Job insecurity, misaligned incentives, incompatible work environments and fragmented work systems can be impediments to this new way of working.

One would have to live under a rock to escape news about the AI hustle in their industry or at their employer. It feels like society, industry, and life are reliving a version of Evgeny Morozov’s “To Save Everything, Click Here”. Every nail is looking for an AI hammer. Commentators, such as yours truly, can’t stop raving about every release of a frontier model and every new feature of an AI tool. The tech is, no doubt, more innovative than anything we saw even five years back. Morozov’s 2014 quote, however, serves as a sobering reminder of how we must approach such innovations.

“Celebrating innovation for its own sake is in bad taste. For technology truly to augment reality, its designers and engineers should get a better idea of the complex practices that our reality is composed of.”

As I reflect on the transformations I’m responsible for and the ones I read about, I’m reaching the conclusion that we must take a human-first approach, rather than a tech-first, solutionist one. Surely, it can’t just be about slapping the right AI magic onto every business process!

Last week, I revisited a few of my favourite books about facilitating change. The first stop was Damon Centola’s work. As I’ve re-read Centola, I’ve concluded that AI transformation is a complex contagion.

Simple contagion: spreading ideas that have no lasting impact on behaviour, e.g. sharing a meme or a video.

Complex contagion: spreading behaviours that need social reinforcement and have a lasting impact, e.g. changing the way you work by employing AI agents.

Complex contagions, such as an AI-first way of working, don’t happen just because the CEO said so. Complex contagions happen when people:

  • feel optimistic about the change;

  • see the clear benefits of change;

  • find relatable evidence of success;

  • and get plenty of support from people they trust.

A complex contagion requires motivational fuel, nudges that help teams and individuals make informed decisions, and workflow design that encourages the adoption of new habits. The gold rush FOMO of AI, however, driven by an “innovate or die” urgency, creates fear, uncertainty and doubt instead. Experiments are a dime a dozen, but business value is scarce.

There are two sets of ideas I took away from my reading last week. The first is Centola’s idea of wide bridges. Centola is dismissive of the “fireworks display” style of change, where a few influencers use their reach to spread information. While effective for simple contagions, fireworks displays fail at complex behaviour change.

Fishing nets beat fireworks displays

Instead, Centola advocates for a fishing net style change, where people with strong ties to one another, such as team members, support each other in adopting new behaviours. And what accelerates the complex contagion? It’s when multiple innovators and early adopters from each fishing net cultivate redundant connections with their counterparts in other fishing nets. Those redundant connections are the wide bridges across which social contagion spreads.

Image showing Damon Centola's fishing nets connected by wide bridges

Wide bridges help innovators in different groups support and amplify each other

Easy, then. Identify a few pilot groups and ask them to collaborate. Voilà! Bob’s your uncle. Happy days, eh? Sorry, there’s more.

The elephant, the rider and the path

The second set of ideas I took away was from the Heath brothers’ book, Switch. The Heaths lay out another set of metaphors that can guide a transformation strategy. The brothers animate people’s journey of change using the metaphor of a rational rider, mounted on an emotional elephant, navigating an unknown path. The Switch framework raises three sets of questions.

  1. How do you motivate the elephant and provide people with compelling reasons to change?

  2. How will you direct the rider to make choices that support the change you’re after?

  3. And finally, how will you shape the path so it serves up cues for desirable behaviours? How will you make new habits feel frictionless?

Image showing the Switch metaphors for driving change

Motivate the elephant, direct the rider & shape the path

If the two sets of ideas I’ve expounded on feel generic, it’s because they are. Introducing new technology is less about the technology itself and more about the people involved. Tech is predictable. People are not. People have emotions. Tech doesn’t. Implementing tech is the easy part. Getting people to change behaviour is the tricky bit. And that leads me to the hard questions that probably stall all tech transformations, AI included.

  • How do you motivate the elephant to embrace AI when AI-induced job losses abound? Who wants to render themselves redundant? Can employment stability be a way to catalyse AI transformation?

  • How do you direct riders when they are already struggling with long workweeks? What’s the incentive to do one more thing? Should riders pave corporate cowpaths or should they reimagine the way they navigate their work? How do corporations provide their employees the mental space to reimagine work?

  • Habit change is hard, and it’s especially hard when people face countervailing influences. For example, successful AI transformation needs data. Unlike office-bound work, remote and asynchronous work serves up a wealth of data for AI through transcripts, artefacts, and written communication. Return-to-office (RTO) mandates that sing hosannas about watercooler conversations and shoulder taps are at odds with the data-rich environment remoteness fosters. How do companies resolve these contradictions?

  • And finally, how do you build wide bridges if every department runs experiments with its preferred tech, creating walled gardens of excellence? If one group’s bright spots don’t translate to another, wide bridges will indeed collapse. How do companies provide their people with a universal platform to drive AI transformation? How do they build communities on such platforms? How do you share transferable learning?


Those questions, dear reader, are the ones I’ve been pondering in recent times. If you’re responsible for an AI transformation, I suppose you must think about them too! Otherwise, we run the risk of being tech solutionists, focusing more on the efficiency promise of technology than on the context in which we’ll implement it.
