Thursday 13th June 2019 was Day 2 of the CIPD’s Festival of Work, during which I had the pleasure of attending a ‘masterclass’ presented by David Clutterbuck.
As I wrote in my pre-event blog, I went to this session because the title alone – "The new AI Coach – explore how AI will help humans deliver better coaching" – was adversarially evocative, and my first thought was 'it won't'. Those sessions usually end up being the best, and I wasn't disappointed. Plus, I think Clutterbuck intended to provoke the audience and create a stir; like the best academics, he is open to criticality from all perspectives in the debate about Artificial Intelligence (AI) and coaching.
Before we get into that, the Festival of Work closed with a keynote from Neil Harbisson (the first human cyborg artist – watch his TED Talk here). Neil sees in greyscale, and now has a chip installed in his head to enhance his perception of colour by turning it into sound. His sensory perception is wildly enhanced: he dresses in musical notes, and perceives faces and food as chord combinations. Once he could perceive 360 colours – the human capacity for seeing the colour wheel – Neil chose to extend his perception beyond this, to ultraviolet and infrared, and he encourages us all to become cyborgs and choose which senses we want to extend and how far we want them to go. He suggested that Virtual Reality (VR) and Augmented Reality (AR) are now dated concepts and that we should be talking about Revealed Reality (RR). I'm not sure, as we've all got a different reality, but it does make a debate about AI and coaching appear reductive and ignorant.
What’s more, my second favourite session of the day was with educationalist (and founder of Action for Happiness) Sir Anthony Seldon. With an obvious passion for the future of work for young people, Anthony asserted that until we understand the true spectrum, breadth, and depth of human intelligence, we risk flooding it with AI and misusing the technology. Poorly used AI will undermine and further reduce our capacity for growing our human intelligences: spiritual, social, cultural, physical and emotional.
On reflection, I’m glad that it’s Clutterbuck taking up this research challenge and exploring AI and coaching, given his long commitment to this area of academia and the credibility that affords him. During his session his argument was broad and neutral. I can’t tell what he really thinks or where he really stands… and I think that’s ok. Better, in fact. I think his intent was to leave us thinking and deciding for ourselves: not taking a position, but open to exploring and hungry to learn more. To be academic and critically take up a space in the debate about this craft we love.
He led us through a series of questions, which he attempted to answer, starting with an attempt to define what AI is, how it works and what it can do. Clutterbuck differentiates between the ‘coachbot’ and ‘super AI’ – we haven’t reached the latter yet, but it holds the best opportunity for true partnership between AI and coaching.
He shares some interesting findings so far, that make me wonder further:
- AI can analyse micro-expressions, reading a person’s communication and state to support the coach in understanding them. But can it interpret the meaning and narrative behind them? Surely this is what actually matters: not our assessment, but theirs.
- AI can assess a stress response. I think this is great data for the individual (bio-feedback) and their coach, but does it miss the extremity of the individual’s interpretive experience of that ‘response’, and its causality? Again, missing meaning.
- In trauma research, people prefer an AI therapist to a human one because AI is less judgemental. I imagine also that AI won’t have any emotional reaction to manage – something that is a practice commitment for me. The AI coach will implicitly ‘matter because they don’t matter’, achieving with ease what I try hard to accomplish.
- With AI you can combine multiple different coaching approaches in one, including broad or detailed cultural differences – for example, eastern and western concepts all understood.
- And the algorithms of AI creations often reflect the biases of those who designed them – equally amazing and obvious. Seldon also spoke about the history of IQ testing – reducing intelligence in children to one measure – which favoured people similar to the white, middle-class, educated males who designed the tests.
Clutterbuck compared artificial and human intelligence, and cautioned that AI can over-simplify complex situations. By comparison, humans have general intelligence: the ability to apply learned knowledge reflexively across multiple situations. AI has weak intelligence: the ability to do one thing really well. Creative breakthroughs come from unpredictability and breaking rules. AI can:
- Recognise emotions, but cannot feel them
- Deduce emotion better than humans can, from analysis of written or spoken words
- Project empathy (even though it does not feel it) – but not compassion
- Innovate (e.g. create a haiku), but only by experimenting with combinations – with no capability of imagination
- Accomplish masterful manipulation (my addition)
This is revolutionary, right? The AI coach won’t get tired, or have a bad day, or… care.
Arguably, the role of the coach and the coaching process is to accomplish insight. Clutterbuck questions whether AI can achieve insight, articulated as: different to normal problem solving in that it involves a strong emotional response, often a stuck feeling before the insight arrives, not necessarily a clear, linear or conscious process to solving it, and often appearing suddenly with an instant perception that it is ‘right’. (Referenced from The Aha! Moment, NY Times Blog, 2011 – apologies, I couldn’t find the link.) He questions whether AI can achieve such creativity, suggesting that creativity is a result of our consciousness and our curiosity to understand what’s happening in other people’s minds. And our own? If we still don’t know the depth and capacity of our own thinking, how do we generate AI that will reach it, or enable us to reach it? As the creators of the AI, we by default limit its capacity to that of ours. Is it arrogance that would motivate this?
But if coaching is about the GROW model – which Clutterbuck calls “Get Rich On Waffle” – then the coaching profession is at risk, because the formulaic process of using a structured approach such as GROW can be performed better by AI than by humans. AI can perform skills and basic performance coaching, especially at this low level (level 1). It’s better for transactional, fast knowledge-transfer coaching and mentoring – and for some people that’s exactly what they want, or what they think they want, or what their manager wants them to have, or what’s being provided for them. Clutterbuck also proclaimed that a large number of people branding themselves as ‘exec coaches’ aren’t skilled at the basic coaching conversations. There is no correlation between price, qualification, hours… and the quality and impact of coaching.
AI can’t perform the more delicate and complex level 2 coaching, working in the realms of untrue and limiting assumptions. It can’t employ wisdom – especially meta-wisdom that brings together multiple and shifting perspectives.
The conclusion was that AI is best used in a partnership approach. An example in research and practice is using Virtual Reality to support visualisation of issues and the inner critical voice: this manifestation becomes a viewable external entity, providing a dissociation for learning how to minimise and master it. And I can’t decide if this is dark or light. Is it ultimately empowering, or does it disable the individual’s capacity to cope with and master this stuff in their own mind?
If there’s a role for AI in transformational coaching, it becomes a third entity. Does the coach then need to use the pauses to check the AI coach’s data before continuing the session, and/or is this a new dance and duet that would benefit the individual? There is a lot to learn and understand.
The loudest message that resonated around the Festival of Work was that ‘the future of work is human’, and I got a sense that it was all about connection (although the shiny-newness didn’t always feel connecting). I’m partially with Neil Harbisson in his call to action: that we choose which senses we want to extend and go beyond what we ever imagined we could do. I choose connection, and independent thinking. I need you for both.
Whilst I’ve not delved into this research area since, it stays with me that ‘facilitated communication’ approaches have significantly enabled children with autism to communicate and demonstrate their capacity for thinking and intelligence. For example, it’s only when another person places a hand on the arm or shoulder of the child that they are able to communicate and answer questions using eye patterns or typing – actions they do not accomplish without this support.
There is much more to understand and know about our human capacity for intelligence and independent thinking; we haven’t even scratched the surface of it. And it’s here where I remain: as a ‘thinking environment’ coach, with the single purpose of encouraging independent thinking, and a committed curiosity about ‘how far can they go?’ if I create the right conditions, conditions that are only possible with another human. With connection: that innate biological driver that we all have.