Generative Artificial Intelligence has dominated the recent technology headlines. There’s little doubt that with OpenAI, ChatGPT, Bard, and all the other new entries, the field sits at the “Peak of Inflated Expectations,” to borrow the term from the Gartner Hype Cycle. AI has emerged from the specialized days of large, expensive implementations like IBM Watson into the mainstream, with access for all.
The current expectation is that AI will be all things and answer all questions. There is a growing sense of FOMO (Fear of Missing Out) if you and your company are not at least doing “something” in the area. There is also a growing sentiment that AI will replace many white-collar jobs. Recent headlines include:
Well. Actually, the last of those was the approximate title of my high school Computer Science paper, written in 1984 (the year that also gave us “When Doves Cry”, “Like a Virgin”, and “I Just Called to Say I Love You”).
That early 80s paper is probably why I’m not running around with excitement about AI quite yet. Yes - AI has finally come of age. It will be disruptive and it will be useful, and it will continue to accelerate the pace of change for many organizations.
But I also believe that in the rush to adoption, we are guilty of the same hubris that dominated the hype around computers in the early 80s: that WE are actually smart enough to use technology effectively.
Because the reality is - we often don’t know what question to ask and we also don’t know how to use the answer.
In my work as an Executive Coach, I’ve learned that the first question is never the RIGHT question, the REAL question, or even the MOST IMPORTANT question. And while AIs are great at answering the question in front of them, I’ve not met one yet that would respond:
Or any of the other 20 questions that a good coach would ask when presented with a seemingly simple question.
For instance, I was recently asked “How do companies address the ethical considerations of AI-generated responses?” which seems like a perfect question to ask an AI.
But the reality is, it’s not a good question at all. In my role as a coach, I would have asked my clients follow-up questions based on my understanding of who they are; what their business is; who their customers are; what happens in their culture; and how they make decisions.
Questions such as:
The reality is that “How do companies address the ethical considerations of AI-generated responses?” is just not the right question. It is missing crucial elements such as context, intent, culture, and the broader frame of the conversation in which we were engaged.
We have to spend time defining the right question. Otherwise, we get a great answer to the wrong question.
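As a rough sketch of what “defining the right question” can look like in practice, the few lines below compose a bare question together with the context an AI cannot infer on its own. The helper name and the context fields are invented for illustration; this is not a real API, just a way of making the framing step concrete.

```python
def frame_question(question, context):
    """Attach explicit context (who we are, our intent, our culture)
    to a bare question before putting it to an AI.

    `context` maps short labels to descriptions. All names and fields
    here are illustrative assumptions, not part of any real library.
    """
    context_lines = [f"- {label}: {detail}" for label, detail in context.items()]
    return "Context:\n" + "\n".join(context_lines) + f"\n\nQuestion: {question}"


# The bare question from the article, now framed with hypothetical context.
prompt = frame_question(
    "How do companies address the ethical considerations of "
    "AI-generated responses?",
    {
        "industry": "regional healthcare provider",
        "intent": "drafting an internal AI-use policy",
        "decision culture": "consensus-driven and risk-averse",
    },
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: the framed prompt forces us to state context, intent, and culture before we ask, which is exactly the work the bare question skips.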
When I did ask that question of a public AI, it gave me a pretty good answer that consisted of:
If you just read that list, and we are honest with ourselves (based on my experience working with hundreds of executives and companies), you probably think the first and third answers seem reasonable. You probably have no idea how to “train” a public AI, you probably discounted involving stakeholders as too much work, and you probably wouldn’t go the extra step of collaborating with other companies and organizations.
This echoes the “data-based decisions” trend from 5 years ago. All of a sudden everyone was saying “data”, “data”, “data”. “We need to use data as the basis for all decisions.” But only the smartest organizations and leaders were asking questions like:
The reality is that we have no idea how AI weighs the relative impact of the components of its answer. Perhaps executing the first and third bullets will give us a false sense of security that we’ve “done something” while actually ignoring the most critical components.
We have to stop thinking we are smarter than AI and can interpret results based on what’s “easy” or “comfortable”.
This brings me to the point of this article. Yes, AI is fantastic. It will be an integral technology in our future in the same way that “The Internet”, “Social Media”, “Cloud Computing”, “iSomething”, and “Hey, random woman’s name” are today (and the fact that you knew what I was talking about for all of those at-one-time-geeky terms shows how truly ubiquitous they have become).
But there’s still 42. That’s another geeky answer, from The Hitchhiker’s Guide to the Galaxy, which came to its author while he was lying on his back in a field in Austria. High as a kite. 42 is famous as THE ANSWER to “Life, the Universe, and Everything.” The basic moral of “42” is: be careful about the answer you get, because most of the time you have no idea what question you’re asking.
So to be smart enough to use AI effectively we have to start by understanding the question. The right question. The important question. The question behind the question.
And when we can carefully frame the “best” question and get a response from an AI, we then have to ask ourselves questions about the answer such as:
And when we’re sure about both the questions we are asking and have the discipline to ask these deeper kinds of questions, I believe we’re smart enough to use AI effectively.
So how do we get smart enough to use AI effectively? It takes two simple steps, both of which require us to stop, pause, and think.
Mastering the use of generative AI in decision-making will require us to focus on asking the right questions and exploring the answers thoroughly. By carefully framing questions with context, intent, and culture in mind, and critically evaluating AI-generated responses, we can unlock the full potential of AI in our lives and our businesses.
About the author:
David Meredith is an experienced Executive Coach and strategic advisor, with a background in organizational psychology, who has worked with numerous C-suite teams and executives to drive positive change and unlock their full potential. Connect with him further on LinkedIn.