Juliana Stancampiano [00:00:00]:
I would love for the learning leaders to come up with their AI strategy that they can present to their executives and get buy in on it so that everybody’s on the same page of what we’re trying to do. I think there’s a lot of assumptions out there right now about where AI is at and what’s possible, and doing more research and understanding it and putting that together I think would be helpful for any business, especially in the learning space.
Heather Cole [00:00:27]:
This is Go-To-Market Magic.
Steve Watt [00:00:29]:
The show where we talk to go-to-market leaders and visionaries about the “aha!” moments they’ve experienced.
Heather Cole [00:00:35]:
And the pivotal decisions they’ve made.
Steve Watt [00:00:37]:
All in the name of growth. Heather, who’s up today?
Heather Cole [00:00:44]:
I’m really excited to be talking to Juliana Stancampiano. She is the CEO of a company called Oxygen Experience. She’s also the former president of the executive board of the Sales Enablement Society, and she did amazing things in the top leadership position there over the years. She works with companies big and small, including some of the largest companies in the world. She and her organization help companies take really strategic initiatives from the boardroom to the keyboard, to the sales force, to the people who are delivering them. And because she manages those processes, she has a front-row seat to what is going on within learning, within enablement, and within AI, and how it’s being executed in these companies and in her own company.
Steve Watt [00:01:32]:
Oh, this is going to be good.
Heather Cole [00:01:36]:
Juliana. Welcome to Go-To-Market Magic.
Juliana Stancampiano [00:01:39]:
Thank you so much for having me, Heather.
Heather Cole [00:01:41]:
So you and I have been talking about this topic for a while, as has the rest of the world. AI is huge right now. It’s on everybody’s lips. And I think there’s a lot of talk about content, as far as marketing content and content that’s written. Learning content is a little bit different, and the learning applications beyond just content are really very exciting to me, but a little bit different from the conversation that’s going on in the general AI space. What do you think the most exciting possibilities are when we’re thinking about how AI will play out in learning?
Juliana Stancampiano [00:02:17]:
Yeah, I think it’s really interesting, and as you said, everybody started in the content space, really, like marketing and some of the other spaces. Learning is definitely different, and we’re embarking on it with clients. I think the exciting part for me is that people have referred to content as king for so many years, and it’s been debated on every which side, and now the content is out there and we have access to so much of it. And I think that’s the exciting part. The drawback, I think, for the learning side is that there is so much content, how do we know what good looks like? I’d say that is really the crux of it from a learning perspective. So it’s exciting because you have access to so much, no matter whether you have a small team. I think we’ve all been around long enough to know that when businesses start, learning is maybe the last thing that starts. Once they’ve hit some level of maturity and number of people, they’re like, okay, we should probably be teaching our people something. And I think that can start a lot earlier for companies. They can start educating their workforce much earlier in the process of building a company versus what we’ve seen to date, because the access is going to be there.
Juliana Stancampiano [00:03:38]:
I feel like I’m full of caveats. It’s like, that being said, that being said. I also think that there are going to be a lot of people who are able to do this who are maybe more subject matter experts in their area, but not very good at learning. And they’re going to be able to use AI to create learning that is going to actually help somebody, versus what they might create themselves, which is typically way too dense, way too much, way too everything, really.
Heather Cole [00:04:08]:
Interesting, because I think people are thinking about it from the opposite perspective: taking people who are in the learning profession, who are architects of learning but may not have a lot of subject matter expertise, and saying they can suddenly pour all this stuff in there, source it from AI that’s pointed either at your internal or even external sources, and say, build me something. I know nothing about these topics, or I know very little about these topics. We already have this structure in place. Here are all the inputs. Now go build something. So it’s interesting that you see it from the opposite side, of making subject matter experts great at developing learning. That is a fascinating, kind of different way of looking at it.
Juliana Stancampiano [00:04:56]:
I think there’s a lot of science out there about how adults learn that I’m hoping makes its way into the AI, and hopefully it’s the good stuff. And I think that’s the interesting part, right? There’s the internal, where you point things to, and then there’s the external. And the companies, at least the ones we’re working with, are treading very slowly, because they don’t want to infiltrate their internal systems with things that are not what they would want to have within their company. So we’re seeing that not even started; they’re just trying to figure out how to build the internal capability to then go and put the information or the content they want within the technology. From my perspective, the person who knows the content best is always the SME. And typically you have the learning person, who knows learning really well, and they pull content from the SME and then create the learning. If they have access to the SME’s content through AI, they could still do that. And I definitely think the reverse is possible.
Juliana Stancampiano [00:06:10]:
If the SME has access and the right ways in which to talk to the AI to build it, it can help from that perspective, because they’re going to know the content and what they want somebody to know and be able to do out of it, versus the learning person, to your point.
Heather Cole [00:06:27]:
So that’s interesting, because some of those SMEs, especially in the enablement space when it’s customer-facing roles, are people actually doing the job. And for a few years now, there’s been this drumbeat around peer-to-peer learning: you learn from the people who are doing it really well. But the people who are doing it really well can, most times, not do much more than a video from a learning perspective. So that opens up a lot of possibilities for making it really super simple for high performers to do a brain dump and have that become something that’s meaningful learning.
Juliana Stancampiano [00:07:04]:
Yeah, as long as they can do the brain dump. I think sometimes that’s the hardest part for the SME, right? And that’s where we ask a lot of the questions. It’s like, how do you do this, how do you do that? And when you start asking those questions, they’re able to explain it to you. I think there are a lot of areas, though, content, sales, and others, that have a lot documented about what they do and how they do it, or how they foresee it happening within the company. And that’s where it’s going to be easiest to pull from. But in a lot of the customer-facing roles, there’s a lot of unconscious competence, and people have a harder time explaining to you what it is that they do because they just do it very naturally. So having somebody who can interview and tease out what that is, so it can be put into the learning or put into the AI, is going to be really important. And I think that’s the interesting thing with AI and learning as well: it creates new roles that we haven’t necessarily had before, or talked about, which is somebody who can understand the business enough to have that conversation, to pull out what it is that’s needed for the company, so that it can then be programmed into whatever internal AI tool the company is using.
Steve Watt [00:08:25]:
Where are we at in this evolution? I’m thinking about that unconscious competence, as you said. So let’s say I’ve got a salesperson who’s an exceptional top performer, all the way. Can I take 20 of their call recordings and 100 of the emails that they’ve sent to customers, and maybe a comparison set of call recordings and emails from some low-performing salespeople? Will the AI be able to make sense of that and suss out what the real differences are between what this individual is doing versus their lower-performing peers, or are we not quite there yet?
Juliana Stancampiano [00:09:09]:
I don’t think that we are there yet, because I don’t think we know. You have to be very specific in your commands and your prompts for AI to be able to do the thing that you want it to actually do. I’ve had team members, and myself, we’ve used it multiple times, and it’s like, okay, that’s not quite what I was looking for. We’ve probably all done this. It’s like, how do I be more specific about it? And so I think that is part of it, right? Do you know exactly what it is that you’re looking for? And can you give all the prompts to be able to suss it out? You’re looking through data, essentially, right? Data mining, in order to find the nuggets you’re looking for of what’s helping this person be successful versus what’s making this person less successful. So as long as we can be very clear and figure out what those prompts are. I mean, the AI is only as good as what it knows and what we put in it.
Juliana Stancampiano [00:10:02]:
Right. And that’s kind of the tricky part, I think, that we’re in right now.
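Juliana’s point about specificity is essentially a prompt-construction problem: what “good” means has to be spelled out before any model sees the data. Here is a minimal sketch of that idea; the function, labels, and focus areas below are illustrative assumptions, not any particular vendor’s API:

```python
# Sketch: encode the comparison Steve describes (top vs. low performers)
# as an explicit, constrained prompt. The resulting string would be sent
# to whatever internally approved model a company uses.

def build_comparison_prompt(top_transcripts, low_transcripts, focus_areas):
    """Assemble a deliberately specific prompt for a win/loss comparison."""
    sections = [
        "You are analyzing sales call transcripts.",
        "Compare the TOP PERFORMER calls against the LOW PERFORMER calls.",
        "Report only differences in: " + ", ".join(focus_areas) + ".",
        "Cite the transcript label for every claim.",
        "",
        "TOP PERFORMER CALLS:",
    ]
    for i, t in enumerate(top_transcripts, 1):
        sections.append(f"[T{i}] {t}")
    sections.append("")
    sections.append("LOW PERFORMER CALLS:")
    for i, t in enumerate(low_transcripts, 1):
        sections.append(f"[L{i}] {t}")
    return "\n".join(sections)

prompt = build_comparison_prompt(
    top_transcripts=["Asked about budget timing before presenting price."],
    low_transcripts=["Opened with a product demo, no discovery questions."],
    focus_areas=["question quality", "talk/listen ratio", "next-step asks"],
)
```

The structure and constraints live in the prompt, not in the model, which is why, as Juliana says, you have to already know what you are looking for.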
Steve Watt [00:10:06]:
So it’s still got to be very much within the realm of conscious differences. I know what I’m looking for, as opposed to here’s a bunch of stuff. I don’t know what I’m looking for, but I just know this person is massively outperforming that person. We’re not there yet.
Juliana Stancampiano [00:10:23]:
No. And I think that’s where it’s kind of funny, because I think people jumped off from there in the beginning, and then a lot of us realized that it was not there yet and that there’s still so much more work to be done. We spoke to somebody, a guy named Travis, on our own podcast. He was actually behind the invention of online fantasy football, so he’s been in the technical space for a long time, and he’s been doing a lot of AI research, through COVID especially. He basically was like, well, I don’t have anything else to do, I’m going to do a deep dive into AI. And it’s still so conceptual, right, where we’re at, versus user-friendly. From the perspective of, yes, we can go out and create some marketing content and ask for better titles, it does a decent job at the basics.
Juliana Stancampiano [00:11:15]:
But as soon as you get into more complexity, there’s a lot more work to be done.
Heather Cole [00:11:19]:
Yeah. So it’s interesting, because we were talking there about two things: can it do these pieces, and how is somebody who is an enabler or a learning professional actually going to work in the future to leverage AI for that? I think part of this is the concept, and it’s out there for a lot of knowledge professions, that the people who are going to be most successful in the future are the ones who know how to talk to the AI and get it to do what you want it to do. And the second piece is that there’s a lot of low-hanging fruit with learning, where it can auto-summarize a lesson, it can write your learning objectives, it can pull out the key points. It can do a lot of that, I wouldn’t call it administrative, but the stuff that enhances productivity. So it begs the question: if it’s going to enhance the productivity of learning and development professionals, what are they going to do with the rest of their time? What does the profession of the future look like, and do you have a view into that, or an opinion on it?
Juliana Stancampiano [00:12:29]:
I think it definitely changes. There are going to be people at the higher end, the learning consultants, who are going to be extremely important, because they’re going to be able to understand the holistic view of what you’re wanting to do, to organize the material, and also to review it and say whether it’s good or not. And then you’re going to get a whole new role, as you said, of people who can prompt the AI, get the information that’s needed, and cull and organize it, but there are going to be fewer of them. I would also say: having creativity in your mindset when you’re going through it. Because that’s what I’m curious about, the spontaneity of AI is not there, right? And so that is going to be a huge asset, I think, for a learning content person: how do you take what you’re given and bring it to life for your audience? I think that’s something that is a conundrum also today, and I’m hopeful that it’s something we can lean into more as learning professionals.
Heather Cole [00:13:36]:
I think one of the things I always find fascinating about your work specifically is that you work with some really monster clients on very specific initiatives they’re trying to do to transform their organizations, but you’re also working with some medium and smaller customers too. So I’m curious: are you specifically using components of AI within your organization, and what are you seeing being done? How are people, you said they’re moving slowly, how are they playing with it in the large companies, and are the smaller companies more likely to jump in with both feet?
Juliana Stancampiano [00:14:13]:
Well, there’s a huge security aspect, which I’m sure you all are experiencing as well. We have to be really careful with anything that’s proprietary, right? So we can’t put anything into any sort of OpenAI tool. Everything has to be locked down. And companies are navigating this right now; they’re trying to figure it out as well. We’ve used it for small things, like, can I get a better sentence? Some of the writing it can definitely help with, and titles, and some of the creative elements that come back, because that’s kind of the marketing content we talked about in the beginning, where it’s probably been used the most. Where we’re looking at it that I think is really fascinating, with one of our larger clients, is: as a company, what is it that we want our people to know about these different soft skills? So if you think about leadership, when I first looked it up, there were over 70,000 leadership books, and there are probably even more now that we can all self-publish on Amazon or whatever. Exactly.
Juliana Stancampiano [00:15:20]:
But which one do you pick as a company to double down on and say, this is our leadership philosophy here in this moment, and so this is what we want to feed into our AI tool, so that if anybody in the company goes looking for this content, we’re getting consistent content across the organization? That’s a huge undertaking for a large company, when you can imagine there’s just so much content available out there. And that’s what some of these engineers are thinking about: okay, as we’re developing it, who gets to decide for the company what goes in and what doesn’t? How do we put it in? Who does this address? Do we parse it into different audiences, based on what your role is in the company and what you’re doing specifically? Because there’s a lot of role-based learning out there as well. So there are so many questions right now, and what we’re trying to wade through is how we start sussing out some of the answers and helping them with that. They’re very excited about it, because they collect all the data around these soft skills and now they have this being implemented and created for the company, but they’re trying to figure out how to bring it in, and when.
Heather Cole [00:16:42]:
And it definitely sounds like that can be the evolution of an enabler, especially with customer-facing roles, where you know what your methodology is for certain sales plays. They may be different in a large company, but you’ve got those different methodologies, and you can feed it very specific things where you know what the parameters are. To me, it sounds like it’s a really exciting time specifically for enablement and the learning professional, because of the high degree of definition that you have in many organizations around what your methodology is, what your process is, what your leadership is, all of those things. So it sounds like it’s going to be a very interesting time, for sure.
Juliana Stancampiano [00:17:30]:
Absolutely. And I think it’s also interesting because it will track changes as we make them in our organizations. And that’s an interesting concept with this. We’ve seen it with a lot of companies; it’s the maturity model, right? So as you mature, you have a sales methodology. As you continue to mature, that may mature as well. We’ve watched that happen across so many spaces. And so how do you continue to feed it and update it, so it’s always giving people the most recent, this is how we work, this is what we do?
Juliana Stancampiano [00:18:05]:
Because almost every year a company’s strategy is shifting and changing, and it kind of just trickles down, especially within the sales and go-to-market areas of the organization, which have to be on top of it. So keeping it as current as possible is going to be just like anything today, but this is going to be huge for AI if you have your people pulling from it to get the content they need to address a customer, especially. And we’re working in a customer-facing area as well, and it’s the same, right? So AI is changing the job of the people that we’re helping, it’s changing the role, and on top of it, we’re feeding the AI content that’s going to be able to help them going forward. So it’s a layered thing happening out there.
Heather Cole [00:18:57]:
Yeah. And that can lead to a lot of scary things happening. I think everybody’s heard the story where ChatGPT made up litigation components that never happened, cited sources that never existed. It wants to please. I guess it’s meant to please, wants...
Juliana Stancampiano [00:19:17]:
To give you an answer.
Heather Cole [00:19:18]:
Yeah, it’s like, I can give you.
Steve Watt [00:19:19]:
An answer and then I’ll make it sound authoritative. Even if it has to make it up.
Heather Cole [00:19:23]:
Exactly. So I heard a story the other day on a call where a customer said they had a rep feeding things into ChatGPT to even source their own website. So they were like, give me a case study on this company. And they were doing it externally because they didn’t have the resources internally. And it sourced a case study, took it from a competitor, and replaced their name in it. That’s the kind of scary stuff.
Juliana Stancampiano [00:19:53]:
And I think the larger companies are just like, okay, hang on, especially the tech companies: we just created this thing, but we all need to tread very carefully, because you can see the lawsuits flying in those moments. And that’s the other thing we have discussed a lot with our clients: the use for your sales roles, or your customer-facing roles, whatever they are. How do you ensure that they are getting the right information when they go looking for it? That’s a huge component of what we’re talking to them about right now, because of exactly what you just said: they go out there, they get not-great information, and then what? So you still need to have the maturity layers in your organization, so that people can help, or things can be run by somebody, as we all learn how to figure this out, so that you don’t put yourself in a situation of ripping off somebody else’s thing or just telling the customer totally...
Heather Cole [00:20:53]:
The wrong thing, that’s completely the wrong thing. Yeah, having managed RFP databases in the past, and seen the way those things can balloon, like, yes, a custom answer for one customer, and now it’s being used for everybody, I can kind of see that happening here on such a large scale. So I guess this gets us to the question: when you think about AI, what scares you the most, either within learning and enablement or just in general?
Juliana Stancampiano [00:21:23]:
As we think about it for what we do, it’s almost exactly what you said: people go too fast, too soon, and we’re ruining our data source when that happens, because we’re inputting information that’s not good. So that’s probably my biggest anxiety right now: everybody that’s inputting information, you know, we have no idea where that’s coming from necessarily, and how good it is. And I would say that was the same issue with the internet. When Google first launched its search engine, there were a lot of questions about where this information came from, and we’ve gotten a lot better and way more mature about our sources and verifying them, you know, a lot more savvy. And I think AI is in a similar spot, right? Where is it coming from? How do we know it’s good information? I think that’s going to be a huge crux for companies as we move forward, especially for sales organizations.
Heather Cole [00:22:24]:
It’s ironic that it’s really good at taking a ton of information and boiling it down to what’s meaningful to people, but then, if you want it to be accurate, you have to limit its capabilities, restricting what information it can tap and auditing what’s going in. So you’re putting reins around the power, but it’s the only way to maintain control.
Steve Watt [00:22:49]:
How far have we come in using these technologies to understand the impact of what we’re doing? We’ve been talking a lot about the creation of learning content, perhaps the delivery of it, but to shift more to the impact of it: let’s say we’ve been putting a real effort into training our people on better discovery skills and better skills around asking great questions. Are we then able to crunch through large numbers of call recordings and other things and actually demonstrate that those who ask more questions, and better questions, are getting follow-up meetings, advancing opportunities, and getting to revenue, bigger revenue, more quickly? I know when I asked you before, can we just go very open, like, I don’t know what I’m looking for, just tell me what good is, the answer was no. But if we narrow it in and say, I do know what good, really good discovery, really good question-asking, is, and we believe it’s critical, can we now use these tools to demonstrate that that is correct?
Juliana Stancampiano [00:24:02]:
I think you’re going to have to further define what that means for the tool, because immediately I go, yeah, you can ask good questions, but if your timing is off or you have the wrong person, it doesn’t matter how good your questions are; it’s not going to have the effect that you’re looking for. And I think we’ve all watched that happen with somebody that’s just, like, drilling questions: what about this, and tell me about this, and you’re like, oh gosh, your timing is off, you’re not reading the other person. Right? So there is the art of questioning and helping somebody learn how to ask good questions, but there’s also the art of the timing of the question. And until we can get through and learn both of those aspects, so that we’re not doing one thing without the other, it’s still going to be difficult. We’re humans at the end of the day, unless you’re going to put two machines to talk to one another, and they might really love each other, but they’d definitely not have a great conversation.
Steve Watt [00:25:08]:
More questions is good, full stop? I’ve been on some discovery calls where it was like an interrogation. It was not good at all. It was an interrogation. It was 18 questions that the seller needed to fill out an answer for, to establish BANT, to fill in their Salesforce instance or whatever it was. And it was terrible for me as a buyer. So, yeah, I’m with you all the way there.
Juliana Stancampiano [00:25:30]:
But how do we clue the AI in to be able to pulse that through, right? How do you explain somebody asking questions at the right cadence, when the AI can’t see the other person? That’s hard, right? When you can’t hear the tone, can’t see facial features, whether we’re on Zoom calls or in person, et cetera.
Steve Watt [00:25:58]:
It would need to be able to know, though, whether I’m just going through a list of 18 questions and not even honestly listening to what you’re saying, because I’m just on to the next question, or whether it’s a dialogue and my next question builds on your response.
Juliana Stancampiano [00:26:13]:
That’s right. So how good are the parameters that we give it, to be able to figure that out so that it can give us good content in return? And I think that’s the hard part. Somebody that doesn’t have a lot of experience wouldn’t be able to do that, because they won’t have been in all the sales calls you’ve been in, Steve, and witnessed and seen this happen, and know all of that. So I do think that’s going to be a really critical part that a lot of us who have a lot of experience play: making sure that we’re giving the right parameters to the AI, so that it’s pulling content that we believe is decent.
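One way to picture the “parameters” Juliana means: a crude heuristic that flags checklist-style interrogation by checking whether each question reuses any substantive words from the buyer’s previous answer. This is purely an illustration of the idea; as she notes, timing and tone are invisible to something this simple:

```python
# Sketch: distinguish Steve's "list of 18 questions" from a dialogue by
# asking whether each question builds on the preceding answer. The
# stopword list and word-overlap test are toy assumptions.

STOPWORDS = {"the", "a", "an", "is", "are", "do", "you", "your", "what",
             "how", "and", "to", "of", "that", "we", "our", "in", "it"}

def builds_on(prev_answer, next_question):
    """True if the question shares a substantive word with the prior answer."""
    prev = {w.strip("?.,").lower() for w in prev_answer.split()} - STOPWORDS
    nxt = {w.strip("?.,").lower() for w in next_question.split()} - STOPWORDS
    return len(prev & nxt) > 0

def dialogue_ratio(turns):
    """Fraction of follow-ups that build on the turn before them."""
    pairs = list(zip(turns, turns[1:]))
    follow_ups = [q for a, q in pairs if builds_on(a, q)]
    return len(follow_ups) / len(pairs) if pairs else 0.0
```

A real system would need the richer signals the conversation keeps circling back to, cadence, tone, who is in the room, but even a toy like this shows why an experienced person has to choose the parameters.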
Steve Watt [00:26:52]:
Right. It does seem like a really powerful place to be focused, because I do worry, and I know Heather and I talk about this a lot, that AI is largely being used as a quantity play. I can create more content, more emails, more training, more this, more, more. We need to focus on better, and that’s harder, right? More is easy. Better is hard.
Juliana Stancampiano [00:27:20]:
More is definitely easy. And I would say there’s plenty out there; we don’t need more. Hence the leadership book example, or you could go into sales books, et cetera. There’s so much out there. And typically, I don’t know about you all, but what we see is that when people aren’t doing things right, it’s the human skills that are lacking, not necessarily the skills that are easier to teach.
Heather Cole [00:27:46]:
Yeah, that’s so true. We used to talk a lot about this back at SiriusDecisions and Forrester: that productivity is really the combination of effectiveness and efficiency, how much you can do and how well you can do it. And if you over-pivot on either one of those, you start to go awry. And part of it, like you mentioned, is the ability for the machine to have all of the context of what’s going on. Even with conversation intelligence and the biometrics being embedded in it, there are all these studies coming out on top of those saying there are racial inequities, that it’s reading faces wrong, ageism, they think my wrinkles mean I’m mad, all of that fun stuff. It’s never going to be perfect; they’re never going to be able to...
Juliana Stancampiano [00:28:43]:
Well, and then you take that, just like you said, and you go across different cultures, and I can bring this full circle for you. I would think that most French people are mad most of the time, but apparently it’s just passion. There you go, right? From my American view, it’s like, wow, they’re really aggressive, and my husband’s like, it’s super fine. So not knowing that across different cultures, again, you’re pulling in data and it’s not accurate, because I’m not reading them right.
Juliana Stancampiano [00:29:14]:
So I’m having to adjust how I read somebody to be able to get the right temperament. That’s hard.
Heather Cole [00:29:22]:
It’s hard. And I’m sure there are computer scientists and AI experts who would argue that you show the machine enough and eventually it’ll get it. You give it enough input and it’ll figure it out. And the question is, to what end?
Juliana Stancampiano [00:29:40]:
Is that what you really need to get from it?
Heather Cole [00:29:44]:
Exactly. So finally, I have one last question for you. If you had one piece of advice for learning leaders who are looking at leveraging AI in their processes right now, what would you say to either do or not do? Or maybe there’s one of both.
Juliana Stancampiano [00:30:02]:
Yeah, it’s interesting, because I want to say, go slow to go fast, which is an oldie, but I feel like we have to remind ourselves of it constantly, because it’s so easy, Steve, as you were saying, to just jump in and be like, oh, we can get all this stuff, right? Instead, create a strategy around it. I would love for the learning leaders to come up with their AI strategy that they can present to their executives and get buy-in on, so that everybody’s on the same page about what we’re trying to do. I think there are a lot of assumptions out there right now about where AI is at and what’s possible, and doing more research, understanding it, and putting that together, I think, would be helpful for any business, especially in the learning space. And I also think: embrace it as well, which is kind of the opposite of what I’m saying in some ways, but it’s here. It’s going to be something that we use. So have people play with it. Set up a, let’s play with it for a while. Let’s not try to use it to meet some end that is in our goals this year.
Juliana Stancampiano [00:31:10]:
That’s going to be really difficult, and you’re probably not going to be very successful. But let’s start playing with it and making some safe spaces, so that your people can get used to it and it doesn’t feel as scary for them. That’s what we’re very much trying to do. It’s like, well, let’s put this in and let’s see what we get. And I have people on my team doing that as well, and being critical, right? So how do you think critically about what the AI gives you, whether it’s good or not, and the usage of it? So I think there’s a lot in there that we can do, but I would say embracing it, and also having a really strategic plan around how we want to approach it, are probably the two things. The customers I see doing that, I think it’s very smart.
Heather Cole [00:31:53]:
Yeah, I think there is some fear out there that it will replace jobs and that it will cause harm. One of the greatest comments I saw on it was when CNN was interviewing one of the top specialists in this area from Stanford, and the question was, do you think this is going to replace a lot of jobs? And he said, well, if you employ psychotic six-year-olds in your organization, then yes, go out and have large language models replace those psychotic six-year-olds. And I think that’s kind of indicative of where this is right now. But six-year-olds grow up, so playing with it and understanding it is great advice.
Juliana Stancampiano [00:32:48]:
Yeah. I think if you’re a leader, making it really light right now for your teams, and letting them have the opportunity to play with it and tell you what they think and do some fun things, makes it less, to your point, Heather, edgy. I think there’s a lot of fear out there: is this going to replace my job, et cetera. In fact, when ChatGPT first came out, I had somebody on my team go in and ask it to write something up about Oxygen, and it came back with something, and he was like, this is really good. And I was like, yeah, because that’s basically what we wrote. It’s just giving you your own work back. So good job. Good job to you.
Heather Cole [00:33:25]:
Wait a minute. Isn’t that what consultants do, Juliana?
Steve Watt [00:33:31]:
They do it all the time. You know, it just looked prettier.
Heather Cole [00:33:37]:
All right, well, thank you so much for joining us, Juliana, today. It’s been great. And I know it’s late in France, so we’re going to let you go, but we really appreciate the conversation.
Juliana Stancampiano [00:33:46]:
Thank you for having me. It’s been fun.
Steve Watt [00:33:48]:
Thanks, Juliana.
[TAKEAWAYS]
Steve Watt:
That was great catching up with Juliana. Heather, what were some of your largest takeaways from that?
Heather Cole [00:33:58]:
You know, everybody is talking about AI, and it is a very popular topic right now. And I found it quite interesting that she diverted a little bit from what we’re hearing. She didn’t go first to the popular topics, content and marketing content, and she thought about it very specifically in the learning space. It wasn’t necessarily about making the rep more effective and efficient, which clearly we want as an outcome, or the enabler more effective and efficient, which is clearly another outcome we touch on later. Instead, she went first to the ability to help the folks who have a lot of stuff in their brains, those SMEs. How do we take what they know and maybe even skip the intermediary, so they can do a brain dump and turn what they know into good learning, instead of having to interpret it to somebody who interprets it to somebody who creates the learning? That will obviously make the entire process much more effective and efficient, and I think that’s something that hasn’t been talked about a lot.
Steve Watt [00:35:03]:
Yeah, I think that’s really exciting. You think about the amount of institutional knowledge that resides in every firm. It’s in the brains of some of your leaders, some of your experts in various spaces, and it’s hard to get it out in manageable ways. It’s hard for that subject matter expert to find the time to really structure it and work it through with learning and enablement people. And it’s also hard for others to fully understand their own subject matter experts and the nuances of what they’re saying. So the idea of being able to brain dump, as you said, is powerful. I’m also thinking we ought to be able to feed in recordings of conference talks, podcasts, and calls these people have done, and then have the AI make sense of all that, so an enablement or learning professional gets a real assist in turning that tremendous amount of knowledge into some structured, consumable delivery. I think that’s going to be incredibly impactful.
Heather Cole [00:36:20]:
It is. And as Juliana said, we’re not there yet. But part of that is feeding the beast, because you have to have a tremendous number of examples of what good looks like for it to learn appropriately and to make correlations. And most companies do not, at least not down to the level of specificity within a role: this is what good looks like talking to a financial services person, but not necessarily what good looks like talking to somebody in the transportation industry. Those nuances need a lot of examples for it to learn appropriately. So, yeah, we’re not there yet, but the possibilities are just amazing, and I can’t wait to see what happens in the coming years.
Steve Watt [00:37:04]:
Yeah. And that takes you to the balance that she spoke of a couple of times in a couple of different ways. On one hand, you need to be grounded in strategy. You need to know what you’re doing and why you’re doing it. But on the other hand, you’ve got to get on with it. You’ve got to just do it. And I see it almost like a road with a ditch on either side. Some companies are going to drive into the ditch of inaction, the ditch that says this isn’t important, or it isn’t yet, or we’re not ready.
Steve Watt [00:37:39]:
But I think others are going to drive into the opposite ditch of going too fast, too far without the strategic guidance and just an insatiable appetite for delivery of more and more content, more and more training. And I think the key, at least what I was hearing from Juliana, is that we need to avoid both of those ditches, and we need to stay on the road, rooted in strategy, but also getting on with it and enabling and empowering our people to get in there and learn and explore. And if we can avoid both of those ditches, we’re going to go a long way on that road.
Heather Cole [00:38:15]:
Yeah. And I think it’s about setting that strategy, as you said, making sure there are guardrails around how it can and can’t be used, and then thinking about those low-hanging-fruit things that can be done really easily, like writing a summary or writing a quiz. Those are things that generative AI can do really well. But then there’s the stuff you were talking about, the art of the possible, that we need to experiment with and play with. We all need to understand what is going to work, what is working now, what we can do with it in the future, and what makes sense.
Steve Watt [00:38:47]:
Exciting times indeed.
[OUTRO]
Heather Cole [00:38:52]:
If you enjoyed this episode, follow the show on YouTube or your favorite podcast app.
Steve Watt [00:38:57]:
And check out gotomarket-magic for show notes and resources.
[CONFERENCE CALL]
Steve Watt [00:39:02]:
Want more conversations like these, but live and in person? Join us at Shift, the annual conference for go-to-market leaders, in San Diego this year. It’s October 23rd to the 26th, and it’s going to be fantastic. Go to seismic.com/shift for registration and more information. Bye!