Good learning outcomes are defined by good measurement
From capability at the point of work to learning transfer, Paul Matthews is always looking to make workplace learning as effective as possible. Here he shares his latest thinking with UNLEASH.
The podcast, digested
Before you can even talk about outcomes, you need to know what you're measuring - and why.
Learning transfer expert Paul Matthews knows that the more questions you ask, the closer you will get to your desired outcomes as a learning professional.
Read on and listen in for the crucial questions you need to ask.
UNLEASH editor Jon Kennard speaks to People Alchemy’s Paul Matthews about learning outcomes, his most recent interest when it comes to HR and L&D’s non-stop drive to upskill, cross-skill and improve the performance of the workforce.
We kick off with Paul issuing some stark words to anyone who’s booked a training program for the wrong reasons…
+++
Paul Matthews: It’s amazing how many training programs start without a lot of clarity over what the outcomes are. Or some stakeholders think they’re clear, but different stakeholders have different ideas about what the outcomes are.
You’ve clearly got to start with the end in mind, that old Stephen Covey thing: you need to know where you’re going. So what you want to do is define those outcomes, and get agreement across the different stakeholders who have a vested interest in what’s going to happen.
But define them in observable terms: what will we see or hear or feel when those outcomes are achieved, or if we don’t achieve them? It’s actually quite useful to ask the stakeholder: if we achieved what you’re looking for at a 100% level, what would you get? What would you see? What would you hear? How would you feel, personally, Mr. Stakeholder? How would you personally know that the program is 100% successful?
And they then have to go inside and figure out ‘how would I know’, rather than just reaching for standard measures. It might be something they observe, it might be the delegates themselves behaving a certain way a few months down the line, it might be figures on a spreadsheet. But a really interesting question is then to say: well, if it was only 50% successful, how would you know it was only 50%?
And that makes them think really carefully about what they are personally using as measures. Because ultimately, when the program’s run, they’re going to be measuring it with their own personal set of criteria, even if there are other measures in place.
So that’s one thing: that definition, coming to agreement across the stakeholders on what those outcomes are, and then getting them signed off. And those outcomes should often be in behavioral terms, unless it’s compliance training or something. But the other thing you’ve got to figure out before you start measuring is the COI, or cost of inaction: if we don’t do this for six months, what’s it going to cost the organization? You’re measuring the downside of doing nothing.
Because that gives you an idea of where it sits in the grand priority of things. In effect, people who have a training need are usually competing for limited resources to service that need, whether those resources are internal or external. So the only way they can really compete, or should be competing, is on the biggest cost of inaction.
Jon Kennard: I think one of the first well-worn clichés I heard when I moved into L&D publishing was ‘what if you train people and they leave, versus what if you don’t train people and they stay’. We’re so focused on return on investment, and rightly so, but the flip side of that, ‘the cost of inaction’, is a really interesting way of looking at it.
PM: You mentioned ROI, and why we measure. I have a bit of a bugbear with some people who get so focused on the ROI stuff; it’s almost like they’re trying to say, ‘we created impact with our learning, therefore my salary or my budget is justified. Please keep paying me’. A lot of L&D people end up measuring with that as the implicit, or sometimes even explicit, purpose behind the measurement: justifying the existence of L&D.
There’s nothing wrong with that as such, but I don’t think it serves the organization that well. It’s important to be measuring the impact of learning and asking: how can we improve that impact? What are we trying to get to? And that doesn’t necessarily mean a monetary return on the investment, because very often it’s about the expectations of those stakeholders. And that’s why we talk about ROI.
And those expectations are based on what I said before: what are the, almost internal, yardsticks those stakeholders are using to know whether a program is working or not? They must be using some yardstick, because in order to even ask for a program, they’ve already decided internally that a program is needed. So they’re already using a set of measures to decide that things are not quite what they want them to be at the moment.
So that set of measures is what they’re going to use to judge whether the program has been successful or not. I think it’s important, if you’re going to talk about ROI, to know why you’re doing it. Better, I think, than justifying the existence of L&D and its budget is to measure in a way that answers: are we achieving those outcomes that we agreed?
And sometimes, of course, you want to measure ROI just to satisfy the CFO’s curiosity, to be perfectly honest, because they’ve written a check and they want to know what they’re getting. So sometimes you’re dictated to produce some return measures, just because the CFO wants some numbers.
JK: I can see that. But then the next question is: from what end are you measuring? Like you said, an increase in people’s engagement, an increase in productivity… ‘productivity’ can be quite a nebulous way of looking at things.
PM: You need to be a bit more exact in how you define that. What you should measure, and the measures you’re going to use, will depend on the outcomes you’ve already agreed. Because once you’ve got a set of outcomes, you’re going to be measuring something that will help you prove whether those outcomes have been attained or not.
So it’s difficult to say in the abstract what to measure. It may be you’re measuring memorized info: can they pass the test now? And can they do it again in a month’s or two months’ time? Pretty basic, and not that useful in my view. Are you measuring observable behaviors: are they doing the things that we want them to do, and are they still doing them in three or six months’ time? In other words, were the behaviors embedded and sustainable over that period? So there’s that.
There are some other related KPIs you might be using: times on calls in a call center, or fix times, or something like that. Then you’ve got those measures that individual stakeholders are using at a personal level. Somebody might say: when I walk in the front door at nine o’clock in the morning, I want to feel ‘X’. That’s a very personal measure a stakeholder might be using about the culture, for example, if the program was designed to move the culture a little bit.
So it very much depends on the outcomes: until you have that set of outcomes, defined in terms that can be measured, you don’t know what the measures are. It’s really important that the outcomes are defined in terms that mean you can measure them; in other words, that they’re observable in some way, whether that’s a first-order effect, because you can see people doing things, or second- or third-order effects, where they might turn up in some data or KPIs.
JK: To digress a little, I’ve just been writing up a recent webinar that we held, and one of the participants said something slightly controversial – I’m guessing I’ve heard it before, but not for a while. She said: if we took away the learning and development department tomorrow, who would notice?
And she wasn’t saying that the profession is dead in the water. I think she was saying that you have to look at the purpose of what you’re doing, move towards a facilitation model, and understand that your value is tied completely to the business; you’re not just cut off in a silo, drawing up courses for the sake of it. So she was talking about taking the outcomes and working backwards, like you said. What do you think about that? Is it slightly controversial, slightly unfair?
PM: Well, no, I mean, it’s a fair question. If something suddenly disappeared, who would notice? I think it’s important for L&D to realize, as you say, they’re part of the greater whole, they are part of the organization, they are a cell in a multicellular organism, not a separate thing sitting on the edge.
And a lot of L&D departments, as you say, will almost sit in their space and design and create programs which someone will go and deliver. But that’s often somewhat divorced from the flow of the business, and that’s bad. Which is why you’ve got to talk to the stakeholders: what is it that you want to see as a result of this program? What becomes really clear with a before-and-after snapshot is the gap between where we are now and where we want to be.
And then: what do we need to develop as a program to bridge that gap? But you’ve got to define that gap. And you’ve got to say: when we’ve crossed that gap, how do we know we’ve done it? What are the things we will observe that will prove to us we’ve crossed the gap, or fallen into the chasm in between? How do we know whether we’ve got those outcomes or not? They’ve got to be observable in some way.
And so, with your classic SMART goals – is it measurable and observable and realistic, and all the rest of it – you’ve got to be thinking in those kinds of terms, and define the outcomes in a way that the set of stakeholders can agree: yes, that’s what I want; if we get that, then we have happy bunny stakeholders, and this is how we will know that we got it.
And that’s really what you’re working towards as an L&D person: creating that set of outcomes, on the assumption that the outcomes being asked for by the business are the ones that will help it execute the organization’s strategy effectively.
But that’s not L&D’s job; that’s the business’s job. So I think it’s also really important, if people aren’t performing or aren’t as productive as they need to be, that L&D doesn’t try to take on that problem; it remains the manager’s problem. L&D should not accept it. They should say: we will help you fix that problem. How will you know it’s fixed? How can we help with the process of getting it fixed? What are all the different things that might need doing, not just from a learning perspective, but other things – environment, process and so on?
Which is why you need a proper behavioral diagnostic, or a task analysis, at the beginning, to look at what the jobs to be done are, and then how we get to the point where those jobs can be done efficiently and effectively by the people who need to be doing them. So you start with the end in mind: what are the jobs to be done, and what do we need to be seeing and observing when those jobs are being done? From there, you can start flowing into how we measure and what we measure.
And all of that depends on the measures that will actually prove to you one way or another whether the outcomes have been achieved or not…
Editorial content manager
Jon has 20 years' experience in digital journalism and more than a decade in L&D and HR publishing.