In today’s Agile Coaches’ Corner podcast episode, your host, Dan Neumann, is going solo. He talks about agile assessments and, borrowing medical terminology, asks whether they are a helpful remedy or a harmful toxin.
If someone receives the right medication for the right ailment, it can be absolutely lifesaving. But take that same medication and apply it to the wrong circumstance, and it can be incredibly detrimental, like a toxin poisoning your system. Dan views agile assessments in a similar way: they can be super helpful tools for organizations trying to understand their current state of agility, or they can be weaponized, damaging team safety and employee morale.
Join Dan to explore this topic and learn how to properly leverage agile assessments in your organization.
- Why organizations look to agile assessments in the first place:
- To establish a baseline for performance
- To look at which teams are doing (or not doing) well
- To look for areas where help can be applied
- To validate assumptions
- The four categories of gathering an assessment:
- Self-reporting
- Externally measuring
- Having an expert come in and observe what’s happening in the teams
- Looking at the inner workings
- Dan’s tips for a successful assessment:
- Having phases in the assessment and planning with intentionality
- Know your “why”
- Collect data and interpret the results in a collaborative way
- For planning, ask yourself: What do you hope to learn? And what decisions might this enable?
- Look to agile survey tools for electronic data collection
- Look to individuals and interactions over processes and tools
- When receiving the assessment, create options for the teams
- Collaborate with the organization being assessed and those doing the assessment to figure out the next steps and how to move forward collectively
- Why an assessment may not serve an organization (AKA: the pitfalls):
- Help isn’t always helpful; sometimes teams just need to work through a problem and sometimes the intervention of an outsider is not particularly helpful
- The information is used to evaluate (i.e. ranking teams)
- Jumping to evaluation and looking to reward or punish
- How to go about interpreting the results of an assessment collaboratively:
- Use open space technology
- Bring the data
- Share observations
- Ask participants to organize around the data and observations and add their own perspective to what is happening
- Identify patterns and ask the team what they have the energy for turning into action
Mentioned in this Episode:
Like what you heard? Check out our podcast page for more episodes full of interesting discussions, agile insights, and helpful resources.
Intro: [00:03] Welcome to the Agile Coaches’ Corner by AgileThought, the podcast for practitioners and leaders seeking advice to refine the way they work and pave the path to better outcomes. Now here’s your host, coach and agile expert, Dan Neumann.
Dan Neumann: [00:16] Welcome to this episode of the Agile Coaches’ Corner. I am your host, Dan Neumann. Thank you for joining me today. Just a brief reminder before we get going: the thoughts and opinions you’re going to hear on the show today are mine. I’m not joined by one of my colleagues or an external guest, so these are my thoughts, not necessarily those of AgileThought or any other company or people. Today we’re going to be exploring agile assessments and, using medical terminology, asking: is an agile assessment a helpful remedy or a harmful toxin? So we’re going to follow that metaphor through, and hopefully it’s entertaining and a little bit enlightening for you. If you have a topic you’d like to hear about, email me at firstname.lastname@example.org or tweet with the #AgileThoughtPodcast and we’ll consider it for a future episode. Think about the medication a patient might receive. If it’s the right medicine for the right ailment, it can be incredibly helpful and even lifesaving. When you take that same medication and apply it in the wrong circumstances, if it gets mixed in with other medications and reacts with them, or simply the dose is too much, that same remedy can become a toxin and poison your system. And I view agile assessments in a similar way. They can be super helpful tools for organizations that are trying to understand their current state of agility, or they can be destructive to team safety and employee morale and essentially become weaponized against teams. And so we’re going to explore that today. The first facet I want to share with you is what I see as the reasons many organizations look to assessment in the first place. And the first one is to establish a baseline of performance. You can think of this like going to your doctor for your annual physical, where they’re going to take certain measurements.
Occasionally they’ll run blood tests, and what they’re really looking to do is establish a baseline so that, when you potentially become ill in the future, they understand what’s normal for you. What does a normal blood pressure look like for you? What does a normal composition of your blood look like? Things like that. They’re trying to really understand the system in its normal state; that way, when things become out of alignment, they have that to refer back to. So establishing some kind of baseline of performance is one reason organizations look to do an assessment. Another might be to look at which teams are doing well and which teams are not. It’s a worthy concern, something we would definitely want to look at, and we’ll explore some ways to make sure that psychological safety is not damaged as we start to explore which teams maybe have some challenges and which teams have certain strengths. Sometimes it’s because the manager wants to help. And for those of you who are interested in the visuals that go along with this particular discussion, they’re available on SlideShare and we will link them in the show notes at AgileThought.com/podcast, because some of the visuals may help the message resonate.
Dan Neumann: [03:50] Help isn’t always helpful. Sometimes teams just need to work through a problem, and sometimes the intervention of an outsider is not a particularly helpful activity. But in circumstances where a functional manager is really looking to improve the environment in which teams are operating, help indeed can be helpful. So managers can look at how the environment can be improved; managers who are unnecessarily giving direction on the tasks are not necessarily providing help. And the last one: sometimes an organization is just trying to validate what they already believe to be the truth. That can be good, as long as it is not done disingenuously. So we do want to approach assessments with curiosity, and that’s healthy, and we want to use them as an enabler of conversations. We’ll explore that a little bit. So: establishing a baseline, seeing what’s going well or not, looking for areas where help can be applied, and validating some assumptions. Those are the four reasons I tend to see when an organization looks for an assessment. There are also four categories of assessment that we see. One you can think of as the medical history form that you fill out when you go into the physician’s office. It’s self-reported, and certainly I’m going to put down my name and my date of birth accurately. Then you get to things like weight. Well, is it a morning weight? Is it an evening weight after a large meal? One might be inclined to under-report their weight if they’re feeling a little heavier than they want to be, or occasionally over-report it. I’ve seen statistics that men generally over-report their height. So self-reporting carries with it a degree to which the data might be skewed. It’s a little harder to skew when the doctor actually has you step on the scale.
So this is where we begin to take a pretty easy measurement, and we can do this with agile teams as well, or with an organization’s agility. There’s self-reporting, that’s one option, and then there’s taking a bit of an external measurement. The third approach, in increasing order of involvement, is really having an expert come in and observe what’s happening on the teams. And so while maybe we could rely on surveys where self-reporting is appropriate, and we can rely on some measures to add additional context, sometimes you really need an expert to watch the actions. An analogy that comes to mind for me would be physical therapy. When I was working through an ankle issue, I went to the physical therapist and she had me do some basic things. She would watch me walk across the room and back, looking at the mechanics of the ankle: something that I wouldn’t be able to self-report and a measure wouldn’t really show, but going to an expert who can watch the action is really valuable. And so I think of this as being a consultant or an agile coach, somebody who really has a deep understanding of how teams work, how, let’s say, the Scrum framework might be well applied or not well applied, and having them watch the team, observe, and provide some pointers from that perspective. And then the fourth one is really digging in and doing some more invasive types of things. I think of drawing blood as the medical analogy here. What do we need to do to really get inside and look at the inner workings of the organization, of the teams, of the interactions that people have? So those are four aspects, or four ways, of gathering an assessment: everything from self-reporting, to a little external measure, to expert observation, and then something a little more invasive that really tries to see what’s happening.
Regardless of which of those four approaches might be appropriate, it’s important to really have some phases to your assessment. I’ve seen it happen where an assessment is started without any intentionality to it, meaning there isn’t any kind of planning. So we want to make sure planning happens, and I’ll share with you some tips on things you would consider during planning. We want to collect data and interpret the results, but do that in a collaborative way. And then, fourth, plan for action; again, I think it’s really critical that this be done in a collaborative way, and we’ll talk about why that is in a little bit. For planning, I think the two most important questions are: what do you hope to learn, and what decisions might this enable? Essentially those help you form an assessment. They help inform where to invest the time and the energy, where to dig deep, and maybe where one might go a little more shallow. As far as collection, electronic collecting can certainly help. So you might look to agile survey tools; there are some of those on the market, and we can put links to those in the show notes. You can also look to electronically collect information from your systems. If you’re running a web app, you might look at the responsiveness of the website. You might look to pull information from your support database and really explore what is causing users to engage with your support system. So: electronically surveying people, and pulling information from systems. But don’t just rely on the electronic part. You also want to look to humans; of course, individuals and interactions over processes and tools is right there in the Agile Manifesto. So it would be a little careless, I think, to not go beyond electronic surveys.
So human interviews and human observations, both at events held in group settings and in one-on-one interactions, are really important as one goes through an assessment and tries to get a deep understanding of what’s going on.
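As a rough illustration of the “pull information from your systems” idea above, here is a small Python sketch. The sample data and metric names are made up for illustration; real numbers would come from your own monitoring and support tooling.

```python
from statistics import median, quantiles

# Hypothetical sample data: response times (ms) pulled from a web app's logs,
# and support tickets tagged by category from a support database.
response_times_ms = [120, 95, 310, 140, 88, 450, 130, 105, 990, 150]
tickets = ["login", "billing", "login", "performance", "login", "performance"]

def summarize(times, tickets):
    """Return a point-in-time snapshot: latency stats plus ticket counts."""
    counts = {}
    for category in tickets:
        counts[category] = counts.get(category, 0) + 1
    return {
        "median_ms": median(times),
        "p90_ms": quantiles(times, n=10)[-1],  # 90th-percentile latency
        "tickets": counts,
    }

snapshot = summarize(response_times_ms, tickets)
print(snapshot)
```

A snapshot like this complements, rather than replaces, the interviews and observations Dan describes; it gives the group something concrete to be curious about.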
Dan Neumann: [10:39] So after having planned, we figure out how we want to go through the data collection, informed by the planning: what do we want to learn, what decisions does it enable, what do we want to be able to do with the information? There are also a couple of other questions that factor into planning that are important. One is: how do you want to be able to slice and dice the data? Are we looking at teams, their specialties, locations, all of the above? And how do you plan to get a well-rounded perspective? You don’t just want the perspective of one particular specialist group. We don’t just want the perspective of the managers, we don’t just want the perspective of the Scrum teams, we don’t just want the operations folks. We want to get business and operations and IT, and representation from lots of different geographies, included; so use that to inform your data collection. Once we’ve got a plan in place, we know why we’re doing the assessment and we know how we’re going to collect the data. I think it’s really important to keep in mind that assessments are good for the here and now; a point-in-time assessment is going to tell you, as best it can, what’s happening now. So having a cadence is often valuable: coming back to reassess, resurvey, or collect additional data at some period down the line, maybe three months, six months, maybe as an annual activity, because we’re really getting a point-in-time picture. With the assessment that we’ve got, it’s then important to provide options. So we want to create options for the teams and realize that what we’re talking about is a complex adaptive system. Organizations are complex animals, and there’s not a single best practice that is going to be appropriate for your next steps. And so with an assessment, we want to look at the options that are being illuminated by the results.
So we might look at different agile frameworks, or how they’re being applied within the organization. We may look to generate more transparency into the workflow that’s happening and start to collect data based on what we see in an assessment. But it’s critical that we then collaborate: the organization that’s being assessed and the folks doing the assessment identify what those options are, and then, as part of the continuous improvement plan, have a conversation about how we’re going to move forward collectively. We’ll talk about some techniques for doing that collaboration in just a few minutes. As for the pitfalls that can come out of doing an assessment, the biggest one that I see is when the information is used to evaluate. By this I mean we start to say, oh, this is a good team, that’s a bad team, or we start to rank teams. At the point where individuals start to feel a lack of safety about the transparency is when the transparency goes away. So my favorite phrase here is: we want to use this as information and not as evaluation. Approach the information we’re learning out of an assessment with curiosity: I wonder why we’re seeing this result, or I wonder why we’re seeing this behavior, or what’s happening with this interaction? And don’t jump to evaluating that as a good or a bad thing, or worse yet, to rewarding or punishing based on certain information you might see, but really approach it with a sense of curiosity. Because once the psychological safety is destroyed, that’s when team members will start to respond, let’s say, less forthrightly. You’re not going to get as honest a picture as you might want if people don’t feel safe telling the truth.
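One way to picture the “slice and dice the data” planning question from earlier: make sure each survey response carries the attributes you plan to group by, so the same data can be viewed per team, per location, and so on. A minimal sketch, with made-up teams, locations, and 1–5 scores:

```python
from collections import defaultdict

# Hypothetical survey responses; each carries the attributes we planned to
# slice by (team, location) plus a 1-5 agreement score.
responses = [
    {"team": "Falcon", "location": "Tampa",   "score": 4},
    {"team": "Falcon", "location": "Tampa",   "score": 5},
    {"team": "Osprey", "location": "Detroit", "score": 3},
    {"team": "Osprey", "location": "Tampa",   "score": 2},
]

def average_by(responses, attribute):
    """Average score grouped by one attribute (team, location, ...)."""
    grouped = defaultdict(list)
    for r in responses:
        grouped[r[attribute]].append(r["score"])
    return {key: sum(scores) / len(scores) for key, scores in grouped.items()}

print(average_by(responses, "team"))
print(average_by(responses, "location"))
```

The point is not the arithmetic; it is that the grouping attributes must be decided during planning, because they cannot be added to the data after the fact.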
Dan Neumann: [15:01] So let’s say we’ve done the collection of the data. How do we go about interpreting the results collaboratively? I’m a big fan of using open space technology. An open space approach is where you have a facilitator who creates a space for the conversation to happen, and I think of this both as a physical space and as putting a boundary on that space with the topic: what are we exploring? You want to convene folks who are interested in that topic in that space. And then, in this case, we would bring the data. What information have we collected about the systems? What survey results do we have? What have we learned from interviews or from sharing observations? So bring the data, share those observations, and ask the participants to organize around the data and the observations and add their own perspective to what’s happening. What are they seeing, what patterns are they seeing, what interpretation do they have of those events, what’s below the surface of this information? And once we’ve identified some of those patterns: what might the team have the energy for turning into action? You might start to form experiments that want to be conducted, and these might not be big experiments. In fact, often the best approach is to conduct a whole series of smaller experiments and see what happens as we start to improve the system. By convening an open space, we’re really inviting everybody who’s impacted by or participated in the assessment to become a participant in helping to create the path forward. And by doing this with a lot of perspectives, a lot of different levels of the organization, and a lot of curiosity about the system, we’re going to get a lot of good options for moving forward. So, much like metrics, which some folks say you can neither live with nor live without,
I think it’s also important to look at how we are assessing and evaluating the performance of groups, of individuals, of specialties and locations, and really make sure that we’re doing this in an intentional way with planning; that we’re getting a breadth and a depth of information, whether it’s from interviews or electronic data or surveys; and then collaboratively decide what our action plan is going forward. So if that was helpful, or if you are curious about assessments or have any feedback for us, again you can email me at email@example.com or tweet with #AgileThoughtPodcast, and perhaps we can go deeper on some of the topics we briefly touched on. Thanks for listening today, and we look forward to hearing your comments.
Outro: [18:23] This has been the Agile Coaches’ Corner Podcast brought to you by AgileThought. Get the show notes and other helpful tips from this episode and other episodes at agilethought.com/podcast.