Topspin on Every Ball

One night a few weeks ago, I was playing ping pong.  On a whim, I decided that I would use topspin on every ball.  Something really cool happened.

I’ve played ping pong since I was a kid, the way a lot of people have: casually and occasionally.  In short, I’m nothing fancy!  Some high school friends I played with were really good, so I at least knew about fancy things like putting spin on the ball, but I never really got good at it.  I had tried over the years to get better, but only half-heartedly while playing the occasional game.

Then I did my little experiment the other night. Within about 10 minutes, my ability to use topspin improved more than it had in the previous 30 years of playing.  It felt like magic.

Doing it on every shot helped me tweak my movements and get better at the basic execution of topspin.  I might have predicted that result, but I would have expected it to take far longer.  What I wouldn’t have expected was to learn how my topspin was affected by factors like the speed and spin of the incoming ball, the ball’s position relative to the table, and my own position relative to the ball and the table.  I was also surprised to learn that in addition to making the ball harder for my opponent to return, topspin greatly increased the chance that I would get the ball on the table in the first place.  In those 10 minutes, I went from winning about 50% of the points to winning about 90% of them.

Remember: it’s not like I never tried to improve my use of topspin before.  So what was different?  I have a few guesses.

First, I think that isolating the skill had a lot to do with it.  Practicing it again and again was like drilling any skill (multiplication tables, dribbling a basketball, piano scales), allowing more fluent application when integrated into a real-world situation.

Second, removing the skill as a variable made it the focus of my experimentation.  In my normal game I would vary my spin from stroke to stroke (left spin, right spin, backspin, topspin, and no spin) on top of all the other variables.  No wonder I wasn’t learning from a series of experiments in which I constantly changed multiple variables at once.  Switching to “topspin every time” turned this into a simpler experiment focused on topspin itself, which let me learn how the other factors affected its behavior.

Third, the contrived nature of “topspin every time” led me to try topspin in situations where I otherwise would not have.  Many shots felt awkward at first, probably because I had never used topspin from that position before.  I quickly got over that, stretching the range of situations in which I could apply the skill.

Thinking about this, it struck me how little focused practice we give to the many things we do each day.  What if one were to pick a skill or behavior, no matter how small, and apply it to every single situation in a given day or week?  The skill or behavior might not fit every case, but would one learn as quickly as I did that night playing ping pong?  What if I said, “This week I will work on improving my listening by never being the first person to respond to a point or question in a group meeting”?  Or “Today I will not send any emails longer than 200 words”?

I’ll report back on applications of this approach, but I welcome comments from any of you who try your version of putting topspin on every ball.

Matthieu’s Playbook

Over the years, people have occasionally come to me and said things like, “We wrote down all of our process in a doc.  Can you look it over and tell us if it’s right?”  I was cringing before they even finished asking me to be some sort of Agile Judge.  I’d scream in my own head, “Writing it all down misses the point!”  But here I am, writing down ideas about process and practice.  What gives?

Part of the genius of the Agile Manifesto is that it doesn’t tell you exactly what to do. It gives you a resilient foundation of values and principles that is grounded in discovered truths, and then lets you figure out how to apply it. Scrum describes process a bit more, but still leaves a lot of open questions. That means that these ideas can flex to just about any situation.  So why do I want to go messing it up by defining that process more prescriptively?

I’ve seen that people just getting started find it all daunting. Even after going through Certified ScrumMaster training, new practitioners may be a little lost as to what exactly to do next.  A year or so ago, I happened to work with three new teams in quick succession.  I realized I was recommending the same basic set of concrete practices to get them started, and I took the scary step of writing them down.

I’ve gone on to present these ideas publicly (AgileDC 2018, Agile Denver’s MileHighAgile 2019).  Whenever I share the playbook, I include a few important notes:

  • The playbook is temporary scaffolding.  It is not supposed to be your new process forever.  Your team will grow in its own direction, and as soon as it has outgrown this playbook, throw the playbook out!
  • The playbook is meant as a coherent set, but it can be used a la carte.  The practices outlined are the ones I like to use to ensure that the flywheel of continuous improvement is turning.  I think that failing to address any one of the problems I target (see the gray-background “villain” slides) risks undermining your continuous improvement practice.  That said, the tools might be useful to you individually, and if that’s the way it makes sense for you to use them, I think that’s great.
  • Feedback is welcome! I shared the doc as a Google Slides presentation with comment rights for all users.  If you have a question or comment about something, ask it via the commenting feature (or just contact me), and I’ll do my best to respond/adjust the presentation.

Good luck!

Matthieu’s Playbook: Tried and True Patterns for Kickstarting Scrum Teams New and Old

The Daily Question

You start your sprint thinking you can get the work done.  But as soon as the sprint starts, new information comes your way: stories are harder than expected, someone is out sick, a blizzard knocks a day out of your team’s schedule, etc.

When do you deal with this new information? Daily standup.

Standup is of course a chance to commit to each other what you will accomplish during the day.  But it is also a chance to replan the rest of your sprint as necessary.  From the Scrum Guide:

Every day, the Development Team should understand how it intends to work together as a self-organizing team to accomplish the Sprint Goal and create the anticipated Increment by the end of the Sprint.

Part of that is answering: are we still on track? If not, what are we going to do about it? Unfortunately, I have found that teams don’t do this naturally. Here is the best trick I’ve found for building and maintaining this important habit:

  • At the end of every standup, ask this question: On a scale of 1-5, how confident are you that–as a team–you will complete the Sprint Goal by the end of the sprint?
  • Count 3-2-1, and then everyone votes with their fingers (5 = very confident).
  • If anyone votes 3 or lower:
    • Ask the voter: Why are you concerned?
    • Ask the team: What can we as a team do to get this back on track?
  • Repeat this for anyone else who voted 3 or lower.
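
If your team collects the votes somewhere other than on fingers (say, in a chat thread) and you want the mechanics spelled out, here is a minimal, purely illustrative sketch of the tally logic in Python.  The 1-5 scale and the 3-or-lower follow-up come straight from the steps above; the function, the names, and the sample votes are my own invention, and the real value is the conversation, not the code.

    # Illustrative sketch only: tally Daily Question votes and flag follow-ups.
    FOLLOW_UP_THRESHOLD = 3  # anyone at 3 or lower gets the two follow-up questions

    def daily_question(votes):
        """Return follow-up prompts for every vote of 3 or lower, lowest votes first."""
        prompts = []
        for person, vote in sorted(votes.items(), key=lambda item: item[1]):
            if vote <= FOLLOW_UP_THRESHOLD:
                prompts.append(f"{person} voted {vote}. Ask them: why are you concerned?")
                prompts.append("Then ask the team: what can we do to get this back on track?")
        return prompts

    # Example: a five-person team votes after standup (names are hypothetical).
    votes = {"Ana": 4, "Ben": 2, "Caro": 5, "Dev": 3, "Eli": 4}
    for line in daily_question(votes):
        print(line)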

In my experience, the hardest part about this is just asking the question.  Teams naturally leap into action trying to help each other and problem solve; they just need the information brought to the surface.  The voting takes virtually no time, and it forces the team members to look from the team/sprint perspective as opposed to the me/today perspective.

I now recommend this practice for every team, old and new.  I hope it helps your team!


A few other things:

  1. This doesn’t work very well if you start mid-sprint.  It’s best to ask the question before closing sprint planning, to ensure that you are actually starting your sprint from a position of confidence (4s and 5s).
  2. If you aren’t using Sprint Goals, change the question to ask whether you will complete all stories by the end of the sprint.
  3. Here’s a PDF of the daily question you can print and put on your team’s scrum board as a reminder.

When OKRs Attack: A Framework for Reviewing OKR Problems

You may have heard the reminder that the daily standup meeting in Scrum is not a status meeting; it’s a commitment meeting.  It’s a chance to pick a focus for the day, share it with your team, and push everything else to the side.  I like to think of it as giving yourself the gift of focus.

A few years ago, I was reflecting on failures with my own personal daily commitments.  Though I wasn’t on a scrum team, I was setting out my tasks at the start of each day.  Rather than saying them out loud to my team, I wrote them down.  I found a few frequently recurring categories of problems.  Later, doing retrospectives on OKRs, I was reminded of these categories.  The timescale was different, but the problems were largely the same.

There are seven categories.  The first six are typical problems that trip us all up, especially in the first two to three quarters of OKR adoption.  Once we can routinely manage those six hurdles, we run into the seventh, which is a catch-all: execution.  It’s a good thing when your main failures (if any) come from execution.  It means that avoiding those other pitfalls has become routine, and you can dive into fine-tuning your performance.

Sometimes you will fall short of a commitment for a combination of these reasons.  The value of this framework is not in perfectly assessing which category was to blame; it’s in helping you pick apart what could otherwise be a confusing jumble of conflated problems.  Do your best to pinpoint the biggest reason you failed to hit each Key Result, consider what you would have done differently if you could do it again, and keep this in mind as you head into your next set of OKRs.

Underestimation
The work was bigger than you expected.

  • Examples:
    • A known step took longer than expected.
    • You discovered unknown steps along the way.
  • Remedies:
    • Roughly plan the sequence of events for the quarter.  Backwards planning can be good here.  Be careful not to overdo it, though: if you do, it will be harder to respond to new information as it comes in.  Even if you manage to throw away your plans when things change, you will have lost that planning time forever.  So keep the planning rough and high-level.
    • Do more up-front risk/dependency analysis.
    • Write outcome-based OKRs that give you more latitude to adjust.

Overcommitment
The work was about what you expected; you simply took on more than you could possibly achieve.  We all do this.  Remember: there is no virtue in taking on more than you can possibly accomplish.

  • Examples:
    • Too many OKRs.
    • OKRs too big/ambitious.
  • Remedies:
    • Don’t overcommit!  Easier said than done, of course.  But if you have 2-3 straight quarters of data telling you that you took on too much, it will be a little easier.

Blocks
You felt unable to proceed at some point.

  • Examples:
    • Had to wait five weeks for department X to send material.
    • Development environment databases down for five days.
  • Remedies:
    • Is the block really, completely out of your control? We often take too passive a stance in response to blocks.  Don’t let them defeat you!  What have you tried? Could you have acted sooner or gone further?  Could you have escalated or sought other assistance to unblock yourself?
    • Before the quarter starts, take ownership of identifying all critical needs and gaining commitment from others on meeting them.
    • Write outcome-based OKRs that give you more latitude to adjust.

Distractions
You focused on something other than the OKRs you set.  Death by a thousand cuts of distraction is one of the most common problems, and one of the hardest to avoid.

  • Examples:
    • Prioritized other work over OKRs.  This often happens unconsciously: something else comes up, and you do it without considering the negative impact on your prior commitments.
    • Went an extra mile on a given OKR.  Going the extra mile is often seen as a good thing, but you have to consider the impact of that extra work on other priorities.  Remember that every choice to do something is also a choice not to do something else.
  • Remedies:
    • Strictly check the impact on your OKRs whenever other work shows up.  Will doing the work jeopardize your chance of success with the OKRs?  If so, is it higher priority than the OKRs?  And if it is, what is the likely impact on your OKRs?  Share that impact with all stakeholders before proceeding.
    • If you’re about to give something an extra touch, ask yourself what impact that has on your other OKRs.  If you are taking extra time, it has to come from somewhere!

Poorly Written OKRs
OKRs were too task-oriented or too vague about what done would look like.

  • Examples:
    • I finished the task-oriented Key Results, but missed the spirit of the objective.
    • Partway through the quarter, I abandoned the Key Results because I realized they were the wrong approach.
    • I had Key Results like “Start initiative X” or “Make progress on project Z”.
  • Remedies:
    • Remember: Key Results aren’t what you do (tasks/outputs); they are what happens as a result of what you do (outcomes).
    • Remember: Key Results show what you will finish, not what you will start.
    • When setting a Key Result, imagine yourself scoring it at the end of the period.  Will it be easy to tell whether or not the work is done?  If not, rewrite the Key Result with a more definitive finish line.

Mid-Quarter OKR Change
If at all possible, you should avoid doing this.  But sometimes you have no choice. Two examples:

  • A new demand appears out of the blue, taking priority over your OKR.  A big demand, not a dentist’s appointment.
  • After some progress, you learned that the entire OKR should be scrapped.  Could be due to a change in the business environment, a major technical miscalculation, etc.

If you must make such a change, clear it with all stakeholders first.  Then, at the end of the quarter, make sure you look back to figure out what happened and how to avoid such changes in the future. If this happens to you more than once or twice a year, you definitely need to re-examine what’s going on.

Execution
If the problem is not in one of the six categories above, congratulations! You’ve avoided the common pitfalls.  You are likely operating with a much higher degree of focus and alignment than you were before adopting OKRs.  But the work continues: without the confusion of those common pitfalls, you can now start fine-tuning your approach to maximize achievement of your outcomes.  Good luck!

Up Periscope!

“Inspect and adapt” is a key phrase in Scrum.  Many parts of the framework ask you to periodically check in on something, see how it’s going, and modify your next steps as appropriate.  One of the inspect-and-adapt meetings is Sprint Review, where you inspect the product increment (Scrum’s fancy way of saying “what you built in the last sprint”).  There are a bunch of important things to check, but here I’d like to focus on this one:

  • How are we doing toward our longer-term goals?

During sprints, we are cruising along, focused on the details in front of us.  It’s incredibly important to look up from what you are doing and make sure you are still headed where you meant to go.

Here is one particularly effective way I’ve found to connect this inspection question to adaptive actions.  It assumes that you are using OKRs (Objectives and Key Results) set on quarters, but you can easily swap in whatever mechanism/time period you are using to set goals beyond those of the sprint.

  1. Bring up the OKRs on a shared screen.
  2. Read out the first key result.
  3. Ask the question: “Are we on track to complete this key result by the end of the quarter?” If people aren’t ready to vote, discuss.  Otherwise, on the count of three, everyone votes as follows:
    • Thumbs up = Green: We are on track.
    • Thumbs sideways = Yellow: We think we will complete this, but we have some definite risks.
    • Thumbs down = Red: At present, it doesn’t look like we will complete this.
  4. Discuss differences between the votes.  Revote as necessary.  Once you have a consensus, if it is yellow or red, work through these follow-ups:
    • Why is it yellow/red?
    • What specific actions can we take to get it back to green?
    • Write the answers somewhere everyone can see.
  5. Repeat until you’ve gone through all key results.  (Note: I don’t find it as useful to do this for the objectives, but if you want to do that, knock yourself out!)
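
If your team keeps the “write the answers somewhere everyone can see” record in a shared file, here is a minimal, purely illustrative sketch in Python of what that record might look like.  The structure and field names are my own invention; the sample key results and the Juniper/Max note are borrowed from elsewhere in this post.

    # Illustrative sketch only: one record per Key Result from the Sprint Review check-in.
    from dataclasses import dataclass

    @dataclass
    class KeyResultCheckIn:
        key_result: str
        status: str        # "green", "yellow", or "red" -- the team's consensus vote
        why: str = ""      # filled in only for yellow/red
        actions: str = ""  # specific actions to get back to green

    checkins = [
        KeyResultCheckIn("20% fewer help desk calls on topic x", "green"),
        KeyResultCheckIn("Produced first Department podcast", "yellow",
                         why="Concern over content being ready for us in time.",
                         actions="Juniper escalating to Max."),
    ]

    # Surface anything that is not green so it gets revisited before the next review.
    for c in checkins:
        if c.status != "green":
            print(f"[{c.status.upper()}] {c.key_result}")
            print(f"  Why: {c.why}")
            print(f"  Actions: {c.actions}")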

A few common questions about this technique appear below.

If we’re behind, can we really afford the time to have a conversation about risk?
A frequent mistake is to stop after step 3 and say, “Oh, boy, we’re in trouble!  We’d better get back to work!”  It’s very tempting to do this: you just identified that you are behind, and it feels counterintuitive not to jump right back to work.  Think of it this way instead: you have identified that you are on a course to failure, so why rush back onto that course?  Take a moment to look at the risk in more detail, as described in step 4.

Why write the answers about risk somewhere everyone can see?
As a general rule, it’s a great practice to record the conclusions of a conversation in a shared place.  It greatly reduces the risk that participants come away with different memories of the conversation.  In addition, you can refer back to it, and if you make it fully public it can help communicate status to people outside the team.

How detailed should my answers be?
Only as detailed as is useful for your team. For example, you might write something as simple as: “Concern over content being ready for us in time. Juniper escalating to Max.”

How do we rate a Key Result we haven’t started work on yet?
You should rate every Key Result in every iteration.  Again, rate it based on your confidence that you’ll complete it by the end of the period.  Just because you haven’t started something doesn’t mean it’s at risk.  Let’s say you are renovating a bathroom.  Painting the walls might be one of the very last tasks.  Early on, you can still be confident that you’ll finish the painting on time even though you won’t start it until much later.  On the other hand, if earlier work starts running behind, you might start to worry that you won’t get the painting finished.  In fact, the piece that is running long (say, building the vanity) might be green, meaning you’ll finish it by the end of the period, while its overruns mean that later pieces (like painting) switch to red.

Why not replace the confidence ratings with our percent done?
You should certainly consider how much you have completed so far, but percent done is not a great indication of whether you will get it all done on time.  Being ahead right now doesn’t necessarily mean you’ll finish on time, and being behind right now doesn’t necessarily mean you’ll finish late.  Since the point of this exercise is to inspect progress and adapt appropriately, we need a signal that tells us whether to adapt.  Confidence that we will get it done by the end of the period is simply a better indication of whether or not we need to change our plans.

What’s the Hippopotamus?

Me: I think the big bites can burn your mouth more than small bites, because there are more molecules–those little balls bouncing around–that can bump into your mouth and warm it up than there would be in a smaller bite. At least, that’s my guess.

My young daughter: That’s your hippopotamus?

Me: Yes.  That’s my hippopotamus.

In my last post, I wrote that teams past the prototypical startup phase should still use the approach of validating hypotheses, highlighted in Eric Ries’ The Lean Startup.  I left off with the question: where should teams find hypotheses to test?  See the answer in my full post on the Amplify Engineering blog.


The Lean Grownup

Eric Ries’ The Lean Startup holds a lot of great advice for prototypical startups, as well as for innovation efforts within more established companies.  But some on more established teams might think this book has nothing for them.  I suggest they think again! Although a couple of the lessons are a little less applicable later in the product lifecycle, much of the core thinking of Ries’ book is not only still usable, but crucial to understand.

I explain why in my full post on the Amplify Engineering blog.

Measurement Patterns for OKRs

In writing key results for OKRs (or generally trying to find measurable goals), I have noticed several patterns of measurement.  I find this list useful when I am trying to brainstorm ways to measure a given initiative.  I can ask myself, “Hmm, is there a Symptom measure I could use here?”  Gives the imagination a little kick.  This is especially useful when I am stuck with a bunch of output-based key results and I am trying to convert to something outcome-based.  Will post more about that later; for now: it’s basically about trying to avoid the trap of measuring what’s easy to measure as opposed to what’s important to measure.

  • Milestone
    • Binary result; you do it or you don’t.
    • Example: Achieved XyZAB Certification.
  • Metric
    • You change some measure by some amount.
    • Example: Sales up 35% over same quarter last year.
  • Pioneer
    • You do the first of something, forcing you to learn and solve problems along the way.
    • Especially useful when you are moving into a whole new area.  Lays groundwork for future work.
    • Example: Produced first Department podcast.
  • Canary in the Coalmine
    • You measure the whole by measuring a predictable outlier.
    • Example: Perennially dissatisfied customer said some form of “very happy”.
  • Symptom
    • You measure the true (and difficult-to-measure) outcome you actually want by detecting a symptom it creates.
    • Example: true outcome is “increase customer knowledge of topic x”; symptom-style measurement is “20% fewer help desk calls on topic x”.
  • Stepping Stone
    • You believe that by achieving a given result, your true outcome will follow.
    • Especially good for cases where the outcome significantly lags the work you do.
    • Example: true outcome is “people enter key data in SalesForce”. You believe they don’t because your SalesForce implementation is a clutter of unnecessary fields. Stepping-stone style measurement is “Number of SalesForce fields cut in half”, based on the theory that sometime after the cycle, people will as a result start using SalesForce more regularly.
  • Straight Face
    • You make an assertion that you cannot currently say with a straight face.  Your goal is to get to the point where you can say it with a straight face.
    • Good where quantitative measures are impossible.  However, it’s squishy.  Use with caution.
    • Example: I am in good shape.

Resistance is in the Eye of the Beholder

Before I start, I have to give credit to Esther Derby for nearly 100% of the value of this post.  She does a free (free!!) monthly Q&A conference call.  During one such call a little over a year ago (“Reframing Resistance for Positive Outcomes”), she opened my eyes to a critical point.  Here’s the very first sentence of Esther’s discussion that day (not verbatim, but close): what resistance really means, if you look beyond the frustration, is a person not going along with your suggestion as enthusiastically or quickly as you would like.

Just this sentence sparked a major perspective shift for me.  The last four words draw attention to the part the initiator of the change plays in the feeling of resistance.  Think of it like Newton’s third law: if I punch someone in the face, their face pushes back on my hand.  My hand might really hurt, but would it be fair of me to think ill of them for pushing back on my knuckles with their cheek?  I’m obviously playing up my role as the aggressor in this situation, but even in this extreme example, I might have good intentions (e.g., it looked to me like they were about to hurt my friend).  Regardless: I took part in creating the pain I felt in my hand.

While this analogy is silly in some ways, I really like that it highlights something huge that I used to overlook when complaining about resistance: how does the other person feel about it?  Just as focusing on the pain in my hand is unfairly ignoring the pain in the other person’s cheek, complaining of resistance is ignoring the position that I have put the other person in.  They very likely don’t like pushing back, but I am forcing them to do so.  (Of course, they have the option of just giving in, but neither of us does well in that outcome.)

Esther went on to cover some very useful techniques for working through how to approach perceived resistance.  Both techniques involved shifting one’s perspective: in one case, re-framing the perceived resistance to find its potentially positive components; in the other, seeking an understanding of the other person’s point of view.  Doing the exercises really drove home the truth of her opening statement: every example of resistance I could think of fit that definition.

One might consider this massive self-deception: if I always slide around to the other person’s perspective, am I simply explaining away resistance that actually exists?  Well, if my interest lies with progress that respects people for who they are, and if understanding their view helps me achieve progress, I don’t think I really care.  Regardless, I truly have come to believe that–in all but very rare cases (e.g., deep personal animus)–resistance is just one’s own view of the feelings one creates by pushing on someone else too fast, too hard, or too insensitively.

The tools Esther shared that day are great, but the really cool thing she did was helping me see the futility in fussing over so-called resistance.  Since then, I’ve stopped using the word in this context.  That’s a pretty damned good result for a free Q&A call.  Thank you, Esther!!