Anzeneering

Posted January 21, 2014 by Joshua Kerievsky


Want to know what decades in the software field have taught me?

Protecting people is the most important thing we can do, because it frees people to take risks and unlocks their potential.

I call this Anzeneering, a new word derived from anzen (meaning safety in Japanese) and engineering.

Every day, our time, money, information, reputation, relationships and health are vulnerable.

Anzeneers protect people by establishing anzen in everything from relationships to workspaces, codebases to processes, products to services.

Anzeneers consider everyone in the software ecosystem, whether they use, make, market, buy, sell or fund software.

Anzeneers approach failure as an opportunity to introduce more anzen into their culture, practices, and tools.

By making anzen their single driving value, anzeneers actively discover hazards, establish clear anzen priorities and make effective anzen decisions.

Protecting People From What?

Anzeneers protect:

  • Software users from programs that hurt their ability to perform their job well, waste their time, annoy them, lose or threaten their data or harm their reputation.

  • Software makers from poor working conditions, including hostile relationships, death marches, burnout, hazardous software (poorly designed, highly complex, deeply defective code, lacking even basic safety nets like automated builds or automated tests), insufficient testing infrastructure, poor lighting, uncomfortable seating, excessive work hours and insufficient exercise.

  • Software managers from the stress and consequences of not delivering, insufficient insight into progress, poor planning and sudden surprises.

  • Software purchasers from software that damages their reputation because it doesn't meet expectations or isn't used.

  • Software stakeholders from losing large investments and marketplace credibility because of doomed software efforts.

An Agile & Lean Common Denominator

Anzen is a common denominator of every Lean and Agile practice.

Lean Startups protect our time and money via minimum viable products/features, validated learning and innovation accounting.

Extreme Programming's technical practices protect us from complexity, stress and defects via simple design, automated testing, continuous builds, test-driven development, refactoring and pair-programming.

Kanban protects us from bottlenecks and decreased flow via visualized work, limited work-in-process and classes of service.

Lean UX protects us from poor user experiences via interaction design and usability evaluations.

Retrospectives protect us from repeating the same mistakes.

Sustainable pace protects us from burnout, poor health and isolation.

Continuous deployment protects us from stressful, error-prone releases while enabling safe, high-speed production improvements.

Protecting people underlies every Lean or Agile practice.

Anzeneers make this protection their explicit, driving value.

Cultivating An Anzen Culture

When General Motors compared its safety engineering, enforcement and education to that of Alcoa (a leader in safety), it found the two programs virtually identical.

Yet Alcoa had an amazing safety record and GM did not.

The difference was that Alcoa had a genuine safety culture.

Dr. Steven Simon, a student of Abraham Maslow (creator of the famous Hierarchy of Needs), invented the idea of "safety culture" in the early 1980s, when most folks thought he was nuts to be talking about such a thing.

He astutely observed that culture "supports or undermines your safety program" and "drives safe or unsafe behaviors."

For example, mixed messages about safety and performance (such as "We care about your safety, but please deliver as fast as possible") can lead people to work in highly unsafe ways.

Dr. Simon said, "The premise of culture-based safety is that the individual's behavior is a product of the group’s culture and particularly of the norms mirrored and modeled by leaders, formal and informal."

If you want to see new employees emulate unsafe behavior, have them work beside coworkers who routinely undermine or bypass a safety practice.

To cultivate a genuine safety culture, people must be empowered to uncover unsafe assumptions or shared beliefs and establish norms that drive safe behavior.

Safety culture work is now recognized as an essential part of safety programs in automotive, aerospace, energy, food, pharmaceutical and transportation companies.

And it is practically non-existent in most software organizations.

Anzeneering is here to change that.

To effectively protect people, anzeneers must cultivate an anzen culture.

Such work involves embarking on a multi-year "safety culture journey" that is "grassroots-led and management-supported," as Dr. Simon says.

Future posts on anzeneering will share what we are learning about the safety culture journey from Dr. Simon and other leaders in the safety field, including Amy Edmondson, a Harvard Business School professor who is doing outstanding research and writing on team psychological safety.

Safely Taking Risks

Anzeneering does not mean playing it safe; not taking risks can itself be inherently unsafe.

Anzeneers figure out what is or isn't a safe risk.

They avoid faux safety: that which masquerades as safety but fails to deliver genuine protection for people.

Faux Safety: Not Speaking Up

Do you say nothing at all when you have a dissenting opinion because it feels safer not to speak?

That is faux safety, since your dissenting opinion could potentially be quite valuable and protect people from a problem.

In her excellent book, Teaming, Amy Edmondson describes how climates that are psychologically safe expect and welcome dissenting views.

What is safe for one person may not be safe for others.

For example, programmers who have job security because they write hard-to-understand code that only they can maintain/extend put colleagues at risk by being a single point of failure.

Genuine safety provides protection for ourselves and others.

Enabling Excellence

W. Edwards Deming's First Theorem says nobody gives a hoot about profit.

Managers who push profit over everything else rarely get the cooperation, motivation or outcomes they seek.

An early 1980s initiative to improve the quality of aluminum at Alcoa failed because it was a management-driven initiative with no real connection to what workers needed most.

As Paul O'Neill later demonstrated at Alcoa starting in the late 1980s, when people are empowered to seek out and eliminate the greatest hazards to their health and safety, their performance improves, quality rises, and they feel safe to fail, innovate and pursue excellence.

Anzeneering at Industrial Logic

Anzeneering now defines how we work at Industrial Logic.

It's improving how we protect people in our training, coaching, development, teamwork, hiring, operations, sales and marketing.

While we were once agilists, we are now anzeneers.

We are helping our clients become anzeneers by helping them answer questions like:

  • What harms or endangers people the most in your software ecosystem?
  • What would need to change to remove or reduce those hazards?
  • What do people need to feel safe to take risks and explore their potential?

I will be giving a keynote speech about Anzeneering at:

If you attend one of those conferences, I would love to meet you and hear about your safety journey.

I would like to thank the following people for reviewing and helping me improve this post: Pam Allio, Sandra Browne, Amr Elssamadisy, Chris Freeman, Alexandre Freire, Ashley Johnson, Tracy Kerievsky, Achi Oseso, Tim Ottinger, Miguel Peres, Ingmar van Dijk, Bill Wake and Ruud Wijnands.
Comments

  • Khurshid Akbar

    Good post; we had forgotten about the “First, do no harm” oath.

  • Tim Wright

    There’s a fantastic book about process improvement called “Chasing the Rabbit” – it talks about (amongst other things) how to make fast, effective processes that focus on safety. Many of the things they talk about have strong parallels with agile methodologies and processes.

  • Tim Ottinger

    I did a talk in New Zealand at the Canterbury Software Summit.

    The intent was to talk about lessons learned in 30+ years of software practice. We talked about “what works and what doesn’t,” and we admitted to ourselves that Agile and Scrum are fashions (like all software practice) but that they’ve helped to elevate practices that really work.

    All of the things that have worked have been things that make us safer and unlock our ability to do better, more valuable work.

    All of the things that didn’t work were things that took away our humanity, choice, and freedom.

  • Lyndsey Lynch

    I think “faux safety” is such a tough problem. Or misapplication of safety principles, as when managers attempt to “protect” engineers from interacting with stakeholders. I look forward to reading Amy Edmondson’s book and hope it advises on changes teams can make when they recognize they are engaging in “faux safety” practices. As a manager, I always want to support genuine safety, but it’s an area where continual attention and improvement are needed.

    • Joshua Kerievsky

      Thanks for sharing your thoughts Lyndsey. I don’t think Amy deals with faux safety so much in her book. I hope to write more about faux safety in the software field, since I think that such knowledge is important to becoming an anzeneer.

  • Pingback: I once was an agilist | Shisōka

  • Pingback: I once was an agilist | Ruud's BS

  • Pingback: I once was an agilist « Agile Coaching

  • jonkernpa

    +1

  • Marvin Toll

    Yesterday I was explaining a discomfort with needlessly refactoring a stateless class into a mutable, stateful class. My response was garbled as it swung from eliminating the potential for @ApplicationScoped to the potential concurrency implications.

    It was not until 4:30 AM, lying in bed, that the light bulb went on… why didn’t I express my experience-based discomfort with an Anzeneering assertion? I was uncomfortable because we were needlessly introducing a safety hazard into the code base!

    If this approach works, we don’t have to discuss the probability that this class will ever be used in a multicore (concurrency) context… we don’t have to speculate on the potential impact of future scoping decisions etc. We can simply agree… hey, let’s not create a safety hazard if we don’t need to… OK?
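
    To make the hazard Marvin describes concrete, here is a minimal, hypothetical sketch (the bean, its fields and its methods are invented for illustration): a CDI @ApplicationScoped bean is a single instance shared by every request thread, so adding a mutable field to a previously stateless class introduces a data race.

    ```java
    import javax.enterprise.context.ApplicationScoped;

    // Hypothetical example: CDI creates ONE instance of an
    // @ApplicationScoped bean for the whole application, so every
    // request thread shares the fields below.
    @ApplicationScoped
    public class ReportRenderer {

        // Mutable state introduced by the refactoring: concurrent
        // requests now race on this field.
        private String lastTitle;

        public String render(String title, String body) {
            this.lastTitle = title;               // unsynchronized write
            return this.lastTitle + "\n" + body;  // may read another thread's title
        }

        // The stateless original had no such hazard: parameters and
        // local variables are confined to the calling thread.
        public String renderStateless(String title, String body) {
            return title + "\n" + body;
        }
    }
    ```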

    • Joshua Kerievsky

      Hi Marvin,

      It’s great that you were able to see how the language of safety and hazards could have helped in that situation.

      As an Anzeneer, one question I often ask myself is whether or not I understand someone else’s perspective before I even point out potential hazards or offer suggestions for improvement.

      Imagine that the refactoring example you are citing is from a new collection of exercises for students, rather than real production code. In that context, I’d likely inquire whether the authors of the exercises were aware of the concurrency hazard and how they planned to deal with it. Perhaps they planned a later exercise to address concurrency hazards but wanted to focus on something else in the example you critiqued?

      You may still find their response to be too risky, yet it opens up a respectful dialogue about hazards, risks and safety for students and programmers in your company.

      In essence, I’m saying that using the safety language may not be enough. You also have to work to understand the context, understand the positions of the people involved in what you observe to be hazardous and work towards what I called “reciprocal safety” in my Anzeneering keynote. Doing that will help establish psychological safety, which is essential to fostering a safety culture and high performance teams.

      Hope that helps!

      • Marvin Toll

        Speaking hypothetically – let’s say you worked for a company that values low-cost sourcing… and is very good at executing on that value.

        And… you want to encourage a ‘safety culture’ by including Anzeneering in your training. Do you have a sense whether one is better served by weaving safety concepts into the curriculum as you go… or having a separate topic?

        To make the question more concrete… let’s say you are concerned with the hazard posed by Java’s ability to have mutable state in classes… in contrast to Scala where immutable values are built into the language.

        And let’s say you are teaching a unit on TDD. Would you be inclined to guide folks away from using mutable state, or save that conversation for another day under the explicit topic of Safety Engineering?

        Asked another way, do folks tend to learn better when talking about hazards and safety in the context of Anzeneering… or are they initially able to grab hold of the concepts while operating outside of an Anzeneering context?

        A mere 169 hours into Anzeneering and counting,
        Marvin

        • Joshua Kerievsky

          Anzeneers are always concerned with safety and removing/reducing hazards, failures, injuries and near misses.

          Mutable state and the concurrency problems associated with it are a genuine coding hazard. Add “low-cost sourcing” into the mix, and defending against them is even more critical.

          Another coding hazard is difficult-to-understand code. If code is hard to understand, it’s hard to maintain or extend. If a refactoring transformed such code to be easier-to-understand, it would be useful. If that same refactoring also introduced a new concurrency hazard, an anzeneer would acknowledge it and work to remove it while also keeping the code easy-to-understand.

          TDD helps us evolve simple designs safely. Refactoring is a key part of TDD and is a critical practice for removing hazards in code. Learning the various coding hazards and how to safely remove/reduce them ought to be part of any good TDD curriculum.
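
          As a small, hypothetical illustration of the point about removing a hazard while keeping code easy to understand (the Money class below is invented, not from the post): one such refactoring is extracting an immutable value object, which eliminates the mutable-state hazard entirely.

          ```java
          // Hypothetical sketch: the refactored class is easier to
          // understand AND immutable, so the concurrency hazard
          // never appears.
          public final class Money {

              private final long cents;        // final: set once, never mutated
              private final String currency;

              public Money(long cents, String currency) {
                  this.cents = cents;
                  this.currency = currency;
              }

              // "Mutation" returns a new instance instead of changing
              // this one, so a Money can be shared freely across threads.
              public Money plus(Money other) {
                  if (!currency.equals(other.currency)) {
                      throw new IllegalArgumentException("currency mismatch");
                  }
                  return new Money(cents + other.cents, currency);
              }

              public long cents() { return cents; }

              public String currency() { return currency; }
          }
          ```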

          • Marvin Toll

            If I may communicate back what I think I heard:

            1. That hazard identification and safety remediation should be considered at all times when coding — including during training.

            2. That the balance between over-protection and under-protection (previously communicated as over-engineering and under-engineering in your ‘Refactoring to Patterns’ book) should be considered.

            3. That balance between competing safety solutions for competing safety hazards should be considered.

            And of course… it is the mature Anzeneer that at some point begins to perform this balancing act almost intuitively. :-)

            Reasonable summary for getting started?

          • Arlo Belshee

            Nit: I hold refactoring to be the core practice. TDD is a useful adjunct that can enhance the value of refactoring.

            Refactoring without tests (using a tool and not doing rewrites) gives about the same results as refactoring with tests (good design, few to no bugs written). Corey Haines does this about once per year; I’ve done it a couple of times. Full XP with one mod: a no-tests rule.

            Test first without refactoring is much less useful than with – and actual unit testing is impossible without refactoring.

            Tests add one advantage to refactoring: they show you what you don’t already know about design. Of course, so does a pair partner who asks “would this be usable out of context?” and “what would this be named?” In any case, TDD can help me see places to improve my design skill. Improving that skill comes from pairing and refactoring. The skill is applied via refactoring. You can always choose to solve a problem using only today’s skill.

            Of course I recommend all of it together. But making the primacy of refactoring clear helps people quickly get past the “write bugs, find them, pick some to fix, fix them” approach and to the “redesign until you can add code without bugs, then do so, then ship” mentality.

        • Arlo Belshee

          Anzeneering is a culture. As such, it is not a topic: it is an explicitly recurring theme in each topic that ties them together. It should be one of the key ways that learners apply their prior knowledge to learning new topics.

          Assuming, of course, that that is the culture you want to establish. I think it is a great choice (I’d recommend it), but your culture is your choice. “Use data,” for example, is another culture meme that can be used as a foundation to grow all of agile (data requires measures and experiments, which require punctuated stability, and thus Bob’s your uncle).

          Combine this with good instructional design (chunking, rapid iteration of the Kolb cycle, starting with the concrete and extracting theory, …). Then anzen becomes the theory. Extract it from the first three chunks taught. After that, start presenting problems as the concrete experience (rather than techniques), and focus on the reflection and hypothesis stages, asking “who is unsafe here and how?” then “what could we change to create safety? How would we test whether that change worked (both at creating anzen and at solving the original problem)?”

          Or, at least, that is how I would use it for teaching.

          • Marvin Toll

            That raises a good question… has anyone outside of Industrial Logic included Anzeneering in the production release of training materials?

            From this real-world experience… what were the hazards discovered? How were those training/learning hazards eliminated or reduced? What techniques were used to help learners feel safe and take risks while exploring their potential in a classroom context?

  • Pingback: Extreme Enthusiasm » Blog Archive » A summary of my XP2014

  • Marvin Toll

    Now that three months have elapsed since initial exposure to Anzeneering… (just 9,500 hours short of being an expert)… recent application of the metaphor has included “framework” development.

    For instance, framework design goals may include enhancing:

    * Convenience (but have to learn a new API)
    * Reliability (presumably tested, but have to trust the implementer)
    * Readability (but have to be clear on team usage patterns)
    * etc.

    What is absent from this list is an explicit reference to providing ‘safety’. Some might say that the technology ‘safety’ provided by a wrapping style of framework is an enabler for ignorance. However, if you accept the number Mary Poppendieck puts forward, that it takes 10,000 hours to become an “expert”, many technologies don’t even last five years!

    Said another way, isn’t it appropriate that the Java ForkJoinPool provides multi-core processing safety – even if it means we live in ignorance of Doug Lea’s work-stealing algorithm? And therefore, isn’t it appropriate to build a layer of abstraction on Doug’s work to provide further safety… even if it results in some developers living in ignorance of a lower-level usage pattern?

    Said a third way, one of the principal arguments against abstraction is that it undermines learning for some; and that is true. The counter-argument is that abstraction, done well, can increase safety.

    _Marvin
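
    A minimal sketch of the kind of wrapping framework Marvin describes, with an invented class and method name (this is not from the post): callers get one safe entry point for parallel work, while the ForkJoinPool and its work-stealing machinery stay hidden behind it.

    ```java
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ForkJoinPool;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    // Hypothetical abstraction layer over ForkJoinPool: pool sizing
    // and work-stealing details stay hidden behind one method.
    public final class SafeParallel {

        private static final ForkJoinPool POOL = new ForkJoinPool();

        private SafeParallel() {}

        // Applies fn to every item in parallel. Submitting the
        // parallel stream from inside the pool runs it on POOL rather
        // than the JVM-wide common pool, isolating callers from each
        // other's workloads.
        public static <T, R> List<R> map(List<T> items, Function<T, R> fn) {
            Callable<List<R>> task = () ->
                    items.parallelStream().map(fn).collect(Collectors.toList());
            return POOL.submit(task).join();
        }
    }
    ```

    A caller would then write something like `SafeParallel.map(numbers, n -> n * n)` without ever importing ForkJoinPool, which is the kind of ignorance-enabling safety the comment above argues for.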

  • Pingback: Does Technical Debt Cause Employee Turnover? | Industrial Logic