This article is part of a series.
This article wraps up meta-approaches for practical problem solving. For a recap: a meta-approach is an effective problem-solving technique that I've managed to capture over time as it manifests itself in work & life. We're going to look at one more triplet in this installment.
Root cause analysis (RCA) via a combination of factors
You ran into a problem and need to find out how to fix it. There are lots of factors and moving parts, so a proper analysis is blurry & troublesome.
Instead of hunting for exactly one culprit, consider multiple factors working together for the problem to manifest. The Swiss cheese model of accident causation is a great way of thinking about such situations.
Example: your latest deployment to production contains a critical bug. Upon carrying out the RCA you find the following (a toy sketch of the mental model follows the list):
- The bug occurs only with a particular value from a reference table
- There's a robust test suite for this logic which is executed in staging, but that particular value is missing from the staging data
- There's a procedure in place which imports anonymized prod data into staging, but it hadn't populated the value before the release
- That particular value was recently added manually to address a complaint from a customer
- The prod-to-staging import hadn't yet run to bring the new value in, and that gap happened to coincide with the latest release
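To make the "holes lining up" intuition concrete, here's a toy sketch (with made-up layer names mirroring the example above, not a real RCA tool): the bug reaches production only when every defensive layer fails at once, so the output of the analysis is a set of contributing factors rather than a single culprit.

```typescript
// Swiss cheese model, toy version: each slice is a defensive layer,
// and an incident gets through only when every layer has a hole for it.
interface Incident {
  valuePresentInStagingData: boolean;
  importRanBeforeRelease: boolean;
}

interface Layer {
  name: string;
  holds: (incident: Incident) => boolean; // true = this layer would have caught it
}

const layers: Layer[] = [
  { name: "test suite executed in staging", holds: (i) => i.valuePresentInStagingData },
  { name: "anonymized prod-to-staging import", holds: (i) => i.importRanBeforeRelease },
];

// RCA output: not one culprit, but the set of layers that failed together.
function contributingFactors(incident: Incident): string[] {
  const failed = layers.filter((layer) => !layer.holds(incident));
  return failed.length === layers.length ? failed.map((layer) => layer.name) : [];
}

console.log(contributingFactors({ valuePresentInStagingData: false, importRanBeforeRelease: false }));
// -> ["test suite executed in staging", "anonymized prod-to-staging import"]
```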
Every choice is a tradeoff
You followed an approach which seemed brilliant at first but you bumped into some major issues down the road.
Every choice whatsoever is a tradeoff with pros & cons. Whenever you believe there's an option with only pros or only cons, you've simply run into a blind spot. Fundamentally this goes back to the conservation of energy and the impossibility of a perpetuum mobile.
Example: you're deciding on an approach for frontend-to-backend communication, and you believe a full-duplex approach backed by WebSockets is ideal for your case as it enables the lowest latency. What you didn't consider, though, is that WebSockets are (see the sketch after this list):
- stateful, so scaling is much harder to do right in comparison to a regular REST API
- backed by a programming model which is much more convoluted than a typical request/response approach
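Here's a minimal browser-side sketch of the two models, with a hypothetical quotes endpoint standing in for a real API, meant to illustrate the tradeoff rather than prescribe an implementation. The REST call is one stateless function; the WebSocket variant already has to own connection state, a subscription message and reconnection, and it still lacks backoff, request/response correlation and proper error handling.

```typescript
// Request/response: one stateless call, easy to reason about and to scale horizontally.
async function fetchQuote(symbol: string): Promise<number> {
  const res = await fetch(`/api/quotes/${symbol}`); // hypothetical REST endpoint
  return (await res.json()).price;
}

// Full duplex: lowest latency, but the client now owns the connection lifecycle and state.
function subscribeQuotes(symbol: string, onPrice: (price: number) => void): () => void {
  let socket: WebSocket | undefined;
  let closedByCaller = false;

  const connect = () => {
    socket = new WebSocket("wss://example.invalid/quotes"); // hypothetical WS endpoint
    socket.onopen = () => socket?.send(JSON.stringify({ subscribe: symbol }));
    socket.onmessage = (event) => onPrice(JSON.parse(event.data).price);
    socket.onclose = () => {
      if (!closedByCaller) setTimeout(connect, 1_000); // naive reconnect, no backoff yet
    };
  };
  connect();

  return () => {
    closedByCaller = true;
    socket?.close();
  };
}
```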
Dogfooding as the universal quality enabler
You just finished a task and happily handed over the results. Surprisingly, the downstream team isn't happy with what they got.
Dogfooding is a term which basically means "eat your own dog food": whenever you do something, try to wear the hat of the consumer of your results. It's hard to overstate how universally this principle applies to quality assurance.
Example: you build an internal tool for some people in your own department, and your "customers" definitely value speed over quality in this case. Still, instead of just testing the implemented functionality to ensure it works, you go beyond: you test the tool you work on from the perspective of your colleagues' overall process. That focus brings additional insights, so you end up implementing some pieces in a pretty different way.
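As a purely illustrative sketch (the tool, its input format and its quirks are all made up), imagine that internal tool is a small report builder fed by a CSV your colleagues export from another system. A narrow test only proves the function works on clean input; the dogfooding test replays their actual workflow, messy export and all, which is exactly the kind of insight that pushes you to implement pieces differently.

```typescript
import { strict as assert } from "node:assert";

// Hypothetical internal tool: summarise hours per team from a CSV export.
function buildReport(csv: string): string {
  const rows = csv
    .replace(/^\uFEFF/, "")                             // the real export starts with a BOM...
    .split(/\r?\n/)                                     // ...uses Windows line endings...
    .filter((row) => row && !row.startsWith("TOTAL"));  // ...and ends with a summary footer
  const [, ...data] = rows;                             // drop the header row
  return data.map((row) => row.replace(",", ": ") + "h").join("\n");
}

// Narrow test: the implemented functionality works on clean input.
assert.equal(buildReport("team,hours\nops,12"), "ops: 12h");

// Dogfooding test: replay the colleagues' actual process, quirky export and all.
const realWorldExport = "\uFEFFteam,hours\r\nops,12\r\nTOTAL,12\r\n";
assert.equal(buildReport(realWorldExport), "ops: 12h");
```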
Practical problem solving ends here
That’s all folks! Hope you enjoyed the series and found some useful techniques for day-to-day tasks.