Welcome to this archived 5-Minute-Friday Newsletter. If you’d like to subscribe to be the first to read these and have them hit your inbox each Friday, subscribe below.


Balancing speed and scale. 

Speed and the path of least resistance will always be the preference. But what are the downstream costs?

This week’s newsletter explores exactly when we should consider erring on the slow side in pursuit of a more scalable, higher integrity future in analytics operations.

Sometimes it really is the tortoise who wins long term.

The problem

Regardless of the craft you operate in, we all know the pressure of feeling that problems need addressing urgently, and of course, sometimes they really do. Sometimes that pressure even persists for an extended period (the downturn affecting many industries right now demands fairly constant urgency).

This newsletter is not about those genuinely urgent instances. In those instances, there truly isn’t time to pull resources away from the urgent task at hand. 

It is, however, about how often we fall into the trap of believing things are urgent at the cost of uplifting our analytics maturity. Something I otherwise call “staying in the status quo zone”.

It’s easy to do what we’ve always done. They say “do what you’ve always done and you’ll get what you’ve always got”, but today I’ll argue the risk is not only stagnating but actually atrophying our performance if we don’t know the trigger point for exactly when we should change.

The art of balancing analytics capability uplift while responding to BAU (business as usual) requests is one that requires a great deal of maturity and stakeholder management skills.

This week I offer a framework for supporting your team’s ability to know when to err on the side of slow in pursuit of better.

The problems with staying in the “status quo zone”

There are fundamentally three key problems for teams that focus solely on being a support service to the business without pursuing improvements in analytics capability.

(Pssst! By analytics capability I mean the analytics stack: things like data storage, cleanliness, table structure, access controls, temporary scratch data tables, core tables and BI/visualisation tools)

The three challenges with deprioritising analytics capability uplift:

  1. Relies on hiring team members and inhibits skill development
  2. Relies on there being rigour in calculation processes to maintain quality
  3. Keeps you from innovating reporting functions

The first challenge, besides being an expensive solve (relying on hiring people rather than augmenting teams with efficiencies in technology), can equally become a retention challenge. Leaving teams with only “BAU” work on their plates hardly breeds a sense of excitement about turning up to work each day.

More than that, challenge #2 is where the real (often hidden) risk to an organisation is established. When we have a centralised analytics function of one (that is, a single staff member), the calculation methodologies that are established live only in the mind of that analyst.

So how does this scale as we add people to the team? The answer is, “with risk”. 

I’ve seen some teams produce truly effective documentation and rigorous processes for maintaining reporting structures, which do mitigate the risk that analyst #1 will calculate key metrics entirely differently from analyst #2. Yet the risk remains.

(All signs point to introducing a “metrics layer” there, but I’ll get to the solutions soon)

Finally, the obvious third challenge of deprioritising analytics stack capability uplift is that it stifles innovation in insight reporting.

I’ve observed teams maintain Excel reports for years that, over time, become a “franken-sheet” of stitched data sources. Not only does the performance of running queries and updating visualisations stall over time, but the report itself is riddled with risk: formulas and macros become harder to peer-check, and it becomes far easier for any report recipient to break functions.

So when do we pull the trigger, and how do we build a business case that analytics capability uplift needs to make it onto our roadmap of tech initiatives?

I know, I’m hardly announcing something groundbreaking here. I’m sure you know what needs to be done. It’s more about the “how” right? 

In fact, I’d hazard a guess that this is what you’re thinking right now.

“We WANT to streamline our reporting functions but just don’t have the time or resources to make it happen. We keep getting bogged down in BAU.”

If that’s what you’re thinking right now, it’s time to shape up a business case.

Over the years I’ve shaped up an intuitive list of “break points” to look out for that help indicate uplift is no longer an idealistic vision, but a necessity to maintain quality.

Here are the “break points” I look out for

It’s one thing to have a guide of what to look out for. It’s another to build a case around the effort (and thus, the opportunity cost) coming out of your team.

To do this, I like to keep track of my “break points”. This looks like…

  • Size of the backlog (net new analysis in the “to do” pile)
  • Number of requests to fix or change (use a ticketing system)
  • Time tracking per project or a simple “retro” ceremony that encourages team members to highlight the number of errors and blockers they face
  • Monthly or quarterly audits to peer-check that analysis uses like-for-like calculations across team members
  • The forecasted (or potential) cost of errors
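To make the tracking above concrete, here is a minimal sketch of a break-point log a team lead might keep alongside a ticketing system. All the names, fields and figures are illustrative assumptions, not a prescribed tool; the point is simply that recording these numbers each period gives you the inputs for the business case.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BreakPointLog:
    """One period's worth of 'break point' observations (illustrative fields)."""
    backlog_size: int          # net new analysis sitting in the "to do" pile
    fix_requests: int          # tickets asking to fix or change a report
    errors_flagged: int        # errors/blockers raised in retro or time tracking
    est_cost_per_error: float  # forecasted cost of a single error
    logged_on: date = field(default_factory=date.today)

    def forecast_error_cost(self) -> float:
        """Rough forecasted cost of errors for this period."""
        return self.errors_flagged * self.est_cost_per_error

def quarterly_summary(logs: list[BreakPointLog]) -> dict:
    """Aggregate the logs into the headline numbers a business case needs."""
    return {
        "avg_backlog": sum(l.backlog_size for l in logs) / len(logs),
        "total_fix_requests": sum(l.fix_requests for l in logs),
        "forecast_error_cost": sum(l.forecast_error_cost() for l in logs),
    }
```

Even a spreadsheet version of this log does the job; what matters is that the numbers are collected consistently enough to forecast the cost of not changing.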

It’s the collection of these “break points” that leaves you, as a leader, with the insight required to forecast the opportunity cost of not changing.

The solve, of course, looks like securing the resources (time and budget) to uplift capability in the background while you continue to churn out the most important work. There will be a trade-off for stakeholders, so communicating the expected timeline and impacts will be key.

The utopian analytics stack is often a non-existent nirvana, so be careful not to make that promise either.

Starting with the next best decision

Without promising analytics nirvana (I do promise you, it doesn’t exist), capability uplift can be as simple as:

  • Building dashboards for reports required to be refreshed at a frequent cadence
  • Introducing a “metrics layer” in your data warehouse to store frequently used calculations
  • Implementing a scratch practice (disallowing changes to core tables and using temporary tables for static, one-off analysis, as this forces a rebuild for work that needs to be repeated)
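On the “metrics layer” point above: the idea is simply one shared, version-controlled home for frequently used calculations, so analyst #1 can never compute a key metric differently from analyst #2. Here is a hedged sketch of what the smallest possible version could look like; the metric names, columns and table are made-up examples, and real implementations usually live in warehouse tooling rather than application code.

```python
# A single shared dictionary of blessed calculations (illustrative examples).
METRICS = {
    "active_customers": "COUNT(DISTINCT customer_id)",
    "average_order_value": "SUM(order_total) / NULLIF(COUNT(order_id), 0)",
}

def metric_query(metric: str, table: str, period_column: str) -> str:
    """Build the one agreed query for a metric, instead of each analyst
    hand-writing (and subtly varying) the calculation."""
    if metric not in METRICS:
        raise KeyError(f"Unknown metric {metric!r}; add it to the layer first")
    return (
        f"SELECT {period_column}, {METRICS[metric]} AS {metric}\n"
        f"FROM {table}\n"
        f"GROUP BY {period_column}"
    )
```

The design choice doing the work here is that the calculation is defined once and only referenced everywhere else, which is exactly the rigour challenge #2 relies on people to provide.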

So long as there is pressure to churn out reports at speed, there is a trap of falling back on hiring alone rather than balancing people, process and improved technology. The art is in forgoing some speed that can be achieved today for the promise of a higher quality, more innovative analytics future.

To get there, keeping sight of the break points and putting systems in place to track their occurrence can help build the narrative we need to realise speed at scale.


Hi, I'm Kate! I'm relentlessly curious about the attribution and origin of things. Especially as it relates to being a corporate girly balancing ambition and a life filled with joy.
