What ingredients are in your event setup?

Welcome to this archived 5-Minute-Friday Newsletter. If you’d like to be the first to read these and have them hit your inbox each Friday, subscribe below.


Start.

If I were to ask you exactly how the events you view in your analytics platforms fire, could you tell me? Could your stakeholders? 


Coming off the back of a few technical analytics audit projects, I’m reminded of the devil that is in the event and data layer detail, of the need to ensure accurate interpretation, and of the responsibility we hold not to “set and forget”.

Establishing a sound foundational data collection strategy is what everyone aims for. No doubt. 

We have key “moments” where it makes good sense to give this strategy rigour: considering exactly what we need across stakeholder groups, then taking the time to properly establish our data layers and tags accordingly. These “moments”, in my experience, usually stem from the development of a new asset (a new website or app) or from a change in analytics tech (forced or otherwise, thanks Google).

This newsletter is about why we shouldn’t just wait for these moments and should instead embed effective collection principles as a practice owned by the whole team.

Beyond those few-and-far-between “moments” where we have a dedicated opportunity to establish high-integrity tracking protocols, we have the everyday need to add tracking for new widgets, landing pages and media campaigns. It’s how we manage these times that I want to spotlight.

The reality of everyday tracking needs

It’s simply not realistic to think we will have the time or resources at our disposal to execute collection to the highest standards every time.

In a best-practice solution, data layer implementations depend on a development team (who are rarely in-house) before the analytics team can set up tags.
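
To make that concrete, here’s a minimal sketch of what the best-practice hand-off can look like, assuming a Google Tag Manager style dataLayer (the event name and fields are hypothetical placeholders, not a prescribed schema). Developers push a structured event the moment the action actually succeeds, and the analytics team builds tags against that stable event name.

```typescript
// A minimal sketch of the best-practice hand-off (hypothetical event schema).
// Developers own this push; analytics owns the tag that listens for it.

// The shape we agree on with the development team up front.
interface SignupEvent {
  event: "sign_up_submitted";
  form_id: string;
  plan: "free" | "pro";
}

// The global data layer, e.g. window.dataLayer when using Google Tag Manager.
declare global {
  interface Window {
    dataLayer: object[];
  }
}
export {}; // keep this file a module so the global declaration compiles

window.dataLayer = window.dataLayer || [];

// Fired only when the sign-up is actually confirmed, so the event means
// exactly what the documentation says it means.
const signup: SignupEvent = {
  event: "sign_up_submitted",
  form_id: "newsletter_footer",
  plan: "free",
};
window.dataLayer.push(signup);
```

The value is in the contract: the event name and fields are agreed before anything fires, so the tag layer never has to guess.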

In what I like to call the “scrappy but calculated” approach, we do what we can with what we have. We set up tags that describe what we are looking for.

It looks something like this.

  1. There is a void between the tracking we desire and the events we currently collect, or have the means to collect with our current data layer
  2. We are blocked by a lack of resources (budget or time) in our development team
  3. The analytics team are asked for the scrappy stopgap (“scraped events”), sketched below
A cheat sheet explanation of the what and how of event configuration.
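
To make the “scraped event” idea concrete, here’s a minimal sketch under the same hypothetical dataLayer convention as above (the CSS selector and event name are mine, for illustration). Instead of a developer-owned data layer push, the tag infers the conversion from a DOM click:

```typescript
// "Scrappy but calculated": infer the event from the DOM instead of a
// data layer push. Selector and event name are hypothetical.
(window as any).dataLayer = (window as any).dataLayer || [];

document.addEventListener("click", (e) => {
  const target = e.target as HTMLElement;
  // Brittle by design: rename the button class or restyle the form and
  // this event silently stops firing.
  if (target.closest(".signup-form button.submit")) {
    // Note this counts clicks, not confirmed sign-ups; failed submissions
    // are included, which matters for interpretation later.
    (window as any).dataLayer.push({ event: "sign_up_click_scraped" });
  }
});
```

It works, and sometimes it’s the right call, but it answers a subtly different question to the data layer version, and it breaks silently whenever the markup changes.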

Here’s the challenge. 

This is, and was always supposed to be, a stopgap.

This solution was a corner that we cut.

This is not to say we shouldn’t be implementing the “scrappy but calculated” event strategy at all. That’s simply not realistic. Budget constraints, skill-set dependencies and time pressures are very real parameters we have to work within.

Sometimes we do just need to “get it done”. It’s what happens next that is often neglected and forgotten. 


There are, however, ways we can avoid needing this approach at all, and manage the upstream effects when there is no other option.

Why best practice matters

The reason many simply “set and forget” a less-than-perfect approach to data collection usually boils down to “Well, it might not be perfect, but if it ain’t broke!?”

The issue, however, is the upstream impact of NOT addressing and re-establishing a more rigorous collection technique for your key events.

There are two.

  1. Interpretation
  2. Analytics technical debt

Interpretation of what the numbers really mean is key to the effective assessment of campaign performance and attribution. I’ll provide an example.

I’ve audited Facebook CAPI implementations to find conversion events being sent for some device IDs and not others. In an instance where Meta Ads are set up to display across multiple devices, measurement will only ever close the loop on part of that ad’s effectiveness. The onus should be on the analytics team to translate how the tag setup should guide campaign setup and the interpretation of results.
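
As a hypothetical sketch of how that gap creeps in (the payload follows the general shape of the Meta Conversions API, but the guard clause, names and placeholders are my illustration, not any client’s actual code): conversions are only forwarded when a browser identifier happens to be present, so everything else quietly falls out of measurement.

```typescript
// Hypothetical server-side forwarding that only closes the loop for some
// devices. PIXEL_ID and ACCESS_TOKEN are placeholders.
interface Conversion {
  orderId: string;
  value: number;
  currency: string;
  fbp?: string; // Meta browser ID cookie: only present for web sessions
}

async function forwardToCapi(c: Conversion): Promise<void> {
  if (!c.fbp) {
    // Conversions from apps or other devices are silently dropped here,
    // so reported results only ever reflect part of the ad's effectiveness.
    return;
  }
  await fetch(
    "https://graph.facebook.com/v18.0/PIXEL_ID/events?access_token=ACCESS_TOKEN",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        data: [
          {
            event_name: "Purchase",
            event_time: Math.floor(Date.now() / 1000),
            action_source: "website",
            user_data: { fbp: c.fbp },
            custom_data: { value: c.value, currency: c.currency },
          },
        ],
      }),
    }
  );
}
```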

Then there’s the upstream analytics debt.

Outside of the obvious debt established by creating a stopgap tracking solution that should eventually be replaced by a data layer implementation, there can be confusion all round when metrics change.

Imagine you’ve enjoyed your mum’s famous homemade blueberry muffins every Sunday with a cuppa for years. Then one day, unbeknownst to you, mum decides to go on a mega health kick and replaces the usual sugar with the sugar alternative, stevia.

You bite into the freshly baked morsel only to find that something is off, but you can’t quite put your finger on what exactly…

That’s the all-round stakeholder confusion we can see when the underlying structure of how events are triggered changes, yet reporting remains continuous. Your volumes may shift when it comes time to re-establish the collection strategy in one of those “moments” mentioned earlier, and analysts and marketers are left scratching their heads.

We want to get ahead of the “moment” and ensure our stakeholders understand how the technology is set up and should be interpreted today, while supporting an understanding of what would have been more ideal and what we will focus on tomorrow.

There is really only one justification for allowing this to sit in the backlog graveyard: the event was established for a very specific reason that’s unlikely to occur again. Perhaps a campaign landing page with an expiration date. For everything else, my issue is that “set and forget” typically occurs when it shouldn’t.

What we should do

Thankfully, there are some practices we can establish to mitigate the misinterpretation and technical debt that occur.

  1. Ensure analytics is considered at inception
  2. Establish a process for consistent, intermittent auditing
  3. Maintain a data dictionary
  4. Maintain a change log

Failing to consider analytics at the inception of a new campaign, test or asset is often the simplest oversight. Getting the team in early to feed in requirements is your best chance of avoiding the “scrappy but calculated” approach altogether.

If you can’t get it nailed the first time, don’t wait for those “moments”. Waiting for “moments” that are so few and far between requires too much change at once, which not only makes the tech costly to fix, but forces your teams to absorb too much of a shift in their interpretation of reports. It’s often all too much too soon (like the upcoming shift to GA4, where we also see a foundational change in the data model itself), which is no way to build confidence in literacy and adoption of a data-driven mindset.

Maintaining a data dictionary is often one of the simplest and most effective ways of mitigating interpretation issues while keeping a clear log of current state. There are many methods, from dedicated applications (such as this one from Looker) to a simple but fantastic template.
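
If it helps, here’s a rough sketch of the structure a data dictionary entry can take (the fields are my suggestion rather than a standard, and a spreadsheet works just as well):

```typescript
// A sketch of one data dictionary entry. The fields are a suggestion;
// the point is that every event has a documented meaning and owner.
interface DictionaryEntry {
  eventName: string;        // as it appears in the analytics platform
  description: string;      // plain-English meaning for stakeholders
  trigger: string;          // exactly how and when the event fires
  implementation: "data_layer" | "scraped"; // flag the stopgaps explicitly
  owner: string;            // who to ask when it changes
  lastAudited: string;      // date the firing logic was last verified
}

const entry: DictionaryEntry = {
  eventName: "sign_up_submitted",
  description: "User successfully submitted the sign-up form",
  trigger: "Data layer push on server-confirmed submission",
  implementation: "data_layer",
  owner: "Analytics team",
  lastAudited: "2023-01-01",
};
```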

Finally, my beloved “change log” is one that, when operationalised throughout the business, can provide so much clarity, as well as a blueprint to support iterative upgrades rather than only waiting for the “moments” or dedicated “audits” to take place.
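
A matching sketch for a change log entry (again, the fields are my own suggestion): the goal is that when volumes shift, anyone can trace the shift back to a specific change.

```typescript
// A sketch of one change log entry; field names are a suggestion.
interface ChangeLogEntry {
  date: string;           // when the change went live
  event: string;          // which event or tag was touched
  change: string;         // what changed, in plain English
  reason: string;         // why: stopgap, fix, migration, audit finding
  expectedImpact: string; // how reported numbers may move, and why
  author: string;
}

const upgrade: ChangeLogEntry = {
  date: "2023-02-01",
  event: "sign_up_submitted",
  change: "Replaced scraped click trigger with a data layer event",
  reason: "Retiring a 'scrappy but calculated' stopgap",
  expectedImpact: "Volumes likely to drop as failed submissions are excluded",
  author: "Analytics team",
};
```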

Psst if you haven’t stalked a bunch of my past consulting work, I’ve made it all free in my online library. I have a change log template hidden in the “Digital Strategy Vault”. Have a squiz or hit reply and I’ll send you the link. 

While often neglected because it’s more about foundational hygiene than shiny, exciting analytics insights, establishing an understanding of your collection setup is key to the successful interpretation of your reports.

We are always going to have moments where we need to be scrappy but calculated. The key is not to get lazy and “set and forget”, but to follow through on any strategy built with a “crawl, walk, run” trajectory.

For without a high-integrity setup, we put too much pressure on our teams to interpret the meaning and to structure activity accordingly.

As analysts or data-driven marketers, our job is not only to interpret upwards to the users of our reports, but also to interpret back to the technical setup that will yield the best possible indicators for digital measurement.

Embed a practice that makes data hygiene everyone’s responsibility, so we spend more time meaningfully interpreting and less time head-scratching.

End.

Hi I'm Kate! I'm relentlessly curious about the attribution and origin of things. Especially as it relates to being a corporate girly balancing ambition and a life filled with joy.
