How effective will MarTech vendor AI algorithms be?


My team is working with a client this week who is testing the AI features of their onsite experimentation platform.

For a little while now, since the hype around AI well and truly broke the internet late last year, I’ve been pondering what it will mean for our team (in the context of a marketing and technology consultancy).

Our team is eager for the opportunity to build something net-new using AI (large language models, or LLMs, that can replace a lot of BAU insight reporting seem to be the #1 use case of interest so far), but of course we know it’s the large MarTech vendor platforms that have the incentive (and the labour force) to build features that make their products even more compelling and effective.

Adopting new features within a platform I already have embedded and operationalised throughout the biz? Hell yeah, right?

Maybe.

Comparing the adoption of AI features in MarTech to custom-engineered solutions is no contest. Vendor features are cheaper, faster and lower risk, as brands can switch them on and off pending the results.

Great marketers know the art of structured experimentation. With vendors making new AI-powered features so accessible, it has never been easier to dip our toes in and test what AI can really do for our organisations.
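To make that concrete, here is a minimal sketch of how a structured test of a vendor AI feature might be read: a simple two-proportion z-test comparing conversion in a control group against the AI-powered variant. The group sizes, conversion counts and the choice of test are illustrative assumptions on my part, not any platform's built-in reporting.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for the difference in conversion
    rate between group A (control) and group B (AI-powered variant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 50,000 visitors per arm.
z, p = two_proportion_z_test(conv_a=2100, n_a=50_000, conv_b=2310, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # evidence to keep the feature switched on (or not)
```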

This presents more of a risk for the vendors than it does for clients.

“Do you really back your AI algorithm?” I find myself asking some unknown vendor operator under my breath. 

Every platform’s approach will differ, and of course we’re unlikely to learn the specific ways in which they build their algorithms (as that would mean giving away their competitive advantage).

We’re beholden to the documentation they share with us and the questions they answer for key partners who are curious enough to ask.

As data professionals though, we do know a few key truths that will underlie the effectiveness of any model.

  • The output is only as good as the data that’s fed in
  • Duration of learning and calibration improves the model over time

When we are the creators of our models, we have the ability to scrupulously review and manage our data quality through normalisation and standardisation processes. We choose what the model should consider and what assumptions should be baked in, set baselines, and even expand our data sets with metadata attributes that offer the level of granularity needed for the model to be at its most effective.
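Here is a rough sketch of what that control looks like in practice when we own the pipeline: normalising labels into a controlled vocabulary, standardising numeric features and enriching the set with a metadata attribute. The column names, mappings and features are hypothetical examples, not tied to any specific vendor or client.

```python
import pandas as pd

def standardise_events(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Normalise free-text channel labels into a controlled vocabulary.
    out["channel"] = (
        out["channel"].str.strip().str.lower()
        .replace({"paid social": "social_paid", "fb": "social_paid", "google ads": "search_paid"})
    )
    # Standardise numeric features so no single metric dominates the model.
    for col in ["revenue", "session_duration"]:
        out[col] = (out[col] - out[col].mean()) / out[col].std()
    # Enrich with a metadata attribute that gives the model extra granularity.
    out["is_returning"] = out["lifetime_orders"].fillna(0) > 1
    return out
```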

Now more than ever it’s key to ensure we get our implementations right in order to maximise the opportunity vendors provide us. While vendors love to sell the shiny opportunity, the real power we can generate from these algorithms is in our hands.

It’s time to revisit the dirty work of auditing our data layers, enriching our data assets with metadata, reviewing our product feeds and considering what first-party data can (and should) be sent back into platforms to align with the customer experiences we are hoping to power.
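As a flavour of that dirty work, here is a minimal sketch of auditing a data layer payload for the fields a vendor algorithm will depend on. The required fields and types listed are hypothetical examples, not any platform’s actual schema.

```python
REQUIRED_FIELDS = {
    "page_type": str,
    "product_id": str,
    "price": (int, float),
    "category": str,
    "in_stock": bool,
}

def audit_data_layer(payload: dict) -> list[str]:
    """Return a list of issues found in a single data layer payload."""
    issues = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload or payload[field] in ("", None):
            issues.append(f"missing or empty: {field}")
        elif not isinstance(payload[field], expected_type):
            issues.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return issues

# Example: flag problems before the payload ever reaches the platform.
print(audit_data_layer({"page_type": "pdp", "product_id": "SKU-123", "price": "49.95"}))
```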

If we want our AI tests to be effective, it’s time to clean up shop!

Useful reads:

https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/preparing-metadata

https://www.optimizely.com/personalization/

https://docs.developers.optimizely.com/content-management-system/docs/content-metadata-properties

https://www.braze.com/product/sageaibybraze

Hi, I'm Kate! I'm relentlessly curious about the attribution and origin of things. Especially as it relates to being a corporate girly balancing ambition and a life filled with joy.
