Over my all too many years of working with many of the world’s blue-chip B2C Telco organisations, one thing has always struck me about the data used to power their Marketing Automation solutions: it’s either a complete mess, and therefore impossible to use, or it is tidy, but structured in a way that makes it impossible to use. Either way, the net result for the quality of a Telco’s digital marketing, the depth of its personalisation and its ability to measure ROI is the same. With each new conversation I have in the field, I wonder which of the two it will be, and very rarely am I presented with a third answer.
It is easy to see why some Telcos have ended up with unwieldy, unstructured marketing data stores. In the vast majority of cases, companies grow and evolve organically, which makes perfect sense, because they are, with the exception of the occasional Artificial Intelligence chatbot, populated by human beings. Humans are messy; my niece’s bedroom is adequate evidence of that. Marketing departments these days acquire new channels to speak to their customers at a rate of at least one per quarter. Each of these channels creates data in the form of what you sent, when you sent it, how you sent it, and what the reply was, and that data must be stored.
And it only stands to reason, if there isn’t a logical place to store new data streams that are appearing with such regularity, then an illogical place will have to do.
Similarly, at the other end of the spectrum, vast, IT-driven projects by talented professionals armed with numerous ITIL certifications have extolled the value of Big Data, Data Warehouses, Data Bricks and Data Lakes. It’s only a matter of time before some well-meaning IT Solutions provider rushes to market with Data Air or Data Water. At the time of writing, though, I checked, and we are safe. For now. In any case, these deeply technical approaches end up with data that is overwhelmingly structured and optimised for storage rather than access.
Data is stored in normalised form, by which I mean that the data is spread out across multiple logical, relational tables. Normalised data is a cost-effective solution to a storage problem, though perhaps less relevant than it was ten or fifteen years ago, given the falling cost of hardware and the rise of cloud computing (though I have often seen that the cloud moves cost rather than removes it). Normalised data also tends to have a lower monetary cost of access, but there is a trade-off against time and sophistication: answering a marketing question means stitching those tables back together first.
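To make the trade-off concrete, here is a deliberately toy sketch in Python. The table and field names are entirely hypothetical; real Telco schemas are far larger, but the shape of the problem is the same: normalised data needs a join before marketing can use it, while a denormalised (wide) record is read-ready at the cost of duplication.

```python
# Hypothetical, minimal illustration of normalised vs denormalised access.
# Two "tables" in normalised form: customers reference a plan by key.
customers = {101: {"name": "Asha", "segment": "prepaid"}}
plans = {"prepaid": {"offer": "5GB top-up"}}

def marketing_view(customer_id):
    """Build a usable marketing record by joining the two tables."""
    customer = customers[customer_id]
    plan = plans[customer["segment"]]  # the "join" step: extra work per read
    return {"name": customer["name"], "offer": plan["offer"]}

# Denormalised alternative: one wide record, instantly usable, but the
# offer is duplicated for every prepaid customer and must be kept in sync.
customers_wide = {101: {"name": "Asha", "segment": "prepaid",
                        "offer": "5GB top-up"}}
```

Calling `marketing_view(101)` does the join at read time; `customers_wide[101]["offer"]` is a single lookup. Storage-optimised structures push that joining cost onto every campaign query, which is exactly the time-and-sophistication tax described above.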
Anyone who knows me has probably been bored by my oft-repeated mantra “Time is the only true currency”, and I firmly believe it. Money comes and goes, ebbs and flows, but time only goes. When you look at it that way, which should you value more? Saving some money in the strictly short term, or opportunities lost forever?
Whichever technological cul-de-sac resonates most with you, even with GDPR limiting the amount of data that organisations are able to retain, it is tempting (and in fact necessary) for them to act like digital hoarders, retaining a torrent of information about their customers, which of course they have a legitimate interest in doing.
If you do have one of these problems, what can you do about it? It’s highly likely that the powers that be in your organisation are only too aware of the problem, even if the solution is either unclear or too painful to contemplate.
As we explored recently in our article, the Rise of the CDP, the cloud’s answer to the complexity and disparity of siloed data is to deploy a Customer Data Platform to connect, unify and de-duplicate all of your diverse data. I’m all for this, although on the subject of a CDP, I don’t think we’re dealing with revealed truth here; the need to better organise data has been around since about five minutes after data was invented. The strong advantage of a CDP, in my eyes, is the ability to add new data sources as they become available. Alternative solutions exist for the cloud-averse, but they risk becoming ‘Forth Bridge’ exercises, as those noble efforts lack the out-of-the-box tools to maintain a constantly evolving datasphere.
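For readers who like to see the idea rather than the brochure, the "unify and de-duplicate" step can be sketched in a few lines. This is only a caricature with invented channel names and a naive email-based identity key; a real CDP adds fuzzy matching, identity graphs and consent handling on top.

```python
# Hypothetical sketch: merging records from two channels into one
# customer profile, keyed on a lower-cased email address.
email_records = [{"email": "a@example.com", "last_open": "2024-05-01"}]
web_records = [{"email": "A@example.com", "page_views": 12},
               {"email": "b@example.com", "page_views": 3}]

def unify(*sources):
    """Fold every source record into a single profile per identity key."""
    profiles = {}
    for source in sources:
        for record in source:
            key = record["email"].lower()  # naive identity resolution
            profiles.setdefault(key, {}).update(record)
    return profiles

unified = unify(email_records, web_records)
# "a@example.com" now carries fields from both channels in one profile.
```

The point of the sketch is the `unify(*sources)` signature: adding a new channel next quarter is one more argument, not a re-architecture, which is precisely the advantage claimed for a CDP above.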
Data sanity, as I’m going to call this never-ending endeavour to provide digital marketing with the latest and greatest data in order to better serve your constantly changing client base, isn’t something you do once; it isn’t a project you start and finish before tucking it away under achievements on your CV.
Like puppies, data sanity is not just for Christmas, it is for life. Or to put it another way, in the possibly apocryphal words of Ralph Waldo Emerson, it is a journey, not a destination.
The benefits of maintaining easy, swift access to new and existing data streams in support of Marketing Automation objectives are clear. More accurate, more current profiles of your customers allow you to contact them, or respond to them, at the right time with the right message or offer. Greater depth of data enables more sophisticated personalisation, which in turn drives greater response rates. And the ability to measure the results of your campaigns in real time facilitates smarter, more cost-effective decisions throughout your business.
Among the giant Telcos operating in the world today, in the wake of a global pandemic that has led to tectonic shifts in buying patterns, the competition for share of wallet among B2B and B2C customers, both monthly contract and pay-as-you-go, has never been fiercer. You all share the same problems, keeping up with, and keeping, your customers, and there is no silver medal in the race to speak where your customers are listening.
Knowledge is power. Power is profit.
If you’d like to know how Purple Square could help you solve your data problems, why not get in touch today for a chat with Timothy or one of our Sales Consultants.