Marketing, heal thyself

In honour of Data Privacy Day 2020, I’ve decided to make an admission. Yes, I’m afraid I have now been working in what amounts to direct marketing for 34 years. Does that qualify me for membership of Direct Marketers Anonymous? That is what DMA stands for, isn’t it?

Actually, I see the Direct Marketing Association has rebranded to The Data and Marketing Association. A good move I suggest, and perhaps an opportunity to reflect and move in a new direction. Certainly from where I stand, the marketing profession needs to change, and fast. Here are some obvious problems that need to be rectified:

  1. Data volumes: Those statements about ‘more data created in the last XXX than all the previous YYYs’ used to be quite satisfying; think of all that big data analytics… Then reality set in: data quality is typically a major barrier to progress, and thus a significant proportion of the huge data-sets being gathered sit there soaking up resources to no great effect. Time to stop gathering data whose cost exceeds its value.
  2. Cookies and surveillance: The modern-day version of coupons and loyalty cards, but much more invasive and dangerous. The marketing industry has fallen hook, line and sinker for the adtech promise, which so far as I can see fails to deliver for anyone other than the adtech providers themselves. Enough; switch them off.
  3. Lack of respect for people’s time: Back in the day, when it cost money to send things, direct marketing was a relative rarity. These days, when there is very little incremental cost in sending an email, there is a tendency to just keep sending more and more frequently. I used to laugh at those ’12 days of Christmas’ email campaigns, until everyone started doing them, and then they extended beyond 12 days. So now the whole period between Thanksgiving and the January sales is one huge marketing blitzkrieg. Enough; just because you can send something doesn’t mean you should. Be more respectful of people’s time.

In the grand scheme of things, and as I see it, response rates in B2C through that whole 30-year period have been pretty stagnant. On average, I expect a 1% response to a direct marketing campaign, and 1% of that 1% to convert. Given the HUGE change in data availability, analytical tools and technologies that have emerged over that period, that stagnancy is quite shocking. By implication, 99% of our messaging is going to people who do not find it relevant to them at that point in time.

I think it is time to change the model: time to change from push marketing to pull marketing. Rather than spend yet more money on adtech, invest in what I’ll call ‘CustomerTech’; that is to say, tools and capabilities built on the side of the customer. Give customers the ability to signal when they are in the market for something, without that then subjecting them to a flood of marcoms, and without their data being sold and shared out the back door. I would contend that marcoms volumes would go right down, and perceptions and response rates would go right up.

Thankfully there are now significant numbers of people working to make CustomerTech a reality. Here’s hoping we see that come to life before next year’s Privacy Day.

Co-managed data, subject access and data portability

I’ve written a fair bit of late on the importance of co-managed data; here and here again. This follow-on drills into how co-managed data between individuals and their suppliers would change how we typically think about both subject access and data portability.

Subject access has featured in pretty much all privacy legislation for 20 years or so. It has always been framed as ‘if an individual wishes to see the data an organisation holds on them, then they have the right to do so’. Typically that has meant the individual writing to the relevant data protection officer with a subject access request, and then a bundle of paper print-outs showing up a few weeks later. My last one is shown below; it was incomplete and largely useless, but at least it validated my view that subject access has not moved on much since GDPR.

Example Subject Access Response

Data portability has been around for less time, as a concept sought by a bunch of interested individuals, and only appeared in legislation with GDPR in 2018. The theory is that individuals now have the right to ask to ‘move, copy or transfer personal data easily from one supplier environment to another in a safe and secure way, without affecting its usability’. However, when I tried that, the response was wholly underwhelming. I’ll no doubt try again this year, but I fully expect the response to be equally poor.

The problem in both of the above, in my view, is that organisations are being given the remit to make the technical choices around what is delivered in response to subject access and data portability requests. In that scenario they will always default to the lowest common technical denominator; anything else would be like asking turkeys to vote for Christmas. So, think .csv files delivered slowly, in clunky ways designed to minimise future use, rather than the enabling formats and accessibility the data subjects would like to see.

Moreover, the issue is not just the file format. There is also a mind-set challenge, in that the current norm assumes that individuals are relatively passive and non-expert in data management, and thus need to be hand-held and/or supported through such processes so as to avoid risk. Whilst there may be many in that category, there are also now many individuals who are completely comfortable with data management – not least the many millions who do so in their day jobs.

So, in my view there is little to be gained by pursuing data subject access and data portability as they are currently perceived. Those mental and technical models date back 20 years and won’t survive the next decade. As hinted at above, my view is that both subject access and data portability needs could be met as facets of the wider move towards co-managed data.

So, what does that mean in practice? Let’s look at the oft-used ‘home energy supply’ example (which adds competition law into the mix alongside subject access and data portability). Job 1 is relatively easy: publish the data fields to be made accessible in a modern, machine-readable format. An example is shown at this link, taking the field listing from the original UK Midata project and publishing it in JSON-LD format (i.e. a data standard that enables data to be interoperable across multiple web sites). That very quickly cuts to the chase on ‘here’s the data we are talking about’, in a format that all parties can easily understand, share and ingest.
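
To make that first job tangible, here is a minimal sketch of such a field listing built and serialised from Python. The field names and the example.org vocabulary are hypothetical placeholders of my own, not the actual Midata listing; the point is simply a machine-readable structure that any party can read, share and ingest.

```python
import json

# Hypothetical sketch of a home-energy field listing published as JSON-LD.
# Field names and the vocabulary URL are illustrative placeholders only.
home_energy_listing = {
    "@context": {"@vocab": "https://example.org/home-energy-schema/"},
    "@type": "HomeEnergySupplyRecord",
    "supplierName": "BobCo Energy",
    "tariffName": "Standard Variable",
    "annualConsumptionKWh": 3100,
    "forecastAnnualisedUseKWh": 2950,
    "paymentMethod": "monthly direct debit",
}

# Serialised form that individuals, suppliers and third parties can all ingest.
print(json.dumps(home_energy_listing, indent=2))
```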

My second recommendation would be not to put the supply side in charge of defining the data schema behind the portable data, i.e. the list of fields. Doing so, as before, will lead to minimal data sharing, whereas starting and finishing the specification process on the individual side will maximise what is shared. There is more than enough sector-specific knowledge now on the individual side (e.g. in the MyData community) for individuals to take the lead with a starter list of data fields. The role of the organisation side might then be to add specific key fields that are important to their role, and to identify any major risks in the proposals from individuals.

Third, as hinted at in the post title, both subject access and data portability will work an awful lot better when seen within a co-managed data architecture. That is to say, when both parties have tools and processes that enable them to co-manage data, ‘export then import’ goes away as a concept and is replaced by master data management and access control. In the diagram below, BobCo (the home energy provider) is necessarily always the master of the derived data field ‘forecast annualised electricity use’, as this is calculated in their operational systems from Alice’s electricity use rate. But Alice always has access to the most recent version of it, and has it in a form she can easily share with other parties whilst retaining control. That’s a VERY different model to the one enacted today (e.g. in open banking), but one that has wins for ALL stakeholders, including regulators and market competition authorities.
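
As a toy sketch of that master-plus-access-control idea (my own illustration, not a JLINC or open banking specification): BobCo stays the only writer of the derived field, while Alice holds a standing grant to read it and to extend access to other parties, with every update and share logged.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CoManagedField:
    """One co-managed data field: a single master (writer) plus access grants."""
    name: str
    master: str                                 # party whose systems derive the value
    value: float = 0.0
    grants: set = field(default_factory=set)    # parties granted read/share access
    audit: list = field(default_factory=list)   # append-only log of updates and shares

    def update(self, party: str, new_value: float) -> None:
        if party != self.master:
            raise PermissionError(f"{party} is not the master of '{self.name}'")
        self.value = new_value
        self.audit.append((datetime.now(timezone.utc).isoformat(), party, "update", new_value))

    def share(self, grantor: str, grantee: str) -> float:
        if grantor != self.master and grantor not in self.grants:
            raise PermissionError(f"{grantor} has no access to '{self.name}'")
        self.grants.add(grantee)
        self.audit.append((datetime.now(timezone.utc).isoformat(), grantor, "share", grantee))
        return self.value

# BobCo masters the derived field; Alice holds a standing read/share grant.
forecast = CoManagedField(name="forecast annualised electricity use (kWh)",
                          master="BobCo", grants={"Alice"})
forecast.update("BobCo", 2950.0)            # recalculated from Alice's usage rate
latest = forecast.share("Alice", "NewCo")   # Alice passes the current value to a rival supplier
print(latest)
```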

Individual-driven data portability and subject access

Fourth recommendation, which is almost a pre-requisite given how different the above is to the current model: engage in rapid and visible prototyping of the above, across many different sectors and types of data. Luckily, sandboxes are all the rage these days, and we have a JLINC sandbox that allows rapid prototyping of all of the above, very quickly and cost-effectively. So if anyone wishes to quickly prototype and evaluate the above model, just get in touch. Obvious places to start might be standard data schemas for ‘my job’, ‘my car’, ‘my tax’, ‘my investments’, ‘my transaction history’ on a retail web site, ‘my subscription’, ‘my membership’, ‘my child’s education history’ and no doubt some more easy ones, before one would tackle scarier but equally do-able data-sets such as ‘my health record’. To give you a feel for what I mean by rapid prototyping: all of the above could be up and running as a working, scalable prototype within a month, versus within a decade for the current model.

Reminder to Self for the Next Decade – It’s all about the data…!!!

Just a reminder to myself, and anyone else who cares to take the same advice. For the next decade, and probably beyond, the focus should be on detailed work around data: definitions, documentation, analysis, process flows. So: precisely what data elements, stored precisely where, moving from where to where, for precisely what purpose; who are the data rights holders, and who are the custodians. Nothing generic; all down-in-the-drains type of work.
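
To show the level of detail I mean, here is a minimal sketch of a single row in such a data inventory. The fields mirror the questions above; the example values are made up.

```python
from dataclasses import dataclass

@dataclass
class DataElementRecord:
    """One row of a down-in-the-drains data inventory: what, where, why, who."""
    element: str          # precisely what data element
    stored_at: str        # precisely where it is stored
    moves_from: str       # where it moves from
    moves_to: str         # where it moves to
    purpose: str          # precisely what purpose
    rights_holder: str    # who holds the rights over it
    custodian: str        # who looks after it day to day

# An illustrative, made-up entry.
inventory = [
    DataElementRecord(
        element="forecast annualised electricity use",
        stored_at="supplier billing system",
        moves_from="supplier billing system",
        moves_to="customer's personal data store",
        purpose="tariff comparison and switching",
        rights_holder="the customer",
        custodian="the supplier",
    ),
]

for record in inventory:
    print(record)
```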

Other things are important too – leadership, culture, business model choices, process optimisation, return on investment (including time investment) – but none of them outweighs the necessary focus on the detail of the data.

The outcome of that level of focus should hopefully be a lot more of the right data getting to the right places, and a lot less of the wrong data going to the wrong places. Right and wrong in this case, at least where personal data are involved, are as seen from the perspective of the individual.

Co-managed data technologies and practicalities (part 1)

I’ve had lots of good feedback and iteration on my first post on co-managed data; thanks for that. I’ll mix that into this post, start to drill down into some more detail of what I have in mind when I refer to co-managed data, and touch on how that might emerge.

The main questions that came back were around the nature of the tools an individual might use to manage ‘my data’ and ‘our data’, both precursors to genuinely co-managing data. I’ve attempted to set out the current options and models in the graphic below. (I suspect I’ll need to write up a separate post explaining the distinctions.)

My Data Management Typologies

The current reality is that the vast majority of individuals can only be, and therefore will be, running a hybrid of these approaches. That’s because the variants towards the left-hand side cannot cope with the requirements of some modern service providers and connected things, and the variants towards the right cannot cope with the requirements of some relationships and ‘things’ that remain analogue/physical.

If one were to look at the proportion of ‘My Data’ being managed across these options, then we need some basic assumptions about volumes. So, for the sake of time and calculation, let me assume that every adult individual in the modern world has:

  • 100 significant supply relationships
  • 200 ‘things’ to be managed
  • 20 data attributes (fields, not individual data points) from each relationship, and 10 from each ‘thing’

That makes a total of 4,000 data assets under management. The actual number is much higher, and largely out of the individual’s control at present. Come to think of it, that explains why research regularly comes back with the comment that individuals feel they have ‘lost control of their personal information’ – they are right, for now…
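
The arithmetic behind that 4,000 figure, for anyone who wants to adjust the assumptions to their own situation:

```python
relationships = 100            # significant supply relationships
things = 200                   # 'things' to be managed
fields_per_relationship = 20   # data attributes per relationship
fields_per_thing = 10          # data attributes per 'thing'

data_assets = relationships * fields_per_relationship + things * fields_per_thing
print(data_assets)  # 4000 data assets under management (a deliberate underestimate)
```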

In my case, simplistically, I’d estimate that I have my data split across the types as follows (stripping out duplicates, i.e. anything that is not my master record, and back-ups, of which there are inherently many in the hybrid model): Type 1 – 15% (including some critical data); Type 2 – 30%; Type 3 – 5%; Type 4 – 35%; Type 5 – 5%; Type 6 – 9%; Type 7 – 0.9%; Type 8 – 0.1%.

My numbers will, I guess, be disproportionately high towards Type 6, as I use more of these than most; and very few people will have anything at all in Type 8 at present, as it is brand new.

In any case, this post is already pretty dense, so I’ll leave it there for now and pick up the next level of detail in the next post.

How CustomerTech Can Improve Product/ Service Reviews

This post writes up my take on a discussion on the VRM List that initially asked the question ‘can reviews be made portable, so that they can appear in more than one place?’. The short answer is that we believe yes, there is at least one way to do that, using JLINC, and quite possibly other protocols. And in doing so we believe we can greatly improve the provenance of the review record, so that it becomes more useful and more reliable for all parties.

So how might that work, and what might that then mean? The diagram below illustrates a basic unstructured review of a hotel booking being shared consciously, and in a controlled, auditable manner, with three separate travel-related services.

The critical components in this model are listed below (with a rough code sketch after the list):

  • Self-sovereign identity – in place for all parties, which enables downstream data portability, no lock-in, reputation management and, ultimately, verified claims where they are useful
  • A Server – to be, or connect to, the individual’s data store, manage the KYS (know your supplier) process, represent the individual in the agreement negotiation, and log all signing and data-sharing activity
  • Information Sharing Agreement – to set out the permanent terms associated with the specific data-sharing instance. In this case, and very interestingly, we believe that we may be able to use an existing Creative Commons licence
  • B Server – to present requests for reviews to individuals, acknowledging the customer-driven terms to be used, and flagging any downstream use (and ultimately having downstream data controllers sign the same agreement)
  • Audit log – the permanent record of what was shared with whom, for what purpose, under what terms; stored to keep all parties honest
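
As promised above, here is a much-simplified sketch of the shape of a portable review record and its audit log. The field names and hashing approach are my own illustration, not the JLINC wire format, and a real deployment would add signatures from both parties.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_review(author_id: str, subject: str, text: str, licence: str) -> dict:
    """Build a portable review record: content plus provenance and terms."""
    review = {
        "author": author_id,     # self-sovereign identifier of the reviewer (illustrative)
        "subject": subject,      # what is being reviewed
        "text": text,            # the unstructured review itself
        "licence": licence,      # the information sharing agreement / CC licence
        "created": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets any downstream site check the review has not been altered.
    review["content_hash"] = hashlib.sha256(
        json.dumps(review, sort_keys=True).encode()
    ).hexdigest()
    return review

def share(review: dict, recipient: str, audit_log: list) -> None:
    """Record each conscious, controlled share in a permanent audit log."""
    audit_log.append({
        "shared_with": recipient,
        "content_hash": review["content_hash"],
        "terms": review["licence"],
        "at": datetime.now(timezone.utc).isoformat(),
    })

# One review, consciously shared with three travel-related services.
audit_log: list = []
review = make_review("did:example:alice", "Hotel Excelsior, March stay",
                     "Friendly staff, slow check-in.", "CC BY-NC 4.0")
for site in ["travelsite-1", "travelsite-2", "travelsite-3"]:
    share(review, site, audit_log)
print(json.dumps(audit_log, indent=2))
```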

Personally I think this is the way forward for reviews, and it offers organisations like Consumer Reports and Which? the opportunity to reinvent their business models.

Anyone want to give it a try?

PS The same principles and methods apply to pretty much any other ‘volunteered personal information’ scenario. I think that, over time, that approach will win out over capturing ‘behavioural surplus’.

It’s Time to Start Talking About Co-managed Data

As we reach the end of another decade, I’ve been reflecting on the changes over the last ten years in my areas of interest – customer management as my day job, and personal data management for individuals as my hobby horse. For the former I’d say it’s been a very poor decade indeed; for the latter, a positive but frustrating one.

In the world of customer management, the dominant theme in my view has been the dis-intermediation of traditional customer-supplier relationships by GAFA and adtech. That has meant a lot less transparency around how personal data is being managed. GDPR has, so far, offered more promise than reality; to date it feels like lots of positive possibilities, none of which will really be addressed until a few giant fines get handed out to Google, Facebook and adtech (which will take years).

On the issue of personal data empowerment for individuals, there has been good progress, but there remains a long way to go to get to scale. GDPR has put the brakes on some of the bad stuff, but positive empowerment of people with their own data, for their own purposes, is not really happening as yet. Even base-level ‘rights’ such as data portability are very poorly supported in practice, and even if that improved, there are no large-scale services in a position to use ported data.

So, all in all, some twenty-five years into the commercial Internet, it still feels and acts like the Wild West. A small number of big ranchers build a fence around ‘their property’, define the rules that they say apply to anyone who comes onto their patch, and then go about their business with little regard for the outside world or their impact on it. In the Internet version, the Wild West is called surveillance capitalism; i.e. the systematic process through which large organisations ring-fence a piece of digital territory (data about people), declare it to be theirs, and proceed to turn it into products and services for them to sell. That’s only good for the surveillance capitalists themselves; it just disenfranchises the individuals whose data is being grabbed.

From the individual perspective, GDPR tells me that I’m now ‘in control of my data’. Well, I must have missed that bit. Or does someone really expect me to go to my 300 or so direct suppliers, read their policies, see my data, change or delete it where I need to, and pretend to not notice that I am being followed around the web by them and hundreds of ‘their partners’ who I have never heard of?

In parallel, let’s look at what would actually be useful to me going forward. I already manage a lot of my own data as my hobby, and with a view to building tools that would make this task easier to undertake, and more useful in terms of what can be done with my data. From that work, I can see, for example, that I have:

  • around 7.5k financial transaction records since Jan 2013 (so about 100 per month)
  • just over 300 suppliers that I know and recognise will have data about me (so about 1k per year, just around 100 per month)
  • 250 product/service records where I have a digital record (I have more, but have not digitised those as yet)
  • thousands of data points from Internet of Things devices, fitness/health trackers and location check-ins

Here’s a screenshot of how I log my supply relationships; in this case I’m doing that in the JLINC Sandbox.

And this one, using the same app, shows how I log ‘my stuff’ (i.e. assets, products or services that I have or use).

And to complete the set, here is a view of some of the things I am, or will be, in the market for over the next few months.

I would contend that whilst the above is useful to me in many ways, it would be an awful lot more useful if ‘my view’ were connected to my suppliers’ view, and we therefore co-managed what would then be ‘Our Data’ (for example, real-time sync of my current bank balance, investment account, fuel tank status and many more). To be clear, ‘Our Data’ = co-managed data; i.e. data where two or more parties are each able to consider themselves managers of the relevant data (setting aside the many technical points of discussion underneath this co-management principle). I’ve written more about the my data, your data, our data, their data, everybody’s data distinctions over here.
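
A toy sketch of the difference (entirely illustrative, not a specification): in an ‘Our Data’ record, every field is visible to both parties, but each field has one master whose systems supply the current value, and a sync step keeps both views identical.

```python
# Toy illustration of 'Our Data' (illustrative names and structure only):
# the same record is visible to both parties, but each field has a single
# master whose systems provide the current value.

our_data = {
    "current_balance_gbp": {"master": "MyBank", "value": None},
    "in_market_for":       {"master": "Iain",   "value": None},
}

# Each party's own systems, keyed by the fields they master.
master_systems = {
    "MyBank": {"current_balance_gbp": 1423.77},
    "Iain":   {"in_market_for": ["electric car", "home insurance"]},
}

def sync_from_master(field_name: str) -> None:
    """Refresh a field from whichever party masters it, so both views agree."""
    entry = our_data[field_name]
    entry["value"] = master_systems[entry["master"]][field_name]

for name in our_data:
    sync_from_master(name)

print(our_data["current_balance_gbp"])  # both parties now see the same, current value
```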

Once the individual has their own data service with the above types of data, and many others over time, the co-managed model is by far the optimum. I’ll write up those benefits, and the practicalities of how we obtain them, in more detail in a separate post shortly.

The Fox is Guarding The Personal Data Hen House (and he has weapons)

This week I watched the excellent documentary The Great Hack. A horrifying story, very well told; congratulations to all those involved in making that and bringing that story to the general public.

I’d recommend it to everyone, but most especially those in large private or public sector organisations sitting on large troves of personal data. The essence of the story is that:

  • the personal data asset class is now ‘more valuable than oil’
  • the vast majority of it is controlled by a small number of ‘for profit’, supra-national organisations
  • unscrupulous actors can insert themselves into that eco-system and weaponise the personal data assets
  • the effects from them doing so are gigantic and world-changing
  • what they have done is only the tip of the iceberg

That’s pretty scary stuff. The key points made, in my view anyway, are that the products and services which enable organisations to access and manipulate personal data to achieve specific objectives are effectively weapons-grade, and that we will see this kind of thing happen time and again unless something is done to change that.

So what can be done to change that? Not easily, not overnight; but it can be done.

Firstly, what won’t work is regulating to stop it happening; that’s been tried already with GDPR. While regulation may slow down and lessen the effect, it does not address the underlying problem, which is that huge volumes of the now-weaponisable asset that is personal data sit with large, supra-national, for-profit entities. For-profits have to maximise the use of their own assets, so they are not the right entities to build and guard that personal data hen house. The alignment of incentives is wrong in that model.

So we need to find a model where the alignment of incentives is strong and sustainable. I would contend that this is where the MyData model comes into its own. In that model:

  • The capability (for data management and selective, informed sharing) is built on the side of the individual. That is the game changer
  • Genuine propositions that seek to use personal data can still flourish; but dubious ones unable to persuade the individual of their merits in an open, transparent way will fail
  • The various incentives are aligned with the needs and wants of the individual

Looking beyond that immediate fox and hen house problem (which is big enough anyway), the MyData model is the only way that individuals will gain general control over their personal data, rather than the faux control enabled by GDPR.

Lots to discuss, then, at the upcoming MyData conference in Helsinki, 25th–27th September 2019. See you there.

When will Privacy regulators tackle data access and data portability?

Just looking at the German Competition Authority’s decision around Facebook and its interpretation of the use of consent. That will have a big impact.

Whilst I’m no big fan of GDPR (over-hyped and under-delivered so far versus what could have been), it does seem that by the first anniversary of it going live the regulation will be starting to show its teeth and have some real impact.

The problem with that is that it will probably take a decade to deliver real improvement, and even that will only be on the ‘defence’ side of personal data capabilities. That is to say, a ten-year effort to eliminate the bad stuff which just should not be happening.

So far the regulators seem to be ignoring the ‘offence’ side, i.e. the more enabling aspects such as data access and data portability.

Of course one could argue that it is not the regulators’ job to build out positive capability on the side of the individual. I would argue otherwise; if more bandwidth were put into the positive side of what individuals could do with proper access to their data, then a lot of the bad things would go away more quickly. Nodding towards data access and data portability, and then doing nothing about the clear failure to deliver them, helps no one.

Next Steps for Information Answers

I’ve not really known what to do with this site/blog since the untimely death of my good friend and colleague John Butler. Until I do, I’m going to use the blog/Twitter just as a place for my personal thoughts, and sometimes rants, about the personal data sector/industry/mess.

Opinions and rants will be entirely mine, nothing to do with my employers.

Thanks

Iain

Quantifying the ROI of Customer Engagement Planning

The internet of things (IoT) is generating big data and driving the need for business-ready analytics. At the same time, the range of marketing channels in which organizations are hoping to engage customers is expanding rapidly through mobile apps, social media and the emerging VRM Personal Cloud. CMOs and marketing departments are stretched thinner than ever to ensure enough perceived reach to drive sufficient customer engagement to sustain and grow their business.

The question becomes how to turn this exploding personal data eco-system to the advantage of the organization. With the customer “in charge” in so many real and virtual locations, how can an organization measure the return on its investment in the various channels so as to maximize marketing effectiveness and minimize wasted cost and effort?

Information Answers has been answering that question for major corporations for more than a decade using a sophisticated ROI mechanism that incorporates all the interdependencies, accounts for all costs across departments and tailors its output to the individual organization’s strategy and planning assumptions. Of course, there is a bit of work to input the necessary organizational information – such as planning assumptions – but it’s straightforward and the toolkit hides the complexity, allowing you to press a button and refine your plans or change course with ease.

With our updated proposition, we can work with firms to quantifiably derive the best case, worst case and most likely scenarios for ROI of customer engagement programs based on planned investments, progress and timing of each component and the anticipated progress of key milestones.
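
By way of illustration only – the figures and structure below are invented and far simpler than the model described – the kind of three-scenario calculation we mean looks like this:

```python
# Illustrative only: a toy best/worst/most-likely ROI calculation for a
# customer engagement programme. Figures are invented; the real model also
# accounts for cross-departmental costs and interdependencies.

def roi(incremental_revenue: float, total_cost: float) -> float:
    """Simple return on investment: net gain divided by cost."""
    return (incremental_revenue - total_cost) / total_cost

scenarios = {
    # responders, conversion rate, average revenue per converted customer
    "worst":       (10_000, 0.005, 120.0),
    "most_likely": (10_000, 0.010, 150.0),
    "best":        (10_000, 0.020, 180.0),
}
programme_cost = 15_000.0

for name, (responders, conversion, revenue_per_customer) in scenarios.items():
    revenue = responders * conversion * revenue_per_customer
    print(f"{name:>11}: ROI = {roi(revenue, programme_cost):+.0%}")
```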

Too often, we find organizations suffering from too much information and subsequently double-counting expected returns. Through the discipline and application of our updated Customer Engagement ROI Model, we can link marketing plans with finance’s budget, sales projections with current average revenue per customer, and so forth.

Even more importantly, the toolset enables CMOs and CIOs not only to navigate the wealth of information but also to safely conduct what-if scenarios so that they can:

  • Preflight and refine existing propositions
  • Identify, test and cost new proposition opportunities
  • Assess the projected return on investment in sufficient detail
  • And produce a project plan or product roadmap for deployment.

So, if you are serious – we mean really serious – about customer engagement, that is, reaching and interacting with prospects, advocates and customers where they research, shop and buy, then this toolset and our global expertise enable you to run test-flight simulations before launching or scrambling to keep up in every channel. The outcome for you is informed planning, reduced risk, improved margins and growth, by leveraging the huge amount of personal data in this new, customer-centric eco-system.

Don’t drown in the data – contact John Butler today for a free consultation and demonstration of how we can help you!

John Butler

Information Answers

john@informationanswers.com

+1 (201) 600-8962