The Personal Data Eco-System

This post is a short(ish) summary of a working session led by Drummond Reed and me at the recent West Coast VRM Workshop, and also an introduction to the Kantara workgroup in which we are going to move this debate forward. It is also part of the thinking that will shortly emerge in a Mydex white paper.

At the VRM workshop, we discussed the need for the concept of the Personal Data Store, what it would do in practice, and what that will ultimately enable.

Why we need such things – because individuals have a complex need to manage personal information over a lifetime, and the tools at their disposal today are inadequate. Existing tools include the brain (which is good but does not have enough RAM, onboard storage, or an ethernet socket……thankfully), stand-alone data stores (paper, spreadsheets, phones – good, but not connected in secure ways that enable user-driven data aggregation and sharing), and supplier-based data stores (which can be tactically good, but are run under supplier-provided terms and conditions). NB Our current perception of ‘personal data stores’ is shaped by the good ones that are out there (e.g. my online bank, my online health vault); what we need is all of that functionality, and more – but working FOR ME.

What they will do/ enable – the term Personal Data Store is not an ideal term to describe a complex set of functions, but it is what it is until we get a better one (the analogy I’d use in more ways than one is the term ‘data warehouse’ – again a simplistic term that masks a lot of complex activity). A Personal Data Store can take two basic forms:

Operational Data Stores – that get things done, and only need to store sufficient breadth and depth of data to fulfill the operation they are built for (e.g. pay a credit card bill, book a doctor’s appointment, order my groceries).

Analytical Data Stores – that underpin and enable decision making, and which typically need a more tightly defined, but much deeper data-set that includes data from a range of aspects of life rather than just that from one specific operation (e.g. plan a home move, buy a car, organise an overseas trip).

A sub-set of the individual’s overall data requirement will lie in both of the above, this being the data that then integrates decision-making and doing.

In both cases, the functionality required is to source, gather, manage, enhance and selectively disclose data (to presentation layers, interfaces or applications).
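To make that list of functions a little more concrete, here is a minimal Python sketch of what an operational Personal Data Store might look like behind the scenes. It is purely illustrative – the class and field names are my own assumptions, not part of any Mydex or Kantara specification – but it shows the gather/enhance/selectively-disclose cycle in miniature.

```python
from dataclasses import dataclass, field


@dataclass
class Attribute:
    name: str            # e.g. "home_address"
    value: str
    source: str          # where the data came from, e.g. "self-asserted"
    verified: bool = False


@dataclass
class PersonalDataStore:
    """Illustrative store: gather attributes, enhance them, disclose selectively."""
    attributes: dict = field(default_factory=dict)

    def gather(self, attr: Attribute) -> None:
        self.attributes[attr.name] = attr

    def enhance(self, name: str, verified: bool) -> None:
        # e.g. mark an attribute as independently verified
        self.attributes[name].verified = verified

    def disclose(self, requested: list) -> dict:
        # selective disclosure: only the named attributes ever leave the store
        return {n: self.attributes[n].value
                for n in requested if n in self.attributes}


# Usage: disclose only what a grocery order needs; date of birth stays private
pds = PersonalDataStore()
pds.gather(Attribute("home_address", "1 High Street", source="self-asserted"))
pds.gather(Attribute("date_of_birth", "1970-01-01", source="passport"))
print(pds.disclose(["home_address"]))
```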

We also discussed ‘who has what data on you’ and introduced the following diagrams to explain current state and target state (post deployment of Volunteered Personal Information (VPI) tech and standards).

The key terms that require explanation are:

My Data – is the data that is undeniably within, and only within, the domain of an individual. Its defining characteristic is that it has demonstrably not been made available to any other party under a signed, binding agreement. This space has been increasingly encroached upon by technology and organisations in recent history (e.g. behavioural tracking tools like Phorm), and this encroachment will continue. Indeed, a general comment can be made that ‘my data’ equates to privacy in the context of personal data; so the rise of the surveillance society and state is a direct assault on ‘My Data’. Management of ‘My Data’ can be run by the individual themselves, or outsourced to a ‘fourth party service’.

Your Data – is the data that is undeniably within the domain of an organisation, whether private, public or third sector. Proxy views of this data may exist elsewhere, but are only that. This data would include, for example, the organisation’s own master records of their product/ service range, their pricing, their costs, their sales outlets and channels. Customer-facing views of much of Your Data are made available for reproduction in the ‘Our Data’ intersect.

Our Data – is the data that is jointly accessible to both buyer and seller/ service provider, and also potentially to any other parties to an interaction, transaction or relationship. It is the data that is generated through engaging in interactions and transactions in and around a customer/ supplier relationship. Despite being ‘our’ data, it is probably technically owned, or at least provided under terms of service designed by the seller/ service provider; in practical terms this also means that the seller/ service provider dictates the formats in which this data exists/ is made available.

Their Data – is the data built/ owned/ sold by third party data aggregators, e.g. credit bureaux and marketing data providers in all their forms. Its defining characteristic is that it is only available/ accessible by buying/ licensing it from the owner.

Everybody’s Data – is the public domain data, typically developed/ run by large, public sector(ish) entities including local government (electoral roll), Post Offices (postal address files) and mapping bureaux (GIS). Typically this data is accessible under contract, but the barriers to accessing these contracts are set low – although often not low enough that an individual can engage with them easily.

The Basic Identifier Set/ Bit in the Middle – this is the core personal identity data which, like it or not, exists largely in the public domain – most typically (but not exclusively) as a result of electoral rolls being made available publicly, and specifically to service providers who wish to build things from them. It is this characteristic that enables the whole personal data eco-system, and its impact on data privacy, to exist – with the individual as the unknowing ‘point of integration’ for data about them.

Propeller Current State

The ovals in the Venn diagram represent the static state, i.e. where data lives at a point in time. The flow arrows show where data flows to and from in this eco-system; I use red to signify data flowing under terms and conditions NOT controlled by the individual data subject.

Flow 1 (My Data to Your Data, and My Data to Our Data) – Individuals provide data to organisations under terms and conditions set by the organisation, the individual being offered a ‘take it or leave it’ set of options. Some granularity is often offered around choices for onward data sharing and use, i.e. the ‘tick boxes’ we all know, and which are one of the main bits of legacy CRM that VRM will fix.

Flow 2 (Your Data to Your Data, including Our Data) – Organisations share data with other organisations, usually through a back-channel, i.e. the details of the sharing relationship are typically not known to the data subject.

Flow 3 (Your Data, including Our Data to Their Data) – Organisations share data with a specific type of other organisation, data aggregators, under terms and conditions that enable onward sale. Typically the sharer is paid for this data/ has a stake in the re-sale value.

Flow 4 (Everybody’s Data to Their Data) – Data Aggregators use public domain data sources to initiate and extend their commercial data assets.
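To restate the diagram in data form (nothing more than that – the names below are my own shorthand, not a formal model), the domains and the four current-state flows can be written down roughly as follows. The point to notice is that in every case the terms are set by someone other than the individual, which is why the arrows are drawn in red.

```python
from dataclasses import dataclass
from enum import Enum


class Domain(Enum):
    MY_DATA = "My Data"
    YOUR_DATA = "Your Data"            # includes the 'Our Data' intersect
    THEIR_DATA = "Their Data"
    EVERYBODYS_DATA = "Everybody's Data"


@dataclass
class Flow:
    number: int
    source: Domain
    destination: Domain
    terms_set_by: str      # who dictates the terms and conditions of the flow


# The four current-state flows described above; none of them is controlled
# by the individual data subject.
CURRENT_STATE = [
    Flow(1, Domain.MY_DATA, Domain.YOUR_DATA, terms_set_by="the organisation"),
    Flow(2, Domain.YOUR_DATA, Domain.YOUR_DATA, terms_set_by="the organisations involved"),
    Flow(3, Domain.YOUR_DATA, Domain.THEIR_DATA, terms_set_by="aggregator contracts"),
    Flow(4, Domain.EVERYBODYS_DATA, Domain.THEIR_DATA, terms_set_by="public-body licences"),
]
```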

The target state is shown below – a different scenario altogether, and one which I believe will unfold incrementally over the next ten years or so…..data attribute by data attribute, customer/ supplier management process by customer/ supplier management process, industry sector by industry sector. In this scenario, the individual and ‘My Data’ become the dominant source of many valuable data types (e.g. buying intentions, verified changes of circumstance), and in doing so eliminate vast amounts of guesswork and waste from existing customer/ citizen management processes.

The key new capabilities required to enable this to happen are those being worked on in the User Driven and Volunteered Personal Information work groups at Kantara (one tech group, one policy/ commerce one), and elsewhere within and around Project VRM. The new capabilities will consist of:

– personal data store(s), both operational and analytical

– data and technical standards around the sharing of volunteered personal information

– volunteered personal information sharing agreements (i.e. contracts driven by the individual perspective, creative commons-like icons for VPI sharing scenarios – a rough sketch of such an agreement follows this list)

– audit and compliance mechanics
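As a thought experiment for the sharing-agreement and audit capabilities above, here is a rough Python sketch of what an individual-asserted VPI sharing agreement might carry. None of this is the Kantara work product – the fields are assumptions of mine, chosen to show the individual setting the terms (attributes, purpose, onward sharing, expiry) with the audit trail hanging off the agreement.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class SharingAgreement:
    """Illustrative VPI sharing agreement, with terms asserted by the individual."""
    data_subject: str
    recipient: str
    attributes: list            # which attributes may be used, e.g. ["home_address"]
    purposes: list              # e.g. ["deliver groceries"]
    onward_sharing: bool = False          # may the recipient pass the data on?
    expires: Optional[date] = None        # the terms lapse after this date
    audit_log: list = field(default_factory=list)

    def record_use(self, event: str) -> None:
        # audit/ compliance hook: every use of the disclosed data is logged
        self.audit_log.append(event)


# Usage: a grocer 'signs' terms limited to one attribute and one purpose
agreement = SharingAgreement(
    data_subject="alice",
    recipient="example-grocer",
    attributes=["home_address"],
    purposes=["deliver groceries"],
    expires=date(2010, 12, 31),
)
agreement.record_use("home_address disclosed for delivery")
```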

Around those capabilities, we will need to build a compelling story that clearly articulates, in a shared lexicon (thanks to Craig Burton for reminding us of the importance of this – watch this space), the benefits of the approach – for both individuals and organisations.

The target state that will emerge once these capabilities begin to have an impact will include 4 additional individual-driven information flows over and above the current ones. The defining characteristic of these new flows is that they can only be initiated by the data subject themselves, and most will only occur when the receiving entity has ‘signed’ the terms and conditions asserted by the individual/ data subject. The new flows are:

Flow 5 (My Data to Your Data, including Our Data) – Individuals will share more high-value, volunteered information with their existing and potential suppliers, eliminating guesswork and waste from many customer management processes. In turn, organisations will share their own expertise/ data with individuals, adding value to the relationship.

Flow 6 (Everybody’s Data to My Data) – With their new, more sophisticated personal information management tools, individuals will be able to take direct feeds from public domain sources for use in their own mashups and applications (e.g. crime maps covering where I live/ travel).

Flow 7 (My Data to (someone else’s) My Data) – An enhanced version of ‘peer to peer’ information sharing.

Flow 8 (My Data to Their Data) – The (currently) unlikely concept of the individual making their volunteered information available to/ through the data aggregators. Indeed we are already starting to see the plumbing for this new flow being put in place with the launch of the Acxiom Identity Card.

Propeller Target State

The implications of the above are enormous, my projection being that over time some 80% of customer management processes will be driven from ‘My Data’. I’m pretty confident about that: a) because we are already seeing the beginning of the change in the current rush for ‘user generated content’ (VPI without the contract), and b) because the economics will stack up. Organisations need data to run their operations – they don’t really mind where it comes from. So, if a new source emerges that is richer, deeper, more accurate, less toxic – and all at lower cost than existing sources – then organisations will use it.

It won’t happen overnight, obviously; as mentioned above, specific tools, processes and commercial approaches need to emerge before this information begins to flow – and even then the shift will be slow but steady, probably beginning with Buying Intention data, as it is the most obvious entry point with enough impact to trigger the change. That said, the Mydex social enterprise already has a working proof of concept up and running that demonstrates much of the above. A technical write-up of the proof of concept build can be found here. And the market implications are explored in more detail in new research on the market value of VPI shortly to be published by Alan Mitchell at Ctrl-Shift.

The two hour session at the VRM workshop was barely enough to scratch the surface of the above issues, so the plan is to continue the dialogue and begin specifying the capabilities required in detail in the User Driven and Volunteered Personal Information (technology) workgroup at The Kantara Initiative. The workgroup charter can be found here. A parallel workgroup focused on business and policy aspects will also be launched in the next few weeks. Anyone wishing to get involved in the workgroup can sign up to the mailing list here and we’ll get started with the work in the next couple of weeks.

 

The Information Masters (not)

This post by Jerry Fishenden on Government IT spend made me think back to the Information Masters research and book led way back in 1999 by my sometime consulting colleague John McKean. I find it bemusing and a bit depressing that, despite this excellent piece of research and its associated findings having been around now for ten years, the vast majority of organisations still carry on believing that throwing money at technology is the way to get maximum return from their information assets. Certainly from Jerry’s post, it would seem that the UK government is very much in this camp.

To recap, for those who have not read the book: the research study interviewed senior execs from 30 or so organisations worldwide, across a range of sectors, that were noted for their advanced use of customer and related information. It asked them to explain what had started them on their journey to information mastery, what they found out along the way, and what they would do differently if they had the chance. This qualitative research uncovered seven information-related competencies (information management, leadership, development of an information culture, organisational structure to optimise information flows, process management, people management and technology management). These competencies and their impact are explained in detail in the book – the key point made by these top-performing businesses was that they invested their time and money across these competencies in a balanced way; i.e. ALL are important, and over-, under- or non-investment in any one of them would have limited the overall success of the initiative.

However, for me, the most interesting part of the Information Masters story lies in the quantitative assessment of how the typical organisation (i.e. more than 95% of them, according to the survey) invests. In short, they spend most of their money on the technology, and pay lip service to the other six competencies. This comparison is shown in the chart below, along with a view on the return on investment available from each approach.

elements of information competency

Unfortunately for us UK taxpayers, our successive governments and the civil service seem to be very much in the ‘throw money at technology’ camp.

“The Personalisation of Today is Like Lipstick on a Pig….”

I just love that quote from James Gardner of LloydsTSB, who goes on to say…

‘No, the only way to get to markets of one is if customers make the products themselves. This is where the “mash up” I spoke of in my last post comes in. Customers, who are able to throw together bits of offers in unique ways, and then share them with other like-minded customers, are the way things will eventually pan out. These are crowds at the centre of the financial services value chain, which will be highly distributed, highly chaotic, but not subject to the system risks of a centralised banking system.’

Spot on, I’d say, and that’s where we’re looking to get to with Mydex – allowing the individual to genuinely be the point of integration for their personal data, and the processes/ applications/ mashups that engage with it. I don’t think banking will be the first to engage, but it will probably be a fast follower.

Scotweb 2

I’ll be speaking at this event next week in Edinburgh about VRM and the Mydex initiative.

Also, moves are afoot to get a Scotland-based ‘chapter’ up and running to do some local pushing forward on VRM initiatives.

Personal RFPs….what are they, and how do we make them happen?

At the VRM West Coast workshop, Don Marti led a session on Personal RFPs, which then led to the issue being debated further on the mailing list and built out in this post by Alan Mitchell. Here’s my contribution, looking as much from the CRM/ recipient perspective as the VRM one – ultimately, I think that until we look at both simultaneously we won’t get much up and running at any kind of scale.

Firstly, I think we need to get our terminology in order; as Craig Burton pointed out…we do not yet have a clear VRM lexicon accepted and understood by all project participants.

Here are a couple of references from Wikipedia that relate to/ illustrate the background to the terms Request for Information (RFI) and Request for Proposals (RFP). I think we need to look at both in tandem because typically they interact with each other.

Request for Information – A request for information (RFI) is a standard business process whose purpose is to collect written information about the capabilities of various suppliers. Normally it follows a format that can be used for comparative purposes. An RFI is primarily used to gather information to help make a decision on what steps to take next. RFIs are therefore seldom the final stage and are instead often used in combination with the following: request for proposal (RFP), request for tender (RFT), and request for quotation (RFQ). In addition to gathering basic information, an RFI is often used as a solicitation sent to a broad base of potential suppliers for the purpose of conditioning suppliers’ minds, developing strategy, building a database, and preparing for an RFP, RFT, or RFQ.

Request for Proposal – A request for proposal (referred to as RFP) is an invitation for suppliers, often through a bidding process, to submit a proposal on a specific commodity or service. A bidding process is one of the best methods for leveraging a company’s negotiating ability and purchasing power with suppliers. The RFP process brings structure to the procurement decision and allows the risks and benefits to be identified clearly upfront. The RFP purchase process is lengthier than others, so it is used only where its many advantages outweigh any disadvantages and delays caused. The added benefit of input from a broad spectrum of functional experts ensures that the solution chosen will suit the company’s requirements. Effective RFPs typically reflect the strategy and short/long-term business objectives, providing detailed insight upon which suppliers will be able to offer a matching perspective.

I think the background to these terms is key to how we must think of them in VRM world if we are to understand how best to deploy them. What does that mean in practice?

  1. The RFI and RFP processes originate from professional procurement functions that have the time, funds and incentive to make the process work
  2. There is an implicit logic in the process for both parties, architected around eliminating guesswork and waste; i.e. we’ll tell you what we want to know about (RFI) and, based on that information, what we want to buy (RFP) to save you having to market and sell to us; and by being more organised we’ll be able to do a more efficient deal for both and generate a win-win
  3. They are business processes, not just technologies or data flows
  4. The communications channels through which the interactions and transactions are exchanged should be standard, mass market, not niche
  5. They need two parties, issuers and respondents, both of whom understand how the process works, and both of whom have to do a lot of work to make the exercise work
  6. They typically relate to fairly complex requirements, because the cost of the process is high enough to eliminate the value in applying it to simple/ low cost purchases
  7. The buyer requirement/ seller response is rarely just about lowest price; items suited to that are dealt with in commodity markets

In addition to these characteristics, it is also worth noting that over time intermediaries have emerged (e.g. TEC) who, amongst other support services, make a whole series of standard RFI and RFP templates available at no or low cost in order to stick themselves into the value chain.

My view of the above is that a) the originators of the terms RFI and RFP now have finely honed processes for dealing with them, they do enable win-wins for buyer and seller, and intermediaries have emerged to deal with some of the hard stuff – like finding common terminology; and b) they are typically not automated processes, and thus not at all like what will actually be required to do the things we have commonly described as Personal RFPs in VRM discussions (e.g. I’m here, and I need a stroller for twins).

SO: Before we progress, we may wish to change our terminology around the RFI/ RFP issue – to more accurately reflect what the individual needs; otherwise we risk being confused with the prior deployments of the terms which actually have very little in common with what the individual might deploy right now.

Here’s my view of what those needs are:

  • To be able to articulate a requirement for information about a product or service in ways that can be discovered by potential suppliers or other third or fourth party service providers (assume discovery by a machine, but not exclusively so). This area is where Alan suggests there is the biggest gap at present; and that’s quite right – if that gap was not there, we’d have had personal RFP-type things going on years ago.
  • To share that requirement for information without compromising ones data privacy beyond that required to receive the information sought.
  • To match ‘information requests/ buying intentions’ with their equivalent information provisions and proposals (that’s the really smart bit – sketched after this list!).
  • To receive responses to the information request through one or more communication channels.
  • To be able to interact with responses, including follow up to complete a sale, or to extend an information request.
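To make those needs a little more tangible, here is a toy Python sketch of the ‘articulate and match’ steps. It is a sketch under my own assumptions – the field names, the alias reply channel and the matching rule are all illustrative, not any agreed VRM format – but it shows the point: the request carries the requirement, a coarse location hint and a reply alias, and nothing that identifies the individual.

```python
from dataclasses import dataclass


@dataclass
class BuyingIntention:
    """Illustrative 'personal RFP': describes the requirement, not the person."""
    category: str          # e.g. "stroller"
    must_have: list        # e.g. ["suitable for twins"]
    location_hint: str     # coarse, e.g. a postcode district, never a full address
    reply_channel: str     # an alias the individual controls, not a real email address


@dataclass
class Offer:
    supplier: str
    category: str
    features: list
    price: float


def match(intention: BuyingIntention, offers: list) -> list:
    # the 'really smart bit': match requests to provisions without exposing identity
    return [o for o in offers
            if o.category == intention.category
            and all(f in o.features for f in intention.must_have)]


# Usage: "I'm here, and I need a stroller for twins"
intent = BuyingIntention("stroller", ["suitable for twins"], "EH1",
                         "alias-123@fourth-party.example")
offers = [Offer("ShopA", "stroller", ["suitable for twins", "foldable"], 299.0),
          Offer("ShopB", "stroller", ["single seat"], 149.0)]
print([o.supplier for o in match(intent, offers)])   # ['ShopA']
```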

If we look hard enough we’ll find that there are already architectures out there that do 2, 3 and 4 – and bits of 1 are around that can be picked up and added in, either directly or (more likely) via fourth party services. For example, the architecture below has been doing its stuff on the web since way back in 2000, in a proposition called 2busy2surf that was way ahead of its time. Unfortunately that business has now gone, but the architecture and buyer-seller matching engine have been white-labelled into 20 or so propositions since then. It is still churning out stacks of permissioned requests for information and requests for proposals, and returning matched information packages or offers. These come direct from the selling organisation, or more typically through the affiliate markets (third party services).

RFI & P Architecture 1

So, to get what we used to call personal RFPs up and running, what we need to do, in my view, is:

  1. Sort out our terminology/ lexicon
  2. Build out the Requirements Articulation piece (adding search maps, comparison engines and other added-value buying services into the spec)
  3. Tell the story of the architecture
  4. Get it running in a few businesses in a more overtly VRM way
  5. Publish the architecture as an open standard

That’s going to take a bit of time and effort. It’s on the agenda for the User Driven and Volunteered Personal Information working group at Kantara; this group has now been approved and will be up and running shortly. I’ll post the details on how to join that as soon as I have them.

Thoughts anyone?

Iain

CRM…. meet VRM, the ‘three meeting theory’

I’ve had a couple more validations of this theory in the last few weeks, so I thought I’d best write it up. My hope is that we can use the upcoming VRM Workshop to get the VRM story refined and presented so that we can reduce the number of meetings required to get to the detail of why an organisation should consider ‘VRM-enabling’ itself.

So, here’s my theory:

It takes three fairly in-depth meetings for a smart, typically senior CRM/ Customer Management/ Customer Experience executive in a large customer-facing organisation to genuinely ‘get’ VRM and where we are coming from with the project and mind-set – and thus what’s in it for them.

Here’s how it usually pans out in my experience:

Meeting One: This usually happens on the back of an existing contact who has heard/ read some snippet about ‘VRM’, or it can come from one of the more in-depth, small-group presentations that I and others have run in the last 12 months or so (mainly UK).

The outcome of this meeting, from the perspective of the CRM/ CM exec, is usually along the lines of ‘These people are well meaning and obviously committed to their hobby, but a bit mad and naive as to what us big organisations have to deal with; but at least I’ve done my bit for keeping an eye on innovation in my space’. Alternatively, the shorter meetings can be driven by ‘don’t these people realise that we’ve just spent a zillion pounds on our CRM application and need to get that to work because we’ve told everyone it will’.

Most CRM….meet VRM discussions finish at this stage….for now anyway.

Meeting Two: Let’s say that, at best, one in twenty of the above meetings ends up with a follow-up meeting, and that many of these are through personal ongoing contacts (where CRM/ CM work is going on in parallel); or that sufficient time has passed since meeting one for an update to be of possible value.

This is the meeting during which ‘the penny drops’….but typically only in connection with a very small nugget of opportunity, often one which is front of mind for the exec at that point in time. Examples would include:

– yes, I know our data quality is shockingly bad….., you mean we could work with our customers to fix that…..? Or

– so you mean we could accept these highly qualified leads into our existing CRM system with hardly any tweaks….? Or

– so our customers can help us refine/ define our new products if we engage in the right way?

The outcome of this second meeting is usually….’let me think about that’; and ‘is there anything up and running as a genuine VRM application that I can have a look at?’

Meeting Three: So now we’re down to a very small number of ‘almost converts’. These third meetings are typically much more ‘CRM/ CM/ CE Exec driven’ and are about:

– where do I see this stuff? (i.e. we are usually showing some of the behind the scenes development projects at this stage)

– how can I access it to play around with it, prototype it and build proofs of concept in my domain?

– can you meet up with our innovation folks to talk about a possible pilot?

Underpinning these third meetings is usually the realisation that what we VRM folks are talking about actually has a very sound economic argument, and also that we are about ‘win win’ rather than consumer activism for the sake of it.

What happens after meeting three? I don’t know, to be honest – we’ve not had any yet that I’d count as such, although there are a couple lined up for June and July. I think for those meetings the challenge falls back onto the VRM community, or those of us building VRM-type solutions – we need to be able to answer the ‘meeting three challenges’ loud and clear.

What does that mean for Project VRM and our workshop this week? I think we need to get better at telling our big and complex story, probably in bite-sized chunks and in accessible ways – a good web site, for example. I think we also need to focus on getting some real, live pilots and proofs of concept out there to be engaged with. Let’s pick up on that on Friday.

Lastly, I’d have to add that the record for ‘getting it’ is actually nothing like my three meeting theory – it was about twenty minutes, and the only question at the end was ‘where do we sign up?’

Welcome

Welcome to our new web site and blog, at last moved over from static pages to WordPress.

This is where I’ll be blogging about my ‘day job’, customer information strategy consulting, and also my growing involvement in next-generation customer information such as that proposed by Project VRM.

Cheers

Iain

Volunteered Personal Information – Dominant Marketing Paradigm by 2015…

(Cross post from Right Side Up)

Here’s an interesting presentation given to the Direct Marketing Association annual Privacy and Data Protection conference last week by Marc Dautlich of Olswang, the lawyers who are advising the Mydex CIC.

The whole deck is useful to those of us interested in privacy and data protection, but the concluding slide is of wider interest – Marc predicts that Volunteered Personal Information will become the dominant marketing paradigm by 2015. Let’s hope so…..
