A Strategic Segmentation of the Global Population for Covid-19

Well, it’s not every day one starts a blog post with such a title; but these are not ordinary times. So let’s dive in…

Key Point 1: What we are facing can be seen as a global citizen-management challenge; by which we mean a big part of moving through and beyond the Covid-19 pandemic will come from understanding the affected base, drilling into the key drivers of differentiation and, from there, deriving a customer/ citizen management plan to achieve the best possible outcomes. I say 'customer' solely as a reminder that many of the disciplines required to build and implement the plan are those that have bubbled up through the customer management/ CRM disciplines. But in this situation, Citizen Management is the better framing.

Key Point 2: Given the above, the starting point is to define and agree on a strategic segmentation of the relevant population (i.e. everyone on the planet). Strategic segmentation can be described as identifying and defining the key groups in a population with distinct characteristics that line up with the ability to generate knowledge and take action. One would typically expect these segments to be mutually exclusive, and one would hope for no more than 8-10 of them, so that they become embedded and drive strategic plans.

Key Point 3: One would expect many tactical segmentation approaches to then emerge beneath the strategic one; the tactical ones are more aimed at drilling into specific facets of a strategic segment to build understanding and enable actions.

This approach to strategic segmentation for Covid-19 is being evolved in the MyData Community, where there is a huge array of work underway to bring human-centric data approaches to the Covid-19 problem. Here's how we see that strategic segmentation at this point.

The visual below is our first attempt at defining the strategic variables and allocating the population into segments. The actual numbers in each segment are a best guess derived from published sources, and that will always be the case in this situation.

Strategic Segmentation for Covid-19

We see the key strategic variables at this stage as being 1) Covid-19 Medical Status (5 options), and 2) Key Worker Status (key worker or not). The former reflects where each individual is on their potential Covid-19 journey: from no symptoms/ assumed not infected, through symptomatic but not tested or tested and found not infected, to confirmed current case, confirmed recovery and finally confirmed fatality. The latter reflects the extent to which an individual has a different risk profile because they are a key worker of some form, and will therefore necessarily behave differently to those who are not key workers.
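As a minimal sketch of how those two variables combine into mutually exclusive segments, the snippet below assigns individuals to one of the 5 x 2 segments. The status labels and the Person fields are illustrative assumptions rather than an agreed schema.

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative labels for the five Covid-19 medical statuses described above.
MEDICAL_STATUSES = [
    "no symptoms / assumed not infected",
    "symptomatic, not tested / tested not infected",
    "confirmed current case",
    "confirmed recovered",
    "confirmed fatality",
]

@dataclass
class Person:
    medical_status: str   # one of MEDICAL_STATUSES
    key_worker: bool

def segment_of(person: Person) -> str:
    """Map a person onto one of the 5 x 2 mutually exclusive strategic segments."""
    role = "key worker" if person.key_worker else "general population"
    return f"{person.medical_status} / {role}"

# Count a tiny illustrative population into segments.
population = [
    Person(MEDICAL_STATUSES[0], False),
    Person(MEDICAL_STATUSES[0], True),
    Person(MEDICAL_STATUSES[2], True),
]
print(Counter(segment_of(p) for p in population))
```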

More important than the numbers within each segment are the movements between segments over time – in this case in almost real time. This is where one would be looking to apply 'actionable knowledge', i.e. a chart that not only tells me something I did not know before, but also allows action to be taken by people who have the necessary access permissions to drill into and use the underlying data. The ideal data-set for analysing and using data in this way would blend top-down published or scraped data with bottom-up, permissioned data from individuals. That approach would allow for maximum accuracy in the data-set and extensive usability, not least for the individual contributors to the data-set themselves, who could personalise that actionable knowledge to their own specific requirements.
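To make the 'movements between segments' idea concrete, here is a small sketch that counts transitions between two snapshots of segment membership. The anonymous person ids and segment labels are assumptions for illustration only.

```python
from collections import Counter

def segment_transitions(before: dict, after: dict) -> Counter:
    """Count moves between segments for individuals present in both snapshots.

    `before` and `after` each map an anonymous person id to a segment label.
    """
    moves = Counter()
    for person_id, old_segment in before.items():
        new_segment = after.get(person_id)
        if new_segment is not None and new_segment != old_segment:
            moves[(old_segment, new_segment)] += 1
    return moves

yesterday = {"p1": "symptomatic / general population",
             "p2": "no symptoms / key worker"}
today = {"p1": "confirmed current case / general population",
         "p2": "no symptoms / key worker"}
print(segment_transitions(yesterday, today))
```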

We’re working on a prototype of the above now, so we will hopefully have something to show in the next few days.

MyData and Coronavirus

Check out the quote below, overheard by Doc Searls, my co-conspirator at Customer Commons. That’s spot on… and it points towards things that need to be built before the next virus shows up; things for the common good. What we are seeing now around Covid-19 is the best that can be done when you only have access to top-down, macro data and a very small amount of actual evidence. And that actual evidence is usually coming in too late to drive targeted action.

The market for bullshit drops to zero

So what we are seeing now is macro analytics doing the best it can – and a whole load of bullshit filling the gaps, driving panic buying and causing shortages.

It’s been clear for weeks now that the whole Coronavirus problem is a perfect and massive use case for how top-down, population-level data needs to be complemented with bottom-up, person-level data. That’s ‘MyData’ if you will, given that the emergence of that new capability is recognised in the new EU data strategy. Or to be more technical, we need intelligence and actionable insights at ‘the edge’, i.e. with data and tools located in the hands of the people, to complement that which can be gleaned at the aggregate level by health authorities, governments et al. Appropriate insights and aggregations then come back from the edge to the central/ top-down bodies, to be blended into the top-down data and insights and to better inform the individual.

I’ve put together an example of what I’m talking about below; the key components in this quick prototype are:

1. Decentralised identity – to avoid huge, dangerous databases of highly sensitive data; and enable verified claims to be utilised where appropriate.

2. Volunteered personal information, gathered, used, shared and aggregated voluntarily under a robust information sharing agreement

3. Personal algorithms – I’m defining Personal Algorithms (Palgorithms) as those algorithms that are ‘on my side’, which are completely transparent, and which an individual engages themselves, rather than being subject to the more common profiling by second or third parties (a minimal sketch follows this list)

4. A data commons/ trust run by appropriate entities with the relevant skills

5. A full audit log to keep the various actors aligned with their scope

6. Privacy by design to ensure data minimisation and transparency, and to limit downstream problems
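A minimal sketch of point 3, assuming nothing about the actual prototype's internals: a 'personal algorithm' run by the individual over their own volunteered data, returning only a minimised result for sharing (point 6). The field names and the threshold are illustrative assumptions.

```python
def personal_covid_flag(my_data: dict) -> dict:
    """A transparent algorithm that runs 'on my side', over my own data."""
    symptomatic = my_data.get("symptom_days", 0) >= 1
    exposed = my_data.get("confirmed_contact", False)
    flag = "seek test / self-isolate" if (symptomatic or exposed) else "no action"
    # Data minimisation: the shareable output deliberately omits the raw inputs.
    return {"status_flag": flag}

my_data = {"symptom_days": 2, "confirmed_contact": False, "postcode_district": "EH3"}
print(personal_covid_flag(my_data))   # {'status_flag': 'seek test / self-isolate'}
```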

This screenshot shows what data might be shareable (screenshot updated 15th March with new data fields as needs become clearer).

Real software, illustrative data fields

This screenshot shows the audit log

Clearly this prototype, and evolutions of it, won’t be ready to help much in the current Coronavirus outbreak and its massive consequences. But almost certainly there will be more outbreaks – because of the nature of this and similar viruses, and the inter-connectedness of our global economy.

As to the specifics of the data that could be gathered: in this example it is just my assumptions about the types and sources of data that an individual with the right tools could bring to bear when the need arises. That is to say, when the issue is one of life and death, an individual has an extreme self-interest in the situation, so gathering some data should not be an issue.

Without the above we continue to look for hundreds of thousands of needles in 7 billion haystacks. But maybe crazy times can drive edge-based solutions more quickly than we have been accustomed to.

Update 11th March – I’ve just noticed this fabulous Tableau data-set and analytics resource on Covid-19. Good as it is, it serves as a reminder that what I’m talking about above would add a new column called something like ‘pipeline’, which can only come from bottom-up/ human-centric data, and would also add more granularity (data fields) to the existing columns to help identify further useful analyses.

Defining ‘My Data’ Portability and Interoperability

There seems to be a growing recognition in the privacy/ data empowerment spaces that data portability and data interoperability across service providers are important things.

I agree, but am concerned that whilst both words trip off the tongue reasonably easily, defining what precisely we mean by them, and thus what we want to emerge, is a lot trickier. With that in mind, here is a statement of outcome requirements from the various perspectives, worked up in the Kantara Information Sharing Work Group a few years back.

  1. As an individual, I want to be able to pick up my data from one personal data service (locker etc.) and drop it into another with minimal fuss
  2. As an individual, I want to be able to have applications run on my personal data, even when it exists in multiple different services
  3. As an individual, I want the apps I had running on my personal data in one locker service to work when I move my data to another one
  4. As an application developer, I want the apps I build to run, with minimal overhead, across multiple personal data services
  5. As an organisation willing to receive and respond to VRM/ Me2B/ MyData style data feeds, I don’t want to have to set up different mechanisms for each different provider
  6. As an organisation willing to provide/ return data to customers as part of our proposition to them, I want to be able to make this data available in the minimum number of ways that meet the users’ needs, not a different format for each personal data service provider
  7. As an organisation willing to provide/ return data to customers as part of our proposition to them, I don’t want to have a different button/ connection mechanism for each personal data service provider on the ‘customers signed in’ page of my web site.

Does that statement of requirements still feel about right? If not then ping me a note and I’ll update them.

These requirements were written around the time of the UK Midata project, when various other data portability buttons (green, red and blue buttons so far) were emerging as localised or sector-specific responses to obvious data portability needs.

Thinking about these requirements in the emerging world where MyData is a stronger theme (e.g. the endorsement in the EU Data Strategy), I think we need a carefully constructed plan if we are to move beyond that localised and sector-specific model. My thinking on how we put that plan together is as below:

  1. These solutions will not be reached by putting organisations (the supply side) in charge of the process. No amount of arm-twisting and threatened regulation will make that work in any way other than slowly and sub-optimally. We have plenty of evidence for that in the UK at present: see this review of the original UK MiData programme dating back to 2014, and I don’t see any sign of uptake of data portability (as defined in that project) since then.
  2. These solutions will be reached by interested, data-skilled individuals and representatives of customer groups getting down into the weeds and specifying data fields and data formats (so the request to a supplier would be formed not as ‘can I make a data portability request please’, but as ‘here is precisely the data I’d like you to provide me, and the choice of formats which I’ll accept’ – a sketch of such a request follows this list). So I think that means a) figure out the use cases to be tackled, b) get the interested parties in a room with a whiteboard to list the fields required to make those use cases work, c) publish those data fields in a machine-readable format (most likely JSON-LD) for the community to review, build on, improve and agree as a draft standard, d) build prototypes to show this data portability and inter-operability working in practice, and e) put those standards through a relevant working group at a standards body (e.g. Kantara Information Sharing Inter-operability). For example, here is an illustration of such an online schema derived from the original MiData use case (home energy switching).
  3. Talk to existing intermediaries (e.g. comparison shopping sites) and service providers about this new model, whilst also flagging the approach to those working to deliver the EU Data Strategy (which has many strands relating to data portability and interoperability).
  4. Get some basic human-centric services up and running that demonstrate the art of the possible.
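To illustrate point 2, here is a minimal sketch of what that reframed request might look like, using the home energy switching use case. The field names and format labels are my own assumptions, not the actual MiData-derived schema linked above.

```python
import json

# A hypothetical, individual-authored data portability request: 'here is
# precisely the data I'd like you to provide me, and the formats I'll accept'.
data_portability_request = {
    "requester": "individual",
    "use_case": "home energy switching",
    "fields_requested": [
        "tariff_name",
        "unit_rate_pence_per_kwh",
        "standing_charge_pence_per_day",
        "annual_consumption_kwh",
    ],
    "accepted_formats": ["application/ld+json", "application/json", "text/csv"],
}

print(json.dumps(data_portability_request, indent=2))
```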

There is at least one way to deliver all of the above relatively easily. The use of JSON-LD provides the necessary data documentation and mapping layer; the wider JLINC protocol gives the additional benefits of transparency, data portability, co-management of data and a full audit log.

I’ll talk more about the above at the upcoming MyData Community meeting in Amsterdam with anyone who wants to dig into data portability and inter-operability in more detail.

Personalisation – Who Should Drive?

Yesterday I attended an excellent event hosted by Merkle in Edinburgh. There was a lot of very impressive content, tech and skills on show; the highlight was the debate hosted by Firas Khnaisser on whether brands should or should not give up on personalisation (Gartner predicts 80% will by 2025).

In the debate, the ‘let’s keep it’ team won the day; much of that, I think, was down to a recognition amongst the team that what is done at present remains nowhere near as good as it needs to be to achieve a genuine human connection with a customer or prospect. I particularly liked the point from Stephen Ingledew, along the lines of ‘let’s face it, what we do is not personalisation, it is targeting’.

Needless to say, I chipped in with a question/ point around ‘would it not be better to let the customer do the personalisation?’. That was not laughed out of court, so let me explain how that might work.

The best way to think about that might be to introduce a new xxxxTech to the stack; let’s call it CustomerTech. That new capability is, at the top level, pretty simple; it is about better enabling and supporting the many things that one has to do as a potential and existing customer. Think about that a bit, because the potential impact is huge. As a customer, one wanders consciously or unconsciously through a series of stages – surfacing, identifying and articulating a need, taking that need to the marketplace, engaging in dialogue, drawing on decision-support tools, negotiating, making a buying decision, transacting, and getting up and running with the new thing and supply relationship(s). And then finally getting to enjoy the purchase made… (all the while preparing for when things go wrong, and for the exit).

Does that all sound a bit familiar? It should do, because it is the inverse of the Adtech, Martech, eCommerce and CRM stack.

The visual below is an early glimpse of CustomerTech in action; it is a buying interest signal, made available to the market under the control of the potential customer. It’s pretty basic right now; so were CRM and the other ‘techs’ in their early days. But critically, it is expressed through ultra-modern technologies that, whilst staying under the hood, address the pain points that have kept direct marketing response rates poor, and static, for many years. The main pain point is that people inherently understand that when they make data available to a commercial organisation it will more than likely be shared onwards and used to drive communications – so they share the bare minimum. There is no alignment between the needs of the data sharer and the recipient; there is no win-win, as there typically is in B2B direct marketing. CustomerTech addresses this by enabling this rich buying-intention data to flow, but only under terms that work for the individual (e.g. you can have this rich data if you agree not to share it outside of your organisation, or beyond the purposes agreed). That might sound challenging to the status quo… it is. But then again, so is running a very expensive and fancy marketing machine on the two-star fuel that is today’s available data.
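As a sketch of what such a buying interest signal might carry (all field names and terms here are my own illustrative assumptions, not the prototype's actual schema), the intent data only flows to recipients who accept the individual's terms:

```python
from typing import Optional

# A hypothetical buying interest signal: rich intent data plus the terms the
# individual attaches to it.
buying_interest_signal = {
    "intent": {
        "category": "home energy supply",
        "timeframe": "next 60 days",
        "requirements": ["100% renewable", "no exit fee"],
    },
    "sharing_terms": {
        "no_onward_sharing": True,
        "purpose_limited_to": "responding to this buying interest",
        "delete_after_days": 90,
    },
}

def release_to(recipient_accepts_terms: bool) -> Optional[dict]:
    """A recipient that will not honour the terms never sees the intent payload."""
    return buying_interest_signal if recipient_accepts_terms else None

print(release_to(True) is not None)   # True
print(release_to(False))              # None
```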

How do we get CustomerTech switched on? I would contend it is already on its way, but largely under the radar for most of the marketing community. The tech all works, with a number of game-changers in there; and it all plumbs very neatly into the existing tech stacks. The missing step is those standardised information sharing agreements. With that in mind, I believe, based on yesterday’s evidence, that the newly pivoted and re-invigorated Data and Marketing Association could be the body that sits down to help thrash these out. Consider that an open invite.

Why might the DMA wish to do so? I think the answer lies in this wonderful piece of emerging evidence from Dr Johnny Ryan at Brave Software. It shows the improvement in the clickthrough metric being achieved by Brave’s privacy-centric approach to digital advertising. Think about those numbers; this increase in engagement rate is incredible. Imagine what it could be when drawing on the even richer data-set available from permissioned buying interest data.

Lots to ponder on. Thanks again Merkle for a very useful session.

Next up… the three data fields that will make all of this happen.

Marketing, heal thyself

In honour of Data Privacy Day 2020, I’ve decided to make an admission. Yes, I’m afraid I have now been working in what amounts to direct marketing for 34 years. Does that qualify me for membership of Direct Marketers Anonymous? That is what DMA stands for, isn’t it?

Actually, I see the Direct Marketing Association has rebranded as The Data and Marketing Association. A good move, I suggest, and perhaps an opportunity to reflect and move in a new direction. Certainly from where I stand, the marketing profession needs to change, and fast. Here are some obvious problems that need to be rectified:

  1. Data volumes: Those statements about ‘more data created in the last XXX than all the previous YYYs’ used to be quite satisfying; think of all that big data analytics… Then reality set in: data quality is typically a major barrier to progress, and thus a significant proportion of the huge data-sets being gathered sit there soaking up resources to no great effect. Time to stop gathering data whose costs exceed its value.
  2. Cookies and surveillance: The modern-day version of coupons and loyalty cards, but much more invasive and dangerous. The marketing industry has fallen hook, line and sinker for the adtech promise, which so far as I can see fails to deliver for anyone other than the adtech providers themselves. Enough; switch them off.
  3. Lack of respect for people’s time: Back in the day, when it cost money to send things, direct marketing used to be a relative rarity. These days, when there is very little incremental cost in sending an email, there is a tendency to just keep sending more and more frequently. I used to laugh at those ’12 days of X’mas’ email campaigns, until everyone started doing them, and then they extended beyond 12 days. So now the whole period between Thanksgiving and the January sales is one huge marketing blitzkrieg. Enough; just because you can send something doesn’t mean you should. Be more respectful of people’s time.
  4. Failure to recognise the full customer experience: Building on the above, if one brand extends their 12 days of X’mas campaign from 12 to 30 days, that’s annoying. But when the 20-30 brands an individual is easily engaged with all do the same, the volumes go up ridiculously. Brands need to recognise that they are only a small part of their customers’ lives, not the centre of them.

In the grand scheme of things, and as I see it, response rates in B2C through that whole 30-year period have been pretty stagnant. On average, I expect a 1% response to a direct marketing campaign, and 1% of that 1% to convert. Given the HUGE change in data availability, analytical tools and technologies that have emerged over that period, that stagnancy is quite shocking. By implication, 99% of our messaging is going to people who find it not relevant to them at that point in time.
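Spelled out with the rough planning figures above (they are quoted approximations, not measured benchmarks), the arithmetic looks like this:

```python
sends = 1_000_000
response_rate = 0.01              # ~1% respond
conversion_of_responders = 0.01   # ~1% of that 1% convert

responses = sends * response_rate                   # 10,000
conversions = responses * conversion_of_responders  # 100
print(f"{conversions:.0f} conversions from {sends:,} messages; "
      f"{sends - responses:,.0f} recipients never respond at all")
```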

I think it is time to change the model. Time to change from push marketing to pull marketing. Rather than spend yet more money on adtech, invest in what I’ll call ‘CustomerTech’; that is to say, tools and capabilities built on the side of the customer. Give customers the ability to signal when they are in the market for something, without that then subjecting them to a flood of marcoms, and without their data being sold and shared out the back door. I would contend that marcoms volumes would go right down, and perceptions and response rates would go right up.

Thankfully there are now significant numbers of people working to make CustomerTech a reality. Here’s hoping we see that come to life before next year’s Privacy Day.

Co-managed data, subject access and data portability

I’ve written a fair bit of late on the importance of co-managed data; here and here again. This follow-on drills into how co-managed data between individuals and their suppliers would change how we typically think about both subject access and data portability.

Subject access has featured in pretty much all privacy legislation for 20 years or so. It has always been framed as ‘if an individual wishes to see the data an organisation holds on them, then they have the right to do so’. Typically that has meant the individual having to write to the relevant data protection officer with a subject access request, and then a bundle of paper print-outs showing up a few weeks later. My last one is shown below; it was incomplete and largely useless, but at least validated my view that subject access had not moved on much since GDPR.

Example Subject Access Response

Data portability has been around for less time, as a concept sought by a bunch of interested individuals, and only appeared in legislation with GDPR in 2018. The theory is that individuals now have the right to ask to ‘move, copy or transfer personal data easily from one supplier environment to another in a safe and secure way, without affecting its usability’. However, when I tried that, the response was wholly underwhelming. I’ll no doubt try again this year, but I fully expect the response to be equally poor.

The problem with both of the above, in my view, is that organisations are being given the remit to make the technical choices around what is delivered in response to subject access and data portability requests. In that scenario they will always default to the lowest common technical denominator; anything else would be like asking turkeys to vote for X’mas. So, think .csv files delivered slowly, in clunky ways designed to minimise future use, rather than the enabling formats and accessibility the data subjects would like to see.

Moreover, the issue is not just the file format. There is also a mind-set challenge, in that the current norm assumes individuals are relatively passive and non-expert in data management, and thus need to be hand-held and/ or supported through such processes so as to avoid risk. Whilst, yes, there may be many in that category, there are also now many individuals who are completely comfortable with data management – not least the many millions who do so in their day jobs.

So, in my view there is little to be gained by pursuing data subject access and data portability as they are currently perceived. Those mental and technical models date back 20 years and won’t survive the next decade. As hinted at above, my view is that both subject access and data portability needs could be met as facets of the wider move towards co-managed data.

So, what does that mean in practice? Let’s look at the oft-used ‘home energy supply’ example (which adds competition law into the mix alongside subject access and data portability). Job 1 is relatively easy: publish the data fields to be made accessible in a modern, machine-readable format. An example of this is shown at this link, taking the field listing from the original UK Midata project and publishing it in JSON-LD format (i.e. a data standard that enables data to be interoperable across multiple web sites). What that very quickly does is cut to the chase on ‘here’s the data we are talking about’, in a format that all parties can easily understand, share and ingest.
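For a flavour of what such a listing might look like (a minimal sketch only – the vocabulary URL and field names below are illustrative assumptions, not the actual Midata-derived schema at the link above):

```python
import json

# A hypothetical JSON-LD style record for the home energy supply example.
home_energy_record = {
    "@context": {"@vocab": "https://example.org/home-energy#"},  # placeholder vocabulary
    "@type": "HomeEnergySupplyRecord",
    "supplierName": "ExampleEnergy Ltd",
    "tariffName": "Standard Variable",
    "annualConsumptionKWh": 3100,
    "unitRatePencePerKWh": 16.4,
    "standingChargePencePerDay": 23.0,
}

print(json.dumps(home_energy_record, indent=2))
```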

My second recommendation would be to not put the supply side in charge of defining the data schema behind the portable data, i.e. the list of fields. Doing so, as before, will lead to minimal data sharing; whereas starting and finishing the specification process on the individual side will maximise what is shared. There is more than enough sector-specific knowledge now on the individual side (e.g. in the MyData community) for individuals to take the lead with a starter list of data fields. The role of the organisation side might then be to add the specific key fields that are important to their role, and to identify any major risks in the proposals from individuals.

Third, as hinted at in the post title, both subject access and data portability will work an awful lot better when seen within a co-managed data architecture. That is to say, when both parties have tools and processes that enable them to co-manage data, ‘export then import’ goes away as a concept and is replaced by master data management and access control. In the diagram below, BobCo (the home energy provider) is necessarily always the master of the data field (algorithm) ‘forecast annualised electricity use’, as this is derived from their operational systems (based on Alice’s electricity use rate). But Alice always has access to the most recent version of this, and has it in a form that she can easily share with other parties whilst retaining control. That’s a VERY different model to the one enacted today (e.g. in open banking), but one that has wins for ALL stakeholders, including regulators and market competition authorities.

Individual-driven data portability and subject access
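A minimal sketch of that co-managed field, under my own assumptions about names and structure (this is not the JLINC implementation): BobCo is the only party that can write the value, while Alice always has current read access and can grant controlled onward access herself.

```python
from datetime import datetime, timezone

class CoManagedField:
    """One co-managed data field: a single master writer, controlled readers."""

    def __init__(self, name: str, master: str):
        self.name = name
        self.master = master        # only the master may write
        self.value = None
        self.updated_at = None
        self.readers = {master}

    def write(self, party: str, value):
        if party != self.master:
            raise PermissionError(f"{party} is not the master of '{self.name}'")
        self.value = value
        self.updated_at = datetime.now(timezone.utc)

    def grant_read(self, granting_party: str, new_reader: str):
        if granting_party not in self.readers:
            raise PermissionError(f"{granting_party} has no access to grant")
        self.readers.add(new_reader)

    def read(self, party: str):
        if party not in self.readers:
            raise PermissionError(f"{party} has no read access to '{self.name}'")
        return self.value

field = CoManagedField("forecast annualised electricity use (kWh)", master="BobCo")
field.grant_read("BobCo", "Alice")               # Alice always sees the latest value
field.write("BobCo", 3250)
field.grant_read("Alice", "PriceComparisonCo")   # Alice shares onward, under her control
print(field.read("PriceComparisonCo"))           # 3250
```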

The fourth recommendation, which is almost a pre-requisite given how different the above is from the current model, is to engage in rapid and visible prototyping – across many different sectors and types of data. Luckily, sandboxes are all the rage these days, and we have a JLINC sandbox that allows rapid prototyping of all of the above, very quickly and cost-effectively. So if anyone wishes to quickly prototype and evaluate the above model, just get in touch. Obvious places to start might be standard data schemas for ‘my job’, ‘my car’, ‘my tax’, ‘my investments’, ‘my transaction history’ on a retail web site, ‘my subscription’, ‘my membership’, ‘my child’s education history’ and no doubt some more easy ones, before one would tackle scarier but equally do-able data-sets such as ‘my health record’. To give you a feel for what I mean by rapid prototyping: all of the above could be up and running as a working, scalable prototype within a month, versus within a decade for the current model.

Reminder to Self for the Next Decade – It’s all about the data…!!!

Just a reminder to myself, and to anyone else who cares to take the same advice. For the next decade, and probably beyond, the focus should be on detailed work around data: definitions, documentation, analysis, process flows. So: precisely what data elements, stored precisely where, moving from where to where, for precisely what purpose; who are the data rights holders, who are the custodians. Nothing generic – all down-in-the-drains type of work.

Other things are important – leadership, culture, business model choices, process optimisation, return on investment (including time investment) – but none outweighs the necessary focus on the detail of the data.

The outcome of that level of focus should hopefully be a lot more of the right data getting to the right places, and a lot less of the wrong data going to the wrong places. Right and wrong in this case, at least where personal data are involved, are as seen from the perspective of the individual.

Co-managed data technologies and practicalities (part 1)

I’ve had lots of good feedback and iteration on my first post on co-managed data; thanks for that. I’ll mix that into this post, start to drill down into some more detail of what I have in mind when I refer to co-managed data, and also touch on how that might emerge.

The main questions that came back were around the nature of the tools an individual might use to manage ‘my data’ and ‘our data’, both precursors to genuinely co-managing data. I’ve attempted to set out the current options and models in the graphic below. (I suspect I’ll need to write up a separate post explaining the distinctions.)

My Data Management Typologies

The current reality is that the vast majority of individuals can only run, and will therefore run, a hybrid of these approaches. That’s because the variants towards the left-hand side cannot cope with the requirements of some modern service providers/ connected things, and the variants towards the right cannot cope with the requirements of some relationships and ‘things’ that remain analogue/ physical.

To look at the proportion of ‘My Data’ being managed across these options, we need some basic assumptions about volumes. So, for the sake of time and calculation, let me assume that every adult individual in the modern world has:

– 100 significant supply relationships
– 200 ‘things’ to be managed
– 20 data attributes (fields, not individual data points) from each relationship, and 10 from each ‘thing’

That makes a total of 4,000 data assets under management. The real number is much higher, and largely out of the individual’s control at present. Come to think of it, that explains why research regularly comes back with the comment that individuals feel they have ‘lost control of their personal information’ – they are right, for now…
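The back-of-envelope total quoted above, spelled out (the input numbers are the working assumptions stated, nothing more):

```python
relationships = 100
things = 200
attrs_per_relationship = 20
attrs_per_thing = 10

data_assets = (relationships * attrs_per_relationship) + (things * attrs_per_thing)
print(data_assets)   # 4000
```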

In my case, simplistically, I’d estimate that my data splits across the types as follows (stripping out duplicates, i.e. copies that are not my master record, and back-ups, of which there are inherently many in the hybrid model): Type 1 – 15% (including some critical data), Type 2 – 30%, Type 3 – 5%, Type 4 – 35%, Type 5 – 5%, Type 6 – 9%, Type 7 – 0.9%, Type 8 – 0.1%.

My numbers will, I guess, be disproportionately high towards Type 6, as I use more of these than most; and very few people will have anything at all in Type 8 at present, as it is brand new.

In any case, this post is already pretty dense, so I’ll leave it there for now and pick up the next level of detail in the next post.

How CustomerTech Can Improve Product/ Service Reviews

This post writes up my take on a discussion on the VRM List, which initially asked the question ‘can reviews be made portable so that they can appear in more than one place?’. The short answer is that we believe yes, there is at least one way to do that, using JLINC, and quite possibly other protocols. And in doing so we believe we can greatly improve the provenance of the review record, so that it becomes more useful and more reliable for all parties.

So how might that work, and what might that then mean? The diagram below illustrates a basic unstructured review of a hotel booking being shared consciously, and in a controlled, auditable manner, with three separate travel-related services.

The critical components in this model are:

  • Self-sovereign identity – in place for all parties, which enables downstream data portability, no lock-in, reputation management and ultimately verified claims where they are useful
  • A Server – to be or connect to the individual’s data store, manage the KYS process (know your supplier), represent the individual in the agreement negotiation, and log all signing and data share activity
  • Information Sharing Agreement – to set out the permanent terms associated with the specific data sharing instance. In this case, and very interestingly, we believe that we may be able to use an existing Creative Commons licence
  • B Server – to present requests for reviews to individuals acknowledging the customer-driven terms to be used, and flagging any downstream use (and ultimately having downstream data controllers sign the same agreement)
  • Audit log – the permanent record of what was shared with whom, for what purpose, under what terms; stored to keep all parties honest (sketched below)
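A minimal sketch of that last component, under my own assumptions about field names, identifiers and the licence reference (none of this is the actual JLINC record format): one permanent entry per share of the review.

```python
from datetime import datetime, timezone

def audit_entry(review_id: str, recipient: str, agreement: str, purpose: str) -> dict:
    """One permanent record of what was shared with whom, under which agreement."""
    return {
        "review_id": review_id,
        "shared_with": recipient,
        "agreement": agreement,          # e.g. a Creative Commons licence URL
        "purpose": purpose,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

audit_log = [
    audit_entry("review-001", service,
                "https://creativecommons.org/licenses/by/4.0/",
                "display alongside hotel listing")
    for service in ["TravelServiceA", "TravelServiceB", "TravelServiceC"]
]
for entry in audit_log:
    print(entry["shared_with"], entry["timestamp"])
```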

Personally I think this is the way forward for reviews, and it offers people like Consumer Reports and Which? the opportunity to re-invent their business models.

Anyone want to give it a try?

PS The same principles and methods apply to pretty much any other ‘volunteered personal information’ scenario. I think that over time this approach will win out over capturing ‘behavioural surplus’.

It’s Time to Start Talking About Co-managed Data

As we reach the end of another decade, I’ve been reflecting on the changes over the last ten years in my areas of interest – customer management as my day job, and personal data management for individuals as my hobby horse. For the former I’d say it’s been a very poor decade indeed; for the latter, a positive but frustrating one.

In the world of customer management, the dominant theme in my view has been the disintermediation of traditional customer-supplier relationships by GAFA and Adtech. That has meant a lot less transparency around how personal data is being managed. GDPR has, so far, offered more promise than reality; to date it feels like lots of positive possibilities, none of which will really be addressed until a few giant fines get handed out to Google, Facebook and Adtech (which will take years).

On the issue of personal data empowerment for individuals, there has been good progress, but there remains a long way to go to get to scale. GDPR has put the brakes on some of the bad stuff, but positive empowerment of people with their own data, for their own purposes, is not really happening as yet. Even base-level ‘rights’ such as data portability are very poorly supported in practice, and even if that improved, there are no large-scale services in a position to use ported data.

So, all in all, some twenty-five years into the commercial Internet, it still feels and acts like the Wild West. A small number of big Ranchers build a fence around ‘their property’, define the rules that they say apply to anyone who comes onto their patch, and then go about their business with little regard for the outside world or their impact on it. In the Internet version, the Wild West is called Surveillance Capitalism: the systematic process through which large organisations ring-fence a piece of digital territory (data about people), declare it to be theirs, and proceed to turn it into products and services for them to sell. That’s only good for the surveillance capitalists themselves; it just disenfranchises the individuals whose data is being grabbed.

From the individual perspective, GDPR tells me that I’m now ‘in control of my data’. Well, I must have missed that bit. Or does someone really expect me to go to my 300 or so direct suppliers, read their policies, see my data, change or delete it where I need to, and pretend to not notice that I am being followed around the web by them and hundreds of ‘their partners’ who I have never heard of?

In parallel, let’s look at what would actually be useful to me going forward. I already manage a lot of my own data as my hobby, and with a view to building tools that would make this task easier to undertake, and more useful in terms of what can be done with my data. From that work, I can see, for example, that I have:

  • around 7.5k financial transaction records since Jan 2013 (so about 100 per month)
  • just over 300 suppliers that I know and recognise will have data about me (so about 1k per year, just around 100 per month)
  • 250 product/ service records where I have a digital record (I have more, but have not digitised those as yet)
  • thousands of data points from Internet of Things devices, fitness/ health trackers and location check-ins

Here’s a screenshot of how I log my supply relationships; in this case I’m doing that in the JLINC Sandbox.

And this one, using the same app, shows how I log ‘my stuff’ (i.e. assets, products or services that I have or use).

And to complete the set, here is a view of some of the things I am, or will be, in the market for over the next few months.

I would contend that whilst the above is useful to me in many ways, it would be an awful lot more useful if ‘my view’ were connected to my suppliers’ views, so that we co-managed what would then be ‘Our Data’ (for example, real-time sync of my current bank balance, investment account, fuel tank status and many more). To be clear, ‘Our Data’ = co-managed data; i.e. data where two or more parties are each able to consider themselves managers of the relevant data (setting aside the many technical points of discussion underneath this co-management principle). I’ve written more about the my data, your data, our data, their data, everybody’s data distinctions over here.

Once the individual has their own data service with the above types of data, and many others over time, the co-managed model is by far the optimum. I’ll write up those benefits, and the practicalities of how we obtain them, in more detail in a separate post shortly.