Sales Process… meet Buying Process; and why context trumps segmentation

August 7th, 2009 Comments off

I’ve been doing some thinking in advance of getting stuck into the development of open standards for User Driven and Volunteered Personal Information. That work is being done here if you are interested in joining in. I’ve been thinking mainly about how best to explain what happens to buying processes and sales processes when volunteered personal information is added to the mix (underpinned by the personal data store/ My Data as set out here).

Here’s my stab at that explanation. I need first to set out a view of how things currently work – that’s in the first diagram below, with individuals/ high level buying processes on the left and organisations/ high level selling processes on the right. In short, at present, buyers and sellers largely do their own thing/ practise non-automated selective disclosure prior to engaging in an actual customer/ supplier relationship. That is structurally the best option for a buyer, certainly in terms of reducing complexity and protecting negotiating positions for more expensive/ complex purchases – but it does lead to a lot of guesswork; the buyer typically evaluates multiple options before deciding on one, and that’s part of the guesswork referred to in the diagram below. This ‘one step removed’ approach is not the best option for the seller – which is why they try a wide range of tricks to have the potential customer engage with them. That would appear a sensible practice, but in reality it tends to fill up the ‘sales funnel’ with many potential customers who have no reason to be there – which is why direct marketing conversions from prospect campaigns are often well below 1%. That’s the other part of the guesswork in the diagram. At the relevant point in the process, the customer chooses one of the supply options and decides to commence the customer-supplier relationship; the other suppliers fall by the wayside/ wonder what’s happened. But those who lost out, because they don’t have the information to do otherwise, keep on turning the marketing handle – lots of waste comes from that area.

Moving through the process, commencing the supply relationship in the current mode means interacting on a supplier-run platform and signing up to supplier-generated terms and conditions (or going elsewhere to another supplier silo/ getting the same result). What that then does is put the organisation unilaterally in charge of processes and process improvement around relationship management. As a historical note, in my view this is where CRM ‘went wrong’ in the widest sense – at least in part because many deployments occurred during the economic downturn in the early 2000s. It moved from having been brought in as a platform for driving improvements in the customer experience to being run as a platform for cost cutting and risk management; e.g. the drive to automated processes such as web-based customer self-service and offshored contact centres. Sometimes this automation worked for customers (e.g. online banking); in many cases all it did was move the waste/ inefficiency onto the customer. Of course what then happened was that customers took their business elsewhere, where they had that choice/ a better option, or stayed but with reduced levels of satisfaction – crazy, in that customer retention and satisfaction improvements were almost certainly key drivers for the original CRM business case.

[Diagram: go to market space]

So, the current process does not work that well; the sales process cannot be optimised much further within the current tool-set. But options for improving upon this are now emerging – and not through pedalling faster within the organisation/ the selling process; the improvement comes from building capability on the buyer side/ enhancing the buying process. (Note the clear parallel with how selling professionalised in the B2B world when professional procurement and its processes emerged, and also that in the B2B world deals are often concluded and managed on customer-side systems.)

The first thing to note in the updated diagram below is what the individual brings to the party (via their personal data store/ user driven and volunteered personal information). They bring the context for all subsequent components of the buying process (and high grade fuel for the selling process, if it can be trained to listen rather than shout). By ‘context’, I mean the combination of a wide range of attributes that describe the individual and their specific buying situation. This would typically include their needs, their current understanding of how their needs relate to products/ services, their location, their existing supply relationships, their preferences (brand, colour), their role in the decision-making process, their timescales, how much they wish to/ are able to spend, and when they wish to buy. In other words, the individual’s context bundle is what much of the early part of the sales process is actually trying to figure out – but can’t get access to, because the individual has no current incentive to release it in full. The best an organisation can do at present is strategic segmentation of their market (differentiating products or services based on aggregated customer requirements), and tactical segmentation of their messaging content, communications channels, sales outlets or pricing. Then it’s over into guesswork mode – can we put our messages out in the right places to attract our potential customers and suck them into our sales process…
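
To make the ‘context bundle’ idea a little more concrete, here is a minimal sketch (in Python, purely illustrative – the field names are mine, not part of any proposed standard) of the sort of structure an individual might hold in their personal data store and selectively disclose:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BuyingContext:
    """Illustrative context bundle for one buying situation (hypothetical fields)."""
    need: str                                   # e.g. "stroller for twins"
    understanding: str                          # how the need maps to products/services, as the buyer sees it
    location: Optional[str] = None              # coarse location, shared only if relevant
    existing_suppliers: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)   # e.g. {"brand": "any", "colour": "not pink"}
    decision_role: str = "sole decision maker"
    timescale: Optional[str] = None             # e.g. "this month"
    budget: Optional[float] = None
    ready_to_buy: bool = False

# The individual decides which attributes to disclose, and at what level of
# detail, at each stage of the buying process.
example = BuyingContext(
    need="stroller for twins",
    understanding="must fold into a small car boot",
    timescale="this month",
    budget=400.0,
)
```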

The other additions to this second diagram are the ten numbered boxes, reflecting that the improvements we make to the buying process through user driven and volunteered personal information will impact differently at different points of the buying/ selling process. These ten areas are each substantive enough to require a post of their own, so for now I’ll list them at a high level below the diagram and come back to them in more detail as the standards work unfolds.

[Diagram: context equals segmentation build]

User Driven and Volunteered Personal Information Enabled Improvements

  1. Search/ Target (sometimes referred to as the Personal RFI, i.e. Request for Information) – through the individual bringing much richer context data to the table, suppliers prepared to engage with these new buying support tools will find that their targeting becomes much more precise, better enabling them to find potential customers whose needs closely match the unique selling propositions in the organisation’s product/ service offering. In turn, individuals will find that the options made available to them have been pre-qualified to fit their context (to whatever level of detail they have shared). Note – at this stage my assumption is that individuals will be engaging anonymously/ pseudonymously, as there should be no need to share personal data in this part of the process (a rough sketch of what such a pseudonymous request might look like appears after this list). It is likely that new inter/ infomediaries will emerge in this space, acting as the individual’s buying agent (4th party/ user driven services).
  2. Find (engage)/ Enquiry Management (sometimes referred to as the Personal RFP, i.e. Request for Proposals) – through having brought richer data to the table in the preceding phase, the individual will now be talking to pre-qualified suppliers (and vice versa), with the qualifying data from both parties available for use in the interaction. Typically this interaction will be about having a more refined/ detailed discussion along the need/ requirement/ solution axis – potentially involving either or both parties asking for more detail, including possibly verification of data asserted in the search/ target phase. It is likely that new inter/ infomediaries will also emerge in this space, quite possibly spanning the Search and Find requirements for individuals, and approached from the perspective of enabling the individual to buy solutions to their needs rather than components which they subsequently stitch together themselves.
  3. Negotiate – In this stage the individual is talking to one preferred solution option and getting down to the actual proposed ‘deal’ and the terms and conditions around that – provided by either party. Improvements in this area are likely around improved transparency of terms and conditions, initiated by the individual being much clearer about their requirements, and having access to comparison tools earlier in the process. ‘Reputation’ management tools will also come into operation as the individual shares what they find out about suppliers.
  4. Transact – I would expect payment intermediaries/ financial services providers to find creative ways to engage with/ be driven by VPI enabled services; there is certainly much potential for reduction in credit card fraud and card related identity theft from using the much higher levels of identity assurance that will become the norm in a VPI enabled data-set.
  5. Welcome – This ‘relationship set up’ phase is typically about both parties getting to know each other, i.e. getting the products/ services bought set up and configured, and ensuring any ongoing account management/ billing is up and running smoothly. In the VPI enabled world this phase won’t change too much in the short term, as it will still run mainly on supplier systems – but in the mid and long term I’d expect it to shift to a genuine user-centric architecture which will see the individual ‘welcome’ the new supplying organisation to their personal supply network/ federation.
  6. Relationship Servicing – This is what would typically be called customer service, i.e. fixing basic operational/ service delivery problems and dealing with ad hoc issues that come up, such as change of address/ change of contact details/ change of payment details. As VPI enabled tools increasingly emerge, I’d expect this whole ‘change of’ workload to migrate to the ‘my suppliers follow me’ approach, rather than the individual having to run around updating silos as per the current model.
  7. Relationship Development – This typically includes the ‘cross-sell/ up-sell’ much beloved in the CRM business case. This stage will change in the VPI enabled world, much for the better. Customer service will be provided within the context of the individual’s existing solution set, rather than the little snapshot of it that the supplier currently sees/ is interested in. In turn that will mean that cross-sell and up-sell will not only be much more informed, but also much more welcome from the individual’s perspective – because it is now laser sharp, and running within a more equitable customer/ supplier relationship (partnership).
  8. Manage Problems – This stage is only reached if a significant problem emerges in the customer/ supplier relationship; typically this involves escalation beyond tier 1 customer service (and an increasingly frustrated/ angry/ upset customer). I don’t expect the VPI approach to have a high impact in this area, although improvements further up the process might have a knock-on effect, rendering this stage less painful if/ when it occurs.
  9. Manage Exits – Exits can and will happen, either permanently or for a period of time. They may be caused by significant problems that have emerged, or by a change in the customer need or in their circumstances (their context has changed). Less frequently, a supplier will wish to leave a market or terminate a product/ service line and thus exit the relationships affected. In the VPI world, I’d expect there to be more information around impending exits and the reasons for them – some of which will enable creative supplier responses. Along with relationship development, I’d expect improved customer retention to be one of the major wins for the supply side in the VPI world – but the plumbing and mechanics for that have still to be worked out.
  10. Re-engagement – This stage might be known as ‘win-back’ in CRM speak, and involves the lost customer being targeted with appropriate offers to return. For the individual this return to the fold might be the result of a time-driven change of context, or of finding that the ‘grass was not greener on the other side’ – as is often the case in utility service swaps away from an incumbent that has retained quasi-monopoly advantages. In any case, the point being made here is that in the volunteered personal information scenario, the individual would be in a position to retain and share the knowledge of the prior relationship – which many current CRM architectures fail to deliver on.
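
As flagged in point 1 above, here is a rough sketch of what a pseudonymous Personal RFI might look like as a data packet. It assumes a simple JSON-style exchange and made-up field names; it is not an output of the Kantara work, just an illustration of the ‘share context, withhold identity’ principle:

```python
import json
import uuid

def build_personal_rfi(context: dict, disclosed_fields: list) -> str:
    """Package the disclosed slice of a context bundle as a pseudonymous enquiry.

    Only fields the individual has chosen to disclose are included; the random
    handle lets suppliers respond without learning who the buyer is.
    """
    rfi = {
        "rfi_id": str(uuid.uuid4()),        # pseudonymous handle for this enquiry only
        "stage": "search/target",
        "context": {k: v for k, v in context.items() if k in disclosed_fields},
        "respond_via": "buyer's fourth-party agent",   # responses routed through the individual's agent
    }
    return json.dumps(rfi, indent=2)

print(build_personal_rfi(
    context={
        "need": "stroller for twins",
        "budget": 400,
        "timescale": "this month",
        "home_address": "withheld at this stage",   # personal data stays in the personal data store
    },
    disclosed_fields=["need", "budget", "timescale"],
))
```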

So there we have it. Time to get back to working on that VPI plumbing!!!

Categories: #Kantara, CRM, Data, VPI Tags:

More on the Privacy Fight-back

July 22nd, 2009 Comments off

Now this is nice….self-destructing digital data controlled by the data subject…..

Categories: Data, Privacy, Project VRM Tags:

Who Said Privacy Was Dead…..?

July 17th, 2009 1 comment

BT decides against deploying Phorm behavioural tracking.

The mobile phone directory Connectivity/ 118800 shut down by pressure from individuals who did not want their details scraped and published.

Facebook found to be in breach of Canadian Privacy law.

So, what have Phorm, Connectivity and Facebook got in common? Referring back to the Personal Data Eco-system framework – you’ll see that all three have reached out and surreptitiously tried to grab data from one of the other categories and move it into ‘Your Data’ (that owned by the organisation in question) in order to make money from it:

– Phorm tries to grab the web site use data from where it currently resides (un-structured, difficult to access ‘My Data’) and move it into their own domain (Your Data – both Phorm and BT variants in this case)

– Connectivity scrapes data from a range of ‘Their Data’ direct marketing files and turns that into another ‘Their Data’ data-set/ domain

– Facebook fails to put adequate processes around ‘Our Data’ (keeps it for an unlimited period) and thus attracts the attention of a regulator.

Exposing these various data grabs is now much more common – because there are now enough people watching and willing to act on it.

Privacy is on the way back…..albeit almost from the grave….

Categories: CRM, Data, Privacy, Project VRM, VPI Tags:

That’s Good, Now We Can Get Started With CRM….Meet VRM

July 13th, 2009 Comments off

This post by Paul Greenberg is the first I’ve read on ‘Social CRM’, and it looks like I came across it at the right time – Paul has drawn a line under what has clearly been a long debate, and set out a detailed definition and description of Social CRM that he will run with. Beyond being an excellent summary of what Social CRM is/ is not, the timing works for me, because I think it’s time we in the VRM dialogue start to engage more on the mechanics of VRM and how it meets CRM, rather than the theory of what a VRM world will look like. I say that because it now seems to me that, in the UK at least, VRM is a mainstream business/ political discussion – and regarded as a ‘when’ rather than an ‘if’.

First, to Paul’s definition – so that we are clear what VRM is engaging with. His definition is below.

“CRM is a philosophy & a business strategy, supported by a technology platform, business rules, workflow, processes & social characteristics, designed to engage the customer in a collaborative conversation in order to provide mutually beneficial value in a trusted & transparent business environment. It’s the company’s response to the customer’s ownership of the conversation.”

I’m fine with that definition, but will also set out my own context of where ‘CRM’ (social or otherwise) sits in the wider business eco-system – because that will shape my views as to how VRM will engage. The model below was first developed in 1997 by QCi (since acquired by Ogilvy) and was designed to help clients engage with Customer Management/ Customer Relationship Management – which was then an emerging hot topic. The model has now been used over 900 times in organisations worldwide to assess their customer management capabilities, and is underpinned by 240 best practices – which have been updated 5 times between ’97 and now, reflecting that best practice is necessarily an evolving beast. So, the model is a good start point – not perfect, and there are others out there, but this is the one I’m working with.

Critically – in this model – Customer Relationship Management is central to Customer Management, but the latter is a wider set of capabilities. Practically speaking, one can ‘buy/ deploy/ build’ CRM, but that then has to be seen in the context of the wider business system that is customer management. CRM is optional, Customer Management is not (unless you don’t have any customers…). A perfectly valid assertion from a Customer Management standpoint, in some sectors (e.g. FMCG), could be ‘we don’t want a relationship with our customers, and they don’t want one with us’ – we make stuff, they buy it.

I won’t dwell further on this model, other than to say I’ll keep referring back to it in discussions around how VRM applications, tools and processes engage with organisations.

[Diagram: CM & CRM]

So – we now understand what CRM is, where it fits in an organisation, and that in its most recent evolution ‘the customer owns the conversation’. Given that, what will VRM do over and above that? My contention is that via VRM tools/ applications/ services:

– the customer will own (and share) some, not all, of the data; but that provided by the customer will transform huge chunks of CRM and Customer Management over time.

– big chunks of traditional data mining will go away, replaced by more value-adding data services.

– much improved data privacy will be a spin-off benefit

– the customer will be the initiator of the majority of CRM processes, and organisations will become much better listeners than they are broadcasters.

– the net effect will be the elimination of much guesswork in the pre-transaction component of the customer journey, and of much waste from the post-transaction component. This guesswork and waste elimination will lead to overall cost reduction, some of which will be shared with the customer who has ‘co-eliminated*’ it.

In practical terms, VRM will bring new individual-driven data to the market, and will propose new processes that organisations who wish to become VRM-enabled should engage with. Contractual terms for accessing these data/ processes will also be part of the mix. I’ll build on what I mean by that over the next few weeks.

* I’m not sure there is such a word, but it seems a nice counter-point to co-creation.

Categories: CRM, Data, Privacy, Project VRM, VPI Tags:

The Personal Data Eco-System

June 20th, 2009 2 comments

This post is a short(ish) summary of a working session led by Drummond Reed and me at the recent West Coast VRM Workshop, and also an introduction to the Kantara workgroup in which we are going to move this debate forward. It is also part of the thinking that will shortly emerge in a Mydex white paper.

At the VRM workshop, we discussed the need for the concept of the Personal Data Store, what it would do in practice, and what that will ultimately enable.

Why we need such things – because individuals have a complex need to manage personal information over a lifetime, and the tools they have at their disposal today to do so are inadequate. Existing tools include the brain (which is good, but does not have enough RAM, onboard storage, or an ethernet socket… thankfully), stand-alone data stores (paper, spreadsheets, phones – good, but not connected in secure ways that enable user-driven data aggregation and sharing), and supplier-based data stores (which can be tactically good, but are run under supplier-provided terms and conditions). NB Our current perception of ‘personal data stores’ is shaped by the good ones that are out there (e.g. my online bank, my online health vault); what we need is all of that functionality, and more – but working FOR ME.

What they will do/ enable – the term Personal Data Store is not an ideal term to describe a complex set of functions, but it is what it is until we get a better one (the analogy I’d use in more ways than one is the term ‘data warehouse’ – again a simplistic term that masks a lot of complex activity). A Personal Data Store can take two basic forms:

Operational Data Stores – that get things done, and only need store sufficient breadth and depth of data to fulfill the operation they are built for (e.g. pay a credit card bill, book a doctor’s appointment, order my groceries).

Analytical Data Stores – that underpin and enable decision making, and which typically need a more tightly defined, but much deeper data-set that includes data from a range of aspects of life rather than just that from one specific operation (e.g. plan a home move, buy a car, organise an overseas trip).

A sub-set of the individual’s overall data requirement will lie in both of the above, this being the data that then integrates decision-making and doing.

In both cases, the functionality required is to source, gather, manage, enhance and selectively disclose data (to presentation layers, interfaces or applications).
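
As a rough illustration only, that ‘source, gather, manage, enhance and selectively disclose’ functionality might be sketched like this (the method and attribute names are mine; defining the real interfaces is exactly what the standards work is for):

```python
class PersonalDataStore:
    """Minimal sketch of the core operations; method names are illustrative only."""

    def __init__(self):
        self._data = {}          # attribute name -> value
        self._provenance = {}    # attribute name -> where the value came from / how it is verified

    def gather(self, attribute: str, value, source: str) -> None:
        """Source/gather: bring data in from the individual, their devices or their suppliers."""
        self._data[attribute] = value
        self._provenance[attribute] = source

    def enhance(self, attribute: str, verification: str) -> None:
        """Enhance: attach verification or other added value to an existing attribute."""
        if attribute in self._provenance:
            self._provenance[attribute] += f"; {verification}"

    def disclose(self, requested: list, permitted: set) -> dict:
        """Selectively disclose: release only the attributes the individual has permitted."""
        return {a: self._data[a] for a in requested if a in permitted and a in self._data}

pds = PersonalDataStore()
pds.gather("home_address", "1 Example Street", source="self-asserted")
pds.enhance("home_address", verification="matched against electoral roll")
print(pds.disclose(requested=["home_address", "date_of_birth"], permitted={"home_address"}))
```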

We also discussed ‘who has what data on you’ and introduced the following diagrams to explain current state and target state (post deployment of Volunteered Personal Information (VPI) tech and standards).

The key terms that require explanation are:

My Data – is the data that is undeniably within, and only within, the domain of an individual. Its defining characteristic is that it has demonstrably not been made available to any other party under a signed, binding agreement. This space has been increasingly encroached upon by technology and organisations in recent history (e.g. behavioural tracking tools like Phorm), and this encroachment will continue. Indeed, a general comment can be made that ‘My Data’ equates to privacy in the context of personal data; so the rise of the surveillance society and state is a direct assault on ‘My Data’. Management of ‘My Data’ can be run by the individual themselves, or outsourced to a ‘fourth party service’.

Your Data – is the data that is undeniably within the domain of an organisation, whether private, public or third sector. Proxy views of this data may exist elsewhere, but are only that. This data would include, for example, the organisation’s own master records of their product/ service range, their pricing, their costs, their sales outlets and channels. Customer-facing views of much of Your Data are made available for reproduction in the ‘Our Data’ intersect.

Our Data – is the data that is jointly accessible to both buyer and seller/ service provider, and also potentially to any other parties to an interaction, transaction or relationship. It is the data that is generated through engaging in interactions and transactions in and around a customer/ supplier relationship. Despite being ‘our’ data, it is probably technically owned, or at least provided under terms of service designed by the seller/ service provider; in practical terms this also means that the seller/ service provider dictates the formats in which this data exists/ is made available.

Their Data – is the data built/ owned/ sold by third party data aggregators, e.g. credit bureaux and marketing data providers in all their forms. Its defining characteristic is that it is only available/ accessible by buying/ licensing it from the owner.

Everybody’s Data – is the public domain data, typically developed/ run by large, public sector(ish) entities including local government (electoral roll), Post Offices (postal address files), mapping bureau (GIS). Typically this data is accessible under contract, but the barriers to accessing these contracts are set low – although often not low enough that an individual can engage with them easily.

The Basic Identifier Set/ Bit in the Middle – this is the core personal identity data which, like it or not, exists largely in the public domain – most typically (but not exclusively) as a result of electoral rolls being made available publicly, and specifically to service providers who wish to build things from them. It is this characteristic that enables the whole personal data eco-system, and its impact on data privacy, to exist – with the individual as the unknowing ‘point of integration’ for data about them.

[Diagram: Propeller – Current State]

The ovals in the Venn diagram represent the static state, i.e. where data lives at a point in time. The flow arrows show where data flows to and from in this eco-system; I use red to signify data flowing under terms and conditions NOT controlled by the individual data subject.

Flow 1 (My Data to Your Data, and My Data to Our Data) – Individuals provide data to organisations under terms and conditions set by the organisation, with the individual offered a ‘take it or leave it’ set of options. Some granularity is often offered around choices for onward data sharing and use, i.e. the ‘tick boxes’ we all know – one of the main bits of legacy CRM that VRM will fix.

Flow 2 (Your Data to Your Data, including Our Data) – Organisations share data with other organisations, usually through a back-channel, i.e. the details of the sharing relationship are typically not known to the data subject.

Flow 3 (Your Data, including Our Data to Their Data) – Organisations share data with a specific type of other organisation, data aggregators, under terms and conditions that enable onward sale. Typically the sharer is paid for this data/ has a stake in the re-sale value.

Flow 4 (Everybody’s Data to Their Data) – Data Aggregators use public domain data sources to initiate and extend their commercial data assets.
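
Purely as an illustration of what the current-state diagram encodes – the domains and flows are taken from the text above, the representation is mine – the common thread is that none of today’s flows run under terms set by the individual:

```python
from enum import Enum

class Domain(Enum):
    MY_DATA = "My Data"
    YOUR_DATA = "Your Data"
    OUR_DATA = "Our Data"
    THEIR_DATA = "Their Data"
    EVERYBODYS_DATA = "Everybody's Data"

# Current-state flows 1-4: in every case the terms are set by someone other
# than the individual data subject (the red arrows in the diagram).
CURRENT_FLOWS = [
    {"id": 1, "src": Domain.MY_DATA,         "dst": Domain.YOUR_DATA,  "terms_set_by": "organisation"},  # also My Data to Our Data
    {"id": 2, "src": Domain.YOUR_DATA,       "dst": Domain.YOUR_DATA,  "terms_set_by": "organisations (back-channel)"},
    {"id": 3, "src": Domain.YOUR_DATA,       "dst": Domain.THEIR_DATA, "terms_set_by": "organisation and aggregator"},
    {"id": 4, "src": Domain.EVERYBODYS_DATA, "dst": Domain.THEIR_DATA, "terms_set_by": "aggregator"},
]

print(sum(1 for f in CURRENT_FLOWS if f["terms_set_by"] == "individual"))   # 0 today
```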

The target state is shown below, a different scenario altogether – and one which I believe will unfold incrementally over the next ten years or so… data attribute by data attribute, customer/ supplier management process by customer/ supplier management process, industry sector by industry sector. In this scenario, the individual and ‘My Data’ become the dominant source of many valuable data types (e.g. buying intentions, verified changes of circumstance), and in doing so eliminate vast amounts of guesswork and waste from existing customer/ citizen management processes.

The key new capabilities required to enable this to happen are those being worked on in the User Driven and Volunteered Personal Information work groups at Kantara (one tech group, one policy/ commerce one), and elsewhere within and around Project VRM. The new capabilities will consist of:

– personal data store(s), both operational and analytical

– data and technical standards around the sharing of volunteered personal information

– volunteered personal information sharing agreements (i.e. contracts driven by the individual perspective, creative commons-like icons for VPI sharing scenarios)

– audit and compliance mechanics

Around those capabilities, we will need to build a compelling story that clearly articulates, in a shared lexicon (thanks to Craig Burton for reminding us of the importance of this – watch this space), the benefits of the approach – for both individuals and organisations.

The target state that will emerge once these capabilities begin to impact will include 4 additional individual-driven information flows over and above the current ones. The defining characteristic of these new flows is that they can only be initiated by the data subject themselves, and most will only occur when the receiving entity has ‘signed’ the terms and conditions asserted by the individual/ data subject. The new flows are:

Flow 5 (My Data to Your Data (inc. Our Data)) – Individuals will share more high value, volunteered information with their existing and potential suppliers, eliminating guesswork and waste from many customer management processes. In turn, organisations will share their own expertise/ data with individuals, adding value to the relationship.

Flow 6 (Everybody’s Data to My Data) – With their new, more sophisticated personal information management tools, individuals will be able to take direct feeds from public domain sources for use on their own mashups and applications (e.g. crime maps covering where I live/ travel)

Flow 7 (My Data to (someone else’s) My Data) – An enhanced version of ‘peer to peer’ information sharing.

Flow 8 (My Data to Their Data) – The (currently) unlikely concept of the individual making their volunteered information available to/ through the data aggregators. Indeed we are already starting to see the plumbing for this new flow being put in place with the launch of the Acxiom Identity Card.
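
To make the defining characteristic of flows 5–8 concrete – data moves only when the data subject initiates it, and only to receivers that have accepted the individual’s terms – here is a small sketch. The structures are illustrative assumptions, not any Kantara specification:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SharingAgreement:
    """Individual-asserted terms a receiving organisation must sign before data flows."""
    permitted_uses: set
    retention_days: int
    onward_sharing_allowed: bool = False
    signed_by: set = field(default_factory=set)   # organisations that have accepted these terms

def initiate_flow(subject_initiated: bool, receiver: str,
                  agreement: SharingAgreement, payload: dict) -> Optional[dict]:
    """Release volunteered data only for subject-initiated flows to signed-up receivers."""
    if not subject_initiated:
        return None            # flows 5-8 cannot be pulled by an organisation
    if receiver not in agreement.signed_by:
        return None            # receiver has not accepted the individual's terms
    return payload             # a fuller design would also record this release for audit/compliance

terms = SharingAgreement(permitted_uses={"fulfil my enquiry"}, retention_days=30,
                         signed_by={"SupplierCo"})
print(initiate_flow(True, "SupplierCo",    terms, {"moving_home_on": "2009-09-01"}))
print(initiate_flow(True, "DataBrokerInc", terms, {"moving_home_on": "2009-09-01"}))   # None
```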

[Diagram: Propeller – Target State]

The implications of the above are enormous, my projection being that over time some 80% of customer management processes will be driven from ‘My Data’. I’m pretty confident about that, a) because we are already seeing the beginning of the change in the current rush for ‘user generated content’ (VPI without the contract), and b) because the economics will stack up. Organisations need data to run their operations – they don’t really mind where it comes from. So, if a new source emerges that is richer, deeper, more accurate, less toxic – and all at lower cost than existing sources – then organisations will use this source.

It won’t happen overnight obviously; as mentioned above specific tools, processes and commercial approaches need to emerge before this information begins to flow – and even then the shift will be slow but steady, probably beginning with Buying Intention data as it is the most obvious entry point with enough impact to trigger the change. That said, the Mydex social enterprise already has a working proof of concept up and running showing much of the above working. A technical write up of the proof of concept build can be found here. And the market implications of this are explored in more detail in new research on the market value of VPI shortly to be published by Alan Mitchell at Ctrl-Shift.

The two hour session at the VRM workshop was barely enough to scratch the surface of the above issues, so the plan is to continue the dialogue and begin specifying the capabilities required in detail in the User Driven and Volunteered Personal Information (technology) workgroup at The Kantara Initiative. The workgroup charter can be found here. A parallel workgroup focused on business and policy aspects will also be launched in the next few weeks. Anyone wishing to get involved in the workgroup can sign up to the mailing list here and we’ll get started with the work in the next couple of weeks.


Categories: #Kantara, Data, Mydex, Privacy, Project VRM, VPI Tags:

The Information Masters (not)

June 15th, 2009 Comments off

This post by Jerry Fishenden on Government IT spend made me think back to the Information Masters research and book, led way back in 1999 by my sometime consulting colleague John McKean. I find it bemusing and a bit depressing that despite this excellent piece of research, and the associated research having been around now for ten years, the vast majority of organisations still carry on believing that throwing money at technology is the way to get maximum return from their information assets. Certainly from Jerry’s post, it would seem that UK government is very much in this camp.

To re-cap, for those who have not read the book, the research study interviewed senior execs from 30 or so organisations worldwide, across a range of sectors, that were noted for their advanced use of customer and related information. It asked them to explain what had started them on their journey to information mastery, what they found out along the way, and what they would do differently if they had the chance. This qualitative research uncovered seven information-related competencies (information management, leadership, development of an information culture, organisational structure to optimise information flows, process management, people management and technology management). These competencies and their impact are explained in detail in the book – the key point made by these top performing businesses was that they invested their time and money across these competencies in a balanced way; i.e. ALL are important, and over/ under/ non-investing in any one of them would have limited the overall success of the initiative.

However, for me, the most interesting part of the Information Masters story lies in the quantitative assessment of how the typical organisation (i.e. more than 95% of them, according to the survey) invests. In short, they spend most of their money on the technology, and pay lip service to the other six competencies. This comparison is shown in the chart below, along with a view on the return on investment available from each approach.

[Chart: elements of information competency]

Unfortunately for us UK taxpayers, our successive governments and the civil service seem to be very much in the ‘throw money at technology’ camp.

Categories: Data, Leadership Tags:

“The Personalisation of Today is Like Lipstick on a Pig….”

June 11th, 2009 2 comments

I just love that quote from James Gardner of LloydsTSB, who goes on to say…

‘No, the only way to get to markets of one is if customers make the products themselves. This is where the “mash up” I spoke of my in my last post comes in. Customers, who are able throw together bits of offers in unique ways, and then share them with other like minded customers, are the way things will eventually pan out. These are crowds at the centre of the financial services value chain, which will be highly distributed, highly chaotic, but not subject to the system risks of a centralised banking system.’

Spot on I’d say, and that’s where we’re looking to get to with Mydex – allowing the individual to genuinely be the point of integration for their personal data, and the processes/ applications/ mashups that engage with it. I don’t think banking will be the first to engage, but it will probably be a fast follower.

Categories: Mydex, Project VRM, VPI Tags:

Scotweb 2

June 11th, 2009 Comments off

I’ll be speaking at this event next week in Edinburgh about VRM and the Mydex initiative.

Also, moves are afoot to get a Scotland-based ‘chapter’ up and running to do some local pushing forward on VRM initiatives.

Categories: Mydex, Project VRM, VPI Tags: ,

Personal RFPs… what are they, and how do we make them happen?

May 28th, 2009 4 comments

At the VRM West Coast workshop, Don Marti led a session on Personal RFPs, which then led to the issue being debated further on the mail list and built out in this post by Alan Mitchell. Here’s my contribution, looking as much from the CRM/ recipient perspective as the VRM one – ultimately I think that until we look at both simultaneously, we won’t get much up and running at any kind of scale.

Firstly, I think we need to get our terminology in order; as Craig Burton pointed out…we do not yet have a clear VRM lexicon accepted and understood by all project participants.

Here are a couple of references from Wikipedia that relate to/ illustrate the background to the terms Request for Information (RFI) and Request for Proposals (RFP). I think we need to look at both in tandem because typically they interact with each other.

Request for Information – A request for information (RFI) is a standard business process whose purpose is to collect written information about the capabilities of various suppliers. Normally it follows a format that can be used for comparative purposes. An RFI is primarily used to gather information to help make a decision on what steps to take next. RFIs are therefore seldom the final stage and are instead often used in combination with the following: request for proposal (RFP), request for tender (RFT), and request for quotation (RFQ). In addition to gathering basic information, an RFI is often used as a solicitation sent to a broad base of potential suppliers for the purpose of conditioning supplier’s minds, developing strategy, building a database, and preparing for an RFP, RFT, or RFQ.

Request for Proposal – A request for proposal (referred to as RFP) is an invitation for suppliers, often through a bidding process, to submit a proposal on a specific commodity or service. A bidding process is one of the best methods for leveraging a company’s negotiating ability and purchasing power with suppliers. The RFP process brings structure to the procurement decision and allows the risks and benefits to be identified clearly upfront. The RFP purchase process is lengthier than others, so it is used only where its many advantages outweigh any disadvantages and delays caused. The added benefit of input from a broad spectrum of functional experts ensures that the solution chosen will suit the company’s requirements. Effective RFPs typically reflect the strategy and short/long-term business objectives, providing detailed insight upon which suppliers will be able to offer a matching perspective.

I think the background to these terms is key to how we must think of them in VRM world if we are to understand how best to deploy them. What does that mean in practice?

  1. The RFI and RFP processes originate from professional procurement functions, that have the time, funds and incentive to make the process work
  2. There is an implicit logic in the process for both parties, architected around eliminating guesswork and waste; i.e. we’ll tell you what we want to know about (RFI) and, based on that information, what we want to buy (RFP) to save you having to market and sell to us; and by being more organised we’ll be able to do a more efficient deal for both and generate a win-win
  3. They are business processes, not just technologies or data flows
  4. The communications channels through which the interactions and transactions are exchanged should be standard, mass market, not niche
  5. They need two parties, issuers and respondents, both of whom understand how the process works, and both of whom have to do a lot of work to make the exercise work
  6. They typically relate to fairly complex requirements, because the cost of the process is high enough to eliminate the value in applying it to simple/ low cost purchases
  7. The buyer requirement/ seller response is rarely just about lowest price, items suited to that are dealt with in commodity markets

In addition to these characteristics, it is also worth noting that over time intermediaries have emerged (e.g. TEC) who, amongst other support services, make a whole series of standard RFI and RFP templates available at no or low cost in order to stick themselves into the value chain.

My view of the above is that a) the originators of the terms RFI and RFP now have finely honed processes for dealing with them, they do enable win-wins for buyer and seller, and intermediaries have emerged to deal with some of the hard stuff – like finding common terminology; and b) they are typically not automated processes, and thus not at all like what will actually be required to do the things we have commonly described as Personal RFPs in VRM discussions (e.g. ‘I’m here, and I need a stroller for twins’).

SO: Before we progress, we may wish to change our terminology around the RFI/ RFP issue – to more accurately reflect what the individual needs; otherwise we risk being confused with the prior deployments of the terms which actually have very little in common with what the individual might deploy right now.

Here’s my view of what those needs are:

  • To be able to articulate a requirement for information about a product or service in ways that can be discovered by potential suppliers or other third or fourth party service providers (assume by a machine, but not exclusively so). This area is where Alan suggests there is the biggest gap at present, and that’s quite right – if that gap was not there, we’d have had personal RFP type things going on years ago.
  • To share that requirement for information without compromising one’s data privacy beyond that required to receive the information sought.
  • To match ‘information requests/ buying intentions’ with their equivalent information provisions and proposals (that’s the really smart bit!) – see the sketch after this list.
  • To receive responses to the information request through one or more communication channels.
  • To be able to interact with responses, including follow up to complete a sale, or to extend an information request.
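
The matching step (the third point above) is, at its simplest, a filter over supplier offers against the disclosed context. A deliberately naive sketch, with made-up field names, might look like this – real matching would need the shared lexicon and much richer constraint handling discussed elsewhere:

```python
def match_offers(request: dict, offers: list) -> list:
    """Return supplier offers compatible with a disclosed buying-intention request."""
    matches = []
    for offer in offers:
        if offer["category"] != request["category"]:
            continue                                             # wrong kind of product/service
        if request.get("max_price") is not None and offer["price"] > request["max_price"]:
            continue                                             # outside the buyer's budget
        if not request.get("required_features", set()) <= set(offer.get("features", [])):
            continue                                             # missing a must-have feature
        matches.append(offer)
    return sorted(matches, key=lambda o: o["price"])             # cheapest qualifying offer first

request = {"category": "stroller", "max_price": 400, "required_features": {"twin"}}
offers = [
    {"supplier": "A", "category": "stroller", "price": 350, "features": ["twin", "foldable"]},
    {"supplier": "B", "category": "stroller", "price": 300, "features": ["single"]},
]
print(match_offers(request, offers))    # only supplier A qualifies
```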

If we look hard enough we’ll find that there are already architectures out there that do 2, 3 and 4 – and bits of 1 are around that can be picked up and added in, either directly or (more likely) via fourth party services. For example, the architecture below has been doing its stuff on the web since way back in 2000; a proposition called 2busy2surf that was way ahead of its time. Unfortunately that business has now gone, but the architecture and buyer-seller matching engine has been white-labelled into 20 or so propositions since then. It is still churning out stacks of permissioned requests for information and requests for proposals, and returning matched information packages or offers. These come direct from the selling organisation, or more typically through the affiliate markets (third party services).

[Diagram: RFI & P Architecture 1]

So, to get what we used to call personal RFPs up and running, what we need to do, in my view, is:

  1. Sort out our terminology/ lexicon
  2. Build out the Requirements Articulation piece, adding search maps, comparison engines and other added-value buying services into the spec
  3. Tell the story of the architecture
  4. Get it running in a few businesses in a more overtly VRM way
  5. Publish the architecture as an open standard

That’s going to take a bit of time and effort. It’s on the agenda for the User Driven and Volunteered Personal Information working group at Kantara; this group has now been approved and will be up and running shortly. I’ll post the details on how to join that as soon as I have them.

Thoughts anyone?

Iain

CRM…. meet VRM, the ‘three meeting theory’

May 13th, 2009 Comments off

I’ve had a couple more validations of this theory in the last few weeks, so thought I’d best write it up. My hope is that we can use the upcoming VRM Workshop to get the VRM story refined and presented so that we can reduce the number of meetings required to get to the detail of why an organisation should consider ‘VRM enabling’ itself.

So, here’s my theory:

It takes three fairly in-depth meetings for a smart, typically senior CRM/ Customer Management/ Customer Experience executive in a large customer-facing organisation to genuinely ‘get’ VRM and where we are coming from with the project and mind-set – and thus what’s in it for them.

Here’s how it usually pans out in my experience:

Meeting One: This usually happens on the back of an existing contact who has heard/ read some snippet about ‘VRM’, or in one of the more in-depth, small-group presentations that I and others have run in the last 12 months or so (mainly in the UK).

The outcome of this meeting, from the perspective of the CRM/ CM exec, is usually along the lines of ‘These people are well meaning, are obviously committed to their ‘hobby’, but a bit mad and naive as to what us big organisations have to deal with; but at least I’ve done my bit for keeping an eye on innovation in my space’. Alternatively, the shorter meetings can be driven by ‘don’t these people realise that we’ve just spent a zillion pounds on our CRM application and need to get that to work because we’ve told everyone it will’.

Most CRM….meet VRM discussions finish at this stage….for now anyway.

Meeting Two: Let’s say that, at best, one in twenty of the above meetings ends up with a follow-up meeting, and that many of these come through personal ongoing contacts (where CRM/ CM work is going on in parallel), or because sufficient time has passed since meeting one for an update to be of possible value.

This is the meeting during which ‘the penny drops’….but typically only in connection with a very small nugget of opportunity, often one which is front of mind for the exec at that point in time. Examples would include:

– yes, I know our data quality is shockingly bad….., you mean we could work with our customers to fix that…..? Or

– so you mean we could accept these highly qualified leads into our existing CRM system with hardly any tweaks….? Or

– so our customers can help us refine/ define our new products if we engage in the right way?

The outcome of this second meeting is usually….’let me think about that’; and ‘is there anything up and running as a genuine VRM application that I can have a look at?’

Meeting Three: So now we’re down to a very small number of ‘almost converts’. These third meetings are typically much more ‘CRM/ CM/ CE Exec driven’ and are about:

– where do I see this stuff? (i.e. we are usually showing some of the behind the scenes development projects at this stage)

– how can I access it to play around with it, prototype it and build proofs of concept in my domain?

– can you meet up with our innovation folks to talk about a possible pilot?

Underpinning these third meetings is usually the realisation that what we VRM folks are talking about actually has a very sound economic argument, and also that we are about ‘win win’ rather than consumer activism for the sake of it.

What happens after meeting three? I don’t know to be honest; we’ve not had any yet that I’d count as such – although there are a couple lined up for June and July. I think for those meetings the challenge falls back onto the VRM community, or those of us building VRM type solutions – we need to be able to answer the ‘meeting three challenges’ loud and clear.

What does that mean for Project VRM and our workshop this week? I think we need to get better at telling our big and complex story, probably in bite-sized chunks and in accessible ways – a good web site for example. I think we also need to focus on getting some real, live pilots and proofs of concept out there to be engaged with. Let’s pick up on that on Friday.

Lastly, I’d have to add that the record for ‘getting it’ is actually nothing like my three-meeting theory – it was about twenty minutes, and the only question at the end was ‘where do we sign up?’

Categories: Leadership, Mydex, Project VRM, VPI Tags: