Scoring spreadsheets, the bane of my life

My colleague Alan Pelz-Sharpe and I frequently engage in friendly debate, be it about content technology, our preferred London neighborhoods, or how to cook a good curry. Recently, as we are both advising research customers on web content management, digital asset management, and enterprise document management procurements, we've been discussing the pros and cons of using scoring spreadsheets in the vendor evaluation process. Alan recently shared his thoughts on the subject.

I am more cynical about quantitative methods of evaluating vendors. It makes me think of people who buy wine based solely on its Wine Spectator rating (without considering the food they might pair it with, or what they actually like vs. what Mr. Wine Critic likes), or my single friends who have a checklist of what they will or won't accept in a mate. "He must be this tall, have this eye color, and earn at least this salary." Well, the world simply doesn't let you pick partners off a checklist, perhaps unless you're Tiger Woods.

I often debate scoring spreadsheets with procurement departments at large enterprises. Despite my warnings against the approach, tradition or regulation means a great deal of painstaking time and effort gets put into creating multi-page spreadsheets, and Mr. Procurement Man then seems very proud that we now have a scientific, clearly quantitative assessment process. None of those "silly" or "emotional" things like "company culture" or "ability to work with our people" intrudes on the formulaic precision of the spreadsheet. Whoever gets eliminated will surely feel better knowing they scored 99 points instead of 100, and if an eliminated vendor's feathers get ruffled, Mr. Procurement Man can produce the spreadsheet and its scores to show that this was a very scientific process, so clearly it must have been accurate.

I consider it all a facade over what is, at its core, a qualitative process. Come to think of it, I don't know why there isn't a Dilbert cartoon depicting this very thing.

Generally, when the rubber meets the road in a vendor selection process and you are down to the last three or four vendors, they tend to be quite similar functionally. If they weren't a good fit for your requirements, your technical team, and your architecture, or if they couldn't accommodate your use cases, they wouldn't have gotten this far.

Final selection almost always boils down to "softer" criteria, such as the team the vendor is willing to supply, their experience in your industry, and geographic and cultural considerations.

These factors are just as important as the technology, despite what the scoring spreadsheets tend to reflect, and they weigh heavily on evaluators when the actual scoring gets done. It's not uncommon to hear evaluators say things like "let's go back and re-jigger the scores for the auditor" or "I'm going to change the score to better reflect the vendor that's the better fit." Yep, that's "science" and "quantitative analysis" for you.

For me personally, the biggest challenge in filling out these spreadsheets is whether to do it based on my accumulated knowledge of a vendor from years of following them and their implementations, or simply based on what the vendor says in their proposal. Clients have often asked me to do the latter, "so everything is consistent with others on the evaluation team," at which point I answer, "then why pay me to be here?" I'm there to tell clients everything the vendor won't tell them in the proposal, and to "rate" vendors in the context of their use cases, their organization, and their goals - not to demonstrate my reading comprehension skills. I admit I love to shock my clients when I say this, but it never fails to prove my point.

Scoring, unless you're talking about sports (and I don't mean the judged ones, like figure skating or diving), is inherently a subjective process. It takes qualitative assessments even to produce a quantitative result. So if you are going to do scoring, keep it simple - and don't be afraid to favor the qualitative.
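To see how thin the quantitative veneer is, here is a minimal sketch, with entirely hypothetical vendors, criteria, ratings, and weights, of the arithmetic a typical scoring spreadsheet performs. Every input is a qualitative judgment, and a modest re-weighting is enough to flip the ranking.

```python
# A minimal sketch (hypothetical vendors, criteria, ratings, and weights) of the
# arithmetic behind a scoring spreadsheet. Every input is a qualitative judgment;
# nudging the weights is enough to flip the "quantitative" result.

scores = {
    # criterion: {vendor: rater's 1-5 judgment}
    "functional fit":      {"Vendor A": 3, "Vendor B": 5},
    "team supplied":       {"Vendor A": 5, "Vendor B": 3},
    "industry experience": {"Vendor A": 5, "Vendor B": 3},
}

def rank(weights):
    """Return vendors ordered by weighted total; the weights are a judgment call too."""
    totals = {}
    for criterion, ratings in scores.items():
        for vendor, rating in ratings.items():
            totals[vendor] = totals.get(vendor, 0.0) + weights[criterion] * rating
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# A weighting that emphasizes functionality puts Vendor B on top...
print(rank({"functional fit": 0.6, "team supplied": 0.2, "industry experience": 0.2}))

# ...while a modest re-jigger toward the "softer" criteria puts Vendor A on top.
print(rank({"functional fit": 0.4, "team supplied": 0.3, "industry experience": 0.3}))
```

The point isn't that weighting is wrong; it's that the precision of the output says nothing about the objectivity of the inputs.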

 

