Best bets: a worst practice?

As one of the authors of our Enterprise Search Report 2008, I've been spending a lot of time lately looking at search technology and talking to folks who care deeply about the subject (i.e., vendors and their customers).

One thing everyone seems to agree on is that providing relevant results to the user is a very hard problem indeed. People don't want to enter a few keywords and then get 10,000 hits on documents that contain those keywords, 9,999 of which might be irrelevant. They want pointers to the one or two documents that are relevant.

The signal-to-noise problem is so thorny that many enterprise search products include an optional feature known as "best bets." The idea is that certain very common searches should point to particular documents (or intranet pages) that are known, or presumed, to apply. Imagine that a lawyer working for a large legal firm logs into the company portal and searches on "poison pill." A thousand hits might come back, of which 990 are related to medications, allergic reactions, toxicity, malpractice, and so on, even though all the person was really looking for was a link to the company's "mergers and acquisitions" resource page. ("Poison pill" is a term for tactics a company can use to fend off hostile takeover attempts.) With "best bets," you rig the system to promote the company's "M&A resources" link to the top of the hit list whenever someone searches on "poison pill."

Sometimes "best bets" refers to presenting the user with a recommendation when, say, several repositories exist, one or more of which could be better-suited to a given search than the others. ("Would you like to search the Parts Catalog for this?") This is more of a navigational scenario. That's not really what I'm talking about here. I'm talking about  the practice of biasing search results by hard-coding certain answers to certain common queries.

Setting up "best bets" is typically a manual process. A person in IT will use search analytics to determine the most common search queries and the most-followed links associated with them. Then those associations will be captured in a database and wired into the search software in such a way that when a user issues a query for which a best bet already exists, the best-bet link(s) will automatically be shown at the top of the results page (either as a regular hit or under a separate heading of "Best Bets").

Not everyone thinks the "best bets" mechanism is a good idea. The problem is that, fundamentally, it's a hack. It's arguably the worst kind of hack in that it involves serious amounts of human intervention. Someone has to create the best-bet database. (Typically there will be hundreds, if not thousands, of best-bet links.) Then the database has to be updated and kept fresh as user needs change and documents are added to or dropped from the system.

In point of fact, the search software should do all this for you. After all, that's its job: to return relevant results (automatically) in response to queries. Why would you sink tens (or hundreds) of thousands of dollars into an enterprise search system only to override it with a manually assembled collection of point-hacks?

Sure, search is a hard problem. But if your search system is so poor at delivering relevant results that it can't figure out what your users need without someone in IT explicitly telling it the answer, maybe you should search for a new search vendor. (And for help with that, see Enterprise Search Report 2008, a free sample of which is available right here.)

