2016-03-21

In science, form follows funding

Recently, brief Twitter exchanges I had with @csarven on the subject of #LinkedResearch made me want to articulate a longer-form opinion on scientific publishing that no longer fit in a tweet. Be wary, though: although this opinion is longer, it still oversimplifies a rather complex matter for the sake of conveying the few key points I have.

There's a pervasive belief that “if you can't measure it, you can't manage it”. Not only is this quote often misattributed to Peter Drucker; William Edwards Deming, to whom it is also commonly credited, actually wrote the opposite: “it is wrong to suppose that if you can't measure it, you can't manage it — a costly myth” (The New Economics, 2000, p. 35). Contradicting the misquoted statement, Deming suggested that it's possible to manage without measuring. That said, he acknowledged that metrics remain an essential input to management.

Since funding is a key instrument of management, metrics influence funding decisions too. Viewed from this perspective, science is difficult to fund because its quality is hard to measure. The difficulty of measuring science is so widely recognized that a dedicated discipline, scientometrics, was devised to study how to measure it. Since measuring science directly is difficult, scientometrics found ways to measure scientific publishing instead, such as citation indices. Using publishing as a proxy for science carries the implicit assumption that the quality of scientific publications correlates positively with the quality of science, yet many accept this assumption simply for lack of a better way to evaluate science. The key issue with this approach is that the emphasis on measurability constrains the preferred form of scientific publishing so as to make measuring it simpler. A large share of scientific publishing is centralized in the hands of a few large publishers, who establish a constrained environment that can be measured with less effort. The form of publishing thus exerts a systemic influence on science. As Marshall McLuhan wrote, the medium is the message. While in architecture form follows function, in science, form follows funding.
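To make the notion of measuring publishing concrete: citation indices underpin simple bibliometric scores such as the h-index. Here is a minimal sketch in Python; the metric itself is standard, but the citation counts below are made up purely for illustration:

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(counts, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers with these citation counts yield an h-index of 3.
    print(h_index([10, 8, 5, 3, 0]))  # -> 3

Note how little the score says about the science itself: it counts publications and citations, which is exactly the proxy assumption discussed above.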

Measuring distributed publishing on the Web is a harder task, though not an insurmountable one. For instance, Google's PageRank algorithm provides a fair approximation of the influence of documents distributed across the Web. Linked research, which proposes to apply the linked data principles to scientific publishing, may make it possible to measure science without the costs incurred by centralizing publishing. In fact, I think its proverbial “killer application” may be a measurable index like the Science Citation Index. Indeed, the SCI was a great success, one that “did not stem from its primary function as a search engine, but from its use as an instrument for measuring scientific productivity” (Eugene Garfield: The evolution of the Science Citation Index, 2007). A question that naturally follows is: how far off am I in thinking this?
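For readers unfamiliar with how PageRank assigns influence, here is a minimal power-iteration sketch in Python. The toy link graph is an illustrative assumption of mine, not anything from the exchanges this post responds to; the damping factor of 0.85 is the value commonly cited for PageRank:

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each document to the list of documents it links to."""
        nodes = list(links)
        n = len(nodes)
        rank = {node: 1.0 / n for node in nodes}
        for _ in range(iterations):
            new_rank = {node: (1.0 - damping) / n for node in nodes}
            for node, outgoing in links.items():
                if not outgoing:
                    # Dangling document: spread its rank evenly over all nodes.
                    for other in nodes:
                        new_rank[other] += damping * rank[node] / n
                else:
                    share = damping * rank[node] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Toy graph: three documents citing one another.
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
    for doc, score in sorted(pagerank(graph).items()):
        print(doc, round(score, 3))

The point of the sketch is that nothing in it requires a central publisher: the algorithm only needs the links between documents, which is precisely what linked data principles would expose.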
