2017-12-26

Copy-pasting the history of public procurement

They say that “those who do not learn from history are doomed to repeat it.” However, those who machine-learn from history are also doomed to repeat it. Machine learning is in many ways copy-pasting history. A key challenge is thus to learn from history without copying its biases.

For the first time in history, we have open data on past experiences with making public contracts. Contracting authorities and bidders have a chance to learn from these experiences and improve the efficiency of agreeing on future contracts by matching contracts to the most suitable bidders. I wrote my Ph.D. thesis on this subject.

The thing is that we know little when learning from awarded contracts. What we have is basically this: similar contracts were usually awarded to these bidders. This has two principal flaws. The first is tenuous similarity. Similarity between contracts hinges on how informative their descriptions are, yet the features comprising a contract description may be incomplete or uninformative. Civil servants administering contracts may either intentionally or inadvertently withhold key information from the contract description, or fill it with information that ultimately amounts to noise.
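
To make this kind of matching concrete, here is a minimal sketch of the naive approach I describe, assuming contract descriptions are free text. The library choice, toy data, and field names are mine for illustration, not from my thesis: it recommends bidders for a new contract by ranking past contracts by the textual similarity of their descriptions and reading off their winners.

```python
# Illustrative sketch: recommend bidders for a new contract by finding
# textually similar past contracts and reusing their winning bidders.
# The toy data and field names are assumptions, not real procurement records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_contracts = [
    {"description": "construction of a primary school building", "winner": "Builder A"},
    {"description": "reconstruction of a road bridge", "winner": "Builder B"},
    {"description": "supply of office furniture for a ministry", "winner": "Supplier C"},
]
new_contract = "construction of a kindergarten building"

vectorizer = TfidfVectorizer()
past_vectors = vectorizer.fit_transform([c["description"] for c in past_contracts])
new_vector = vectorizer.transform([new_contract])

# Rank past contracts by the cosine similarity of their descriptions to the new one.
similarities = cosine_similarity(new_vector, past_vectors).ravel()
ranked = sorted(zip(similarities, past_contracts), key=lambda pair: -pair[0])

for score, contract in ranked:
    print(f"{score:.2f}  {contract['winner']}  ({contract['description']})")
```

If the descriptions are incomplete or noisy, the similarity scores carry little signal, and the recommendation simply echoes whoever happened to win the nearest past contracts.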

The second flaw is perhaps more detrimental: the associations between bidders and their awarded contracts need not be positive. If we learn from contracts awarded in the past, we assume that the awarded bidders are the best (or simply good enough) matches for the contracts. This assumption is fundamentally problematic. The winning bidder may be awarded on the basis of adverse selection, which favours not the public but the public decision-maker. There are many ways in which the conditions for adverse selection can arise. Relevant information required for informed decision-making can be unevenly distributed among the participants of the public procurement market. Rival bidders can agree to cooperate for mutual benefit and limit open competition, for example by submitting overpriced fake bids to make the real bids more appealing in comparison. When a bidder repeatedly wins contracts from the same contracting authority, it need not be a sign of exceptional prowess, but instead a sign of clientelism or institutional inertia. In most public procurement datasets, the limitations of award data are further pronounced by missing data on unsuccessful bidders. In this light, the validity of contract awards recommended by machine learning from data with these flaws can easily be contested by unsuccessful bidders who perceive themselves to be subjects of discrimination. Consequently, blindly mimicking human decisions in matching bidders to contracts may not be the best course of action.

Some suspect we can solve this by learning from more kinds of data. Besides award data, we can learn from post-award data, including evaluations of bidders’ performance. However, post-award data is hardly ever available, and without it the record of a contract is incomplete. Moreover, when we learn from data for decision support, we combine the conflicting incentives of decision arbiters and data collectors, who in public procurement are often the same people. For instance, a winning bidder may receive a favourable evaluation despite its meagre delivery on the awarded contracts, thanks to its above-standard relationship with the civil servant administering them.

In Logics and practices of transparency and opacity in real-world applications of public sector machine learning, Michael Veale suggests that we need to carefully select what data gets disclosed in order to combine “transparency for better decision-making and opacity to mitigate internal and external gaming.” We need to balance publishing open data to achieve transparency with the use of opacity to prevent gaming the rules of public procurement. Apart from data selection, pre-processing training data for decision support is of key importance. In our data-driven culture, engineering data may displace engineering software, as Andrej Karpathy argues in Software 2.0. Careful data pre-processing may help avoid overfitting to the biases that contaminate public procurement data.
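
To illustrate what such pre-processing could look like, here is a sketch under my own assumptions about field names, thresholds, and the weighting scheme, not a prescription: it down-weights training examples that show the warning signs discussed above, such as single-bid awards or repeated wins of the same bidder with the same authority.

```python
# Illustrative sketch of bias-aware pre-processing of award data before training.
# The field names, thresholds, and weighting scheme are assumptions for illustration.
from collections import Counter

def training_weights(awards, repeat_threshold=5):
    """Down-weight awards with warning signs: single-bid awards and
    long runs of the same authority awarding the same bidder."""
    pair_counts = Counter((a["authority"], a["winner"]) for a in awards)
    weights = []
    for award in awards:
        weight = 1.0
        if award["number_of_bids"] == 1:
            weight *= 0.5  # no competition, so little to learn about suitability
        if pair_counts[(award["authority"], award["winner"])] >= repeat_threshold:
            weight *= 0.5  # repeated wins may reflect inertia or clientelism
        weights.append(weight)
    return weights

awards = [
    {"authority": "Ministry X", "winner": "Builder A", "number_of_bids": 1},
    {"authority": "Ministry X", "winner": "Builder A", "number_of_bids": 3},
    {"authority": "Town Y", "winner": "Supplier C", "number_of_bids": 4},
]
print(training_weights(awards, repeat_threshold=2))  # [0.25, 0.5, 1.0]
```

The resulting weights can then be passed to most learners (for instance as sample_weight in scikit-learn estimators), so that the suspicious parts of history count for less without being erased from the record.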

Neither technology nor history is neutral. In fact, history always favours the winners. Winners of public contracts are thus always seen in a favourable light in public procurement data. Moreover, data is not an objective record of the history of public procurement. We therefore cannot restrict ourselves to copy-pasting from the history of public procurement by mechanically applying machine learning that is oblivious to the limitations of its training data. On this note, in the context of the criminal justice system, the Raw data podcast asks: “Given that algorithms are trained on historical data – data that holds the weight of hundreds of years of racial discrimination – how do we evaluate the fairness and efficacy of these tools?” Nevertheless, how to escape this weight of history in machine learning is not yet well explored.

1 comment:

  1. This is an insightful piece. I have been contemplating in recent days if in my country, a national supplier performance database would be useful to contracting authorities when making contract awards. These thoughts stemmed from the under-current of the instant writing: "those who don't learn from history are doomed to repeat it". More to the point however is exactly what data ought to be captured in that supplier performance database, as any number of circumstances could tend to favour a supplier's good performance in one context and poor performance in another. The data alone has to be available but it has to be thoroughly thought through to make it effective. Thanks Mynarz...