2014-07-09

Epistemology of data in contemporary science

At the turn of the last year I published a paper in E-LOGOS, a Czech philosophy journal (the paper is available here). The published text deals with the image of data that is widespread in contemporary science. It offers a critical look at the understanding of data that is a key part of the big data hype. Again, as is the case for most of my work in areas I am not deeply familiar with, it is a compilation and remix of thoughts drawn from a long array of sources. What you see below is a (rough) English translation of the text (with proper hyperlinks). This way, there’s a chance it will be indexed well enough for the interested audience to find it.


Abstract: Contemporary science is dominated by a positivist epistemology of data, which builds on the foundations of metaphysical realism and the ideal of mechanical objectivity. This approach to data suffers from a number of flaws. Its shortcomings were identified in many critical responses and led to a new problematization of the established concept of data. The often criticised aspects of this epistemology concern the embeddedness of data in the context of its making, its mediation, and its openness to manipulation. In recent years, the function of data gained unprecedented importance due to the rising appetite of science for data, which attracted attention to this formerly unproblematic concept. Several alternative approaches to the epistemology of data appeared, of which this text introduces the positions proceeding from constructivism and rhetoric. The presented paper draws heavily on critical literature in the epistemology of data. Due to its summarising character, it may be understood as a synthesis and reconfiguration of the existing thoughts on the topic. In this way, the paper offers a contribution to rhetorical argumentation in the discourse of data in contemporary science.

Introduction

In spite of the fact that the etymology of words often bears no correspondence to their use, the roots of ‘data’ give many hints about the way the word is used. Data, as Rosenberg describes (2013, p. 18), is the plural of the Latin ‘datum’, a neuter past participle of the verb ‘dare’, which translates as ‘to give’. ‘Datum’ can thus be translated as something ‘given’. The common use of data is in line with this explanation, often treating it as something given, which needs no questioning.

Constructivist epistemology takes a stance opposing this viewpoint and claims that nothing is given, since everything is a product of human construction. For example, Bachelard writes:

“For a scientific mind, all knowledge is an answer to a question. If there has been no question, there can be no scientific knowledge. Nothing is self-evident. Nothing is given. Everything is constructed.” (Bachelard, 2002, p. 25)

Diverging understandings of data provide a basis for contemporary criticism that undermines the established status of data in science. On the one hand, data is perceived as a direct reflection of reality, whereas on the other hand, it is seen as an artifact of human creation. These diverging perspectives are reflected in the problematization of data, which raises many doubts. For example, the work of Poovey dedicated to the history of the modern fact formulates many questions, which apply to the concept of data as well:

“What are facts? Are they incontrovertible data that simply demonstrate what is true? Or are they bits of evidence marshalled to persuade others of the theory one sets out with? Do facts somehow exist in the world like pebbles, waiting to be picked up? Or are they manufactured and thus informed by all social and personal factors that go into every act of human creation? Are facts beyond interpretation? Or are they the very stuff of interpretation, its symptomatic incarnation instead of the place where it begins?” (Poovey, 1998, p. 1)

This text comprises some of the possible answers to these questions. Proceeding from historical traces of the evolving understanding of data, the following sections introduce criticism of the dominant realist epistemology and offer alternative epistemologies drawing on constructivism and rhetoric.

Brief history of data

The concept of ‘data’ has been in use for a long time, yet it acquired its current meaning only at the onset of modernity (Gitelman, 2013, p. 15). One of the first known uses of the concept appears in Euclid’s book entitled Data (Euclid, 1834). The book describes methods for solving and analysing problems, in which data serves either as that which is known in relation to a hypothesis, or that which can be demonstrated to be known. Data offers starting points of inquiry, from which new knowledge may be inferred.

Describing givens as data remained in use at least until the 17th century. In disciplines such as mathematics, philosophy and theology, data signified given foundations, which were not to be disputed (Gitelman, 2013, p. 19). For instance, theology employed the term for the things given by God or the Bible.

The concept came near to its modern use in the early 18th century. Instead of standing for unquestionable givens, data appeared in use as the results of experiments, experience or collection. In other words, data “went from being reflexively associated with those things that are outside of any possible process of discovery to being the very paradigm of what one seeks through experiment and observation” (Gitelman, 2013, p. 36). Making this approach part of general knowledge can be ascribed chiefly to positivism. Viewed in terms of epistemology, it can be described as metaphysical realism of data.

Metaphysical realism of data

The view of metaphysical realism assumes that data can be collected from an objectively perceivable reality. The realist framework deems data an exact record or faithful representation of reality. The rhetoric of scientific ‘discoveries’ requires that knowledge to be discovered already exists in reality; science is then tasked with revealing such knowledge. Photography offers a prototypic example of accurate capture of reality. Photographs are “raw representations of the natural world,” which stand for a “unique and literal transcription of nature - a ‘scientific record’” (Gitelman, 2013, p. 4, appendix).

Scientists generally regard data as records of structured observation guided by a protocol designed in advance (Halavais, 2013). Data collection is described as observation without interference with the observed reality. The desired objectivity of scientific data presupposes the separation of the perceiving subject from the perceived reality. Metaphysical realism, according to Von Glasersfeld (1984, p. 2), asserts that “we may call something ‘true’ only if it corresponds to an independent, ‘objective’ reality.” The realist framework bestows data with a privileged role, while holding subjective description to be distant from ‘genuine’ reality.

Epistemic privilege of data

The emphasis on data is particularly characteristic of science in recent decades, when large volumes of data became widely available. Yet data had a fundamental role in science in former times as well; for example, Nelson (2009) mentions the Rudolphine Tables by Johannes Kepler as an early example of the scientific use of data. However, data acquired its peculiar function “in the epistemology we associate with modernity” (Poovey, 1998, p. 1).

Modern science started to draw increasingly on quantified data, which contributed “in a major way to the impression of objectivity in scientific prose” (Gross, 2002, p. 37). The average number of data tables used in scientific articles almost doubled between the 19th and the 20th century. Half of a sample of 20th-century articles contained a table, with an average of 5 tables per article (Gross, 2002, p. 182). Science in the 20th century strengthened the distinctive preference for quantitative facts over qualitative facts. The language of science reflects this shift in distinguishing ‘hard data’, the quantitative nature of which lends the data an aura of unquestionability, and ‘soft data’, the qualitative nature of which enables the data to be bent at will. In some cases, this preference leads to extreme situations, in which quantitative datasets “are given considerable weight even when nobody defends their validity with real conviction” (Porter, 1995, p. 8). A large share of modern scientists fill their papers with mechanical or mathematical explanations of the facts they describe, while their “argumentative strategy for establishing facts and explanations typically revolves around comparisons of data sets” (Gross, 2002, p. 188). Mathematical explanations referring to data are often privileged for their alleged elegance and clarity (Halevy, 2009). During the 20th century there appeared a marked inclination to favour “comparison of large data sets; in addition, mathematics is applied, seemingly whenever possible” (Gross, 2002, p. 231). These trends gradually led to a “rapid ‘commodification’ of data”, which causes data to be presented as “complete, interchangeable products in readily exchanged formats” and may encourage “misinterpretation, over reliance on weak or suspect data sources, and ‘data arbitrage’ based more on availability than on quality” (Edwards, 2013, p. 7).

Data-driven science

In recent years, the emphasis on using data in science increased to such an extent that some proclaim it to bring about a new methodological paradigm of data-driven science (Leonelli, 2014). This approach is labelled the fourth paradigm of science, one of data-intensive research in which computers help find knowledge in data; it extends the preceding three paradigms: the paradigm of empirical observation, the paradigm of explanatory models, and the paradigm of simulation for insight into complex phenomena (Nielsen, 2012). Data is considered a product of quantitative research, which is especially privileged to serve as scientific evidence.

The extreme cases promoting this scientific paradigm earned the label of ‘data fundamentalism’ (Crawford, 2013). For example, Anderson’s controversial article (2008) announces big data as the “end of theory” and claims that numbers speaking for themselves make hypotheses unnecessary. However, as Keller (1985, p. 130) points out, “the problem with this argument is, of course, that data never do speak for themselves.” Regardless of these criticisms, some authors believe that getting rid of hypothesis formulation contributes to the ‘purification’ of science:

“In a small-data world, because so little data tended to be available, both causal investigations and correlation analysis began with a hypothesis, which was then tested to be either falsified or verified. But because both methods required a hypothesis to start with, both were equally susceptible to prejudice and erroneous intuition.” (Mayer-Schonberger, 2013)

The disregard for making hypotheses in this scientific paradigm may be attributed to the ‘unreasonable effectiveness of data’. Some authors, such as Halevy (2009), contend that simple models or hypotheses equipped with large enough data inevitably surpass complex models that lack data. In its extreme form, data-driven science overturns the usual relation between hypotheses and data, in which the hypothesis plays the primary role and data only provides grounds for verification or falsification. Instead, this paradigm promotes processes for the inductive generalisation of data into valid hypotheses. Research into these methods is a principal concern of the field of data mining, where particulars given in data may be distilled into universals, such as sets of association rules.
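To make this inductive move concrete, here is a minimal sketch (the shopping-basket transactions are invented for illustration) of how support and confidence distil an association rule such as {bread} → {butter} from particular records:

```python
# Invented example transactions; each is a set of purchased items.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "butter"},
    {"bread", "butter", "jam"},
]

def rule_stats(antecedent, consequent, data):
    """Support and confidence of the rule antecedent -> consequent."""
    n = len(data)
    both = sum(1 for t in data if antecedent <= t and consequent <= t)
    ante = sum(1 for t in data if antecedent <= t)
    return both / n, (both / ante if ante else 0.0)

support, confidence = rule_stats({"bread"}, {"butter"}, transactions)
# On this toy data: support = 3/5 = 0.6, confidence = 3/4 = 0.75
```

A mining algorithm such as Apriori essentially searches the space of such candidate rules for those exceeding chosen support and confidence thresholds; the universals are induced from, never given prior to, the particulars.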

Data-driven science considers hypotheses inherently untrustworthy if their reliability is not backed by data. However, as the critical rationalism of Karl Popper teaches, data supporting a hypothesis does not verify it; the data merely falsifies incompatible hypotheses. The trustworthiness ascribed to data can be illustrated by the popular saying: “In God we trust; all others must bring data.” Data thus functions as evidence testifying to the truthfulness of the presented claims. For instance, Markham (2013) mentions the impression of ‘instant credibility’ that accompanies data.

Contrary to the afore-mentioned claims, Boyd and Crawford (2012, p. 663) describe this uncritically accepted approach as a “widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy.” Keil, for example, adds that “data-driven science is a failure of imagination” (Keil, 2013). Keil stresses that science cannot ignore hypotheses, which constitute models or theories. Instead, it is necessary to combine empirical observation with the making of hypotheses. As Keil suggests, a vast volume of data does not help if it is not confronted with useful theory. Although a larger volume of data increases the support it lends to prevalent hypotheses, it increases the noise in the data just as much. This is why large volume might in fact, despite expectations, multiply the problems in data. The expectations usually held for data stem from the assumption of its mechanical objectivity.

Myth of mechanical objectivity of data

The epistemic privilege of data springs from the ideal of the mechanical objectivity of data (Gitelman, 2013), which ignores the contextuality, mediation and manipulation of data. The presumed absence of human input (e.g., in photography) and the minimisation of unwanted influences are seen as fundamental in achieving the goal of objectivity. The belief in the neutrality, autonomy and objectivity of data is widespread. For example, Porter states that “when philosophers speak of the objectivity of science, they generally mean its ability to know things as they really are” (Porter, 1995, p. 3). In accordance with this demand, metaphysical realism deems data to be a direct representation of reality. However, data cannot be an exact reflection of complex reality, as it cannot avoid reducing the reality and omitting details reckoned unnecessary for the purpose of the data. A high level of reduction makes data lose the ability to represent, and thus data can be considered no more than an approximation of reality (Markham, 2013). More causes of data failing to represent reality can be identified, some of which are examined in further sections of the text.

Nevertheless, it is important to acknowledge that there are other ways of formulating the objectivity of science. One of them is an influential definition of objectivity as an “ability to reach consensus” (Porter, 1995, p. 3); another equates objectivity with “fairness and impartiality” (ibid., p. 4). However, the ideal of mechanical objectivity is unattainable, because data is always mediated and its creation is embedded in a context, which can neither be avoided nor reproduced.

Contextuality of data

Data is shaped to a large extent by the context in which it is created. In science, the sense of data is “tightly dependent on a precise understanding of how, where, and when they were created” (Edwards, 2013). “Knowledge production is never separate from the knowledge producer”, and nor can data be obtained without direct or indirect human influence, so human thinking always marks the produced data. Direct sensory input is thus combined with the mental contents of the perceiver, while indirect perception using instruments is affected by the views of the instruments’ creators. Bachelard sums this up in writing that “when we contemplate reality, what we think we know very well casts its shadow over what we ought to know” (Bachelard, 2002, p. 24). Therefore, there is a need to keep in mind the “situated, material conditions of knowledge production” (Gitelman, 2013, p. 4), which cause the resulting data to be “framed and framing” (ibid., p. 5).

Apart from its creators, data is significantly framed by the environment in which it is created. For instance, Magee draws attention to this influence:

“Knowledge systems are all too frequently characterised in essentialist terms - as though, as the etymology of ‘data’ would suggest, they are merely the housing of neutral empirical givens. […] on the contrary, that systems always carry with them the assumptions of cultures that design and use them - cultures that are, in the very broadest sense, responsible for them.” (Magee, 2011, p. 15)

The environment may determine the way data is gathered, for example by standardising different methods, which can furthermore evolve over time. For example, reclassifications within systems of categories may happen over the years, which significantly worsens the comparability of data in time series (Diakopoulos, 2013).

The influence of context can neither be eliminated nor reproduced. Even though data is of a discrete nature, so that “each datum is individual, separate and separable, while still alike in kind to others in its set” (Gitelman, 2013, p. 8), and thus data may be partially decontextualised, efforts to remove contextual influences completely are bound to fail.

Some expect that data may be purged of contextual distortions and subjectivity if a large volume of data containing records of many variations of the same phenomenon is available. The promise of big data rests on the conjecture that through combination and aggregation data may be neutralised and its individual tint dampened, so that it draws nearer to objective reality. The fallacy of this approach lies in neglecting the arbitrariness of the chosen aggregation method and failing to take the incompleteness of data into account. The choice of aggregation is a subjective act, in which arbitrary conceptualisations (e.g., categories) are selected, so that the resulting aggregated data may end up further away from the described reality. No matter how large data is, it always remains an incomplete sample, the selection of which may omit what is important. The extensiveness of data does not guarantee its representativeness, because data is subject to limitations and prejudices independently of its size. For the same reason, quantity cannot substitute for the quality and consistency of data, which may diverge due to varying contexts. In data samples, absolute values are never exact and relative values carry the skew of sample selection and aggregation.
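The arbitrariness of aggregation can be shown with a small sketch (the subgroup counts below are chosen purely for illustration): the very same records support opposite conclusions depending on whether subgroups are kept apart or pooled, a reversal known as Simpson’s paradox.

```python
# Illustrative (invented) success counts: (successes, trials) per subgroup.
data = {
    "treatment": {"mild": (81, 87), "severe": (192, 263)},
    "control":   {"mild": (234, 270), "severe": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Per-subgroup comparison: treatment wins in both subgroups...
for group in ("mild", "severe"):
    assert rate(*data["treatment"][group]) > rate(*data["control"][group])

# ...yet pooling the subgroups reverses the conclusion.
pool = {k: tuple(map(sum, zip(*v.values()))) for k, v in data.items()}
assert rate(*pool["treatment"]) < rate(*pool["control"])
```

Neither the disaggregated nor the pooled view is the ‘neutral’ one; choosing between them is exactly the kind of subjective act of conceptualisation the paragraph above describes.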

Moreover, attempts to decontextualise data may be harmful to the context of its use. Boyd and Crawford remark that when “taken out of context, data lose meaning and value” (Boyd, 2012, p. 670). Data is a medium that requires active participation and understanding from its users; knowing is not a passive process (Von Glasersfeld, 1984, p. 9). Even though the assumption of an objectively perceivable reality constitutes a common object of data, which forms the basis of shared understanding, the universal comprehension of data remains a fiction, because the interpretation of data depends not only on its object, but on its context as well (Markham, 2013).

Mechanical objectivity tries to reduce contextual influences to a clearly delimited protocol. The production of data is thus conducted according to strict rules. In this way, mechanical objectivity is defined as the ability to follow rules and a fixed protocol (Porter, 1995, p. 4). The function of the protocol is to set up a controlled context and minimise unwanted influences, which may be reflected in the created data. A transparent and documented protocol of data preparation, containing detailed information about data provenance, contributes to the trustworthiness of data. Bird adds that “what makes something an item of observational knowledge is the reliability and uncontentious nature of the mechanism which produces it” (Bird, 2010, p. 10). This way, users of data may evaluate the “adequacy of the experimental conditions under which data have been produced” and determine what level of reliability can be expected from the data and what its evidential value is.

In a similar manner, the restrictions of the protocol aim to make data reproduction feasible. However, Leonelli states that, for the most part, data is “idiosyncratic to particular experimental contexts, and typically cannot occur outside of those contexts” (Leonelli, 2009). Data is unavoidably embedded in an unrepeatable context, which makes it impossible to reproduce in full. At most, one can attempt to reproduce the methods used to create the data, which may lead to different, yet partially compatible data.

Mediation of data

The immediacy ascribed to data comes from a desire for direct knowledge of reality. ‘Raw’ data is attributed the quality of primariness. It is thought to be data coming ‘directly’ from its source, which is reality itself. This alleged quality relates to the seemingly natural process of the mechanical production of data. One may succumb to the impression that the value of data depends on the straightforwardness of its derivation from reality. For example, data from automated sensors might be perceived as substantially more trustworthy than calculations of impact factor based on indirect inputs that are considerably distant from reality. Thanks to the implied immediacy of data, it is often understood, in accordance with its etymology, as having an axiomatic nature, which puts it beyond dispute (Halavais, 2013). “At first glance data are apparently before the fact: they are the starting point for what we know, who we are, and how we communicate” (Gitelman, 2013, p. 2). “Data is beyond argument,” writes Markham (2013), because data is understood as that which precedes argument. In this perspective, data precedes interpretation and analysis, and is thereby held to be free of subjective influence. Nevertheless, as Boyd and Crawford suggest, “claims to objectivity are necessarily made by subjects and are based on subjective observations and choices” (Boyd, 2012, p. 667).

The assumption of the pre-analytical nature of data was subjected to criticism that problematised the concept of ‘raw data’ and contested the validity of this assumption. Dewey criticised the presumption as early as 1929:

“[…] all of the rivalries and connected problems grow from a single root. They spring from the assumption that the true and valid object of knowledge is that which has being prior to and independent of the operations of knowing. They spring from the doctrine that knowledge is a grasp or beholding of reality without anything being done to modify its antecedent state - the doctrine which is the source of the separation of knowledge from practical activity.” (Dewey, 1929, p. 196)

This view was deemed generally adopted by 1985, when Keller wrote that “it is by now a near truism that there is no such thing as raw data; all data presuppose interpretation” (Keller, 1985, p. 130). Bowker adds the remark that “raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care” (Bowker, 2005, p. 184).

The key point of such criticism is the recognition that interpretation is already present in observation and data production themselves. Nunberg contends that the properties we ascribe to information, and which can be ascribed to data as well, such as its “metaphysical haecceity or ‘thereness,’ its transferability, its quantised and extended substance, its interpretive transparency or autonomy — are simply the reifications of the various principles of interpretation that we bring to bear in reading these forms” (Nunberg, 1996). Data and “numbers are interpretive, for they embody theoretical assumptions about what should be counted, how one should understand material reality, and how quantification contributes to systematic knowledge about the world” (Poovey, 1998, p. xii). Therefore, data cannot be accepted as “simple observations about particulars, which were immune from interest and theoretical conjectures of any kind” (ibid., p. xxiv). For various reasons, data is always mediated, and so, as Bachelard writes:

“Knowledge of reality is a light that always casts a shadow in some nook or cranny. It is never immediate, never complete.” (Bachelard, 2002, p. 24)

Data is inevitably produced via media. Examples of such media are instruments, such as the microscope, or models that may synthesise data. It is because of media that science can make claims about objects and properties that escape direct observation, such as long-extinct galaxies observable via telescopes (Bogen, 1988). A general view then assumes that “scientific theories predict and explain facts about ‘observables’: objects and properties which can be perceived by the senses, sometimes augmented by instruments” (ibid., p. 303).

In the course of its production, data necessarily passes through models of reality. A model is a medium of cognition. “Rational views of the universe are idealised models that only approximate reality” (Kent, 2000, p. 220); however, “we can share a common enough view of it for most of our working purposes, so that reality does appear to be objective and stable” (ibid., p. 228). Yet there are some who assert that “‘sound science’ must mean ‘incontrovertible proof by observational data,’ whereas models were inherently untrustworthy” (Edwards, 2013, p. xviii). “Let the data speak for themselves” (Keller, 1985, p. 129) is demanded by those who call for raw, immediate data. Edwards calls the assumption that immediacy can be achieved by “waiting for (model-independent) data” misguided (Edwards, 2010, p. xiii). As he writes further, “no collection of signals or observations — even from satellites, which can ‘see’ the whole planet — becomes global in time and space without first passing through a series of data models” (ibid., p. xiii). The dependence of data on models can be seen in the example of weather forecasts and climate change predictions, for which “only about ten percent of the data used by global weather prediction models originate in actual instrument readings. The remaining ninety percent are synthesised by another computer model” (ibid., p. 21). In the same way as models or theories, data is only an imperfect approximation of reality. Nevertheless, just as Box (1987, p. 424) claims that “all models are wrong, but some are useful”, an analogous approach may be applied to data.

Data manipulation

The mediation of data allows for manipulation and purposeful reconstruction. Data may be distorted either deliberately or unintentionally. In some cases, the influence of context can leave barely noticeable marks on data. For example, Fanelli argues that “scientific results can be distorted in several ways, which can often be very subtle and/or elude researchers’ conscious control” (Fanelli, 2009). Nonetheless, even though science is generally associated with “fairness and impartiality” (Porter, 1995, p. 4), a significant share of data manipulation is deliberate. Babbage warned about data manipulation in science as early as 1830:

“Of cooking. This is an art of various forms, the object of which is to give to ordinary observations the appearance and character of those of the highest degree of accuracy. One of its numerous processes is to make multitudes of observations, and out of these to select those only which agree, or very nearly agree. If a hundred observations are made, the cook must be very unlucky if he cannot pick out fifteen or twenty which will do for serving up.” (Babbage, 1830, p. 178)

Data manipulation in science is relatively prevalent. An anonymous survey revealed that roughly 2 % of scientists admit to having manipulated data. About a third of the survey’s participants conceded to being involved in dubious scientific practices. However, it should be kept in mind that these estimates are likely conservative, as this is a sensitive topic (Fanelli, 2009). Moreover, besides deliberate manipulation, data can be distorted by laziness or malpractice.

Intentional manipulation of data includes the disregard of unfavourable data, data answering suggestive questions, excessive generalisation, skewed (e.g., non-random) samples, misunderstanding of error margins, false causality, or finding statistically insignificant correlations in big data (Misuse of statistics, 2013). Data quality may also be deteriorated and made unclear by reducing data to aggregations (Diakopoulos, 2013).
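Babbage’s ‘cooking’ is easy to reproduce in a few lines (the measurements and the agreement threshold here are synthetic, chosen for illustration): keeping only the observations that ‘very nearly agree’ manufactures an appearance of precision that the full record does not support.

```python
import random
import statistics

random.seed(7)

# Synthetic noisy measurements of a quantity whose true value is 10.0.
observations = [random.gauss(10.0, 2.0) for _ in range(100)]

# 'Cooking' in Babbage's sense: serve up only the observations
# that nearly agree with the centre of the record.
centre = statistics.median(observations)
cooked = [x for x in observations if abs(x - centre) < 0.5]

honest_spread = statistics.stdev(observations)
cooked_spread = statistics.stdev(cooked)

# The cooked sample looks far more 'accurate' than the full record.
assert cooked_spread < honest_spread
```

The cooked subset reports a much tighter spread than the honest one, yet it carries strictly less information about the measured quantity; the apparent accuracy is an artifact of the selection.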

Alternative epistemologies of data

Apart from metaphysical realism, the epistemology of data can be considered from alternative viewpoints that do not suffer from the afore-mentioned shortcomings. The essentially “positivist picture of the structure of scientific theories is now widely rejected” (Bogen, 1988, p. 304) and its place was taken by approaches that fall within postmodernism, yet frequently draw on older thinking, which in some cases dates back to the rhetorical origins of philosophy. The following sections introduce the approaches of constructivist epistemology and rhetoric, which are deemed to be mutually compatible.

Constructivist epistemology of data

Constructivist epistemology is based on the presumption that all knowledge is a human construction. The constructivist school of thought departs from metaphysical realism in not requiring a concept of objective reality. However, treating constructivism as a simple rejection of the concept of objective reality would be overly simplistic. Constructivism reverses the relation between data and reality, claiming instead that data constitutes the reality it describes, so that “data are not found, they are made” (Halavais, 2013).

Some of the central theses of constructivist epistemology may already be clearly seen in the 18th-century works of Giambattista Vico. The treatises of this intellectual predecessor of constructivism claim that “science (scientia) is the knowledge (cognitio) of origins, of the ways and the manner how things are made” and that therefore “we can only know what we ourselves construct” (Von Glasersfeld, 1984). Such recognition is what distinguishes the scientific from the pre-scientific mind, because “whereas the pre-scientific mind possesses reality, the scientific mind constructs and reconstructs it, and in doing so is itself constantly reformed” (Bachelard, 2002, p. 9).

The foundations of constructivist epistemology were likely reinforced by the turn of philosophy towards language. A constructivist reading may be applied to the works of the anthropologist and linguist Edward Sapir, who argues that the ‘world’ is constructed by the language of a community:

“The fact of the matter is that the ‘real world’ is to a large extent unconsciously built up on the language habits of the group. No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached.” (Sapir, 1990, p. 221)

Following Sapir’s reasoning, constructivism has no need for a homomorphism between data and reality, in which data corresponds to the experience of reality. Instead, data and knowledge are what fits the reality and functions consistently within it. To illustrate this relationship, Von Glasersfeld offers the simile of a key that fits a lock: data matches reality in the same way (Von Glasersfeld, 1984, p. 3).

Constructivist claims are prone to attract simplified readings. For example, “the claim that science is socially constructed has too often been read as an attack on its validity or truth” (Porter, 1995, p. 11). In this regard, constructivism offers to replace the criterion of truth with the concept of inner consistency and the rule of no contradiction within a system of knowledge (Von Glasersfeld, 1984, p. 9). Given such conditions, research can be seen as a generative process that produces data and eliminates non-functional knowledge. An example of knowledge revealed as non-functional is what results from ‘apophenia’; a phenomenon of “seeing patterns where none actually exist” (Boyd, 2012, p. 668). Apophenia can affect data analysts who succumb to the impression that they have discovered a causal chain in data, whereas it is merely an idiosyncratic construction of the observer.
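Apophenia can be demonstrated on purely synthetic data (the series below are random by construction, so no relationship exists): among enough pairs of independent random series, some pair will exhibit a striking correlation.

```python
import random
import statistics
from itertools import combinations

random.seed(0)

# 50 entirely independent random series: no real relationship exists.
series = [[random.random() for _ in range(20)] for _ in range(50)]

def corr(xs, ys):
    """Pearson correlation coefficient (plain stdlib)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Searching enough pairs always turns up a striking 'pattern'.
best = max(abs(corr(a, b)) for a, b in combinations(series, 2))
assert best > 0.5  # impressive-looking, yet entirely spurious
```

The ‘discovered’ correlation is an artifact of searching 1,225 pairs, not a feature of the world; exactly the idiosyncratic construction of the observer described above.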

The constructivist approach is supported by the fact that the production of data is always to some degree an act of classification. The malleability of data allows it to be cast using chosen data structures and conceptualisations. Once a classification is established in data, it becomes part of the data and is difficult to separate from it. The arbitrariness of data structures is what Kent spends a lot of thought on:

“Data structures are artificial formalisms. They differ from information in the same sense that grammars don’t describe the language we really use, and formal logical systems don’t describe the way we think.” (Kent, 2000, p. xix)

In a similar fashion to how the language of a community is used to construct a shared world, data structures form a basis for a shared understanding of data. Data structures are created with specific purposes in mind. “Like different kinds of maps, each kind of structure has its strengths and weaknesses, serving different purposes, and appealing to different people in different situations” (ibid.).
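A hypothetical example (mine, not Kent’s) can make the map simile concrete: the same facts cast into two different structures, each of which makes some questions easy and others awkward:

```python
# The same facts about one employee, cast into two different structures.

# Relational-style: flat rows, convenient for aggregation across many people.
rows = [
    ("Alice", "Sales", 2019),
    ("Alice", "Marketing", 2021),
]

# Document-style: a nested record, convenient for one person's history.
doc = {
    "name": "Alice",
    "positions": [
        {"department": "Sales", "since": 2019},
        {"department": "Marketing", "since": 2021},
    ],
}

# "Which departments has Alice worked in?" is direct in the document form:
departments = [p["department"] for p in doc["positions"]]

# ...while the rows force us to reconstruct the grouping ourselves:
departments_from_rows = [dept for name, dept, _ in rows if name == "Alice"]

print(departments)            # ['Sales', 'Marketing']
print(departments_from_rows)  # ['Sales', 'Marketing']
```

Neither structure is the ‘true’ shape of the facts; each is a map drawn for a purpose, and whichever is chosen becomes part of the data itself.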

Although various aspects of constructivism are invoked by the critics of metaphysical realism, discussed mainly in the section on mediation, its implications for epistemology do not dominate many scientific domains. In psychology, Piaget wrote already in 1980 that “fifty years of experience have taught us that knowledge does not result from a mere recording of observations without a structuring activity on the part of the subject” (Piaget, 1980, p. 377), and the principles of constructivist epistemology are widely adopted in the humanities and social sciences; in the natural sciences, however, these principles are largely ignored (Hennig, 2002) and substituted with the remnants of metaphysical realism.

Rhetoric of data

Rhetoric provides an interpretation of data that is complementary to the approach of constructivist epistemology. The compatibility can be seen most clearly in the ontological approach to the epistemic understanding of rhetoric, as distinguished by Brummett (1979). The ontological explanation of rhetorical epistemology purports that “discourse does not merely discover truth or make it effective. Discourse creates realities rather than truths about realities” (ibid.). The function of rhetoric is not limited to persuasion and justification; it covers the production of assertions as well. Therefore Scott, one of the first to link rhetoric to epistemology, writes that “rhetoric may be viewed not as a matter of giving effectiveness to truth but of creating truth” (Scott, 1967, p. 13). The lens of constructivist epistemology seems to be present when Scott remarks that “‘truth,’ of course, can be taken in several senses. If one takes it as prior and immutable, then one has no use for rhetoric except to address inferiors” (ibid., p. 9).

As the use of ‘data’ in Euclid’s treatises (Euclid, 1834) indicates, the concept was used in a rhetorical sense already in Euclid’s time. According to the etymology of data, it is “‘that which is given prior to argument,’ given in order to provide a rhetorical basis” (Gitelman, 2013, p. 7). The production of data in science is set in a discourse of rhetorical argumentation. Data is constructed as one of the products of scientific discourse, primarily as a vehicle of persuasion. The selection and processing of data can be tailored to support the sought purpose in an argument. If the validity of claims is attacked, their authors are required to justify them. “If challenged it is up to us to produce whatever data, facts, or other backing we consider to be relevant and sufficient to make good the initial claim” (Toulmin, 2003, p. 13). The production of data may thus be considered a specific speech act, useful in argumentation for justifying previous or forthcoming claims.

Rhetorical argument offers an alternative to analytical logic. Similarly to logic, in rhetoric “given certain data, certain conclusions may be proven or argued to follow” (Gitelman, 2013, p. 18). Data does not belong to the framework of analytical logic, though, because it cannot be assigned a truth value. “When a fact is proven false, it ceases to be a fact. False data is data nonetheless.” (ibid.). The use of data is thus rhetorical. Rosenberg summarises the distinguishing features of data by stating that “facts are ontological, evidence is epistemological, data is rhetorical” (ibid.).

Rhetoric has a bad reputation in science. The dangers of rhetoric were pointed out by Thomas Sprat in 1667 in his treatise on the history of the British Royal Society: “And to accomplish this, they have indeavor’d to separate the knowledge of Nature, from the colours of Rhetorick, the devices of Fancy, or the delightful deceit of Fables” (Sprat, 1667, p. 62). Historically, rhetoric is associated with deliberately manipulative uses of data, some of which are described in the previous section on data manipulation. Data manipulation, used for example to obtain grant funding, can be regarded as a kind of rhetorical argumentation. Examples of using data for rhetorical purposes can be found in propaganda infographics,2 in debates on the existence of global warming, or in pre-election surveys, the creators of which are frequently accused of intentional manipulation.

In the case of data, it is its ability to aggregate that gives it its “potential power” and “rhetorical weight” (Gitelman, 2013, p. 8). Aggregation may contribute to an impression of false objectivity. An example of the rhetorical production of data is the reconceptualisation of the newspaper as a database carried out by Angelina Grimké Weld, her husband Theodore and her sister Sarah (Gitelman, 2013, p. 90). In this case, data about slavery was compiled from newspapers, such as from ads for runaway slaves. The collected data was reframed as testimony of slaveholders’ brutality, as it turned their own words against them.
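A toy example of my own may illustrate how aggregation lends a false air of objectivity: two very different datasets can share the same summary statistic, so reporting only the aggregate conceals what the data actually looks like:

```python
from statistics import mean, stdev

# Two hypothetical series of measurements with identical means:
# one steady, one dominated by a single extreme value.
steady = [50, 49, 51, 50, 50, 50, 50, 50, 49, 51]
erratic = [10, 10, 10, 10, 10, 10, 10, 10, 10, 410]

# Both aggregates read "50" -- the same apparently objective number.
print(mean(steady), mean(erratic))

# Only the dispersion betrays how different the two datasets are.
print(stdev(steady), stdev(erratic))
```

The choice of which aggregate to present, and which to omit, is precisely the kind of rhetorical decision this section describes.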

Modern rhetoric has a much broader scope than manipulation or persuasion. The ontological approach mentioned by Brummett positions rhetoric as a dimension present in all epistemic activities (Brummett, 1979). This rhetorical dimension is also present in scientific data. Even though proclamations that “data is apolitical” (Peled, 2013) appear, data is never impartial, and it is necessary to take into account that it may carry a hidden rhetorical agenda. Even though practical data analysis mostly lacks a deliberate rhetorical approach (Schron, 2013), the growing amount of research in this area3 suggests that there is interest in disrupting the established understanding of data.

Conclusion

Epistemology of data deserves attention because of the fundamental status data has in contemporary science. Due to a dramatic decrease in the costs of producing large amounts of data of sufficient quality, science has adopted data as its central resource. If such power is bestowed on data, it is important not to treat data as an unquestionable concept exempt from scrutiny. Regardless of this need, the predominant epistemology of data in current science is based on the alleged pre-analytical nature of data. The popular pyramid of data - information - knowledge - wisdom puts data in the first place, as the basis for the following levels of knowing. In such a position, data is preceded solely by reality itself, the direct representation of which data purports to be.

Many critics have contributed to unveiling the weaknesses of this concept, and it is their works on which this text is built. Many of the publications cited here bring attention to the shortcomings of the positivist heritage. A growing number of authors cast doubt upon the established role of data in science. Several researchers attempt to reformulate the epistemology of data-intensive science, while the first projects focused on this topic appear.4 This text, too, has tried to oppose the unproblematic view of data in present science. In accordance with Kent, the text:

“[…] projects a philosophy that life and reality are at bottom amorphous, disordered, contradictory, inconsistent, non-rational, and non-objective. Science and much of western philosophy have in the past presented us with the illusion that things are otherwise.” (Kent, 2000, p. 220)

Critical reflection on the dominant epistemology of data in western philosophy has found many holes in the uncritical, positivist approach to data. In the light of these findings, the interpretation of data as an unquestionable representation of an objectively perceivable reality does not stand up to scrutiny. The alternative interpretations offered by constructivist epistemology or rhetoric appear to be more productive frames for thinking about data. Whichever path is chosen, science cannot treat data as an unproblematic input to a mathematical task; instead, it needs to subject data to questions.

References

  • ANDERSON, Chris. The end of theory: the data deluge makes the scientific method obsolete. Wired [online]. 2008-06-23 [cit. 2013-12-23]. Available from WWW: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory
  • BABBAGE, Charles. Reflections on the decline of science in England and on some of its causes. London: B. Fellowes; J. Booth, 1830. Available from WWW: https://archive.org/details/reflectionsonde00mollgoog
  • BACHELARD, Gaston. The formation of the scientific mind: a contribution to a psychoanalysis of objective knowledge. Translated by Mary MCALLESTER JONES. Manchester: Clinamen Press, 2002. ISBN 1-903083-20-6.
  • BIRD, Alexander. The epistemology of science: a bird’s-eye view. Synthese. 2010, vol. 175, no. 1 appendix, pp. 5–16. Available from WWW: http://eis.bris.ac.uk/~plajb/teaching/The_Epistemology_of_Science.pdf. DOI 10.1007/s11229-010-9740-4.
  • BOELLSTORFF, Tom. Making big data, in theory. First Monday [online]. 2013 [cit. 2013-12-28], vol. 18, no. 10. Available from WWW: http://uncommonculture.org/ojs/index.php/fm/article/view/4869/3750
  • BOGEN, James; WOODWARD, James. Saving the phenomena. The Philosophical Review. 1988, vol. 97, no. 3, pp. 303–352. Also available from WWW: http://www.pitt.edu/~rtjbog/bogen/saving.pdf
  • BOWKER, Geoffrey C. Memory practices in the sciences. Cambridge (MA): MIT Press, 2005, 280 p. Inside technology. ISBN 978-0-262-52489-6.
  • BOX, George E. P.; DRAPER, Norman R. Empirical model-building and response surfaces. Hoboken (NJ): Wiley, 1987. Wiley series in probability and statistics, vol. 157. ISBN 0-471-81033-9.
  • BOYD, Danah; CRAWFORD, Kate. Critical questions for big data. Information, Communication & Society. 2012, vol. 15, no. 5, pp. 662–679. Also available from WWW: http://dx.doi.org/10.1080/1369118X.2012.678878. DOI 10.1080/1369118X.2012.678878.
  • BRUMMETT, Barry. Three meanings of epistemic rhetoric. Speech Communication Association Convention: Seminar on Discursive Reality. San Antonio (TX): 1979. Also available from WWW: http://ap2008.wdfiles.com/local--files/selected-research-articles/Brummett1979.doc
  • CRAWFORD, Kate. The hidden biases in big data. Harvard Business Review Blog Network [online]. April 1, 2013 [cit. 2014-01-11]. Available from WWW: http://blogs.hbr.org/2013/04/the-hidden-biases-in-big-data/
  • DEWEY, John. The quest for certainty: a study of the relation of knowledge and action. New York: Minton, Balch & Company, 1929. Gifford lectures. Also available from WWW: https://archive.org/details/questforcertaint032529mbp
  • DIAKOPOULOS, Nick. The rhetoric of data [online]. July 25, 2013 [cit. 2013-12-22]. Available from WWW: http://www.nickdiakopoulos.com/2013/07/25/the-rhetoric-of-data/
  • EDWARDS, Paul N. A vast machine: computer models, climate data, and the politics of global warming. Cambridge (MA): MIT Press, 2010, 552 p. ISBN 978-0-262-01392-5.
  • EDWARDS, Paul N. [et al.] (eds.). Knowledge infrastructures: intellectual frameworks and research challenges [online]. Report of a workshop sponsored by the National Science Foundation and the Sloan Foundation, University of Michigan School of Information, 25–28 May 2012. May 2013 [cit. 2013-12-22]. Available from WWW: http://hdl.handle.net/2027.42/97552
  • EUCLID. Data. In SIMSON, Robert (ed.). The elements of Euclid. Philadelphia: Desilver, Thomas & co., 1834.
  • FANELLI, Daniele. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. Public Library of Science ONE [online]. May 29, 2009 [cit. 2014-01-03], vol. 4, no. 5. Available from WWW: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738. DOI 10.1371/journal.pone.0005738.
  • GITELMAN, Lisa (ed.). ‘Raw data’ is an oxymoron. Cambridge (MA): MIT Press, 2013. ISBN 978-0-262-51828-4.
  • GROSS, Alan G.; HARMON, Joseph E.; REIDY, Michael. Communicating science: the scientific article from the 17th century to the present. New York (NY): Oxford University Press, 2002. ISBN 0-19-513454-0.
  • HALAVAIS, Alexander. Home made big data? Challenges and opportunities for participatory social research. First Monday [online]. 2013 [cit. 2013-12-28], vol. 18, no. 10. Available from WWW: http://uncommonculture.org/ojs/index.php/fm/article/view/4876/3754
  • HALEVY, Alon; NORVIG, Peter; PEREIRA, Fernando. The unreasonable effectiveness of data. Intelligent Systems. 2009, vol. 24, no. 2, pp. 8–12. Available from WWW: http://static.googleusercontent.com/media/research.google.com/en/pubs/archive/35179.pdf. ISSN 1541-1672. DOI 10.1109/MIS.2009.36.
  • HENNIG, Christian. Confronting data analysis with constructivist philosophy. In Classification, clustering, and data analysis: recent advances and applications, part II. Berlin; Heidelberg: Springer, 2002, pp. 235–243. ISBN 978-3-642-56181-8. DOI 10.1007/978-3-642-56181-8_26.
  • KEIL, Petr. Data-driven science is a failure of imagination [online]. January 2, 2013 [cit. 2013-12-22]. Available from WWW: http://www.petrkeil.com/?p=302
  • KELLER, Evelyn Fox. Reflections on gender and science. New Haven (MA): Yale University Press, 1985. ISBN 0-300-06595-7.
  • KENT, William. Data and reality. Bloomington (IN): 1st Books Library, 2000. ISBN 1-58500-970-9.
  • LEONELLI, Sabina. On the locality of data and claims about phenomena. Philosophy of Science. 2009, vol. 76, no. 5, pp. 737–749. Also available from WWW: https://ore.exeter.ac.uk/repository/handle/10871/9429. ISSN 0031-8248.
  • LEONELLI, Sabina. Data interpretation in the digital age. Perspectives on Science [in print]. 2014. Also available from WWW: https://ore.exeter.ac.uk/repository/handle/10036/4484. ISSN 1063-6145.
  • MAGEE, Liam. Frameworks for knowledge representation. In COPE, Bill; KALANTZIS, Mary; MAGEE, Liam (eds.). Towards a semantic web: connecting knowledge in academic research. Oxford: Chandos, 2011. ISBN 978-1-84334-601-2.
  • MARKHAM, Annette N. Undermining ‘data’: a critical examination of a core term in scientific inquiry. First Monday [online]. 2013 [cit. 2013-12-22], vol. 18, no. 10. Available from WWW: http://uncommonculture.org/ojs/index.php/fm/article/view/4868/3749. DOI 10.5210/fm.v18i10.4868.
  • MAYER-SCHÖNBERGER, Viktor; CUKIER, Kenneth. Big data: a revolution that will transform how we live, work, and think. Boston (MA): Houghton Mifflin Harcourt, 2013. ISBN 978-0-544-00269-2.
  • Misuse of statistics. Wikipedia [online]. Last modified December 19, 2013 [cit. 2014-01-12]. Available from WWW: http://en.wikipedia.org/wiki/Misuse_of_statistics
  • NELSON, Michael L. Data-driven science: a new paradigm? EDUCAUSE Review [online]. July/August 2009 [cit. 2013-12-22], vol. 44, no. 4, pp. 6–7. Available from WWW: http://www.educause.edu/ero/article/data-driven-science-new-paradigm
  • NIELSEN, Michael. Reinventing discovery: the new era of networked science. New Jersey: Princeton University Press, 2011, 273 p. ISBN 978-0-691-14890-8.
  • NUNBERG, Geoffrey. Farewell to the Information age. In NUNBERG, Geoffrey (ed.). The future of the book. Berkeley (CA): University of California Press, 1996. ISBN 0-520-20451-4.
  • PELED, Alon. The politics of big data: a three-level analysis [online]. 2013 [cit. 2014-01-06]. Available from WWW: http://ssrn.com/abstract=2315891
  • PIAGET, Jean. The psychogenesis of knowledge and its epistemological significance. In PIATTELLI-PALMARINI, Massimo (ed.). Language and learning: the debate between Jean Piaget and Noam Chomsky. Cambridge (MA): Harvard University Press, 1980. ISBN 0-674-50940-4.
  • POOVEY, Mary. A history of the modern fact: problems of knowledge in the sciences of wealth and society. 1st ed. Chicago: University of Chicago Press, 1998, 436 p. ISBN 0-226-67526-2.
  • PORTER, Theodore M. Trust in numbers: the pursuit of objectivity in science and public life. Princeton (NJ): Princeton University Press, 1995. ISBN 0-691-03776-0.
  • SAPIR, Edward. The collected works of Edward Sapir. VIII, Takelma texts and grammar. Berlin; New York: Mouton de Gruyter, 1990. Also available from WWW: https://archive.org/details/collectedworksof01sapi
  • SCHRON, Max. Data’s missing ingredient? Rhetoric [online]. April 11, 2013 [cit. 2014-01-09]. Available from WWW: http://strata.oreilly.com/2013/04/datas-missing-ingredient-rhetoric.html
  • SCOTT, Robert L. On viewing rhetoric as epistemic. Central States Speech Journal. 1967, vol. 18, no. 1, pp. 9–17. DOI 10.1080/10510976709362856.
  • SPRAT, Thomas. The history of the Royal-Society of London, for the improving of natural knowledge. [T.R.]: London, 1667, 438 p. Also available from WWW: https://archive.org/details/historyroyalsoc00martgoog
  • TOULMIN, Stephen E. The uses of argument. Cambridge: Cambridge University Press, 2003. ISBN 978-0-511-07117-1.
  • VON GLASERSFELD, Ernst. An introduction to radical constructivism. In WATZLAWICK, Paul (ed.). The invented reality. New York: Norton, 1984, pp. 17–40.

Footnotes

  1. For example, this problem concerns volatile groups, such as the group of the 1 % of the richest people, for which, due to its volatility, one cannot compare different time slices of data describing the group.

  2. As Gitelman mentions, “data visualisation amplifies the rhetorical function of data” (Gitelman, 2013, p. 12).

  3. Such as the anthology ‘Raw data’ is an oxymoron from 2013 (Gitelman, 2013).

  4. For example, http://www.datastudies.eu/.
