Dr Darrell P. Rowbottom
Associate Professor, Department of Philosophy, Lingnan University
Associate Editor, Australasian Journal of Philosophy
Prior to my appointment at Lingnan, I was a British Academy Postdoctoral Fellow at the University of Oxford. I have also worked at several other universities in the UK, including Aberdeen, Bristol, Durham, and Edinburgh.
I started out studying physics, but undertook graduate studies in philosophy, and in history and philosophy of science. The shift of direction is easily explained: I was fascinated by quantum theory, and by what my lecturers (wrongly!) thought it tells us about the world.
I am interested in a wide variety of topics in philosophy of science, epistemology, metaphysics, and philosophy of probability. I also dabble in other areas, including philosophy of education and philosophy of mind, from time to time.
My main projects at the moment are: (1) to complete a monograph in defence of a new form of instrumentalism about science; (2) to develop my account of group inquiry, especially concerning how to strike the balance between different activities/functions; (3) to make related contributions to the philosophy of probability and game theory, particularly on how group probabilities and identification should be understood; and (4) to complete a textbook on the philosophy of probability.
There is more, of course, but that's enough to be getting on with!
This paper argues that talk of ‘the aim of science’ should be avoided in the philosophy of science, with special reference to the way that van Fraassen sets up the difference between scientific realism and constructive empiricism. It also argues that talking instead of ‘what counts as success in science as such’ is unsatisfactory. The paper concludes by showing what this talk may profitably be replaced with, namely specific claims concerning science that fall into the following categories: descriptive, evaluative, normative, and definitional. There are two key advantages to this proposal. First, realism and its competitors may be understood to come in highly nuanced variants. Second, scientific realism and its competitors may be understood as something other than ‘all or nothing’ theses about science. More particularly, one may accept that there are general claims concerning science in some of the identified categories, but deny that there are such claims in the others.
This paper is a supplement to, and provides a proof of principle of, Kuhn vs. Popper on Criticism and Dogmatism in Science: A Resolution at the Group Level. It illustrates how calculations may be performed in order to determine how the balance between different functions in science—such as imaginative, critical, and dogmatic—should be struck, with respect to confirmation (or corroboration) functions and rules of scientific method.
In this paper, I present some new group level interpretations of probability, and champion one in particular: a consensus-based variant where group degrees of belief are construed as agreed upon betting quotients rather than shared personal degrees of belief. One notable feature of the account is that it allows us to treat consensus between experts on some matter as being on the union of their relevant background information. In the course of the discussion, I also introduce a novel distinction between intersubjective and interobjective interpretations of probability.
I argue that so-called 'background knowledge' in confirmation theory has little, if anything, to do with 'knowledge' in the sense of mainstream epistemology, and that it is better construed as 'background information', which need not be believed, justified, or true.
This paper responds to Achinstein’s criticism of the thesis that the only empirical fact that can affect the truth of an objective evidence claim such as ‘e is evidence for h’ (or ‘e confirms h to degree r’) is the truth of e. It shows that cases involving evidential flaws, which form the basis for Achinstein’s objections to the thesis, can satisfactorily be accounted for by appeal to changes in background information and working assumptions. The paper also argues that the a priori and empirical accounts of evidence are on a par when we consider scientific practice, but that a study of artificial intelligence might serve to differentiate them.
Philosophy of Science (In Press)
This paper shows that Popper’s measure of corroboration is inapplicable if, as Popper also argued, the logical probability of synthetic universal statements is zero relative to any evidence that we might possess. It goes on to show that Popper’s definition of degree of testability, in terms of degree of logical content, suffers from a similar problem.
In this piece, I advocate and motivate a new understanding of thought experiments, which avoids problems with the rival accounts of Brown and Norton.
If you require a copy of the published version of any paper which you can't download from the links below, e.g. because your library is not subscribed to the relevant journal, I will be happy to send you one. Just drop me an e-mail.
This paper shows that Bertrand's proposed 'solutions' to his own question, which generates his chord paradox, are inapplicable. It uses a simple analogy with cake cutting. The problem is that none of Bertrand's solutions considers all possible cuts. This is no solace for the defenders of the principle of indifference, however, because it emerges that the paradox is harder to solve than previously anticipated.
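The disagreement at the heart of the paradox is easy to exhibit numerically. The following Monte Carlo sketch (not from the paper; a standard illustration, assuming a unit circle) estimates, for each of Bertrand's three classical randomization methods, the probability that a 'random chord' is longer than the side of the inscribed equilateral triangle:

```python
import random
import math

def chord_len_endpoints():
    # Method 1: join two uniformly random points on the circumference.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_len_radial():
    # Method 2: pick a uniformly random point on a radius as the chord's midpoint.
    d = random.uniform(0, 1)          # distance of midpoint from the centre
    return 2 * math.sqrt(1 - d * d)

def chord_len_midpoint():
    # Method 3: pick a uniformly random midpoint within the disk (by rejection).
    x = y = 1.0
    while x * x + y * y > 1:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return 2 * math.sqrt(1 - (x * x + y * y))

side = math.sqrt(3)  # side of the equilateral triangle inscribed in the unit circle
N = 100_000
for name, f in [("endpoints", chord_len_endpoints),
                ("radial", chord_len_radial),
                ("midpoint", chord_len_midpoint)]:
    p = sum(f() > side for _ in range(N)) / N
    print(f"{name}: P(chord > side) = {p:.3f}")
```

The three estimates cluster near 1/3, 1/2, and 1/4 respectively, which is the paradox: 'choose a chord at random' underdetermines the answer.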
This paper offers a novel ‘changing places’ account of identification in games, where the consequences of role swapping are crucial. First, it illustrates how such an account is consistent with the view, in classical game theory, that only outcomes (and not pathways) are significant. Second, it argues that this account is superior to the ‘pooled resources’ alternative when it comes to dealing with some situations in which many players identify. Third, it shows how such a ‘changing places’ account can be used in games where some of the players identify with one another, but others do not. Finally, it illustrates how the model can handle the notion that identification comes in degrees.
Science in Context 25, 247–262 (2012)
This paper investigates whether there is a discrepancy between the stated and actual aims in biomechanical research, particularly with respect to hypothesis testing. We present an analysis of one hundred papers recently published in The Journal of Experimental Biology and Journal of Biomechanics, and examine the prevalence of papers which (a) have hypothesis testing as a stated aim, (b) contain hypothesis testing claims that appear to be purely presentational (i.e. which seem not to have influenced the actual study), and (c) have exploration as a stated aim. We found that whereas no papers had exploration as a stated aim, 58% of papers had hypothesis testing as a stated aim. At a bare minimum, we strongly suspected that purely presentational hypotheses were present in 31% of the papers in this latter group.
This paper develops a new version of instrumentalism, in light of progress in the realism debate in recent decades, and thereby defends the view that instrumentalism remains a viable philosophical position on science. The key idea is that talk of unobservable objects should be taken literally only when those objects are assigned properties (or described in terms of analogies involving things) with which we are experientially (or otherwise) acquainted. This is derivative from the instrumentalist tradition in so far as the distinction between unobservable and observable is taken to have significance with respect to meaning.
Popper's Critical Rationalism: A Philosophical Investigation (London: Routledge, 2011)
Popper’s Critical Rationalism presents Popper’s views on science, knowledge, and inquiry, and examines the significance and tenability of these in light of recent developments in philosophy of science, philosophy of probability, and epistemology. It develops a fresh and novel philosophical position on science, which employs key insights from Popper while rejecting other elements of his philosophy.
‘Kuhn vs. Popper on Criticism and Dogmatism in Science: A Resolution at the Group Level’, Studies in History and Philosophy of Science 42(1), 117–124 (2011)
Popper repeatedly emphasised the significance of a critical attitude, and a related critical method, for scientists. Kuhn, however, thought that unquestioning adherence to the theories of the day is proper, at least for ‘normal scientists’. In short, the former thought that dominant theories should be attacked, whereas the latter thought that they should be developed and defended (for the vast majority of the time).
Both seem to have missed a trick, however, due to their apparent insistence that each individual scientist should fulfil similar functions (at any given point in time). The trick is to consider science at the group level; and doing so shows how puzzle solving and ‘offensive’ critical activity can simultaneously have a legitimate place in science. This analysis shifts the focus of the debate. The crucial question becomes ‘How should the balance between functions be struck?’
Interface’, Studies in History and Philosophy of Biological and Biomedical Sciences 42(2), 145–154 (2011)
This paper, which is based on recent empirical research at the University of Leeds, the University of Edinburgh, and the University of Bristol, presents two difficulties which arise when condensed matter physicists interact with molecular biologists: (1) the former use models which appear to be too coarse-grained, approximate and/or idealized to serve a useful scientific purpose to the latter; and (2) the latter have a rather narrower view of what counts as an experiment, particularly when it comes to computer simulations, than the former. It argues that these findings are related; that computer simulations are considered to be undeserving of experimental status, by molecular biologists, precisely because of the idealizations and approximations that they involve. The complexity of biological systems is a key factor. The paper concludes by critically examining whether the new research programme of ‘systems biology’ offers a genuine alternative to the modelling strategies used by physicists. It argues that it does not.
This paper compares and contrasts the concept of a stance with that of a paradigm qua disciplinary matrix, in an attempt to illuminate both notions. First, it considers to what extent it is appropriate to draw an analogy between stances (which operate at the level of the individual) and disciplinary matrices (which operate at the level of the community). It suggests that despite first appearances, a disciplinary matrix is not simply a stance writ large. Second, it examines how we might reinterpret disciplinary matrices in terms of stances, and shows how doing so can provide us with a better insight into non-revolutionary science. Finally, it identifies two directions for future research: “Can the rationality of scientific revolutions be understood in terms of the dynamic between stances and paradigms?” and “Do stances help us to understand incommensurability between disciplinary matrices?”
Scientific Realism and the Rationality of Science), Studies in History and Philosophy of Science 42(4), 625–628 (2011)
In this essay review, I argue that there is a problem with the way that scientific realists tend to motivate belief in the metaphysical underpinnings of their position, or the so-called ‘metaphysical thesis of scientific realism’. I use Sankey’s approach to illustrate the epistemological problems, in particular those arising from inappropriate uses of presupposition and appeals to ‘common sense’.
Research’ (with Sarah Aiston), British Educational Research Journal 37(4),
How should educational research be contracted? And is there anything wrong with the way that public funding of educational research is currently administered? We endeavour to answer these questions by appeal to the work of two of the most prominent philosophers of science of the twentieth century, namely Popper and Kuhn. Although their normative views of science are radically different, we show that they would nonetheless agree on a number of key rules concerning the extent to which scientific practice should be influenced ‘from the outside’. We then show that these rules are often broken in the way that research is publicly funded in the UK.
Voluntarism’ (with Otávio Bueno), Synthese 178(1), 7–17 (2011)
We have three goals in this paper. First, we outline an ontology of stance, and explain the role that modes of engagement and styles of reasoning play in the characterization of a stance. Second, we argue that we do enjoy a degree of control over the modes of engagement and styles of reasoning we adopt. Third, we contend that maximizing one’s prospects for change (within the framework of other constraints, e.g., beliefs, one has) also maximizes one’s rationality.
Philosophy 88(2), 209–225 (2010)
Both Popper and van Fraassen have used evolutionary analogies to defend their views on the aim of science, although these are diametrically opposed. By employing Price's equation in an illustrative capacity, this paper considers which view is better supported. It shows that even if our observations and experimental results are reliable, an evolutionary analogy fails to demonstrate why conjecture and refutation should result in: (1) the isolation of true theories; (2) successive generations of theories of increasing truth-likeness; (3) empirically adequate theories; or (4) successive generations of theories of increasing proximity to empirical adequacy. Furthermore, it illustrates that appeals to induction do not appear to help. It concludes that an evolutionary analogy is only sufficient to defend the notion that the aim of science is to isolate a particular class of false theories, namely those that are empirically inadequate.
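The abstract does not reproduce the equation it employs. For reference, Price's equation in its standard form (with $w_i$ the fitness of the $i$th member of a population, $z_i$ its character value, bars denoting population averages, and $\Delta$ change across a generation) is:

```latex
\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}\!\left(w_i\,\Delta z_i\right)
```

The covariance term tracks selection and the expectation term transmission; the paper employs the equation in an illustrative capacity only.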
International Studies in the Philosophy of Science 24(3), 241–255 (2010)
This article challenges Bird’s view that scientific progress should be understood in terms of knowledge, by arguing that unjustified scientific beliefs (and/or changes in belief) may nevertheless be progressive. It also argues that false beliefs may promote progress.
Synthese 177(1), 139–149 (2010)
This paper argues that Duhem’s thesis does not decisively refute a corroboration-based account of scientific methodology (or ‘falsificationism’); rather, auxiliary hypotheses are themselves subject to measurements of corroboration, which can be used to inform practice. It argues that, in this respect, a corroboration-based account is on a par with the popular Bayesian alternative, which has received far more attention in recent years.
Please feel free to contact me for whatever reason - to ask me to review something, to discuss topics of mutual interest (whether or not you're a professional philosopher), to ask about doing a PhD, or what have you - as I am always happy to respond.
You'll find the link on the sidebar.