Category Archives: Uncategorized

Two postdoctoral positions on the nature of representation advertised

Two postdoctoral positions at the University of Leeds are currently being advertised in connection with the ERC-funded project The Nature of Representation. They are fixed-term for four years, to start on 1 September 2013.

One focuses on the philosophy of language. Official details here.

The other focuses on the philosophy of mind. Official details here.

Deadline for applications is 19th December. Contact Robert Williams at j.r.g.williams@leeds.ac.uk for further details.

The nature of representation

I’m now officially running an ERC project on the Nature of Representation. It’ll run for five years (2012-17) and is based in Leeds. Two four-year postdoctoral research fellow positions will be advertised soon, plus two project PhD studentships. It’s going to be a blast!

The project website can be found here.

Dutch books and accuracy domination.

In my paper on accuracy and non-classical logic/semantics, I adapt Jim Joyce’s accuracy-domination theorem to a non-classical setting. His result shows that (under certain assumptions) an improbabilistic belief state is “accuracy dominated” by a probabilistic one (i.e. the latter is closer to the truth, no matter which world is actual). I generalized this to a case where the “worlds” and “truth values” are non-classical, and proved accuracy domination for a notion of “generalized probability”.

Jeff Paris then gave a talk at Leeds, and chatting to him afterwards, it emerged that he’d been interested in something very similar: his 2001 paper “A note on the Dutch book method” shows that belief states that aren’t generalized probabilities are susceptible to a Dutch book—again, the results cover non-classical as well as classical settings, and the characterization of generalized probability he uses pretty much coincides with mine. (The paper is great, by the way—well worth checking out.)

This looked like more than coincidence: what lies behind it? I’ve written up a quick note on the relationship between Dutch books and accuracy. It turns out that Paris’s core result on the way to proving the Dutch book theorem (an application of the separating hyperplane theorem) has both his Dutch book theorem and a version of accuracy-domination as easy corollaries. (The version of accuracy domination is one that measures accuracy by the Brier score—the squared Euclidean distance.)

But that isn’t quite the end of matters—the proof just shows that in one specific case, we can construct a specific Dutch-book/accuracy-dominating belief state. In effect, if we’re at an improbabilistic belief state, it shows how to construct a probabilistic one that has a property that turns out to be sufficient both for Dutch-booking and for accuracy-domination. But the property isn’t necessary for either.

But it’s not hard to figure out the more general connection: every accuracy-dominating point corresponds to a Dutch book. And although not every Dutch book corresponds to an accuracy-dominating point, there’s always some accuracy-dominating point reachable by manipulation of the Dutch book.
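To make the Brier-score case concrete, here’s a toy numerical sketch (my own illustration, not Paris’s or Joyce’s actual construction; the three-world setup and all names are invented for the example). Projecting an improbabilistic credence onto the probability simplex yields a belief state that accuracy-dominates it under the Brier score, and the very same projection supplies the stakes for a Dutch book:

```python
def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = sorted(v, reverse=True)
    lam, cumsum = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        cumsum += uj
        t = (1.0 - cumsum) / j
        if uj + t > 0:
            lam = t
    return [max(vi + lam, 0.0) for vi in v]

def brier(cred, world):
    """Squared Euclidean distance from the credence vector to the
    truth-value vector of `world` (1 at the actual world, 0 elsewhere)."""
    return sum((ci - (1.0 if i == world else 0.0)) ** 2
               for i, ci in enumerate(cred))

# An improbabilistic credence over three mutually exclusive, jointly
# exhaustive propositions (it sums to 0.9, not 1).
c = [0.5, 0.2, 0.2]
p = project_to_simplex(c)

# (1) Accuracy domination: p is strictly closer to the truth at every world.
assert all(brier(p, w) < brier(c, w) for w in range(3))

# (2) Dutch book: sell the agent, at price c[i], a bet with stake x[i]
# paying x[i] if proposition i is true. With stakes x = c - p, the
# agent's net gain at world w is <x, t_w - c> = <c-p, t_w - p> - |c-p|^2,
# which the projection inequality makes negative at every world: sure loss.
stakes = [ci - pi for ci, pi in zip(c, p)]
for w in range(3):
    gain = sum(x * ((1.0 if i == w else 0.0) - ci)
               for i, (x, ci) in enumerate(zip(stakes, c)))
    assert gain < 0
```

The key fact doing double duty is the projection inequality for convex sets, which is essentially what the separating hyperplane argument delivers in this special case.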

So I feel I now see what the formal connection between the results is (and remember that these hold in a very general setting—way beyond the standard classical case). But there remain questions: in particular, what about when we measure accuracy by something other than the Brier score? Is there some kind of liberalization of the assumptions of the Dutch book argument that corresponds to a loosening of the assumptions about how accuracy is measured (and is it philosophically illuminating)?

Thoughts, criticisms, suggestions, most welcome.

Apologies to those with RSS feeds…

I’ve been putting categories on old posts—and some RSS feeds are showing them as new. *Sigh*.

Utility of posting papers in public

I was about to post up a draft of a new paper. And then I picked up on a rather nasty flaw in the argument. So that paper’s now under reconstruction again—until I find a way to patch the gap.

It’s a bit cold-shivery to have almost posted things in a very public way with a major quantifier-shift fallacy right at the centre of them. But I take at least this out of the experience: posting things on blogs is a *very* good way of disciplining yourself on content. At least for me, I’m shifted from a mode where I’m wanting things to work out/patch errors etc.—in effect, working on the content of the paper itself—to a mode where I’m looking at it with an eye to potential readers. And it’s in the second mode that I find I get enough critical distance to reliably spot things that need fixing (be they typos or real errors I’ve missed).

And of course, this is even before the very clever people out there pitch in to helpfully point out all the ways in which things need tightening up or amending. So hooray for academic blogs. But boo to quantifier shift fallacies.

UK philosophy rankings: RAE, gourmet, etc

There’s been quite a bit of discussion going on about how to interpret UK RAE results (see here and here). The raw output of the exercise isn’t a single figure, but a whole “vector” of information. You get a percentage of research activity that’s ranked “world class”, another percentage that’s “internationally excellent”, another for “international recognition” and lastly for “national recognition”. The figures also tell you how many people were submitted for evaluation.

There are many ways of working with the data to produce different rankings. That means, I think, that everyone should be sceptical of people picking on one particular choice of “scoring” method and declaring that to be the thing the RAE tells us about research quality in UK departments. In particular, there’s an obvious and massive choice point: you have to decide whether to look at average ratings (ignoring the quantity of high-quality research) or whether to take numbers into account. The figures, and the induced rankings, change markedly depending on which you choose to focus on.

Jo Wolff in the Guardian writes:

“Really there seem to be two sensible ways of compiling rankings given the information we have so far. One is GPA, or perhaps a weighted version with extra weight to 4* outputs. This gives you an account of the average quality of the work submitted by the faculty (assuming that you trust the judgement of the panels). And the other is to multiply GPA by the number of people submitted, which will give a better prediction of eventual cash flows, and also an account of total quality.”

These aren’t the only natural ways of extracting rankings (see Brian Weatherson’s interesting discussion for several others that seem equally sensible). But they are representatives of the two obvious methods: looking at some kind of ranking by pure percentage quality (i.e. ignoring the quantity of research); or looking at some kind of ranking by the quantity of excellent research.
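For concreteness, the two scoring methods can be sketched like this (the quality profile and staff number below are hypothetical, not any department’s actual RAE figures):

```python
def gpa(profile):
    """Grade-point average of an RAE quality profile: percentages of
    research activity rated 4* ("world class") down to 1*
    ("nationally recognised"), weighted by star rating."""
    return sum(stars * pct for stars, pct in profile.items()) / 100.0

def research_power(profile, staff_submitted):
    """GPA multiplied by the number of staff submitted."""
    return gpa(profile) * staff_submitted

# Hypothetical department: 35% of activity at 4*, 40% at 3*,
# 20% at 2*, 5% at 1*, with 20 staff submitted.
profile = {4: 35, 3: 40, 2: 20, 1: 5}
assert abs(gpa(profile) - 3.05) < 1e-9
assert abs(research_power(profile, 20) - 61.0) < 1e-9
```

The same profile lands very differently on the two measures: a small department with a high GPA can sit near the top of the first table and near the bottom of the second.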

On GPA, the ranking goes like this:

University College London: 3.15
University of St Andrews: 3.15
King’s College London: 3.05
University of Reading: 3.05
University of Sheffield: 3.05
University of Cambridge (HPS): 2.95
London School of Economics: 2.95
University of Oxford: 2.95
University of Stirling: 2.95
University of Bristol: 2.9
University of Essex: 2.9
Birkbeck College: 2.85
University of Cambridge (Philosophy): 2.85
University of Leeds: 2.8
Middlesex University: 2.8
University of Nottingham: 2.8
University of Edinburgh: 2.8
University of Sussex: 2.7
University of Warwick: 2.65
University of York: 2.65

On what’s sometimes called “research power” (GPA multiplied by numbers), the ranking goes like this:

University of Oxford: 233.0205
University of Cambridge (HPS): 103.25
University of Leeds: 75.6
King’s College London: 64.1415
University of St Andrews: 60.417
University of Warwick: 58.3
University of Sheffield: 57.95
University of Cambridge (Philosophy): 54.15
University College London: 51.282
University of Edinburgh: 47.6
University of Durham: 46.8
University of Bristol: 46.4
University of Nottingham: 43.4
Birkbeck College: 41.325
London School of Economics: 39.9725
University of York: 39.75
University of Reading: 36.6
University of Manchester: 33.75
University of Glasgow: 32.5
University of Stirling: 32.45

If you’re in UCL or Reading, clearly you prefer looking at the first table; and if you’re in Oxford or Leeds, you prefer looking at the second!

As I say, there are many many more tables to be constructed, if you like that kind of thing. And in the end, all this is about perception and coffee-time chat – the real practical impact of these stats will be financial (which as Jo notes, is likely to be based on some variant of the second table).

Here’s one exercise that I find quite illuminating. Rather than trying to use the tables to argue about who is better than whom, we could try to use the tables to figure out what reputational surveys (of which the obvious exemplar in our field is the gourmet report) plausibly track. Here are the UK gourmet rankings for 2006, ordered by local mean (for direct comparison with the locally-determined RAE ratings):

1. Oxford (4.7)
2=. Cambridge (4)
2=. St Andrews (4)
4=. Sheffield (3.9)
4=. UCL (3.9)
6. Birkbeck (3.7)
7. KCL (3.6)
8=. Leeds (3.3)
8=. LSE (3.3)
10. Bristol (3.2)
11=. Nottingham (3.1)
11=. Reading (3.1)
13=. Edinburgh (3.0)
13=. Warwick (3.0)

Of course, the 2006 gourmet rankings are a year adrift of the census date for the RAE (which was late 2007). So we’re not dealing with exactly the same departments. But it seems close enough for the comparison to the RAE to be interesting.

The question is: is there a chart extractable from the RAE that looks like the gourmet rankings? If so, that might give a clue as to what kinds of things a reputational survey such as the gourmet report tracks.

Neither of the suggestions we’ve seen so far seems a very good candidate for this job, whatever their other merits. The ranking by GPA totally scrambles the order (of course, one convinced that GPA is the one true analysis of “betterness” among departments could use this to criticize the influence that the gourmet rankings have in the field—but my purposes are *not* to engage with that question, but just to look at what the gourmet ranking might track). The ranking by numbers times GPA gives a slightly better match—but still we have some very noticeable disparities. Leeds and KCL shoot up relative to the gourmet rankings, Sheffield and UCL shoot down.

Here’s a table that’s much closer. You get it by multiplying the percentage of research activity rated at the top, “world class”, rank by numbers. As a story about how reputations get judged, this has some plausibility: how much top-notch stuff is going on at place X? Here’s the table we get:

1. University of Oxford: 27.6465
2. University of Cambridge (HPS): 12.25
3. University of St Andrews: 7.672
4. King’s College London: 7.3605
5. University College London: 7.326
6. University of Sheffield: 6.65
7. University of Leeds: 5.4
8. University of Bristol: 4.8
9. University of Cambridge (Philosophy): 4.75
10. London School of Economics and Political Science: 4.7425
11. University of Nottingham: 3.875
12. Birkbeck College: 3.625
13. University of Reading: 3.6
14. University of Edinburgh: 3.4
15. University of Warwick: 3.3

This is actually surprisingly close, considering the different methodologies of the surveys! Aside from Birkbeck and KCL, every department is within a couple of slots of its gourmet ranking (KCL doing worse in the gourmet ranking than on this version of the RAE ranking; Birkbeck the opposite). Birkbeck is the only really big disparity between the tables—but I don’t know any way of correcting for that. And of course we couldn’t ever expect a perfect fit: the surveys are asking different things, took place a year apart, etc.

Cambridge is fractured into HPS and Philosophy in the RAE and not in the gourmet rankings, but we can’t really do anything about that either. HPS in Cambridge does have a large portion of historians of science who probably don’t register for gourmet report purposes, but also—of course—several philosophers of science and philosophers working in related areas. So there’s no simple way to determine a unified “Cambridge philosophy” ranking from the RAE.

As already emphasized, this doesn’t mean we should start declaring that high ranking on either table is the one true ranking of research quality—who ever thought that there’s a linear ordering to capture in the first place? I’m sure there will be those who point to the merits of the GPA table, and I’d certainly like to stick up for a ranking which takes quantity of world class *and* internationally excellent research into account. If we start to take that into account, Leeds performs excellently-to-stunningly (again see the Weatherson tables for some illustrations).

But the only claim I want to make here—and tentatively at that—is that insofar as you’re looking for some account of what explains research reputation, quantity of world class research activity is the most plausible candidate.

[Update: I thought I’d have a try at evaluating in a more objective way the judgements about which tables are “closer” to the gourmet rankings than which others. The method was this: assign values to each department on each ranking (in the case of ties for ranks n-m, I assigned the value (n+m)/2). I then calculated the average difference between the values for each department on the gourmet report and the values for those departments on the tables above.
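The tie-handling scoring just described can be sketched like this (the department names and scores below are made up purely for illustration, not the actual survey figures):

```python
def rank_values(scores):
    """Map each item to its rank position (higher score = better rank);
    items tied across positions n..m all get the value (n+m)/2."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    values, i = {}, 0
    while i < len(ordered):
        j = i
        # extend j to cover the whole block of tied scores
        while j + 1 < len(ordered) and scores[ordered[j + 1]] == scores[ordered[i]]:
            j += 1
        for k in range(i, j + 1):       # positions i+1 .. j+1, 1-indexed
            values[ordered[k]] = (i + 1 + j + 1) / 2.0
        i = j + 1
    return values

def avg_rank_difference(scores_a, scores_b):
    """Average absolute difference in rank value over shared departments."""
    ra, rb = rank_values(scores_a), rank_values(scores_b)
    common = ra.keys() & rb.keys()
    return sum(abs(ra[d] - rb[d]) for d in common) / len(common)

# Toy example: A and B tie for first on the survey, so both get (1+2)/2.
gourmet = {"A": 4.0, "B": 4.0, "C": 3.5}
rae     = {"A": 60.0, "B": 45.0, "C": 70.0}
assert rank_values(gourmet) == {"A": 1.5, "B": 1.5, "C": 3.0}
assert abs(avg_rank_difference(gourmet, rae) - 4 / 3) < 1e-9
```

A lower average difference means the two orderings agree more closely, which is the sense of “scrambled” the figures below are meant to capture.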

Now above I claimed that the GPA figures were “totally scrambled” relative to the gourmet rankings. Sure enough, the average difference between positions was just over 3.8 by my workings. But I also said that the GPA*numbers did better (albeit still pretty badly). But by my workings, the average difference was pretty much just the same—just under 3.8. Also, in each case, six departments have positions on the respective RAE rankings that differ by more than four places from their positions on the gourmet rankings. So both are looking pretty much equally “scrambled” relative to the gourmet rankings.

What about the “WC*numbers” ranking that I suggested approximated the gourmet rankings quite nicely? Well, the average difference here was 1.5; and only one department shifts by more than 4 places. So this does seem to bear out the claim that that parsing of the RAE data is a much closer match to the reputational survey than either of the others.

But does this justify my final (tentative) claim—that the gourmet rankings may be tracking what this last RAE ranking represents, i.e. quantity of top-notch research? Well, we’d really have to do similar calculations for lots of other options to explore this fully. I’d like to see, for example, how the above compares to the kind of eight-way metascore that Brian Weatherson calls his “One True” RAE ranking. From what I can tell, its average divergence from the gourmet is not that much worse than the “quantity of world class” ranking, but it’s hard to tell exactly without redoing all of Brian’s calculations. I did look at a much cruder metascore (just working with the GPA and GPA*numbers given above): this still does much better at approximating the gourmet rankings than either of the constituent rankings on their own. The average difference is around 2.4, with only three departments more than 4 places away from their gourmet rankings. (For those who are interested, the crude metascore gives exactly the same rankings 1-7 as Brian’s: i.e. St Andrews through Leeds; but diverges slightly thereafter.)]

[Update 2: I just found an interesting blogpost by Chris Bertram on Crooked Timber from 2004, attempting a similar comparison between gourmet rankings and the (then) RAE. There certainly wasn’t the sort of flexibility available at that point, as the RAE results at the time were enormously cruder. The author and some of the participants from that post are also contributing to the current discussion on Brian Leiter’s blog. ]

Conditionals in Budapest

This event looks fabulous—over a week on conditionals in the company of Stanley, Loewer, Edgington, Hajek, Kratzer, and Stalnaker.

I’m actually due to be in Australia during July, but if I were in Europe, I’d be there.