Since my last post I’ve been doing a lot of work on cleaning up the simulation and adding some additional scenarios to the mix. After some in-depth discussion with colleague Nic Geard, co-author of the 2010 academic funding model that inspired this work, we decided that a good starting point for this would be to compare a more basic growing population of permanent academics with a population that includes insecure postdocs.
So that led me to rework a few things to allow for four possible scenarios:
- Core academic funding model as written by Nic and Jason
- Simple growing population of permanent academics
- Population which includes postdocs, in which research quality does not increase the chances of promotion for postdocs
- Population which includes postdocs, in which research quality does increase the chances of promotion for postdocs
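In code terms, these four scenarios boil down to a few switches. Here's a Python sketch of how I think of them; the flag names are mine, not the model's, and I've assumed the postdoc scenarios grow their permanent ranks via promotion rather than via the Scenario 2 mechanism:

```python
from dataclasses import dataclass

# Hypothetical configuration flags; the real code's parameter names may differ.
@dataclass
class ScenarioConfig:
    growing_population: bool  # add new permanent academics each semester
    postdocs: bool            # include fixed-term postdoc posts
    merit_promotions: bool    # promote the best postdocs rather than at random

SCENARIOS = {
    1: ScenarioConfig(False, False, False),  # core model by Nic and Jason
    2: ScenarioConfig(True, False, False),   # growing permanent population
    3: ScenarioConfig(False, True, False),   # postdocs, random promotions
    4: ScenarioConfig(False, True, True),    # postdocs, merit-based promotions
}
```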
This last scenario in particular is intended to investigate how things proceed if we have an optimistic view — we know exactly how good each postdoc is, and we hire only the very best 15% of the current crop during each iteration. Those in favour of the current structure would most likely argue that competition for limited jobs allows the cream to rise to the top, so we need to investigate whether that assumption holds.
So for the purposes of this post, I’ve done a quick run of the sim for each of these scenarios. Note that the previous model by Nic and Jason investigates the time-management aspect of grant applications much more deeply — right now I’m just focusing on the mean research output for different groups of academics under each scenario.
Scenario 1: Core Academic Funding Model
If you alter the parameter settings of my version of this model and turn off all my additions — growing populations, promotion mechanisms, and the postdoc system — you end up with a scenario that’s nearly identical to the original model by Nic and Jason. The only major difference is that in my version the bonus in research quality given to grant-holders is 1.5 rather than 1.25.
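In sketch form, that bonus is just a multiplier on a grant-holder's output. The function below is hypothetical and the real model's output rule is more involved, but the bonus works like this:

```python
GRANT_BONUS = 1.5  # my version; 1.25 in the original model

def semester_output(research_quality, research_time, holds_grant):
    # Illustrative output rule (the model's actual functional form differs):
    # output is quality scaled by time spent on research, and grant-holders
    # get the multiplicative bonus on top.
    output = research_quality * research_time
    if holds_grant:
        output *= GRANT_BONUS
    return output
```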
So what we see is that grant-holders, as you might expect, have a massive advantage in terms of research productivity:
Grant-holders are sitting pretty at the top there, although their output fluctuates given that various researchers of differing levels of research talent are jumping in and out of the grant-holders club each semester.
(NB: I’m aware that the non-grant holders are invisible in this graph and the next — I’m working on it. This is all a work-in-progress, it’ll get there in the end!)
Scenario 2: Growing Population
In the second scenario, I’ve added a mechanism which adds a few academics to the population each semester. Their research quality and initial level of time investment into grant proposals is randomised. As in the last post we’re living in a generous society here where research funding stays in step with the growing academic population — 30% of applicants are always funded, regardless of the population size.
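Roughly, that funding rule looks like this (a sketch with hypothetical names; in the model itself, success hinges on proposal quality, which depends on time invested, rather than on a simple score function):

```python
FUNDED_FRACTION = 0.3  # 30% of applicants funded, regardless of population size

def award_grants(applicants, proposal_score):
    # Fund the top 30% of applicants each semester; funding grows in
    # lockstep with the applicant pool. `proposal_score` stands in for the
    # model's proposal-quality calculation.
    n_funded = round(FUNDED_FRACTION * len(applicants))
    ranked = sorted(applicants, key=proposal_score, reverse=True)
    return ranked[:n_funded]
```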
Perhaps unsurprisingly, the results look nearly the same as in Scenario 1:
Just like in Scenario 1, grant-holders do far, far better than the overall population, particularly those applicants whose grant applications have failed.
Scenario 3: Postdocs, Random Promotions
So now things start getting more bizarre. In this scenario we introduce the postdoctoral system outlined in my previous posts. Postdocs are added in proportion to the number of grants that have been funded in a given semester, with a bit of random variation to spice things up. New postdocs are assigned contract lengths between 4 and 10 semesters. For the first two semesters their research quality is lower to account for their adjustment period into a new post; similarly, their last two semesters also see a drop in quality due to the time they must devote to finding a new post.
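In rough Python terms, the contract mechanics look something like the following. The names are mine, and the size of the quality drop (here 50%) is an assumption for illustration, since I haven't quoted the exact figure above:

```python
import random

MIN_CONTRACT, MAX_CONTRACT = 4, 10  # contract lengths in semesters
SETTLING_PENALTY = 0.5  # assumed size of the quality drop; illustrative only

def new_contract_length(rng=random):
    # Contracts are drawn uniformly between 4 and 10 semesters.
    return rng.randint(MIN_CONTRACT, MAX_CONTRACT)

def effective_quality(base_quality, semesters_served, contract_length):
    # Lower output in the first two semesters (settling into the new post)
    # and the last two (hunting for the next one).
    remaining = contract_length - semesters_served
    if semesters_served < 2 or remaining <= 2:
        return base_quality * SETTLING_PENALTY
    return base_quality
```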
At the end of their contract, postdocs have a 15% chance of being promoted into a permanent position. That may sound harsh, but that’s actually slightly more generous than reality (the figures I’ve seen have it pegged at 12%). Research track record doesn’t count in this scenario — this is a world where promotions are entirely a lucky coincidence (some would argue that this is broadly reflective of reality). Once promoted, they’re now permanent academics and can apply for grants.
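The promotion roll in this scenario is about as simple as it sounds (a sketch; the function name is mine):

```python
import random

PROMOTION_RATE = 0.15  # slightly more generous than the ~12% I've seen reported

def promoted_at_contract_end(rng=random):
    # Scenario 3: promotion is pure luck; track record plays no part.
    return rng.random() < PROMOTION_RATE
```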
So here’s a sample run of the latest formulation of this scenario:
Much like the last set of early results, we see a drastic drop in mean research output amongst the permanent, grant-holding academics, and the postdocs don't do very well in terms of productivity despite allocating 100% of their time to research. Overall, introducing postdocs brings no benefit to the population's research output, and both permanent academics and postdocs show significant variability in theirs. My interpretation is that the introduction of a randomised population of insecure researchers is massively disruptive: each semester we don't know how good our postdocs will be, so their output is highly variable, and we also don't know how good our promoted academics will be, so we see fluctuations at that level too.
Scenario 4: Postdocs, Non-Random Promotions
This scenario is particularly intriguing to me. Nic and I had wondered whether selecting only the very best postdocs from the crop for promotion each semester would improve the picture or not. After all if we pick the best of the best and put them in a position to get grants and thus that juicy grant-holder output bonus, surely things will go much better for our virtual scientists?
Well… not massively:
You'll see that in this run both the grant-holders and the postdocs appear to be doing a bit better in terms of research output. Initially this seems good, but by the end of the simulation the mean research productivity for the overall population is actually slightly lower than in the random-promotions case!
At first blush this seems nonsensical, but if we ponder it for a moment I think it makes sense. While the non-random promotions do mean that we get the best of the postdoc population promoted each semester, it still means we’re highly dependent on the whims of the random-number generator — if we get a few bad crops of postdocs, in other words, we just end up with more crappy academics, and our exacting knowledge of postdoc research quality hasn’t saved us from the disruptive influence of the constant influx of new people with highly variable research output and contract lengths.
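For clarity, the merit-based rule amounts to something like this (hypothetical names; in Scenario 4 we assume research quality is perfectly observable):

```python
def promote_best(finishing_cohort, quality, rate=0.15):
    # Scenario 4: promote the top `rate` fraction of the cohort whose
    # contracts end this semester, ranked by known research quality.
    n = round(rate * len(finishing_cohort))
    return sorted(finishing_cohort, key=quality, reverse=True)[:n]
```

The catch is that "best of the cohort" is relative: if the whole crop happens to be weak, the promoted 15% are still weak.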
Moreover, there’s no mechanism at present for postdocs to be mentored or to mature in their research abilities — once crappy, always crappy, in other words. In real life people may argue that the trials and tribulations of postdoc life can allow young researchers to grow into more productive academics — so that’s another aspect we need to examine.
I've done a bunch more runs with different random seeds and seen variation in the output that seems to support these ideas, but I'll spare you the 18 other graphs. Suffice it to say that the graph above seems to reflect a lucky series of postdoc recruitment drives more than anything else. I'll keep working at it and post more when I'm clearer on my interpretation of this scenario.
SURPRISE NEW SCENARIO: Non-Random Postdoc Promotions, With Mentoring Bonus!
Wow, what a day for you lucky people! I’ve just decided to do a quick-and-dirty scenario where we give promoted postdocs in the non-random scenario a bonus to their research quality to attempt to simulate postdocs being mentored toward success by their superiors. Surely we’ll see a change in the fortunes of our virtual scientists now?
Well… not really:
In fact things look almost identical, with the exception of the overall mean research output hitting a plateau rather than dropping slightly toward the end of the sim, as we saw above.
To be fair, however, the ‘mentoring bonus’ I gave out here was not outrageously large: effectively the promoted, mentored postdocs get a 25% bonus to research quality. What happens if I double that to 50%?
Ah-ha! At last, a very slight positive outcome. Mean research output overall trends ever so slightly upward over the course of this simulation run, rather than plateauing or starting to fall as above.
But I think we'd have to admit that this is a fairly minimal outcome from a rather generous scenario, and some runs may well show worse results depending on the feelings of the random-number generator. At a quick glance it seems reasonably consistent, though: of 10 runs I've just done for this scenario, 7 showed a similar tiny, tiny positive trend.
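For the record, the mentoring bonus is nothing fancier than a multiplier applied to research quality at promotion time (a sketch; the function name is mine):

```python
def mentored_quality(postdoc_quality, bonus=0.5):
    # bonus=0.25 in the first attempt above, 0.5 in this one.
    return postdoc_quality * (1 + bonus)
```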
So what I’ve gathered from today’s work is that increasing the average research output of academics in a postdoc scenario requires some major work: we need to recruit only the very best postdocs; and we need to ensure they get mentoring of high enough quality that they are a full 50% better than they were during their postdoc days. Even with these powerful tools, that’s still barely enough to overcome the disruptive impact of a fluctuating population of insecure overstressed young researchers.
In real life of course, we don’t have such a transparent method of evaluating research outputs and determining the best postdocs to hire — nor do we have a population of super-mentors who can massively improve the productivity of every single postdoc. So, if we believe the underlying assumptions of this model, then perhaps we should start to think about whether insecure research posts are a good thing for science or not.
Of course there's a human dimension here as well. Over the many runs I've done with the postdoc mechanism enabled, most simulations top out around 500 active academics at the end of the run, with between 500 and 600 total postdocs hired over the 100 semesters. Of those, between 70 and 90 postdocs get promoted, while the rest get the sack and leave academia forever. Do we really want to send these vast numbers of PhD graduates out of the academy and lose all that potential research talent? That seems like an incredible waste, and even more so when we see how difficult it is to wring a positive impact on productivity out of this structure.
Next time: I’ll keep poking at this simulation and see whether these results hold up, and I’ll be doing some other comparisons on other measures, including total research output across different groups. Early indicators: postdocs increase total research output, and research quality across the population becomes highly unstable. More later.