Fail Fast, Learn Faster - Rethinking Talent Up-skilling

I built a decentralized micro-fund to upskill 400+ postdocs. It funded great science, but data revealed broken incentives and AI vulnerabilities. Leadership means ruthlessly iterating. Here is my post-mortem on failing fast and redesigning a high-signal skills accelerator.

Postdoctoral researchers are the drivers of much university scientific progress. Yet, they continually navigate the precarity of fixed-term contracts and hyper-specialization. Academic leaders have a moral obligation not just to employ them as brilliant minds, but to actively invest in their futures.

The UK's Researcher Development Concordat challenges us to cultivate well-rounded, multi-skilled researchers who can seamlessly cross the interface of academia, business, and innovation. Providing the best possible up-skilling - equipping our cohort with highly transferable capabilities, rather than just seeing the role as an academic stepping stone - is a priority.

Driven by this concordat and our Manchester 2035 vision, I recently conducted a review of a decentralized micro-fund scheme I designed, developed, and launched two years ago as a scalable model for radical skill development in a "learning-through-doing" mode.

Having evaluated the operational data, I am retiring the current model. Here is why it failed to scale, what the data taught me, and what comes next.

A High-Leverage, High-Reward Pilot

I originally designed this small, £10k pilot fund as a highly leveraged lean prototype to deliver maximum skills development with minimal administrative drag. My goal was to provide tangible skills development through a decentralized evaluation network, all through an "in-group" review process requiring virtually zero senior faculty input.

The >400 postdoctoral researchers in Natural Sciences were invited to propose independent research projects - individually or as a team - to develop impact from their work. Each proposal was reviewed by at least three other applicants. Using both a UKRI-standard 6-point scoring system and a "calibration-free" PageRank-inspired approach, we generated a ranked funding list alongside qualitative feedback for the applicants.
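For the curious, the ranking step can be sketched in code. The post does not specify the exact algorithm, so this is a minimal illustrative implementation of one plausible "calibration-free" scheme: it assumes each applicant submits one proposal and reviews several others, and that a reviewer's influence is the current rank of their own proposal, so reviewer generosity cancels out without cross-reviewer calibration. All names and parameters here are hypothetical.

```python
# Hypothetical sketch (not the scheme's actual code) of a
# "calibration-free" PageRank-inspired ranking of proposals.

def pagerank_scores(reviews, own, damping=0.85, iters=50):
    """Rank proposals from raw peer-review scores.

    reviews: {reviewer: {proposal: score}} on a 0-6 UKRI-style scale
    own:     {reviewer: their own proposal} (one proposal per applicant)
    """
    proposals = sorted({p for scored in reviews.values() for p in scored}
                       | set(own.values()))
    n = len(proposals)
    rank = {p: 1.0 / n for p in proposals}           # uniform starting rank
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in proposals}
        for reviewer, scored in reviews.items():
            total = sum(scored.values())
            if total == 0:
                continue                             # all-zero reviews carry no weight
            weight = rank[own[reviewer]]             # influence = rank of reviewer's own proposal
            for proposal, score in scored.items():
                # Each reviewer distributes their influence across the
                # proposals they scored, proportionally to the scores given,
                # so only relative scores matter - not absolute generosity.
                new[proposal] += damping * weight * (score / total)
        rank = new
    return rank
```

With one proposal per applicant, total rank mass is conserved across iterations, so the output behaves like a probability distribution that can be sorted directly into a funding list.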

The scheme moved with speed and efficiency. In 2025, over 100 peer reviewers were secured in one week with no faculty input, and the scientific return on investment was strong. It empowered postdocs to take ownership of their professional identity, capital allocation, and project leadership: the model fundamentally aligned with our Concordat commitments.

Evaluation and a "Fail Fast" Realisation

Despite the process efficiency and the successful funding of impactful science, an objective look at the operational data and anonymous feedback from our 400+ postdocs forced a "kill my darlings" moment. As a broad talent accelerator, the programme had not succeeded.

While applications arrived from every department - proving the communications were widely shared - the scheme was ultimately ignored by the majority of the community. Across the >50 projects reviewed, the participant pool represented a very narrow demographic. The programme was failing to reach 90% of the cohort, whilst disproportionately attracting physicists (who over-indexed in applications, likely because their core funding offers more flexibility than that of more applied disciplines).

A post-scheme survey provided the why: the effort-to-reward ratio of writing a full proposal and conducting three peer reviews for a £1,000 grant simply didn't hold enough perceived value for researchers who are routinely working on £1M+ grants. Furthermore, the "in-group" autonomous peer-review model exposed vulnerabilities, including the emerging use of AI to draft reviews, which severely compromised the developmental feedback loop.

The mechanism successfully and efficiently funded great science, but it was not scaling.

The Sandpit Model

To preserve the developmental benefits whilst completely transforming the value proposition, I am pivoting the funding to an intensive, in-person Sandpit Model.

Our University Delivery Handbook challenges us to "design with people at the centre", so I am replacing the protracted remote application window with a single, focused day. By combining didactic grant-writing and reviewing training, live funding pitches, "learning through doing", and - crucially - a good lunch, this new format makes the scheme significantly more attractive and accessible.

The core benefits of this model include:

  • Active Skill Acquisition: Integrating formal instruction ensures high-quality, verifiable skills development that transfers immediately to any career path. Crucially, this will be delivered efficiently by leveraging our existing academic training materials and partnering with Professional Services colleagues who specialise in grant writing, thus removing the dependence on a remote, unguided reviewer's effort.
  • Integrity and Connection: Moving ideation and pitching in person eliminates the risks associated with AI-generated peer review and restores genuine human networking.
  • Immediate Outcomes: Postdocs walk away with expanded collaborative networks, real-time critical feedback, and live capital allocation of up to £2k from the fund for the winning teams.

This sandpit approach deepens our investment under the Researcher Development Concordat, moving away from passive administration towards active, inclusive professional development.

As scientists and leaders, we must test, learn, and iterate. I look forward to launching this V2 format with our postdoctoral community in the next cycle.