A cutaway illustration of the engine room of a steamship.

Engineering for Science

Steve Crossan
3 min read · Oct 1, 2024

Why did AlphaFold happen at DeepMind rather than (for example) the Broad Institute?

It wasn’t data. Everyone had access to exactly the same data.

It wasn’t compute. The compute budget for AlphaFold1 was well within the budget of an academic project.

The real reason was that we treated it as an engineering problem as much as a research one. In fact, this was the secret sauce of DeepMind: around ⅓ of the overall headcount was devoted to what we called Research Engineering. This was its own organization (led by the amazing Andreas Fidjeland) with its own ethos, embedded in but separate from the culture of academic AI research. The vibe was a bit like the (very strong) vibe of the SRE organization at Google: we’re the ones who sweat in the engine rooms so the top brass can look good on deck.

Research Engineering at DeepMind did 2 things. About half of the team was devoted to building tools to make all of research go faster. This meant that if a researcher on any project had a new idea, they could code it up quickly, run it against a suite of benchmarks with very little effort, and see the results appear on a leaderboard, often within a day. This part of the organization also managed access to data resources.
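To give a flavor of what that kind of tooling enables, here is a minimal sketch of a benchmark harness: register scoring functions once, and any new idea is then one function call away from a leaderboard entry. Everything here (names, file format, the toy benchmark) is invented for illustration; DeepMind’s actual internal tools are not public.

```python
# Hypothetical sketch of an internal benchmark harness: run a candidate
# model against every registered benchmark and append the scores to a
# shared leaderboard file. All names are invented for illustration.
import csv
import time
from typing import Callable, Dict

Benchmark = Callable[[Callable], float]  # takes a model fn, returns a score

BENCHMARKS: Dict[str, Benchmark] = {}

def benchmark(name: str):
    """Decorator that registers a scoring function under a benchmark name."""
    def register(fn: Benchmark) -> Benchmark:
        BENCHMARKS[name] = fn
        return fn
    return register

@benchmark("toy_regression")
def toy_regression(model) -> float:
    # Score = negative squared error at a fixed probe point (higher is better).
    return -(model(2.0) - 4.0) ** 2

def run_suite(run_name: str, model, leaderboard_path: str = "leaderboard.csv"):
    """Run every registered benchmark and append the results to the leaderboard."""
    with open(leaderboard_path, "a", newline="") as f:
        writer = csv.writer(f)
        for name, bench in BENCHMARKS.items():
            score = bench(model)
            writer.writerow([time.strftime("%Y-%m-%d"), run_name, name, score])
            print(f"{run_name} on {name}: {score:.4f}")

# A researcher’s new idea is just a callable; one line to evaluate it.
run_suite("squared-fn-v1", lambda x: x * x)
```

The point of infrastructure like this isn’t the code itself but the loop it creates: the cost of testing an idea drops to near zero, so far more ideas get tested.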

The other half was embedded with the research groups. Each pod of 3 or 4 researchers had a research engineer embedded with them. These engineers contributed research ideas, and were happy to read a paper and code up an implementation of a new approach. But they also ensured that the pod produced maintainable, scalable code that could be picked up by someone else in the organization, and that followed the norms of the wider codebase.

This approach has a compounding effect, as tooling and practices built for each iteration make the next go faster. I believe that at OpenAI the ratio of engineering to research is higher. Many of the authors (including the leads) of the key GPT paper are engineers first. Getting LLMs to work at huge scale is primarily a problem in distributed computing — as you can also tell from the Llama 3 paper. I recently spoke to a former colleague there who said he thought the ideal ratio was probably even higher: 80:20 engineering to research. I think that’s not far off.
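To give a flavor of what “primarily a problem in distributed computing” means, here is a deliberately tiny, self-contained sketch of the pattern at the heart of data-parallel training: shard the data across workers, compute gradients locally, then average them. The workers are simulated in-process with NumPy; no real training stack looks this simple, but the core loop is recognizable.

```python
# Toy data-parallel training loop: each simulated worker computes a
# gradient on its own shard, and the workers' gradients are averaged
# (the "all-reduce" step) before every weight update.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                       # shared model weights (linear model)
X = rng.normal(size=(128, 3))         # full dataset
y = X @ np.array([1.0, -2.0, 0.5])    # targets from a known weight vector

def local_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient on one worker's shard of the data."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

n_workers = 4
for step in range(200):
    # In a real system, this is where the hard engineering lives:
    # sharding, communication, stragglers, hardware failures.
    grads = [
        local_gradient(w, X[i::n_workers], y[i::n_workers])
        for i in range(n_workers)
    ]
    w -= 0.05 * np.mean(grads, axis=0)  # all-reduce, in miniature

print("recovered weights:", np.round(w, 3))  # approx [1.0, -2.0, 0.5]
```

Scale that loop up to trillions of tokens and tens of thousands of accelerators, and almost every remaining problem is an engineering problem.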

AlphaFold started in DeepMind’s Applied Group, which was majority Research Engineers. In the first 9 months of the project the team spent most of its time engineering & curating data (with a big focus on avoiding accidental data leakage), setting up the right metrics, and building infrastructure that could then be used to experiment fast. And yet that team went from a standing start in April 2016 to winning CASP in August 2018.
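To make the leakage point concrete: with protein data, near-identical sequences must not straddle the train/test boundary, so splits are done over clusters of similar sequences rather than over individual sequences. Real pipelines cluster with dedicated tools such as MMseqs2 or CD-HIT; the sketch below substitutes a crude string-identity measure so it stays self-contained, and every function name is invented for illustration.

```python
# Leakage-aware train/test split: cluster similar sequences first,
# then assign whole clusters to one side of the split, so no test
# sequence has a near-duplicate in the training set.
from difflib import SequenceMatcher

def identity(a: str, b: str) -> float:
    """Crude pairwise sequence identity; a stand-in for a real aligner."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_sequences(seqs, threshold=0.9):
    """Greedy single-linkage clustering: a sequence joins the first
    cluster whose representative it matches above the threshold."""
    clusters = []  # list of (representative, members) pairs
    for s in seqs:
        for rep, members in clusters:
            if identity(s, rep) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return [members for _, members in clusters]

def leakage_safe_split(seqs, test_fraction=0.2):
    """Assign whole clusters to test until the quota is met, then to train."""
    n_test = max(1, int(test_fraction * len(seqs)))
    train, test = [], []
    for members in cluster_sequences(seqs):
        (test if len(test) < n_test else train).extend(members)
    return train, test

seqs = ["MKTAYIAKQR", "MKTAYIAKQK", "GSHMLEDPVR", "GSHMLEDPVK", "WWPDTNNALE"]
train, test = leakage_safe_split(seqs)
print("train:", train)
print("test: ", test)
```

A naive random split would happily put "MKTAYIAKQR" in training and its near-twin "MKTAYIAKQK" in test, inflating the apparent accuracy of any model that memorizes its training data.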

2 years after that, AlphaFold2 came out, which really lit the touchpaper in the field, with RoseTTAFold, ESMFold and many others following quickly. In 2020 scientists had around 200,000 3D protein structures available, representing 50+ years of work; a few years later the number of usable structures was in the hundreds of millions.

So when we talk about the promise of AI for Science we should remember the crucial importance of Engineering for AI for Science.

Written by Steve Crossan

Research, investing & advising inc in AI & Deep Tech. Before: Product @ DeepMind. Founded Google Cultural Institute; Product @ Gmail, Maps, Search. Speak2Tweet.
