Your experiments need names
*** A quick note/ad: I've noticed that many of the software companies who are trying to make Biotech more data driven struggle to communicate what they do to the users who need them the most. So I've started offering services to fill this gap. If you're one of these software companies, send me an email at jesse@merelogic.net to learn more. ***
So, the thing about writing about the mundane aspects of biotech data is that some of the topics that I feel most compelled to write about feel embarrassingly simple. Like, to the point where what I’m going to write about today will probably seem to an outsider so obvious and simple that every biotech startup must already do this. And yet, in my experience, almost all of them don’t. So here it is:
Keep a shared and regularly updated list of upcoming experiments (and their names).
This post is part of my ongoing series on specific issues that biotech startups run into at specific points in time, and this week the issue is: When the size of the team and the number of ongoing experiments increases to the point where it’s no longer possible for everyone involved in the experiments to keep up with them through informal means.
For me, the easiest way to notice that you’ve gotten here is when the experiments start to have similar enough names that you can’t tell if they’re two names for the same experiment or two different experiments. Is the “HER2 Binding” experiment the same as the “HER2 ASMS”? That’s why I appended “and their names” above.
And just to be clear, your ELN is not this list. Your ELN tracks details of *past* experiments to address legal concerns that haven’t been relevant since 2013 (though no one seems to have noticed). Many ELNs could theoretically be used to track future experiments, as could many LIMS, but in my experience, few if any biotechs do this.
So, the first question that we should ask is: Why is this a problem? Does everyone involved in these different experiments really need to know what they all are? And while that second question in particular seems designed so that the answer is obviously “yes”, the slightly depressing answer for many biotechs seems to be “no”. Because if it were necessary for everyone to know what experiments are ongoing, then they would do a better job of making that happen.
And up until the last 5-10 years, there was a good reason for this. Most experiments were managed end-to-end by small wet lab teams that could focus on just their own. As the overall organization grew, the number of these teams would grow, but the size of each team would stay roughly the same. So within each team, they could use informal means to keep track of ongoing experiments. Only once experiments were complete would they communicate a summary to the larger organization (usually via a slide deck, but that’s another matter). Oh, and they’d put everything in the ELN for the lawyers to sift through later.
Today, however, as biotech organizations are increasingly incorporating data teams into the process, this dynamic is shifting. Many of these teams end up working on data from multiple bench teams. So without a central list of all these experiments, they have to spend a lot of their time tracking down this information from bench teams and organizing it themselves. Or, perhaps more often, they just give up and work on whatever gets thrown at them.
In other words, it’s very hard for data teams to plan for the longer term, let alone think strategically, because they don’t know what the longer term will look like. I suspect it’s also harder for the organization’s leadership to think strategically without having all this information in one place, though it’s easier for them to cope: A large part of their job is to communicate with the bench teams, so they’ll naturally learn about ongoing experiments. But more importantly, they don’t need to understand them to the same level of detail as someone who’s going to be doing the analysis.
At the end of the day, the folks who should theoretically need a central list of upcoming experiments seem to find ways to cope with not having one. And while creating such a list is technically easy, the fact is that keeping it up to date requires significant shifts in processes and habits, even if it shouldn’t take much extra time. So as long as everyone can cope, most organizations don’t see the need for that investment.
But the questions that get me are: How could things be different if your organization did make that investment? How much could the ability to think more strategically about experiments at every level, from leadership down, lead to better outcomes? What are the possibilities that we’re not seeing because we haven’t tried?
I don’t know the answers to these questions. But this one simple (mundane) thing seems like such a no-brainer that there has to be something there.