Let’s keep going with the discussion of conceptual models for a digital twin of your biotech lab. In the last two posts I wrote about the data model (the information that needs to be captured) and the decision model (the way that decisions are organized and grouped). Today, I want to discuss the last piece: the workflow model, which defines when decisions and information are recorded, and by whom.
As with the other components of the model, a gap between your organization’s workflow model and the software’s workflow model can cause two problems: either individuals won’t use the system for the things that don’t fit, and your digital twin will miss things; or, worse, they’ll adjust their workflows to fit the system, rather than the workflow that you actually want. So it’s important to get this right.
If we oversimplify things sufficiently, most lab workflows follow a common pattern: Design an experiment, prepare materials, run the experiment, get the readouts, do analysis. There’s obviously a lot of variation between different types of experiments, but you could argue that this only impacts what information is collected at each stage, not the overall workflow. And existing software generally addresses this kind of variation. Instead, I think there are two more important ways in which workflows tend to vary:
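To make that pattern concrete, here’s a minimal sketch of the common workflow as a simple state machine. The stage names and the `Experiment` class are illustrative assumptions, not something any particular ELN or LIMS actually exposes:

```python
from enum import Enum

class Stage(Enum):
    # Illustrative stages from the common lab workflow pattern
    DESIGN = 1
    PREPARE = 2
    RUN = 3
    READOUT = 4
    ANALYSIS = 5

class Experiment:
    """Hypothetical record of where an experiment sits in the workflow."""
    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.DESIGN
        self.records: dict[Stage, dict] = {}  # info captured at each stage

    def advance(self, data: dict) -> None:
        # Record the information collected at the current stage,
        # then move to the next stage (if any remain).
        self.records[self.stage] = data
        if self.stage.value < Stage.ANALYSIS.value:
            self.stage = Stage(self.stage.value + 1)

exp = Experiment("CRISPR screen pilot")
exp.advance({"hypothesis": "gene X modulates growth"})
print(exp.stage.name)  # PREPARE
```

The point of the sketch is that variation between experiment types lives in the `data` dict recorded at each stage, while the sequence of stages stays fixed, which is exactly the kind of variation existing software handles well.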
First, there are the workflows between experiments - how each experiment creates insights that define the next experiment. In “traditional” drug development, this is pretty standard. But for most biotech startups, this is what makes them unique. Their secret sauce, if you will. So of course off-the-shelf software isn’t going to reflect this part of their workflow model. It doesn’t help that these decision workflows tend to evolve (emerge?) over time. The only options are to build completely custom software or (more often) track these decisions in a scattered collection of slide decks and institutional lore.
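As a hedged sketch of what capturing those between-experiment decision links might look like: each experiment becomes a node in a graph, linked to the experiments whose insights motivated it. The field names (`insight`, `informed_by`) are hypothetical; no off-the-shelf tool exposes this structure, which is the point of the paragraph above:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentNode:
    """Hypothetical node in a between-experiment decision graph."""
    name: str
    insight: str = ""  # what this experiment taught us
    informed_by: list["ExperimentNode"] = field(default_factory=list)

def lineage(node: ExperimentNode) -> list[str]:
    # Walk back through the experiments whose insights led here.
    chain: list[str] = []
    for parent in node.informed_by:
        chain.extend(lineage(parent))
    chain.append(node.name)
    return chain

screen = ExperimentNode("pooled screen", insight="hits in pathway Y")
followup = ExperimentNode("validation assay", informed_by=[screen])
print(lineage(followup))  # ['pooled screen', 'validation assay']
```

Even a structure this simple would be an improvement over slide decks and institutional lore, because the “why did we run this?” chain becomes queryable.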
The second way that workflows tend to vary is who’s involved and how they communicate and hand things off to each other. If you’ve been reading this newsletter for long enough, you know this is a common theme with me: Most ELNs are built around workflows where the bench team does everything themselves, usually on a single laptop. But once you have a data team involved in the analysis step and even (hopefully) in the design step, the communication and hand-offs become much more complex.
So, the point of all this is that the next time you consider updating the software that defines the digital twin of your lab (whether or not you call it that), you should evaluate how well it fits your organization’s workflow model. Unfortunately this isn’t something that typically appears on an ELN/LIMS evaluation checklist, and most vendors make it difficult to actually see a demo of their workflows. But I’ve already complained about that.
Next time we’ll move on to implementation.
Scaling Biotech is brought to you by Merelogic. We design data models and infrastructure that help early-stage biotech startups turn their AI/ML prototypes into tangible impact. To learn more, send me an email at jesse@merelogic.net
Spot on with this excerpt - ‘In “traditional” drug development, this is pretty standard. But for most biotech startups, this is what makes them unique. Their secret sauce, if you will. So of course off-the-shelf software isn’t going to reflect this part of their workflow model’
This is where services and implementations come in. Most LIMS companies will provide an out-of-the-box (OOTB) solution that gets you 60–80% of the way there.