Last week, I kicked off a series of posts about conceptual models for digital twins of biotech labs by discussing the importance of getting the data model right.
Another great post! When I was running lab ops at the Center for Epigenetic Research at MSKCC, in many ways we first built the "right" data + decision model to reach our goals before really ramping up our physical lab operations. It's a bit of an iterative process, of course. Having the right tools and framework in place allowed us not only to scale rapidly but, more importantly, to easily collaborate across institutions on a wide variety of projects (from discovery to clinical trials) and create meaningful scientific work. As scientists, I think we too often get wrapped up in setting up the right experiments and executing them, treating data almost as an afterthought. That often lands us in a bit of a vicious cycle.
At Ganymede, we argue for an approach similar to what you're describing here for digital twins. We've touched on our vision of building the digital twin first, and then building the physical lab operations around it.
https://blog.ganymede.bio/biotech-and-biopharma-needs-to-start-thinking-about-physical-twins-not-digital/
Thanks! You're absolutely right that the earlier you have the framework in place, the easier it is to operate and scale. But the framework needs to be flexible enough to adjust to new circumstances.
I also like your insight that bench scientists focus on the setup and execution of the experiment, but not necessarily the data. I think data teams often have the exact opposite focus.