In software engineering, premature optimization refers to trying to make something work better before you've fully settled on how it works, or even what it does. There's a related pattern that might be called premature productionization, which I've been alluding to in my last few posts. To explore the idea, I want to use an analogy to designing the interior of a house - not just the furniture, but where the walls go, how the rooms are laid out, and so on.
The traditional approach is to make a floor plan and iterate on it based on imagining what it would be like to walk around in the resulting space. Maybe you can even do a VR tour. But no matter how vivid your imagination is, you will always notice things once it's built that you wish you had designed differently.
So what if, instead, you first created the layout with temporary walls and lived in the space for a few months? Then, when you notice an issue in real life, instead of just complaining about it, you can fix it relatively quickly. When you become more confident that a wall is in the right place, you replace the temporary version with a permanent one. But you can do this incrementally, leaving the trickiest walls as temporary until the end.
On the one hand, it would be kind of a pain to live with the temporary walls for all those months, then deal with the construction as you make them permanent. On the other hand, you'd get the benefit of moving into the new space (with temporary walls) much sooner than if you did the permanent construction up front. And in the long term, you’d end up with a design that you’re much happier with.
Of course, this isn’t really feasible for architecture, which is why architects use the process that they do. But as I explored in the two case studies, there are often ways to do the equivalent for software and data tools in a biotech organization.
As in the analogy, the problem with building production-grade tools too soon is not just that it takes longer to do the initial rollout. It also tends to make every subsequent change take longer, throwing off the cost-benefit analysis. If every minor change has to wait for the next two-week sprint, it might be easier to just live with those minor annoyances. If larger changes are going to take months, it may be more expedient to adjust the process to match the tool instead of optimizing both.
The goal should always be to get to production-grade. (Or at least to the appropriate level of production for the overall goals.) The temporary walls should always be replaced by wood and sheetrock in the end. But rushing to get there just increases the chances that the permanent walls won’t be in the right places.
In "Zen and the Art of Motorcycle Maintenance," the author contrasts a bottom-up model (parts assembling into a train) with a top-down one (a train as something that moves things). It's difficult to modernize any process midstream, so making good value judgments early on, with an eye on getting there on schedule, on quality, and on budget, requires both forethought and hindsight.
Our experience at Sapio is that people should be willing to get something live very fast, even if it's a little "dirty." We can move extremely quickly on a V1 when people buy into this approach. The users will then use it, maybe break it, but most importantly will learn what they like and what is missing. Then we iterate very fast again on the next release.
The challenge is that we are asking people who live by rigor and protocol, where precision of definition is critical, to be less exacting about the ELN/LIMS implementation. But LIMS/ELN projects are not identical to the experimental or lab processes they model. Lab processes leave little wiggle room: if you change things, you can get bad results. With software there is always wiggle room, because what you track and how you track it is always open for debate. So it's best to go fast, get the tool into the hands of the users, and iterate; this leads to a better solution faster and at lower cost.