Case Study: A Wet Lab to Digital Pipeline (Part 3)
In my last two posts, I described a scenario in which we needed a system for consistently collecting (meta)data in the wet lab, and then explained why the most effective approach would be to break the process up into a series of Minimum Viable Changes. This week, I want to describe what that series of steps might look like.
Before you read on, you may want to first review Part 1 and Part 2.
*** But first, a quick note: We're building a community of folks working on data/software teams embedded in larger biotech organizations (as opposed to selling tools/services to other companies). If you like this newsletter, there’s a good chance you’ll fit right in. Come join us on the #embedded-data-teams channel of BitsInBio Slack. ***
The reason breaking up the process is difficult is that one particular piece doesn’t want to be broken up: the software at the center of it all. Whether it’s an ELN, a LIMS, or some other solution, getting even an initial version of the right software in place takes a while, whether that means wiring up a custom front end and database or working with a vendor to install something off the shelf. Neither path makes for a good Minimum Viable Change.
This is an example of how the right solution isn’t always the best solution (to start with). We want to end up with a database and a nice UI, but we don’t need to start there. The software is, in practice, the easiest part of the system to change or replace. Changing the specification and/or the processes involves changing the behaviors of many different people. Writing/changing software is fun by comparison. We should start by evolving the process to create a space that the right software can fit into.
So the first Minimum Viable Change should be to implement some type of data entry using whatever tools will get you there fastest. This is probably a template in Excel, and that’s OK. Choose a shared folder that everyone can put the finished files in. Use whatever data specification you can come up with from a few conversations with stakeholders. It should be in the right ballpark, but it will still be wrong, and that’s OK too. The beauty of an Excel template is that it’s easy to update, which will be the next Minimum Viable Change. And the next after that.
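To make that concrete, here's a minimal sketch of what the "template plus shared folder" stage could look like in Python with pandas. Every specific here is hypothetical: the folder path, the column names, and the spec itself are stand-ins for whatever your own stakeholder conversations produce.

```python
# A minimal sketch of the "Excel template plus shared folder" stage.
# Requires pandas and openpyxl. The folder path, column names, and spec
# below are hypothetical stand-ins, not from any real process.
from pathlib import Path

import pandas as pd

# Version 1 of the spec: just the required columns. Type checks and
# controlled vocabularies can each be a later Minimum Viable Change.
REQUIRED_COLUMNS = ["sample_id", "collected_on", "concentration_ng_ul"]

SHARED_FOLDER = Path("/shared/wet-lab-submissions")  # hypothetical location


def check_submission(path: Path) -> list[str]:
    """Return a list of problems with one submitted file; empty means OK."""
    df = pd.read_excel(path)
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    return [f"{path.name}: missing column '{c}'" for c in missing]


if __name__ == "__main__":
    for xlsx in sorted(SHARED_FOLDER.glob("*.xlsx")):
        for problem in check_submission(xlsx):
            print(problem)
```

A script like this can run on a schedule or by hand; the point is that checking submissions stays cheap enough to change whenever the spec does.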
As you iteratively tweak the Excel template (or whatever you’re using), you can build out the right software in parallel. By the time it’s ready, you will have accomplished three things:
1. By iterating on the template, you will have defined the specification that the final software needs to implement.
2. You will have trained your users on the process, so they'll know how to use the software from day one.
3. All the data collected while you were building out the software will be in a machine-readable format in a central location. Yes, the schema evolved throughout the process, but it's far more consistent than the alternative, as the sketch below illustrates.
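To illustrate that third point, here's a rough sketch of pulling the accumulated files into a single table, using the same hypothetical folder and column names as the earlier snippet. The rename map is a placeholder for whatever headers your earlier template versions actually used.

```python
# A sketch of what "machine-readable and in one place" buys you, with the
# same hypothetical folder and column names as above. Older files may use
# earlier column names; a small rename map brings every version of the
# schema into line.
from pathlib import Path

import pandas as pd

SHARED_FOLDER = Path("/shared/wet-lab-submissions")  # hypothetical location

# Headers from earlier template versions (hypothetical examples).
RENAMES = {
    "SampleID": "sample_id",
    "conc (ng/uL)": "concentration_ng_ul",
}

frames = []
for xlsx in sorted(SHARED_FOLDER.glob("*.xlsx")):
    df = pd.read_excel(xlsx).rename(columns=RENAMES)
    df["source_file"] = xlsx.name  # keep provenance for debugging
    frames.append(df)

# One tidy table, ready to backfill into the real system when it ships.
all_data = pd.concat(frames, ignore_index=True)
print(all_data.head())
```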
Best of all, you will know from the very beginning whether you're heading in the right direction. You'll know if the template is serving its purpose, and how close you are to done, because users will have been using it throughout the process, one Minimum Viable Change at a time.