In my last two posts, I described a situation in which we wanted to roll out a new predictive model to the wet lab team to define the compound concentrations that would be used in upcoming experiments. When we left off, I suggested that there were two different approaches, depending on what you would do if the predictive model turns out to be junk. Today, I want to explore the first option: how we would proceed if a failed model meant we would scrap the project and move on to something else.
For context, you may want to read the previous posts, Part 1 and Part 2, before you read this one.
Since this project is an experiment to see if we want to change how we select cutoffs, the goal should be to get the data that will verify or debunk that hypothesis as quickly as possible. So we're going to forgo the fancy UI and any kind of automation. But more importantly, instead of building the model before we look for someone to try it, we're going to start by deciding how we'll either integrate it into upcoming experiments or design new experiments around it.
In other words, rather than thinking in terms of how long we'll need to get the model to where it's ready to try, we should start by identifying a specific upcoming experiment where we can try something, determine when the accompanying decision will be made, then figure out how we can have *something* ready by then. If we're worried about the cost and risk of completely revamping the concentrations, maybe that means just adjusting or adding one or two values to see their effects. As long as it provides information we can use, we can go a bit further on the next one.
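To make that concrete, here's a minimal sketch of what "adding one or two values" might look like. Everything in it is an assumption on my part: I'm imagining a scikit-learn-style classifier that predicts whether a compound is active at a given concentration, plus a simple pick-the-most-uncertain-points heuristic. The real project could look quite different.

```python
import numpy as np

def suggest_extra_concentrations(model, standard_series, n_extra=2):
    """Hypothetical helper: keep the lab's standard dilution series,
    and append the one or two concentrations where the model is least
    certain, since those measurements teach us the most."""
    candidates = np.geomspace(standard_series.min(), standard_series.max(), num=50)
    # Assumes a scikit-learn-style classifier; probabilities near 0.5
    # mean the model is most uncertain about activity at that point.
    proba = model.predict_proba(candidates.reshape(-1, 1))[:, 1]
    extras = candidates[np.argsort(np.abs(proba - 0.5))[:n_extra]]
    return np.sort(np.concatenate([standard_series, extras]))
```

The point isn't the heuristic; it's that the deliverable is one or two numbers the lab can slot into an experiment they were already going to run.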
Negotiating this with the wet lab team will depend on the story you tell about the why and how of this change. If the story you start with doesn't get traction, you may want to rethink your approach so you can tell a more compelling one. When you eventually find a story that works, it will shift how the wet lab team is thinking about the problem.
When it comes time to plan the experiment, we won’t give the biologist an app to use. The data scientist will be the app: They’ll communicate the new values directly to the lab team, for example in an email or in a meeting. Remember, we’re not investing in automation at this point. Think of this as a paper prototype.
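To show just how low-tech this can be, here's a hypothetical sketch of the "app": a few lines the data scientist runs by hand, with the output pasted into an email. The experiment ID, compounds, and values are all made up for illustration.

```python
def format_recommendations(experiment_id, recommendations):
    """Turn model-suggested concentrations into a plain-text note
    the data scientist can paste into an email to the lab team."""
    lines = [f"Suggested concentrations for {experiment_id} (uM):", ""]
    for compound, concs in recommendations.items():
        lines.append(f"  {compound}: " + ", ".join(f"{c:g}" for c in concs))
    return "\n".join(lines)

# Run by hand before the planning meeting; paste the output into an email.
print(format_recommendations(
    "EXP-0412",
    {"Compound A": [0.1, 0.5, 2.5], "Compound B": [1.0, 10.0]},
))
```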
This is a subtle shift in thinking, but it has a few big benefits: First, by explicitly deciding when you'll test the new approach, you strengthen the shared mental model with the lab team around how and when new tools are rolled out. And by keeping the first iteration small, we make sure we don't miss an opportunity for data and feedback.
Second, by using a concrete experiment as a model for the initial development, we ensure that we're developing for a realistic situation in the lab. There may be subtle details that the data team would miss when thinking about the problem in the abstract. Again, this strengthens the shared mental model between the digital and wet lab teams.
For each successive iteration of development, we should again decide, as early as possible, which upcoming experiment we'll try it on. As the iterations become more successful, and the wet lab team begins to see its potential, we can expand how we use it in the experiments. And if it looks good enough to roll out in production, then by the time we're done iterating, the wet lab team will know how to use it and will be bought into it as the right approach. (More shared mental models.)
The key here is that we've expanded the scope of the iteration cycle beyond just the model itself (the tooling). It now includes experiment design and the communication between teams (the process), driven by the larger cycle of stories and shared mental models.
+1 "The data scientist will be the app"
The 17th, 18th, and 19th centuries were amazing in this regard. Knowledge progressed to some degree via correspondence and a few journals.