Now that we understand the different pieces of the conceptual model for building a digital twin of the lab, it’s time to turn our attention to implementation. But this isn’t going to be a lengthy tirade about why you should use a certain JavaScript framework or back-end language. Not this week, at least. Instead, I want to discuss some of the trade-offs that go into the overall design. And this week, I’ll start with choosing the right balance between flexibility and consistency.
Things change, and your software will need to change with them. For the things that change often, the software needs to be easy to change so it can keep up. For the things that change rarely, the software should be intentionally difficult to change, to ensure consistency. But in between those is a very wide spectrum, which I like to bucket into three broad ranges:
1. Developer-controlled changes - Basically, anything where you need to write code, muck with version control, and go through a deploy process.
2. Super-user-controlled changes - These are configuration changes made in essentially the same UI as the rest of the application and applied immediately, but tucked away in an admin/settings page.
3. Ad-hoc changes - These are decisions and local configuration changes that users can make during the normal use of the software.
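To make the three ranges concrete, here's a small hypothetical sketch of where each kind of change might live in a lab-inventory app. All of the names below are invented for illustration; none come from a real system.

```python
# Hypothetical sketch: where each range of change lives in a lab app.

# 1. Developer-controlled: changing this tuple means editing code,
#    going through review, and deploying.
REQUIRED_SAMPLE_FIELDS = ("sample_id", "received_at", "storage_temp_c")

# 2. Super-user-controlled: toggled from an admin/settings page and
#    applied immediately, with no deploy.
admin_settings = {"require_barcode_scan": True, "default_storage_temp_c": -80}

# 3. Ad-hoc: free-form tags a user attaches to a record during normal use.
sample = {"sample_id": "S-001", "received_at": "2024-01-15", "storage_temp_c": -80}
sample["tags"] = ["rerun-needed", "low-volume"]
```

The point of the sketch is the friction gradient: each step down the list is easier to change and harder to keep consistent.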
So, here’s my controversial opinion: I want any major, long-term design decisions to be made by people who viscerally understand the concept of technical debt. And while that doesn’t mean they have to know how to code, there’s a strong correlation between the two. And I see this as the main advantage of number 1.
On the other hand, if there’s even a minor gap between the user’s mental model and the model implemented in the system, the user will find other means to fill that gap, and your digital twin will suffer. So the advantage of number 3 is that users can keep the system’s model up to date in real time.
The low-code solutions that are all the rage these days fall into that second range. They're meant to strike the ideal balance between the other two, but they often end up combining the worst of both: design decisions are made by folks who don't understand technical debt, yet there's enough friction that the models still drift out of sync.
Instead, I think the right way to reconcile 1 and 3 is to implement a sort of two-stage system where users can add small changes in real time as their needs evolve (3), while in parallel developers (who understand technical debt) examine which changes make sense and either incorporate them into code or eliminate them (1).
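A minimal sketch of what that two-stage loop could look like, assuming records carry a free-form `extra` dict that users extend in real time (range 3), and developers periodically review which ad-hoc keys are common enough to promote into the fixed schema or eliminate (range 1). Everything here, including the `promotion_candidates` helper and the threshold, is an invented illustration, not a real implementation.

```python
from collections import Counter

# Stage one (range 3): users attach ad-hoc keys as their needs evolve.
records = [
    {"sample_id": "S-001", "extra": {"operator": "kim", "lot": "A7"}},
    {"sample_id": "S-002", "extra": {"operator": "raj"}},
    {"sample_id": "S-003", "extra": {"operator": "kim", "note": "rushed"}},
]

def promotion_candidates(records, threshold=0.5):
    """Stage two (range 1): flag ad-hoc keys used on at least `threshold`
    of records, so a developer can promote them into the fixed schema
    or deliberately eliminate them."""
    counts = Counter(key for r in records for key in r["extra"])
    return [k for k, n in counts.items() if n / len(records) >= threshold]

print(promotion_candidates(records))  # ['operator']
```

The interesting design choice is that the review step is deliberately manual: the system surfaces candidates, but a person who understands technical debt decides what becomes permanent.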
I haven’t seen anyone try to implement this yet, and maybe it’s too idealistic to expect it to work. But I think it’s a useful framing for thinking about where software should fall on the flexibility vs consistency spectrum.
Scaling Biotech is brought to you by Merelogic. We design data models and infrastructure that help early-stage biotech startups turn their AI/ML prototypes into tangible impact. To learn more, send me an email at jesse@merelogic.net
> a sort of two-stage system where users can add small changes in real time as their needs evolve
This was done by Riffyn Nexus with a 'dynamic schema alteration'-style process in 2017. Riffyn didn't exist long enough to release the second part (but it was in development).