This will be my last post of the year, so I wanted to do a quick summary of Scaling Biotech in 2024. Back in October, I hit 1,200 subscribers, which is easily ten times more than I ever expected for a newsletter about a niche subject like data tools and infrastructure for early discovery biotech. So thank you all for following along - your encouragement is what keeps me writing this.
Over the course of the year, I made a few major changes to how I think about both this newsletter and my company, Merelogic. The biggest was rethinking what I can do to drive the kinds of change I want to see in how biotechs manage data, beyond just writing about it here. In particular, I started looking for ways to help the biotech software companies that are leading the charge to better communicate their visions to the teams that need them. That led to the Biotech Reference Stack, my second newsletter, Viral Esoterica, the series of webinars I started a few months ago, and the marketing consulting work I began offering through Merelogic.
This will continue to evolve in 2025 as I figure things out. In fact, one direction I'm really interested in exploring is how LLMs will impact the way teams design their data infrastructure and evaluate implementation options, including off-the-shelf software. I started the year as very much an AI skeptic, and I'm still fairly skeptical about some of the grander ideas of where AI may lead. But after writing more about LLMs in the last few posts, and using them more in my own work, I can see they're going to drive massive changes in how we work - no doubt some good and some bad.
I’m planning to start off 2025 with more posts on LLMs in early discovery biotech. And I’m planning to explore some new ideas related to how data teams use LLMs to discover and evaluate data tools and software. I’ll probably write more about that on LinkedIn and Viral Esoterica, but you may see some of it here.
So, I hope you all have a nice holiday break, and I’ll see you back here in 2025!