For many wet lab scientists, Excel is a saving grace that lets them easily and flexibly collect and manipulate all their experimental (meta)data. For many data scientists, computational biologists, and their colleagues, Excel is the bane of their existence, the place where they spend hours reformatting and cleaning that very same data.
So, is Excel a blessing or a curse?
The beauty of Excel is speed and flexibility: The first time a scientist does an experiment, they can quickly create a spreadsheet whose structure exactly matches the data that they need to track. The databases that many of us would prefer they use just don’t do that. So if you take away their spreadsheet, they’re not going to switch to a database - they’re going to pick up pen and paper.
The problem isn’t the software itself, but how we think about its role in the larger process. Excel is great as a prototyping tool the first few times you run a certain process or experiment. It becomes less great the tenth or twentieth or hundredth time you do it. But as a tool for transitioning to a database, it can work pretty well.
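To make that transition concrete: once a spreadsheet's structure has stabilized, the hop into a database can be a few lines of glue code. Here's a minimal sketch, assuming Python with pandas and a local SQLite file; the file, sheet, and table names are hypothetical stand-ins.

```python
import sqlite3

import pandas as pd

# Hypothetical names -- substitute your own file, sheet, and table.
EXCEL_PATH = "plate_reader_results.xlsx"
SHEET_NAME = "results"
TABLE_NAME = "plate_reader_results"

# Read the spreadsheet exactly as the scientist structured it.
df = pd.read_excel(EXCEL_PATH, sheet_name=SHEET_NAME)

# Normalize column names into sane database identifiers.
df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]

# Append into a local SQLite database; if_exists="append" means each
# new run of the experiment lands in the same table.
with sqlite3.connect("experiments.db") as conn:
    df.to_sql(TABLE_NAME, conn, if_exists="append", index=False)
```

The point isn't this particular stack. It's that the spreadsheet's settled structure becomes the table's schema, so the scientist's day-to-day workflow doesn't have to change on day one.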
The key is to set expectations from the outset, with both the users who are collecting data and the users who are analyzing it. It doesn't make sense to enforce the consistency that you'll eventually need until you've seen the problem enough times to know what that looks like. Excel lets you iterate efficiently until you get there, but it's the data collector's responsibility to be consistent about whatever they can, as early as they can.
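One lightweight way to nudge that consistency along, without taking the spreadsheet away, is a small validation script the analyst runs against each incoming sheet. This is a sketch under the same assumptions as above (Python with pandas); the expected column names are made up for illustration, and the check should evolve as the structure settles.

```python
import pandas as pd

# Hypothetical expected columns -- this set grows and changes as the
# experiment's structure stabilizes over repeated runs.
EXPECTED_COLUMNS = {"sample_id", "date", "condition", "od600"}

def check_sheet(path: str) -> list[str]:
    """Return human-readable consistency problems for one spreadsheet."""
    df = pd.read_excel(path)
    # Normalize column names before comparing, so "Sample ID" == "sample_id".
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    problems = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    extra = set(df.columns) - EXPECTED_COLUMNS
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected columns: {sorted(extra)}")
    if "sample_id" in df.columns and df["sample_id"].duplicated().any():
        problems.append("duplicate sample_id values")
    return problems

# Flag problems before the data lands in the shared pile.
for problem in check_sheet("plate_reader_results.xlsx"):
    print("WARNING:", problem)
```

A script like this doubles as documentation: it's a running record of what "consistent" currently means, which is exactly what you'll need when it's time to design the real schema.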
The long-term solution should probably involve a database rather than Excel. But you can’t get there unless everyone involved agrees on what the journey looks like.