I am speaking to the environmental modellers now. Imagine you have been asked to make your model better: to improve its performance and generally make it a more useful tool for decision makers. You have a generous budget and free rein to do whatever you want. Take a short moment to think about what you would do.
When you read the paragraph above, what did you think about? I am going to guess it was something along the lines of “Amazing, I’m going to add in representation of that process the model currently doesn’t have”. Maybe it was how you would increase the resolution of the model or how you would collect more data to add into it. I am also going to guess that you did not think about what you would take away from your model.
A recent study by Adams et al. (2021), published in Nature, found that we are hard wired to solve problems by adding things in rather than by taking things away, even when subtraction would have been the better and more efficient approach. I really encourage you to watch the video below that nicely summarises this work.
I know when I have approached modelling problems, my go-to has been to add something in rather than to consider what could be taken away. Yet often when we add new processes or increase resolution we may improve our outputs, but we also increase the complexity, resulting in slower processing speeds and greater uncertainty. Judged on how useful they are to decision makers, we may actually have made our models worse.
The European Centre for Medium-Range Weather Forecasts (ECMWF) has recently upgraded its Integrated Forecast System. One of the improvements they made is a great example of taking something away to solve a problem. Previously, they had stored numbers using 64 bits of memory within their computers. Using 64-bit rather than 32-bit numbers lets you store more significant digits, i.e., use more decimal places and increase the precision of the output. This sounds like it is better; it sounds like if you had the option to go to 128-bit you ought to, as you could have even more digits and even greater precision still. The flipside is that storing and computing with bigger numbers takes a tiny bit longer each time, and when multiplied over the vast number of sums the supercomputers at ECMWF do, this adds up. They realised that they did not need that level of precision and that, for many processes, using 32-bit instead of 64-bit made little difference to the output. Making the switch reduced the computational load by 40%, meaning swifter, and therefore more useful, results.
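You can see the trade-off for yourself in a few lines of Python. This is just an illustrative sketch using NumPy, not ECMWF's actual code: it shows that a 32-bit float carries roughly 7 significant decimal digits against roughly 15-16 for a 64-bit float, while using half the memory.

```python
import numpy as np

# The same calculation carried out at two precisions.
x64 = np.float64(1.0) / np.float64(3.0)
x32 = np.float32(1.0) / np.float32(3.0)

print(f"float64: {x64:.17f}")  # accurate to ~15-16 significant digits
print(f"float32: {x32:.17f}")  # accurate to ~7 significant digits

# The discrepancy between the two is tiny -- often far smaller than
# the other uncertainties in an environmental model.
print(f"difference: {abs(float(x32) - x64):.2e}")

# Halving the bit width halves the memory for the same-sized array,
# which is where the computational savings come from.
a64 = np.ones(1_000_000, dtype=np.float64)
a32 = a64.astype(np.float32)
print(a64.nbytes, a32.nbytes)  # 8000000 vs 4000000 bytes
```

Whether that tiny discrepancy matters depends entirely on the process being modelled, which is exactly the judgement ECMWF had to make before switching.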
This is nothing new in numerical modelling, and reduced-complexity approaches are popular and long established. However, those approaches were designed with a conscious effort to take things away, and it is when we stop making this conscious effort that we default back to adding things in as a first option. This is especially true, as the video tells us, when our cognitive load is high. Next time you sit down to solve a modelling problem, make sure to stop and ask yourself: what can I take away to make this better?
Fridays are my non-work day, so I try to write a short blog post on my thoughts about environmental modelling, games, or really anything else that is on my mind. The purpose is nothing more than the love of writing and practice, but I do hope you enjoy them. For the avoidance of any doubt, all of the views and opinions I express in these blogs are very much my own and not those of my employer.