Last week I attended a thought-provoking presentation on digital twinning (DT) by the energy manager at Glasgow University, which has built digital twins for five of its buildings. It’s not a topic I know much about, but I was interested because, going by what it says on the tin, it sounded like a potentially good tool for what I would call ‘discrepancy detection’ as a way of saving energy: spotting when a real building’s behaviour deviates from what it should be doing under the prevailing circumstances, since such deviations nearly always incur a penalty in excess energy consumption. The other potential benefit of DT, to my mind, would be the ability to try alternative control strategies on the virtual building to see whether they yielded savings, and what adverse impacts there might be on service levels. This would be less intrusive than the default tactic of experimenting on live occupants.
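To make that second idea concrete, here is the kind of virtual experiment I mean, as a minimal Python sketch. It is emphatically not a digital twin: just a single-zone lumped-capacitance model with invented parameters (the R and C values, heater size, sinusoidal weather and crude on/off thermostat are all my own assumptions, not anything presented in the talk). It compares a constant 21 °C setpoint against a night-setback schedule and reports the energy delivered alongside the hours spent noticeably below setpoint, as a rough proxy for service level.

```python
# A toy "virtual building": one thermal zone modelled as a single
# resistance R (K per kW) and capacitance C (kWh per K). All parameters
# are invented; a real model would be calibrated against the actual site.
import math

R = 8.0          # thermal resistance, K per kW of heat loss
C = 6.0          # thermal capacitance, kWh per K
HEATER_KW = 5.0  # heating plant capacity
DT_H = 1.0 / 6   # simulation timestep, hours (10 minutes)
DAYS = 7

def outdoor_temp(hour):
    """Crude sinusoidal outdoor temperature, averaging 5 C."""
    return 5.0 + 4.0 * math.sin(2 * math.pi * (hour - 10) / 24)

def setpoint_constant(hour):
    return 21.0

def setpoint_setback(hour):
    """Night setback: 21 C from 07:00 to 19:00, otherwise 16 C."""
    return 21.0 if 7 <= hour % 24 < 19 else 16.0

def simulate(setpoint_fn):
    t_in = 21.0
    energy_kwh = 0.0
    cold_hours = 0.0   # time spent more than 1 K below the setpoint
    for step in range(int(DAYS * 24 / DT_H)):
        hour = step * DT_H
        target = setpoint_fn(hour)
        # Simple on/off thermostat: full output whenever below setpoint.
        q = HEATER_KW if t_in < target else 0.0
        energy_kwh += q * DT_H
        if t_in < target - 1.0:
            cold_hours += DT_H
        # Explicit Euler step for dT/dt = (T_out - T_in)/(R*C) + Q/C
        t_in += DT_H * ((outdoor_temp(hour) - t_in) / (R * C) + q / C)
    return energy_kwh, cold_hours

for name, fn in [("constant 21 C", setpoint_constant),
                 ("night setback", setpoint_setback)]:
    kwh, cold = simulate(fn)
    print(f"{name:15s}  energy {kwh:6.0f} kWh  comfort shortfall {cold:4.1f} h")
```

Even with these made-up numbers the trade-off you would be probing shows up: the setback schedule uses less energy over the week, but the slow warm-up leaves the zone below the comfort band for part of each morning. A real exercise would calibrate the model against metered data and use an actual weather file.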
Unfortunately, I came away with the impression that we are still some way off achieving these aims. The big obstacle seems to be that DT is not dynamic: it only provides a static model. That surprised me a lot, and if any readers have evidence to the contrary, please get in touch. Another misgiving (and to be fair, the presenter was very candid about these issues) was the cost and difficulty of building and calibrating a detailed virtual model of a building and its systems. Then there is the question of all the potential influencing factors that you cannot afford to measure.
My conclusions are in two parts. One is that simulating the effect of alternative control strategies would have to be done with software that stops short of a full DT implementation: in other words, with much-simplified dynamic block models of the sort sketched above. The other is that discrepancy detection is probably still best done with conventional monitoring-and-targeting approaches using data at the consumption-meter level, with expected consumption patterns derived empirically from historical observations rather than from theoretical models.
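To illustrate that second conclusion, here is the sort of calculation I have in mind, again as a rough Python sketch with invented numbers: weekly consumption is regressed against heating degree days from past meter readings, and any new week whose metered figure sits more than a chosen tolerance above the fitted expectation is flagged for investigation. The figures, the 5 % tolerance and the week labels are all made up; in practice the tolerance would reflect the scatter of the historical fit, and other driving factors (occupancy, production and so on) could be added in the same way.

```python
# Toy monitoring-and-targeting check: expected consumption is derived
# empirically from past meter readings, not from a physical model.
# Weekly figures below are invented: (heating degree days, kWh consumed).
history = [(52, 3950), (61, 4380), (47, 3700), (70, 4810),
           (58, 4210), (39, 3390), (65, 4600), (55, 4080)]

# Ordinary least-squares fit of kWh = base + slope * degree_days.
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
         / sum((x - mean_x) ** 2 for x, _ in history))
base = mean_y - slope * mean_x

def expected_kwh(degree_days):
    """Expected weekly consumption under the prevailing weather."""
    return base + slope * degree_days

# Discrepancy detection on new weeks: flag anything more than, say,
# 5 % above expectation.
new_weeks = [("w/c 6 Jan", 60, 4400), ("w/c 13 Jan", 45, 4350)]
for label, hdd, actual in new_weeks:
    expected = expected_kwh(hdd)
    excess = actual - expected
    flag = "INVESTIGATE" if excess > 0.05 * expected else "ok"
    print(f"{label}: expected {expected:.0f} kWh, actual {actual}, "
          f"excess {excess:+.0f} kWh -> {flag}")
```

The appeal of this approach is that the baseline absorbs the building's quirks automatically, because it is built from how the building has actually behaved rather than from how a model says it ought to behave.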