The methods that we use for detecting and diagnosing avoidable energy waste are perfect for verifying the results of energy-saving projects. In both cases we are comparing actual consumptions (at, say, weekly intervals) with the corresponding expected values, calculated from a formula based on a relevant driving factor or factors. For routine monitoring purposes we expect actual and expected to agree (give or take some random error), while for verification purposes we expect actuals that fall consistently below expected values, or, more precisely, below the values that would previously have been expected.
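As a minimal sketch of the arithmetic, assuming a single driving factor (weekly heating degree days, say) and a straight-line expected-consumption formula, the week-by-week comparison might look like the following; the coefficients, data and function names are illustrative assumptions, not figures from any real project.

```python
# Minimal sketch: comparing actual weekly consumption with expected values
# calculated from a straight-line formula on one driving factor.
# All coefficients and data are illustrative assumptions.

def expected_consumption(driving_factor, intercept, slope):
    """Expected consumption = intercept + slope * driving factor (e.g. degree days)."""
    return intercept + slope * driving_factor

# Hypothetical regression coefficients from historical data
INTERCEPT = 1500.0   # kWh per week at zero degree days
SLOPE = 48.0         # kWh per degree day

# Hypothetical weekly records: (degree days, actual kWh)
weeks = [(55, 4180), (61, 4420), (48, 3900), (70, 4750)]

for dd, actual in weeks:
    expected = expected_consumption(dd, INTERCEPT, SLOPE)
    deviation = actual - expected   # near zero in routine monitoring; consistently negative if savings are real
    print(f"degree days={dd:3d}  actual={actual:6.0f}  expected={expected:7.1f}  deviation={deviation:+7.1f}")
```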
Strictly speaking, after implementing an energy-saving measure (ESM), the old formula for expected consumption should be replaced with a new one, enabling us to police performance at its new, lower, achievable level. I’ll come back to that idea later. The point is that we do not discard the old (pre-ESM) formula: it tells us what consumption would have been in the absence of the ESM, and so provides, in effect, a dynamic yardstick against which to gauge the savings week by week.
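To illustrate the yardstick idea, here is a minimal sketch that retains the pre-ESM baseline formula and uses it to estimate the avoided consumption in each week after the measure; again the coefficients and data are hypothetical.

```python
# Minimal sketch: the pre-ESM ('historical baseline') formula retained as a
# dynamic yardstick. Avoided consumption = baseline expected minus actual.
# Coefficients and data are hypothetical.

def baseline_expected(degree_days):
    # Straight-line formula fitted to pre-ESM data (illustrative coefficients)
    return 1500.0 + 48.0 * degree_days

# Hypothetical post-ESM weeks: (degree days, actual kWh)
post_esm_weeks = [(52, 3550), (66, 4020), (44, 3190), (73, 4310)]

cumulative_saving = 0.0
for dd, actual in post_esm_weeks:
    avoided = baseline_expected(dd) - actual   # what would have been used, minus what was
    cumulative_saving += avoided
    print(f"degree days={dd:3d}  saving this week={avoided:7.1f} kWh  cumulative={cumulative_saving:8.1f} kWh")
```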
To distinguish that old expected-consumption formula from the new one, we relabel it the ‘historical baseline’ formula, while the post-ESM formula, once established, becomes the new ‘target’ formula.
It will take time to establish what the new target formula is, because we need to wait until enough measurement intervals have passed to support a fresh regression analysis. However, there is another way to look at this. Many ESMs will have a predictable effect on performance (if they didn’t, they would be hard to finance), so in principle it ought to be possible to compute the ESM’s likely impact in advance. In terms of a straight-line model we could predict an x% reduction in the intercept and a y% reduction in the slope; for example, a measure that cuts fixed standing losses would mainly reduce the intercept, while one that improves the efficiency of the weather-dependent load would mainly reduce the slope. This gives us an instant post-ESM target formula, the advantage of which is that the consumption stream in question can still appear in overspend league tables, with any significant failure to save the anticipated amount showing up as an apparent overspend. I don’t know of anybody who actually uses this approach, but if you were operating a large chain of premises with an active retrofit programme it would be the perfect way to identify and prioritise cases where ESMs had failed.
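A sketch of that ‘instant target’ idea, under the same illustrative straight-line assumptions: the predicted percentage reductions are applied to the baseline intercept and slope, and each week’s actual consumption is then scored against the resulting target, so that a failed ESM would surface as a persistent overspend. The percentages and data below are made-up examples.

```python
# Minimal sketch: an 'instant' post-ESM target formula obtained by applying
# predicted percentage reductions to the baseline intercept and slope, then
# scoring weekly overspend against it. All figures are illustrative assumptions.

BASE_INTERCEPT = 1500.0   # kWh per week, pre-ESM
BASE_SLOPE = 48.0         # kWh per degree day, pre-ESM

X_PCT = 20.0   # predicted % reduction in intercept
Y_PCT = 10.0   # predicted % reduction in slope

target_intercept = BASE_INTERCEPT * (1 - X_PCT / 100)
target_slope = BASE_SLOPE * (1 - Y_PCT / 100)

def target_expected(degree_days):
    return target_intercept + target_slope * degree_days

# Hypothetical post-ESM weeks: (degree days, actual kWh)
weeks = [(52, 3550), (66, 4020), (44, 3190), (73, 4310)]

total_overspend = 0.0
for dd, actual in weeks:
    overspend = actual - target_expected(dd)   # positive if the predicted saving is not being achieved
    total_overspend += overspend
    print(f"degree days={dd:3d}  actual={actual:6.0f}  target={target_expected(dd):7.1f}  overspend={overspend:+7.1f}")

print(f"total apparent overspend = {total_overspend:+.1f} kWh")
```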