Accountants traditionally, and understandably, close the books at the end of each year and start a fresh set. That’s how they need to work, because that’s how taxes are reckoned, and annual horizons permeate all aspects of management culture, from budgeting to reports to shareholders. The difficulty for me arises when annual horizons impinge on performance reporting and, in particular, the monitoring of energy performance.
Let’s think first about ‘year to date’ budget tracking applied to energy expenditure. If you close the books and start afresh, a little way into the year you’ll get a report for your first month, and you will know that any attempt to extrapolate from that point to the end of the year is fruitless, not least because of the seasonality in expenditure. One company I was auditing recognised this and only bothered to track budgets on a quarterly basis, acknowledging that even that would give an inaccurate picture. The energy manager was quite candid about it. It did not surprise me when he admitted that, usually around halfway into the year, it would dawn on everybody that they were heading for an overspend. At that point word would come down to switch to monthly reviews, followed in the latter part of the year by the panicked adoption of weekly and then even daily reviews, as if more and more frequent scrutiny could compensate for slack management in the first half of the year.
There is a simple trick which helps here: each month, sum the latest twelve months. It’s not a perfect measure, because it doesn’t capture underlying trends and exceptional events, but because every twelve-month window contains each season exactly once it does a pretty good job of mitigating purely seasonal variations, whether they stem from weather or trading patterns. A twelve-month moving total is about the best simple estimate you can get of the year-end out-turn, and the closer you get to the end of the year the better it becomes.
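To make the mechanics concrete, here is a minimal sketch in Python (not from the article; the function name and figures are invented for illustration) of how each month’s twelve-month moving total is formed:

```python
def moving_annual_totals(monthly_values):
    """Twelve-month moving totals, one per month once a full year
    of history is available."""
    return [
        sum(monthly_values[i - 11 : i + 1])
        for i in range(11, len(monthly_values))
    ]

# Invented monthly energy spend with a seasonal dip in summer.
spend = [41.2, 38.5, 35.1, 29.8, 24.6, 21.3,
         20.9, 22.4, 27.7, 33.0, 38.9, 42.6,   # first full year
         43.0, 39.1, 34.4]                     # start of the second year

for month, total in enumerate(moving_annual_totals(spend), start=12):
    print(f"month {month:2d}: twelve-month total {total:6.1f}")
```

Plotted month by month, these totals trace a line largely free of seasonal swings, so any genuine drift in expenditure stands out.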
Another reporting convention that crops up quite frequently in energy management is ‘year-on-year’, where, typically, the consumption in a recent month is gauged against the corresponding month the previous year. Again, it’s a method inherited from general performance management, and its purpose is to mitigate the effects of seasonality. But it suffers from two drawbacks. Firstly, there is the implicit assumption that conditions in the prior year were indeed the same. This might be broadly true for the weather in relation to heating fuel, though even then it isn’t really a safe assumption, and it stretches credulity to imagine that it applies in most other situations. Secondly, it ignores the fact that there could have been something wrong the year before, wasting energy and in effect raising the allowance for a year later.
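For clarity, the whole convention amounts to no more than the following comparison (a hypothetical sketch; the figures are invented):

```python
# Year-on-year comparison for a single month (invented figures).
march_last_year = 41800   # kWh
march_this_year = 39350   # kWh

change = march_this_year - march_last_year
percent = 100 * change / march_last_year
print(f"Year-on-year change: {change:+} kWh ({percent:+.1f}%)")

# An apparent 5.9% improvement -- but only if last March's weather,
# output and housekeeping really were comparable, which is exactly
# the assumption in question.
```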
Throughout this series I have pushed the concept of ‘expected’ consumption, calculated by a formula based on one or more independent, measurable driving factors. What the formula does, if you derive it by regression analysis, is give you a dynamic yardstick which lets you gauge any given month or week against all prior months or weeks. Better still, if optimised with cusum analysis, that yardstick will exclude unrepresentative behaviour and give you a tough but accurate and achievable target.
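As an illustration of the idea, here is a sketch (invented figures, with heating degree days assumed, purely for the example, to be the single driving factor) of fitting the formula by regression and accumulating the cusum of deviations from it:

```python
import numpy as np

# Invented monthly data: heating degree days as the driving factor,
# and metered gas consumption in kWh.
degree_days = np.array([350, 300, 260, 180, 100, 40,
                        20, 30, 90, 190, 280, 340])
consumption = np.array([52000, 46500, 41800, 31000, 20500, 12800,
                        10200, 11500, 19400, 32100, 44000, 50600])

# Least-squares regression: expected = base_load + slope * degree_days.
slope, base_load = np.polyfit(degree_days, consumption, 1)

expected = base_load + slope * degree_days
deviation = consumption - expected

# Cusum: the running total of deviations from expected consumption.
# Random scatter keeps it hovering near zero; a sustained change of
# slope signals a real shift in performance.
cusum = np.cumsum(deviation)

for month, (e, c) in enumerate(zip(expected, cusum), start=1):
    print(f"month {month:2d}: expected {e:8.0f} kWh, cusum {c:+9.0f} kWh")
```

In practice you would re-fit the formula over a stretch of months that the cusum shows to be representative, and use the result as the yardstick for future weeks or months.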