When you install an energy-saving measure, it is important to evaluate its effect objectively. In the majority of cases this will be achieved by a “before-and-after” analysis making due allowance for the effect of the weather or other factors that cause known variations.
There are, however, some types of energy-saving device which can be temporarily bypassed or disabled at will, and for these it may be possible to do interleaved on-off tests. The idea is that by averaging out the ‘on’ and ‘off’ consumptions you can get a fair estimate of the effect of having the device enabled. The distorting effect of any hard-to-quantify external influences—such as solar gain or levels of business activity—should tend to average out.
A concrete example may help. Here is a set of weekly kWh consumptions for a building where a certain device had been fitted to the mains supply, with the promise of 10% reductions. The device could easily be disconnected and was removed on alternate weeks:
Week   kWh     Device?
----   -----   -------
   1   241.8   without
   2   223.0   with
   3   221.4   without
   4   196.4   with
   5   200.1   without
   6   189.6   with
   7   201.9   without
   8   181.3   with
   9   185.0   without
  10   208.5   with
  11   181.7   without
  12   188.3   with
  13   172.3   without
  14   180.4   with
The mean of the even-numbered weeks, when the device was active, is 195.4 kWh, compared with 200.6 kWh in the weeks when it was disconnected, giving an apparent average saving of 5.2 kWh per week. This is much less than the promised ten percent, but there is a bigger problem. If you look at the figures you will see that the “with” and “without” weeks both have a spread of values, and that their ranges overlap. The degree of spread can be quantified through a statistical measure called the standard deviation, which in this case works out at 19.7 kWh per week. I will not go into detail beyond pointing out that roughly two-thirds of the readings can be expected to fall within ±19.7 kWh of the mean purely by chance. Measured against that yardstick, the apparent saving of 5.2 kWh is clearly not statistically significant, and the test therefore failed to prove that the device had any effect (as a footnote, when the analysis was repeated taking sensitivity to the weather into account, the conclusion was that the device apparently increased consumption).
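For anyone wishing to reproduce the arithmetic, the figures can be checked with a few lines of Python. This is purely illustrative: the data are taken from the table above, and the significance threshold quoted in the comments is the conventional two-sided 5% value for samples of this size.

    import math
    import statistics

    # Weekly kWh readings from the table above.
    without = [241.8, 221.4, 200.1, 201.9, 185.0, 181.7, 172.3]   # odd weeks, device disconnected
    with_dev = [223.0, 196.4, 189.6, 181.3, 208.5, 188.3, 180.4]  # even weeks, device connected

    mean_without = statistics.mean(without)        # ~200.6 kWh
    mean_with = statistics.mean(with_dev)          # ~195.4 kWh
    saving = mean_without - mean_with              # ~5.2 kWh per week

    # Spread of all fourteen readings taken together (sample standard deviation).
    spread = statistics.stdev(without + with_dev)  # ~19.7 kWh

    # Welch's t-statistic for the difference between the two groups.
    se = math.sqrt(statistics.variance(without) / len(without)
                   + statistics.variance(with_dev) / len(with_dev))
    t = saving / se   # ~0.5, far short of the roughly 2.2 needed for 5% significance
                      # with samples this small

    print(f"Apparent saving: {saving:.1f} kWh/week")
    print(f"Standard deviation of all readings: {spread:.1f} kWh")
    print(f"t-statistic: {t:.2f}")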
When contemplating tests of this sort it is important to choose the length of the on-off interval carefully. In the case cited, a weekly interval was used because the building had weekend/weekday differences. A daily cycle would also be inappropriate for monitoring heating efficiency in some buildings because of the effect of heat storage in the building fabric: a shortfall in heat input one day might be made up the next. Particular care is always needed where a device that reduces energy input may cause a shortfall in output which then has to be made up in the following interval, when the device is disconnected. This will notably tend to happen with voltage reduction in electric heating applications. During the low-voltage intervals the heaters run at lower output, and this may result in a heat deficit being ‘exported’ to the succeeding high-voltage interval, when additional energy has to be consumed to make up the shortfall, making the high-voltage interval look worse than the low-voltage one. To minimise this distortion, be sure to set the interval length several times longer than the typical equipment cycle time.
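To see how this distortion arises, here is a deliberately simplified sketch with invented numbers (a 200 kWh underlying demand per interval, heaters meeting only 95% of that demand at reduced voltage, and a simple alternating pattern are all assumptions for illustration, not measurements):

    # Toy model of the carry-over distortion described above (all numbers invented).
    DEMAND_KWH = 200.0      # underlying heat requirement per interval
    DELIVERY_FACTOR = 0.95  # fraction of demand the heaters can meet at reduced voltage
    N_INTERVALS = 14        # alternating off/on, as in the example above

    consumption = []
    deficit = 0.0
    for i in range(N_INTERVALS):
        device_on = (i % 2 == 1)                      # odd intervals: device connected
        if device_on:
            delivered = DELIVERY_FACTOR * DEMAND_KWH  # output capped by reduced voltage
            deficit += DEMAND_KWH - delivered         # shortfall exported to next interval
        else:
            delivered = DEMAND_KWH + deficit          # full voltage: demand plus catch-up
            deficit = 0.0
        consumption.append(delivered)

    on_mean = sum(consumption[1::2]) / (N_INTERVALS // 2)
    off_mean = sum(consumption[0::2]) / (N_INTERVALS // 2)
    print(f"'Device on' intervals average:  {on_mean:.1f} kWh")
    print(f"'Device off' intervals average: {off_mean:.1f} kWh")

Run as written, the ‘device on’ intervals average about 190 kWh against roughly 209 kWh for the ‘device off’ intervals: an apparent saving even though essentially no energy has been saved, because the deficit has merely been shifted. With intervals several times longer than the equipment cycle time, most of any shortfall is made up within the same interval, which is one way of seeing the rationale for the rule of thumb above.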
There are perhaps two further stipulations to add. Firstly, the number of ‘on’ and ‘off’ intervals should be equal; secondly, although there is no objection to omitting an interval for reasons beyond the control of either party (such as a metering failure), it would be prudent to insist that intervals are omitted only in pairs, and that tests always recommence consistently in either the ‘off’ or ‘on’ state. This avoids the risk of skewing the results by selectively removing individual samples.
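As a purely illustrative sketch, the pairing rule might be applied in an analysis script along these lines (the convention of pairing each week with its immediate neighbour, so that weeks 1 and 2, 3 and 4, and so on are discarded together, is an assumption rather than anything prescribed above):

    # Hypothetical helper: discard a lost interval together with its partner so
    # that 'on' and 'off' counts stay equal and no sample is removed on its own.
    def drop_in_pairs(readings, lost):
        """readings: list of interval kWh values in alternating off/on order.
        lost: zero-based indices of intervals that must be discarded
        (e.g. because of a metering failure).
        Returns the readings with each lost interval and its partner removed."""
        to_drop = set()
        for i in lost:
            partner = i ^ 1          # pairs indices (0,1), (2,3), (4,5), ...
            to_drop.update({i, partner})
        return [kwh for j, kwh in enumerate(readings) if j not in to_drop]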