
Ultra-rapid charging

“StoreDot and BP present world-first full charge of an electric vehicle in five minutes” runs the headline on this news item from BP, which actually talks about an electric scooter. The StoreDot website[1] at the time of writing was a bit more gung-ho about their new battery technology, which they claimed would enable a 5-minute full recharge of an electric car with 300-mile range[2]. Really?

Quick sense check: for a 300 mile range you’d be talking probably about a 100-kWh battery for which a 5-minute full recharge would demand 1.2 megawatts of charging capacity. That’s going to be some meaty charger. Moreover, even upping the charger voltage to 1,000 volts you’ll be drawing 1,200 amps, so I reckon the charger cables are going to need a pair of conductors of (say) 4 square centimetres cross section. And cars would need to be engineered with DC charging circuits to match …
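Here, as a minimal sketch, is that arithmetic in Python (the 3 A/mm² current density for an uncooled copper conductor is my assumption, not a figure from the text):

    # Sense check: 5-minute full recharge of an assumed 100 kWh pack.
    battery_kwh = 100        # assumed pack size for ~300 miles of range
    charge_minutes = 5
    charger_volts = 1000     # the elevated charging voltage mooted above

    power_kw = battery_kwh / (charge_minutes / 60)   # energy / time = 1200 kW
    current_a = power_kw * 1000 / charger_volts      # P = V x I -> 1200 A

    # Rough conductor sizing at an assumed ~3 A/mm^2 for uncooled copper:
    conductor_mm2 = current_a / 3                    # ~400 mm^2, i.e. 4 cm^2

    print(f"{power_kw:.0f} kW, {current_a:.0f} A, {conductor_mm2:.0f} mm2 per conductor")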

I put these points to StoreDot and they pointed me to ChargePoint’s website, which talks about “up to 500 kW” Express Plus charging using the CCS Type 2 connector, although as far as I know CCS2 goes nowhere near that rating, and when those kinds of powers are achieved they are going to need thousand-volt water-cooled charging cables with thermal sensing on the plug because of the risk of overheated contacts.

Back to the scooter that BP had seen recharged in 5 minutes. The model in question has two 48V 31.9 Ah batteries (so about 3.1 kWh) which to recharge in 5 minutes would require a 37 kW charger – plausible in a non-domestic setting. I imagine the demonstration to BP involved removing the batteries to recharge them because obviously the scooter’s onboard electrics would not be designed to handle a charging current of 800 amps.
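The same sanity check for the scooter, using the battery figures quoted above:

    # Scooter: two 48 V, 31.9 Ah batteries recharged in 5 minutes.
    volts, amp_hours, packs = 48, 31.9, 2
    battery_kwh = volts * amp_hours * packs / 1000   # ~3.1 kWh

    charge_minutes = 5
    power_kw = battery_kwh / (charge_minutes / 60)   # ~37 kW charger

    # Charging current if delivered at the 48 V battery voltage:
    current_a = power_kw * 1000 / volts              # ~765 A, of the order of 800 A
    print(f"{battery_kwh:.1f} kWh, {power_kw:.0f} kW, {current_a:.0f} A")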

My colleague Daniel did some digging and unearthed this priceless video from StoreDot in 2014, purporting to show a smartphone being completely recharged in 30 seconds using battery technology that would be released in 2016 (update January 2023: I’m still waiting…). The sceptical comments are worth reading, especially the ones about fake phone screens (and indeed the ones about exploding phones), but you can’t help noticing that in the video itself they are actually “charging” a huge battery glued to the back of the phone. So a big dose of scepticism is in order, I think, and if the link to the video no longer works you can guess why.

More credible is the news from April 2019 about battery developments using vanadium disulphide cathodes stabilised with a microscopic layer of titanium disulphide: this promises faster charging, but the researchers are careful not to say how much faster.

Postscript, 13 January 2023: StoreDot hasn’t, as far as we know, actually released a product yet, but they have opened an ‘Innovation Hub’ in the U.S. Hooray!


[1] The web page that this article originally referred to has since been moved.

[2] This was in 2019: by 2022 StoreDot’s ambitions were more muted, manifested in a roadmap that would “see the delivery of mass produced battery cells capable of 100 miles of range in five minutes of charge by 2024, 100 miles in three minutes by 2028 and 100 miles in two minutes by 2032”.

Water treatment

Scale build-up from hard water is often cited as a cause of energy waste in hot-water systems (I am talking here about ‘domestic hot water’ supply, not closed loops within central heating systems; I will come to those later). Actually though, contrary to claims for some water treatment devices, it is not necessarily the case that energy waste in DHW systems will be significant. Indeed with an electric immersion heater on a 24-hour service, all the supplied energy still gets into the water; there is no loss. Of course the rate at which the water temperature recovers will be reduced, and the heating element will fail prematurely, but those are service and reliability issues not energy waste.

The story is a little different on intermittent hot water storage of any kind. Here, because scaling will retard temperature recovery, users may extend preheat times and that will result in a marginal increase in standing heat loss. If the heat supply is from a primary (boiler-fed) water loop, the primary return temperature will be higher because scale impedes heat transfer, and this also will increase standing losses although in reality not to a significant extent in the grand scheme of things. If hot-water recovery times deteriorate markedly, users may of course dispense with time control altogether and in those circumstances avoidable standing heat loss might become significant if thermal insulation is poor.

Turning now to the effect on wet central heating circuits, scaling will affect efficiency. Scale within heat emitters (radiators and so on) will reduce heat transfer and result in higher circulating temperatures because the heat cannot escape from the water so readily. Meanwhile within the boiler itself, impaired transfer of heat into the (now hotter) system water will result in excessive heat going up the chimney, evidenced by elevated exhaust temperatures.

Furthermore in both heating and DHW systems, scale could interfere with the operation of control valves and either result in excessive heat output – with a corresponding excessive use of fuel – or inadequate heat output, which will cause people to interfere with the controls, deploy electric heaters, or take other actions that incur excess costs.

Preventing scale build-up

Simplifying the story somewhat, the main constituent of scale is calcium carbonate, which starts to form above about 35°C through breakdown of the more soluble calcium hydrogen carbonate that is present to varying degrees in the public water supply, with ‘hard’ water containing higher concentrations of it. Calcium carbonate crystals of the normal ‘calcite’ form stick to surfaces and each other, and that is what constitutes limescale.

One way to deal with this is softening which (in its strict sense) involves a chemical process to turn calcium carbonate into sodium carbonate which does not precipitate as crystals but stays in solution. The process is costly in terms of chemicals; a waste product, calcium chloride, needs to be flushed away periodically; and the softened water is unsuitable for drinking and cooking because of its high sodium content.

The alternative to chemical treatment is physical conditioning. Various proprietary methods are available. Some involve electric or magnetic fields which are supposed to affect the calcite crystals in some way (for example giving them an electric charge so that they repel each other, or in some other manner inhibiting their tendency to agglomerate).

Another class of conditioner is electrolytic. Electrolytic devices release minute quantities of zinc or iron into the water, which convert the calcium carbonate to its ‘aragonite’ form; aragonite crystals, unlike calcite, do not stick together, so they stay in suspension and do not contribute to scale formation.

For a wide-ranging introduction to energy-saving technologies look out for my one-day ‘A to Z’ courses advertised at vesma.com

With the exception of electrolytic devices, there is no scientific explanation of how or why these physical conditioners work, and there are no accepted tests of efficacy. There is only anecdotal evidence; but if it works, it works.

The one method of physical conditioning which is definitely effective (and I can vouch for it personally) is polysilicate-polyphosphate dosing. This has a dual action. It modifies the carbonate crystals to stop them sticking to each other, and it coats the inner surfaces of pipework and appliances to inhibit scale formation.

For anybody wanting further references, this note from WRc commissioned by Southern Water is what I currently regard as the most authoritative advice on the subject of water treatment techniques.

The value of a tree

We all know that trees are good and absorb carbon dioxide. But how good are they? Let’s work it out…

Trees absorb carbon dioxide at different rates depending upon their age, species and other factors but as a rough order of magnitude you can say the figure for a typical established tree is 10 kg per year. The carbon dioxide emissions associated with energy use are 0.2 kg per kWh for natural gas and (in the UK in 2018, including transmission losses) an average of 0.3 kg per kWh for electricity.

So 50.0 kWh of gas or about 33.3 kWh of electricity each generate the 10 kg of CO2 that a single tree can absorb in a year. Take that figure for electricity. As a year is 8760 hours, 33.3 kWh equates to a continuous load of only 3.8 W. So one entire tree compensates for one broadband router, a TV on standby, or a couple of electric toothbrushes or cordless phones (roughly).

And as for gas consumption: remember pilot lights? The little flame that burns continuously to ignite the main gas burner? If you had a pilot flame with a rating of 100 watts, in the course of a year it would use 876 kWh and require no fewer than 17 trees to offset its CO2 emissions.
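For anyone who wants to replay the arithmetic, here is a minimal sketch in Python using the rough figures quoted above (10 kg of CO2 per tree-year and the 2018 emission factors):

    # Tree-offset arithmetic from the figures above.
    TREE_CO2_KG_PER_YEAR = 10     # rough uptake of an established tree
    GAS_KG_PER_KWH = 0.2          # natural gas emission factor
    ELEC_KG_PER_KWH = 0.3         # UK grid average, 2018, incl. losses
    HOURS_PER_YEAR = 8760

    gas_kwh = TREE_CO2_KG_PER_YEAR / GAS_KG_PER_KWH      # 50 kWh
    elec_kwh = TREE_CO2_KG_PER_YEAR / ELEC_KG_PER_KWH    # ~33.3 kWh
    continuous_watts = elec_kwh * 1000 / HOURS_PER_YEAR  # ~3.8 W

    pilot_kwh = 100 / 1000 * HOURS_PER_YEAR              # 876 kWh/year
    trees_for_pilot = pilot_kwh * GAS_KG_PER_KWH / TREE_CO2_KG_PER_YEAR

    print(f"{continuous_watts:.1f} W of electricity offset per tree")
    print(f"{trees_for_pilot:.1f} trees to offset a 100 W pilot light")  # ~17.5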

Are the assumptions correct?

The first time I published this piece in the Energy Management Register bulletin my estimate of CO2 takeup rates was challenged. Fair enough: I plucked it from stuff I had found on the Web knowing that it might be out by an order of magnitude. So let’s do a sense check.

The chemical composition of wood is 50% carbon (on a dry-matter basis) and all that carbon came from CO2 in the air. So 1 kg of dry woody matter contains 0.5 kg of carbon, which in turn was derived from 0.5 × 44/12 = 1.833 kg CO2. Thus if we know the growth rate of a tree in dry mass per year, we can multiply that by 1.833 to estimate its CO2 takeup. Fortunately a 2014 article in ‘Nature’ has the growth figures we need. Although there is wide variability in the results, for European species with trunk diameters of 10 cm the typical growth in above-ground dry mass is 1.6 kg per year, equating to a CO2 takeup of only 2.9 kg per year (although this rises to 18 and 58 kg per year for diameters of 40 and 100 cm). So newly-planted trees (which is what we are talking about) are going to fall well short of my 10 kg/year estimate, and it will be years before they reach a size where their offsetting contribution reaches even modest levels.
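The same check in Python, using the 10 cm growth figure quoted from the Nature article:

    # CO2 uptake from dry-mass growth: wood is ~50% carbon, and each
    # kg of carbon was fixed from 44/12 kg of atmospheric CO2.
    CARBON_FRACTION = 0.5
    CO2_PER_KG_CARBON = 44 / 12                  # molar-mass ratio CO2:C

    def co2_uptake_kg(dry_mass_growth_kg_per_year):
        return dry_mass_growth_kg_per_year * CARBON_FRACTION * CO2_PER_KG_CARBON

    growth_10cm = 1.6                            # kg dry mass/year, 10 cm trunk
    print(round(co2_uptake_kg(growth_10cm), 1))  # ~2.9 kg CO2/year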

I like trees – don’t get me wrong – by all means plant them for shade, wildlife habitat, fruit or aesthetic appearance. But when it comes to saving the planet I just think that given the choice between (a) planting a tree and waiting a few years, and (b) cutting my electricity demand by 3.8 watts now, I know what I would go for.

Another way to waste compressed air

ON FACTORY compressed air systems it’s good practice to fit air isolation valves like the one below (A), fitted to a stamping press. It shuts off the air when the press is idle, but in case of valve failure a bypass (B) is provided. The bypass is closed now, but moments before the picture was taken we had found it open, defeating the automatic air shut-off.

Just before we moved on I noticed a hose connected to the valve at one end and nothing at the other end. It was the air supply to the pneumatic actuator on the air valve itself, and without it the valve would never close anyway. Somebody had decided to adopt a belt-and-braces approach to wasting air by disconnecting it (C).

The problem of open bypass valves was commonplace and well known, but nobody had thought to establish the root cause. What compelled operators to defeat the system? It turned out that they sometimes needed air on the press to apply the pneumatic brakes on the flywheel after the main motor had been turned off. A simple push-button over-ride will solve that issue.

Automatic metering disaster recovery

Our client relies on an extensive network of automatically-read submeters throughout his estate and asked us to prepare a recovery manual in case his data-collection contractor should cease trading. As part of the exercise we set up a temporary online storage location, proved that the output from a typical data-logging installation can be rerouted, and established what format the data arrive in.

We are also discussing with the incumbent contractor what additional information will need to be held in escrow to permit an orderly handover.

Justifying additional meters

Additional metering may be required for all sorts of reasons. There are three relatively clear-cut cases where the decision will be dictated by policy:

  • Departmental accountability or tenant billing: it is often held that making managers accountable for the energy consumed in their departments encourages economy. Where this philosophy prevails, departmental sub-metering must be provided unless estimates (which somewhat defeat the purpose) are acceptable. Similar considerations would apply to tenant billing (I am talking about commercial rather than domestic tenants here).
  • Environmental reporting: accurate metering is essential if, for example, consumption data is used in an emissions trading scheme: an assessor could refuse certification if measurements are held to be insufficiently accurate.
  • Budgeting and product costing: this use of meter data is important in industries where energy is a significant component of product manufacturing cost, and where different products (or different grades of the same product) are believed to have different energy intensities.

The fourth case is where metering is contemplated purely for detecting and diagnosing excessive consumption in the context of a targeting and monitoring scheme. This may well be classified as discretionary investment and will require justification. This could be based on a rule of thumb, or on the advice in the Building Regulations (for example). A more objective method is to identify candidates for submetering on the basis of the risk of undetected loss (RoUL). The RoUL method attempts to quantify the absolute amount of energy that is likely to be lost through inability to detect adverse changes in consumption characteristics. It comprises four steps for each candidate branch:

  1. Estimate the annual cost of the supply to the branch in question (see below).
  2. Decide on the level of risk (see table below) and pick the corresponding factor.
  3. Multiply the cost in step 1 by the factor in step 2, to get an estimate of the annual average loss.
  4. Use the result from step 3 to set a budget limit for installing, reading and maintaining the proposed meter.
Risk levels and suggested factors*:

  • High (suggested factor 20%): usually associated with highly-intermittent or very variable loads under manual control, or under automatic control at unattended installations (the risk is that equipment is left to run continually when it should only run occasionally, or is allowed to operate ‘flat out’ when its output ought to modulate in response to changes in demand). Examples of highly-intermittent loads include wash-down systems, transfer pumps, frost-protection schemes and, in general, any equipment which spends significant time on standby. Typical continuous but highly-variable loads include space heating and cooling systems. Bear in mind that oversized plant, or any equipment which necessarily runs at low load factor, is at increased risk.
  • Medium (suggested factor 5%): typified by variable loads and intermittently-used equipment operating at high load factor under automatic control, in manned situations where failure of the automatic controls would probably become apparent quickly.
  • Low (suggested factor 1%): anything which necessarily runs at high load factor (and therefore has little capacity for excessive operation), or where loss or leakage, if able to occur at all, would be immediately detected and rectified.

*Note: the risk percentages are suggested only; the reader should use his or her judgment in setting percentages appropriate to individual circumstances
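By way of illustration only, here is a minimal sketch of the four steps in Python (the supply cost and risk level are hypothetical inputs; the factors are the suggested values from the table):

    # Risk-of-undetected-loss (RoUL) budget estimate.
    RISK_FACTORS = {"high": 0.20, "medium": 0.05, "low": 0.01}

    def roul_budget(annual_supply_cost, risk_level):
        """Budget ceiling for installing, reading and maintaining
        a submeter on the candidate branch (steps 1-4 above)."""
        return annual_supply_cost * RISK_FACTORS[risk_level]

    # Example: a branch costing 12,000 GBP a year, judged medium risk,
    # would justify spending up to 600 GBP a year on metering.
    print(roul_budget(12_000, "medium"))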

The RoUL method tries to quantify the cost of not having a meter, but this relies on knowing the consumption in the as-yet-unmetered circuit. The circular argument has to be broken by estimating consumption:

  • by step testing
  • using regression analysis to determine sensitivity to driving factors such as product throughput and prevailing weather
  • using ammeter readings for electricity, condensate flow for steam, etc.
  • multiplying installed capacity by assumed (or measured) load factors
  • from temporary metering

Energy after Brexit

DATELINE 1 APRIL, 2019: Thinking about the laws that are likely to change post-EU, the most significant from an energy standpoint are the laws of science, meaning it is likely that:

  1. fuels will become susceptible to magnetism, enabling even more complete combustion than can be obtained through proper maintenance;
  2. the internal metallic layers in multi-foil insulation will be able to reflect heat back through the adjoining insulant and out through the surface foil;
  3. heating-water additives will enable radiators to heat up quicker but release heat more slowly;
  4. boiler anti-cycling devices that cut fuel consumption during periods of low load will do the same under medium and high load conditions which account for the majority of annual fuel consumption;
  5. insulating paints will be as effective as conventional insulation materials that are 4,000 times thicker;
  6. temperature sensors in freezers will respond more accurately and rapidly when encased in a cube of gel;
  7. putting solar panels in refrigeration circuits will enable even more heat to be pumped out with the same electrical input;
  8. ‘kinetic’ pavements will generate enough energy to power a display showing how many steps passing people have taken; and
  9. voltage-reduction devices will enable electrical equipment to perform the same work with lower energy input, and will themselves no longer incur standing power losses.

These insights are provided courtesy of Laboratoires Farage.

Uncertainty in savings estimates: a worked example

To prove that energy performance has improved, we calculate the energy performance indicator (EnPI) first for a baseline period and again during the subsequent period which we wish to evaluate. Let us represent the baseline EnPI value as P1 and the subsequent period’s value as P2.

Most people would then say that as long as P2 is less than P1 we have proved the case. But there is uncertainty in both P1 and P2, and this will be translated into uncertainty in the estimate of their difference. Strictly, we need to show not only that the difference (P1 – P2) is positive, but that the difference exceeds the uncertainty in its calculation. Here’s how we can do that.

In the example which follows I will use a particular form of EnPI called the ‘Energy Performance Coefficient’ (EnPC), although any numerical indicator could be used. The EnPC is the ratio of actual to expected consumption. By definition this has a value of 1.00 over your baseline period, falling to lower values if energy-saving measures result in consumption less than otherwise expected. To avoid a long explanation of the statistics I’ll also draw on Appendix B of the International Performance Measurement and Verification Protocol (IPMVP, 2012 edition) which can be consulted for deeper explanations.

IPMVP recommends evaluation based on the Standard Error, SE, of (in this case) the EnPC. To calculate SE you first calculate the EnPC at regular intervals and measure the Standard Deviation (SD) of the results; then divide SD by the square root of the number of EnPC observations. In my sample data I use 2016 and 2017 as the baseline period, and calculate the EnPC month by month.

In my sample data the standard deviation of the EnPC during the baseline period was 0.04423 and there being 24 observations the baseline Standard Error was thus

SE1 = 0.04423 / √24 = 0.00903

Here is the cusum analysis with the baseline observations highlighted:

The cusum analysis shows that performance continued unchanged after the baseline period but then in July 2018 it improved. We see that the final five months show apparent improvement; the mean EnPC after the change was 0.94, and these five observations had a Standard Deviation of 0.02402. Their Standard Error was therefore

SE2 = 0.02402 / √5 = 0.01074

SEdiff, the Standard Error of the difference (P1 – P2), is given by

SEdiff = √(SE1² + SE2²)

= √(0.00903² + 0.01074²)

= 0.01403

SE on its own does not express the true uncertainty. It must be multiplied by a safety factor t which will be smaller if we have more observations (or if we can accept lower confidence) and vice versa. This table is a subset of t values cited by IPMVP:

              |     Confidence level     |
 Observations |   90%  |   80%  |   50%  |
       5      |  2.13  |  1.53  |  0.74  |
      10      |  1.83  |  1.38  |  0.70  |
      12      |  1.80  |  1.36  |  0.70  |
      24      |  1.71  |  1.32  |  0.69  |
      30      |  1.70  |  1.31  |  0.68  |

Let us suppose we want to be 90% confident that the true reduction in the EnPC lies within a certain range. We therefore need to pick a t-value from the “90%” column of the table above. But do we pick the value corresponding to 24 observations (the baseline case) or 5 (the post-improvement period)? To be conservative, as IPMVP requires, we take the smaller number of observations, which gives the larger t-value: in this case 2.13.

Now in the general case ∆P, the EnPC reduction, is given by

∆P = (P1 – P2) ± t × SEdiff

Substituting the values from our example:

∆P = (1.00 – 0.94) ± (2.13 × 0.01403)

∆P = 0.06 ± 0.03

The lowest probable value of the improvement ∆P is thus (0.06 – 0.03) = 0.03. It may in reality be less, but the chance of that is only 1 in 20: we are 90% confident that ∆P falls within the stated range, which implies a 5% chance that it lies below the lower limit (and a 5% chance that it exceeds the upper one).
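To tie the whole calculation together, here is a minimal sketch reproducing the numbers above (Python, with scipy used only to look up the t-value; all input figures are those quoted in the text):

    from math import sqrt
    from scipy.stats import t as student_t

    sd1, n1 = 0.04423, 24            # baseline EnPC standard deviation
    sd2, n2 = 0.02402, 5             # post-improvement standard deviation
    se1 = sd1 / sqrt(n1)             # 0.00903
    se2 = sd2 / sqrt(n2)             # 0.01074
    se_diff = sqrt(se1**2 + se2**2)  # 0.01403

    # Conservative t: the smaller observation count gives the larger t.
    # 90% two-sided confidence = 95th percentile at n-1 degrees of freedom.
    confidence = 0.90
    t_val = student_t.ppf(1 - (1 - confidence) / 2, min(n1, n2) - 1)  # ~2.13

    p1, p2 = 1.00, 0.94
    delta, margin = p1 - p2, t_val * se_diff
    print(f"dP = {delta:.2f} +/- {margin:.2f}")            # 0.06 +/- 0.03
    print(f"lowest probable improvement: {delta - margin:.2f}")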

Footnote: example data

The analysis is based on real data (preview below). These are from an anonymous source and have been multiplied by a secret factor to disguise their true values. Anybody wishing to verify the analysis can download the anonymised data as a spreadsheet here.

Note: to compute the baseline EnPC

  1. do a regression of MWh against tonnes using the months labelled ‘B’
  2. create a column of ‘expected’ consumptions by substituting tonnage values in the regression formula 
  3. divide each actual MWh figure by the corresponding expected value
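A minimal sketch of those three steps in Python (the numbers below are illustrative stand-ins, not the real dataset; substitute the columns from the downloaded spreadsheet):

    import numpy as np

    # Monthly production (tonnes) and consumption (MWh); True marks
    # the months labelled 'B' (the baseline).
    tonnes = np.array([100, 120,  90, 110, 130, 105])
    mwh    = np.array([ 52,  60,  48,  55,  64,  50])
    is_baseline = np.array([True, True, True, True, False, False])

    # 1. Regress MWh against tonnes over the baseline months only.
    slope, intercept = np.polyfit(tonnes[is_baseline], mwh[is_baseline], 1)

    # 2. 'Expected' consumption for every month from the regression line.
    expected = slope * tonnes + intercept

    # 3. EnPC = actual / expected, month by month.
    enpc = mwh / expected
    print(np.round(enpc, 3))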

Product awards: handle with care

This article may upset some of my friends at energy publications and associations, but we have a problem which people need to be aware of. It is that we can no longer trust awards for energy-saving products as indicators of merit.

I get asked for advice about dubious products by my newsletter readers and often they’ll say “I smell a rat but it has an award from [insert name of prestigious body]”. How can something bogus get an award that it does not deserve? To answer that you have to understand how award schemes work. In particular you need to appreciate that their promoters are driven by profit. The commercial imperative is simple: get as many bums on expensive seats as possible at a gala-dinner awards ceremony. To do that, they need a lot of short-listed candidates, because those are the people who will pay on the off-chance that they get to pose as a winner with the celebrity host. Having a big shortlist means putting an awful lot of entries in front of the judging panel (44 for one panel I sat on). But these judges are unpaid, and as volunteers they simply cannot spare enough time to scrutinise entries thoroughly, even though some do take it seriously and try to be diligent. They aren’t helped by the fact that candidates often submit little more than rehashed sales blurbs full of unsubstantiated claims, a short-cut which promoters condone in the interests of maximising the number of bums on seats.

Some judges, moreover, will have been selected more for their celebrity than their knowledge (celebrity judges equals more bums on seats), and will lack the ability to spot snake-oil propositions or even to understand counter-arguments from more knowledgeable fellow-judges. The majority of any panel will be easily swayed by the plausible nonsense in the entries, will not question the credibility of testimonials, and will naively assume that no competition entrant could possibly have criminal intent.

It is asymmetric warfare. The snake-oil peddler just needs to keep plugging away with award entries because the spurious credibility that they get from their first award is too valuable to forego. Once they have landed one award, they are effectively immunised against rejection by judges for other awards and probably even have their chances boosted.

I don’t want to tar all awards with the same brush: in an honest world they would all work to everyone’s benefit, and no promoter is knowingly complicit in the occasional fraud that slips through the net. But sadly a few bad apples have devalued energy awards, and my advice is this: if you have doubts about a product, seeing the phrase ‘award-winning’ should put you on alert.

“Science-based targets”: sounds good, means very little

WHEN I FIRST heard the term science-based target (SBT) bandied around in the public arena I thought “oh good – they are advocating a rational approach to energy management”. I thought they were promoting the idea that I always push, which is to compare your actual energy consumption against an expected quantity calculated, on a scientific basis, from the prevailing conditions of weather, production activity, or whatever other measurable factors drive variation in consumption.

How wrong I was. Firstly, SBTs are targets for emissions, not energy consumption; and secondly a target is defined as ‘science-based’ if, to quote the Carbon Trust, “it is in line with the level of decarbonisation required to keep the global temperature increase below 2°C compared to pre-industrial temperatures”. I have three problems with all of this.

Firstly I have a problem with climate change. I believe it is real, of course; and I am sure that human activity, fuel use in particular, is the major cause. What I don’t agree with is using it as a motivator or to define goals. It is too remote, too big, and too abstract to be relevant to the individual enterprise. And it is too contentious. To mention climate change is to invite debate: to debate is to delay.

Secondly, global targets cannot be transcribed directly into local ones. If your global target is a reduction of x% and you set x% as the target for every user, you will fail because some people will be unable or unwilling to achieve a cut of x% while those who do achieve x% will stop when they have done so. In short there will be too few over-achievers to compensate for the laggards.

Finally, I object to the focus on decarbonisation. Not that decarbonisation itself is valueless; quite the opposite. The problem is the risk that people prioritise decarbonisation of supply rather than reduction of demand. If you decarbonise the supply to a wasteful operation, you have denied low-carbon energy to somebody somewhere who needed it for a useful purpose. We should always put energy saving first, and that is where effective monitoring and targeting, including rational comparisons of actual and expected consumption, has an essential part to play.