Sunday, January 25, 2015

Realities of the Electric Grid

A lot of people are talking these days—actually, for at least the past couple of decades—about “going green” with our electric grid. This means moving from fossil fuels like coal, oil, and gas to renewables like solar power and wind. As someone who used to work at one of the most diversified electric utilities in the country,1 I can tell you that this approach will probably not work. Or not until we experience a major advance in our energy technology.

The question goes back to the basic principle of running an electric grid: the difference between baseload and peaking power. The distinction between the two depends on the physics of the system, which requires that electricity flow from the generator (the machine making the juice) to the load (the customer’s appliances, light bulbs, machinery, or other uses) instantly and in real time.

Load is the energy demand represented by the sum of energy-using decisions of all the consumers on the grid at any one time. It averages out their minute-by-minute actions to turn on a light here, run the dishwasher there, turn off the radio … Click! Click! Click! That usage can change in a single household by a couple of watts every few seconds. Multiply that by ten million customers on a large utility’s grid, and you have a serious amount of demand in constant fluctuation.

Utility operators would go crazy trying to keep up with all those tiny fluctuations, except experience has taught them that the on-off decisions tend to cancel each other, and the load is fairly steady. I may turn on my computer at more or less the same instant you turn off your television set; so across the entire grid the load at any minute tends to balance out.2 The operators look instead at the load curve, which is the amount of overall demand as it changes throughout the day and also during the year.
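
Here is a rough back-of-envelope simulation of that canceling-out effect; the customer counts and wattages are invented for illustration, not figures from any actual utility.

    import random

    # A minimal sketch of why millions of independent on/off decisions cancel out.
    # The wattages and customer counts are illustrative assumptions, not utility data.
    random.seed(1)

    def grid_load_watts(num_customers):
        """Total load when each customer independently draws 800-1,600 watts."""
        return sum(random.uniform(800, 1600) for _ in range(num_customers))

    for n in (10, 1_000, 100_000):
        readings = [grid_load_watts(n) for _ in range(20)]
        average = sum(readings) / len(readings)
        swing_pct = (max(readings) - min(readings)) / average * 100
        print(f"{n:>7,} customers: average load {average / 1000:,.0f} kW, "
              f"swing between readings about {swing_pct:.2f}%")

The relative swing shrinks roughly with the square root of the number of customers, which is why the operators can plan around the smooth daily load curve instead of every individual click.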

At 3 a.m., the load is relatively small, because only night owls, all-night restaurants, hospitals, and streetlights are turned on; everyone else is asleep. By 6 a.m. people start waking up, turning on lights and stoves and radios, and the utility operators can watch the demand on their system rise accordingly. Demand keeps rising during the morning as stores and offices open for business and people go to work, with more lights, computers, and machinery drawing power. In southern states and during the summertime, electric demand tends to peak in the mid-afternoon, because everyone has the air-conditioner going full blast. In northern states and during winter, the peak usually doesn’t come until early evening, when people go home, turn up the thermostat, start cooking dinner, and sit down to watch television.

Utility economics depends on knowing your system’s baseload electricity demand—that’s the irreducible minimum, the 3 a.m. demand. To meet baseload, you plan to run the generators with the lowest operating costs and keep them going twenty-four/seven. You don’t mind if these baseload plants cost a lot to build, because you plan to get good use out of them. You also need to know your system’s peak load—that’s the mid-afternoon demand in the south, evening in the north. To meet the peak, you’ll run generators that can have higher operating costs, and you will bring them on in the order of ascending cost as you add units near the peak. You will pay a lot for the power these “peakers” make, because you need it, but you don’t want the generating units themselves to cost a lot to build, because you won’t be using them as much.
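
One crude way to picture that ascending-cost rule is a merit-order dispatch loop: sort the available units by operating cost and fill the demand from the cheapest unit upward. The plant names, sizes, and costs in this sketch are invented round numbers, not any utility’s actual figures.

    # Hypothetical merit-order dispatch: the cheapest operating cost runs first.
    # Unit names, capacities, and costs are made up for illustration.
    units = [
        {"name": "nuclear baseload", "capacity_mw": 2000, "op_cost_per_mwh": 12},
        {"name": "coal baseload",    "capacity_mw": 1500, "op_cost_per_mwh": 25},
        {"name": "gas peaker A",     "capacity_mw": 400,  "op_cost_per_mwh": 60},
        {"name": "gas peaker B",     "capacity_mw": 400,  "op_cost_per_mwh": 80},
    ]

    def dispatch(demand_mw):
        """Bring units on line in ascending order of operating cost until demand is met."""
        schedule, remaining = [], demand_mw
        for unit in sorted(units, key=lambda u: u["op_cost_per_mwh"]):
            if remaining <= 0:
                break
            output = min(unit["capacity_mw"], remaining)
            schedule.append((unit["name"], output))
            remaining -= output
        return schedule

    print(dispatch(1800))   # 3 a.m. trough: baseload only
    print(dispatch(4100))   # afternoon peak: baseload plus both peakers

At the 3 a.m. trough only the cheap baseload units are running; at the afternoon peak the expensive peakers come on last, which is why the final increments of demand are the costliest to serve.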

Baseload generation, peak load, and the shape of the curve between them pretty much define a utility company’s generation planning and purchase decisions. And they keep the system operators busy throughout the day, as well as throughout the year, figuring the operating parameters and costs for each type of generation and dispatching units to meet the demand most economically.

In the old days, before about 1970, baseload plants were designed to run all the time. These were generally nuclear and coal-fired thermal generating stations—big, complex, and expensive to build, but using relatively cheap fuel. That meant their capital cost—the cost to build—was high, but the company planned to get maximum use out of the plant. Their operating cost—that is, the actual cost to make the next unit of electricity from one minute to the next—was low, which is exactly why the utility depended on these plants to make most of its electricity. Baseload plants were designed to run flat out and were only taken out of service for maintenance—or in the case of nuclear plants, for refueling—and then only with a lot of advance planning.

In those same old days, up until about 1970, peakload plants were just the opposite. They were designed to come on line fast, run for a few hours to meet transient demand, then shut down for the day. These were generally gas-fired turbines that were cheap to build—well, relatively cheap, compared to a steam turbine fed by a big coal-fired boiler or a nuclear reactor. The peakers could afford to burn expensive fuels, like oil and gas, which had a lot of competing uses in the economy, like household heating and transportation. Peakers were designed to help the system over the hump in the demand curve and that was it.
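
The choice between the two designs comes down to how many hours a year a plant will actually run. Here is a quick screening calculation, using invented round-number costs, that shows why the cheap-to-build, expensive-to-run peaker wins for a few hundred hours a year while the expensive-to-build, cheap-to-run baseload plant wins around the clock.

    # Screening-curve sketch: annualized cost per kilowatt as a function of hours run.
    # All cost figures are illustrative assumptions, not real plant data.
    def annual_cost_per_kw(capital_per_kw_year, fuel_per_kwh, hours_run):
        return capital_per_kw_year + fuel_per_kwh * hours_run

    baseload = dict(capital_per_kw_year=300.0, fuel_per_kwh=0.02)  # costly to build, cheap to run
    peaker = dict(capital_per_kw_year=80.0, fuel_per_kwh=0.10)     # cheap to build, costly to run

    for hours in (500, 2000, 8000):
        b = annual_cost_per_kw(hours_run=hours, **baseload)
        p = annual_cost_per_kw(hours_run=hours, **peaker)
        winner = "baseload" if b < p else "peaker"
        print(f"{hours:>5} hours/year: baseload ${b:,.0f}/kW, peaker ${p:,.0f}/kW -> {winner} is cheaper")

With these made-up numbers the break-even point falls around 2,750 hours a year; below that the peaker’s low capital cost carries the day, and above it the baseload plant’s cheap fuel does.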

The economics changed a bit in the 1970s. First, environmental regulations began to bear down on the emissions from baseload fossil plants and the perceived risks of nuclear technology. So the traditional baseload plants became more expensive to build and operate.

Second, improvements in jet engine design for aviation increased the efficiency of gas-fired turbines and so lowered their operating cost. A gas turbine is just a commercial jet engine bolted onto a stand with its turbine shaft connected to an electric generator. Eventually, the operating cost of gas turbines began to equal that of a steam boiler and fell well below that of a reactor—and the turbine cost a whole lot less to build than either.

Third, new concepts of dual fuel use, called “cogeneration,” began to be accepted. For example, the exhaust heat from a gas turbine might be used to boil water for process steam in a food cannery. This increased the efficiency of the cannery’s fuel use—giving it electricity for its own operations as well as cooked tomatoes. To promote this dual use, the U.S. government required utility companies by law to buy the excess energy produced by their cogeneration customers and pay for it at the utility system’s marginal cost.3

Suddenly, gas-fired peakers and energy-efficiency schemes could compete with traditional baseload generation.

This was also the time when researchers and engineers became serious about alternative fuels. They worked to improve the efficiency of solar photovoltaic panels, solar-thermal boilers, and wind turbines. Still, the “energy density” of these renewables—the heat available in sunshine per square meter, or the kinetic energy of the wind per cubic meter—was a lot lower on an area or volume basis than a gas- or coal-fired flame, the neutron flux in a reactor, or a pipe full of water falling under the influence of gravity. Solar and wind farms had to make up for that lower density by sheer volume: more square meters of photovoltaic panels, more mirrors focused on the central boiler, more turbines with bigger blade diameters lined up along the ridge.4

So the state of energy generating technology is constantly changing and improving. But the efficiency of the renewables still isn’t very great. The most advanced solar cells are currently able to convert only about 30% of the sunlight that falls on the panel, which averages roughly 130 watts per square meter at ground level over a full day-night cycle, so more than two-thirds of the available energy goes to waste. Wind turbine efficiency depends on blade size, air density, and the average wind speed for which the machine is designed. But the best designs can capture only about 35% of the energy available in the wind; the rest passes around the blades or is lost to turbulence, reduction gearing, and other factors. So again, about two-thirds of the theoretically available energy is wasted. By comparison, a thermal power plant using efficiency-boosting technologies like superheated steam and multiple turbine pressure stages can achieve almost 50% energy efficiency—which still wastes a lot of the available energy, but less so than the current renewables.
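
Putting those figures together gives a feel for the scale problem. This sketch uses the 130 watts per square meter and 30% efficiency quoted above; the 1,000-megawatt benchmark plant is an assumed round number, and real solar farms need extra land for spacing and access, so actual footprints run larger.

    # Back-of-envelope solar arithmetic using the figures quoted in this post.
    # The 1,000 MW benchmark plant is an assumed round number for comparison.
    average_insolation_w_per_m2 = 130.0   # round-the-clock ground-level average, per the text
    panel_efficiency = 0.30               # best-case cell efficiency, per the text

    delivered_w_per_m2 = average_insolation_w_per_m2 * panel_efficiency
    benchmark_plant_w = 1_000e6           # a 1,000 MW conventional plant

    area_m2 = benchmark_plant_w / delivered_w_per_m2
    print(f"Average delivered power: {delivered_w_per_m2:.0f} W per square meter")
    print(f"Panel area to match 1,000 MW: {area_m2 / 1e6:,.1f} square kilometers")

Call it roughly 25 square kilometers of panel surface, before any allowance for roads or spacing, just to match the average output of a single large conventional plant.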

But the inherent efficiency of a generator design is one thing. A second and more important consideration has to do with its capacity factor and dispatchability. The capacity factor is the fraction of a plant’s potential full-power output that it actually delivers over a period of time, which amounts to the share of its life spent making energy at its rated capacity. A coal- or gas-fired power plant or a nuclear reactor can run at full capacity for weeks or months at a time before shutting down for maintenance.5 Dispatchability is the ease with which a utility operator can bring the unit on line. Most big, baseload plants that generate steam in a boiler or heat exchanger to run a turbine take some hours to build up enough pressure to make electricity, so even when they are not on line they keep spinning in reserve. A gas-fired turbine can start up in a matter of minutes, just like an airliner’s jet engines.
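
Put as a formula, capacity factor is simply the energy a plant actually delivers divided by what it would have delivered running flat out at its nameplate rating for the whole period. Here is a small sketch using the 18-months-on, 6-months-off refueling cycle from footnote 5; the reactor size is an assumed round number.

    # Capacity factor = actual output / (nameplate rating x hours in the period).
    # The 18-months-on / 6-months-off cycle comes from footnote 5 of this post;
    # the 1,100 MW unit size is an assumed round number.
    def capacity_factor(delivered_mwh, nameplate_mw, hours):
        return delivered_mwh / (nameplate_mw * hours)

    nameplate_mw = 1_100
    hours_per_month = 730          # average hours in a month
    run_months, outage_months = 18, 6

    delivered_mwh = nameplate_mw * run_months * hours_per_month   # flat out while on line
    total_hours = (run_months + outage_months) * hours_per_month

    cf = capacity_factor(delivered_mwh, nameplate_mw, total_hours)
    print(f"Capacity factor over the {run_months + outage_months}-month cycle: {cf:.0%}")
    # -> 75%, matching the figure in footnote 5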

What is the capacity factor of a solar plant, either one that heats a boiler with mirrors or one that converts sunlight in a photovoltaic cell? At the very best about 50% in the tropics, where day and night are the same length, and in practice considerably less. The energy is available longer in summer months at the higher latitudes, but shorter in winter months. The available sunlight peaks when the sun is directly overhead and drops off towards dawn and dusk. And, of course, the available energy is greatly reduced on cloudy days. Finally, the operator can’t dispatch the plant at all during the night.6
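
Those penalties (night, sun angle, weather) multiply together. A back-of-envelope decomposition, with each factor an assumed round number rather than measured plant data, shows how the 50% ceiling shrinks in practice.

    # Rough decomposition of a solar plant's capacity factor.
    # Each factor is an assumed round number, not measured plant data.
    daylight_fraction = 0.50    # half of every day is dark
    sun_angle_factor = 0.64     # output tapers toward dawn and dusk (about 2/pi on average)
    clear_sky_fraction = 0.70   # clouds and haze knock down the rest

    estimated_capacity_factor = daylight_fraction * sun_angle_factor * clear_sky_fraction
    print(f"Estimated solar capacity factor: {estimated_capacity_factor:.0%}")

That lands in the low twenties of percent, roughly in line with what utility-scale solar plants typically report, and a long way from the round-the-clock availability of a baseload unit.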

A wind turbine makes electricity only as long as the wind is blowing. You can design a very sensitive blade and gearing to make use of light airs—but then you have to shut down in gusts and high winds or risk damaging the machine. Or you can design a robust machine that is relatively inefficient in light airs. Or you can use more complex blades with variable pitch to take advantage of different wind conditions. But more complexity means more risk of damage, higher maintenance costs, and more downtime. And when the wind doesn’t blow, both the capacity factor and the dispatchability drop to zero.

A cogenerator makes energy primarily for itself and on its own schedule. This removes the plant from the load curve, either entirely or in part, while it’s cooking those tomatoes. But cogeneration agreements also include the option for the plant to draw standby power from the utility when the processing line is shut down. So each cogenerator on the grid presents the system operators with a tricky load equation and zero dispatchability.

A utility grid could make up for these inherent defects by incorporating some kind of energy storage system. A really big battery would work, except that chemical batteries are bulky and not a very efficient way of handling energy. A lead-acid battery, like a car battery, retains about 75% of the energy put into it. That’s pretty good for actual losses during charging and discharge—but remember that the electricity being stored already represents only a fraction of the energy available in the fuel at the power plant. And current battery technology is small scale: it’s good for portable energy to start a car engine or operate a flashlight or radio, not so much for powering a household or an entire city.
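
Those losses compound. Here is a quick chain calculation, using the 50% plant efficiency and 75% battery retention quoted in this post and an arbitrary 100 kilowatt-hours of fuel energy as the starting point.

    # How much of the fuel's energy survives a detour through a battery.
    # The 50% plant efficiency and 75% battery round-trip figures are the ones
    # quoted in this post; the starting fuel energy is an arbitrary round number.
    fuel_energy_kwh = 100.0     # thermal energy in the fuel burned at the plant
    plant_efficiency = 0.50     # best-case thermal plant, per the text above
    battery_round_trip = 0.75   # lead-acid charge/discharge retention, per the text above

    direct_kwh = fuel_energy_kwh * plant_efficiency
    stored_kwh = direct_kwh * battery_round_trip

    print(f"Delivered straight from the plant: {direct_kwh:.0f} kWh")
    print(f"Delivered after a battery detour:  {stored_kwh:.1f} kWh")

Every pass through storage costs another quarter of whatever the plant managed to produce in the first place, on top of the losses already taken at the boiler and the turbine.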

Other, larger storage systems—like running the electric current in a continuous loop through a superconducting material, or storing it in some form of kinetic energy7—are still under development, and they will also have their losses. No, the best, most efficient way to get energy to the customer is still direct from the power plant on an energized line, where energy losses in transmission (the cross-country part of the system) are only about 11% to 17%, while losses in distribution (the neighborhood part) can be as high as 50%.

I’m not saying we shouldn’t make electricity from solar and wind. In a world that’s starving for energy, every little bit helps. But they won’t be as economical as fossil fuels, at least for the foreseeable future, or our current policy horizon, and they will never be suitable for meeting continuous, baseload electric demand. Eventually, however—in a couple of centuries at our current rates of developing and consuming fossil reserves—we will run out of coal, oil, and gas. And anyway, these carbon-based fuels have much better uses as chemical feedstocks.

By then, with continued advances in our technology, driven by scientists and engineers restlessly searching for the next big thing in basic principles, mechanics, and materials, we will have low-cost, efficient ways to tap into the energy latent in sunlight, weather and tidal patterns, volcanism, plate tectonics, and clever manipulations of gravity. It’s what human beings with their big brains tend to do.

1. For ten years I worked in Corporate Communications at the Pacific Gas & Electric Company, the main provider of electricity and natural gas to Northern California. During the time I was there, the company generated electricity from a network of hydroelectric powerhouses, which were a legacy from California’s gold mining days; from a number of steam plants, which could burn either oil or the company’s abundant supplies of natural gas; and from various isolated nuclear, wind, solar, and geothermal projects. The company even explored building a coal-fired power plant, a first in California. PG&E was a model for diversified energy.

2. Over the entire grid it may average out, but in a single neighborhood you can get voltage spikes and sags. This is why the utility company generally mounts capacitors—devices that store up charge when demand is low and release that energy quickly when demand spikes—atop poles throughout the neighborhood. They can supply a sudden burst of energy if all those local consumer choices should coincide.

3. Marginal cost is the combined capital and operating cost of the next unit of generation that the utility plans to bring on line to meet overall demand growth. This cost differs for each utility, based on its generating mix and the demographics of its customer base.

4. As a PG&E engineer once told me, looking at the company’s experimental Boeing MOD-2 wind turbine, which had a single blade 300 feet long driving a boxcar-sized generator nacelle atop a 200-foot tower: “That’s a hellacious amount of steel for 2,500 kilowatts of energy.”
       As it turned out, the stresses caused by spinning that 99-ton blade cracked the driveshaft. This happened on every MOD-2 ever built. After several replacements—which meant moving a large crane to the top of the hill where the turbine was sited, unshipping and lowering the blade to the ground, then unmounting and lowering the nacelle—the company determined that the big windmill was a liability and dismantled it. It turns out that getting free energy in the form of wind or sunlight is not the most important consideration in adopting a particular generating system.

5. The standard in the nuclear power industry is to operate continuously for about 18 months, then go off line for three to six months for refueling and plant maintenance. So, over a two-year period, the plant has about a 75% capacity factor. Fossil-fueled plants may have a slightly reduced capacity factor, because designing them for flawless operation at full power is not as critical as with nuclear fuel. Still, no plant operator likes to let the boiler shut down and grow cold, then have to burn precious fuel to bring it back up to heat for producing steam.

6. If you want a consistent, dependable, dispatchable solar energy system, you really have to go into orbit. The incidence of sunlight above the atmosphere is about 1,300 watts per square meter—roughly ten times the round-the-clock average at ground level. The satellites can be placed in polar, sun-synchronous orbits that never fall into the Earth’s shadow. And the energy can be beamed down to diode fields on the planet’s surface. Between the photovoltaic panel losses, energy conversion losses, and beaming losses, wastage is considerable. But the system has almost no moving parts, never needs maintenance, and the solar panels will never need dusting. It’s where we’ll go eventually for our energy. All of this is described in my 2010 novel Sunflowers.

7. When PG&E began building the Diablo Canyon Nuclear Power Plant in the 1970s, its baseload capacity was actually more than the night-time load on the company’s grid. So, to avoid wasting all that energy, they devised the Helms Pumped Storage Project. They built a tunnel between two lakes up in the Sierras with a powerhouse in the middle. At night, the nuclear plant’s electricity ran the powerhouse generators as motors and the water wheels worked as pumps, moving water from the lower to the upper lake. During the day, when the system peak occurred, the water was allowed to flow back down, turning the water wheels and the generators to make needed electricity. It wasn’t very efficient, of course, but anything was better than having to throttle back the Diablo Canyon reactors at night or running all their excess current into the ground.
