Energy: The Master Resource

Authors: Robert L. Bradley, Jr. and Richard W. Fulmer

Publisher: Kendall/Hunt Publishing Company, Date of Publication: 2004, ISBN: 0-7575-1169-4, Number of pages: 236

Editorialized Executive Book Summary

A Member Service from CARE: Citizens Alliance for Responsible Energy




Energy is the master resource. It is essential for life and provides comfort and protection against the elements. People in developed countries can scarcely imagine living without lighting, conditioned air, indoor plumbing, electric ovens, mechanized transportation, medical devices, computers and all the other elements of the modern, energy-intensive world.


Yet energy is at the center of many concerns and controversies. Much of the world’s oil supply is concentrated in the politically unstable Middle East. Oil, natural gas, and coal cannot be reproduced. They are nonrenewable resources that can be consumed only until their reserves are depleted.


Plentiful energy has enabled the great population growth of the past two hundred years. But can enough energy be produced cleanly enough to support the population rise that is predicted later this century? Is energy, or, more accurately, human ingenuity up to the challenge?


This primer on the history, technology, economics, and public policy of energy explains what energy is and how its use has evolved over the centuries. It also discusses the “sustainability” of the modern energy economy from the standpoint of both available resources and energy’s effect on the environment. The book presents introductory information that might be found in an energy encyclopedia in a nontechnical way, so that little prior knowledge of the field is required.


There are reasons to appreciate our energy past and to be optimistic about our energy future. Carbon-related technologies are doing well in a two-front war against resource depletion and pollution. Technological improvements and capital turnover (that is, replacement of older vehicles, machines, and power plants with newer, more efficient equipment) promise to continue to make our air and water cleaner in the decades ahead even as energy consumption increases.


Human ingenuity is the ultimate resource that, when applied to the master resource of energy, can enable people to enjoy longer, more comfortable, and more productive lives.


Chapter 1~The Basics


Energy is the stuff of life. With it, we can accomplish practically anything; without it, we can do nothing. Like most other useful things, energy can be misused. Improperly handled, it may be enormously destructive. Whether energy is used for good or ill depends entirely on the knowledge and wisdom of those wielding it. It is vital, therefore, that everyone understands as much about the subject as possible. When it comes to energy, knowledge really is power.



Energy is the capacity to do work. Power is the rate at which work is done, and is calculated by dividing work by the time taken to do the work. The faster the work is done, therefore, the more power is expended.


It is useful to think of energy in terms of work because the whole reason we want to control energy is for the work it can do for us. In fact, the word energy comes from the Greek words en meaning in or at, and ergon meaning work. In the United States, the most common unit of measure is the British Thermal Unit, or BTU. A BTU is defined as the amount of energy needed to raise the temperature of one pound of water by one degree Fahrenheit.


Another common energy unit is the kilowatt-hour, used to measure electricity. One kilowatt-hour is equivalent to 3,413 BTUs. Note that kilowatts measure capacity or flow, while kilowatt-hours measure quantity; a one-kilowatt electrical generator running for one hour produces one kilowatt-hour of electricity. It takes about one kilowatt of generating capacity to provide electricity to an average American home.
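The distinction between power (a rate) and energy (a quantity), along with the BTU conversion given above, can be sketched in a few lines of Python; the function names here are illustrative.

```python
# Illustrative unit conversions. The figures follow the text:
# 1 kWh = 3,413 BTU; roughly 1 kW of capacity serves an average home.
BTU_PER_KWH = 3413.0

def kwh_to_btu(kwh: float) -> float:
    """Convert energy in kilowatt-hours to British Thermal Units."""
    return kwh * BTU_PER_KWH

def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy (kWh, a quantity) = power (kW, a rate) x time (hours)."""
    return power_kw * hours

# A 1 kW generator running for one hour produces 1 kWh of electricity.
e = energy_kwh(1.0, 1.0)
print(e)              # 1.0 kWh
print(kwh_to_btu(e))  # 3413.0 BTU
```

The same arithmetic explains a utility bill: a 2 kW load running for 3 hours consumes 6 kWh, regardless of how the load varies moment to moment.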


Energy exists in two basic forms:

  1. Potential—Energy at rest, waiting to be used
  2. Kinetic—Energy in motion


A boulder sitting on the edge of a cliff is said to have potential energy by virtue of its position in the Earth’s gravitational field. The boulder’s potential energy is converted into kinetic energy when it is pushed over the edge, and gravity causes it to fall.



The sun is the source of most of the Earth’s energy. The sun’s energy comes to us as heat and light, which, in turn, give rise to other forms of energy. For example, the sun’s uneven heating of the Earth’s atmosphere is one cause of wind. Heated air expands and rises and is replaced by cooler air in a process called circulation. Circulation produces wind.


Additionally, carbon-based fuels are probably a result of the sun’s light. Most scientists believe that a combination of bacterial action, heat, and pressure transformed the remains of ancient plants and animals into crude oil and natural gas. Coal was formed in a similar fashion when thick layers of dead plants piled up in swamps and rotted, turning into a substance called peat. When layers of sediment covered the peat, the resulting pressure transformed it into coal. Because peat, coal, tar, bitumen, petroleum, and natural gas are believed to come from long dead plants and animals, they are often called fossil fuels.



Man is not well-equipped to adapt to nature. To survive, we must adapt nature to ourselves. Before fire, people’s only source of power came from their own muscles. Fire brought warmth and light to cold, dark nights. It gave protection against animals far swifter and stronger than any man.


By harnessing the power of the larger animals, people became far more productive. Suddenly, more land could be cultivated than ever before and more crops grown on each acre. With a more adequate and secure food supply, people began settling in one place. This enabled them to create and accumulate new, better, and larger tools. As a result, advances in energy technology began to appear more quickly. It was not until the late 17th Century, with the invention of the steam engine, that technology took off. After that the history of energy started changing at a furious pace.



The history of energy is really the history of our material development. People build machines to harness energy and magnify their ability to do useful work. Over time, people advanced to more efficient machines and to more efficient, concentrated, portable, and convenient forms of energy—from human muscles, to burning wood, to animal power, to wind and water power. Then, from these sources, humanity moved to whale oil and coal; next to petroleum, natural gas, and nuclear energy. Each advance left people better off and further from the hand-to-mouth existence that had been their lot for hundreds of thousands of years.


Far more discoveries and inventions have occurred in the past two hundred years than in all of the hundreds of thousands of years that went before. Much of this is because as one discovery or invention leads to another, innovation piles upon innovation ever more rapidly. Individuals who are free to act and to enjoy the fruits of their actions have a strong incentive to invent things to increase their productivity and wealth.


The right of individuals to own and trade property at mutually agreeable prices is essential for the efficient allocation of resources necessary for technological progress and economic growth. Throughout history, people have been faced with difficult problems, and throughout history, they have found solutions. Often, these solutions left them much better off than they were before the problems appeared. Ideas drive history, not the other way around.


Chapter 2~Using Energy


Work is force multiplied by the distance through which it acts. The trick in getting energy to do work is to channel it in such a way that it moves something. A simple example is a ship’s sail. To get to the other side of a lake, one can either row a boat or put up a sail and let the wind do the work.


A windmill works in much the same way as a sail: it converts the wind’s kinetic energy into rotary motion. Windmills have been used for centuries to perform such tasks as pumping water and grinding grain. One problem with a windmill is that it is an “intermittent resource”—it only works when the wind is blowing.


Waterwheels are similar to windmills. While somewhat more reliable than windmills, early waterwheels had problems of their own. A shop, factory, or mill usually could not depend upon its waterwheel more than 160 days a year because of ice, floods, droughts, and dams that silted up.


Besides their unreliability, another big problem with both windmills and waterwheels is that they are stationary and cannot be used to directly power a vehicle. Enter the steam engine. Water can be boiled to generate high-pressure steam anytime and anywhere. The steam’s energy can then be directly converted into rotary motion, or can be used to push a piston back and forth. The heat required to produce steam can be generated by burning wood, alcohol, or carbon-based fuels; or with controlled nuclear reactions. Internal combustion engines burn fuel directly inside piston cylinders where expanding combustion gases drive the pistons. Rotary motion created by these various means can be used to run machinery, turn wheels, drive propellers, or to generate electricity.



Electricity is an extremely versatile, portable, and convenient form of power, and about a third of America’s primary energy is used to generate it.



Conventional steam plants generate most of America’s electricity—and most are fueled by coal. In fact, more than half of the country’s power comes from coal, the most plentiful carbon-based fuel. In a typical plant, powdered coal is burned to boil water, converting it into high-pressure, superheated steam. The steam enters a turbine where it expands and drives the turbine’s blades. The blades turn a shaft connected to a generator that creates electrical current. After the steam leaves the turbine, it passes through condensers that cool it back into liquid water. The water is then pumped back into the boiler to repeat the cycle.


Coal is essentially carbon plus some hydrocarbons and a minor amount of minerals—the higher the carbon content, the more heat and the less ash produced when the coal is burned. Coal use has had a larger effect on the environment than either oil or natural gas, though its impact has decreased with improving technology and stricter regulations.


Controlling coal’s impact on the environment is expensive. Ultimately, the costs are passed on to consumers in the form of higher prices for both coal and the electricity produced from it. The prices also shift the burden of reducing coal’s environmental impact to those who benefit from it. Despite the higher prices resulting from environmental controls, coal is competitive with other fuels as a primary source of electric power.



Nuclear power plants produce electricity in much the same way as do traditional power plants. Water is heated to produce steam to drive a turbine that, in turn, spins a generator. The big difference lies in how nuclear plants create the steam.


The 104 active nuclear power plants in the United States produce about 20 percent of the country’s electricity. There are another 350 nuclear plants throughout the rest of the world. Altogether, these plants produce about 18 percent of the world’s electric power. In the United States, the amount of electricity produced by nuclear plants increased by 25 percent during the 1990s, even though the number of nuclear plants fell by eight over the same period. Under deregulation, nuclear plants now earn more money when they produce more power, so better performance means higher returns to shareholders. Nuclear power plants are much more expensive to build than conventional plants, but their operating and maintenance costs are less.


Potentially, the biggest problem with nuclear power is the management and disposal of the tons of radioactive wastes produced every year. Nuclear plants produce far less waste than do coal plants. However, nuclear waste is far more dangerous. Further complicating the storage problem is that the wastes initially generate large amounts of heat. Spent fuel currently is stored at the plants in pools of water that absorb the radiation and dissipate the heat.


Geological isolation is the only viable long-term disposal solution currently available. This means storing the wastes in highly stable geologic formations that have remained seismically inactive for millions of years. Transporting spent fuel to these sites must be done with care. The so-called NIMBY (Not In My Back Yard) syndrome is just as important as are the geological issues in locating a suitable site for waste disposal.



Natural gas is the cleanest of the fossil fuels. It leaves no residue and produces less pollution than either oil or coal. It is used in both gas turbine and steam generating plants. The most efficient way to use it is in a combined-cycle system. In such plants, fuel is burned in a combustion chamber to produce hot, high-pressure gases that pass directly through a gas turbine that, in turn, powers a generator. The still-hot gases are then sent to a waste heat boiler where they heat water to produce steam. The steam turns a turbine that is connected to a second generator. Spent steam is piped to a condenser where it is cooled back into water. The water is pumped back into the boiler, repeating the cycle.
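The gain from pairing the two cycles can be sketched with a simple idealized formula, in which the steam cycle runs entirely on the gas turbine’s rejected heat. The stage efficiencies below are illustrative assumptions, not figures from the book.

```python
# Idealized combined-cycle efficiency: the steam (bottoming) cycle
# recovers the gas turbine's waste heat, so overall efficiency is
#   gas_eff + (1 - gas_eff) * steam_eff
# This idealization assumes all rejected heat reaches the steam cycle.
def combined_cycle_eff(gas_eff: float, steam_eff: float) -> float:
    return gas_eff + (1.0 - gas_eff) * steam_eff

# Illustrative values: a 38% gas turbine plus a 30% steam cycle.
print(round(combined_cycle_eff(0.38, 0.30), 2))  # 0.57
```

Neither stage alone reaches 40 percent here, yet the combination approaches 57 percent, which is why combined-cycle plants are the most efficient way to burn natural gas for power.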


Natural gas became the fuel of choice for new electric generation in the 1980s and 1990s due to falling gas prices and significant efficiencies in gas-fired combined-cycle technologies. However, the prices paid for natural gas by power generators have increased by over 50 percent since 2000, while coal prices have dropped.



Hydroelectric plants produce electricity by releasing falling water through turbines that drive generators. Despite the fact that hydroelectric plants produce no pollution, they have fallen out of favor with many in the environmental community who point out that dams disrupt local ecology, place large tracts of land (often including wildlife habitat) under water, and interfere with the migration of indigenous fish. Currently, hydroelectric plants produce about 7 percent of both America’s and the world’s electricity.



Most oil-fired plants work in much the same way as coal-fired steam plants, although petroleum (like natural gas) can also be used to power turbine generators. While 39 percent of America’s overall energy came from oil in 2002, less than 3 percent of the country’s electricity was generated from oil-fired plants.


Oil resources are less plentiful and generally more expensive than coal, but oil has a lower environmental impact. It burns more completely than coal and leaves no ash to be hauled away. It also produces fewer emissions per unit of energy generated. Oil wells leave a much smaller footprint than do coal mines, and advances in directional drilling have significantly reduced this footprint even more. Additionally, oil is cheaper to transport than coal because it can be more easily pumped through pipelines.



Wind power is favored by many environmentalists as the best alternative to power generation from carbon-based fuels. Installation costs run about $1,000 per rated kilowatt, not counting transmission lines. This cost and the turbines’ operating availability of about 95 percent compare favorably with conventional power plants. However, because turbines work only when the wind is blowing, annual production under even the best conditions is generally only about 20 percent to 35 percent of rated capacity. Adjusting for these numbers, the installed cost of a turbine is closer to $3,000 to $5,000 per kilowatt.
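The capacity-factor adjustment described above is simple arithmetic: divide the cost per rated kilowatt by the fraction of that rating actually delivered over a year. A minimal sketch using the text’s figures:

```python
# Effective installed cost of wind capacity, adjusted for capacity factor.
# Figures follow the text: about $1,000 per rated kW, with annual output
# of roughly 20% to 35% of rated capacity.
COST_PER_RATED_KW = 1000.0

def effective_cost(capacity_factor: float) -> float:
    """Cost per kW of average *delivered* power, not rated capacity."""
    return COST_PER_RATED_KW / capacity_factor

print(round(effective_cost(0.35)))  # about 2857 at the optimistic end
print(round(effective_cost(0.20)))  # 5000 at the low end
```

This is the calculation behind the text’s $3,000-to-$5,000-per-kilowatt figure: the hardware is cheap per rated kilowatt, but each rated kilowatt delivers only a fraction of its nameplate output.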


Denmark is the world’s leader in wind-power technology; nearly 15 percent of that country’s electricity comes from wind turbines. Backing from the Danish government has, in part, accounted for the prominence of wind power there. In February 2002, however, the government announced that it was ending its subsidies due to the high cost. California took the lead in wind power in this country with an aggressive tax credit program during the early 1980s.


Some of the objections to wind power, such as land use, aesthetics, and noise, might be overcome by placing windmills offshore. However, a proposal to site 170 turbines five miles off the coast of Massachusetts was attacked on the basis that it “would ‘industrialize’ the area, interfere with local fishing, destroy a ‘place of pristine relaxation’ for boaters and drive away tourists.”



The Earth’s core is a vast and essentially unlimited source of heat. Most of this heat lies at depths that are currently beyond our reach, but in some regions hot water and steam lie close enough to the surface to be tapped. Water in these zones can be extremely hot (up to 2,200°F) and under very high pressure. If wells are drilled into these formations, the water can be brought to the surface and used to drive turbines.


The steam and water produced in this manner often contain salts and minerals that pit and corrode turbine blades. Equipment operating under these conditions is subject to frequent breakdowns and high maintenance costs. Along with salts, the water from geothermal wells, commonly called brine, may contain toxic elements such as lead, arsenic, boron, mercury, and gases such as hydrogen sulfide, which is extremely toxic.


One problem with geothermal energy is its limited availability. There are few areas on Earth suitable for geothermal power generation. In the United States, geothermal power production is centered in a few western states, and plants are often located in environmentally sensitive areas such as national parks. Another problem is that geothermal sites tend to cool down with use.



Microturbines are small combustion turbines about the size of a refrigerator. They can produce anywhere from 25 to 500 kilowatts—enough to power 25 to 500 homes. Although they are typically fueled by natural gas, the turbines can also run on diesel. The use of microturbines and other remote devices is known as distributed generation. Microturbines fill an important niche by providing power to areas far away from existing power grids.


Proponents of micropower argue that with today’s technology, distributed does not have to mean isolated. Distributed power sources can be tied into the local grid. When home or business owners do not need their generators’ total capacity, excess power can flow into the grid and be sold at a profit. Once enough distributed power supplies are tied to a grid, the need for central plants could actually disappear. Under such a scenario, utilities would not sell power, but instead would sell access to the grid just as internet providers now sell access to the World Wide Web.



Photovoltaics, or solar cells, convert sunlight directly into electricity. When photons strike certain semiconductor materials, such as silicon, they dislodge electrons. These free electrons collect on the specially-treated front surface of the solar cell, creating a potential difference between it and the back surface. Wires attached to each of the cell’s faces conduct the current. Individual cells can be combined in panels to increase voltage.


Because solar cells only work when the sun shines, they must either be used together with storage devices or as supplements to conventional facilities. Due to their high cost, they are still not practical for large-scale power generation. The few central solar generation facilities in operation are experimental and use large tracts of land. With current technology, about 100 square feet of photovoltaic (PV) panels are required to generate one kilowatt of electricity in bright sunlight. It would take hundreds of square miles of solar panels to replace an average nuclear power plant.
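The sizing figure above can be turned into a rough back-of-the-envelope calculation. The 20 percent capacity factor below is an illustrative assumption for an average site (accounting for night, clouds, and sun angle), not a figure from the book.

```python
# Back-of-the-envelope PV sizing, using the text's figure of about
# 100 sq ft of panels per kW of output in bright sunlight. The default
# 20% capacity factor is an illustrative assumption for an average site.
SQFT_PER_RATED_KW = 100.0

def area_for_average_load(load_kw: float, capacity_factor: float = 0.20) -> float:
    """Panel area (sq ft) needed to supply load_kw averaged over the year."""
    rated_kw = load_kw / capacity_factor
    return rated_kw * SQFT_PER_RATED_KW

# Supplying one average home (about 1 kW) takes far more than 100 sq ft
# once intermittency is accounted for.
print(area_for_average_load(1.0))  # 500.0 sq ft
```

The gap between the 100 square feet at rated output and the 500 square feet of average delivery is the same intermittency penalty that applies to wind.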


Currently, researchers are concentrating on two aspects of solar cell technology: making solar cells less expensive, and making them more efficient. Another way of harnessing solar power is to use an array of mirrors to concentrate, or focus, sunlight onto water flowing through a metal pipe. The resulting steam can then be used to drive a turbine.



Biomass energy is derived from plants or animal wastes. Wood, a form of biomass, was the first fuel used by humans, long before coal or any of the hydrocarbons. Today, wood and other biomass are more often, but not always, a renewable energy source, and often the biomass used for fuel is the byproduct of other processes. Biomass power accounts for about two-thirds of the nonhydroelectric renewable energy generated in the United States, producing about 3 percent of the country’s electrical power. Biomass is a very broad term that covers many primary sources, electric generation technologies, and alternative fuels for transportation.


Currently in the United States, nearly all biomass electric power generation—probably 90 percent to 95 percent—is based on wood-type fuels. Examples of biomass not derived from wood include agricultural wastes such as sugar cane bagasse, rice and nut hulls, and fruit pits. Biomass fuels can be either burned directly to produce steam to drive electric generators, or first converted to a solid, liquid, or gas fuel. Conversion may be by thermal, chemical, or biological processes, or some combination of these methods. Biological processes, like fermentation, convert biomass materials into fuel forms, such as natural gas or gasoline substitutes. Thermal processes like gasification decompose the biomass into combustible gaseous fuels similar to natural gas.


Electric power generation and heating are the main uses of biomass in the United States. A small amount of biomass is converted to ethanol fuel for transportation. Since the 1970s, the pulp and paper industry has increasingly used leftover materials as fuel to generate steam and power for the paper making process.


Cofiring—mixing biomass with coal—is the most economical, near-term technology for biomass, with a potential of approximately 7,000 MWe in the United States. Negative impacts include increased costs, reduced efficiencies, and potential lost power due to the lower heat density of the biomass.


Considering that solar cells’ energy efficiency ranges from 15 percent to 20 percent, while photosynthesis converts only around 1 percent of incoming sunlight, it is clear that more energy can be produced by covering the ground with photovoltaics than with trees. There is little reason, therefore, to grow crops specifically for the purpose of energy production, although cultivation of dedicated energy crops is increasing rapidly in some areas due to heavy government subsidies.



Electric power can be generated from the water flow caused by rising and falling tides. Only a few experimental tidal plants exist in the world today, although a number of suitable locations have been identified. In general, such plants cost significantly more to build than do conventional facilities, and they provide only intermittent service.



Fuel cells work on the same principles as do storage batteries, except that free electrons are provided by the continuous flow of some fuel, such as hydrogen, rather than by the corrosion of an electrode. Like microturbines, fuel cells have found a niche in providing distributed power for remote sites and in serving as power backups.


One of the main drawbacks in using fuel cells is their high cost, which is about $5,200 per kilowatt of capacity as compared to $1,300 to $1,500 per kilowatt for a diesel generator. Another drawback is the difficulty in supplying the hydrogen fuel that powers all fuel cells.



The U.S. Energy Information Administration estimates that given current technology, hydrocarbon-fired generation is the cheapest and solar the most expensive.



Electrical power is carried from generation plants over high-voltage wires. The use of high voltages reduces line losses over large distances. Before the power can be used at a home or business, its voltage must be reduced by a device known as a transformer. In the United States, transmission lines are interconnected to form grids. Linking the transmission lines together in this way allows power plants to back each other up in case of problems.



Transportation accounts for more than a quarter of America’s energy consumption and about a fifth of world energy use.



Much of the world’s oil is used to move people and goods, and most of that is consumed by internal combustion engines. When the gasoline-fueled automobile first appeared, it was hailed as a great boon to the environment. That may seem strange today, but at the turn of the twentieth century horses and oxen powered most vehicles. Fueling a nation’s draft animals requires that much land be placed under agriculture—resulting in a loss of natural habitat. In the early 1900s, “it took about 2 hectares [almost 5 acres] of land to feed a horse—as much as was needed by eight people. . . . In 1920, a quarter of American farmland was planted to oats, the energy source of horse-based transport.” Worse, animal power turned city streets into filthy breeding grounds for disease, reeking of manure and urine and swarming with flies.


In addition to the tons of waste that had to be scraped off city streets and carted away each day, the bodies of thousands of dead horses had to be disposed of. Early autos were noisy and belched smoke, but at least they kept the streets clean. Today’s engines are far more powerful, efficient, and cleaner than their ancestors. No other power plant can yet match the gasoline engine’s combination of convenience, power, and low cost.


Despite occasional spikes, the price of gasoline has, on average, declined over the past 80 years (after adjusting for the decreasing purchasing power of the dollar due to monetary inflation). This decline is even more impressive when two key factors are considered. First, the quality of gasoline has improved greatly over the decades. Second, local, state, and federal retail motor fuels taxes have increased more than the rate of inflation.



Electric cars are not a new idea. In the late 1800s, most American automobiles in regular production were electric. Electrics fell out of favor with the driving public, however. Gasoline engines replaced electric motors as the power plant of choice because of their greater power and range. While there have been advances in storage battery technology, batteries have not kept pace with the higher demands consumers place on cars. Electric vehicles are so expensive that the only way automakers can sell even small numbers of them is to price them well below cost. This means that the price of regular cars must go up to offset manufacturers’ losses on the sale of electric vehicles.



Hybrid Electric Vehicles (HEVs) offer a viable alternative to all-electric cars. Hybrids are powered by an internal combustion engine and driven by one or more electric motors. Typically the gas engine runs a generator that powers an electric motor at each of the car’s wheels. An electric battery provides back-up power for entering traffic and passing, and is recharged by the generator when the vehicle is idling or operating at cruising speeds. Although HEVs aren’t true zero emission vehicles, they do offer a number of advantages over all-electric cars. In the near term, hybrids offer a much more realistic alternative to traditional cars than do all-electric vehicles.



Liquefied Petroleum Gas (LPG) and Compressed Natural Gas (CNG) are the most common alternatives to gasoline and diesel used in the United States. Both fuels produce fewer emissions than gasoline and about 25 percent less carbon dioxide (CO2).


On the down side, the Department of Energy estimates that new natural gas vehicles (NGVs) can cost anywhere from $2,500 to $5,000 more than conventional vehicles, while LPG-fueled cars cost about $2,500 more. In addition, these alternative-fuel vehicles (AFVs) are less reliable and less convenient to refuel. Finally, because both fuels contain less energy by volume than does gasoline, larger tanks are needed. Tanks on an NGV can take up nearly all the available cargo space, while holding only about 150 miles worth of fuel.



Ethanol is an alcohol produced from the fermentation of sugar. In the United States, it is typically made from corn.

Benefits of ethanol over gasoline include:

  • Lower carbon dioxide emissions (though other emissions are comparable)
  • Non-toxic
  • Renewable supply

Problems with ethanol include:

  • About 20 percent less BTU content by volume.
  • Cannot be transported through existing pipelines.
  • Significantly more expensive to produce.
  • Requires that a significant amount of land be placed under cultivation. Along with this would come an additional load on the fresh water supply, increased use of fertilizers (which could end up in streams and rivers), and loss of forestland and other natural habitat.
  • Creating ethanol may consume more energy than is contained in the ethanol.



Methanol also is an alcohol, but, unlike ethanol, it is highly toxic. It can be made from coal, natural gas, wood, and biomass.

Methanol’s advantages over gasoline are:

  • Lower carbon dioxide emissions (other emissions are comparable)
  • Renewable supply

Its disadvantages include:

  • Cannot be transported through existing pipelines
  • Somewhat more expensive to produce
  • More toxic
  • About 50 percent less BTU content by volume, requiring larger fuel tanks and resulting in less vehicle cargo and passenger space



From an environmental standpoint, hydrogen is nearly an ideal fuel because its only products of combustion are water and some nitrogen oxides. Unfortunately, hydrogen is very reactive and does not exist in a pure state on Earth. Hydrogen is therefore considered to be an energy carrier (like a battery), rather than an energy source. Hydrogen cannot replace fossil fuels, nuclear power, or any other primary energy source. In fact, energy from these sources must be expended to produce hydrogen. As with other alternative fuels, there is no distribution network for hydrogen, so refilling the tank would present a problem.


Chapter 3~Efficiency—Technical and Economic


Energy cannot be perfectly converted into useful work. Friction, vibration, and heat loss result in energy leakage. But even a frictionless heat engine perfectly insulated against heat loss would still be unable to transform all its energy input into work.


The First and Second Laws of Thermodynamics explain why this is so. There are some complex mathematics behind each of these laws, but they can be roughly summarized as follows:

The First Law: Energy is conserved—it can neither be created nor destroyed. Energy can be transformed from one form to another any number of times. During each such transformation, some energy may be lost into the environment, but the converted energy plus the energy lost must equal the original amount of energy in the system.

The Second Law: Energy flows “downhill.” When energy flows downhill, the system’s total energy differential is reduced. Energy becomes more evenly distributed and less available to perform useful work.
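In symbols, the two laws are often summarized in their standard textbook forms (these equations are the conventional statement of the laws, not taken from the summary itself):

```latex
\Delta U = Q - W
\quad \text{(First Law: a system's internal energy changes by the heat added minus the work the system does)}

\Delta S_{\mathrm{universe}} \geq 0
\quad \text{(Second Law: the total entropy of an isolated system never decreases)}
```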


A system’s entropy can be thought of as a measure of the evenness of its energy distribution. The higher a system’s entropy, the less available is its energy to do work. Given the Second Law, it is often asked how the Earth has become more ordered. The answer is that the Earth is not a closed system. While the entropy of the universe is always increasing, there is a constant and tremendous flow of energy from the sun to the Earth.



Most of the energy that goes into producing electricity is lost. Say that a utility company’s power plant runs on coal. The coal is burned to boil water and produce steam that, in turn, drives a turbine. The turbine runs a generator, and the generator produces electricity. At each step, energy is lost.


When the electricity is used, there are still more energy losses because appliances are unable to convert all of their power input into useful work. Throughout this entire process, most of the energy originally stored in the coal is lost. Only a small fraction of it actually goes to productive work. The efficiency of a given machine is defined as the ratio of the usable work that comes out of the machine to the energy that goes in. Only about 25 percent of the energy in a gallon of gasoline is actually used to move a car, while the other 75 percent is lost.
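The compounding of these losses can be sketched with a short calculation: the overall efficiency of a chain of energy conversions is the product of the stage efficiencies. The stage values below are illustrative assumptions, not figures from the book.

```python
# Overall efficiency of a chain of energy conversions is the product
# of the individual stage efficiencies.
def chain_efficiency(stages):
    eff = 1.0
    for e in stages:
        eff *= e
    return eff

# Illustrative stages for coal-fired electricity reaching an appliance:
# boiler, turbine, generator, transmission, end-use appliance.
stages = [0.85, 0.45, 0.98, 0.92, 0.80]
print(f"{chain_efficiency(stages):.0%}")  # 28%
```

Even though no single stage here is grossly wasteful, barely a quarter of the coal’s original energy ends up doing useful work, which is why improving any one link in the chain pays off across the whole system.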



Some believe that the most environmentally benign technologies should always be adopted regardless of cost. This view fails to recognize that the cost of producing something reflects, to some degree, the effort, resources, and pollution involved in making it. Market prices enable us to compare the relative value of different resources and decide whether a given action is worthwhile. For example, if the efforts of an oil producer are to be of any use, they must produce more energy in the form of oil than is expended to recover and refine that oil. The activity must, in effect, make an energy profit.


The only reason we could even consider determining profit or loss by comparing energy expended against energy produced is that energy appears on both sides of the equation. In a free market, prices tend, over time, to reflect the costs of producing a commodity. The oil producer does not need to know how much energy it takes to build a pump; he needs to know only the pump’s price. Included in this price are the pump manufacturer’s costs for overhead, labor, materials, and energy.


Knowing the price of the equipment, the cost of its transportation and installation, and the price that consumers are willing to pay for his products, the producer can calculate the monetary profit that he would receive by recovering the oil. As long as they make a monetary profit, then, producers can be reasonably sure that they are also making a net energy profit.



In addition to production costs, market prices also reflect demand. Prices also allow the relative values of different goods to be compared at any moment. Relative prices tell manufacturers what people value most and therefore what they should use their resources to make. Producers that supply the public with the goods they want at prices they are willing to pay will make profits. Thus, the market automatically directs more resources to those producers that best meet consumers’ needs.


Oil refineries provide a good example of how price drives production. A refinery can turn a barrel of oil into a number of products, including gasoline, diesel, heating oil, lubricants, and feedstock for plastics. By adjusting the refining processes, production can be shifted to make more of one product and less of another. Refiners continually monitor the market prices of the products they make so that they can adjust their output in response to shifts in consumer demand. By so doing, they satisfy their customers and maximize their profits. A government-run refining monopoly, by contrast, would be driven by politics rather than by consumer demand.



One of the powers of the free market pricing system is that it incorporates the big picture into local decision making. For example, recycling is typically presented as inherently good. But recycling not only saves resources, it also costs resources. If recycling a ton of paper costs more resources and produces more pollution than it saves, why do it? Without a free-market pricing system, the environmental impact of recycling cannot be determined.


If local energy efficiencies were the only thing that mattered, we would tear down and replace the country’s power plants every time more efficient technology became available. While this would ensure that our power plants would always convert energy resources into electricity as efficiently as possible, overall, resources would be wasted.


If a sufficiently encompassing resource balance is made, conserving money should translate into conserving resources. While price distortions could sever the relationship between money and resources, such distortions are typically the result of government interference with the marketplace.



Industry requires land, labor, and capital, along with institutions such as private property rights that allow them to be owned and traded. We in the West are so accustomed to these institutions that we no longer see them—they are like the air we breathe. Yet without them our civilization and perhaps even our technology would be impossible. Property rights are a precondition for trade; one cannot sell that which one does not own. Despite problems with American property rights laws as they applied to oil and gas reservoirs, the laws provided the necessary framework by which these resources could be found, produced, and sold.


Within such a framework, oil production benefited the entire population of the country. Farmers and ranchers, under whose land the oil was found, were compensated for the use of their land by oil explorers, drillers, and producers. The owners, employees, and shareholders of oil companies, and of oil well service and supply companies, benefited. Refiners and retailers profited as well. Most of the benefits, however, accrued to the millions of people who were supplied with increasingly affordable energy to power their cars, homes, places of business, towns, and cities. People living in countries without strong private property laws benefit far less from the land’s oil and mineral wealth. In the first place, companies hesitate to invest in countries that do not recognize property rights.


Outside North America and Europe, government ownership of subsurface rights is the rule rather than the exception. In fact, governments control most of the world’s oil and gas reserves—from the Middle East, to countries of the former Soviet Union, to Central and South America.



The petroleum industry is actually made up of five sub-industries or sectors. Each sector represents a different stage in the petroleum processing chain. In industry jargon, the exploration and production sector is called the upstream part of the business, transportation the mid-stream, and refining, wholesaling, and retailing the downstream. The largest firms in the energy industry are integrated across these industry sectors.


Many smaller independents are able to compete with the large integrated firms, especially in niche markets. The presence or absence of economies of scale (falling costs from larger output) and economies of scope (falling costs from performing more than one function) determine the size and structure of firms in a free market. Globally, petroleum companies include both privately-owned capitalistic firms and government-owned socialistic firms. In the United States, the natural gas and electricity industries each have three segments—production, transmission, and distribution.


To the untrained eye, the energy industry may seem like a collection of physical resources: petroleum reservoirs and mineral deposits, oil wells and mines, refineries and power plants, pipelines and power lines. But intellectual capital drives these physical assets. The entrepreneurial component of the energy business, in which new technologies and strategies are employed to do entirely new things or perform old tasks in new ways, is the engine of progress.



The amount of electricity used in any community varies throughout the day. Consumption is typically much greater during daylight hours than at night. Usage also varies by season. But regardless of the time of day or the season, when consumers turn on a switch, they expect the power to be there. In order to meet this uneven demand, power companies must build their plants large enough to handle peak loads.


An alternative is to use the existing plant’s excess capacity during off-hours to generate power for storage. Then, during times of high demand, this storage can be tapped to supplement the main generators. The problem is that, while this may reduce the resources needed to build a new or bigger power plant, it results in higher overall fuel costs. Significant amounts of energy are wasted in converting electrical power into stored potential energy and back again. Because of the costs involved, such storage techniques are the exception.
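The round-trip penalty can be sketched as follows; the charge and discharge efficiencies are assumed round numbers, not figures from the book:

```python
# Storing power and recovering it later loses energy both ways, so the
# storage route burns more fuel per unit of electricity delivered.
charge_eff, discharge_eff = 0.80, 0.85   # assumed efficiencies

def fuel_energy_needed(delivered, path_efficiency):
    return delivered / path_efficiency

direct = fuel_energy_needed(100.0, 1.0)   # baseline: serve the load directly
via_storage = fuel_energy_needed(100.0, charge_eff * discharge_eff)
print(direct, round(via_storage, 1))      # storage path needs ~47% more input
```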


Perhaps the best way to reduce the need for new capacity is to increase the price of power used during peak hours. New metering technology is making this possible.


Nothing is free, and there are no perfect solutions. There are always tradeoffs; one thing is lost in order to gain another. Market incentives lead people to balance these trade-offs to make the best use of available resources.


Chapter 4~Will We Run Out of Energy?


Since hydrocarbon production began, many smart people have made thousands of pessimistic predictions—each just as alarming and each just as wrong. Not only has the world’s known supply of coal and hydrocarbons failed to disappear, it has actually grown—substantially! Changes in scarcity can be measured by comparing the amount of labor time that an average worker needs to expend to earn the income to purchase a particular item. Major forms of energy have grown substantially cheaper measured in work-time pricing.
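Work-time pricing can be computed in one line; the wages and prices below are invented for illustration, not the book's figures:

```python
# Hours of labor an average worker needs to buy one unit of energy.
def work_time_price(price, hourly_wage):
    return price / hourly_wage

# Hypothetical then-vs-now comparison for a gallon of gasoline:
earlier = work_time_price(0.30, 1.50)   # assumed older price and wage
later = work_time_price(2.00, 18.00)    # assumed newer price and wage
print(earlier, later)  # fewer minutes of work buy the same gallon later
```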


How could all of these people’s predictions have been so mistaken? Most simply took the amount of known recoverable reserves of a particular resource and divided it by the amount that was being used each year. The result of the division was the number of years left, or reserve years. The first problem is with the word “known” in the phrase “known recoverable reserves.” What is known is constantly changing. People are continually searching for and finding more resources.


Exploration is expensive. It takes a lot of people, equipment, and time to find oil, coal, natural gas, and so on. Because exploration is so costly, it does not make much sense to look for resources that will not be needed for two or three hundred years. That is why the known reserves of so many resources often seem to fall in the range of fifteen to twenty years regardless of how many years have passed or how much of the resources have already been produced. The next problem is with the word “recoverable.” What is really meant by this word is economically recoverable. When prices change, the amount of reserves that can be recovered economically also changes. In addition, people often come up with better and more efficient ways of doing things.


Finally, there is another problem with the simple “number-of-years-left” formula—the annual consumption rate. When conditions change, people’s actions change.
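The naive reserve-years arithmetic, and the way growing "known" reserves undermine it, can be sketched with wholly hypothetical figures:

```python
# "Years left" = known recoverable reserves / annual consumption.
def reserve_years(reserves, annual_use):
    return reserves / annual_use

print(reserve_years(300, 20))  # naive answer: 15.0 years left

# But exploration, prices, and technology keep adding to what is
# "known" and "recoverable" (25 units per year assumed here).
reserves, use = 300, 20
for year in range(15):
    reserves += 25   # assumed additions from discovery and better recovery
    reserves -= use  # production
print(reserve_years(reserves, use))  # 18.75: more "years left" than before
```

After fifteen years of steady production, the hypothetical "years left" figure has risen rather than fallen, which is exactly the pattern the depletion forecasts kept missing.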


It is a mistake to confuse a resource with the service it provides. People want their homes to be warm and comfortable, but they do not really care whether it is done by burning coal or by splitting atoms.



What is left out of the “number-of-years-left” equation is human ingenuity. If energy is the master resource, then creative and knowledgeable people are the ultimate resource. A common mistake that many make is to project current trends into the future as if they will continue forever.



Pessimists admit that advances in exploration technology and fuel efficiency have stretched world fossil fuel supplies far more than they had predicted. However, they argue that no matter how long supplies last, there must eventually be a point at which they can no longer be economically extracted.


So when will the world’s oil supply be depleted? Hundreds of years of probable coal and hydrocarbon reserves remain at current consumption rates, though these rates will accelerate as China, India, and other poor nations industrialize. However, as people in these countries become freer, their know-how and financial capital can be expected to help make energy more plentiful and useful, not less.


The term proved reserves refers to those resources that have been discovered and are currently economically recoverable, while probable reserves include those additional amounts that can be expected to be recoverable under realistic price and technology changes.


Even in a worst-case scenario, resources would not disappear overnight. Instead, they would gradually become harder and more expensive to find and produce. People would have time to develop the technology necessary to deal with the growing scarcity, whether on the demand side (increasing conservation) or the supply side.


If a resource “depletes,” market signals change. Higher prices check consumption, and substitutes, which were previously uneconomical, are put into service. As fossil fuel reserves are consumed, people will switch to synthetic oil and perhaps other sources of energy that we cannot even imagine today. Remaining carbon-based deposits can satisfy the world’s energy needs for hundreds or thousands of years. Long before that fuel is expended, technology will advance beyond anything we can possibly comprehend today.


Energy depletionists concentrate on current sources of energy and their inevitable decline. As a result, they see a bleak future for the world. Expansionists, by contrast, are less interested in any particular resource than in the service that it provides. Their view of the future is brighter because they choose to focus not on limited resources, but on the limitless human mind.



While high resource prices can be painful in the short term, they are really symptoms of deeper problems. They serve as warnings of shortages, and, at the same time, provide incentives for overcoming those shortages. Using price controls to solve the problem of rising prices is like trying to cure a child’s fever by adjusting the thermometer. By eliminating the feedback that free-market prices provide, price controls can quickly create shortages where none existed before or make existing shortages worse. This has happened repeatedly in the United States. In each case, a boom in oil production and a drop in prices followed decontrol, thus clearly revealing the self-defeating nature of the government’s intervention.
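The way a cap below the market-clearing price creates a shortage can be sketched with invented linear supply and demand curves:

```python
# Hypothetical linear curves: quantity demanded falls with price,
# quantity supplied rises with it.
def demand(p):
    return 100 - 2 * p

def supply(p):
    return 4 * p - 20

p_market = 20                      # clearing price: demand(20) == supply(20)
p_cap = 15                         # a control set below the clearing price
shortage = demand(p_cap) - supply(p_cap)
print(shortage)                    # quantity demanded exceeds supply by 30
```

At the capped price, buyers want more than sellers are willing to provide, and the gap must be rationed by queues, blackouts, or decree rather than by price.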


In hindsight, the confusing swirl of regulations that the government spewed out during the 1970s oil crisis gave consumers the worst of both worlds—higher prices and shortages. Like a pebble dropped in a pond, each government action rippled through the economy in ever-widening circles, yielding unforeseen consequences and creating demands for additional government intrusion.



More recently, the state of California faced its own energy crisis. Under the new regulations, producers sold their electricity to a centralized state-managed power exchange at a price set by the spot market on the previous day. Utilities purchased their power from this exchange for resale to the public.


As long as plenty of power generation capacity was available, the new system worked fairly well. But in 2000, a lot of things suddenly went wrong. The availability of some power plants was reduced because of environmental restrictions. The plants had used up their allotted air emission allowances, and the cost of the additional allowances needed to enable their continued operation was prohibitively high. Nearly half of the state’s electricity, and all of its peak capacity, was generated from natural gas. Therefore, the increase in demand for electric power caused by the weather, coupled with the loss of hydroelectric power, triggered a jump in demand for natural gas. As a result, natural gas prices shot up and the cost of generating electricity rose with them.


In September 2000, California attempted to control rising prices by imposing a cap of $250 per megawatt-hour (MWh) for electricity sold to the state power exchange. According to Jerry Taylor, “Since wholesale prices ranged between $150–$1,000 per MWh depending on the time of day, generators responded by dramatically curtailing their sales to the California exchange.” Demand quickly outstripped supply. Rolling blackouts, affecting more than 675,000 homes, were used to ration power. Although generators outside the state picked up the slack, actions by the state and federal governments made it economically risky to sell power to California, and California’s complex regulations allowed people to game the system. In the end, Californians as taxpayers paid what Californians as consumers did not. They got the worst of both worlds: high prices and blackouts.


In hindsight, it seems clear that the market would have ended the crisis without all the sound and fury generated by both the state and federal governments. Unfortunately, politicians wanted voters to see them doing something during the crisis, and that led them to create bureaucratic solutions that will no doubt get in the way for years to come.



The sharp jumps in motor fuel prices experienced in the United States during 2000, 2001, and 2004 were not due to resource shortages, but rather to:

  • Output quotas by OPEC. The cartel controls nearly 40 percent of the world’s production and can change world crude prices overnight simply by announcing new quotas for its member nations.
  • Political turmoil in Venezuela, a major oil producer.
  • Unexpectedly high demand from India and China.
  • Terrorist acts in Saudi Arabia (2004).
  • Too little refinery capacity and too many regulations.


Normally, companies seeing high demand and high prices for their products would expand production. However, the long-term trend for refinery product prices has been downward. Temporary price spikes are not enough to justify the huge cost of constructing new facilities that will take years, not months, to complete. Although companies are expanding some existing refineries in the face of rising demand, because of thin profit margins, no new plants have been built in the United States in 25 years.


Making matters worse is that city and state governments around the country have mandated that the gasoline sold locally must meet special environmental rules. If a refinery producing gasoline for Chicago has to shut down for whatever reason, fuel from other areas cannot be readily shipped in to make up the difference.



Many in this country are concerned about the growing dependency on foreign oil—especially on oil from nations not always friendly to the United States. In the past, government efforts to reduce imports have centered on tariffs (that is, taxes on imported oil), quotas (maximum allowable imports), or subsidies to domestic producers. These strategies all mean higher energy prices, higher taxes, or both. Tariffs, quotas, and subsidies generally do more harm than good.


The only way that OPEC can really hurt the United States is by cutting back production—impacting not just America, but the world. As long as the government did not interfere with the market, the likely effect would be a temporary price hike. Higher prices would drive demand down and spur production in non-OPEC countries, including the United States.


Meanwhile, rising oil prices would encourage OPEC nations to cheat on their quotas and sell more of their product. In the end, the only lasting impact would be that OPEC would lose market share to other oil producing nations. As long as oil can flow freely around the world, then, the United States should not have to worry about oil imports.


Under wartime conditions oil cannot flow freely, and the United States might not be able to depend on foreign oil. A sudden loss of oil due to an outbreak of hostilities could have a serious effect on the nation’s economy. Open trade makes wars less likely because countries have little incentive to attack their trading partners. Conversely, tariffs and trade restrictions increase the chance of armed conflict.


In the mid-1970s, the United States established the Strategic Petroleum Reserve (SPR) to store large quantities of crude oil in case of an international emergency, an amount equal to three to four months of the country’s total crude imports.


Oil imports have engendered another security concern: the fear of trade imbalances. Though this concern is widespread, it is based on several fallacies. Perhaps the most glaring of these misconceptions is the belief that money is wealth. If money truly were wealth, then any nation could quickly grow rich simply by firing up the printing presses. Money is a convenient medium of exchange that lets us compare the relative value of apples and oranges, but it is the apples and oranges themselves and not the money that constitute real wealth.


Confusing money and wealth leads to another myth—the idea that a nation can become rich by exporting goods and importing money. When we engage in trade, we are not after foreign pieces of paper, we want foreign products. The reason we export goods is to exchange them for imports that we want more. Similarly, when people in other countries trade with us, they want our products, not our dollars. While it is true that at any given time the country probably has “trade imbalances” with particular countries, this fact should be of no more concern than the fact that the average person has trade imbalances with the local supermarket and the gas station down the street.


Another aspect of energy security is the issue of whether the country will have sufficient electrical power generation and transmission capacity in the future. America’s buildings, cities, transportation systems, water and sewage treatment plants, food supply infrastructure, and communications systems all depend upon a reliable supply of electrical power.


Which is the better way to go—toward more regulation or less?


Chapter 5~Energy and the Environment


Every action of every living thing uses resources and produces pollution. Even while you are just sitting and reading, you are consuming oxygen (a resource) and producing carbon dioxide (a pollutant). Similarly, trees consume carbon dioxide (a resource) and produce oxygen (a pollutant). But wait! How can oxygen and carbon dioxide each be both a resource and a pollutant? It all depends on which side of the fence you are sitting on. If you happen to be a tree, oxygen is something you are trying to get rid of (remember, trees give off oxygen), and carbon dioxide is something that you need to survive. If you are a human, on the other hand, you need oxygen to breathe, and you have to get rid of carbon dioxide. Nature has a way of balancing things out. We need trees, and trees need us.


So when is a substance a resource, and when is it a pollutant? The question is best answered by example. There are a number of natural oil seeps in the floor of the Pacific Ocean off the coast of California. The amount of oil flowing into the water is small and poses no danger to sea life in the area. In fact, given that petroleum is an organic substance (that is, it is carbon-based), it is biodegradable and serves as food for microbes, which are, in turn, eaten by larger organisms, and so on up the food chain. In small amounts, then, crude oil actually acts as a fertilizer.


However, suppose that an oil tanker were to run aground and spill millions of gallons of crude into the water. That amount of oil would be so overwhelming that it might take years for microbes to break it all down. In the meantime, it would almost certainly kill thousands of fish, birds, sea mammals, and other creatures. In this case, the oil is clearly a pollutant. If the substance in question decays fairly rapidly and provides benefit to some living creature, it’s probably a “resource.” If it lingers on and especially if it harms or destroys life, it is a “pollutant.” Even when toxic substances are involved, the most important factor in determining whether something is a serious pollutant is quantity and nature’s ability to deal with that quantity.


Pollution is not a new problem. Should we turn back the clock and live as Stone Age peoples did? Anthropologists are beginning to suspect that that way of life was not as environmentally friendly as previously believed. Such a way of life is so unproductive and wasteful that it could support only a fraction of the people now living in the world today. Tribal peoples typically led relatively short, disease-ridden lives, and members too old or sick to pull their own weight were often, quite literally, left for the wolves. All in all, there is a lot to be said for indoor plumbing, painless dentistry, and retirement plans.



Inefficiency is waste, and waste is pollution. For example, there is waste when fuel does not burn completely. The unburned portion of the fuel either goes up the chimney or must be hauled away to a dumpsite—pollution.


Before wood can be used as a fuel, it must first be hauled to the site where it will be used (this includes hauling the part of the wood that will not be burned as well as the part that will). Transportation costs resources (fuel) and produces pollution (engine or animal emissions). When the wood is burned, soot, smoke, and ashes (unburned materials) either go up the chimney (pollution) or must be carted away (more transportation costs and more pollution).


Natural gas, on the other hand, is a very efficient fuel. It burns almost completely so that little energy is expended or pollution created in either transporting useless material to the power plant or in hauling unburned ashes away. In addition, far fewer emissions go up the chimney.


So should people be forced to act more efficiently? Fortunately, free markets automatically provide incentives. In a free market, people are encouraged to act efficiently in order to save money. In doing so, they usually end up saving resources thereby reducing waste and pollution.


Early businessmen probably had no intention of protecting the environment. Yet their desire to reduce costs and increase their profits led them to take actions that did exactly that. This market-driven search for profits has, over time, moved people in western nations to reduce waste and use resources ever more efficiently. As a result, the air and water in these countries have been getting progressively cleaner even as population, production, and fuel combustion have increased.



Within just a few decades, market incentives and improving technology combined with laws and regulations have had a dramatic effect on our country’s environment.



The environmental picture is not nearly as bright in other parts of the world. Huge cities have horrific smog despite being located in countries with far less industry than the United States. Tens of thousands to millions of cars, trucks, and buses with no smog controls cram the streets; hundreds of uncontrolled factories, smelters, and power stations belch smoke and pollutants; and in some cities millions of open cooking fires foul the air. Third-world rivers are often essentially open sewers spiked with pesticide cocktails.


In some of the countries that made up the former Soviet Union, pollution-control laws are ignored and little attention is paid to energy efficiency. As a result, the environment in these countries is in such terrible shape that it has significantly damaged the health of the people who live there.


The main environmental issues that the world faces center on the Third World. Air and water pollution do not respect international borders—dirty air created in one country can quickly become another’s problem. The difference is poverty. Third World countries are much poorer than western nations. When people are worried about where their next meal is coming from, they are much less concerned with such things as clean air and water. Clean and efficient technology is generally more expensive than dirty, inefficient technology. No high-tech equipment is needed to burn wood for heat. But it takes a lot of costly machinery and know-how to locate, produce, transport, and use natural gas as a fuel.


Why are these countries poor? Some suggest that it has to do with natural resources. America has plentiful resources; therefore, America is rich. But Russia also has huge resources, as do Africa, Mexico, and South America, and yet these areas are poor. At the same time, wealthy nations such as Japan, Taiwan, and Switzerland have almost no natural resources.


Others point to overpopulation as the problem; India and China have high population densities, so they are poor. Yet the Netherlands, Japan, Hong Kong, Belgium, South Korea, Taiwan, and Great Britain all have much higher population densities than either of these countries, and they are wealthy. At the same time, extremely impoverished nations like Ethiopia have very low population densities.


What wealthy nations have in common is liberty—the right of individuals to act as they choose without interference, so long as they don’t interfere with the rights of others to do the same. The problems in the Third World come not from a lack of governmental regulations, but from a lack of freedom to create, own, trade, and sell property.



Laws and regulations, however well meant, often make things worse.



Some free marketers advocate doing away with regulations altogether. They argue that market incentives and property rights enforcement are sufficient to protect the environment. If, it is argued, the government would simply allow people to protect their property through the courts, our environment would be much cleaner. Strengthening and expanding property laws is an important step in the right direction.


Another sort of problem occurs in situations in which no one owns the property or resource being damaged. In such cases, government regulation may be the only way to protect the environment. The challenge is to create regulations that do more good than harm.



Often regulators specify the means rather than the ends. That is, instead of establishing goals (e.g., clean air or clean water), local, state, and federal government agencies may write laws and regulations that either ban or require certain methods, technologies, or materials. This means-setting, command-and-control approach creates a number of problems. Here are just a few examples:

  • Fear of oil spills led lawmakers to prohibit offshore drilling, so the nation must import more oil than would otherwise be the case. But oil tankers pose a greater oil spill danger than does offshore oil production. American coastlines are, therefore, actually less safe thanks to legislative “protection.”
  • The beneficiaries of regulation generally have a stronger interest in keeping the regulation in place than anyone else has in getting rid of it. As a result, they are willing to spend time and money lobbying the government to support their position.
  • When an agency is created to oversee a business, one of its first needs is employees with knowledge of that business. Where can it go for such people but to the industry itself? Similarly, when government employees retire and wish to begin second careers, where can they go other than to the business about which they have spent their professional lives learning?  This is just one way in which industries can exert enormous influence over the government agencies created to regulate them.



Means-setting can pervert the goals. The objective ceases to be clean air, clean water, or whatever other laudable end, and instead becomes adhering to the means mandated by the regulation. Perhaps a better way to regulate is to simply define the goals and then get out of the way.

  1. Establish a goal.
  2. Define a yardstick for determining whether the goal has been met.
  3. Establish the penalties for failing to meet the goal.
  4. Let individuals and companies figure out how to meet the targets themselves.


People are amazingly creative. Given clear and reasonable goals, they will find ways to achieve them. And, with hundreds or thousands of people working towards a goal—trying different solutions, failing, then trying again, sharing information about what works and what does not—it is almost certain that their solutions will be far better than anything a regulator could devise. The main problem with goal-setting is deciding what a reasonable target is. How clean is clean enough?


Also, there are opportunity costs. That is, when resources are used to make water absolutely pure, those resources are not available for other, perhaps more important, things. If billions of dollars are spent to reduce a pollutant to save an estimated ten lives per year, those dollars cannot be spent on highway improvements that could save a hundred lives a year. At what point do the costs exceed the benefits?
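The opportunity-cost arithmetic can be made explicit; the $2 billion budget is an assumed figure for illustration, not one from the book:

```python
# Cost per statistical life saved under two alternative uses of the
# same (assumed) budget.
budget = 2_000_000_000          # hypothetical dollars available
pollution_lives = 10            # lives saved per year by the pollution rule
highway_lives = 100             # lives saved per year by highway spending

per_life_pollution = budget / pollution_lives
per_life_highway = budget / highway_lives
print(per_life_pollution / per_life_highway)  # pollution route costs 10x more per life
```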


Clearly, it is important to balance the cost of cleaning the environment against the risk of leaving it less than perfectly clean. Earlier in this chapter, we proposed a definition of pollution that included consideration of the volume of the pollutant and the ability of nature to deal with that volume. Perhaps a more practical definition of “clean” would allow emissions as long as they did not exceed a level that the local environment could handle.



A number of economists and environmentalists have championed a “market-based” alliance between government and industry to help clean the environment. Under cap-and-trade, the government sets a limit (the cap) on total emissions and issues tradable permits; firms that can cut emissions cheaply sell their unused permits to firms that cannot. This approach combines government goal-setting with the market’s ability to allocate resources to their best effect. Many economists believe that such a system would enable cities to control pollution far more efficiently than traditional “command-and-control” regulations do. Some economists prefer emission taxes to cap-and-trade: taxes are simpler to administer and easier to adjust or eliminate as conditions change or new information becomes available.
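The efficiency claim can be illustrated with a toy example (all numbers hypothetical): two firms face the same total emissions cap, but one can cut emissions far more cheaply than the other. A uniform mandate splits the cut evenly; trading lets the cheap abater do all the cutting.

```python
# Toy illustration (hypothetical numbers): why tradable permits can meet
# the same emissions cap at lower total cost than a uniform mandate.

cost_a = 10   # Firm A's abatement cost, $/ton (cheap to cut)
cost_b = 40   # Firm B's abatement cost, $/ton (expensive to cut)

required_cut = 100  # total tons of abatement the cap requires

# Command-and-control: each firm must cut the same amount.
uniform_cost = (required_cut / 2) * cost_a + (required_cut / 2) * cost_b

# Cap-and-trade: Firm B buys permits from Firm A, so the cheap abater
# does all the cutting; any permit price between $10 and $40 per ton
# leaves both firms better off than the uniform mandate.
trade_cost = required_cut * cost_a

print(f"Uniform mandate cost: ${uniform_cost:,.0f}")   # $2,500
print(f"Cap-and-trade cost:   ${trade_cost:,.0f}")     # $1,000
```

The same cap is met either way; trading simply reallocates the cutting to wherever it is cheapest.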



Some power companies and independent marketers have begun offering green energy, or electric power generated from sources that are considered to be environmentally friendly. Consumers who choose to purchase such power pay a higher rate given that such energy is more expensive to generate.


The concept of green energy assumes that renewable technologies such as solar, wind, tidal, geothermal, and biomass have less environmental impact than do either hydrocarbon or nuclear power generation. Even though hydroelectric power produces no emissions, it is usually not considered green because it requires damming rivers and altering the local environment.


But are so-called green technologies really green? Windmills and solar panels provide only intermittent service, and conventional, nongreen, power sources must make up the difference when the wind is not blowing or the sun is not shining. Should solar and wind generation be considered less green because of this?


In addition, spinning wind turbine blades can kill birds. Should wind farms located in areas providing habitat for endangered species be rated lower than farms located in less sensitive regions? Should geothermal energy be rated green, given that naturally occurring heat sources deplete with time, and some geothermal plants release toxic chemicals into the environment?

Should biomass be included as a green technology given that it produces air emissions and may encourage deforestation?


On the other hand, should power from natural gas be added to the green list given that it is the cleanest of the fossil fuels and compares favorably with renewables on such measures as wildlife disturbance, noise, land use, and visual blight?


The definition of “green” will, and should, change with improving technology, regulatory reform, and new information about the environment. In the end, though, no power source is perfect; there will always be trade-offs.


Chapter 6~Energy and Climate Change


There are at least hundreds, and perhaps thousands, of years’ worth of fossil fuels still available on Earth. However, the issue of man-made global warming has raised an important question: What will happen to the environment if that fuel is actually burned?


As the name implies, hydrocarbon molecules are made up of chains of carbon atoms bonded to hydrogen atoms. When these molecules are oxidized (burned), heat is released along with water (H2O) and carbon dioxide (CO2); incomplete combustion also produces carbon monoxide (CO). Because air is about 79 percent nitrogen, the high temperatures of combustion also produce oxides of nitrogen (NOx), and if the fuel contains sulfur, then sulfates (compounds of sulfur and oxygen) will be formed as well.
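The chemistry can be made concrete with methane (CH4), the simplest hydrocarbon. With ample oxygen, burning yields only carbon dioxide and water; with too little, carbon monoxide forms instead:

```latex
% Complete combustion of methane:
\mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} + \text{heat}

% Incomplete combustion (insufficient oxygen) yields carbon monoxide:
2\,\mathrm{CH_4} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{CO} + 4\,\mathrm{H_2O} + \text{heat}
```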


Carbon dioxide and nitrous oxide (N2O) are greenhouse gases that, in high enough atmospheric concentrations, will warm the Earth’s climate if no natural or human-driven processes offset the effect. Some scientists worry that such climatic changes might cause extreme heat and drought, more violent storms, higher ocean levels (putting coastlines and cities at risk), and a wider spread of tropical diseases.


Are these concerns justified, and if so, what can be done? The short answers are (respectively) “maybe” and “quite a bit.”



Have you ever gotten into a car after it had been sitting in the sun and noticed how much hotter the air is inside the car than outside? This phenomenon is caused by the greenhouse effect. Sunlight passing through the car’s windows is absorbed by the interior, heating it and the air inside the car. Some of the heat passes back through the windows, but some is reflected off the windows back into the car. This trapped heat builds until the car’s interior is warmer than the outside air.


The Earth’s atmosphere acts like the car’s windows, keeping heat from escaping back into space as infrared radiation. Greenhouse gases (such as carbon dioxide, methane, and water vapor) let incoming sunlight through, but block some of the infrared energy radiated upward by the sunlight-warmed Earth. According to the EPA, “Without this natural greenhouse effect, temperatures would be much lower than they are now, and life as we know it would not be possible. Thanks to greenhouse gases, the Earth’s average temperature is a hospitable 60°F,” about 59°F warmer than it would be otherwise.
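The EPA figures quoted above can be roughly checked with a standard energy-balance calculation (a sketch using textbook values for the solar constant and Earth's albedo): without the greenhouse effect, the surface would sit near the planet's "effective" radiating temperature, set by balancing absorbed sunlight against blackbody emission.

```python
# Back-of-the-envelope check on the "about 59°F warmer" figure, via the
# Stefan-Boltzmann law. Standard textbook values (assumptions):
S = 1361.0       # solar constant, W/m^2
albedo = 0.30    # fraction of sunlight reflected straight back to space
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed sunlight, averaged over the whole sphere (hence the factor of 4):
absorbed = S * (1 - albedo) / 4

# Balance: sigma * T^4 = absorbed  =>  T = (absorbed / sigma)^(1/4)
t_kelvin = (absorbed / sigma) ** 0.25
t_fahrenheit = (t_kelvin - 273.15) * 9 / 5 + 32

print(f"Effective temperature: {t_kelvin:.0f} K = {t_fahrenheit:.0f} °F")
```

The result comes out near 0°F, consistent with the EPA's comparison of a hospitable 60°F with greenhouse gases against a frozen world without them.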


But if the greenhouse effect becomes too strong, and if not enough radiated heat can escape from the atmosphere, then temperatures may rise too much.


Air pollution also has an impact on how much of the sun’s energy penetrates the atmosphere and how much gets back out. Sulfates and particulates (e.g., smoke) may block the sun’s incoming rays and therefore have a cooling effect.


When particulate emissions were much greater during the 1970s and 1980s, the possibility of global cooling was a concern. Now that particulates are under better control, at least in western countries, global warming is the main worry.



Greenhouse gases are relatively transparent to visible light and relatively opaque to infrared radiation. They let sunlight enter the Earth’s atmosphere, and, at the same time, keep radiated heat from escaping into space. The following sections provide brief descriptions of the most important of these gases.


·   Carbon Dioxide (CO2)

By volume, carbon dioxide currently makes up 367 parts per million (0.0367 percent) of our atmosphere. About 95 percent of this comes from natural sources (emissions from animal life, decaying plant matter, etc.) and the rest from human sources, mainly the burning of carbon-based fuels. The human share of the total is relatively small, an estimated 3.5 percent to 5.4 percent. It is estimated that carbon dioxide accounts for about 60 percent of the anthropogenic (or human-caused) greenhouse change known as the enhanced greenhouse effect.
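Taking the chapter's figures at face value, quick arithmetic ties them together, converting parts per million to percent and putting the human share in ppm terms:

```python
# Quick arithmetic on the figures above (the chapter's own numbers).
total_co2_ppm = 367  # atmospheric CO2 concentration, parts per million

# Parts per million to percent: divide by 10,000.
print(f"{total_co2_ppm} ppm = {total_co2_ppm / 10_000:.4f} percent")

# The human-attributed share, at the low and high estimates:
for share in (0.035, 0.054):
    print(f"{share:.1%} of {total_co2_ppm} ppm ≈ {total_co2_ppm * share:.0f} ppm")
```

So the human-attributed portion works out to roughly 13 to 20 ppm of the 367 ppm total.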


There are benefits to increased CO2 concentrations as well as potential problems. Plants need carbon dioxide. As the atmosphere becomes richer in CO2, crops and other plants will grow more quickly and profusely. A doubling of carbon dioxide concentrations can be expected to increase global crop yields by 30 percent or more. Higher levels of CO2 increase the efficiency of photosynthesis, and raise plants’ water-use efficiency by closing the pores (stomates) through which they lose moisture.


·   Water Vapor (H2O)

The most common greenhouse gas is water vapor, which accounts for about 94 percent of the natural greenhouse effect. Its atmospheric concentration is ten times that of CO2. Water vapor’s impact on the climate is complex and not well understood. It can both warm and cool the atmosphere, though it is believed that water vapor has a net warming effect.


Climate models generally assume that a small initial warming increases evaporation, and that the added water vapor then amplifies the warming further (a positive feedback). If the Earth’s climate were that sensitive, it would probably have spun out of control long before now, given that there have been periods in the distant past when temperatures and CO2 levels were higher than they are today. This leads some scientists to suspect that there may be natural mechanisms working to keep the climate in balance.


·   Methane (CH4)

While methane is some 25 times more powerful a warming agent than carbon dioxide, it has a much shorter life span, and its atmospheric concentration is only about 1.7 ppm. Concentrations have more than doubled since 1850, though for reasons that are still unclear, they have leveled off since the 1980s. About 20 percent of the total human greenhouse impact is due to methane.


·   Nitrous Oxide (N2O)

Nitrous oxide’s warming potential is some 300 times that of CO2. It has an atmospheric concentration of about 0.32 ppm, up from 0.28 ppm in 1850. Globally, fertilizers alone account for 70 percent of all emissions. Catalytic converters, adopted on car exhaust systems to meet the emission standards of the 1970 Clean Air Act, increase N2O emissions, though to what extent is under debate.


·   CFCs

Chlorofluorocarbons, or CFCs, are powerful global warming gases that do not exist in nature but were invented by scientists at an American chemical company in the 1930s. They are used as propellants in aerosol sprays and as refrigerants. These gases affect the climate in different ways depending upon their location in the atmosphere. At lower altitudes, they trap heat like other greenhouse gases and have a much stronger warming effect than CO2.


In the upper atmosphere, or stratosphere, chlorofluorocarbons are broken down by sunlight. Worldwide CFC emissions have been steadily dropping, and it is expected that ozone depletion (the ozone hole), which reached its peak in the last decade, will drop to zero later this century.



We know that concentrations of carbon dioxide, methane, and nitrous oxide are increasing due to human activity, but is the climate getting warmer as a consequence? The evidence, while still not conclusive, suggests that it is. A number of scientists point to ground measurements taken over many decades that indicate a noticeable temperature rise.


Skeptics point out that the data are skewed toward urban areas where most measurements are taken. The problem, they argue, is that cities tend to be warmer than the surrounding countryside because of heat absorption by streets, parking lots, and dark roofs. While measurements are adjusted to compensate for this urban heat island effect, the critics claim that the adjustments are insufficient.


At least some of the last century’s warming is believed to be the result of increased solar activity. Changing ocean currents may have also produced a warm phase coming out of the Little Ice Age in the mid-nineteenth century.


Nonetheless, according to the Intergovernmental Panel on Climate Change (IPCC), temperature readings at ground-based measuring stations reveal an average warming trend of about 1.1°F (0.6°C) since 1850 after adjusting for the urban heat island effect. About half of this warming has occurred since 1970, which, to many scientists, is proof of an emerging greenhouse signal.



Along with higher temperatures, many scientists expect that anthropogenic global warming will mean a more active water cycle (increased evaporation and rainfall, for example) and higher sea levels (due to thermal expansion of the oceans’ water and to melting ice sheets). Some scientists argue that more extreme weather events such as tornadoes, hurricanes, dust storms, and droughts could also occur. At this time, there is not enough information to identify trends in these areas with much assurance. The available data are of poor quality, incomplete, and of limited duration.



The human contribution to these trends is uncertain because there is so much natural climate variation. Not only are there solar activity cycles, but there is also a 100,000-year ice age cycle. Currently, we are enjoying one of the cycle’s 10,000- to 30,000-year warming periods. The more we learn, the more we learn that there is more to learn.



The temperature increase thus far detected is significantly less than the computer model predictions that have been the source of much of the concern over global warming. While these models are very sophisticated and run on extremely powerful machines, they have not yet been able to accurately mirror the immensely complex greenhouse that is Earth. Clearly, there are still many unknowns that must be resolved and much more data that must be collected before the models can be trusted.


Climate scientists are working to resolve the issues and collect needed data. Critics of climate modeling point out that meteorologists cannot even accurately predict the weather more than three or four days in advance. How then, they ask, can modelers hope to predict the climate 50 or 100 years from now? The fact is that predicting climate change is not the same as predicting the weather. It is both simpler and more complex. It is simpler in that climate scientists do not have to determine specific future weather conditions. Instead, they are trying to identify trends in the world’s climate—a global “average” of local weather conditions.


There are many pitfalls, and many opportunities for human error. Scientists have a tendency to “calibrate” or adjust their models to agree with other models. Few researchers want to publish predictions that are either significantly higher or lower than the norm. Nor do agencies that fund this research want to pay for results that are outside the mainstream.


The problems inherent in attempting to model Earth’s atmospheric mechanisms make quantitative predictions suspect. Most likely, we will only know what the climate will be like 50 years from now in 50 years. That said, the fact remains that all the models and the available data do point to a warming trend.



Actual data and theoretical modeling indicate that this warming trend disproportionately affects lower temperatures and frigid regions during the coldest times of the year. Over the past 40 years, nights have warmed more than days, with minimum temperatures typically increasing twice as much as maximum temperatures. The fact that global warming is greatest in colder regions mitigates its effects somewhat. In fact, colder areas will likely benefit from a warmer climate. Areas that already have very warm climates will likely suffer disproportionately from the trend.



A number of possible methods of dealing with climate change have been proposed. These fall into three main categories: prevention, correction, and adaptation. People’s self-interest will drive them toward solutions.


The 1990s policy of favoring wilderness areas over managed forests reduced the effectiveness of national parks in serving as carbon sinks. Managed forests can support many more trees per acre than can wildernesses, and trees in wilderness areas eventually decay or burn so that their carbon is ultimately returned to the atmosphere. The choice between the carbon sink capacity of managed forests and scenic, untouched wilderness is another difficult environmental trade-off.


Another proposed government-based solution is the imposition of a carbon tax. The purpose of such a tax would be three-fold:

  1. Discourage the use of carbon-based fuels.
  2. Encourage the use of alternative energy sources (both by raising the price of conventional fuels, and by providing government with the means to subsidize alternatives).
  3. Provide funds for government energy research.


Some economists argue that a carbon tax would be more flexible and straightforward than cap-and-trade. The main objections to such taxes are that they would drive consumers away from more efficient sources of fuel and towards less efficient (though government-approved) sources. Also, a tax of any kind shifts financial resources to the government and away from a private sector better equipped to develop new energy technology.


Most scientists have concentrated on controlling carbon dioxide as the solution to global warming. However, some argue that a better approach, at least in the near term, may be to focus on more powerful, and more easily controlled, warming agents such as methane, soot, and chlorofluorocarbons. Carbon sequestration may also be a more viable method of reducing atmospheric CO2 concentrations than reducing carbon emissions.


The issue of population control often arises in discussions of global warming. By limiting the number of people, it is argued, resource consumption and pollution will be limited.


In western countries, population growth has slowed or even halted because children are a net economic burden. In developing countries, little education is needed before children become proficient with the lower levels of technology available. Children in these countries are considered cheap labor and an economic asset. Finally, child mortality rates are high in the Third World, and people tend to have additional children to ensure that at least some will survive to add to the family’s income. Attempts by the West to reduce population growth in these countries are often resented by locals as attacks on their wealth. As the Third World nations advance, however, people there will change their actions as incentives change. In fact, the world population growth rate has been declining since about 1970.


The possibility of future technology leads to an important question: Should we attempt to mitigate global warming now or wait until we understand the potential problem more clearly and have better (and as yet unimagined) technology to handle it?



Normally, the most environmentally friendly technology is also the most efficient, and people will tend to move to such technologies of their own accord. Yet this is not always the case. While it is clearly much more efficient to strip-mine coal by simply removing the overburden and digging out the coal, this technique leads to erosion that fills rivers and lakes in the area with silt and heavy metals. Government regulations, therefore, now require coal companies to scrape off and preserve the topsoil before the rest of the overburden is removed. After the coal has been extracted, the overburden is replaced and contoured. Next, the topsoil is restored and seeded to prevent erosion.


The benefit is that the land is preserved, as are lakes and rivers. At the same time, though, more resources must be expended to extract the same amount of coal. The result is an environmental trade off—one thing is given up to gain another.


Nowadays, coal ash need no longer be simply carted away and dumped. Instead it can be used in cement making. However, the ash can be used for this purpose only if its carbon content is very low; in other words, the coal must be thoroughly burned. Such complete burning requires higher temperatures, which increases power plant efficiency but also produces more nitrogen oxide emissions.


Such trade-offs can be wrenching. Many environmentalists have worked hard to save natural wetlands, yet wetlands are significant sources of methane, a greenhouse gas.



Some have suggested that, because the industrial nations produce most of the anthropogenic CO2, they have a moral obligation to help the world’s poorer countries cope with the impact of enhanced global warming. Others reject this notion because, they argue, Asian rice paddies produce massive amounts of methane, a more powerful greenhouse gas than carbon dioxide.

Also, they point out that the large populations in China and India are going to produce a tremendous amount of CO2 in the future as they become more technologically advanced and their energy needs increase. Such finger pointing is one reason that treaty negotiations are so difficult.



Understandably, global warming has become a very emotional issue for many people, and these emotions can make the job of finding the truth difficult. Further clouding the facts are the built-in biases that everyone has. Sometimes these biases depend upon personality types such as whether an individual is generally an optimist or a pessimist.


That is not to say that everyone with a stake in the issue will consciously try to hide the truth or skew the data. But people often emphasize facts that support their own positions and either ignore or minimize information to the contrary. Whether it is decided that global warming is or is not a problem, the decision must be based on good science and economics, and not emotion.



Thus far, neither the computer models nor the actual data have provided a clear and definitive picture of the trends in the Earth’s climate. Scientists are still arguing about how serious a problem enhanced global warming is and even if there is a greenhouse signal apart from natural variability. If the experts cannot agree, how are laymen to decide the truth? The fact is that we just do not know yet what the truth is, and much more research is needed before we can know.


Many ways of dealing with greenhouse gases have already been proposed. Many other methods will be developed in the future as we learn more. Money spent on ineffective solutions now cannot be spent on things that will actually make a difference. Worse, measures that harm the economy now will reduce the resources that will be available in the future when there will be a better understanding of the problem and of how to deal with it. Unfortunately, most government initiatives have zeroed in on one very expensive and very ineffective solution—compulsory CO2 emission reductions.


If anthropogenic global warming proves to be a problem, then we must keep our eyes on the goal—the reduction or reversal of the effects of such warming. To achieve this goal efficiently and effectively in the long run, we should examine a combination of measures that would (a) reduce emissions to slow down temperature change, (b) remove greenhouse gases from the atmosphere, and (c) help societies cope with the negative impacts of climate change. While establishing this goal may be a legitimate function of government, it will be counterproductive for government to dictate a one-size-fits-all solution.


Only market forces can marshal the incredible creativity needed to tackle such an undertaking. Whatever the answers, they can only come from unshackled, inventive minds and from a dynamic marketplace, free to employ resources to their best effect.


Chapter 7~Energy for the Future


Enormous material progress has been made in the past two hundred years. Much of this progress was the result of advances in energy technology made by people living in freedom. Moreover, these advances are accelerating even as the environment, at least in the West, improves. Yet today, alarmism is alive, well, and newsworthy. Scary computer-generated climate-change scenarios, rising natural gas and gasoline prices, and turmoil in the Middle East have made energy pessimism nearly as popular as it was during the American energy crises of the 1970s and the British coal panic of the 1860s.


Sometimes, these alarms are contradictory. We are told the world will soon run out of carbon-based fuels. Then we are given predictions of global warming Armageddon that rely on the assumption that the world will continue to burn more and more of these fuels for decades to come.


The statistical record of improvement over the last two centuries challenges these gloomy scenarios and the pessimism behind them. By any index—availability, affordability, reliability, cleanliness, efficiency, utility—the long-term energy trends have been positive. Problems have been faced and solved by creative people, and, in special cases (such as air pollution), incremental regulation. Our own predictions, based in part on official forecasts, reflect those trends.


There is, however, one urgent energy alarm that must be sounded. Not a hypothetical warning of something that may happen in the distant future, but a major energy sustainability problem that is happening right now. It concerns one-fourth of the world’s inhabitants. It is wretched energy poverty, the lack of electricity and fuel for heating and transportation and of all that modern energy provides: clean air and water, adequate lighting, medical facilities, education aids, communication infrastructure, labor-saving devices, and more.


This energy sustainability problem is not the result of fossil-fuel depletion or combustion. It is a child of statism and all the philosophies behind it—philosophies that justify coercive government control of economies, of people’s lives, and of the energy resources that make life possible. Major reforms are needed to eradicate—house by house, village by village—the poverty that comes from a lack of basic market institutions such as property rights, property titles, and the freedom to exchange, contract, and associate with others.



The perils of prediction are well known. There are many pitfalls waiting for those who venture to peer into a crystal ball. First, people often assume that technology will not change significantly and that new substitutes will not emerge. If they do foresee changes, they often see improvement in some favored areas while ignoring the fact that other technologies will also be improving at the same time. Alternative fuels will not compete with conventional fuels as they are but as they will become.


Some proponents of energy transformation predict that hydrogen will replace gasoline as the primary transportation fuel in the next few decades. But even if all the technical problems with using hydrogen were to be resolved, it would take years for a hydrogen infrastructure to be established and for the vehicles now on the road to be replaced.


Long-term underestimation of future technology stems from our inability to imagine the directions that even existing technology can take over long time spans. Moreover, we are unable to predict the breakthroughs in basic science that will be made. Who in 1900 could have foreseen the mapping of the human genome, nanotechnology, or superconductivity? Who today can imagine the impact these advances will have on society in fifty or a hundred years? What we do know is that these discoveries will open new horizons for new discoveries and breakthroughs.


Another problem in forecasting is the inability to differentiate between what is possible and what is practical. Keeping these pitfalls in mind, it is with great caution that we venture to present a summary of forecasts from well-known energy agencies and to make a few predictions (qualitative, not quantitative) of our own.



The short-term outlook for energy, as for other goods in a market economy, is that it will remain much as it is today. Problems will arise and will be resolved, leaving us better off than before. This is an easy prediction to make because free markets inspire solutions to problems.


Much of the cyclical nature of prices is due to imperfect knowledge. Producers do not know what future demand for their products will be, though they try to forecast demand as best they can because the rewards for getting it right can be high, and the penalties for getting it wrong can be severe. This corrective process never ends. The market is not perfect and problems occur, but problems are also a key driver of progress. We learn by trial and error, and progress happens in fits and starts, but it does happen.



The forecasting arm of the U.S. Department of Energy, the Energy Information Administration (EIA), has estimated American energy supply and demand through the year 2025. Based on projected economic growth of 3 percent a year, the EIA sees energy demand increasing by 1.5 percent a year, indicating an increase in energy efficiency, or (stated another way) a decrease in energy intensity (energy used per dollar of output), of about 1.5 percent per annum.
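The relationship between the EIA's two growth figures and the implied intensity decline is simple compound arithmetic, sketched here with the projections quoted above:

```python
# Energy intensity is energy used per dollar of economic output, so its
# annual rate of change follows from the two growth projections:
gdp_growth = 0.030      # projected real economic growth per year
demand_growth = 0.015   # projected energy demand growth per year

# Intensity ratio after one year = (1 + demand growth) / (1 + GDP growth)
intensity_change = (1 + demand_growth) / (1 + gdp_growth) - 1
print(f"Energy intensity changes by {intensity_change:.1%} per year")  # -1.5%
```

Demand growing half as fast as output means each dollar of output takes about 1.5 percent less energy every year.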


Use of all three carbon-based fuels is forecast to increase significantly during the forecast period. The EIA estimates that non-hydropower renewables (ethanol, geothermal, biomass, solar, and wind) will provide less than seven percent of America’s total energy by the year 2025. However, because alternative energies largely depend upon government subsidies, this figure will shift if government support changes one way or another.



The EIA has also made a forecast of world energy supply and demand through 2025. It sees total demand rising by almost 60 percent—a growth rate of nearly 2 percent per year. These projections assume an annual decline in energy intensity of 1.1 percent. World energy demand is expected to grow faster than U.S. demand as the developing world strives to catch up with the West.
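The "almost 60 percent" figure is just the stated growth rate compounded over the forecast horizon (a sketch; the base year of roughly 2001 is our assumption for a 2004-era forecast):

```python
# Sanity check on the compound growth behind the world-demand figure.
rate = 0.02   # "nearly 2 percent" annual growth
years = 24    # assumed horizon: roughly 2001 through 2025

total_growth = (1 + rate) ** years - 1
print(f"{rate:.0%}/yr over {years} years compounds to +{total_growth:.0%}")
```

At exactly 2 percent this compounds to around 61 percent; at a rate just under 2 percent it lands in the "almost 60 percent" neighborhood the forecast cites.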


The combined market share of carbon fuels will increase to 88 percent from 85 percent in the forecast period. More than 90 percent of the increase in total energy demand is expected to be met by oil, gas, or coal.



Oil, natural gas, and coal will remain abundant throughout the twenty-first century at prices competitive with other energy alternatives. This does not mean that energy sustainability issues have been put to rest.


However, so far no alternatives can compete with the convenience, portability, efficiency, or cost of oil, gas, and coal. In addition, carbon-based fuels will continue to be made cleaner and more efficient. The renewable energy era has already come and gone. Even though alternative energy sources like biomass, alcohol, and solar and wind power are being touted as energy sources for the future, they are actually throwbacks to the past. Eventually, however, fossil fuels will be replaced with other primary energy sources. A new form of energy will ease out hydrocarbons when consumers judge it a better product at a better price.


While the prices of individual fuels may rise, there is little reason to believe that energy per se will grow less abundant and more costly. The lesson of history is that in free societies individuals produce more energy than they consume.



According to the International Energy Agency, “some 2.4 billion people rely on traditional biomass—wood, agricultural residues and dung—for cooking and heating.” And typically this biomass is burned very inefficiently with much of the heat lost. People in developing nations often have no incentive to use such fuel efficiently because it costs them nothing more than the effort to gather it.


Yet, there are unseen costs to using these so-called free fuels. Deforestation is a serious problem in many parts of the world. Because no one owns the forests and jungles, no one has an incentive to conserve them. In addition, burning primitive biomass produces smoke and fumes that can cause serious health problems. There are opportunity costs as well. Using dung for fertilizer instead of fuel would increase agricultural yields and help free more people from the hardships of subsistence living.



Historically, market pressures have driven producers towards increased efficiency. The amount of energy used per dollar of economic output has dropped steadily in the United States. This trend is expected to continue.


For more than a hundred years, natural resources of every kind, including fuel, have become more affordable. Often the real prices have dropped, but always the amount of labor required to purchase a resource has declined. While these decreases have been marked by short-term fluctuations, the overall trend has been steady, downward, and driven by technology. We see this trend continuing indefinitely in market-driven economies around the world. Because of rising levels of carbon dioxide in our atmosphere, agricultural crop yields will rise significantly in the coming decades.



The environment will become cleaner as efficiency improves and wealth grows. This trend will be most apparent in the Third World, given its current, significant environmental problems. Great progress is being made on most energy-environment fronts, and alarms about climate change from fossil-fuel combustion may be wrong or at least premature.



The Earth’s pool of proven resources will continue to expand. People will keep exploring for those materials that have already been found to be useful. But the resource pool will also grow as uses are discovered for things that were never before thought of as resources. They are limited only by the boundaries of our minds and by the physical universe.



Research and development will continue to be driven largely by industry’s need to produce marketable products at low cost. The U.S. Department of Energy will spend many billions of dollars on research and development, but will have little to show for it. One of the main problems will be the inability of the DOE to back away from technologies that show little or no promise but have strong political support. Most progress will be made in the private sector.



The trend toward globalization is as evident with energy as it has been with other goods and services. Many worry about these trends toward what is often termed “energy dependence.” A better term is energy interdependence, describing a situation in which buyers are dependent on sellers for fuel and sellers on buyers for revenue. The same is true for any other commodity that is traded between individuals. Exchange occurs only when it is in the interests of both parties. With globalization, political boundaries become secondary to economic self-interest. If free trade is allowed to continue its expansion, conflicts between nations will diminish.



We cannot hope to imagine what technology will be like 99 years from now, but we will hazard two predictions. The first: if a person could somehow be transported into the next century, he would think the world populated by wizards. The second: the middle class of that century will live longer, healthier lives than even the richest people living today.


As these trends continue, the number of virtual servants working for each person will grow, and people around the world will be commensurately better off. It is hard to overstate the significance of this trend. It means not just more creature comforts but a fundamental change in the human condition.



While such a future is within our power to build, it is by no means assured. History has shown that while freedom and creativity produce security and abundance, the lack of freedom produces waste, poverty, and misery.


We face a number of threats to the freedom needed to ensure an abundant future:

  1. The NIMBY (Not In My Back Yard) Syndrome
  2. Government controls
  3. Economic isolation
  4. Demand for a controlled, zero-risk society
  5. The Bleak House Effect
  6. Envy
  7. Pessimism and despair


In the end, people’s powerful and natural desire to leave their children with a better and more prosperous world will work to defeat any such threats to freedom. The remarkable history of the ultimate resource, human ingenuity, is a strong argument for optimism. Our future will be as great as our freedom, knowledge, resolve, and energy allow it to be.


About the Authors:

Robert L. Bradley, Jr. is one of the nation’s leading experts on the history and regulation of energy and related sustainable development issues. He is president of the Institute for Energy Research in Houston; visiting fellow of the Institute of Economic Affairs in London; an adjunct scholar of the Cato Institute; and a member of the academic review committee of the Institute for Humane Studies at George Mason University. He is a former Senior Research Fellow at the University of Houston and the University of Texas at Austin. His books are: The Mirage of Oil Protection (1989), Oil, Gas, and Government: The U.S. Experience (2 volumes: 1996), Julian Simon and the Triumph of Energy Sustainability (2000), and Climate Alarmism Reconsidered (2003). His most recent book is Energy: The Master Resource (with Richard Fulmer). Bradley is currently working on his sixth book, Political Capitalism: Insull, Enron & Beyond.

Bradley’s "Renewable Energy: Not Cheap, Not 'Green'" (Cato Institute, 1997) is considered a classic in the energy sustainability debate. Other writings in this area include “Green Pricing” (Macmillan Encyclopedia of Energy, 2002) and “Climate Alarmism and Corporate Responsibility” (Electricity Journal, August/September 2002). Bradley received the Julian L. Simon Memorial Award for 2002 for his pioneering work on energy as the master resource.

Bradley’s book-published essays include “A Typology of Interventionist Dynamics,” in Jack High, ed., Economics, Philosophy, and Information Technology: Essays in Honor of Don Lavoie (Edward Elgar, 2006); “Interventionist Dynamics in the U.S. Energy Industry,” Advances in Austrian Economics (JAI Press, 2006); “An Open Letter to George W. Bush on Climate Change Policy,” in James Griffin, ed., Global Climate Change: Science, Economics, and Politics (Edward Elgar, 2003); “The Origins and Development of Electric Power Regulation,” in Peter Grossman and Daniel Cole, eds., The End of a Natural Monopoly: Deregulation & Competition in the Electric Power Industry (JAI Press, 2003); “Energy for Sustainable Development,” in Julian Morris, ed., Sustainable Development: Promoting Progress or Perpetuating Poverty? (Progress Books, 2002); “The Increasing Sustainability of Conventional Energy,” in John Moroney, ed., Advances in the Economics of Energy and Natural Resources (JAI Press, 1999); and “The Distortions and Dynamics of Gas Regulation,” in Jerry Ellig and Joe Kalt, eds., New Horizons in Natural Gas Deregulation (Praeger, 1996).

Bradley has presented professional testimony on energy issues to the California Energy Commission and United States Senate; his opinion-page editorials on energy policy have appeared in the New York Times and many other newspapers across the country; his energy views have been aired on National Public Radio, Voice of America, CBS Radio Network, and Armed Forces Radio, as well as local programs.

Bradley received his B.A. in economics (with honors) from Rollins College, where he received the S. Truman Olin Award in economics; a master’s in economics from the University of Houston; and a Ph.D. in political economy (with distinction) from International College. Bradley has also been a Schultz Fellow for Economic Research (New York City) and a Liberty Fund Fellow for Economic Research (Menlo Park, California).

Bradley is a member of the International Association for Energy Economics, the American Economic Association, and the American Historical Association.


Richard W. Fulmer received a bachelor’s degree in Mechanical Engineering from New Mexico State University in 1978. For the last 17 years he has worked as a systems analyst in the energy industry.


Mr. Fulmer is co-author (with Robert L. Bradley Jr.) of Energy: The Master Resource. He also contributed to the textbook Mastering Cobol (Sybex, 2000) and has written a number of articles for The Freeman, a magazine published by the Foundation for Economic Education. His novel, Deadly Care, was published in 1994.
