Chapter 3: Plenty of Resources

 


Aiming for Global Affluence

Being able to eat all that you want is, of course, only part of achieving the basic level of prosperity enjoyed by most of the one billion people living in developed countries. They also generally have well-built and comfortable accommodation; ready access to infrastructure such as sewerage, electricity, communications, transport and hospitals; domestic labor-saving appliances; and an abundance of cheap food and clothing. They can also afford a regular night out and an occasional holiday. These relatively fortunate people can be found in western Europe, the US, Canada, Japan and various outposts such as Australia, New Zealand, Hong Kong, Singapore and Taiwan. At the moment, Portugal could be considered the cut-off point with an annual GDP per capita of $18,000.[274] In the following discussion this will be taken as the minimum target that other countries need to achieve.

Once you move outside the top group, the level of economic development and living standards drops away quite quickly. The average income[275] for the middle group of countries between Portugal and China is only half the rich group minimum. It is home to 1.16 billion people, of whom fewer than 220 million live in countries with average incomes over $10,000. We also find here a lot of very poor people because of wide income disparities; Brazil, Peru, Mexico and Colombia provide good examples. The same can be said for China, where the bottom 20 per cent of its 1.3 billion people are much poorer than its average income of $5,600 suggests.[276]

Between China and India in terms of average income is a group mainly comprising the Philippines, Egypt and Indonesia, with a population of 550 million. Their average income is just over one fifth of the rich country minimum. Then we have the bottom group, topped by India. It includes Sub-Saharan Africa, Pakistan, Bangladesh and Burma. Here we find 2.45 billion people, or 38 per cent of the world’s total. India, with a population of 1.1 billion, has an average income of $3,072, just over one sixth of the rich group minimum, while 1 billion people live in countries with an average income of less than $2,000. Table 3.1 lists selected countries outside the rich list in descending order of GDP per capita. It shows how many times this has to increase to reach our minimum target of $18,000.

Table 3.1: Factor required to increase GDP per capita to $18,000, selected countries (2004 data, purchasing power parity)

[insert]

Source: CIA Fact Book online, accessed January 2006

a. For example, the Czech Republic’s GDP per capita needs to be 1.07 times its present level.

If, as this chapter will argue, resources place no limit on affluence, the poorer countries can be expected to make up a lot of ground in the course of this century. There is no crystal ball. However, given recent performance and the kinds of growth rates required by different countries to reach $18,000 per head, it is possible to make some broad-brush predictions and be fairly sanguine about the possibility of a large proportion of the developing world achieving a level of affluence this century and the worst-off ones early in the 22nd century. A lot will be achieved even if growth is less than stunning and there are periods of war, revolution or economic stagnation.

We will get off to a fairly good start if World Bank predictions for the next 10 years prove correct. The Bank expects GDP per capita growth for developing countries to average 3.5 per cent per annum over that period.[277] This would be similar to the performance of the last five years[278] and provide a 41 per cent increase by 2015.

Countries that only need a doubling in per capita income to reach our minimum target would reach it by mid-century with an average annual growth rate of 1.5 per cent, while a tripling in the same time would require 2.3 per cent. These growth rates are not hard to imagine and, if achieved, would bring into the affluent camp Latin America, eastern Europe, the former Soviet Union and about half the population of the Middle East and North Africa. The expected annual growth rates in per capita income for these regions over the next decade should put them on track if achieved: for the former Soviet Union and eastern Europe it is 3.5 per cent, for the Middle East and North Africa 2.6 per cent and for Latin America 2.3 per cent.[279]

China needs a three and a quarter fold increase. If, as expected, it continues at the 6 per cent per annum it has been averaging over the last 25 years, it will be halfway to $18,000 by 2015.[280] A further 10 years at the same rate would take it to the target (2025). Alternatively, a further 20 years at 3 per cent would achieve the same result (2035). If things do not go so well, it may be mid-century.

India’s GDP per capita has been growing at around 4 per cent a year over the last decade.[281] If that rate continues for another ten years, GDP per capita will increase by about 50 per cent, putting the country roughly a quarter of the way, in proportional terms, to the target. If it were to keep growing at that rate, it would reach the target by 2050. Otherwise, any average growth rate greater than 1.65 per cent will achieve the desired result before 2100.
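For readers who want to check this sort of arithmetic, the sketch below shows the compound-growth calculation that underlies the catch-up estimates in this section. It is a minimal illustration only; the $9,000 and $4,500 starting incomes are assumed round figures, not data from the table.

```python
import math

def years_to_target(income, target, growth_rate):
    """Years of compound growth needed for `income` to reach `target`."""
    return math.log(target / income) / math.log(1 + growth_rate)

TARGET = 18_000  # rich-group minimum GDP per capita (PPP)

# A country needing a doubling reaches the target around mid-century at 1.5%:
print(years_to_target(9_000, TARGET, 0.015))    # ~47 years

# China from $5,600 growing at 6% a year:
print(years_to_target(5_600, TARGET, 0.06))     # ~20 years, i.e. around 2025

# India from an assumed ~$4,500 in 2015 needs only about 1.65% to get there by 2100:
print(years_to_target(4_500, TARGET, 0.0165))   # ~85 years
```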

Countries such as Bangladesh and Pakistan, with a GDP per capita of around $2,000, need to achieve good growth performances to reach $18,000 by 2100. However, they do not need to come near the record pace of Japan, South Korea and Taiwan, which grew about 18-fold during the last century (around 3 per cent per year). Almost matching Spain, Finland or Italy, which grew 10- to 12-fold, would be sufficient.

However, most of Sub-Saharan Africa would require a record performance to meet the same deadline. As discussed in the final chapter, present political conditions are totally unconducive to economic development, so expecting such a level of success does seem, at least from the present vantage point, excessively sanguine. However, the region would have to be fairly unlucky not to have made some significant inroads into the political obstacles by mid-century. Furthermore, increasing per capita income levels will be helped by the slowdown in the growth, and then the stabilization, of population during the course of the century.

What about the poor countries eventually catching up with the rich ones? This will require outstripping their growth rates for an extended period. There are a number of reasons why this is likely to occur:

· at an early stage of industrialization where current production methods are relatively backward, moderately small investments in improvements can make a proportionately large difference;

· at this stage, a lot of people are just learning to do their job and will make considerable improvements in their efficiency over the short to medium term;

· being followers rather than leaders, poor countries can adopt technologies that have already proven successful. They do not have to worry about the technologies that did not make the grade or go through the initial teething problems. The adoption of US technology by Japan and western Europe after World War II is a prime example of this; and

· there is the opportunity for technology leapfrogging where the newcomer goes straight to a cheaper technology. For example, mobile phones in India and Sub-Saharan Africa can provide people with telecommunications with far less investment than land lines.

********************

The following examination of the viability of widespread affluence in the 21st century and beyond looks at the extent of energy and raw material resources, and at our ability to limit the impact of industrialization on “life support” resources such as air, water, weather and natural habitat. Energy receives the most attention because of the range of technologies and resources involved while raw materials receive the least because it is mainly a matter of detailing their vast abundance and the considerable scope for substitution between them.

Our Energy Needs

In 2004 we produced 11,223 million tonnes oil equivalent (mtoe) or 470 exajoules (EJ) of commercial primary energy.[282] Just on 45 per cent was consumed by the 15 per cent (925 million) living in the rich countries, giving them an average per capita level five times that of the remaining 5.4 billion on the planet.[283]

We will need to increase this output considerably over the course of the century as poor countries develop and narrow the gap with the rich ones. According to the mid-range projection by the US Energy Information Administration (EIA), world energy consumption will grow at an average rate of 2 per cent over the period 2003 to 2030.[284] This is slightly lower than the average annual increase of 2.2 per cent from 1970 to 2002.[285] If this rate were maintained throughout the century, annual energy consumption would increase more than 6.5-fold to 3,146 EJ (77,115 mtoe) by 2100. Depending on whether the population is closer to 9 or 10 billion, this would provide a global per capita energy consumption level a bit below or a bit above the present US level.[286] Around 140,000 EJ would be consumed during the course of the century. Mid-century annual consumption would be around 1,170 EJ, while 38,000 EJ would be consumed between now and then.
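These projections are straightforward compound-growth arithmetic. The sketch below reproduces them under the stated assumptions (470 EJ in 2004, 2 per cent annual growth); the small differences from the figures quoted above come down to rounding and the choice of base year.

```python
BASE_YEAR, BASE_EJ, GROWTH = 2004, 470.0, 0.02

def annual_consumption(year):
    """Primary energy consumption (EJ) in a given year under steady 2% growth."""
    return BASE_EJ * (1 + GROWTH) ** (year - BASE_YEAR)

def cumulative_consumption(to_year):
    """Total EJ consumed from the year after the base year up to `to_year`."""
    return sum(annual_consumption(y) for y in range(BASE_YEAR + 1, to_year + 1))

print(annual_consumption(2100))      # ~3,150 EJ a year, a 6.7-fold increase
print(annual_consumption(2050))      # ~1,170 EJ a year at mid-century
print(cumulative_consumption(2050))  # ~36,000 EJ consumed by mid-century
print(cumulative_consumption(2100))  # ~137,000 EJ consumed over the century
```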

Even lower growth rates would achieve significant results by the end of the century. A rate of 1.7 per cent would increase energy output 5-fold and give a world of 10 billion people the current rich country per capita average of 5.5 toe. A rate of 1.5 per cent would give a 4-fold increase and a per capita average of 4.5 toe.

While the rich countries will continue to increase their energy consumption, it will be at a significantly slower rate than the poor ones. This is because of a static population for the group as a whole, slower economic growth rates at the technology frontier and a higher, less energy-intensive level of development. In line with the recent past, the EIA projects a 1 per cent annual growth rate for them compared with 3 per cent for the poor countries over the next quarter century.[287]

If rich countries were to continue increasing energy consumption by 1 per cent per year with a static population, while overall energy consumption grew annually by 2 per cent and the population of the poor countries increased by 60 per cent, then by the end of the century rich country per capita consumption would increase from 5.5 to 14 toe and poor country per capita consumption from 1.1 to 6.8 toe. This would bring the poor countries as a whole almost up to present US per capita consumption levels and shrink the disparity between rich and poor countries from five to one down to two to one.
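A rough check of that split, under the assumptions just stated (rich-country consumption growing at 1 per cent a year with a static population of about 0.9 billion, total consumption growing at 2 per cent, and the poor-country population rising 60 per cent). The slightly different per capita figures in the text reflect rounding and the population assumed.

```python
TOTAL_2004_MTOE = 11_223      # total primary energy in 2004
RICH_SHARE_2004 = 0.45        # share consumed by the rich countries
RICH_POP = 0.925e9            # rich-country population, assumed static
POOR_POP_2100 = 5.4e9 * 1.6   # poor-country population up 60 per cent

total_2100 = TOTAL_2004_MTOE * 1.02 ** 96                    # ~75,000 mtoe
rich_2100 = TOTAL_2004_MTOE * RICH_SHARE_2004 * 1.01 ** 96   # rich consumption at 1% growth
poor_2100 = total_2100 - rich_2100

print(rich_2100 * 1e6 / RICH_POP)       # ~14 toe per head in the rich countries
print(poor_2100 * 1e6 / POOR_POP_2100)  # ~7 toe per head in the poor countries
```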

The task now is to assess our ability to meet these energy consumption levels. We need to know how long we can continue to rely heavily on fossil fuels and to what extent global warming places a serious limit on their use. Then we need to know whether other resources are extensive enough to eventually fill the breach and whether we will have the technology to exploit them. In the case of nuclear power, some time also needs to be spent allaying concerns about radiation hazards, which are proving to be an obstacle to a rational consideration of this technology.

Fossil Fuels

Around 80 per cent of the energy that we use at the moment comes from fossil fuels – oil, coal and gas.[288] Below we examine each of these fuels in turn and look at how long they can be expected to last given different assumptions about their rate of use. We then conclude with an overall assessment of the fossil resource.

Oil

Oil meets around 35 per cent of our primary energy needs.[289] It is critical to the transport sector where it provides around 95 per cent of what is required. The most recent attempt to quantify the resource base for conventional oil was the World Petroleum Assessment 2000 undertaken by the United States Geological Survey (USGS).[290] They provided a figure of 959 billion barrels for proven reserves. This is the amount of oil that could be produced profitably at current prices if there were no further discoveries or advances in extraction technology.

To this figure they add a range of values for additional resources which are classified into expected reserve growth and undiscovered resources. Reserve growth (also called field growth) is the expected growth in reserves over the next quarter century through better definition over time of what is in known fields and the development of better recovery methods, while the figure for undiscovered resources is an estimate, based on geological knowledge, of further oil that will be found by 2030.

Estimates for these additional resources range from 776 billion to almost 2.8 trillion barrels. The USGS estimates that there is a 95 per cent chance that the real quantity is at least the low value, a 5 per cent chance that it is at least the high value and a 50 per cent chance it is at least 1.6 trillion barrels. These are based on subjective probabilities assigned by people with expert knowledge of the different oil deposits.

So in sum, they are saying that we can be very sure of a total resource, including reserves, of 1.74 trillion barrels (0.959 plus 0.776) but that there is a reasonable chance of it being considerably more. The USGS usually cites a mean value for the additional resources of 1.7 trillion. This gives a total resource of almost 2.7 trillion barrels (0.959 plus 1.7) or 16,522 EJ. That would last 90 years with static consumption levels[291] and until 2056 if consumption were to grow annually by 2 per cent.
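The “years remaining” figures used here and throughout the rest of this chapter come from a simple depletion calculation: consumption growing at a constant rate is summed until it exhausts the resource. A sketch, assuming current oil consumption of roughly 30 billion barrels a year (an assumed figure consistent with the 90-year static estimate):

```python
import math

def years_until_depleted(resource, annual_use, growth):
    """Years before cumulative consumption, growing at `growth`, exhausts `resource`."""
    if growth == 0:
        return resource / annual_use
    # Solve annual_use * ((1 + g)^n - 1) / g = resource for n.
    return math.log(1 + resource * growth / annual_use) / math.log(1 + growth)

OIL_RESOURCE = 2_700e9   # barrels: USGS mean estimate including proven reserves
OIL_USE = 30e9           # barrels a year, assumed current consumption

print(years_until_depleted(OIL_RESOURCE, OIL_USE, 0.0))   # ~90 years at static use
print(years_until_depleted(OIL_RESOURCE, OIL_USE, 0.02))  # ~52 years, i.e. the mid-2050s
```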

A more pessimistic school of thought claims that the ultimate remaining resource is only around one trillion barrels.[292] They argue that the reserve figures of OPEC countries have been exaggerated for political reasons, and they downplay the scope for new technology to squeeze extra oil from an increasingly depleted resource. Consequently, they see depletion and increasing costs of extraction occurring in the next decade or so. At the same time they see problems with alternative resources, including non-conventional oil, filling the breach because of high costs and environmental damage.

Then we have the optimists, who consider the USGS estimates too conservative.[293] They argue that technological advances will increase maximum recovery rates from their current levels of around 50 per cent, and improve the ability to exploit resources in difficult geological formations, to drill further into the earth and in deeper oceans, and to detect new deposits. For them, the introduction of the more challenging non-conventional resources (see below) and alternatives to oil can be a more leisurely affair, allowing plenty of time for ironing out problems and reducing costs.

The existence of the resource is not the only consideration. There is also the matter of ensuring that it is made available by investing in sufficient extraction and processing capacity. The generally accepted view at the moment is that the price levels we have been experiencing in recent years, and those expected in the future, should be sufficient to induce a considerable increase in investment, including by OPEC countries. A 2 per cent annual increase would mean that in 2025 we would need to be producing 50 per cent more than we were in 2005 and in 2050, 2.4 times as much.

So far we have been discussing what is generally referred to as ‘conventional’ oil. This is more or less oil that flows from oil wells.[294] ‘Non-conventional’ oil on the other hand involves more costly extraction techniques and comprises bitumen from oil sands, kerogen from oil shale and extra heavy oil. Despite the higher costs, some of these resources are commercial at current and expected oil prices and more will become so as the required technologies mature.

Oil Sands Oil sands, or tar sands, are grains of sand or porous rocks mixed with bitumen, a thick, sticky form of oil which at room temperature is much like cold molasses. As a result it does not flow from the ground like conventional oil. Other means are required to extract it, and it then has to be further processed to create a synthetic crude oil. This processing includes the addition of hydrogen, something in which bitumen is deficient.

The vast bulk of the resource is located in Alberta, Canada, where production has grown significantly over recent years as a result of higher oil prices and declining costs. Output is now around one million barrels a day and is expected to increase significantly in the next decade.

Most of the present exploitation is confined to oil sand close to the surface, which is extracted using open cut methods. Giant shovels load up equally huge trucks which cart the oil sands ore to a crusher. Here the sands are pulverized and the bitumen separated out by various processes employing water, steam and solvents.

However, most of the reserves and resource as a whole are too deep in the ground for surface mining, so there will have to be an increasing reliance on in situ methods. These reduce the viscosity of the bitumen while it is still in the ground so that it can flow sufficiently to be pumped to the surface. Currently about a third of extraction is done this way[295] and it is bound to have an increasing role if the resource is to be extensively exploited.

A range of in situ methods are being developed.[296] The most commonly used method at the moment relies on steam injection to soften up the bitumen. The injection of solvent is also used and a hybrid combination of solvent and steam is being trialed. Two other methods under development are fireflooding and electrovolatization. Fireflooding heats the bitumen by burning some of it. Injected air feeds a fire front that softens the bitumen up ahead. Electrovolatization heats the oil with an electric current. Also being mooted is the use of microbes that would reduce the viscosity of the bitumen.

According to estimates published by the Alberta Energy and Utilities Board, the initial volume-in-place, based on currently available data, is 1.6 trillion barrels while the ultimate volume in place, a value representing the volume expected to be found by the time all exploratory and development activity has ceased, is 2.5 trillion barrels (15,300 EJ).[297] About 300 billion barrels have been identified as recoverable reserves with current technologies and processes.[298]

The USGS provides an estimated technically recoverable resource of 531 billion barrels for Alberta and 120 billion barrels for the rest of the world, giving a total of 651 billion.[299] This broader concept presumably takes into account the less well explored resources and higher oil prices.

Before these figures are compared with those for conventional oil, one needs to take into account the fact that both the extraction of the bitumen and its conversion to synthetic crude are very energy intensive. For every barrel of synthetic crude produced, somewhere near the energy equivalent of one third of a barrel is consumed.[300] So the reserve of 300 billion barrels needs to be adjusted down to 200 billion barrels (1,224 EJ).

Shale Oil Oil shale is a sedimentary rock containing kerogen, a waxy organic substance that originated from the remains of algae and other living matter. The kerogen can be converted to oil through a process called retorting, which involves heating the shale in the absence of air to temperatures of 500 degrees C or more. The shale can be mined much like coal and then processed at the surface, or retorting can be performed in situ much as with oil sands. As with bitumen from oil sands, hydrogen has to be added to create an acceptable crude oil.

In many regions there has been no real effort to delineate the resource, for lack of commercial interest. In the US there was a lot of interest during the 1970s, when oil prices were expected to remain high, so there is some knowledge of that country’s resource. Its resource base is estimated to be about 2 trillion barrels.[301] Globally, the resource base is conservatively estimated to be 2.6 trillion barrels.[302]

However, potential world oil shale reserves are put at a mere 160 billion barrels.[303] With such a large resource it is hard to imagine reserves remaining at such a low level. If shale oil is anything like conventional oil, new deposits will be progressively added as the industry develops. Simply assuming the same ratio of reserves to ultimate resource in place as Alberta’s tar sands gives a reserve of 312 billion barrels. Whatever figure we use, however, there has to be a similar adjustment for energy consumption in extraction and processing to the one we made for oil sands.[304] For the smaller figure we are looking at 107 billion barrels (655 EJ), for the larger 208 billion barrels (1,273 EJ).

Heavy Oil Heavy oil is oil with a high density and viscosity which often requires the injection of super-heated steam into the reservoir to reduce viscosity and increase reservoir pressure. As with bitumen it needs to be upgraded to achieve a standard crude oil. Most of the oil is in Venezuela where the resource is currently assessed at over 1.2 trillion barrels (7343.5 EJ) and reserves at about 270 billion barrels.[305] Assuming one third energy loss in production, that is equivalent to 180 billion barrels (1101 EJ) of conventional oil.

So, for non-conventional oil as a whole, there are between 3,000 and 3,600 EJ of energy which could be recoverable in the near future. This would increase recoverable oil resources by 20 per cent to around 20,000 EJ (3.27 trillion barrels) if we accept the USGS estimate for conventional oil. With a 2 per cent consumption growth rate, that would push oil availability out by about 7 years to 2063.
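Summing the adjusted figures gives the range just quoted. A brief check, using the energy values estimated above (all already discounted by the one-third production loss):

```python
# Adjusted recoverable non-conventional oil, in EJ (figures from the text)
oil_sands = 1_224           # ~200 billion barrels after the energy adjustment
shale_oil = (655, 1_273)    # low and high reserve assumptions
heavy_oil = 1_101           # Venezuelan extra heavy oil

low = oil_sands + shale_oil[0] + heavy_oil    # ~3,000 EJ
high = oil_sands + shale_oil[1] + heavy_oil   # ~3,600 EJ

CONVENTIONAL = 16_522                          # EJ, USGS mean estimate
print(low, high)                               # roughly 3,000 and 3,600 EJ
print((CONVENTIONAL + high) / CONVENTIONAL)    # about a 20 per cent increase
```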

Converting gas and coal to liquid fuel Another option is to produce synthetic oil from gas and coal. In the past such fuel was only produced when access to far cheaper crude oil was denied, Nazi Germany and apartheid South Africa being the cases in point. Both produced it using their ample supplies of coal. Now, with higher oil prices and improved technology, there is renewed interest. Plans are presently afoot to establish a gas conversion plant in Qatar, whose offshore gas field is home to one tenth of the world’s proven gas reserves; China expects to have a coal liquefaction plant operating in Inner Mongolia by 2007; and the South African company Sasol has been having exploratory talks in the US[306] and India.[307] To meet liquid fuel requirements to 2100 with our 2 per cent growth assumption would require over 30,000 EJ (4.9 trillion barrels) from coal or gas. However, as with the other non-conventional forms of oil, there are significant energy losses in production which have to be taken into account.

 

Coal

Coal currently supplies a quarter of primary energy, with most of it used for electricity production, where its contribution is 40 per cent.[308] Proven recoverable reserves are about one trillion tonnes,[309] which have an energy value of around 21,000 EJ.[310] These are fairly accurately measured resources that would be economical at present prices and accessible with current technologies. At current usage rates of around 5 billion tonnes a year these would last for 200 years.[311] Assuming coal keeps its share and energy consumption increases by 2 per cent per year, present reserves would last 80 years. If coal were to take on a bigger role tomorrow and grow at, say, 3 per cent per year, they would last 65 years.
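The same depletion arithmetic as for oil applies. A sketch, assuming current coal consumption of about 105 EJ a year (5 billion tonnes at an assumed average of roughly 21 GJ per tonne):

```python
import math

def years_until_depleted(resource, annual_use, growth):
    """Years before cumulative consumption, growing at `growth`, exhausts `resource`."""
    if growth == 0:
        return resource / annual_use
    return math.log(1 + resource * growth / annual_use) / math.log(1 + growth)

COAL_RESERVES = 21_000   # EJ
COAL_USE = 105           # EJ a year (roughly 5 billion tonnes)

print(years_until_depleted(COAL_RESERVES, COAL_USE, 0.0))   # ~200 years at current usage
print(years_until_depleted(COAL_RESERVES, COAL_USE, 0.02))  # ~80 years at 2% growth
print(years_until_depleted(COAL_RESERVES, COAL_USE, 0.03))  # ~65 years at 3% growth
```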

The potential resource is much larger than these reserves. To begin with, there are the coal deposits that have not been explored or assessed because there is no demand for them, or that are too costly to mine at current market prices. Then there are those that could be exploited with improvements in mining technology. Some deposits are presently too difficult to get at, for example because the seam is too thin. New tunneling methods or in situ gasification may remove these kinds of obstacles. Recovery rates in mines could also be increased. At the moment, many mines are operated using the traditional room-and-pillar method, which leaves about half the coal in place. The longwall method, which recovers about 90 per cent of the coal, could be made more widely applicable, or totally new methods devised. Furthermore, in less developed economies higher recovery rates would be achieved by the adoption of more sophisticated and capital-intensive methods; wider application of surface mining would be an example. The total resource has been estimated to be 6.11 trillion tonnes (179,000 EJ).[312] Assuming 2 per cent annual energy growth and coal retaining its present share by growing at the same rate, the resource would last almost 160 years. At 3 per cent growth it would last 120 years.

Natural Gas

Natural gas is predominantly methane and supplies 21 per cent of our primary energy.[313] Proved conventional reserves are currently around 180 trillion cubic meters (tcm). At a conversion rate of 37 EJ per tcm, that is an energy value of around 6,500 EJ, quite close to the energy value of current oil reserves.[314] These would last until 2070 at the 2003 usage rate of 2,618.5 billion cubic meters (bcm). If usage grew at the rate we assume for energy as a whole, i.e. 2 per cent, they would last until 2045. If gas increases its share of fossil energy as expected, say averaging an annual growth rate of 2.5 per cent, they would last until 2042.

There are two main estimates of the total resource. The USGS, using the same system of classification as for oil, has a mid-range estimate of 386.5 tcm (14,300.5 EJ), while the figure from Cedigaz is 490 tcm (18,130 EJ).[315] The difference is due to the USGS’s adoption of a 30-year forecast period instead of the unlimited forecast span used by Cedigaz. Assuming the more conservative USGS figure, the resource would last until 2150 at current production levels, until 2071 with 2 per cent growth and until 2064 with 2.5 per cent growth.
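Converting the USGS mid-range figure into years at the different growth rates is again the same depletion calculation; a compact sketch using the 2003 usage rate:

```python
import math

GAS_RESOURCE = 386.5   # tcm, USGS mid-range estimate for conventional gas
GAS_USE_2003 = 2.6185  # tcm a year (2,618.5 bcm)

for growth in (0.0, 0.02, 0.025):
    if growth == 0:
        years = GAS_RESOURCE / GAS_USE_2003
    else:
        years = math.log(1 + GAS_RESOURCE * growth / GAS_USE_2003) / math.log(1 + growth)
    print(round(2003 + years))  # ~2150 at flat usage, early 2070s at 2%, mid-2060s at 2.5%
```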

Non-Conventional Gas Resources

Coal-bed methane Coal-bed methane is methane within coal, created either chemically as heat and pressure are applied to coal in a sedimentary basin or by bacteria that obtain nutrition from coal and produce methane as a by-product. Because of its large internal surface area, coal can hold six or seven times as much gas as a conventional natural gas reservoir of equal rock volume.[316]

Coal-bed methane production grew dramatically in the US during the 1990s. In 2000 it was reported to be 7.5 per cent of US gas production, although a somewhat smaller share of consumption, given the significant gas imports from Canada.[317] Recently production has also started to take off in a number of other countries.

To extract the methane, some of the water which permeates the coal bed must be pumped out to release the pressure keeping the gas trapped within the coal. Once the gas starts to flow, it does so at a much slower rate than from conventional wells.

On the basis of fairly limited data the USGS estimates that the world resource in place could be up to 210 tcm (7800 EJ).[318] For the conterminous United States, they estimate that the resource could be more than 20 tcm, with about 2.8 tcm recoverable with current technology. If we assume a similar ratio applies to the world as a whole, the recoverable resource would be 30 tcm or 1,110 EJ.
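The scaling here is simply the US recoverable fraction applied to the world in-place estimate. One hedged way of writing it out:

```python
US_IN_PLACE = 20.0        # tcm, coal-bed methane in place, conterminous US
US_RECOVERABLE = 2.8      # tcm, recoverable with current technology
WORLD_IN_PLACE = 210.0    # tcm, upper-end world estimate
EJ_PER_TCM = 37           # conversion used throughout this chapter

fraction = US_RECOVERABLE / US_IN_PLACE      # ~14 per cent recoverable
world_recoverable = WORLD_IN_PLACE * fraction
print(world_recoverable)                     # ~30 tcm
print(world_recoverable * EJ_PER_TCM)        # ~1,100 EJ
```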

Tight gas A major non-conventional resource that is beginning to be exploited is tight gas. This is gas that requires the host rock to be fractured before it will flow. Extraction will benefit from a range of ongoing improvements in drilling and fracturing techniques. In the US it already provides about 20 per cent of domestic production.[319]

Although tight gas reservoirs exist in many regions, only the US resources have been assessed. The potential resource for that country has been estimated to be 15.7 tcm (583 EJ) with current technology and 19.82 tcm (733 EJ) with 2025 technology.[320] Germany’s Federal Institute for Geosciences and Natural Resources has arrived at a global potential of 2,856 EJ.[321] This is only a small fraction of the resource in place. Some estimates suggest that there is as much as 424.7 tcm (15,854 EJ) within the state of Wyoming alone.[322]

Aquifer gas Natural gas is often found dissolved in aquifers, and the amount dissolved increases substantially with depth. It is variously referred to as aquifer gas, hydro-pressured gas or brine gas and is expected to occur in nearly all the sedimentary basins of the world. While no detailed assessment of the resource is available, estimates derived from groundwater volume suggest a resource ranging from 2,400 to 30,000 tcm (90,000 to 1,100,000 EJ) with a mean estimate of 16,200 tcm (600,000 EJ). While highly speculative, these estimates suggest a resource of staggering proportions.[323]

Methane hydrates Gas hydrates are ice-like solids in which water molecules trap gas molecules in a cage-like structure known as a clathrate. They form when water and natural gas combine under conditions of moderately high pressure and low temperature. Because the gas is held in a crystal structure, the gas molecules are far more densely packed than under more normal conditions.

Research so far indicates most of the hydrate takes the form of grains or particles in the pores of sedimentary rocks, in zones which can range from tens of centimeters to tens of meters in thickness. Gas hydrate also occurs as nodules, laminae and veins within sediment, and sometimes as thick pure layers.[324] These deposits are found beneath the ocean floor at water depths greater than about 500 meters and in Arctic permafrost.

The resource in place is believed to be vast, with estimates ranging from 2,830 tcm to 8.5 million tcm.[325] According to the IEA, the median estimate is about 21,000 tcm (777,000 EJ).[326] For the US, the USGS provides a mean estimate of 9,069 tcm.[327] This suggests that the resource is at least similar in size to the conventional resources in place of coal, oil and gas combined, and possibly orders of magnitude larger.

A lot of the resource would be quite a challenge to recover because it is widely dispersed in hostile Arctic and deep marine environments and encased in sediments with low permeability. Large scale production in the near future is easier to imagine if sufficient deposits can be found that are exceptions in these respects. At this stage the existence or extent of such deposits is not known.

The US and Canada are presently investigating the resource potential within North American permafrost and testing various technologies. Japan and India also have significant programs. The US Department of Energy (DOE) expects that we will have the resource knowledge and technology to begin commercial production by 2015.

The various methods being examined involve perturbing the hydrate in place so that it decomposes into its constituent natural gas and water. They include heating, injecting various chemicals and decreasing reservoir pressure. Any mining process would have to take into account the risk of major methane releases into the atmosphere. The magnitude and likelihood of such a release are not yet known, nor is the mitigating effect of seawater, which can prevent methane from reaching the atmosphere. The global geologic record appears to indicate that destabilization of the hydrate zone in the past has led to very substantial releases of methane. However, the reasons for this are not yet well understood.[328]

Fossil Fuel as a Whole

As things stand, fossil resources that are already recoverable have an energy value in the order of 60,000 EJ (see Table 3.2). This includes the reserve estimates for coal and non-conventional oil and gas plus the resource estimates for conventional oil and gas.

With total primary energy consumption in 2004 of 470 EJ, and assuming 2 per cent annual energy growth, currently recoverable fossil resources could continue to meet 80 per cent of our energy needs until 2075. To get through to 2100, we would need to increase recoverable resources from 60,000 to 110,000 EJ. This would mean tapping quite a small proportion of the remaining resources in place – 3 per cent of the highly speculative total in Table 3.3, and 6 per cent if we leave out methane hydrates.

Table 3.2 Currently Available Fossil Resources

[insert]

Sources: See text.

Table 3.3: Resources in place for coal and non-conventional oil and gas (excluding what is already recoverable)

[insert]

Sources: See text.
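The 2075 and 110,000 EJ figures above can be checked with the same depletion arithmetic, assuming fossil fuels meet 80 per cent of a 470 EJ total growing at 2 per cent a year:

```python
import math

FOSSIL_USE_2004 = 0.8 * 470   # EJ a year currently met by fossil fuels
RECOVERABLE = 60_000          # EJ, currently recoverable (Table 3.2)
GROWTH = 0.02

# Year in which cumulative fossil consumption exhausts 60,000 EJ:
years = math.log(1 + RECOVERABLE * GROWTH / FOSSIL_USE_2004) / math.log(1 + GROWTH)
print(round(2004 + years))    # mid-2070s, matching the 2075 figure above

# Cumulative fossil consumption needed to keep the 80 per cent share to 2100:
needed = sum(FOSSIL_USE_2004 * (1 + GROWTH) ** t for t in range(1, 97))
print(round(needed))          # ~110,000 EJ
```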

Increasing our access to the resource by this amount should not be a major demand on investment or technological innovation. Remote resources can become less so, recovery rates can be improved in coal mining, and drilling, rock fracture and in situ technologies can improve considerably.

This suggests that it would be possible to continue our reliance on fossil fuels through this century and into the next. However, given the vast levels of energy that would be consumed in the 22nd century and beyond, the resource is certainly an historically limited one. With a continuing growth rate of 2 per cent, the entire resource would be fully consumed during the first part of the 23rd century, while a growth rate of 1 per cent after 2100 would only stretch the resource to the last quarter of the 23rd century. Only with a zero growth rate after 2100 would the resource last until the middle of the millennium.

CO2 Emissions and Global Warming

The biggest question mark hanging over fossil fuels is not their availability but rather the effect on the climate of the carbon dioxide (CO2) released when we burn them. For a given unit of energy, coal is the worst in this regard followed by oil then gas.[329]

Carbon dioxide, methane, water vapor and a number of other gases and aerosols residing in the atmosphere retain some of the heat that would otherwise escape back into space. As a result, most of the earth is well above freezing most of the time. In fact, average global temperatures are 33 degrees C warmer than they would otherwise be.[330] The concern is that we are increasing temperature levels by adding extra CO2. This has a direct greenhouse effect and also an indirect one, because the warming increases the amount of water vapor in the air.

Any atmospheric warming effect would follow a lag of 30 years or more because the oceans absorb much of the heat.[331] The most pronounced effect of any warming would be a rise in sea levels due to the thermal expansion of the oceans and the melting of ice sheets in Greenland and Antarctica. This would occur gradually over centuries and even millennia. The increase in water vapor would lead to increased precipitation overall.

Uncertainties

While there is general agreement that increased CO2 can cause warming, there is considerable disagreement or uncertainty about the extent of the impact. This shows up even between the climate models used by researchers. Their predictions of the effect of doubling CO2 from its pre-industrial level range from 1.5 to 4.5 degrees C.[332] The low end is fairly benign and scarcely noticeable, while the high end could be far more serious.

There are three major areas of uncertainty and controversy: (1) the extent to which there has been warming to date and the blame due to human emissions of greenhouse gases, (2) whether any increase in clouds from global warming amplifies or diminishes the greenhouse effect and (3) the extent to which emissions ultimately translate into higher concentration levels in the atmosphere.

The surface temperature records show warming at a rate of 0.17 degrees C per decade since 1976.[333] However, various doubts have been raised about the figures on which this is based.[334] To begin with, some claim that they fail to adequately adjust for the so-called heat island effect, whereby readings from urban weather stations can be influenced by localized warming due to heat-retaining asphalt, brick and concrete replacing grass and trees. Adjustment problems include: the frequent lack of nearby rural stations for comparison; the use of population rather than construction growth as an index of urbanization; and the failure to take into account the fact that even quite small towns have a significant heat island effect. Other sources of inaccuracy in the record include the uneven placement of weather stations, with most located in the northern hemisphere, at mid-latitudes and on land; the two-thirds decline in the number of stations since 1975;[335] the use of sea surface temperature as a proxy for air temperature over the ocean; and changes in vegetation and structures in the vicinity of weather stations.

It is frequently claimed that warming from the enhanced greenhouse effect has been temporarily dampened by sulfur emissions which have a cooling effect and their impact will diminish with increased pollution controls. However, this does not take into account the fact that reductions in sulfur emissions are accompanied by reductions in soot emissions which have a warming effect. At the very least soot would do much to cancel out the sulfur. Indeed, James Hansen, the NASA researcher who helped father the global warming scare in the late 1980s, estimates that soot may be responsible for 25 percent of observed global warming over the past century.[336] The sulfur theory also receives no comfort from the fact that there has been a warming in the northern hemisphere, where the emissions mainly occur, compared with a cooling in the southern hemisphere.[337]

Detecting warming is one thing, blaming greenhouse gas emissions is another. A graph that appears prominently in publications of the Intergovernmental Panel on Climate Change (IPCC) is the “hockey stick”, which shows temperature levels as fairly flat throughout the last millennium until the beginning of the twentieth century, when they rise significantly. This conveys the idea that temperature levels do not normally vary much and that there must be something abnormal happening in the last century, the prime candidate, of course, being greenhouse gas emissions. The way this graph was devised using tree ring proxy data has recently been subject to what appear to be fatal blows from its critics.[338] It also flies in the face of considerable evidence of climate variability over the last thousand years or so. There appears to have been a warm period from the 8th to the 13th century when temperatures were at least as high as or even higher than present levels. It was a time when grapes grew in southern England and the Vikings settled Greenland. This was followed by what has been called the Little Ice Age, which lasted until the middle of the 19th century. This suggests a cyclical movement, with the current warming at least in part being due to an ongoing emergence from the Little Ice Age, and that any future warming will not be as far from the normal range as the hockey stick suggests.

Another major area of uncertainty is the effect of clouds. High level cirrus clouds amplify any warming while low level cumulus clouds dampen it. How global warming would affect the absolute and relative prevalence of these two types is quite unclear. One theory presently being assessed claims that ocean warming leads to a reduction in overhead cirrus clouds, and this could do much to dampen the greenhouse effect. It has been dubbed the iris effect because it can be compared to the way the iris of the eye opens and closes in response to changing light levels.

Another source of uncertainty is the extent that CO2 emissions translate into higher concentration levels in the atmosphere. This depends on the workings of the so-called carbon cycle which is far from well understood. CO2 is held in the atmosphere, the biosphere and the oceans and there are constant exchanges to and fro between them. The atmosphere contains around 800 gigatonnes of carbon (GtC), the land biosphere (plants, animals and soil) 2000 GtC, surface ocean 1000 GtC and intermediate and deep oceans 38,000 GtC.[339] The level of CO2 in the air could be just as affected by natural variations in the exchanges between the atmosphere and the biosphere or oceans as by emission levels. Another wildcard here is the extent that warming or increased CO2 levels lead to either positive or negative CO2 feedbacks. On the one hand, increased CO2 can lead to increased plant growth which leads to greater uptake of CO2. On the other hand, changed climate conditions may lead to increased decomposition and greater release of CO2.

Alarmism

The picture is made even murkier by the alarmism encouraged by various elements in society. First, there is the environment movement which has a penchant for discovering catastrophic consequences in everything that humans do. Secondly, among scientists in the area there is a tendency towards alarmism because those who shout “big problem” either out of honest belief or opportunism tend to get more than their due share of research funding. And then to top it off, we have the media which is more than happy to be fed sensational stories of looming disaster.

Two prominent global warming advocates have openly admitted that it was OK to exaggerate in order to get the ball rolling. According to Professor Stephen Schneider of Stanford University, drawing attention to global warming required “getting loads of media coverage. So, we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.”[340] James Hansen the NASA scientist who kicked off the greenhouse scare in 1988 at a US Congressional hearing, admitted in 2003: “emphasis on extreme scenarios may have been appropriate at one time, when the public and decision makers were relatively unaware of the global warming issue.”[341] Possibly others consider there is still sufficient urgency to justify embellishment, particularly given the limited level of effective government response.

A lot of the alarmism is about serious and nasty climate change supposedly already happening in the here and now. We are presented with smoking guns or asphyxiated mine canaries that can only prompt us to think the worst about what is in store for us. We are told that the small amount of warming that has occurred in the last 100 years is melting our ice caps and glaciers and causing more extreme weather events. Even increased plant growth from higher CO2 levels can be given a gloomy spin.

We’re Melting!

One of the most popular shock horror genres involves stories of melting glaciers and ice caps. While climate science predicts a very slow melting over many centuries if warming is sustained, the more zealous climate worriers would have us believe that our cities are just about to be inundated. For example, Greenpeace’s spokesperson on global warming claimed not long ago that by 2080 Manhattan and Shanghai could be underwater if we did not rapidly cut back on greenhouse gases.[342]

Antarctica

We have been told that Antarctica, the largest store of ice, is warming, and that this is responsible for stressing out penguins and causing ice shelves to crash into the sea. However, warming reports have relied excessively on the Antarctic Peninsula, which is only 1/50th of the entire continent. In this area there has been a warming of about 4.5 degrees Fahrenheit (2.5 degrees C) since 1945, and the melt season has increased by 2 to 3 weeks in just the past 20 years. However, if you look at Antarctica as a whole the story is quite different. Meteorological data indicates that there was an overall net cooling on the continent between 1966 and 2000.[343] This cooling was during summer and autumn, the only seasons when a temperature change can have an effect on ice formation or melting.

Back in 1998 a very large iceberg broke away from the Ross Ice Shelf. This of course was attributed to global warming by many despite the fact that the temperature record shows no warming in the region. According to Antarctic researchers from the University of Wisconsin, “The breakage is part of the normal iceberg formation or ‘calving’ that comes as thick layers of ice gradually slide down from the high Antarctic plateau, and is not related to climate changes or global warming.”[344] Furthermore, studies show that the glaciers feeding the Ross Ice Shelf are actually getting bigger. So there is no reason to believe that it will contribute to rising sea levels any time soon.[345]

Notwithstanding popular perceptions, climate scientists expect that any global warming during the 21st century will cause the Antarctic to make a negative contribution to sea level because the warmer, wetter atmosphere will lead to more snow over the continent.[346]

Greenland

The other main potential source of melting land-based ice is Greenland. However, at least for the moment, it does not look like much is happening. According to Thomas et al., there is an overall balance, with some regions thickening and others thinning.[347] Krabill et al. estimate that there is an overall balance above 2000 meters but an overall thinning at lower elevations, sufficient to raise sea level by 0.13 millimeters per year.[348] That is much less than the thickness of a toothpick. So to have any effect at all on sea level this century, any ice melting will have to get a move on. Even the IPCC expects that Greenland will contribute little if anything to sea levels over the next century. They see a contribution ranging from -0.02 to 0.09 meters.[349]

If we are going to see Greenland ice sheets crashing any time soon, we will also need to see quite a lot more signs of warming in that region than we have seen to date. According to Krabill et al., for the period 1900-95 the highest summer temperatures were in the 1930s, while the last 15 years of the period were about half a degree colder than the ninety-six-year mean. According to Hanna and Cappelen, temperatures in the southern coastal region have dropped 1.29 degrees C since 1959, and the nearby sea surface has seen a similar downward temperature trend over the same period.[350] Chylek reports that the Greenland coastal station data have undergone a predominantly cooling trend since 1940, and that at the summit of the Greenland ice sheet the summer average temperature has decreased at the rate of 2.2 degrees C per decade since the beginning of measurements in 1987.[351]

Mountain Glaciers

While a comparatively small source of melting ice, mountain glaciers are a veritable treasure trove of inaccurate claims of global warming in action. A number of examples should give the general flavor. One of the more renowned is the case of Mount Kilimanjaro. Claims repeated by various prominent people, such as Senator John McCain and Sir David King, the UK’s chief scientist, that global warming has caused snow and ice loss on Mount Kilimanjaro have been a media attention grabber. However, the mountain has been melting since the end of the 19th century, and the melting over the last 30 years represents the slowest rate of decline since 1912.[352] And it is hard to blame warming even for that, given that temperatures don’t vary much around the annual average of -7.1 degrees C and satellite data since 1979 show a slight cooling over the mountain and surrounding region. The culprit appears to be the fact that it has been drier over the last century. “With less snowfall to replenish the glacier and less cloud cover to shield it from solar radiation, Kilimanjaro lost glacial mass even during periods of global and regional cooling.”[353]

In early September 2001, NBC ran a report claiming that the melting of glaciers in Montana’s Glacier National Park is due to warming. It is true that there has been a 3.5 degrees F warming if you only go back to 1950. However, if you examine the entire temperature record over the last century or more there is no upward trend, with current average summer temperatures being much as they were at the beginning of the record.[354]

On July 9 2001, the Washington Post published a story claiming that glaciers were receding in Peru because of global warming. It reported the claims of a local glaciologist that this was due to rising temperatures. However, a look at the records indicates that there has been no warming in the region over the last two decades. Furthermore, Peru’s glaciers have been receding for at least 150 years.[355]

On March 14 2005 the Reuters news agency cited a World Wildlife Fund press release about retreating glaciers in the Himalayas. They were especially interested in the Gangotri glacier, which they said is retreating at an average rate of 23 meters per year. It is true that the glacier has been retreating at an increasing rate and that summer temperatures have increased since 1990. However, the glacier has been retreating over most of the last century, and the acceleration in the rate began in the mid 1950s.[356] Also, the summer temperature increase since 1990 comes after a dip in the 1970s and 1980s, and temperatures are still lower than they were during the 100 years prior to that.[357] This looks more like a long term retreat which has little or nothing to do with warmer weather.

Arctic

While not contributing to rising sea levels, in much the same way that melting ice cubes do not raise the water level in a glass, melting sea ice in the Arctic would still be of some interest if it could be shown to be a smoking gun for global warming.

Greenpeace makes much of a 5.5 per cent decline in Arctic sea ice since 1978.[358] However, to see a human cause is to mislead with statistics. Data from a range of sources indicates no long term trend in Arctic temperatures.[359] What does show up is a temporary dip in the 1970s and a recovery since then to the levels of the 1930s and 1940s. Weather balloon data does show some long term warming in winter; however, this is not going to affect ice cover, given that ice does not melt at that time of year.

Studies in 1999 and 2000 of measurements taken by submarines seemed to suggest that there had been a 42 per cent loss of sea ice thickness over the last 40 years. This is one of the most quoted claims in the IPCC’s Third Assessment Report. At the same time, however, other studies attribute some or all of the 42 per cent decrease to the location and timing of the submarine cruises and show either no significant decrease or a more modest 12 to 16 per cent. A panel commissioned by the Arctic Ocean Science Board to review the research accepted the possibility that the observed thinning was the product of sparse data coverage and large inter-annual variability of sea-ice thickness, and hoped that new satellite-based measuring methods presently under development would provide data of the quality required.[360]

On August 19 2000, the New York Times reported on page one that the “North Pole is Melting” and that “the last time scientists can be certain that the Pole was awash in water was more than 50 million years ago.” The Times based its story on a call from a couple of scientists on board a Russian ice breaker, one of whom was a professor of oceanography and co-chair of the IPCC’s Working Group II (“Adaptation and Impacts of Climate Change”).[361] The ship found itself in open Arctic water and the scientists were convinced that this was a sign of global warming. However, the Times finally issued a not very prominently placed retraction after it received numerous eyewitness accounts and photographic evidence of open water at the Pole in previous years. Presidential hopeful Senator John Kerry obviously missed the retraction because he said on May 1 2001: “…[T]his summer the North Pole was water for the first time in recorded history.”[362]

More Frequent and Violent Storms

As a matter of routine, whenever there is a violent storm, the media and the alarmists inform us that such events are becoming more frequent and violent and that this is due to global warming. Yet in its 2001 report the IPCC found “no compelling evidence to indicate that the characteristics of tropical and extra-tropical storms have changed” during the 20th century.[363] In the case of hurricanes from the Atlantic, which do so much damage when they make landfall in the US, there has actually been a decline in both frequency and intensity from 1944 to 1995.[364] As for US tornadoes or twisters, there is no upward trend once you allow for the effects of improved monitoring; and in the case of killer tornadoes, categories 3 to 5 on the Fujita scale, there is a slight declining trend.[365]

Even Good News is Bad News

When it comes to doom and gloom, even good news can be turned into bad news. This is what happened when Bill Clinton’s Secretary of Agriculture, during the 2000 election campaign, hyped a report about how increased CO2 was leading to more ragweed pollen and this would cause more hay fever.[366] This was a very odd perspective given that any encouragement to ragweed from increased CO2 also applies to plants generally. It would have made more sense to announce that increased CO2 was leading to more food and forests.

What about Eco-Catastrophes?

If the impacts of a bit of global warming were extremely severe, there would then be a strong case for immediate and drastic emission reductions. This is where prophecies of eco-catastrophe come in. The best known of these are (1) the melting of the Arctic permafrost subsoil and (2) the closing down of the Gulf Stream.

Much of the permafrost – permanently frozen subsoil – of the Arctic regions of North America, Europe and Siberia has a surface covering of peat which holds an estimated 14 per cent of the world’s carbon. Peat consists largely of organic residues which have not decomposed because of the high moisture environment. Rising temperatures could thaw the subsoil and lead to a lowering of the water table. This would dry out the peat, which would begin to decompose and release CO2 into the atmosphere.

As discussed above, there has been no warming to date in the Arctic region as a whole, and this gives weight to the lower warming projections. Also, studies of permafrost in Barrow, Alaska,[367] and northern Quebec showed no signs of thawing.[368]

A number of studies have found evidence that thawing of permafrost can actually be associated with increased carbon sequestration by peat lands. A warmer climate would lead to greater levels of vegetation, as would the fertilizing effect of higher CO2 levels. Research also indicates that plants growing in a more CO2-rich environment decompose less readily.[369]

A deep enough thaw of permafrost could destabilize underlying methane hydrates, leading to the release of methane. This could have a major impact if it occurred on a large enough scale. However, to have a long term effect it would need to be sustained, because methane only has a life of about 10 years in the atmosphere.

It has been claimed that global warming could close down the Atlantic Gulf Stream, which pulls warm water from the tropics to the higher latitudes and is believed to provide western Europe with a far milder climate than similar latitudes in North America. This would require the 30 million people living in the Scandinavian and Baltic countries to adapt like the Eskimos or move south. For most of the continent it would presumably mean having a climate much like Ontario and Minnesota, which arguably is not an eco-catastrophe.

According to the scenario, the warmer currents would cease to travel to European waters because melting ice and increasing rainfall in the North Atlantic would switch off the process of thermohaline circulation. This relies on evaporation and iceberg formation making surface water more salty so that it sinks to the deep.

However, according to Carl Wunsch, an authority on ocean currents:

European readers should be reassured that the Gulf Stream’s existence is a consequence of the large-scale wind system over the North Atlantic Ocean, and of the nature of fluid motion on a rotating planet. The only way to produce an ocean circulation without a Gulf Stream is either to turn off the wind system, or to stop the Earth’s rotation, or both.

. . . The occurrence of a climate state without the Gulf Stream any time soon – within tens of millions of years – has a probability of little more than zero.[370]

Some scientists have also raised doubts about whether the Gulf Stream actually has a pivotal role in Europe’s weather.[371] Their research indicates a greater role for circulation of the air rather than the ocean. Firstly, because the prevailing winds across the Atlantic blow from west to east, western Europe benefits from the fact that the ocean gradually releases in winter the heat it has stored in summer. Added to this is the effect of the Rocky Mountains which influence the flow of winds within the atmosphere. These tend to bring cold winds from the north into eastern North America and warm winds from the south into western Europe.

All sorts of disasters are possible in future ages, human induced or otherwise. The longer the time frame the more likely they are. In fact, one seems to be a dead certainty, namely, the next Ice Age. However, the further humans travel into the future, the greater will be their ability to understand, adapt to and control natural forces.

Business as Usual for Now

With no signs of eco-catastrophe, there seems to be no strong reason to stray far from a “business as usual” approach, at least for the next couple of decades. That of course does not rule out a strong research and development program for emission free technologies plus some assistance to get them operating on a large scale. This will give us a wider range of options further down the track.

It also leaves open the prospect of keeping within a doubling from pre-industrial levels of greenhouse gas concentrations (i.e., 560 parts per million (ppm) measured in CO2 equivalent). Here is one scenario just to illustrate the point. With greenhouse gas concentrations currently at 430 ppm CO2 equivalent and energy’s share of emissions at 57 per cent,[372] let us allot energy another 75 ppm of the remaining 130 ppm prior to all emissions falling to one GtC per year. (This is the level that the various carbon sinks can absorb.) We are assuming here that other sources of greenhouse gases are reined in to the same degree.[373] With annual energy emissions currently contributing 6.53 GtC[374] and increasing by 2 per cent per year until 2025, another 89 GtC (42 ppm) will be added if we make the usual assumption that 50 per cent of emissions is retained in the atmosphere. However, by introducing a 5.6 per cent annual reduction after 2025 we would get emissions down towards one GtC by 2060, with the additional contribution to the atmosphere from now until then being 161 GtC (75 ppm).
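
For readers who want to check this arithmetic, the short Python sketch below reruns the scenario. The 2004 start year, the 50 per cent airborne fraction and the conversion factor of 2.13 GtC per ppm are my own working assumptions rather than figures from the sources cited, and the sketch reproduces the 89 GtC and 161 GtC figures to within rounding.

    # Sketch of the illustrative emissions scenario described above.
    # Assumptions of mine, not from the cited sources: 2004 start year,
    # 50 per cent of emitted carbon stays in the atmosphere, 2.13 GtC per ppm.
    GTC_PER_PPM = 2.13
    AIRBORNE_FRACTION = 0.5

    emissions = 6.53            # GtC per year in the start year
    cumulative = 0.0            # GtC emitted from the start year onwards
    added_by_2025 = 0.0
    for year in range(2004, 2061):
        cumulative += emissions
        if year == 2025:
            added_by_2025 = cumulative
        emissions *= 1.02 if year < 2025 else 0.944   # 2% growth, then 5.6% cuts

    for label, total in (("2025", added_by_2025), ("2060", cumulative)):
        retained = total * AIRBORNE_FRACTION
        print("Retained by %s: %.0f GtC (%.0f ppm)" % (label, retained, retained / GTC_PER_PPM))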

Adapting to any Climate Change

Cutting back on greenhouse emissions is not the only approach to possible climate change. Adapting to it is another. And the best way to help people in the future to adapt is to increase the rate of economic progress. The higher the level of economic development and know-how, the better they can meet any challenges. Air conditioning and better insulation will protect people from any increase in heat waves. (It should be kept in mind, though, that much of any warming would not take the form of higher afternoon summer temperatures. A lot of it would be warmer winter nights.) Better housing and emergency infrastructure such as warning systems, shelter and rescue services can reduce death and misery if global warming leads to more extreme weather conditions such as floods. Better public health, vector eradication, treatments and cures can counter any climate induced movement of diseases such as malaria.

Adapting to any sea level rise should not be a great strain given the long time frame involved. In the 21st century any increase would be confined to thermal expansion and would perhaps be double or triple the 18 cm (7 inch) rise which we had no trouble dealing with during the 20th century and were generally unaware of. The melting of the ice sheets would only have an effect more than a century down the track and would occur over many centuries. People would have plenty of time to either build retaining works or move to higher ground.

Global warming is not expected to have a net negative effect on agricultural production. While some areas could be adversely affected by increased flooding, soil evaporation or drought, other areas would have better farming conditions such as longer growing seasons and more rainfall. And all regions could benefit from the fertilizing effect of CO2 which increases plant growth and tolerance of poor conditions. Any area adversely affected could respond either by increasing food imports or introducing plants and livestock with higher stress tolerance.

If one is concerned about inequitable effects of global warming because of the greater vulnerability of the poor, a focus on economic development including increased aid has the benefit of assisting people right now and not just when climate change hits 50 years or more from now. Even a relatively small proportion of what it would cost developed countries to seriously reduce CO2 emissions over the next few decades would make a huge difference in developing countries, assuming it was accompanied by the kinds of political changes that are required for economic development. In other words it could not be the old routine whereby the World Bank finances kleptocrats and demagogues. (See the discussion in the final chapter.)

A similar approach can be applied to threatened impacts of climate change on the natural environment. In particular there is a concern that natural systems, and particularly flora and fauna, will find themselves with new climates to which they are not suited, while lack of time or physical barriers, including human habitation and agriculture, prevents a shift to a more suitable location. Over coming decades we can slow down and eventually halt human encroachment by expanding and improving national parks and other forms of nature protection, and increase the efficiency of agriculture so that more food can be produced from a given area of land. And our descendants, with their higher level of economic development and scientific knowledge, will have increasing means to protect biodiversity.

Another low pain way of helping people in the future is to fund research and development into technologies with long lead times which will increase their ability to reduce emissions. This is something we are already doing with government supported research and development into solar, wind, geothermal and nuclear energy and also fossil fuel technologies that allow for the capture and storage of CO2. These efforts should include building up a few decades of experience of full scale operation.

CO2 Capture and Storage

Cutting CO2 emissions does not have to mean abandoning fossil fuels as a major resource if economic means can be found to capture and then store the CO2 they produce. This is a burgeoning area of research and is generally referred to as sequestration. For point sources such as power stations, capture would be part of the production process, while for diffuse sources such as motor vehicles and heating it would require extracting the gas from the atmosphere. Emissions from these two sources are roughly equal.[375]

Three different technologies are being considered for emission capture from power plants and other point sources. The one that is nearest to being operational would extract CO2 from exhaust gases. These would be bubbled through a liquid solvent which would dissolve the CO2. The solvent would then have to be heated to release it for storage. This process consumes a lot of energy and could increase electricity costs by as much as 70 per cent,[376] although research in progress promises to reduce exhaust capture costs considerably.

Another approach, called oxyfuel technology, would separate oxygen from air and then burn hydrocarbons in it to produce an exhaust with a very high concentration of CO2, and so eliminate the need for separation. The main challenge is to develop a less expensive way of producing oxygen. A number of new techniques are being tested at pilot scale.[377]

The third approach is called pre-combustion decarbonization where natural gas or coal is converted to hydrogen and CO2. The CO2 is compressed for storage and the hydrogen is available as a fuel which only emits nitrogen and water. This has the advantage of providing emission free fuel for motor vehicles as well as for electricity generation. FutureGen, a partnership between the US, a number of other governments and private industry, aims to develop this technology over the next 15 years at a cost of $1 billion. The plan is to have a 275 MWe coal fuelled prototype plant operating in a decade which captures at least 90 per cent of CO2 and only increases electricity costs by 10 per cent or 0.2 cents per kilowatt-hour. The program then aims to develop further improvements which would lead to technologies that achieve near zero CO2 emissions and do not add to energy costs.

If we want to capture CO2 emissions regardless of their source, including diffuse ones such as motor vehicles and home heating, we need to extract it from the atmosphere. According to the proponents of this idea, the air would need to be exposed to a sorbent, an agent that absorbs CO2. This would require an array of units distributed across the landscape much like wind turbines. The big difference is that the land area requirements for a particular amount of energy produced from fossil fuel would be two orders of magnitude less than that required to produce the same energy with wind turbines.[378] In these units air would be blown by fans onto a flowing sorbent. The CO2 would then be removed and stored, and the regenerated sorbent recycled.

At the moment the only practical sorbent is calcium hydroxide, which would combine with CO2 in the air to produce calcium carbonate (CaCO3). This would then be heated in a closed vessel to produce calcium oxide and CO2. The calcium oxide would then be returned to the water to regenerate the sorbent. Proponents estimate that it would cost between 20 and 25 US cents per gallon of gasoline.[379] However, they are hopeful that another absorbing agent can be found which requires far less energy at the processing stage. While most of the CO2 from a point source would be more cheaply removed in-house rather than later by air capture because of the higher concentrations, this may not be the case for the final 10 per cent or so. In other words, the least cost approach may be to stop short of zero emissions at the point source and rely on air capture to pick up what was missed. Air capture could also have the advantage of being more readily placed near CO2 storage facilities.
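
The chemistry of this loop is simple enough to put numbers on. The sketch below works through the mass balance and translates the proponents’ cost estimate into dollars per tonne of CO2; the figure of roughly 8.9 kg of CO2 per gallon of gasoline is my own assumption, not one taken from the cited study.

    # Mass balance for the calcium hydroxide air-capture loop described above:
    #   Ca(OH)2 + CO2 -> CaCO3 + H2O   (capture from the air)
    #   CaCO3 -> CaO + CO2             (regeneration by heating)
    #   CaO + H2O -> Ca(OH)2           (sorbent recovery)
    M_CO2, M_CAOH2 = 44.0, 74.1        # approximate molar masses, g/mol

    print("Sorbent cycled per tonne of CO2 captured: %.2f t" % (M_CAOH2 / M_CO2))

    # Translating the 20-25 US cents per gallon estimate into dollars per tonne
    # of CO2, assuming roughly 8.9 kg of CO2 per gallon of gasoline (my figure,
    # not one from the cited study).
    CO2_TONNES_PER_GALLON = 0.0089
    for cents in (20, 25):
        print("%d cents a gallon is roughly $%.0f per tonne of CO2"
              % (cents, cents / 100.0 / CO2_TONNES_PER_GALLON))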

Enhancing nature’s very own air capture is another approach. Plants, microbes and soil normally absorb a considerable amount of CO2 from the air. We can plant trees and other vegetation and encourage life that is particularly efficient at absorbing CO2. For example, it has been suggested that aquatic microalgae, which have carbon fixing rates an order of magnitude higher than those of land-based plants, could be installed in photobioreactors arranged much like solar panels. They would produce a stable carbon compound ready for storage.[380] Biotechnology might help things along by breeding plants that grow more quickly or are in other ways more efficient at carbon fixing. And in the case of soil, CO2 capture should be improved by the move to conservation tillage which leaves the soil less disturbed.

Once CO2 is captured it can be stored in the ocean or underground, or converted to solid and harmless rock. The underground option is receiving most attention at the moment and includes storage in deep saline aquifers, depleted oil and natural gas fields and deep coal beds. Of these, the aquifers have the largest capacity, with estimates ranging from 2,700 GtC to 13,000 GtC.[381] These saline formations are layers of porous rock that are saturated with brine. They include not just the aquifers with structural traps, which have a relatively small capacity, perhaps 10 years worth of emissions, but also the more extensive open ones. These are thought to be suitable as long as the CO2 is injected far enough from reservoir boundaries that it dissolves in the water or precipitates out as a mineral, through reactions with the surrounding rock, before migrating more than a few kilometers towards the basin boundaries.[382] The use of aquifer storage is proving successful in the North Sea, where CO2 stripped from natural gas produced at the Sleipner gas field by the Norwegian oil company Statoil has been injected for the last 5 years into the Utsira Formation, some 1,000 meters under the sea bed.

CO2 can also be injected into depleted wells to push out otherwise inaccessible oil and gas, or into unmineable coal seams to dislodge coal-bed methane. Geologists estimate that as much as 500 GtC can be locked away in such sites.[383] This is about two-thirds of all the carbon in the atmosphere today.

CO2 can be disposed of by converting it into solid rocks called carbonates through a reaction with certain kinds of minerals. This would be inherently more stable than storage as a gas or liquid, and more compact. Recent research indicates that a process which naturally takes place over extremely slow geological time scales can be accomplished within minutes under certain temperature and pressure conditions.[384] The raw materials required for this process exist in vast quantities across the globe. Estimated mining and mineral preparation costs are currently not prohibitive, but work still needs to be done to reduce the energy required for the process.[385]

Storage in the ocean depths is another possibility. The amount of CO2 that would cause a doubling of the atmospheric concentration would change the ocean concentration by less than 2 per cent. However, as a general rule about 20 per cent of it would eventually return to the atmosphere after a period of somewhere between 300 and 1,000 years,[386] and the resulting reduction in pH levels may have environmental consequences. These effects would be obviated if the CO2 could be kept in an isolated form, e.g., if injected in such a way that it turns into a carbonate or CO2 clathrates. Indeed, if methane hydrates from the ocean floor are ever exploited, it may prove possible to store captured CO2 as clathrates in the same deposits from which the methane was extracted, given that they are stable under similar pressure and temperature conditions.[387]

Solar Energy in its Various Forms

Solar energy can be either harnessed directly as it strikes the planet or after it has taken on an earthly garb. The latter forms include wind, waves, falling water (i.e., hydropower) and plant biomass. Wind is the horizontal movement of air caused by the sun’s uneven heating of the earth’s surface, while waves are created by wind blowing over sea water. Hydroelectric power has its origin in the evaporation of water by the sun and its subsequent precipitation on land at high altitudes. Plants convert solar energy through photosynthesis into chemical energy which can then be burnt for heat.

Direct Solar

The heat from the sun can be used to warm water, to heat buildings and to drive electric generators while its rays can be captured by photovoltaic cells and converted into electricity. Other possible future methods of exploitation are the channeling of light into buildings through optic fibers and the use of solar energy to split water to create hydrogen which can then be used as a fuel.

At the moment the most commonly employed means of harnessing solar energy is in domestic water or space heaters attached to the roof. These are large glass covered boxes which absorb heat and then transfer it to water or some other fluid through a system of pipes. According to the World Energy Council (WEC), only 2 m2 of collector area is required to provide 80 per cent of the water heating demands for a family in a Mediterranean climate.[388]

Space heating can also be provided by ‘solar architecture’ where buildings are designed to capture the sun’s heat. Large windows are positioned to maximize intake of radiant heat during the cooler months. Part of the heat warms the inside air while the rest is absorbed by specially designed inner walls which slowly release the heat once the radiant heat begins to decline late in the day. The escape of heat from the building is retarded by well sealed and insulated walls and windows which freely allow solar radiation in but are slow to conduct the heat out again.

At the moment a very small share of our electricity is provided by photovoltaic (PV) technology which is widely used in niche markets such as powering unmanned equipment or isolated homesteads or communities away from the power grid. In the case of households, PV panels are either attached to the roof or arrayed on nearby land. The panels comprise flat crystal cells made of semiconductor material, usually silicon, which absorbs light and then releases electrons which flow through an external circuit to generate electricity.[389] With the current state of the technology, about 10-15 percent of the solar energy that strikes the cell’s surface is converted into electricity.

A number of thermal systems of electricity generation have also been developed, although at this stage these are confined to a handful of trial projects. Some systems focus solar energy at a particular point using reflectors and use the heated fluid to drive an electric generator. The reflectors track the sun during the course of the day to maximize the sunlight hitting them. These concentrating technologies can be classified into three types. Reflective parabolic troughs focus sunlight on a fluid filled receiver tube running along their front. Reflective parabolic dishes focus heat at the focal point of the dish, where a receiver containing heated fluid drives a small generator. So-called power towers use a large number of sun tracking flat plane mirrors to focus sunlight onto a central receiver on top of a tower. Another very different system is the solar chimney. Instead of concentrating sunlight it relies on a greenhouse effect. A small facility has been trialed in Spain and a large scale commercial operation is planned for near Mildura in Australia.[390] In the case of the latter, the chimney will be surrounded by a 7 kilometer diameter ‘greenhouse’ which creates a hot draft that is sucked up the chimney where it drives electricity generators.

Solar lighting is a technology which is near the operational stage. Dishes on the roof, guided by a tracking system, collect the sunlight and ‘feed’ it along fiber optic cables to supplement electric lighting. Sensors keep the room at a steady lighting level by adjusting the electric lights based on the sunlight available.[391]

The development of methods that use solar energy to split water and produce hydrogen is presently the focus of research. Three technologies are being looked at. The photoelectrochemical solar cell is the closest to being ready for use, although still requiring a lot of work. The other two would appear to be more long term. The photobiological process would use specialized microorganisms which, with the aid of sunlight and water, produce hydrogen as a by-product of their metabolism. Because existing organisms such as green algae and cyanobacteria do this too slowly, some of the current research effort involves developing a genetically modified one that will be far more efficient at this task. Another approach, a kind of artificial photosynthesis, would adapt the process employed by plants, which transform solar energy into chemical energy with the aid of carbon dioxide and water. The hope is that at least one of these methods would be less costly and more energy efficient than using the electricity from solar energy to produce hydrogen by electrolysis. In this process an electric current passing through water splits it into hydrogen and oxygen.

Nature of the Resource

The sun is a very dilute resource, so harnessing it will require a lot of capital equipment spread over large areas. It is also highly variable from hour to hour, day to day, week to week, season to season and location to location. When not relied on too heavily, other resources can fill the breach when there is insufficient insolation. However, if solar is to be a major source of energy we need to get round this variability by transmitting electricity over the vast distances from regions with a solar energy surplus to ones with a deficit and by converting solar energy to hydrogen which can be stored and transported. Achieving these objectives will require a lot more research and development.

Extent of the Resource

Below is an assessment of the extent of the various sources of solar energy and the extent to which they could meet our energy needs.

Deserts

The deserts of the world are often mentioned as an ideal place to install solar panels. It is land that we don’t need for other purposes and nature would usually not feel greatly put upon. Deserts take up about 20 million square kilometers which is about 15 per cent of the ice free land area. The Sahara makes up about 45 per cent of this. Other major desert areas are to be found in Australia, the Middle East, Mexico, south west US, Chile and south west Africa.

To give a world with 10 billion people the average per capita electricity consumption presently found in rich countries would require a total output of around 100,000 TWh per annum, which is a 6 fold increase over the 2004 level.[392] With the average annual insolation for these desert regions at around 2,300 kWh/m2, current technology achieving an energy efficiency of 10 per cent, and panels taking up twice their surface area to prevent them casting shadows on each other, we would need 4.3 per cent of this area. This does not take into account the energy losses from long distance transmission and from converting electricity into a portable resource such as hydrogen. So a somewhat larger area would actually be required. If we were to produce all the 2,300 EJ (639,000 TWh) of primary energy (and not just the electricity) required by 10 billion people consuming the current rich-country average, we would take up 28 per cent of the desert area. There would be transmission and conversion losses here too. However, these may not be far greater than those incurred in the conversion of coal to electricity and oil to refined fuel.
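
The percentages above follow directly from the figures quoted in the paragraph, as the following sketch shows:

    # Re-running the desert solar arithmetic quoted above.
    DESERT_KM2 = 20e6          # total world desert area
    INSOLATION = 2300          # kWh per m2 per year
    EFFICIENCY = 0.10          # fraction of insolation converted to electricity
    SPACING = 2                # land occupied per m2 of panel, to avoid shading

    def desert_share(twh_per_year):
        """Fraction of world desert needed to generate the given annual output."""
        kwh = twh_per_year * 1e9                    # TWh -> kWh
        panel_m2 = kwh / (INSOLATION * EFFICIENCY)
        land_km2 = panel_m2 * SPACING / 1e6         # m2 -> km2
        return land_km2 / DESERT_KM2

    print("Electricity only (100,000 TWh): %.1f%% of desert area" % (100 * desert_share(100000)))
    print("All primary energy (639,000 TWh): %.0f%% of desert area" % (100 * desert_share(639000)))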

Attached to Roofs and Other Urban Surfaces

Another place for solar panels which avoids conflict with other uses is the space on roofs and various other surfaces in close proximity to electricity consumers, such as walls and unused land beside freeways and train tracks. Because they are close by, there is not the cost or energy loss from long distance transmission.

The extent of this resource will vary from one region to another depending on the level of insolation.[393] Northern and central Europe, Russia and China fare the worst, with insolation ranging from 700 to 1,400 kWh/m2. The best placed are the West and mid-West of the US, Australia, the west coast of South America, most of Africa, the Middle East and South Asia. Here insolation is 1,900 kWh/m2 or above.

Even in a country such as Holland with low insolation and fairly high population density, PV cells on residential roofs could provide a significant proportion of household electricity needs. According to a study commissioned by Greenpeace, there are 20 m2 of residential roof space per person in that country; with an annual insolation level of around 800 kWh/m2 producing 80 kWh of electricity per m2, this would supply 1,600 kWh per person.[394] That would provide 23 per cent of current consumption given the country’s population of 16.27 million and total consumption of 112.67 TWh.[395] Coincidentally, with 23 per cent of electricity in Holland going to residential use,[396] this would be equal to current household consumption. If the share of electricity going to residential consumption were the same as the rich country average (31 per cent),[397] and the level of consumption were the same as the rich country average of 9,710 kWh per head,[398] the proportions would be quite a lot less – 16 and 53 per cent rather than 23 and 100 per cent.

For areas with the Dutch level of housing density, annual production from residential roofs would suffice for average rich country domestic consumption at an insolation level of 1,505 kWh/m2.[399] That covers the sunnier regions of the world.

Other urban surface areas can also be employed. The Greenpeace study claims that in Holland non-residential roofs cover 96 km2 or 30 per cent of the area of residential ones. To that we can add building walls and land adjacent to airports and running alongside freeways, highways and train tracks. If we conservatively assume that these other surface areas as a whole are 50 per cent of residential roofs, this gives a total area of 30 square meters per person. This would provide 35 per cent of Holland’s present total consumption and 25 per cent if consumed at the average rich country level.
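
The Dutch rooftop arithmetic in the last few paragraphs can be checked in the same way, using only the numbers quoted above:

    # Checking the Dutch rooftop arithmetic from the preceding paragraphs.
    POPULATION = 16.27e6
    TOTAL_TWH = 112.67                 # current Dutch electricity consumption
    RICH_KWH_PER_HEAD = 9710.0         # rich country average consumption per head
    YIELD_KWH_PER_M2 = 80              # from ~800 kWh/m2 insolation at 10% efficiency

    dutch_kwh_per_head = TOTAL_TWH * 1e9 / POPULATION
    for label, m2_per_person in (("residential roofs", 20), ("all usable surfaces", 30)):
        output = m2_per_person * YIELD_KWH_PER_M2          # kWh per person per year
        print("%s: %.0f%% of Dutch consumption, %.0f%% at the rich country average"
              % (label, 100 * output / dutch_kwh_per_head, 100 * output / RICH_KWH_PER_HEAD))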

It is assumed in these calculations that any mismatch between the supply of sun and the demand for domestic electricity can be evened out by net additions or subtractions from a much larger electricity grid based on other sources of energy and that there is no need to take into account energy losses which would occur if battery storage was used to provide power at night or during cloudy periods. Also not accounted for is the option of devoting some roof space to thermal units for heating and cooling.

Other Areas

When we move beyond deserts and areas of dense human habitation, solar facilities are more likely to conflict with other uses for land, in particular the natural environment and agriculture. Nevertheless, there are still considerable areas other than deserts which are of limited value to farming and to nature. These include grasslands, savannas and semi-deserts which are on a similar scale to the desert regions and include: the Eurasian Steppes, the US prairies, the Pampas of South America (northern Argentina and Uruguay), the vast sheep and cattle runs, and semi-deserts of Australia and the arid areas south of the Sahara Desert.

Wind

Unlike direct solar, wind energy is already being harnessed on a large scale. In 2005, global wind generating capacity was over 51,000 MW and this generated around 100 TWh per annum.[400] This is more than a tripling over the last five years. However, it is still well under 1 per cent of total power generation. The only country that has so far placed significant reliance on wind is Denmark, with Germany being the biggest producer of wind power in absolute terms, while most of the remaining capacity is found in the US, Spain and India.

Wind turbines are typically equipped with three-bladed rotors which are anywhere up to 100 meters in diameter. The rotors are turned in the direction of the strongest wind with the aid of an onboard computer and drive a generator with a rated capacity of between 600 kWe and 2 MWe, mounted on a tower that is generally between 40 and 100 meters high.[401]

There have been two studies which estimate the on-land resource.[402] Both give a total resource base of around 500,000 TWh/year from regions with wind speeds of more than 5 meters per second. The study by the World Energy Council (WEC) assumes that this resource is found in 27 per cent of the ice-free land area (i.e., 36.2 million square kilometers). The Grubb and Meyer study estimated that 10 per cent of this area was available after allowing for accessibility and competing uses and could harvest just over 50,000 TWh, while the WEC study gave a more conservative estimate of 4 per cent and just under 20,000 TWh.

The total resource potential from these areas would increase with improvements in the technology that allow more effective capture of the available energy. Furthermore innovation would increase our ability to more effectively exploit areas with lower wind speeds.

As well as wind on land we also have wind off-shore. At least in theory this is a much larger resource. About three quarters of the earth’s surface is covered in oceans and their wind speeds are higher on average. For the moment, however, the exploitable resource is confined to the coastline and relatively shallow water. As the distance from land increases, the cost of transmitting the power back to shore increases sharply and the deeper the water the higher the construction costs. However, building on the know-how from off-shore oil and gas rigs, wind farms in the future will be able to venture into increasingly deeper water and distance from markets will become less of a concern as methods of long distance transmission and hydrogen conversion improve.

A study carried out in 1993-95 estimated an offshore wind potential in the European Union of 3,028 TWh.[403] This assumed that the wind resource can be used out to a water depth of 40 meters and up to 30 kilometers from land. It would not require all that many similar offshore areas around the world to match the on-land resource. With an ability to provide over 40,000 TWh every year, wind energy could meet a significant proportion of the electricity requirements of 9 or 10 billion people living in affluence and be an important although minor player in meeting total energy needs.

If the entire land resource identified by the WEC study were exploited you would have wind turbines dotting a combined area of 1.45 million km2. This is slightly more than the area of Germany, France, Italy and the UK combined. The actual “footprint” occupied by turbines, permanent access roads and other equipment would only be 5 per cent or less of this area, bringing the figure down to less than 72,000 km2. This is small compared with the 15 million km2 we currently make available for crops.
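
The land-area arithmetic can again be checked directly from the WEC figures quoted above:

    # Land-area arithmetic for the WEC on-land wind estimate quoted above.
    WINDY_LAND_KM2 = 36.2e6     # land with wind speeds above 5 meters per second
    USABLE_SHARE = 0.04         # share assumed exploitable in the WEC study
    FOOTPRINT_SHARE = 0.05      # turbines, roads and equipment within a wind farm
    CROPLAND_KM2 = 15e6

    wind_farm_area = WINDY_LAND_KM2 * USABLE_SHARE
    footprint = wind_farm_area * FOOTPRINT_SHARE
    print("Area dotted with turbines: %.2f million km2" % (wind_farm_area / 1e6))
    print("Actual footprint: about %.0f thousand km2, or %.1f%% of cropland"
          % (footprint / 1e3, 100 * footprint / CROPLAND_KM2))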

Wind turbines are generally not competing with other uses when set up on barren land or pasture. There may be a small drop in cattle production because of reduced grass area and a loss of amenity value if wind turbines ruin a popular piece of scenery. In the case of forest land or developed areas, there would be considerable conflict. Turbines are inefficient when located near trees and buildings because of the wind turbulence created, and clearing trees and buildings for wind farms would not generally be considered the best use of land! In the case of off-shore turbines, the main concerns are sea lanes, restricted military areas, recreational uses and spoiling the view from the beach. Generally speaking, the closer you are to markets for electricity, the more likely it is that wind power will conflict with these other uses. Consequently, wind energy will have the same transmission and storage problems as solar power, if not more.

Waves

Wave energy is a potential resource with a range of technologies at the trial stage. The energy is derived from the winds as they blow across the ocean surface. The extent to which the wind transfers energy depends on its speed and the distance over which it interacts with the water (the fetch). Once created, waves can travel thousands of kilometers with little energy loss.

Because waves continue to collect energy from the wind over a considerable distance with little dissipation, wave energy is significantly more concentrated than solar or wind energy. Waves tend to have tens of kW of energy per meter of crest compared with 100s of watts per square meter facing a solar panel or wind turbine – two orders of magnitude greater.[404]

Areas with the greatest wave strength are the coasts of large ocean basins, including western US, Europe and Australia, and the southern oceans above Antarctica. The power in the wave fronts in these areas generally varies between 30 and 70 kW/m, but with some areas averaging around 100 kW/m.[405] For these more favorable areas the World Energy Council estimates the resource to be in excess of 2 TW,[406] while a preliminary evaluation for a review of wave energy published in 1999 indicated a resource of more than 1 TW.[407] The same review estimated that this resource, using the latest designs of wave energy devices, could produce over 2,000 TWh of electricity annually. At this level of output, it could only be a modest contributor to electricity production – around 12 per cent of current output and 2 per cent of the 100,000 TWh required to give 10 billion people the level currently consumed in rich countries.
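
To put the 2,000 TWh figure in context, the sketch below assumes current world electricity output of roughly 16,700 TWh, which is consistent with the six fold figure used earlier in the chapter:

    # Wave energy's potential share of electricity output.
    WAVE_TWH = 2000.0           # estimated annual output from wave devices
    CURRENT_TWH = 16700         # assumed current world electricity output
    FUTURE_TWH = 100000         # rich country consumption for 10 billion people

    print("Share of current output: %.0f%%" % (100 * WAVE_TWH / CURRENT_TWH))
    print("Share of future output: %.0f%%" % (100 * WAVE_TWH / FUTURE_TWH))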

In recent years there have been important developments in wave technology, particularly with respect to devices that can be used further off-shore in deeper waters, before the waves are dissipated by hitting the rising seabed and the contrary winds from the landmass. As well as producing electricity, these wave energy converters can also be used to desalinate seawater through reverse osmosis. This is a technology discussed in the previous chapter under desalination.

As with off-shore wind turbines, wave technologies are benefiting from many of the advances in technology and know-how achieved by the offshore oil and gas industry. This is particularly the case with respect to floating mooring systems and sub sea flexible power cables and connectors, pumps and motors.[408] Modern materials and computer technology are also assisting in the development of designs that can react to the changes in sea conditions, and resist the stresses of the marine environment.[409] Advances in remote monitoring should also help.

A range of devices have been developed over the years. However, they are far less mature than wind or solar technology, have generally not gone beyond the trial stage and are less than full scale. The technology has to contend with a very corrosive environment and occasional extreme wave conditions that impose huge strains on the equipment.

Most of the devices currently being developed are small units which would be deployed in large arrays. One of the more promising devices is the Pelamis. Named after a sea-snake, although whale-like in size, this device is a series of cylindrical segments connected by hinged joints. As waves run down the length of the device and actuate the joints, hydraulic cylinders incorporated in the joints pump oil to drive hydraulic motors which drive electrical generators to produce electricity. Power from all the joints is fed down a single umbilical cable to a junction on the seabed. A number of devices can be connected together and linked to shore through a single seabed cable. A full scale prototype Pelamis has recently undergone extensive sea trials in the North Sea and an order has been placed for three of these units which will be located off the north coast of Portugal. The 8 million euro project will have a capacity of 2.25 MW, and is expected to meet the average electricity demand of more than 1,500 households. Subject to the satisfactory performance of the first stage, an order for a further 30 machines with a capacity of 20 MW is anticipated.[410]
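
As an illustration of what such a rating implies, the sketch below converts the 2.25 MW capacity into annual output and output per household under an assumed capacity factor; the 25 to 35 per cent range is my assumption, not a figure from the project:

    # Rough annual output implied by the 2.25 MW Pelamis installation.
    CAPACITY_MW = 2.25
    HOUSEHOLDS = 1500
    HOURS_PER_YEAR = 8760

    for capacity_factor in (0.25, 0.35):       # assumed range, not a project figure
        output_mwh = CAPACITY_MW * HOURS_PER_YEAR * capacity_factor
        print("Capacity factor %.0f%%: %.0f MWh a year, %.1f MWh per household"
              % (100 * capacity_factor, output_mwh, output_mwh / HOUSEHOLDS))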

As well as being more concentrated than wind and solar, wave energy also has the advantage of being less variable on an hourly or daily basis and any variability can be forecasted over the time-scales required in the electricity marketplace. As with wind, waves are generally a lot stronger in the winter months. Monthly average energy levels in winter can be three to five times greater than monthly averages in summer. Where peak demand is dominated by winter heating and lighting loads (northern Europe, for example), wave energy has a good seasonal load match.

Hydroelectric Power

In 2004 hydro produced 2,808 TWh of electricity.[411] This was 16 per cent of the total electricity supply and 2.2 per cent of total primary energy. The full potential of the resource has been estimated at 8,100 TWh per year.[412] This means there is room for significant expansion. However, even at the maximum it would only provide 8 per cent of the electricity required to give 10 billion people the present rich country levels of consumption. This means a declining role for hydro in the long term and the creation of a gap that will need to be filled by other resources.

Biomass

Biomass provides around 10 per cent of our energy, with most of it being consumed in poorer countries, often on an unsustainable basis.[413] Types of plant biomass include: perennial crops such as trees, bushes and shrubs; annual crops such as sugarcane, cereal straw and grass; agricultural and forestry residues; and urban waste. This can either be burnt for heating or electricity, or converted into ethanol and used as liquid fuel.

The biomass potential from recoverable and unwanted agricultural and forestry residues has been estimated at around 40 EJ per year, while energy from urban refuse may well be around 6 EJ by 2025.

The crops giving the best annual energy yields are trees and sugar cane. For trees in North America and Europe it is over 200 gigajoules per hectare per year. For trees in the tropics with genetic improvement and fertilizer use they range from 100 to 550 gigajoules, with the top end being achieved where water is plentiful. For sugar cane the range is 400 to 500 gigajoules.

If the average figure is 250 gigajoules per hectare, production of the current total commercial primary energy output of around 470 EJ would require 18.8 million km2. This is larger than the present area of cropland and about half the area of permanent pasture. Even with twice the yield we would still require a dauntingly large area which would compete in many cases with other uses.
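
The area calculation is a one-liner, using the yield and energy figures quoted above:

    # Land needed to grow today's commercial primary energy as biomass.
    PRIMARY_ENERGY_EJ = 470
    YIELD_GJ_PER_HA = 250

    hectares = PRIMARY_ENERGY_EJ * 1e9 / YIELD_GJ_PER_HA   # 1 EJ = 1e9 GJ
    km2 = hectares / 100                                   # 100 hectares per km2
    print("Area required: %.1f million km2" % (km2 / 1e6))
    print("At double the yield: %.1f million km2" % (km2 / 2e6))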

A more realistic prospect is biomass being produced on a few million square kilometers at most. Some of this could be in rotation or in tandem with crop growing and grazing where it would play a soil management role. The rest would be in some of the 42 million km2 of forests and woodlands where it would have to compete with timber production and conservation objectives. A few decades from now this area could produce 10 to 20 per cent of our energy needs. However, as energy consumption increases as the century proceeds, biomass’s share would decline accordingly.

Other Possible Resources

There are two other solar based resources which may become significant in the future, although at this stage the technology is experimental. These are the energy from ocean currents and from heat stored in the ocean.

Surface currents are driven by wind while deep ocean ones are driven by density and temperature gradients. A number of technologies are being examined, including arrays looking very much like wind turbines except that they are underwater. Unlike wind, an ocean current is fairly constant and, although slow moving, its much higher energy density ensures a larger resource from a given area.

Ocean thermal energy conversion systems capture some of the solar energy which is transferred to the oceans every day. They do this by exploiting the temperature difference between seawater at different depths. Cold water is pumped from the ocean depths to the surface and energy extracted from the flow of heat between the cold water and warm surface water. It is suitable for electricity generation, desalination or a combination of both. Deep equatorial waters are the best because they have the greatest temperature extremes.

Summing up on Solar

While the resource is large, its position as a potential major or dominant supplier of energy still depends on technological improvements in a number of areas. PV cells, wind turbines and wave generators will have to continue becoming cheaper and capturing more of the energy. The energy loss in long distance power transmission will have to decline so that sun, wind and wave some distance from human habitation or activity can still supply electricity. We will have to improve our ability to use the energy from solar resources to split water and produce hydrogen and at the same time improve our ability to transport, distribute and store this gas. This can then be used at any place or time to power vehicles or generate electricity.

Given the need to interfere with vast areas of land, it is hard to imagine that any squeaky clean image that sun and wind hold when a hundred or so TWhs are being produced will remain untarnished when production is in the thousands of TWhs.

Nuclear Power without the Phobia

Nuclear power presently generates about 16 per cent of the world’s electricity, which constitutes about 6.5 per cent of commercially produced primary energy.[414] All the major developed countries except Italy[415] rely to a significant extent on nuclear power, ranging from 79 per cent in the case of France to around 20 per cent in the case of Japan, UK and US. It is also important in some of the former Soviet bloc countries. For example, the Ukraine receives 49 per cent and Russia 16 per cent.[416] India and China also have some nuclear power.

The industry has its origins in the military programs of the USA and USSR in the 1940s and 50s which produced nuclear weapons and reactors to power naval ships and submarines. The technology is based on the fission process, which produces energy by splitting atoms. The fuel for the process is provided by uranium which is “enriched” to increase the proportion of the fissile isotope uranium235.[417]

Presently there are 441 nuclear reactors generating electricity in 31 countries.[418] These come in a number of varieties which are mainly distinguished by their system of transferring heat from the reactor to the power generator. All the reactors in the US and about 90 per cent worldwide are so-called light water reactors.[419] Of these about two thirds use pressurized water while the rest use boiling water. Virtually all the remaining reactors are either a Soviet design using graphite or a Canadian one using heavy water.

After taking off in the 1960s and 1970s, the industry then sank into a malaise. This can be attributed both to the increasing competitiveness of coal and gas power and to the emergence of a very unfavorable political climate marked by considerable public opposition and a switch in government policy from active encouragement to definite discouragement, including in some countries a decision to phase out the industry. This change of attitude received major boosts from the accidents at Three Mile Island in 1979 and Chernobyl in 1986 which highlighted the risks from radiation.

The industry is not entirely moribund. Improved methods have enabled existing plants to increase their total output and they are generally getting extensions to their licenses beyond their originally expected lifespan. There are presently 27 new plants under construction, including 8 in India, 5 in China and 4 in Russia, while another 38 are planned.[420] A number of countries in Europe are dragging their feet on phase-out plans, particularly in the context of reducing greenhouse gas emissions. The US administration is pursuing plans to encourage new construction during the second decade of the century, a policy that has bi-partisan support. Nevertheless, for the industry to maintain or improve its relative position it would need to undergo a major resurgence.

Nuclear power’s competitive position may well improve in the future. Increases in fossil fuel prices could have a considerable impact on competitiveness given that fuel constitutes over half the life-cycle cost of a fossil powered plant. In contrast, prices of nuclear fuel have far less of an effect. While the doubling of uranium ore prices would increase nuclear generating cost by only 5 per cent, the doubling of natural gas prices would increase gas-fired generating costs by some 80 per cent.[421]
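
The asymmetry follows directly from the share of fuel in total generating costs: a doubling of the fuel price raises costs by roughly that share. The sketch below illustrates the relationship, with fuel shares of about 5 per cent for nuclear and 80 per cent for gas-fired plants treated as approximations implied by the figures quoted, not data from the cited source:

    # Effect of a fuel price rise on generating costs:
    #   cost increase = fuel share of total cost x (price ratio - 1)
    def cost_increase(fuel_share, price_ratio):
        return fuel_share * (price_ratio - 1)

    # Approximate fuel shares implied by the figures quoted above.
    for label, fuel_share in (("nuclear (uranium ore)", 0.05), ("gas-fired plant", 0.80)):
        print("%s: doubling the fuel price raises costs by about %.0f%%"
              % (label, 100 * cost_increase(fuel_share, 2.0)))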

Nuclear power would benefit greatly from anything that would reduce construction or capital costs. These typically account for 60 to 75 per cent of total generation costs, compared with 50 per cent for coal plants and 25 per cent or less for gas-fired ones.[422] There are a number of factors that could lead to a reduction in these costs. These include standardized large scale production, new plant designs and more rational safety regulation.

If nuclear power plants were built on a large scale various economies could come into play. First there are the economies that come with experience. Once a few plants have been built and commissioned, the experience gained will reduce the costs of future units.[423] Then there are the economies associated with specialized plant and machinery. Producing a large number of any product or component generally allows the investment in specialized production methods that would be too expensive at low production levels but would reduce costs at a larger scale of output. For example, you would not build a production line to produce a few hundred cars. It would be cheaper to make them ‘by hand’, i.e., with non-specialized machine tools. It is only when the number reaches a critical level that building a large specialized plant becomes cheaper, seriously cheaper. There are also a range of overhead costs, such as design and administration, that can be spread over a large number of units.

Standardization would also reduce the long delays due to the approval process that in the past have doubled the construction time and greatly increased the interest burden. According to new legislation being adopted in the US and elsewhere, once a standardized design has been certified as safe, all plants built to that design would automatically receive approval. Such prior approval could also be harmonized internationally in much the same way as in the aircraft industry. A power company would then only need to receive approval for the chosen site. However, even this may be unnecessary where the unit is to be built on an existing power plant site. Many sites have room for more reactors.

The new designs being considered for future reactors include various features that could possibly make them cheaper to produce. Many new generation nuclear plants including the Westinghouse AP600 and ‘pebble bed’ would operate on ‘passive’ safety features which rely on natural forces such as gravity, convection, natural circulation, evaporation and condensation.[424] In the case of the AP600, this would mean 35 per cent fewer pumps, 50 per cent fewer valves, 70 per cent less cabling, and 80 per cent less ducting and piping than conventional LWR systems.[425] According to the developers of the pebble bed reactor, their design has no need for an expensive containment shell to prevent the escape of radiation in the case of an accident.[426] These and other new designs are also considered to be more suited to factory production and assembly of modules on site than the old generation of plants.[427]

The competitive position of the industry will be most favorable where transport infrastructure is inadequate or distances from fuel sources considerable because nuclear fuel is a fraction of the weight and volume of fossil fuel. This could tip the balance, for example, in India, northern China and western Russia.[428]

Nuclear power may benefit from a move towards a hydrogen economy. Electricity from existing nuclear power plants can be used for the electrolysis of water. Nuclear reactors could also provide the heat for the steam reforming of natural gas, the method that currently produces 95 per cent of hydrogen. Natural gas reacts with water at high temperature to form hydrogen and carbon dioxide. However, this would require a new generation of reactors that have a far higher coolant outlet temperature. An even higher temperature would be required for thermo-chemical water splitting which converts water into hydrogen and oxygen. While this technology is not yet commercially available, a number of steps are being taken in that direction. A pilot project is being planned in Japan. The Americans and the French are also doing development work.[429] Some breeder reactors would achieve temperatures suitable for these processes, as would the pebble bed reactors currently at the trial stage.

The resurgence of nuclear power would require an improvement in the political climate. If nuclear power begins to make economic sense where it did not before, this could undercut opposition and strengthen support. The industry could also benefit from the fact that it does not emit greenhouse gases. This would depend on the extent to which global warming fears cancel out radiation fears, and on competition from other technologies with the same emission credentials, such as solar and wind.

For nuclear power to continue playing an important role in the second half of the century, there will need to be a large construction program. Just to maintain current output would require the replacement of existing capacity in coming decades. To maintain the current 16 per cent share in the face of the six fold increase in electricity generation that would be required to bring 10 billion people up to current per capita consumption levels of rich countries, there would need to be 2,646 reactors (441 x 6), assuming no change in average output. To produce all of this electricity there would need to be 16,537 of them (2,646/0.16). This is one for every 605,000 people, a somewhat higher density than present-day France where there is one for every million people.
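
The reactor counts follow from the figures in the paragraph (441 existing reactors, a six fold increase in electricity demand and nuclear’s current 16 per cent share):

    # Reactor numbers implied by a six fold increase in electricity demand,
    # assuming no change in average output per reactor.
    CURRENT_REACTORS = 441
    DEMAND_MULTIPLIER = 6       # 100,000 TWh against the 2004 level
    NUCLEAR_SHARE = 0.16
    POPULATION = 10e9

    same_share = CURRENT_REACTORS * DEMAND_MULTIPLIER
    all_electricity = same_share / NUCLEAR_SHARE
    print("To keep a 16%% share: %d reactors" % same_share)
    print("To supply all electricity: %d reactors, one per about %.0f thousand people"
          % (all_electricity, POPULATION / all_electricity / 1000))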

Nuclear power could conceivably meet all the energy requirements of this population at the current OECD average, through the production of electricity, heat and hydrogen. To provide 10 billion people with the same annual per capita primary energy as current average rich-country levels, we would need to produce 2,300 EJ (55,000 mtoe), a five fold increase. This would require 35,243 reactors, or seven for every two million people.[430]

Resources

The current estimate of conventional resources of uranium is 14.4 million tonnes[431] or over 200 years’ supply at today’s rate of usage of around 65,500 tonnes per year.[432] A third of this is described as known conventional resources and would last almost 70 years at current usage rates, while the remaining two thirds are undiscovered conventional resources, based mainly on estimates of uranium that is thought to exist in geologically favorable, yet unexplored areas.[433] This figure is bound to considerably underestimate the ultimate resource. Investment in exploration has been quite low[434] and a number of countries with significant resource potential in sparsely explored areas, such as Australia, have not compiled figures for undiscovered conventional resources.[435] Furthermore, according to Garwin this resource could be stretched by 25 per cent if more costly extraction methods were adopted that leave less of the uranium in the mining waste (tails).[436]

Thorium is another potential nuclear fuel, although currently not used. It is about three times more abundant in nature than uranium.[437] Furthermore, all of the mined thorium is potentially useable, compared with the 0.7 per cent of natural uranium used in existing reactors, so some 40 times the amount of energy per unit mass might be available.[438] The known resource is around 4.5 million tonnes.[439] However, this is bound to be the tip of the iceberg given the limited extent of exploration and the fact that it does not include data from China, central and eastern Europe, and the former Soviet Union. Thorium processing and reactor technology still needs a lot of development before it could become commercialized. India, which has more thorium than uranium, is at the forefront of research in this area.

There are also unconventional uranium resources to consider. These include about 22 million tonnes in phosphate deposits.[440] The recovery technology is mature and has been utilized in the past; however, costs are somewhat higher than the present uranium price.[441] Then there are the 4 billion or so tonnes contained in seawater which could possibly become a resource.[442] A number of trials have been performed to extract uranium and other valuable minerals from seawater. They use a special absorbent material and the cost at this stage is estimated to be around $300 per kilogram.[443] (At the time of writing the uranium spot price was $122 per kilogram.) Another plausible method is to take advantage of the fact that life forms have the habit of taking up certain elements that are scarce in the nonliving world and concentrating them within their cells. For example, some sea animals accumulate elements like vanadium and iodine to concentrations a thousand or more times as great as in the surrounding sea water. It has been proposed that certain forms of algae could be cultivated to perform this trick with uranium.[444] No doubt seawater extraction would benefit greatly from a few decades of research and development.

When assessing the extent of nuclear fuel resources, it is important to keep in mind the possible adoption of so-called fast breeder reactors which extract around 60 times more energy from each kilogram of uranium. Conventional thermal reactors can only use uranium235 which makes up less than one per cent of natural uranium. However, fast reactors can harness most of the uranium, which takes the form of uranium238. They can also make very effective use of thorium.

There was considerable interest in this technology during the early years of nuclear power when it was thought that uranium would turn out to be scarcer and the industry a lot larger than proved to be the case. Around 20 plants were built in various countries including the US, France, the Soviet Union and Japan.[445] Most of them were eventually closed due to high costs, teething problems including safety issues and declining support for the industry. However, there are now signs of renewed interest. India, China and Russia have reactors planned. Also, the Generation IV International Forum, representing governments from many of the nuclear power countries including the US, UK, Japan and France, selected a number of fast breeder designs to be among the six systems that will be the focus of collaborative research and development. The objective is to make advances over existing systems in areas such as economy, safety, proliferation resistance and protection from attack, and to have a number of systems available to be deployed by 2030.

So, to what extent could we rely on nuclear power? The current estimated resource of 14.4 million tonnes would only provide about 5 per cent[446] of 21st century energy production, assuming 2 per cent annual growth in energy use and no increase in the energy obtained from each tonne. Furthermore, it would be used up by 2090 if the current share of 6.5 per cent were maintained, or not much later than mid century if in a few decades’ time capacity were pushed out to a 20 per cent share.
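
A rough sketch of this resource-lifetime arithmetic is given below; the 2004 starting year is an assumption for illustration:

    import math

    # Rough lifetime arithmetic for the conventional uranium resource,
    # taking 2004 as the starting year (an assumption for illustration).
    RESOURCE_TONNES = 14.4e6
    ANNUAL_USE_TONNES = 65500
    NUCLEAR_SHARE = 0.065       # of primary energy
    GROWTH = 1.02               # assumed annual growth in world energy use

    years_at_current_use = RESOURCE_TONNES / ANNUAL_USE_TONNES
    print("Supply at today's usage: %.0f years" % years_at_current_use)

    # Share of 21st century energy the resource could supply, with energy use
    # growing 2 per cent a year and no change in energy extracted per tonne.
    century_factor = (GROWTH ** 100 - 1) / (GROWTH - 1)   # century energy, in multiples of current annual use
    share = years_at_current_use * NUCLEAR_SHARE / century_factor
    print("Share of 21st century energy: %.1f%%" % (100 * share))

    # Year of exhaustion if nuclear keeps its 6.5 per cent share (and so also grows at 2 per cent).
    years_to_exhaustion = math.log(1 + (GROWTH - 1) * years_at_current_use) / math.log(GROWTH)
    print("Resource exhausted around %d" % (2004 + years_to_exhaustion))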

However, it does not seem too wildly optimistic to envisage nuclear power being able to provide larger shares of this century’s energy. Given more exploration and better extraction technologies the recoverable conventional resource could be considerably bigger than present estimates. Moderate increases in the energy harnessed from each tonne of uranium could also make a difference. Of course, the larger the share contemplated the more it would have to rely on the development of new technologies such as breeder or thorium reactors or the extraction of uranium from sea water. With such innovations the resource could become huge and a major provider of energy later this century or in the next.

The Safety of Nuclear Power

Nuclear power is very much under a cloud because of distinctive safety issues relating to its fuel. It is highly radioactive and some of it can be used to make nuclear bombs. This prompts a number of fears: power plants emit small levels of radiation under normal operations and there is always the possibility of a major accident that releases large amounts of highly radioactive material into the environment, as happened at Chernobyl in 1986; spent fuel may leak from its disposal site into the environment at some time in the future; and nuclear fuel may be diverted to terrorists for bomb making. The radiation concern is examined first and then the threat of nuclear terrorism, where the principal hazard is the explosion rather than the radiation.

Radiation associated with nuclear power consists of subatomic particles that shoot through space at very high speeds. It is called ionizing radiation because it can penetrate our body, damaging cells in the process. In this way it is different from harmless forms of radiation such as radio waves.

Nuclear power reactors are not the only source of ionizing radiation. To begin with there is natural radiation to which humans always have been and always will be exposed. In the United States, people on average receive an annual dose of 300 millirems.[447] This includes radiation from radioactive elements in rocks and soil, from within our own bodies and from outer space.

Radioactive elements in rocks and soil are principally potassium, uranium, and thorium.[448] As well as naturally emerging from the ground, these can be released by human activities such as burning coal, oil, gas and wood, and by mining, plowing, construction and well-drilling.[449] It means that brick, stone, and other building materials are slightly radioactive. Some types of building materials contain more radioactivity than others. For example, it has been claimed that Grand Central Station in New York City, which is a massive granite structure, provides commuters with a level of radiation exposure well in excess of what they would receive from visiting a nuclear reactor.[450] The radioactive material residing in our own body is from naturally occurring substances such as potassium 40 which are vital to our survival. These irradiate our organs including bone marrow, testicles and ovaries. We even irradiate each other at close quarters. This radiation from our bodies delivers an exposure close to a third of what we receive from rocks and soil.[451] From outer space we receive cosmic radiation. Most of it is absorbed by the atmosphere, so we receive a higher than average dose by living at higher altitudes or by flying, mountain climbing or skiing.

As well as natural radiation, another big source of exposure is from medical radiology. This includes x-rays and a whole host of other diagnostic tools. In the US this source accounts for 35 per cent of all radiation exposure and 90 per cent of the total man-made dose.[452] Other sources include TV sets, smoke detectors and airport X-ray machines.

Radiation exposure from all these various sources is fairly low. However, it is still far higher than routine emissions from the nuclear power industry. According to the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), estimated doses from nuclear facilities account for less than 0.05 per cent of the total dose from natural and medical sources.[453]

Even for those living near a reactor the exposure is a tiny fraction of what they receive from other sources. According to Cohen, it is comparable to what a typical viewer receives from a television picture tube.[454]

Radiation prompts two health concerns. At very high doses, of the kind that only the nuclear industry (civil or military) can deliver, it can cause radiation sickness, which burns the skin and damages the central nervous system, internal organs and bone marrow. This damage allows rampant infection. Whether victims die or survive depends on the dosage, their health and age, and the quality of medical treatment.

The other health effect is an increased risk of cancer some time in the future, with the risk depending on the dose. In most cases the latency is 20 years or more. The exceptions are some childhood cancers and leukemia that may occur 3-5 years after exposure.[455]

There had in the past been concerns that radiation exposure could have genetic effects that would be passed on to future generations. However, the available evidence suggests that this is not the case. Research has shown that radiation can cause genetic mutations in plants and test animals, including fruit flies.[456] However, such effects have not been detected in people. Studies of the children of Hiroshima and Nagasaki atom bomb survivors show no excess of genetic defects.[457] Nor is there any increased incidence in areas of high natural radiation.[458] Radiation is presumably a weak mutagen for humans, just one of thousands of known mutagens in the environment which, combined, result in about 10 per cent of all new-born children showing some evidence of genetic defects.[459] There is no evidence that radiation causes any other illness, and this is in keeping with our knowledge of radiation and of the causes of illness generally.[460]

The increased likelihood of contracting cancer depends on the level of radiation. The risk is known with a fair degree of certainty at higher levels of exposure. However, at the lower end there is far less certainty and quite a lot of controversy.

Given the extremely large number of people who contract cancer, it is difficult to determine statistically with epidemiological studies the extent to which radiation could be a contributor. In developed countries about half the population will get cancer from one cause or another and about half of those will die from it. This means that thousands of extra cancer cases in a particular population would produce quite a small increase in the rate, and would be difficult to attribute to radiation rather than to random variation in cancer rates or to other factors that make this population different from others.

By the same token, it is difficult to draw conclusions from unusually high or low cancer rates for small groups that have been exposed to higher than average levels of radiation. Small groups can be atypical for a range of reasons that are difficult to take into account.

At this stage our knowledge of how radiation does its nasty work is far too inadequate for us to assess its effect from first principles. More research has to be done into the nature of radiation induced cell damage and how it causes cancer.

In the past there was a general acceptance of what is referred to as the linear no threshold hypothesis (LNTH). This is based on a linear extrapolation from cases where high doses have been experienced and the risk level is known with some degree of certainty. Most of the information is provided by studies of cancer incidence among those exposed to radiation from the atom bombs dropped on Hiroshima and Nagasaki. These survivors were exposed to an instantaneous dose of hundreds of rems plus subsequent longer-term exposure from fallout. Information has also been gained from the medical records of people subjected to heavy doses of X-rays as a treatment for spinal diseases, a misconceived practice that ceased in the early 1950s.

Based on these studies, scientists have estimated the cancer death risk from a radiation exposure of 100 rems to be 5 per cent.[461] According to the LNTH, this can be extrapolated to much lower doses. So if a 100 rem exposure gives you a 5 per cent risk of developing a fatal cancer, a one rem (1,000 millirem) exposure will give you a 0.05 per cent risk. In other words, halve the dose, halve the risk; double the dose, double the risk.

The LNTH also allows us to talk in terms of collective doses or ‘person-rems’. For every 2,000 person-rems, there is one death. This can be made up of any number of combinations of dose and population. For example, one person receiving 2,000 rems, 2,000 people each receiving one rem, or 2 million people each receiving one millirem would all lead to one death. This proportionality does not apply to most things we are exposed to, where there is generally a threshold below which exposure is harmless. For example, 30 sleeping pills taken at once may be enough to kill an individual, but that does not mean that someone who takes one tablet has a one in thirty chance of dying, or that if 30 people each take one, one of them will die.

The following examples should give a good idea of the kinds of risks implied by the LNTH. The background exposure of 300 millirems per year received by the US population of 300 million people would result in 45,000 deaths per year. If we assume the same level of exposure for a world population of 6.5 billion people, this would result in 975,000 cancer deaths per year. As part of their background exposure, the average American receives about 31 millirems of radiation per year from cosmic rays.[462] This would kill about 4,650 Americans per year. The level of radiation from naturally occurring substances in our bodies is about 39 millirems per year. This translates into about 125,000 deaths annually worldwide.
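
To make the arithmetic behind these figures explicit, here is a minimal sketch, assuming only the LNTH rule quoted above of one death per 2,000 person-rems:

```python
# A minimal sketch of the LNTH arithmetic quoted above: one cancer death per
# 2,000 person-rems, with doses given in millirems (1 rem = 1,000 millirems).
def lnth_deaths(dose_mrem, population):
    person_rems = dose_mrem / 1000 * population
    return person_rems / 2000

print(lnth_deaths(300, 300e6))   # US background dose: 45,000 deaths per year
print(lnth_deaths(300, 6.5e9))   # same dose worldwide: 975,000 deaths per year
print(lnth_deaths(31, 300e6))    # cosmic rays in the US: about 4,650 deaths per year
print(lnth_deaths(39, 6.5e9))    # our own bodies: about 127,000 per year (rounded to 125,000 above)
```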

It did not take long for a general dissatisfaction with the LNTH to emerge. Most scientists think it overstates the risk. In other words, a dose that is, say, 50 per cent lower than another will carry a more than 50 per cent lower cancer risk. Furthermore, below some level the cancer hazard is zero, or so low that it is effectively zero. The general view among scientists is that there is a lack of conclusive evidence of low level radiation effects below total annual exposures of about 5 to 10 rems.[463]

It has been suggested that a threshold exists because, up to a certain level, our body has a capacity to repair a whole range of different kinds of damage. It is only when the attack reaches a certain intensity that the repair systems start to be overwhelmed, and the body is increasingly degraded as the dose rises.

On the other hand, a small number of researchers believe that the LNTH understates the risk from low level radiation. They support the supra-linear hypothesis: that more damage is caused per rem at low doses than at high doses. They theorize that perhaps low doses weaken and damage cells (which live on to damage other healthy cells), whereas high doses simply kill cells.[464]

While we need to keep in mind the problems with epidemiological studies, proponents of the prevailing view have a large amount of evidence which at least on the face of it supports their position. This includes the experience of Chernobyl, plant workers, medical patients, Japanese atomic bomb survivors who received relatively light exposure and the effect of differences in natural radiation levels.

In the case of survivors of the Hiroshima and Nagasaki bombings, those who received instantaneous radiation doses of less than 20 rems have not suffered increased cancer rates.[465]

A UN study 14 years after the Chernobyl accident concluded that, up to then, there had been no increase in deaths from leukemia, even among recovery workers who received fairly high doses of radiation, despite its latency period of only 5-10 years after radiation exposure.[466]

Extensive studies by radiation protection bodies have been unable to detect any sign that workers dealing with radioactive material have cancer mortality rates higher than those for the general population.[467] A study of over 20,000 men who took part in the UK atomic bomb tests in Australia and the Pacific in the 1950s showed no detectable effect on their life expectancy or on the incidence of cancer or other fatal diseases.[468] A study of 30,000 persons exposed to radiation while working with nuclear ship propulsion systems found that their mortality rate was lower than that of another 30,000 persons in a control group who received a more normal amount of radiation per year.[469]

There is no sign that regions with higher levels of natural radiation have higher cancer rates. Such regions are at higher altitudes and more exposed to cosmic rays, and/or have a higher than normal uranium content in their soil. The cancer death rate in seven western states of the US is 15 per cent lower than in the rest of the continental US even though the level of radiation is almost twice as high.[470] In some parts of India and Brazil the natural background is over ten times the world average, due to the presence of radioactive rocks, but the population shows no signs of being affected.[471]

It is even possible that small radiation doses are beneficial. An explanation offered for this is that low-level radiation stimulates the body’s repair mechanisms.[472] Experiments indicate that the irradiation of mice by gamma rays increases their survival time by one week per rem, and that the irradiation of salmon eggs increased the number of viable eggs and the rate of return of the adult fish to their birthplace to breed.[473] There are also statistical studies of human exposure that support this proposition. Twenty years ago in Taiwan, recycled steel accidentally contaminated with radioactive cobalt-60 was used in the construction of more than 180 buildings, which were occupied by about 10,000 people for between 9 and 20 years.[474] With seven cancer deaths, the cancer mortality rate for this population was 3.5 per 100,000 person-years, compared with a rate in the general population of Taiwan over these 20 years of 116 per 100,000 person-years. Assuming that the people concerned were fairly typical of the population at large in terms of factors such as income and age – and this needs to be confirmed – their experience seems to suggest that long-term exposure to radiation at a dose rate of the order of 5 rem per year, more than 10 times what people presently receive, greatly reduces cancer mortality.[475]
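
As a back-of-envelope check on the quoted mortality rate, assuming (as the study description implies) roughly 10,000 occupants exposed for about 20 years:

```python
# A back-of-envelope check of the Taiwan figures, assuming about 10,000
# occupants exposed for roughly 20 years (the upper end of the 9-20 range).
deaths = 7
person_years = 10_000 * 20
rate_per_100k = deaths / person_years * 100_000
print(rate_per_100k)   # about 3.5 per 100,000 person-years, versus 116 in the general population
```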

Plant Safety

Normal Operations

During the normal operation of a nuclear power plant, gaseous and liquid discharges containing very low amounts of radioactive material are released into the environment. The extent of these emissions (both absolutely and per unit of generated electricity) has been reduced considerably since the early days of the industry by the use of improved technology, and this is continuing.[476]

Government regulations place limits on these emissions which keep exposure to a minute fraction of the natural background radiation levels that people already experience. In many countries permissible levels are set so that a hypothetical person who stood at the boundary fence, drank the plant’s cooling water and ate food grown nearby would not receive more than some minute increase in their normal exposure level.[477]

In reality, of course, no one experiences even that minute increase. No one lives at the reactor fence, and even if they did they would not experience the maximum exposure because most plants keep their emissions well below this level.[478] The average lifetime exposure for people living in nuclear power regions such as North America and Europe is next to nothing, and less than the increased natural radiation exposure from a long plane trip.[479] On an annual basis, people living near a nuclear power plant receive about one millirem of extra radiation exposure.[480]

Studies of populations surrounding nuclear reactors also suggest no health effects. A US survey sponsored by the National Cancer Institute studied cancer deaths in 107 counties with nuclear facilities within or adjacent to their boundaries.[481] Each county was compared to three similar ‘control counties.’ Their report, published in 1990, found ‘no evidence to suggest that the occurrence of leukemia or any other form of cancer was generally higher in the study counties than in the control counties.’ Studies in other countries generally supported the NCI’s findings, including ones in France and Canada.[482]

Reactor Accidents

A much bigger concern than routine emissions is the threat of reactor accidents that release large amounts of radioactive material into the environment and endanger the health of thousands.

Any serious accident would have to be the result of a string of mishaps. In the first instance there would have to be a breakdown in the cooling system which leads to the melting of the reactor core – a “meltdown”. This in turn would have to cause an explosion which spreads radioactive material into the environment.

Fortunately the risk of such accidents occurring is quite low. This is reflected most clearly in the truly remarkable safety record of nuclear reactors over the past half century. In the case of the water reactors used in most of the world’s nuclear power plants, there have been over 12,000 reactor years of service without an accident endangering the public. There has also been a similar amount of service by research and marine reactors with an equally unblemished record. The US navy alone has accumulated more than 5,500 reactor years of operational experience with its nuclear submarines, icebreakers and aircraft carriers.

There has been one serious accident, and that was at Chernobyl in the Ukraine in 1986 during a misconceived experiment. However, that says more about the state of the Soviet Union in its dying days than it does about the safety of nuclear reactors. The accident would not have happened but for a string of procedure violations. According to the Soviet investigators, there had been six separate contraventions of procedures by the reactor operators during the experiment. If any one of these had not been committed, the accident would not have happened.[483] There was a strong culture of disregarding safety rules, and a complacency encouraged by the fact that past accidents and mishaps were kept secret from those in the industry as well as from the public at large. This was reflected in the fact that an electrical engineer with limited knowledge of reactor operations was in charge of the “experiment,” and there was no one in the control room who understood the risks they were taking. Furthermore, the Chernobyl reactor was of a Soviet design that was far more vulnerable to criminal negligence and incompetence than the types used elsewhere. The reactor had graphite as a moderator instead of water, so that the loss of water coolant could increase the chain reaction and the resulting heat, whereas with a light water reactor a loss of coolant brings the chain reaction to a halt and limits the heat that can be reached by the reactor core. Also, the reactor did not have the massive containment structure common to most nuclear plants elsewhere in the world. According to some analysts, such a structure would have withstood the steam pressure that caused the explosion.

Generally speaking, a disaster in a reactor is a remote possibility because of a range of emergency safety features and the need for a number of unlikely and unrelated mishaps to coincide. Of particular importance are the backup arrangements for the cooling system, which prevent the reactor core from overheating. There are backup pumps and massive flywheels that keep water circulating even if the power is cut. If the main cooling system fails, emergency core-cooling systems which are independent of the primary system come into operation. In some cases this takes the form of a pressurized water tank that does not need pumps but simply dumps large amounts of water on the reactor. It is very unlikely that both the primary and emergency core-cooling systems would fail.

The rods that control the chain reaction also have a number of emergency features. Firstly, there are a number of independent clusters of control rods that can be inserted by gravity into the core to stop the reaction. Any one of the clusters would be enough to achieve this.[484] In the case of a power failure, the control rods are immediately released because they are held above the reactor only by electromagnets.[485]

What happens if there is damage to the reactor core as a result of a failure in the cooling system? For a large release to occur, radioactive fuel would first have to escape from the steel pressure vessel into the containment dome and then set off a steam or hydrogen explosion that breaches the dome encasing the reactor. This would have to be quite a powerful explosion given that the dome is made of steel-reinforced concrete about a meter thick.

Experiments suggest that it is not easy to set off a steam explosion. For example, in 1980 scientists at the Sandia Laboratory in New Mexico unsuccessfully attempted to create a large one by dropping molten uranium into water.[486] Nevertheless, the inside of the protective dome is equipped with water spray nozzles or refrigeration systems that will condense the steam and reduce the pressure.[487] Some reactors have large volumes of ice on hand.[488]

The concern about hydrogen stems from the fact that it may be released by a chemical reaction if extremely hot steam comes into contact with the fuel-casing material.[489] However, according to Cohen, the research seems to indicate that even if all the hydrogen that could possibly be generated by core damage were to explode all at once, the force would not be powerful enough to break most containments.[490] In nearly all scenarios the hydrogen would be produced gradually and ignited in a series of small explosions or fires caused by sparks from various sources such as electric motors. This is assisted in some cases by the installation of devices that constantly create sparks. Some reactors have an inert gas in the containment, thereby depriving any hydrogen of the oxygen needed for an explosion.

If the dome can hold out for a few hours from the initial release of radioactive material, a lot of it will either become stuck to the dome walls or equipment, or be removed by various systems in place for that purpose. The latter includes ventilation systems and water sprinklers for removing particles from the air.[491]

As well as the prospect of reactors exploding radioactive material into the air and surrounding landscape, there is also a concern about groundwater contamination if the fuel melts through the thick concrete floor. This has been colorfully dubbed the ‘China syndrome’. However, according to Cohen, if molten fuel were to come into contact with groundwater it would flash into steam which would build up sufficient pressure to keep the rest of the groundwater away.[492] Once the fuel cooled it would be in the form of a glassy insoluble mass. If there were a problem, measures could be taken in good time to prevent any ongoing contact.

Various studies suggest that nuclear facilities would withstand external impacts such as World Trade Center style attacks. An aircraft may look quite solid but it is actually fairly light and flimsy. Given the small size of the protective dome, the worst that could happen is a hit from one of the engines. Studies show that spent fuel storage pools are also able to withstand such attacks, and experimental evidence indicates that dry storage and transport casks would retain their integrity.[493]

Earthquakes are another concern. In the popular imagination there are visions of plants splitting in two or being swallowed up by cracks in the earth. The reality is very different. Like many modern structures, nuclear power plants in or near earthquake prone regions are built to withstand the worst expected earthquakes. They also have equipment that monitors seismic activity constantly and would be shut down in the event of an earthquake. [494]

The Three Mile Island (TMI) semi-meltdown is the only serious incident involving a water reactor. However, there was no loss of life, nor were there significant radiation emissions that could cause future health problems. Nevertheless, it was still a major mishap because the facility was completely disabled, and some argue that it could easily have turned into something a lot worse. Furthermore, it gained added significance from the fact that it was perceived to be a lot worse than it really was and contributed to a growing opposition to nuclear power.

Investigations revealed that the problems were mainly in the way the plant was operated rather than in the technology.[495] This has generally led to better training, improved controls and instrumentation, and practices such as always having an engineer on duty. By closing off the TMI route to meltdown, these measures have made reactor operations safer.

Serious technological and management failings cannot of course be ruled out. This is exemplified by the recent incident at the Davis-Besse plant in Ohio, US, where the pressure vessel had been badly corroded by the boric acid in the cooling water. This represented both a technological chink in the armor, given that it arose from an unanticipated problem, and a management failure, in that a flawed inspection regime failed to pick it up. The Nuclear Regulatory Commission deemed it a serious incident in that it involved a significant loss of defense-in-depth capability. According to the Commission, a worst-case failure scenario would have been a high-pressure leak of slightly radioactive primary cooling water (as steam) into the reactor containment building. The plant operators have replaced local management and spent large amounts of money on repairs and improvements. Other plants have been inspected to check for similar structural degradation but nothing has been found.

What about the increased risk from a growth in the number of reactors, in the event of a resurgence of the industry? What may seem like a low risk when there are only 441 reactors, may be looked at differently when there are thousands of them. This concern should be allayed by the fact that future reactors will be even safer than present ones. There will be greater reliance on safety systems that employ perfectly reliable ‘passive’ natural forces such as gravity, natural circulation, convection, evaporation, and condensation.[496] In the case of the pebble bed design, each fuel pebble is surrounded with its own outer shell that traps all radioactivity inside and if the helium coolant completely leaked out of the core, the fuel would not get hot enough to melt the uranium oxide within the fuel pebbles.[497]

The next question is: if a reactor blows, how likely is it to lead to some incomparable disaster rather than one of more normal proportions? Certainly, achieving the more disastrous outcomes requires less likely events or combinations of events. There would need to be a failure by emergency workers to stem the emissions. This could be followed by evacuations being blocked by floods or snow storms. The situation could then be made even worse by an atmospheric temperature inversion concentrating radioactive material over a large trapped population downwind of the reactor. While we can conjure up such mega-death scenarios, they should not necessarily influence our actions if they are no more likely than other risks of disaster that are an inevitable part of life and of doing the things we want to do. For example, the government does not prohibit major sporting events because of the minute risk that the stadium will collapse from a construction fault or be hit by a falling Boeing 747.

What can Chernobyl tell us about the possible impact of a nuclear reactor disaster? According to the 2000 report by UNSCEAR, there were 134 confirmed cases of radiation sickness among reactor and emergency workers who were on the scene at the time of the accident and received very high radiation doses. Of these, 29 died within four months of the accident. A further 11 died between 1987 and 1998. The survivors from this group have a range of illnesses and a raised risk of cancer in future years.

Among the 240,000 recovery workers exposed to fairly high doses in the initial cleanup phase, there has to date been no raised cancer rate. This is surprising in the case of leukemia which usually emerges within a few years of high radiation exposure and gives weight to the anti-LNTH position. On the other hand, there is still the prospect of a raised rate for other cancers in the future given that it takes 20 years or more for radiation to have its effect. With the doses received by this group, and assuming LNTH, we can expect to see a thousand or so extra cancer deaths in the future. Among the few hundred thousand recovery workers who arrived more than a year after the accident, the radiation dose was much lower and any increase in cancer deaths would not be high enough to be detected in any epidemiological study.

Beyond the reactor site and its immediate surrounds, radioactive contamination mostly affected an area of 150,000 km2 with a population of about five million people. However, the only public health impact of radiation within this area that is in evidence 20 years after the accident is 1,800 mostly treatable cases of thyroid cancer due to childhood exposure to radioactive iodine. The thyroid gland of young children is particularly susceptible to the uptake of radioactive iodine, which has a half-life of 8 days and was a major component of the fission products released from the reactor. Indeed, many of these cases could have been avoided. A low iodine diet made the children more susceptible. Also, the authorities could have been more effective in distributing stable iodine to prevent the uptake of radioiodine by the thyroid, and in restricting the consumption of milk and fresh leafy vegetables in the vicinity of Chernobyl. People who were children at the time of the accident continue to have an increased risk of thyroid cancer, especially those who were under five years old. In the case of other cancers, no statistically noticeable rise in death rates is expected in the future because of the low radiation doses received.

As we can see, this is a tale of human tragedy and hardship, but it is not Armageddon. If anything in this sorry business is in line for that title, it is the psychological and medical trauma caused by the gross over-reaction to the accident. Hundreds of thousands of people have had their lives disrupted by being relocated from regions where the radiation level had been raised to levels that were still lower than the natural level in many parts of the world. This has led to high rates of unemployment, depression, hypochondria and stress-related illnesses such as heart disease and obesity. Then there is a grossly exaggerated fear of getting cancer which is way out of proportion to what is actually a fairly small risk. Anti-nuclear fear mongering must take some of the blame for this state of affairs.

So, where in the spectrum of nuclear accidents does Chernobyl belong? It was certainly a very bad accident. The roof was blown off and large quantities of radioactive material were scattered over the surrounding region. One can imagine how it could easily have been worse. For example, the reactor could have been in a more heavily populated location, and the weather could have been crueler in where it delivered rain from radiation-filled clouds.

However, these factors would appear to be overshadowed by the range of ways that it could have turned out a lot better if it had not been for the circumstances particular to the Soviet Union. To begin with the reactor was a type that used graphite as a moderator rather than water. This type is not used in the West. Burning graphite proved to be an excellent means of distributing radioactive material into the atmosphere. Emergency workers were poorly protected by western standards and the performance of emergency measures was not always “best practice”. Evacuations were delayed because the authorities did not want to admit to a serious accident until absolutely necessary.

However, even if there were a good chance of an accident turning out worse than Chernobyl, it would have to be a lot worse to constitute a disaster of unusual proportions. Even events worse than Chernobyl occurring on a fairly regular basis would probably compare favorably with the deaths from coal mining and fossil fuel emissions, or from motor vehicle accidents. Possibly, the deaths from Chernobyl will turn out to be fewer than what would result from installing millions of wind turbines or solar panels.

Nuclear Waste

A common argument against nuclear power is that there is a “waste problem.” It is claimed that we are unable to safely dispose of the radioactive waste created both in mining and uranium processing, and in reactor operations.

Mining and Uranium Processing Waste

The main form of radioactive waste from the ‘front end’ of the process – mining and milling – is the ore body left after the removal of the uranium. This is called tailings; there are about 400 tonnes of it for every tonne of uranium[498] and it is 50 to 100 times more voluminous than all other radioactive wastes combined.[499]

Its radioactivity is similar to that of natural uranium; however, it potentially constitutes more of a hazard because it is on the earth’s surface rather than in the ground, and is in a pulverized form. It generates radon gas and radioactive particles that can get into the air or contaminate streams. However, emissions from uranium tailings would still be a mere fraction of natural emissions from the soil, and considerably less than what is released by the tillage of soil by farmers.[500]

Tailings emerge from the uranium milling process dissolved or suspended in water. This liquid is pumped into ponds or dams which have to meet certain design specifications to prevent contaminating the ground underneath. The process water is then decanted into a settling pond while the remaining tailings dry out leaving what looks like piles of sand. This is covered over with enough rock, clay and soil to reduce radiation to levels naturally occurring in the region and then a vegetation cover is established.[501]

Various processes are employed to remove chemicals and radioisotopes from the decanted water. These are retained as a sludge that settles on the bottom of the pond. The water evaporates or is released according to stringent rules on radiation and other contaminant levels. The sludge is collected and disposed of when the site is decommissioned.[502]

There are some problems with mine and mill waste from a time when procedures were less strict. These still require efforts to stabilize, protect or relocate the waste.[503] Past mining practices were also a hazard to miners, with high radon exposure leading to higher incidence of lung cancer. However, this has not been a feature of uranium mining for some decades.[504]

Reactor Waste

The other form of waste results from reactor operations – ‘back end’ waste. It is much smaller in quantity; however, it includes intermediate and high level waste.

In terms of volume most of it is low level waste that decays fairly quickly, with most of it fading to background levels within months or years.[505] This waste includes filters and the radioactive material that they have collected from air and water in the reactor, and things that have been contaminated by contact with radioactive material, for example, gloves, clothing, pipes and valves. Not all of the low-level radioactive waste is from the nuclear industry. A significant proportion is from other users of radioactive material such as hospitals and research laboratories.

Intermediate-level wastes include chemical processing resins, fuel rod casings and metal from spent fuel assemblies,[506] and make up less than 20 per cent of reactor waste by volume.[507]

While only comprising 5 per cent of the total volume, high level waste contributes 95 per cent of the radioactivity. This is the spent fuel from the fission process. The entire US nuclear power industry produces about 2,000 tonnes annually[508] and has produced about 50,000 tonnes since the industry began.[509] This is equivalent in weight to a medium sized cruise ship.

Storage or Disposal of Waste

Current Arrangements

Presently most high level waste is kept in temporary storage facilities at the plant site. It is placed in specially designed containers and stored in pools of water that keep it cool and prevent radiation emissions. In those countries with fuel reprocessing plants, their high level liquid wastes are stored in cooled multiple-walled stainless steel tanks surrounded by reinforced concrete.[510] In both cases temporary storage is designed to do its job for many decades to come. Most intermediate waste is also kept in temporary storage.[511]

Low-level waste is typically stored on-site, either until it has decayed away and can be disposed of as ordinary trash, or until amounts are large enough for shipment to a low-level waste disposal site in special containers.[512] The waste is typically packaged in steel drums and buried in shallow trenches at licensed burial grounds.[513] Some countries use engineered facilities such as concrete lined trenches or vaults and there is some move towards deep disposal.[514]

Permanent Disposal

Temporary storage of high level waste has proven quite effective and is designed to last indefinitely. Furthermore, the waste would be easier to dispose of a few decades down the track, when the heat and radioactivity have dropped to a small fraction of the levels when it was first removed from the reactor.

The view that we need permanent and inaccessible storage that requires no action by future generations seems to be based on the following two premises. Firstly, the waste is a burden we should not pass on to future generations. We enjoyed the benefits so we should bear all the costs. Secondly, future generations may regress to Mad Max barbarism or ‘advance’ to a low tech utopia of ‘simple living’ and be incapable of dealing with the waste. (By the way, it is easy to imagine how the latter could quickly degenerate into the former.)

The first premise fails to recognize the huge debt that our descendants will owe us for their inheritance of accumulated capital and technical and scientific knowledge. Looking after some ancestral waste is a small recompense. Furthermore, any burden will be greatly reduced by the onward march of science and technology, which will provide increasingly cheaper methods of storage or disposal.

It is hard to worry about the second premise. If we revert to barbarism or feudalism, radiation exposure in some areas would be a small problem compared with all the other sources of increased death and misery accompanying this new state of affairs. Furthermore, at least in the case of medium and low radiation doses, there would be less impact in a society that has regressed to a life expectancy of 35 years or so. Most people would have died of something else before any increased risk of cancer had had time to kick in. And in the case of high doses, people would soon learn to stay away from its source and incorporate it into their myths and legends.

Also, by not going down the inaccessible permanent disposal route, we would retain what may turn out to be a valuable resource, available for use in future reactors that make full use of the uranium and not just the uranium-235. Another argument for continued accessibility, which should appeal to the worriers, is that it would allow future generations to make disposal super super super safe and not just super super safe:

The ability to monitor and gain access to waste once it is in a permanent disposal site is seen as increasingly important to public acceptance of disposal plans. This would allow future generations to determine whether the site is still safe. Maintaining some access to the site could be useful for two reasons related to public acceptance. First, it would make it easier to correct problems if they arise. Second, it would allow future generations to apply new methods of waste disposal.[515]

Despite the strong case against it, inaccessible permanent disposal is the policy in ascendance. Given this, ocean dumping should be the preferred method because it is cheap as well as safe. The waste would simply have to be converted into an insoluble form and placed in containers designed to last for thousands of years. In the unlikely event of a canister failing, any radiation would be released slowly and be diluted in the ocean where it would be scarcely noticed given that the ocean already contains 4 billion tonnes of uranium and other radioactive elements.[516] Besides, ocean dumping is done by nature all the time. Uranium ore is continually being eroded into rivers and finally discharged into the sea.

Even cases of accidental or rogue dumping indicate that concerns are overblown. Russia has dumped sixteen complete nuclear reactors from old submarines and ships into the Kara Sea north of Siberia. Six of them still contained spent fuel. These were not encased in concrete or carefully buried in the ocean floor. They were just dumped. However, despite this rather insouciant manner of disposal, researchers have been unable to detect any measurable radiation from these reactors anywhere in this fairly small area of water.[517] Over time they will be buried by the silt which is delivered in great quantities by the Yenisey River. In 1968 a US B52 bomber armed with plutonium-fueled nuclear bombs crashed off the coast of Greenland. The recovery team was only able to retrieve around 90 per cent of the plutonium, with the rest dispersed into the shallow coastal waters. Subsequent research indicated no increase in plutonium concentrations, suggesting that it had been encased by the sediments on the sea floor. Seven nuclear submarines are currently sitting safely on the sea floor. One of them, the Soviet submarine K-8, is feared to have left 20 nuclear mines at the bottom of the Gulf of Naples before sinking under tow in the Bay of Biscay. The Soviet wreck which sank in 1989 did, however, require ‘repair’ work to prevent radiation leaks. Of course, as things stand ocean disposal is politically impossible because it pushes all the phobia buttons and is now even prohibited under the London Dumping Convention. This leaves geological disposal.

A number of countries have identified potential underground storage sites and have conducted geological and geophysical tests to determine their suitability. These include Belgium, Canada, Finland, France, Germany, Spain, Sweden, Switzerland and the United States. Possibly the first cab off the rank will be America’s site at Yucca Mountain in Nevada, although when it will finally open is still unclear. Compared with ocean disposal this method is appallingly expensive, although not prohibitively so given that it would still be only a couple of per cent of the cost of nuclear power.

The primary safety concern with underground storage is that the waste would eventually be dissolved by ground water and carried by it into wells, rivers, and soil. This could then get into human stomachs through drinking water supplies or through food plants that have picked up contaminated water in the roots.[518] The chance of exposure through inhaling contaminated dust is far less because groundwater only occasionally breaks the surface and 95 per cent of the dust we inhale is filtered out by hairs in the nose, pharynx, trachea, and bronchi and removed by mucous flow.[519] Direct irradiation by radioactive materials in the ground would not be a problem because rock and soil are excellent shielding materials that radiation cannot penetrate.[520] Other concerns relate to possible disturbances such as earthquakes, erosion, volcanic activities, mining and meteor impact.

Geological disposal is based on the strategy of multiple barriers, working from the innermost to the outermost. Firstly, the waste is in a form, possibly glass, that is not readily dissolved. Both archaeological and experimental evidence suggests that dissolving glass is an impossible task.[521] Secondly, the waste is sealed in corrosion-resistant containers. In the case of Yucca Mountain, these will have an outer layer of titanium. Tests have shown that this metal would prevent water penetration for thousands of years when immersed in a very hot and abnormally corrosive solution, while under more normal groundwater conditions containers would retain their integrity for hundreds of thousands of years.[522] So the containers alone provide a rather complete protection system even if everything else fails. Thirdly, the containers are surrounded by a backfill of clay that would swell if wet and form a tight seal keeping any water flow away from the package.[523] The clay would also insulate the waste from minor earth movements.[524] The fourth barrier is provided by placing the waste in a suitable geological environment or geosphere. Any waste which overcame the first three barriers would need to encounter conditions that gave it no opportunity to travel to the surface in groundwater. This means low rainfall to limit the means of transmission and a poor transport medium such as impervious rock with no fractures. It also means the depository being well above any water table, which in turn would need to be a long way from lower lying ground where it could come to the surface. The chosen site would need to be in a region unlikely to be subject to future volcanic eruptions, and it would have to be sufficiently deep that neither meteorite impact nor surface erosion would expose the waste.

Cohen argues that virtually any deep ground storage would provide all the protection you would need in the totally unlikely event that the first three barriers were breached. [525] He points out that even if exposed waste were surrounded by ground water it would take an extremely long time to reach the surface, if ever. Firstly, groundwater near the waste would take a thousand years or so to emerge because it moves very slowly and travels horizontally following the rock layers and hence typically must travel many miles before reaching surface land at a lower altitude. Secondly, the radioactive material will move far more slowly than the groundwater because it would constantly be filtered out by the rock material. Furthermore, it may well become a permanent part of the rock.

Finally, one needs to ask whether it would matter much if radiation from a particular location got into the food or water supply. If it was at a level that caused concern it would be quickly picked up by routine monitoring programs and fairly simple countermeasures could be taken such as refraining from growing crops, grazing animals or drinking the water. Furthermore, future advances in medical science will greatly reduce and possibly eliminate the threat posed by radiation. So, at some point radiation exposure may cease to be a health concern.

Transporting Radioactive Waste

The specter of radiation being released from nuclear waste while in transit is made much of by the radiophobes. The record to date, at least in the US, has been incident free. Over the past 40 years the US industry has managed to move more than 3,000 shipments without a single radiological release.[526]

Certainly movements will increase considerably once centralized geological storage facilities are brought into operation and/or greater use is made of reprocessing plants. So any risk, if there is one, will increase. However, a serious radiation leak in transit is a very remote possibility. This is ensured by the tight regulations governing the activity, particularly concerning the form the waste must take and the method of containment.

The containers used are subject to tests to assess their ability to deal with a range of accidents or attacks. These include ensuring that they can withstand the effects of a high speed truck or train crash, burning jet fuel and the high pressures of deep water. Even if these containers were somehow breached, contamination would be greatly limited by the fact that the waste takes a solid form. It is unable to leak out like a liquid or a gas. Significant radiation exposure would be limited to people who chose to linger in the immediate vicinity of the accident.[527]

Nuclear Terrorism

There is a concern that having more nuclear power reactors would increase the risk of terrorists getting their hands on the material required to make a nuclear weapon and so cause the level of death and misery achieved with the Hiroshima and Nagasaki bombs.

Achieving such a result would face a number of hurdles. Firstly, the terrorists would need to get together a small group of physicists, engineers, chemists, metallurgists and explosive specialists. These would not have to be experienced nuclear weapons makers but could rely on what is available in the open scientific literature. However, the more relevant their background, the more smoothly the operation would proceed. There would then be the job of obtaining all the required equipment. Some of this would be difficult to acquire and in some cases would likely arouse suspicion.

And finally there is the acquisition of the required nuclear material. Either highly enriched uranium or plutonium would fit the bill. Virtually all nuclear power reactors use only lightly enriched uranium, and as long as the plutonium produced in the fission process remains in the spent fuel, the level and type of radioactivity prevents it from being diverted to bomb making. Plutonium only exists separately where it is awaiting reprocessing into new fuel. This is carried out in Britain, France, India, Japan and Russia, while the US, at least up until now, has opposed it because of proliferation concerns. Highly enriched uranium and plutonium are only used as a fuel in fast reactors. At the moment there are only a few in operation and a similar number are planned for China, India and Russia. Relatively large amounts of material would be required and any theft would be unlikely to go unnoticed. The ensuing massive manhunt would mean that the time between the theft and the detonation would need to be fairly short.

The risk can be reduced in a number of ways. At the political level, the US is presently pursuing a course in the Middle East which should undermine the position of Jihad fascism, the main terrorist threat. At the time of writing, mainstream Islamists have already been brought into a democratic political process in Afghanistan, Iraq and Lebanon, with Egypt not far off. At the same time, the US induced Israeli withdrawal from the West Bank is now an inevitable event just waiting to happen. At the regulatory level, it is a matter of ensuring that there is an adequate reviewing process which will detect any weaknesses in the internationally agreed arrangements for the storage and handling of nuclear material. New technologies can also play a role. For example, there is talk of reactors with tamper proof fuel which is returned to special facilities for storage, disposal or reprocessing.

Concluding Comments on Nuclear

Nuclear power could play an important although not dominant role in energy production during this century by simply relying on conventional uranium resources and moderate increases in the amount of energy extracted from each tonne of uranium. Playing a major role both in this time frame and in the longer term will depend on the adoption of new technologies such as breeder or thorium powered reactors and seawater extraction of uranium.

Given the health-giving qualities of economic growth and affluence, there is a limit to how much heed we should take of remote risks from nuclear power, if it otherwise makes economic sense.

Geothermal Energy

Beneath a relatively thin cool outer layer, the earth is a furnace with a central core as hot as the surface of the sun. This is due mainly to the initial heat from gravitational collapse, when the earth was formed some 4.5 billion years ago, and to the on-going radioactive decay of potassium, thorium and uranium. The amount of heat beneath our feet is so great that the ability to exploit even a small fraction of it would belie any doubts about our ability to vastly increase the level of energy consumption.

To date, exploitation of the resource has been confined to its hydrothermal and geoexchange forms. The hydrothermal resource is the subterranean store of heated water and steam, and is the more important of the two. It is more readily exploited, the closer it is to the surface and the higher the heat gradient, i.e., the increase in temperature with each unit increase of depth. It is mainly located near where the tectonic plates meet, resulting in considerable volcanic activity and the placement of magma at higher than usual levels. The main areas are located in New Zealand, Japan, Indonesia, Philippines, the western coastal Americas, the central and eastern parts of the Mediterranean, Iceland, the Azores and eastern Africa.[528]

Hydrothermal electricity generating capacity is about 9,000 MWe.[529] This modest contribution is equivalent to 10 to 15 coal or nuclear power plants. Six countries are responsible for over 80 per cent of capacity, with the USA and the Philippines well out in front.[530]

Where the water is above 150°C, steam is created which can be fed directly into a turbine connected to a generator. If the temperature is between 100°C and 150°C, electricity can still be generated using binary plant technology. The geothermal water heats, through a heat exchanger, a secondary working fluid (isobutane, isopentane or ammonia), which vaporizes at a lower temperature than water. The working fluid’s vapor turns the turbine and is condensed before being reheated by the geothermal water, allowing it to be vaporized and used again in a closed loop.[531]

Undertaken on a similar scale to electricity generation is direct use of the heat.[532] This is primarily for space heating. For example, in Reykjavik in Iceland pipes carry hot water for tens of kilometers to homes and other buildings. The resource can also be put to a range of industrial uses such as drying food crops, lumber, and bricks, heating fish ponds and greenhouses, and pasteurizing milk.

Geoexchange systems or heat pumps also provide space heating by taking advantage of the fact that the ground immediately under the surface stays at a fairly constant temperature all year round even while the temperature above changes with the seasons. In the temperate regions the ground temperature stays between 10 to 16 degrees Celsius (50 to 60 degrees Fahrenheit). By circulating water or some other fluid through pipes, thermal energy is extracted from the ground during the coldest times of the year and deposited in the ground during the hottest times. Pipes can be buried vertically, if the ground is not too rocky, or, if space permits, horizontally in shallow trenches a couple of meters underground. While the system requires electric power, this is only needed to move the heat rather than produce it. As a result it delivers 3 to 4 times more energy than it consumes.[533] Currently the use of this technology is quite limited. Just over half a million systems have been installed of which about half are in the US.[534]
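
The energy balance can be illustrated with a minimal sketch; the 1,000 kWh of electricity is a purely hypothetical figure, and the coefficient of performance (COP) values simply restate the 3 to 4 range quoted above.

```python
# An illustrative energy balance for a geoexchange heat pump. The 1,000 kWh
# of electricity is a hypothetical figure; the coefficients of performance
# (COP) restate the "3 to 4 times more energy delivered than consumed" above.
electricity_in_kwh = 1000
for cop in (3, 4):
    heat_delivered = cop * electricity_in_kwh                    # energy delivered to the building
    drawn_from_ground = heat_delivered - electricity_in_kwh      # the balance comes from the ground
    print(cop, heat_delivered, drawn_from_ground)
```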

While the hydrothermal and geoexchange resources have the potential to grow and continue to be significant sources of energy in some regions, they could never be a major player. If the nether regions are to perform that role, we will need to exploit the much larger sources of heat. At this stage, hot dry rock in the earth’s crust is technologically within reach. Further down the track we should be able to tap into the pockets of extremely hot molten rock – or magma – which are widespread throughout the earth’s crust, and also the area beneath the crust called the mantle, which begins 5 to 10 kilometers beneath the sea floor and 20 to 70 kilometers beneath the continents.

The approach being developed to exploit hot dry rock involves creating a man-made reservoir by drilling a deep well bore down into high-temperature, low-permeability rock and then forming a large heat exchange system by hydraulic or explosive fracturing. Water is then injected into this original well and retrieved from one or more production wells after circulating through the fractured rock. As with the hydrothermal resource, the hot water or steam can be used to generate electricity or to supply combined heat and power systems.

The technology has been mainly developed at the Hot Dry Rock Test Facility in Fenton Hill, New Mexico. A hot dry rock reservoir was successfully created which generated thermal energy continuously at a rate of about 4 MW in two test phases lasting 112 and 55 days. About 10 per cent of the power produced was consumed by the injection and production pumps.[535]

While trials such as these have proven the concept, a great deal of development would still be required to make it commercially viable on a wide scale. These include: (1) the development of inexpensive high-temperature hard-rock drilling techniques, (2) improvements in three-dimensional rock fracturing, (3) mastery of methods for maintaining low-impedance fluid circulation through the fracture system and (4) improvements in power generation methods appropriate to water at temperatures considerably lower than that in fossil or nuclear powered plants.

Drilling costs account for one third to one half of the total costs of a geothermal project[536] and the cost of reaching between 5 and 10 kilometers has to be reduced significantly for hot dry rock to compete with other energy sources. At the moment costs shoot up dramatically as those depths are approached. Basically, drilling has to be faster and less prone to break downs under increasingly hostile conditions. The prospects for improvements seem quite good.

To begin with, the sharp end of the system can be improved in a number of ways. Drill bits can be made of new harder materials that allow them to operate at much higher rotary speeds and weight-on-bit loads. Or a bit that simply tries to grind through hard rock can be replaced by one that shatters it, possibly assisted by applying heat to create thermal stresses. Down hole motors can be developed which apply more power to the bit than the more traditional rotary power transmitted from the surface. Improvement can also follow from basic research into the physical and chemical processes associated with penetrating rock.

The development of so-called smart drilling should also make a big difference. This will involve a high-speed broadband data link to the drill bit where sensors will report in real-time on the conditions around and ahead of the bit and so enable the operator to avoid problems and maximize the drilling rate. Real time knowledge of drilling conditions such as the strength and composition of the rock will allow appropriate changes to be made in weight on bit and drill speed. Knowledge of the precise location of the drill bit will mean it can be steered around undesirable zones. And information on the state of the entire drilling unit, including wear of tools, state of other mechanical components and the flow of coolant would allow timely corrective action. Expected advances in computer science and miniaturization should be able to provide this technology.

The energy content of hot dry rock is huge. It is everywhere under the earth’s surface, although more accessible in some places than others because of the different thermal gradient.

While the average thermal gradient is around 25°C/km,[537] about 11 per cent of the land area is classified as high grade, with gradients substantially above normal.[538] In these areas rocks hot enough for electric power generation – usually taken to be at least 150°C but preferably higher – can be found at depths of less than 5 kilometers. Lower grade resources would need depths of anywhere up to 10 kilometers. Mining heat for direct uses such as space heating can start at much shallower depths.

Armstead and Tester have identified an energy resource of 105 million quads.[539] This is their estimate of the resource in rock with temperatures greater than 85°C, to a depth of 10 kilometers and lying beneath the 100 million square kilometers of land area not covered in ice or mountain ranges.[540] Of this resource, 26.5 million quads are moderate to high grade (a gradient higher than 40°C/km) while 78.5 million quads are low grade.[541] It is a bit under a quarter of a million times the 2004 level of energy production of 445 quads (or 470 EJ). Current production is the equivalent of the average energy beneath 400 square kilometers, in other words a square with 20 kilometer sides.
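
The scale of these figures can be checked with a little arithmetic using only the numbers quoted above; the sketch below is illustrative only.

```python
# Scale of the hot dry rock resource relative to 2004 energy production,
# using the figures quoted above.
resource_quads = 105e6           # Armstead and Tester estimate (quads)
land_area_km2 = 100e6            # land area considered, square kilometers
annual_production_quads = 445    # 2004 world energy production

print(resource_quads / annual_production_quads)   # ~236,000 times annual output
energy_per_km2 = resource_quads / land_area_km2   # ~1.05 quads per km2 (to 10 km depth)
print(annual_production_quads / energy_per_km2)   # ~420 km2, roughly the 20 km square noted above
```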

It is important to keep in mind that energy losses would be larger when using geothermal rather than fossil resources. This would be the case both in the direct use of hot water for washing and heating and in the creation of secondary forms of energy such as electricity and transport fuel. In the case of electricity generation, there would be lower thermal efficiency because of the lower temperatures at which the conversion takes place. Until we can easily extract heat at depths greater than 10 kilometers, the temperature will always be far lower than that created by the burning of fossil fuel. In the production of hydrogen as a transport fuel, via electricity production or some other method, the energy loss will always be far greater than that in the conversion of crude oil or gas to the refined fuel.

So, how long would the hot rock last? If we magically switched to 100 per cent reliance on it tomorrow and our energy consumption increased annually by 2 per cent, it would last almost 400 years on the assumption that two quads were required to replace one quad from fossil fuel because of the greater energy losses. If three quads are required, the resource would last 370 years. Employing the two quad assumption, the resource would last over 17,000 years if consumption increased annually by 2 per cent until 2100 (providing a 6.7 fold increase in annual output) and then remained constant. (It is over 11,000 years with three quads.) Using the two quad assumption again, just 1 per cent of the resource would last over 160 years with a 2 per cent growth rate. (It is 140 years with three quads.) This would reduce the average temperature of the rock by 0.5°C, given that a 1°C cooling provides 0.00215 quads of energy from every cubic kilometer.[542]
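
These longevity figures can be reproduced, at least approximately, from the numbers already given. The sketch below is a back-of-envelope check rather than a model; the replacement 'factor' is the number of geothermal quads assumed to be needed to displace one fossil quad.

```python
# Back-of-envelope check of the longevity figures above.
def years_until_exhausted(resource, start=445, growth=0.02, factor=2):
    """Years until cumulative geothermal demand exhausts the resource (in quads)."""
    used, demand, years = 0.0, start * factor, 0
    while used < resource:
        used += demand
        demand *= (1 + growth)
        years += 1
    return years

print(years_until_exhausted(105e6, factor=2))         # ~390 years ("almost 400")
print(years_until_exhausted(105e6, factor=3))         # ~370 years
print(years_until_exhausted(0.01 * 105e6, factor=2))  # ~160 years for 1 per cent of the resource
print(years_until_exhausted(0.01 * 105e6, factor=3))  # ~140 years

# Growth at 2 per cent until 2100 (a 6.7-fold rise over ~96 years), then constant demand:
used_to_2100 = sum(2 * 445 * 1.02**t for t in range(96))
remaining = 105e6 - used_to_2100
print(96 + remaining / (2 * 445 * 1.02**96))          # ~17,000+ years in total
```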

The area that would need to be exploited at any time will depend not only on the output but also on the drawdown rate. So if, for example, the regions being exploited were cooled by an average of one twentieth of a degree Celsius per year to a depth of 10 kilometers, a total of 413,953 square kilometers would be required to provide 445 quads per year.[543] This is equivalent to a square with 643 kilometer sides and is less than 3 per cent of the area of crop land. (Here no allowance is made for the greater energy conversion losses compared with fossil fuels.)
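
The 413,953 square kilometer figure follows directly from the energy density quoted in the previous paragraph, as the sketch below shows.

```python
# Area needed to supply 445 quads a year if the exploited rock (to 10 km depth)
# is cooled by an average of 0.05 C per year, using the 0.00215 quads per cubic
# kilometer per degree of cooling quoted above.
quads_per_km3_per_degree = 0.00215
depth_km = 10
cooling_per_year_c = 0.05

quads_per_km2_per_year = quads_per_km3_per_degree * depth_km * cooling_per_year_c  # 0.001075
area_km2 = 445 / quads_per_km2_per_year
print(area_km2)          # ~413,950 km2
print(area_km2 ** 0.5)   # ~643 km, the side of an equivalent square
```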

Until drilling to 10 kilometers is a routine and low cost exercise, the exploitation of hot dry rock will be confined to regions where the resource is high-grade. While large, this high-grade resource is not evenly distributed; in the US, for example, it is confined to the western regions of the country. Eventually, we will be able to drill below 10 kilometers and tap an even larger and hotter resource. As a result the heat that can be extracted from below a given area of ground will be greatly increased.

Water supply may prove to be a constraint in some areas. While the system is a closed loop, with cooled water being re-injected, there is some leakage, creating an on-going demand for make-up water. Some of the water is absorbed into micro-pores of the reservoir rock and into nooks and crannies at the periphery of the reservoir, although this loss tends to decline with time as these fluid sinks become saturated.[544]

Geothermal energy has advantages over resources such as solar and wind in that it is available at any time without energy-wasting storage, and the quantity can be adjusted quite quickly to meet changes in demand.[545]

A number of environmental concerns have been raised. Cooling of rock could cause some shrinkage and result in subsidence. However, this tendency would be counteracted by the high pressure water injection. If there is some slight subsidence, we would just need to avoid the small proportion of places where this poses a problem. Cooling of rock and water pressure could both cause seismic shocks. However, these take the form of many scarcely detectable micro-earthquakes; there is no build up of stress that would cause a major earthquake. Any cooling would have negligible effects on temperatures in the surface regions where roots, burrowing life forms and ground water are found, because the heat removal occurs at great depth and rock is a very poor conductor of heat. Releasing waste heat into the atmosphere during power generation is a practice shared with fossil and nuclear power; the main difference is that individual geothermal plants would tend to be considerably smaller. Beyond that, the waste heat is microscopic compared with the heat delivered by the sun and its natural variability.

While the surface ‘footprint’ of hot dry rock facilities would be a tiny proportion of the area exploited, they could still be a significant user of land, given the power generating equipment, wellheads, pipe distribution systems and transmission lines. Whether they would take up more or less land than fossil fueled power is unclear. On the one hand there is no need for strip mines, gas or oil pipelines, or waste repositories, and heat generation takes place underground and uses no surface space. On the other hand, the fact that geothermal plants are expected to have significantly smaller output will mean various space-consuming diseconomies. For example, there would be more transmission lines, more roads, and a bigger generating plant footprint for a given amount of output. If we eventually move to exploit magma and the mantle, footprints will become less of an issue as each facility will produce far more energy.

Energy Overall

This review of the resources on hand makes it clear that there are no insurmountable obstacles to providing a world of 10 billion people with the per capita energy levels that have already been reached in the rich countries. To achieve this by the end of the century would simply require growth rates in energy production that were fairly normal in the 20th century.

An annual energy growth rate of 2 per cent, which is slightly less than the average rate of the last 30 years, would provide a 6.5 fold increase if sustained for the whole century. If, as expected, energy consumption in the rich countries grows at a slower rate, more of the increase will go to the poorer countries.

If rich countries were to continue increasing energy consumption by 1 per cent per year and their population remained static, while overall energy grew annually by 2 per cent and the population of the poor countries increased by 60 per cent, then by the end of the century rich country per capita consumption would increase from 5.5 to 14 toe and poor country per capita consumption from 1.1 to 6.8 toe. This would bring the poor countries as a whole almost up to present US per capita consumption levels and shrink the disparity between rich and poor countries from five-to-one to two-to-one.
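
These projections follow from compound growth applied to the per capita figures already given. The sketch below assumes a rich country population of 1 billion, an initial poor country population of 5.4 billion and a 95 year horizon; these population figures and the horizon are assumptions made for illustration, and the small gap between the result and the 6.8 toe quoted above reflects the rounded inputs.

```python
# Illustrative projection of rich and poor country per capita energy use to 2100.
years = 95            # assumption: roughly 2005 to 2100
rich_pop = 1.0        # billions, assumed static
poor_pop_start = 5.4  # billions, an assumed starting figure
rich_pc_start, poor_pc_start = 5.5, 1.1   # toe per person, as quoted above

total_start = rich_pop * rich_pc_start + poor_pop_start * poor_pc_start
total_end = total_start * 1.02**years           # overall energy grows 2% a year
rich_pc_end = rich_pc_start * 1.01**years       # rich per capita grows 1% a year
poor_pop_end = poor_pop_start * 1.6             # poor population up 60%
poor_pc_end = (total_end - rich_pop * rich_pc_end) / poor_pop_end

print(round(rich_pc_end, 1))  # ~14 toe per person
print(round(poor_pc_end, 1))  # ~7 toe per person, close to the 6.8 quoted above
```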

With the large resources of coal and gas, fossil energy could well remain a major player into the 22nd century, with CO2 capture if deemed necessary. Sun and wind resources are vast and non-depletable and could indefinitely provide the level of consumption anticipated by 2100, although this would require gathering the resources from quite large areas. With current and easily foreseeable technology, nuclear power could play a larger but not dominant role during this century. If innovations such as breeder and thorium powered reactors, and ocean extraction of uranium, prove feasible and economic, nuclear power could provide a growing level of energy for many centuries. Hot rock could become a massive resource once we can drill routinely to 10 kilometer depths. The further we travel into the future, the more we will be able to rely on energy sources that are presently either infeasible or unforeseeable. Fusion energy, solar power from space, and heat from magma and extreme depths are among those in the former category.

Minerals and Other Raw Materials

Some materials are available in such large and readily accessible quantities that there is little argument about whether they could limit growth. This category includes limestone, gypsum, sulfur, nitrogen for fertilizer, clay, sand, gravel and silicon. Other resources have present reserves of sufficient size to sustain reasonably healthy growth rates at least until late this century.[546] These include potassium, phosphate, iron ore and bauxite (for aluminum).[547]

Some minerals make up a significant proportion of the earth’s crust, which suggests considerable abundance even after allowing for the fact that ore bodies vary in accessibility and cost of processing.[548] Silicon is the most abundant element after oxygen in the earth’s crust. Aluminum makes up 8.2 per cent of the earth’s crust while iron makes up 5.6 per cent.[549] And while copper and zinc each constitute less than one part in 100,000 of the earth’s crust, in both cases this is equivalent to many millions of years of current output.[550]

As mentioned in the last chapter, the potassium and phosphate required for fertilizer are both quite abundant. The estimated total potassium resource is over 8,000 times present annual consumption.[551] The present phosphate reserve would last well into the next century. Further exploration should lead to the discovery of extensive new deposits, and new technologies will open up the vast resources that have been identified on the continental shelves and on seamounts in the Atlantic and Pacific Oceans.[552]

While plastic is presently derived from petroleum, its fate is not tied to oil. Plastic can also be made from coal and plants. It is very much a “natural” or “organic” product: animal horns, tortoise shells and shellac from insects are all plastics which can be molded when heated.

Some metals with more limited reserves raise concerns. These include silver, gold, tin, lead, tungsten and nickel. As with energy resources, it needs to be kept in mind that present reserves are no real indication of what is ultimately available. They may well be just the tip of the iceberg, with future reserves being augmented by further exploration and improved methods of extraction. The oceans and ocean floor are a major longer term source for many metals. These metals often also have a range of substitutes which can reduce our reliance on them. Gold has ample substitutes in jewelry, and in electrical and electronic products alloys and thinner coatings can be used. The need for silver in photography is being significantly reduced by the move to digital technology. Tungsten has a range of substitutes for cutting tools and for lighting.[553] Plenty of other metals can do the jobs done by tin. Technological advance ensures that the scope for substitution increases over time as new materials are developed and old uses for existing materials decline in importance. Phosphate, potash, nitrogen and sulfur for fertilizer have no presently foreseeable substitutes; however, they are all plentiful.

Tidying the Nest – Our Effect on the Environment

There is widespread anxiety about the effect on the natural environment of economic development and increasing population. Our impacts can be put into three broad categories – emitting greenhouse gases such as CO2 into the atmosphere, polluting air and water, and physically destroying the natural environment through measures such as land clearance.

Greenhouse emissions have already been discussed above in the section on fossil fuels. There we concluded that the issue is clouded by scaremongering, that a lack of action to curb emissions in the next two decades would at most lock future generations into a doubling of pre-industrial levels of CO2 in the atmosphere, and that with their increasing wealth and scientific savvy our descendants will be able to adapt to any climate changes. Degradation of the soil required for food production was discussed in chapter two and was judged to be a challenge to be overcome rather than an insurmountable obstacle.

Below we examine the other main areas of concern: first, air and water pollution, and then the destruction of forests and the extinction of many of their plant and animal species.

Air Pollution

Most air pollution comes from combustion by motor vehicles, power plants, domestic fires and various industrial processes. Common pollutants are: particulate matter (PM), which includes smoke and soot; carbon monoxide (CO); sulfur dioxide (SO2); nitrogen oxides (NOx) and lead.

Particulates, SO2 and NOx can cause, or contribute to, lung and cardiovascular disease, and aggravate allergic reactions and asthma. Lead is a dangerous poison, with emissions best known for affecting the intellectual capacity of children. CO is of most concern for people who suffer from cardiovascular disease such as angina.

When it comes to the natural environment, the primary concerns have been acid rain and ozone or smog. Acid rain occurs when NOx and SO2 react with water in the air to create sulfuric and nitric acid. When this precipitates it contributes to the acidification of soils, lakes, and streams. Lower atmosphere ozone or smog is formed by the interaction of sunlight with NOx and other pollutants. This is known to reduce the growth and survivability of plants, and can affect both ecosystems and crop yields.

In developed countries, there have been remarkable reductions in emission levels over recent decades and a corresponding improvement in air quality. In the US, between 1970 and 2004, GDP and vehicle miles traveled almost tripled and energy use increased by almost 50 per cent, yet emissions of the six principal pollutants declined by more than half.[554] SO2 emissions declined by 51 per cent, NOx by 30 per cent, CO by 56 per cent, particulate matter (not originating from precursor gases such as SO2 and NOx) by 80 per cent, volatile organic (i.e., carbon-based) compounds (VOCs) by 55 per cent and lead by almost 99 per cent. Between 1983 and 2002, SO2 concentrations in the air declined by over a half, NO2 by over 20 per cent and CO by 65 per cent.[555] Between 1988 and 2002, concentrations of particulate matter 10 micrometers or smaller in diameter (PM10) fell by 31 per cent.[556] Emissions of pollutants referred to as air toxics declined by 30 per cent in the 1990s.[557] These are pollutants that have been specifically identified because of particular concerns about their effect on health or the environment. Lead and some of the particulates and VOCs are in this category. Also included are dioxin, asbestos and metals such as cadmium, mercury and chromium. In the European Union in the 1990s, SO2 emissions fell by 60 per cent, NOx by 27 per cent and VOCs other than methane by 29 per cent.[558]

Government regulation has played a big part in these reductions by mandating the adoption of cleaner technologies or measures to capture emissions. Emissions should continue to decline with tighter requirements as technologies improve and old motor vehicles and facilities are phased out. For example, European Union countries plan by 2020 to reduce year 2000 emission levels by the following amounts: SO2 by 82 per cent, NOx by 60 per cent, VOCs by 51 per cent, ammonia by 27 per cent, and primary PM2.5 (particles emitted directly into the air) by 59 per cent.[559]

Air pollution will decline even further in the long term as we develop ever cleaner ways of using fossil fuels and eventually reduce our reliance on these sources of energy.

Water Pollution

The sea, streams, lakes and groundwater are all subject to water pollution. This can be point-source pollution, as in the case of emissions from industrial, sewerage and drainage facilities, oil releases from shipping, and leaching from landfill and toxic waste sites; or it can come from a more diffuse source, such as runoff and leaching from farms, or air pollution that eventually returns to the earth’s surface.

Human health can be affected through the presence of bacteria, viruses and toxic substances in drinking water, or through the accumulation of heavy metals or other substances in the fish we eat. The living aquatic environment can be degraded through oxygen depletion, heat, the blocking of sunlight and poisoning.

In the developed countries there have been significant reductions in some forms of water pollution. The disposal of untreated sewage into rivers and coastal waters has been greatly reduced or eliminated. Industrial emissions, landfill and toxic waste sites have become subject to heavy restrictions. The discharge of pollutants by industrial facilities into waterways is becoming a thing of the past, toxic waste sites are subject to expensive cleanups, and the design standards for landfill and waste storage have tightened considerably to prevent leaching into groundwater.

The cutbacks in air pollution referred to above are also reducing the level of water pollution from that source: less air pollution is precipitating directly onto lakes, or onto land where it can be washed into waterways and lakes or leached into groundwater.

There has been less success with pollution from farms. This includes pesticides, fertilizer, sediment, pathogens from animal waste, and salts. These are picked up by rainfall, snowmelt and irrigation water, and either run off into lakes and waterways or leach into groundwater. The fertilizer can lead to excessive levels of nutrients in streams and coastal regions, causing eutrophication,[560] and the pesticides may cause health problems if they reach sufficient levels in drinking water. Sediment can block sunlight to underwater plants and clog the gills of fish. Improvements will come from the adoption of less hazardous pesticides, and from farm practices that use chemicals more judiciously and reduce soil erosion.

Another concern is the contamination of urban rainwater by a host of domestic and industrial substances picked up before it flows into the drainage system. These include motor oil, pesticides, herbicides, rust, nutrients such as fertilizer, plant matter, organic wastes and motor vehicle cleaners. During heavy storms the drainage system may also receive sewage overflow. Cities vary considerably in the extent to which they treat drainage water and in their ability to cope with heavy storms.

Reductions in pollution have led to some notable improvements. In the middle of the 20th century, the Thames River around London was effectively dead from the combined effect of sewage effluent, industrial discharges and thermal pollution from power stations and gas works. There was virtually no oxygen in the summer months and no established fish population in this part of the river. However, by 1974 the situation had improved sufficiently for salmon to begin returning for the first time in 150 years.[561] Their life cycle requires good environmental conditions in the sea, the estuary, the river and its tributaries, and this makes them a good indicator of environmental quality. A lot of other fish have also returned, and in recent years seals have been spotted fairly frequently. In 2000 salmon also returned to Europe’s Rhine River.[562] While they are not yet independent of human help and stocking exercises, they already reproduce naturally in several tributaries, and the target is to achieve stable wild populations by 2020. New York Harbor has seen a vast improvement in water quality over the last three decades, due mainly to the clean up of sewerage and drainage.[563] The resulting increase in oxygen levels has led to the return of many fish species as well as their winged predators. Oysters are slowly returning, and there are occasional sightings of dolphins, manatees and sea turtles. The Great Lakes of North America are also on the mend. While the top predator fish still pose a risk to any wildlife or people eating them, the contaminants responsible have declined dramatically since the 1970s and are still declining in most cases.[564] Between 2002 and 2004 the US federal government spent $1.3 billion on measures to improve water quality, and the Great Lakes Interagency Task Force was established in 2004 to take charge of the on-going cleanup. Measures include increased clean up of contaminated sediments, action against invasive species and better compliance with air emission targets by regions upwind of the lakes.[565]

The proportion of beaches classified as polluted and unfit for swimming has declined dramatically. For example, in the European Union during the 1990s the figure fell from over 20 per cent to 5 per cent.[566] Harmful toxins in seafood taken from the coastal waters of the US and western Europe are also declining.[567] Oil spills that pollute sea lanes and adjoining coastal regions have fallen sharply. Major spills of more than 700 tonnes averaged 25 a year in the 1970s but only 8 a year in the 1990s. So far this decade (2000-05), the average is less than four.[568] For spills over 7 tonnes, the quantity of oil spilt has declined from an annual average of 314,200 tonnes in the 1970s to 27,167 tonnes during 2000-05.

Pollution and Development

In developing countries, pollution is reminiscent of the developed countries in earlier times. Drinking water is heavily polluted. Sewage is generally untreated. Air and water emissions from factories and power plants go unchecked. Motor vehicles are old and their exhausts unregulated. Smoke from burning garbage and other rubbish fills the air.

In industrializing countries such as China and India, the rivers are open sewers and repositories of all kinds of factory emissions. China’s Yangtze River in 2003 received 35 per cent more wastewater and sewage than it did five years earlier,[569] while the lower reaches of the Huaihe River are considered too toxic even to touch.[570] In the case of the Ganges River in India there is the added problem of poorly cremated human remains.[571]

Not surprisingly, there is a serious lack of safe drinking water. In China a third of the rural population is affected,[572] while most large cities do not have drinking water that meets government standards.[573] A recent study showed that only half of that country’s river water and 65 per cent of its groundwater were drinkable.[574]

The greater level of air pollution in developing countries is reflected in World Bank estimates of PM10 concentrations in 1999 for cities with populations over 4 million.[575] For developing country cities, the median and average levels are just over three times greater than for their developed country counterparts, while their worst city has a level almost five times greater than the worst in the rich group.

At the present stage, countermeasures are limited and poorly enforced, and tend to be swamped by the growth in polluting activities. So one can expect the situation to get worse before it gets better. As these countries develop they will begin to turn the situation around much as the rich countries have done. There will be the resources available and the political pressure to do so. The process will be assisted by the fact that technologies just keep getting cleaner.

One of the more serious forms of air pollution in poor countries is the indoor variety, which even the early stages of development should remedy. Billions of people – particularly women and children – are exposed to high levels of particulates and CO through the widespread practice of cooking and heating with solid fuels on open fires or traditional stoves.[576] The solution lies in better stoves and ventilation where traditional fuels continue to be used, and the greater adoption of modern energy sources such as electricity, gas and kerosene.

Pollution Scares

As with anything to do with the environment, pollution has its share of alarmism and exaggeration. Some of the more notable examples in recent times relate to the effects of oil spills, acid rain and chemicals.

Oil Spills

When Saddam Hussein dumped 6 to 8 million tonnes of oil into the Persian Gulf in 1991, it was the biggest oil spill ever.[577] Greenpeace described it as an unprecedented disaster causing vast damage to the local ecosystem. Large scale maritime extinctions were expected and there was a general pessimism about the future recovery of the Gulf. Bahrain’s Health Minister claimed that the slick was “the biggest environmental crisis in modern times” and could spell “the potential end of wildlife in this area.” However, a study in 1992 showed that the oil had been substantially degraded and that the water contained similar levels of oil residue to coastal stretches in the US and UK. By 1995 marine scientists could report that the region was well on the way to biological recovery.

In 1989, a spill in Prince William Sound in Alaska had also been dubbed an eco-catastrophe. While only one twenty-fifth the size of the subsequent Gulf spill, it was the biggest ever in American waters. It caused heavy oiling of 200 miles of coastline and light oiling of another 1,100 miles, and the number of individual animals killed ran into the thousands. According to the National Oceanic and Atmospheric Administration, the region is now well along the road to recovery.[578]

Acid Rain

During the 1980s there was much ado about “acid rain” destroying forests. This is rain made more acidic than normal by NOx and SO2 emissions reacting with water to form nitric and sulfuric acid. It hit the headlines when a group of German scientists blamed it for severe forest death in Europe in the late 1970s and early 1980s. However, large scale and expensive research in North America and Europe over the next decade came up negative. This included experiments that exposed trees to various concentrations of acid rain. At the same time there was a reassessment of the damage to European forests which had prompted the alarm in the first place. It was determined that the extent of forest death had been exaggerated and that much of it was due to direct smoke pollution from local emissions.[579] And in the late 1990s it was possible to report that the predictions made by many in the 1980s of widespread death of European forests had not eventuated.[580]

Studies did, however, confirm that acid rain was polluting lakes. Damage to buildings and monuments was also confirmed, although it only brought forward required repair work by a number of years. Of course, whatever the effects of acid rain, there are, as discussed above, plenty of other good reasons to reduce NOx and SO2.

Cancer Epidemics

It is claimed that there is a cancer epidemic and that chemicals in our water supply and food are a major cause. It is certainly true that rates of cancer incidence and death have risen considerably in developed countries over the course of the last century. For example, in the US, cancer deaths per 100,000 in 1998 were over three times greater than in 1900 and almost one and a half times greater than in 1950.[581] However, because cancer is predominantly an illness of old age, the figures need to be adjusted for the fact that the population is aging. After making this adjustment, there is a slight increase in the cancer death rate from 1950 until 1990.[582] However, this increase is smoking related. If you also adjust for smoking, there is a decline in the cancer death rate of almost 30 per cent.

Lest one think that the decline in mortality is simply due to better cancer treatment, incidence rates have also been favorable. For all types combined, age-adjusted incidence rates have been stable since 1992.[583]

Claims have also been made about rising rates of specific forms of cancer. Breast cancer incidence rates in the US increased by 0.3 per cent per year between 1987 and 2002.[584] However, this is generally attributed to increases in already recognized risk factors. These include women putting off having their first child and having fewer children, and increased obesity levels.[585] The jump of almost 4 per cent per year between 1980 and 1987 is due largely to increased screening and earlier detection.[586]

There has been an increase in the age-adjusted incidence of prostate cancer. However, this appears to be due to increased and earlier testing, which means the disease is being detected in people who would otherwise never have been diagnosed with it because they would die of something else before it was picked up.[587] The National Cancer Institute, in a study of childhood cancers in the US, determined that incidence had been fairly stable during the 70s, 80s and 90s except for a one-off jump in the late 70s and early 80s which it attributed to improvements in diagnosis.[588] At the same time, death rates among children have steadily declined because of better treatment.

Endocrine Disruptors

With cancer fears losing their punch, a new phobia took off in the mid-1990s. The chemicals we were using were accused of being ‘endocrine disruptors’ with all kinds of dire effects. They were said to be acting like hormones and disrupting the endocrine systems of both humans and animals, leading to reproductive and other health problems. The scare really got going with a book called Our Stolen Future, which even earned a foreword from Vice President Al Gore.[589]

The evidence was based on reports of reproductive problems among wildlife exposed to chemical emissions, on the side effects of an estrogenic compound dispensed to pregnant women between 1940 and 1970, and on claims of a decline in sperm count and semen volume among men.

In the Great Lakes area there had been relatively high organochlorine contamination of wildlife populations and evidence of reproductive problems among birds and fish. Levels of the relevant pollutants have fallen considerably in recent decades with the banning of their use. This has been accompanied by a decline in the reproductive problems and an increase in wildlife populations.[590]

A story that received a lot of coverage was the case of high chemical exposure among alligators in Lake Apopka, Florida.[591] Among the reproductive problems was decreased penis size. However, it does not seem appropriate to draw any conclusions from this about general risks, given that the reptiles had been exposed to particularly high levels of pollution, namely a major spill from a local chemical plant plus sewage effluent and runoff of agricultural chemicals.

Diethylstilbestrol, a potent estrogenic compound used between 1940 and 1970 to prevent spontaneous abortions, did cause reproductive illnesses among the offspring. However, the drug was administered at high therapeutic doses, and the exposure to endocrine disruptors was thousands of times higher than any exposure from trace levels of chemicals in our food or water supply.[592]

Hormone disruption by chemicals was also blamed for a purported decline in sperm count and semen volume among men. Naturally this received considerable publicity. According to a 1992 analysis which looked at 61 studies in various locations over the period 1938 to 1990 there had been a 40 per cent drop in sperm count and a 20 per cent drop in semen volume.[593]

However, this study was shown to be unsound because it did not take into account the fact that sperm count and semen volume vary from one place to another at any given time, and that the change over time also varies, with some places experiencing falls, others rises and others staying the same. So any trend found in studies over time will depend on the places included. And it just so happened that the earlier period of “high” sperm count relied overwhelmingly on a small number of New York studies, a location that has always had an above-average sperm count, with the cold winters considered a possible reason.

Furthermore, any drop in sperm or semen levels would at least in part be explained by the significant increase in sexual activity over this period, with or without partners. There is known to be a strong positive correlation between these levels and the period of abstinence from ejaculation.[594] The case is also weakened by the fact that there has been no drop in semen quality among domestic animals over recent decades.[595] Finally, if we were having reproductive woes, one would expect male infertility rates to rise. This has not been the case: in the US the rate has remained fairly constant over the last 30 years, while in the UK it has declined.[596]

Loss of Forests

The state of our forests is a common concern. Forests cover almost 4 billion hectares, or 30 per cent of the ice-free land area.[597] The proportion of forest cover varies considerably from one country or region to another. South America and the Russian Federation are both 50 per cent forest. France, Germany, Poland, Spain and Italy are all near 30 per cent, while the UK, Ireland and Holland are much lower at around 10 per cent. For Sub-Saharan Africa and North America it is a third, and for Australia and Asia around 20 per cent. Even “crowded” India is a quarter covered in forest and woodland. In some countries the proportion is particularly high. For example, Brazil, Sweden, Papua New Guinea, Japan, South Korea, D.R. Congo and Malaysia are at least 60 per cent forest.

Tropical forests are the primary concern. While forests in other areas are increasing in size, tropical forests are shrinking. Furthermore, they have far greater biodiversity. A large proportion of all species are found in tropical rainforests, and for tree species the proportion is particularly high: tropical forests have thousands of tree species while forests in the temperate and boreal regions have very few. Iceland, for example, has three.

Tropical forests make up about half the forest area,[598] of which South America has over 40 per cent, Africa 33 per cent and Indonesia around 5 per cent. The Amazon comprises 53 per cent of tropical rainforests. The loss of tropical forest is in the order of 11 million hectares per year.[599] If that level of loss were to continue until 2050, the current area would be reduced by almost a quarter. At its current level of depletion, Africa would lose 30 per cent of its forests by 2050. In South America (i.e., mainly the Amazon) the figure is 22.5 per cent and for Southeast Asia 57 per cent. For Indonesia, which is presently responsible for almost 70 per cent of forest loss in Southeast Asia, a continuation of the current level of depletion would leave very little by mid-century.
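
The aggregate projection can be checked from the figures already quoted; the regional percentages rest on the FAO data cited in the notes rather than anything reproduced here. A rough sketch, assuming roughly 45 years to 2050:

```python
# Aggregate check of the tropical deforestation projection, using the figures
# quoted above: roughly half of the 4 billion hectares of forest is tropical,
# and about 11 million hectares are lost each year.
tropical_ha = 0.5 * 4e9       # ~2 billion hectares of tropical forest
annual_loss_ha = 11e6
years_to_2050 = 45            # assumption: roughly 2005 to 2050

loss_share = annual_loss_ha * years_to_2050 / tropical_ha
print(loss_share)             # ~0.25, i.e. almost a quarter of the current area
```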

The situation is different once we move away from the tropics to the forests of North America, Europe and East Asia. In 1920, forest cover in the US was 70 per cent of what it had been in 1600. Re-growth since then has brought the figure up to 75 per cent.[600] While there is very little ancient undisturbed forest in Europe, new forests have grown considerably in recent times. Leaving out Russia, which has been static, forests in Europe grew in area by 7 per cent between 1990 and 2005[601] and by a third since 1950.[602] China claims to have increased its forest area by 25 per cent between 1990 and 2005. It is certainly true that these non-tropical forests do not have the biodiversity they had when we were hunters and gatherers; nevertheless, we can still conclude that the situation overall in these regions is not getting worse and in some cases is getting better.

The prospects for ending wholesale forest destruction in the tropics depend very much on the prospects for economic, social and political progress over the next 50 years. The children of subsistence farmers need to be brought into the modern economy. Forest related industries such as logging and cattle raising need to decline in importance and influence. People need to be affluent enough to be concerned about forest conservation; for poor people it can be something of a luxury item. More effective international action is also important. This includes restrictions on international trade and investment in products resulting from the excessive exploitation of rainforests, and greater contributions by rich countries to the cost of conservation.

Mass Extinctions

Many species are threatened by our destruction of their habitat and our introduction of exotic competitors and predators. Some people talk of mass extinctions. There are around 1.5 million recorded species and the total number is thought to be between 5 and 10 million.[603] These are a mere fraction of all the species that have ever existed, with most having gone extinct long before humans appeared on the scene.

Since 1600 there have been just over a thousand documented extinctions.[604] These include around 100 each of mammals, birds, fish and insects, a couple of hundred mollusks and almost 400 vascular plants. These are extinctions that were sufficiently visible for the absence of the species to be noticed and for some time to be spent looking for it wherever it may have existed. So we can be quite sure that the real number is a lot higher.

The primary concern is with the tropical rainforests, which are thought to be home to at least half of all species. The extent of extinctions depends on the level of forest area loss and its relationship to species loss. Some biologists see a very strong relationship. Typical is the WWF biologist Thomas Lovejoy, who surmised that the destruction of half the rainforests would lead to the extinction of a third of their species.[605] If rainforests hold half of all species, that is a species loss of 16.5 per cent. If we take the total number of species to be 6 million, that is a loss of almost one million. The average annual figure then depends on how long it takes to reduce the forests to half their current size. For example, 25 years would give an annual figure of around 40,000.
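
The arithmetic behind this kind of estimate is simple, as the sketch below shows; the inputs are those quoted in the paragraph above.

```python
# The arithmetic behind the Lovejoy-style estimate described above.
total_species = 6e6                  # assumed total number of species
rainforest_share = 0.5               # share of species living in tropical rainforests
loss_if_half_forest_cleared = 1 / 3  # share of rainforest species lost if half the forest goes

species_lost = total_species * rainforest_share * loss_if_half_forest_cleared
print(species_lost)                  # ~1 million (16.5 per cent of all species)
print(species_lost / 25)             # ~40,000 a year if the clearing took 25 years
```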

There are reasons for placing bounds on our pessimism. As already discussed above, there are grounds for hoping that deforestation will slow in coming decades and that we will still be left with large areas. Also, the relationship between area loss and extinction may not be as strong as is often feared.

Tropical forest loss to date of somewhere around 20 per cent has produced no clear evidence of large numbers of extinctions. The Atlantic Rainforest in South America has been reduced to less than 10 per cent of its original size; nevertheless, there have been no identified animal or plant extinctions. There are indeed a lot of endangered species, however there is some optimism about managing the region to avoid or minimize future species loss.[606] Measures include increased levels of protection, the identification of key biodiversity areas, and the linking of forest fragments which are otherwise too small for long-term species survival with corridors that are being reforested or put to other biodiversity-friendly uses.

The experience of the US is also instructive. The destruction of 99 per cent of its eastern primary forest over the last 200 years led to the known extinction of one forest bird,[607] while the stresses placed on the forests of the Pacific Northwest during the post-war period led to no known vertebrate extinctions.[608]

Rowan Martin, a wildlife expert with 35 years’ extensive knowledge of the land vertebrate population of southern Africa, knows of no extinctions during that period.[609] Given that the region is 3 per cent of the total world land area and that part of it is in the tropics, you would expect it to have well over 3 per cent of all species and something near its share of extinctions. Also worthy of note is the fact that endangered species lists have not been good predictors of extinctions, suggesting a greater than expected resilience.

Documented extinctions worldwide also suggest a lower rate. Fairly reliable data for mammals and birds suggests that their extinction rate has been about one per year in recent times.[610] So if other taxa were to exhibit the same liability to extinction and there were a total of 10 million species, the annual rate of extinction would be just over 700 species per year.[611] This is high but nothing like the figures that often get bandied about. At that rate it would take well over a century to lose even 1 per cent of species.
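
The extrapolation works roughly as follows. The count of about 14,000 mammal and bird species used below is an assumption adopted for illustration, not a figure from the text; it yields a rate consistent with the "just over 700" quoted above.

```python
# Extrapolating from documented mammal and bird extinctions, as described above.
mammal_bird_species = 14_000   # assumed count of mammal and bird species (illustrative)
extinctions_per_year = 1       # roughly one documented mammal or bird extinction a year
total_species = 10e6           # upper estimate of total species

rate_per_species = extinctions_per_year / mammal_bird_species
implied_losses_per_year = rate_per_species * total_species
print(implied_losses_per_year)                         # ~700 species a year
print(0.01 * total_species / implied_losses_per_year)  # ~140 years to lose 1 per cent
```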

As the human race progresses we will be in a position to become increasingly biodiversity-friendly. We will be able to apply more resources and better know-how to our conservation efforts. Eventually we will be able to positively contribute to biodiversity by creating new species and by spreading life beyond the planet where it can adapt and evolve. And when the next ice age comes, people (or other intelligent beings) who are the product of the present industrial and technological trajectory will be in a position to rescue threatened species. This is a job well beyond people living “close to nature”.

We should also keep in mind that human settlement is not necessarily at odds with biodiversity. Our cities are full of flora and fauna. We have birds, small mammals, pets, gardens and parkland. Weeds have no trouble popping through cracks in concrete and asphalt, and if you lift a rock something is sure to crawl out from under it. There is no shortage of flies, moths, spiders, ants, beetles, slugs and worms.

The biosphere’s history of resilience is something else to consider. There have been some large scale extinctions in the past and nature has always managed to bounce back. These past episodes seem to suggest that lost diversity is eventually restored after 10 million years or so.[612] This is not a long time in the life of the biosphere. Furthermore, each phase of extinctions has been followed by even greater diversity.[613]

 

NOTES

[274] CIA Fact Book, 2004 estimate.

[275] Average income here refers to GDP per capita not income per worker. The figures used are for 2004 and come from the on-line CIA Fact Book.

[276] “The Great Divide,” The Economist, Mar 3rd 2005

[277] World Bank, 2006: 8.

[278] World Bank 2006: 8.

[279] World Bank 2006: 8.

[280] World Bank 2006: 8; The Groningen Growth and Development Centre: http://www.ggdc.net/dseries/Data/TED05II.xls

[281] The Groningen Growth and Development Centre: http://www.ggdc.net/dseries/Data/TED05II.xls

[282] IEA. 2006a: 48. Primary energy is the energy content of a resource at the point of extraction. In the case of coal, geothermal and uranium it is their thermal energy prior to their being converted into electricity, and oil prior to refining. For wind, solar panels and hydropower it is the electricity produced.

[283] The rich countries are here defined as OECD members except for the Czech Republic, Hungary, Mexico, Poland, the Slovak Republic and Turkey. In 2004 their per capita consumption was 5.49 toe. For the poorer countries per capita consumption was 1.13 toe. See IEA 2006a: 48, 57.

[284] EIA 2006: 1.

[285] EIA 2005: 1.

[286] The US per capita level is 7.91 toe. See IEA 2006a: 57.

[287] EIA 2006: 83.

[288] IEA 2006a: 6.

[289] IEA. 2006a: 6.

[290] US Geological Survey World Petroleum Assessment 2000 – Description and Results, US Geological Survey Digital Data Series 60.

[291] In 2005 we consumed 29.6 billion barrels. See BP 2006: 8.

[292] Key advocates of this view are Colin J. Campbell, Kenneth S. Deffeyes and Jean Laherrere.

[293] One of the more renowned optimists is Michael Lynch of Strategic Energy & Economic Research Inc.

[294] The term ‘oil’ has, to date, been synonymous with conventional crude oil, a liquid mixture of hydrocarbons that percolates through porous strata and flows readily up drilled boreholes.

[295] Alberta Chamber of Resources 2004: 41.

[296] Alberta Chamber of Resources 2004: chapter 4; Athabasca Oil Sands Web Page: http://collections.ic.gc.ca/oil/index1.htm

[297] National Energy Board 2004: 4.

[298] Natural Resources Canada: http://www2.nrcan.gc.ca/es/ener2000/online/html/chap3a_e.cfm

[299] US Geological Survey, “Heavy Oil and Natural Bitumen – Strategic Petroleum Resources”, USGS Fact Sheet FS-070-03, August 2003.

[300] Email communications with Alberta oil sands executive.

[301] Bunger et al. 2004.

[302] A figure of 2.6 trillion bbls. is given by Bunger et al.2004. A figure of 3.5 trillion bbls. is given by Williams 2003.

[303] Williams 2003:20

[304] Bunger et al. 2004.

[305] IEA 2001a: 52.

[306] Jalonick, Mary Clare, “Bill contains incentives for new coal-conversion plant”, Associated Press, 30 September 2005.

[307] Iran Daily, “Indians Look to Make Oil From Coal”, July 30 2005. http://www.iran-daily.com/1384/2336/html/energy.htm

[308] IEA 2006a: 6, 24.

[309] IEA 2003: table 6.8.

[310] Goldemberg 2000: 148.

[311] IEA 2003.

[312] Goldemberg 2000: 148.citing BGR (Bundesanstalt fur Geowissenschaften und Rohstoffe [Federal Institute for Geosciences and Natural Resources] 1998. Reserven, Ressourcen und Verfugbarkeit von Energierohstoffen 1998 [Availability of Energy Reserves and Resources 1998]. Hanover, Germany.

[313] IEA 2006a: 6

[314] EIA: http://www.eia.doe.gov/pub/international/iea2003/table81.xls

[315] US Geological Survey World Petroleum Assessment 2000 – Description and Results, US Geological Survey Digital Data Series 60; International Energy Agency, World Energy Outlook 2001, p. 142.

[316] USGS, Coal-Bed Methane: Potential and Concerns, USGS Fact Sheet FS-123-00 October 2000.

[317] USGS, Coal-Bed Methane: Potential and Concerns, USGS Fact Sheet FS-123-00 October 2000.

[318] US Geological Survey Energy Resource Surveys Program, Coalbed Methane – An Untapped Energy Resource and an Environmental Concern, USGS Fact Sheet FS-019-97.

[319] Kuuskraa and Bank 2003: 34.

[320] Kuuskraa and Bank 2003: 34.

[321] Bundesministerium fur Wirtschaft und Arbeit 2002. Reserves, Resources and Availability of Energy Resources 2002, short version, page 9.

[322] World Energy Council, New Technology for Tight Gas Sands, http://www.worldenergy.org/wec-geis/publications/default/tech_papers/17th_congress/2_1_16.asp

[323] Goldemberg 2000: 147.

[324] United States Geological Survey, Natural Gas Hydrates – vast resource, uncertain future, USGS Fact Sheet FS–021–01, March 2001

[325] United States Geological Survey, Natural Gas Hydrates – vast resource, uncertain future, USGS Fact Sheet FS-021-01, March 2001.

[326] IEA 2001a: 397.

[327] United States Geological Survey, Natural Gas Hydrates – vast resource, uncertain future, USGS Fact Sheet FS-021-01, March 2001.

[328] US Congress, Report of the Methane Hydrate Advisory Committee on Methane Hydrate Issues and Opportunities Including Assessment of Uncertainty of the Impact of Methane Hydrate on Global Climate Change, December 2002, p.8.

[329] Coal contains about 80 percent more carbon per unit of energy than gas does, and oil contains about 40 percent more. Congressional Budget Office 2003: 11.

[330] Mahlman 2001: 10.

[331] Mahlman 1998: 88; Mahlman 2001: 8.

[332] Lomborg 2001: 269.

[333] Lewis jr 2004: 6.

[334] De Freitas 2002: 304-6

[335] De Freitas 2002: 305; Essex and McKitrick 2002: 139.

[336] Lewis Jr 2004: 7.

[337] De Freitas 2002: 306.

[338] http://www.worldclimatereport.com/index.php/2005/03/03/hockey-stick-1998-2005-rip/

[339] Soon et al. 2001: 11.

[340] Michaels 2004: 232.

[341] Michaels 2004: 232

[342] Reuters October 26, 2002

[343] Doran et al. 2002.

[344] Virtual Climate Alert May 21, 2002 Vol. 3, No. 16

[345] Joughin and Tulaczyk, 2000.

[346] Pittock 2003: 35; Mahlman 2001: 17.

[347] Cited by Michaels 2004: 55.

[348] Cited by Michaels 2004: 60.

[349] Michaels 2004: 60.

[350] Cited by Michaels 2004: 58-59.

[351] Chylek 2004: 201.

[352] Michaels 2004: 6.

[353] Lewis Jr 2004: 22.

[354] Michaels 2004: 37-8.

[355] Michaels 2004: 38-9.

[356] http://www.the-south-asian.com/Aug2004/Gangotri_glacier.htm

[357] ‘The Climate Himalayan Snow Job” March 17, 2005 http://www.worldclimatereport.com/index.php/2005/03/17/the-great-himalayan-snow-job/

[358] Michaels 2004: 94.

[359] Sherwood and Idso 2004: 44, 47.

[360] http://www.aosb.org/PDF/OPP_final_report_to_AOSB.pdf

[361] Michaels 2004: 42-43.

[362] Michaels 2004: 33.

[363] IPCC. 2001: 33.

[364] Lewis Jr 2004: 9.

[365] Michaels 2004: 118.

[366] Michaels 2004: 172-3.

[367] Romanovsky et al. 2002.

[368] Kasper and Allard 2001.

[369] http://www.co2science.org/scripts/CO2ScienceB2C/subject/c/carbongrasslands.jsp

[370] Correspondence Nature 428, 601 (April 8, 2004)

[371] Richard Seager of Columbia’s Lamont-Doherty Earth Observatory and David Battisti of the University of Washington. See Tobin, Mary. Columbia Research Dispels 150 Years of Thinking – Mild Winter Conditions in Europe Are Not Due to the Gulf Stream, Columbia News, Feb 05, 2003. http://www.columbia.edu/cu/news/03/02/richardSeager_research.html

[372] Stern 2006: 170.

[373] At the time of writing, methane levels in the atmosphere appear to have stabilized.

[374] Stern 2006: 170. Stern gives a figure of 42 gigatonnes of CO2, which equals 11.45 GtC. 57 per cent of that figure gives 6.53 GtC.

[375] Lackner et al. undated: 4.

[376] Loftus 2003: B.5.A.

[377] Australian Coal Association 2004: 29.

[378] Lackner et al. undated: 6; American Energy Independence Web Site, “CO2 Recycling Capturing Carbon Dioxide Directly from the Air” http://www.americanenergyindependence.com/recycleco2.html

[379] Lackner et al. 1999; American Energy Independence Web Site, “CO2 Recycling Capturing Carbon Dioxide Directly from the Air” http://www.americanenergyindependence.com/recycleco2.html

[380] Project Facts, US Department of Energy, National Energy Technology Laboratory, “Recovery & sequestration of co2 from stationary combustion systems by photosynthesis of microalgae”, 11/2003.

[381] Goldemberg 2000: 289.

[382] Goldemberg 2000: 289.

[383] Goldemberg 2000: 289.

[384] Yegulalp and Lackner 2004.

[385] Lackner 2003; Winters 2003.

[386] Herzog et al. undated: 4.

[387] Goldemberg 2000: 276.

[388] World Energy Council 1994: 77.

[389] Rushing 2001.

[390] It is called the Australian Solar Tower Project and at the time of writing was at the final feasibility stage.

[391] The technology is being developed and tested by Oak Ridge National Laboratory in the US http://www1.eere.energy.gov/solar/solar_lighting.html

[392] In 2004 world electricity production was 15,985 TWh and per capita consumption in rich countries averaged 9,710 KWh. See IEA 2006a: 48. The rich countries are here defined as OECD members except for the Czech Republic, Hungary, Mexico, Poland, the Slovak Republic and Turkey.

[393] http://energy.saving.nu/solarenergy/energy.shtml.

[394] KPMG, Bureau voor Economische Argumentatie, 1999.

[395] IEA 2006a: 55. There are a billion kWh to a TWh.

[396] http://www.iea.org/Textbase/stats/

[397] http://www.iea.org/Textbase/stats/

[398] The Dutch figure is 6823 kWh. See IEA 2006a: 55.

[399] 10 per cent of 1505 multiplied by 20 = 3010 which is 31 per cent of 9710.

[400] IEA 2006b: 11.

[401] Renewable Energy Policy Project, 2003: 5.

[402] Grubb and Meyer 1994 and World Energy Council 1994.

[403] “Study of Offshore Wind Energy in the EC”, Garrad Hassan & Germanischer Lloyd, 1995, cited in European Wind Energy Association and Greenpeace undated.

[404] Hagerman 2001: 1

[405] World Energy Council 2001a; Fredriksson 2003: 4.

[406] World Energy Council. 1994.

[407] Thorpe 1999: 13.

[408] Thompson et al. 2001 or later: 6.

[409] Fredriksson 2003: 4.

[410] Ocean Power Delivery Ltd: http://www.oceanpd.com/default.html

[411] IEA 2006a: 18.

[412] Goldemberg 2000: 155.

[413] This section draws for its information entirely on Goldemberg 2000: chapter 5.

[414] IEA 2006a: 24 and 6. The primary energy equivalent of nuclear electricity is calculated by assuming a 33.3 per cent conversion efficiency from heat to electricity. See IEA 2006a: 59.

[415] Italy imports from France electricity which is produced from nuclear power.

[416] http://www.uic.com.au/reactors.htm

[417] Hore-Lacy 2000.

[418] http://www.uic.com.au/reactors.htm. As well as reactors for commercial energy generation, there are about 280 small reactors used for research and for producing isotopes for medicine and industry, and over 400 small reactors powering ships, mostly submarines. See Hore-Lacy 1999: v.

[419] NEA 2001: 15.

[420] http://www.uic.com.au/reactors.htm

[421] NEA 2001: 138.

[422] IEA 2001b: 130.

[423] Cohen 1990:163-164.

[424] http://www.world-nuclear.org/info/inf08.htm; Hoffman 2001.

[425] Hoffman 2001.

[426] Wardell 2001.

[427] Abraham 2002: 5; Wardell 2001.

[428] Grimston and Beck 2000: 29.

[429] UIC 2006a.

[430] The 441 reactors currently produce 28.78 EJ (687 Mtoe) of primary energy (electricity divided by 0.333).

[431] NEA 2004: 13-22.

[432] http://www.uic.com.au/reactors.htm

[433] NEA 2004: 20.

[434] This was only $95 million in 2002, NEA 2004: 9.

[435] NEA 2004: 20.

[436] Garwin and Charpak 2001: 166.

[437] NEA 2001: 30.

[438] UIC 2003.

[439] NEA 2004: 22.

[440] NEA 2004: 22.

[441] NEA 2004: 22.

[442] NEA 2004: 22.

[443] NEA 2004: 22.

[444] http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/E/Elements.html

[445] Hore-Lacy 1999: 39.

[446] This follows from the fact that the current 6.5 per cent share requires 65,000 tonnes of uranium.

[447] Daley 1997: 60.

[448] Cohen 1990: 183.

[449] American Nuclear Society 2001: 7.

[450] Sims 1990: 40-41.

[451] Garwin and Charpak 2001: 85.

[452] Walker 2000: 48.

[453] IEA 2001b: 171.

[454] Cohen 1990: 114.

[455] van der Zwaan: 20.

[456] Hore-Lacy 1999: 59.

[457] Cohen 1982: 73; Walker 2000: 141; Hodgson 1999: 64.

[458] Sims 1990: 85.

[459] Sims 1990: 85.

[460] Cohen 1990: 69.

[461] Rutherford 2002b.

[462] Rutherford 2002a: 19.

[463] GAO 2000: 10.

[464] Kaku and Trainer 1982: 29.

[465] Cohen 1998:525.

[466] Chernobyl Forum 2003-2005: 7.

[467] Hodgson 1999: 68.

[468] Hodgson 1999: 68.

[469] Oliver 2001.

[470] Kursunoglu 1998: 39.

[471] Hodgson 1999: 63-65.

[472] American Nuclear Society 2001: 7.

[473] Hodgson 1999: 65.

[474] Chen 2004.

[475] 300 millirems in the US.

[476] van der Zwaan et al. 1999: 259; Cohen 1990: 205.

[477] Walker 2000: 52; Sims 1990: 40.

[478] Walker 2000: 31; Sims 1990: 40.

[479] Sims 1990: 243; Walker 2000: 117.

[480] Cohen 1990: 54.

[481] Walker 2000: 139.

[482] Walker 2000: 139.

[483] Sims 1990: 108.

[484] Garwin and Charpak 2001: 171-172.

[485] Hodgson 1999: 79.

[486] Kaku and Trainer 1982: 82-83.

[487] Nero 1982: 87-88.

[488] Cohen 1990: 86-89.

[489] Cohen 1990: 80.

[490] Cohen 1990: 86-89.

[491] Cohen 1990: 77.

[492] Cohen 1990: 86-89.

[493] UIC 2006b.

[494] http://www.uic.com.au/nip20.htm

[495] UIC 2006b.

[496] Hoffman 2001.

[497] Wardell 2001.

[498] IEA 2001b: 175.

[499] IEA 2001b: 197.

[500] Sims 1990: 175.

[501] Hore-Lacy 1999: 32.

[502] Sims 1990: 164.

[503] IEA 2001b: 197.

[504] Hore-Lacy 1999: 56.

[505] Holt 2003: CRS-5.

[506] IEA 2001b: 195.

[507] Hodgson 1999: 70-2.

[508] Holt 2003: CRS-6.

[509] Holt 2003: CRS-6.

[510] Hore-Lacy 1999: 46.

[511] IEA 2001b: 196.

[512] http://www.nrc.gov/waste/low-level-waste.html

[513] Cohen 1990: 206.

[514] IEA 2001b: 196.

[515] IEA 2001b: 51.

[516] Hodgson 1999: 73.

[517] http://www-ns.iaea.org/appraisals/west-kara.htm

[518] Cohen 1990: 179.

[519] Cohen 1990: 179.

[520] Cohen 1990: 179.

[521] Cohen 1990: 184.

[522] Cohen 1990: 184.

[523] Cohen 1990: 184.

[524] Hore-Lacy 1999: 48.

[525] Cohen 1990: 184.

[526] “Nuclear Energy Industry Salutes Senate for Approving Yucca Mountain.” PR Newswire July 9, 2002.

[527] Cohen 1990: 221.

[528] World Energy Council 2001b

[529] US Geological Survey, Geothermal Energy: Clean Power from the earth’s heat, Circular 1249, 2003, p. 17.

[530] World Energy Council 2001b.

[531] World Energy Council 2001b.

[532] Energy and Geoscience Institute University of Utah 2001 or later: 4.

[533] Energy and Geoscience Institute University of Utah 2001 or later: 6.

[534] US Geological Survey, Geothermal Energy: Clean Power from the earth’s heat, Circular 1249, 2003 p. 21.

[535] http://www1.eere.energy.gov/ba/pdfs/geo_hotdry_rock.pdf, page 3-40.

[536] Mock et al. 1997.

[537] Mock et al. 1997.

[538] Armstead and Tester 1987: 51-52.

[539] Armstead and Tester 1987: 56.

[540] The estimate takes into account the fact that hotter countries would have no use for low grade heat for space heating and would only be interested in rocks hot enough for electricity.

[541] Mock et al. 1997: Table 1.

[542] Armstead and Tester 1987: 44.

[543] The 10 kilometers beneath each square kilometer provides 0.0215 quads for a 1°C drop in temperature and so 0.001075 quads for a 0.05°C drop. Dividing 445 by 0.001075 gives 413,953.

[544] Duchane 1996: 3.

[545] Duchane 1996: 4.

[546] This uses the USGS’s concept of the reserve base, which includes those resources that are currently economic (reserves), those that are marginally economic, and some that are currently uneconomic. The uneconomic part would require some rise in prices or the adoption of some technological improvements. See USGS 2006: 195.

[547] Present iron ore and bauxite reserves would last into the 2080s or 90s with an average annual growth rate in consumption of 2 per cent. This provides a 6.5-fold increase in annual consumption, which would be more than enough to ensure abundance in a world of 10 billion people. Potash and phosphate, which are used in fertilizer, have present reserves that would last into the next century at that growth rate. However, it is likely that food abundance will not require such a large increase. See USGS 2006.
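A rough sketch of the compounding behind the 6.5-fold figure, assuming consumption grows at 2 per cent a year; the roughly 95-year horizon (mid-2000s to about 2100) is an illustrative assumption, and shorter horizons give somewhat smaller factors:

# Compounding a 2 per cent annual growth rate in consumption (the rate is
# from the note; the ~95-year horizon is an illustrative assumption).
GROWTH_RATE = 0.02
YEARS = 95

factor = (1 + GROWTH_RATE) ** YEARS
print(round(factor, 1))  # about 6.6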

[548] Lomborg 2001: 140.

[549] Lomborg 2001: 141.

[550] Lomborg 2001: 142 and 145.

[551] USGS 2006: 129.

[552] USGS 2006: 125.

[553] USGS 2006: 183.

[554] http://www.epa.gov/airtrends/2005/econ-emissions.htm

[555] http://www.epa.gov/airtrends/sixpoll.html

[556] http://www.epa.gov/airtrends/pmreport03/pmlooktrends_2405.pdf.

[557] http://www.epa.gov/airtrends/2005/econ-emissions.html

[558] http://reports.eea.eu.int/topic_report_2003_4/en/Topic_4_2003_web.pdf

[559] http://europa.eu/scadplus/leg/en/lvb/l28159.htm

[560] Depletion of oxygen in a nutrient-rich body of water by the growth of too much plant life, leading to the death of animal life.

[561] http://www.the-river-thames.co.uk/environ.htm

[562] International Commission for the Protection of the Rhine 2004.

[563] http://www.nyc.gov/html/dep/html/news/hwqs.html

[564] http://www.epa.gov/glnpo/glindicators/fishtoxics/topfisha.html

[565] http://www.epa.gov/glnpo/collaboration/taskforce/factsheet.html

[566] Lomborg 2001: 194.

[567] Lomborg 2001: 195.

[568] International Tanker Owners Pollution Federation, http://www.itopf.com/stats.html

[569] Beijing, March 12 (Xinhuanet).

[570] Xinhua News Agency, October 19, 2004.

[571] http://www.thewaterpage.com/ganges.htm#Pollution

[572] Xinhua News Agency, March 23, 2005.

[573] http://www.chinaenvironment.net/sino/sino5/page12.html

[574] Xinhua News Agency, March 23, 2005.

[575] http://siteresources.worldbank.org/INTRES/Resources/AirPollutionConcentrationData2.xls

[576] http://www.who.int/indoorair/en/

[577] This paragraph relies on Lomborg 2001: 191-192.

[578] http://response.restoration.noaa.gov/bat2/recovery.html

[579] Lomborg 2001: 180.

[580] According to FAO 1997: 21, “the widespread death of European forests due to air pollution which was predicted by many in the 1980’s did not occur.” Cited in Lomborg 2001: 180.

[581] Lomborg 2001: 217.

[582] Lomborg 2001: 217.

[583] Edwards et al. 2005.

[584] American Cancer Society, Breast Cancer Facts and Figures 2005-2006, http://www.cancer.org/downloads/STT/CAFF2005BrF.pdf

[585] Lomborg 2001: 221.

[586] American Cancer Society, Breast Cancer Facts and Figures 2005-2006, http://www.cancer.org/downloads/STT/CAFF2005BrF.pdf

[587] Lomborg 2001: 225.

[588] Lomborg 2001: 225, citing NCI statistics.

[589] Colborn et al. 1996.

[590] Safe 1999: 193-194.

[591] Safe 1999: 193.

[592] Safe 1999: 191-192.

[593] Lomborg 2001: 238-241.

[594] Lomborg 2001: 240.

[595] Safe 2001: 198.

[596] Lomborg 2001: 241.

[597] FAO 2005: 137.

[598] FAO 2005: table 3.

[599] Depletion figures derived from FAO 2006: table 4.

[600] Hollander 2003: 128.

[601] FAO 2006: table 4.

[602] http://www.fao.org/DOCREP/003/X6953E/X6953E05.htm

[603] Martin 1999: 207 suggests a range of 5-10 million current species, with a greater likelihood of being closer to 5 than 10. Here is another view: “For the more conspicuous birds and mammals, the number of species is known quite accurately, both for tropical species as well as temperate ones. It is estimated that at least 98 per cent of birds have been discovered. For birds there are 2-3 times as many tropical species as temperate ones. For other organisms most of the named species (1.4 million) are from temperate countries. If we assume that the same factor applies to other organisms as to birds, then there are 2-3 times this many tropical species (2.8-4.2 million), giving an estimated total species of 4.2-5.6 million.” http://darwin.bio.uci.edu/~sustain/bio65/lec10/b65lec10.htm#_Number_of_Species

[604] Lomborg 2001: 250.

[605] Lomborg 2001: 252.

[606] Conservation International, Biodiversity Hotspots, Atlantic Forest: http://www.biodiversityhotspots.org/xp/Hotspots/atlantic_forest/conservation.xml

[607] Lomborg 2001: 254.

[608] Easterbrook 1995: 559.

[609] Martin 1999: 208-210.

[610] Simons 1996: 446, citing Reid and Miller 1989.

[611] Lomborg 2001: 250 gives a figure of 4,500 mammals and 9,500 birds; 1/14,000 multiplied by 10 million equals 714.

[612] Reporting on the work of UC Berkeley geologist James Kirchner, http://www.rainforests.net/diversification.htm

[613] Lomborg 2001: 249.