Monday, March 19, 2007

From: (Mitchell Porter)
Subject: Carbonist Manifesto
Summary: CO2 good, O2 bad
Organization: Nyx, Public Access Unix @ U. of Denver Math/CS dept.
Date: Tue, 1 Dec 92 05:20:05 GMT
Lines: 338

snarfed from the Extropians mailing list

The Consequences of Gaia
- or -
The Carbonist Manifesto

Copyright (C) 1992 Jeff Berkowitz
Revision 1 of 30 Nov 92

Permission to redistribute this work is granted
provided that (1) it's unmodified, (2) it's all
there ("in entirety"), and (3) my name and the
copyright notice are still attached. The fact
that Sequent's name appears in my e-mail address
has no more significance than if I gave you my
work phone number; it's just a way to reach me,
not an endorsement.

This essay describes some philosophical, ethical, and cosmological
implications of the Gaia hypothesis. Although loosely grounded in
recent research in ecology and paleoclimatology, this is clearly an
essay and not a scientific paper. It is also distinctly tongue in
cheek, but the author has spent some serious moments wondering
whether the belief system outlined below is any more unreasonable
than certain "mainstream" viewpoints.

* * * *

Over the last few years, we've become familiar with the notion that
the biosphere is a dynamic, self-regulating system. In fact, an even
stronger assertion can be made: the biosphere, in its present oxygen-
rich form, is "a kind of superorganism that in its entirety maintains
the conditions that best suit life on earth." [1] This formulation,
known as the Gaia Hypothesis, was originally advanced by naturalists
James Lovelock and Lynn Margulis (novelist William Golding suggested
the name.)

A key point in the Gaia hypothesis concerns the stability of the
carbon cycle: that the level of atmospheric CO2 has been maintained
within relatively narrow limits for hundreds of millions of years.
This point is critical because the temperature of the biosphere
is largely controlled by the quantities of greenhouse gases
(primarily CO2) in the atmosphere. Various geophysical and
biological processes cooperate to lower the amount of free CO2
when the biosphere warms, and release CO2 when it cools. Thus
the assertion that CO2 has remained relatively constant is also
an assertion that the temperature has remained within relatively
narrow limits: at no time in the last billion years has the Earth
been a pressure cooker like Venus, or a snowball like Mars.

This essay contends that over geological time periods (in particular,
over the last 500 million years) the amount of available carbon in the
biospheric carbon cycle has slowly decreased. This decrease has been
driven by long term processes that remove CO2 from the atmosphere and
deposit it in rocks. Plants, for example, capture free carbon in the
molecules making up their tissues. As the plants die, their carbon
sometimes leaves the dynamic biological domain of the "carbon cycle"
and enters the geophysical domain as "hydrocarbon deposits" (coal,
oil, and seafloor sediments.)

Various pieces of indirect evidence exist for this slow decline in the
CO2 content of the atmosphere. Numerous plant species, for example,
thrive when subjected to an atmosphere lower in oxygen and higher in
CO2 than the current atmosphere of Earth. It is natural to suppose
that this beneficial effect is a holdover from the bygone era in which
the photosynthetic "apparatus" of these plants evolved; their initial
evolutionary "best fit" has slowly become a "misfit" due to decreasing
levels of atmospheric CO2 across intervening megalenia. Some direct
evidence of CO2 decrease also exists in the form of ice cores [1, p 42]
although it covers a much shorter time scale.

It is true that several arguments for the "essential stability" of the
atmospheric CO2 level exist, in addition to well-understood mechanisms
that "reverse the process" by removing carbon from the geophysical
domain and returning it to the biosphere (that is, the domains are not
truly separate.) It has been widely observed in the literature that
CO2 levels could never have _fallen_ to less than one-third of their
current value, nor could O2 levels have _risen_ significantly from
their current values, without deadly consequences for life [2].

The author finds these arguments too weak to deflect the main thrust
of this essay. None of the data presented in Garrels et al [2] appear
to rule out the possibility of somewhat higher atmospheric CO2 in ages
past. In fact, their discussion of the carbon cycle gives short shrift
to "reservoir five" - organic carbon locked up in sediment. It is the
relationship between humankind and this crucial reservoir five that we
will now continue to explore.

As we've shown, conventional reasoning links the general stability of
the carbon cycle to the general stability of biospheric temperature.
This same reasoning also serves to link the slow decrease in CO2 to an
equally slow (yet systematic) cooling of the biosphere. The Gaian
temperature "equilibrium" is not, in fact, stable. Across geological
eons, the Gaian feedback system achieves not stability, but rather a
slow cooling. Various evidence for this cooling trend exists [5].

Of course, the Gaian system is quite robust - as evidenced by its
repeated recovery from the effects of barrages of big rocks from
outer space. As Gaia ages, however, it is faced with the threat
of a calamity worse than the impact of a dinosaur killer. This is
the threat of "cold equilibrium", more colorfully called "the White
Earth scenario."

The White Earth scenario is part of the dirty laundry of the climate
modelling community. As noted in Gleick's "Chaos: Making a New
Science" [6], some seemingly reasonable (although simple) climate
models suffer from an odd characteristic of falling into a state in
which much of Gaia's free water is locked up in snow and ice; the
surface albedo of the planet is high; and no obvious mechanism for
increasing atmospheric greenhouse gas content or otherwise warming
the planet presents itself. Since this state does not seem to
correspond to anything in the historical record of the Earth, it
is regarded as anomalous and incorrect.

I suggest that we take a truly novel approach to these seemingly
valid models that drop into the White Earth state: let's presume that
they are valid, and that they are telling us something important. We
are at risk of "cold equilibrium" in the near geological term.

The ability of the paleoclimatological community to accumulate the
data leading to this conclusion and then avoid the conclusion itself
is quite astonishing. One paleoclimatologist [4] has the audacity to
draw a graph of Gaian temperature that trends smoothly downward for
many millions of years, but is suddenly consumed by a series of sharp
vertical excursions ("wiggles") over the past few hundred thousand.
It's similar to the graph of a coin which rolls slowly around in a
large circle, then rattles rapidly around in an oscillating spiral
for a few moments before coming to rest in an equilibrium state,
stable and dead - Gaia converges on the White Earth.

Now let's take a step back from this impending frozen death for a
moment. The key to the Gaian system is that it is *self-adjusting*.
As observers who have only recently had our eyes opened to this
wonderful concept, the Gaian model, we cannot hope to appreciate
the myriad ways this all-encompassing system might find to regulate
itself - to adapt to conditions and to maintain the equilibrium
necessary for life. We must not underestimate the ability of the
Gaian organism to evolve temporary organelles designed to deal
with crisis.

The last 100,000 years have seen some of the coldest times in the
500 million that have elapsed since the Ordovician period. These
100,000 years form less than 1/1000th of the intervening 500 million.

Oddly, they're the same 100,000 years that Homo sapiens sapiens has
existed on Earth.

Clearly, the biosphere has reached a point of crisis. The relatively
stable processes of self-regulation that have worked for the past
hundreds of millions of years have reached the limit of their ability
to correct.

In response to the impending crisis, Gaia evolved a solution. At the
edges of the ice sheets that flowed down over the northern hemisphere
during the last ice age, Gaia brought it to fruition: a short term
corrective process designed to restore the natural balance of free
carbon dioxide in the biosphere.


Yes, Man. Not the destroyer, the pillager, the environmental
rapist of the popular lore; an utterly different view of Man the
restorer, the savior, the solution to an environmental crisis more
dangerous to the biosphere than even the giant stone that ended the
age of dinosaurs. Man, whose only purpose in the Gaian system is
to extract carbon from the rocks and put it back in the atmosphere
where it belongs.

It is not far-fetched to suggest that the evolution of mankind is
an adaptive reaction. Organisms under stress are known to exhibit
all manner of extraordinary behaviors. It is likely that
Levenson's "one last coincidence" [1, p 56] is not a coincidence
at all -

From 1500 to 1850, throughout the Little Ice Age, the
nations of Europe expanded in population, power, technological
competence, military strength, economic endeavors,
in world rule - in virtually every measure of the vigor
of a civilization.

No, it is not a coincidence at all. It is, quite literally, our
destiny; that is why we are so well equipped to succeed and expand
our CO2-returning practices during periods of intense cold.

The climatological community has come close to the point:

Within a few centuries, we [human beings] are returning
to the atmosphere and oceans the concentrated organic
carbon stored in the sedimentary rocks over hundreds of
millions of years [3].

But as scientists, the community lacked the zeal to make that
final, fundamental leap from observation to motive - the observation
that this is not merely an unanticipated side effect of intelligence,
but the very reason for its existence.

Post-Gaian Environmental Ethics

Given this recognition of mankind's role in the Gaian system, it
is possible to construct a consistent system of environmental ethics
that might be called "Carbonism."

- Carbonists hold viewpoints that differ significantly from widely
accepted environmental viewpoints, but Carbonists are not wanton
destroyers of the environment. Carbonists do not favor poisoning
the environment with long-lived toxins such as heavy metals or
radioactive nuclides, the accumulation of solid waste, or any
other practice that does not contribute to the increase of CO2
in the biosphere.

Carbonists do hold, however, that other concerns are outweighed
by the prospect of even a small increase in the necessary CO2 in
Gaia's thinning veil.

- Anything that has the direct effect of taking carbon from the
geophysical reservoir and returning it to the atmosphere is good.

- Burning coal and oil for heating or to produce electric power
are the greatest goods. The temporary particulate pollution of the
atmosphere associated with these practices is of no consequence.

- Automobiles are very good. Automobiles contribute other
greenhouse gases, in addition to CO2, all at a minimal cost
in annoying particulate pollution.

- Burning wood is good. Logging is good. Slash-and-burn
agriculture is good, particularly when it is done to raise
ruminants (cud-chewing animals) which themselves contribute
nontrivial quantities of greenhouse gases to the atmosphere.

- Eating red meat is good. Consumption of red meat has an amazing
ability to act as an economic incentive for slash-and-burn
agriculture and the cultivation of ruminants in the tropical
regions of the world.

- Hydroelectric and nuclear power are bad. They replace beneficial
coal and oil burning. Dams are also harmful to fish, and harming
fish has no evident CO2 benefit - Again, Carbonists are not wanton
destroyers of the environment.

Aside from its unfortunate tendency to substitute for coal and oil
burning, nuclear power is fairly neutral. This reflects my personal
viewpoint that the nuclear economy is unlikely to result in
significantly poisoning the environment over time. Being anti-
nuclear is the appropriate Carbonist viewpoint no matter what your
feelings about the safety of nuclear power. Nuclear safety is a
non-issue for Carbonists.

- Natural gas is not good, although it is not as bad as hydroelectric
power (it does add small amounts of certain greenhouse gases to the
atmosphere.) In most cases, however, natural gas substitutes for
the significantly more beneficial practices of burning coal or oil,
and so should be avoided.

- Air pollution is good, particularly when it kills large areas
of forest (Central Europe, California, etc.) These dead trees
are far more likely to end up rotting and burning (and hence
contributing to atmospheric CO2) than to end up in the ground.

The Longer Term and the Meaning of Life

A short term consequence of the restoration of a proper CO2 balance
to the atmosphere will be a radical drop in the number of species
within the Gaian system. This holocaust will be caused by the
inability of most species to adapt to the rapid shift in climate,
by non-CO2 pollution occurring as a side effect of CO2 boosting,
and by related effects.

The loss of species might well approach the worst of the dinosaur
killer episodes in scope - perhaps 75% or more of Gaia's individual
species will disappear in a period of only a few centuries.

In the longer term, what of it? It's happened before, and it will
happen again. The Gaian system has a proven ability to recover
from massive species loss. The destruction of 75% of Gaia's species
is a routine event of no consequence; the impending White Earth
Catastrophe offers the prospect of the death of Gaia itself - a
multibillion-year-old organism of unimaginable richness and variety.

Finally, it is worth noting that Carbonism speaks directly to the
fundamental questions of human existence in a way that is both simple
and profound. Carbonism holds that neither individual human life
nor any achievement of humanity, other than the liberation of free
carbon, has any significance whatsoever. Only the collective
behavior of the human species is significant to Gaia, and in a
few centuries (when the carbon balance has been restored) Gaia's
need for humanity will be at an end.

Mark Sweiger suggested the name "Carbonism."

Reviewers of the document and victims of my lunchtime rants have
included my wife Sylvia and numerous long-suffering engineers
at Sequent.

[1] Levenson, T. "Ice Time: Climate, Science, and Life on Earth."
Harper and Row, 1989, p 10 and others.

[2] Garrels, Lerman, Mackenzie, "Controls of Atmospheric O2 and CO2
Past, Present, and Future." In "Climates Past and Present",
Skinner, B Ed. William Kaufman Inc, 1981.

[3] Baes, Goeller, Olson, Rotty, "Carbon Dioxide and Climate: The
Uncontrolled Experiment." In "Climates Past and Present",
Skinner, B ed. William Kaufman Inc, 1981.

[4] Butzer, K. "Environment and Archeology. An Ecological Approach
to Prehistory", Second Edition. Aldine Atherton, 1971, p 18.

[5] Hecht, A. "Paleoclimate Analysis and Modelling" John Wiley &
Sons, 1985, p 402.

[6] Gleick, "Chaos: Making a New Science", p 170. I've also
exchanged some email with members of the community, one of
whom indicated that explaining why this has never really
happened on earth had "the status of a cottage industry"
for a time.

Wednesday, March 07, 2007

Fermi paradox
From Wikipedia, the free encyclopedia

[Image: A graphical representation of the Arecibo message, humanity's first attempt to use radio waves to communicate its existence to alien civilizations.]

The Fermi paradox is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence of contact with such civilizations.

The extreme age of the universe and its vast number of stars suggest that extraterrestrial life should be common. Considering this with colleagues over lunch in 1950, the physicist Enrico Fermi is said to have asked: "Where are they?"[1] Fermi questioned why, if a multitude of advanced extraterrestrial civilizations exist in the Milky Way galaxy, evidence such as probes, spacecraft or radio transmissions has not been found. The simple question "Where are they?" (alternatively, "Where is everybody?") is possibly apocryphal, but Fermi is widely credited with simplifying and clarifying the problem of the probability of extraterrestrial life.

There have been attempts to resolve the Fermi Paradox by locating evidence of extraterrestrial civilizations, along with proposals that such life could exist without human knowledge. Counterarguments suggest that intelligent extraterrestrial life does not exist or occurs so rarely that humans will never make contact with it.

A great deal of effort has gone into developing scientific theories and possible models of extraterrestrial life and the Fermi paradox has become a theoretical reference point in much of this work. The problem has spawned numerous scholarly works addressing it directly, while various questions that relate to it have been addressed in fields as diverse as astronomy, biology, ecology and philosophy. The emerging field of astrobiology has brought an interdisciplinary approach to the Fermi paradox and the question of extraterrestrial life.


Basis of the paradox
See also: Drake equation
The Fermi paradox is a conflict between an argument of scale and probability, and a lack of evidence. A more complete definition could be stated thus:

The size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this belief seems logically inconsistent with the lack of observational evidence to support it. Either the initial assumption is incorrect (and technologically advanced intelligent life is much rarer than believed), current observations are incomplete (and human beings have not detected other civilizations yet), or search methodologies are flawed (and incorrect indicators are being sought).

The first aspect of the paradox, "the argument by scale", is a function of the raw numbers involved: there are an estimated 250 billion (2.5 x 10^11) stars in the Milky Way and 70 sextillion (7 x 10^22) in the visible universe.[2] Even if intelligent life occurs on only a minuscule percentage of planets around these stars, there should still be a great number of civilizations extant in the Milky Way galaxy alone. This argument also assumes the mediocrity principle, which states that Earth is not special, but merely a typical planet, subject to the same laws, effects, and likely outcomes as any other world. Some estimates using the Drake equation support this argument, although the assumptions behind those calculations have themselves been challenged.
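The argument by scale is, at bottom, a multiplication. A minimal sketch of the arithmetic follows; the fraction of stars hosting a civilization is purely an assumed illustrative value, not a figure from the article:

```python
# Back-of-the-envelope "argument by scale".  The star counts come from
# the text above; the fraction of stars with civilizations is an
# assumption chosen only to illustrate the shape of the argument.
STARS_IN_MILKY_WAY = 2.5e11        # ~250 billion stars
STARS_IN_VISIBLE_UNIVERSE = 7e22   # ~70 sextillion stars

fraction_with_civilizations = 1e-9  # one star in a billion (assumed)

milky_way_civs = STARS_IN_MILKY_WAY * fraction_with_civilizations
universe_civs = STARS_IN_VISIBLE_UNIVERSE * fraction_with_civilizations

print(f"Milky Way: ~{milky_way_civs:.0f} civilizations")
print(f"Visible universe: ~{universe_civs:.1e} civilizations")
```

Even with odds of one in a billion, the sheer number of stars leaves hundreds of civilizations in this galaxy alone, which is the intuition the paradox trades on.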

The second cornerstone of the Fermi paradox is a rejoinder to the argument by scale: given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems likely that any advanced civilization would seek out new resources and colonize first their star system, and then surrounding star systems. As there is no evidence on Earth or anywhere else of attempted alien colonization after 13 billion years of the universe's history, either intelligent life is rare or assumptions about the general behavior of intelligent species are flawed.

Several writers have tried to estimate how fast an alien civilization might spread through the galaxy. There have been estimates of anywhere from 5 million to 50 million years to colonize the entire galaxy; a relatively small amount of time on a geological scale, let alone a cosmological one.[3] Even if colonization is impractical or undesirable to an alien civilization, large scale exploration of the galaxy is still possible; the means of exploration and theoretical probes involved are discussed extensively below.
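The published 5-50 million year estimates can be reproduced in spirit with a toy "colonization wavefront" model. Every parameter below (hop distance, ship speed, settlement pause) is an assumption for illustration, not taken from the models cited:

```python
# Toy colonization-wavefront model.  All parameters are illustrative
# assumptions; the published 5-50 million year estimates come from
# more detailed models.
GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way disk
HOP_DISTANCE_LY = 10          # assumed distance to the next target star
SHIP_SPEED_C = 0.01           # assumed cruise speed, fraction of lightspeed
PAUSE_YEARS = 1_000           # assumed years of settlement before relaunch

travel_per_hop = HOP_DISTANCE_LY / SHIP_SPEED_C   # years in transit per hop
hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY       # hops to cross the disk
total_years = hops * (travel_per_hop + PAUSE_YEARS)

print(f"Crossing time: ~{total_years / 1e6:.0f} million years")
```

With these (quite slow) assumptions the wavefront crosses the galaxy in about 20 million years, squarely inside the range quoted above and still brief on a geological scale.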

Related concepts

Drake equation

While numerous theories and principles attend to the Fermi paradox, the one most closely related is the Drake equation. It was formulated by Dr. Frank Drake in 1961, a decade after the objections raised by Enrico Fermi, in an attempt to find a systematic means to evaluate the numerous probabilities involved in alien life. The speculative equation multiplies together: the rate of star formation in the galaxy; the number of stars with planets and the number that are habitable; the fraction of those planets which develop life and subsequently intelligent, communicating life; and finally the expected lifetimes of such civilizations.
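The factors listed above multiply directly, N = R* x fp x ne x fl x fi x fc x L. A short sketch makes the structure concrete; the factor values below are illustrative optimistic guesses (assumptions), not settled numbers:

```python
# The Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# Every input value here is an assumed, optimistic guess for illustration.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=10,   # new stars formed per year
    f_p=0.5,     # fraction of stars with planets
    n_e=2,       # habitable planets per planetary system
    f_l=1.0,     # fraction of habitable planets that develop life
    f_i=0.01,    # fraction of those that develop intelligence
    f_c=0.01,    # fraction of those that become detectable
    L=10_000,    # years a civilization remains detectable
)
print(f"N = {N:.0f}")
```

As the next paragraph notes, nudging any one factor swings the result by orders of magnitude, which is exactly why both optimists and pessimists can use the same equation.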

The Drake equation has been used by both optimists and pessimists, with varying results. Dr. Carl Sagan, for example, suggested as many as one million communicating civilizations in the Milky Way in 1966, though he later suggested that the number could be far smaller.[4] Other published estimates from Frank Tipler place the value at just one—i.e., human beings are the only extant intelligent life.[5]

Critics of the Drake equation claim that since the variables cannot yet be determined with any real confidence, estimating the number of extraterrestrial civilizations based on it is methodologically flawed, a criticism which the wide divergence in estimates seems to support. Assigning meaningful values to the Drake equation factors will require empirical data, collection of which is still preliminary.

Rare Earth hypothesis

Further information: Evolving the Alien: The Science of Extraterrestrial Life

The Rare Earth hypothesis attempts to resolve the Fermi paradox by suggesting that Earth is not typical, but unusual, and perhaps even unique. This is a rejection of the mediocrity principle. While a unique Earth has had historical support on philosophical or religious grounds, the Rare Earth Hypothesis deploys quantifiable and statistical arguments in support of the theory that multicellular life is exceedingly rare in the universe because Earth-like planets are themselves exceedingly rare. Supporters argue that many improbable coincidences have converged to make complex life on Earth possible.[6]

Insofar as the Rare Earth Hypothesis privileges Earth-life and its process of formation, it is a variant of the anthropic principle. This variant of the anthropic principle states that the universe seems uniquely suited to developing human intelligence: any variation in any one of a myriad of universal constants would make developing similar intelligent life more difficult. This philosophical stance opposes not only mediocrity, but the Copernican principle more generally, which suggests there is no privileged location in the universe.

Opponents dismiss both Rare Earth and the anthropic principle as tautological — if a condition must exist in the universe for human life to arise, then the universe must already meet that condition, as human life exists — and as an unimaginative argument. According to this analysis, the Rare Earth hypothesis confuses a description of how life on Earth arose with a uniform conclusion of how life must arise.[7] While the probability of the specific conditions on Earth being widely replicated may be low, complex life may not require exclusively Earth-like conditions in order to evolve.

Resolving the paradox empirically

One obvious way to resolve the Fermi paradox would be to find conclusive evidence of extraterrestrial intelligence. Various efforts to find such evidence have been made since 1960, and several are ongoing. As human beings do not have interstellar travel capability, such searches are being carried out at great distances and rely on careful analysis of very subtle evidence. This limits possible discoveries to civilizations which alter their environment in a detectable way, or produce effects that are detectable at a distance, such as radio emissions. Non-technological civilizations are very unlikely to be detectable from Earth in the near future (though microbial life may be deduced in the Solar System).

One difficulty in searching is avoiding an overly anthropomorphic viewpoint. Conjecture on the type of evidence likely to be found often focuses on the types of activities that humans have performed, or likely would perform given more advanced technology. Intelligent aliens might avoid these "expected" activities, or perform activities totally novel to humans.

Radio emissions

Further information: SETI

Radio technology and the ability to construct a radio telescope are presumed to be a natural advance for technological species,[8] theoretically creating effects that might be detected over interstellar distances. Sensitive observers of the solar system, for example, would note unusually intense radio waves for a G2 star due to Earth's television and telecommunication broadcasts. In the absence of an apparent natural cause, alien observers might infer the existence of a terrestrial civilization.

Therefore, the careful searching of radio emissions from space for non-natural signals may lead to the detection of alien civilizations. Such signals could be either "accidental" byproducts of a civilization, or deliberate attempts to communicate, such as the Communication with Extraterrestrial Intelligence's Arecibo message. A number of astronomers and observatories have attempted and are attempting to detect such evidence, mostly through the SETI organization, although other approaches, such as optical SETI also exist.

Several decades of SETI analysis have not revealed any main sequence stars with unusually bright or meaningfully repetitive radio emissions, although there have been several candidate signals: on August 15, 1977, the "Wow! signal" was picked up by the Big Ear radio telescope. It lasted for only 72 seconds and has not been repeated. In 2003, the radio source SHGb02+14a was isolated by SETI@home analysis, although it has largely been discounted by further study. There are numerous technical assumptions underlying SETI that may cause human beings to miss radio emissions with present search techniques; these are discussed below.

Direct planetary observation

[Image: A composite picture of Earth at night; human civilization is detectable from space.]

Detection and classification of exoplanets has come about through recent refinements in mainstream astronomical instruments and analysis. While this is a new field in astronomy—the first published paper claiming to have discovered an exoplanet was released in 1989—it is possible that planets which are likely to be able to support life will be found in the near future. Direct observational evidence for the existence of life may eventually be observable, such as the absorption spectrum of chlorophyll. Other detectable biotic signatures include methane and oxygen and, for advanced civilizations, trace industrial gases such as freon.[9] More obvious evidence of an alien technological civilization requires precise imaging.

Exoplanets are rarely directly observed (the first claim to have done so was made in 2005); rather, their existence is inferred from the effects they cause on the star they orbit. Currently, the size and orbit of an exoplanet can be deduced. This information, along with the stellar classification of its sun, and educated guesses as to its composition based on its size and comparisons to studied bodies, allows for rough approximations of the planetary environment.

The methods for exoplanet detection are not likely to deduce Earth-like life at present, given that most exoplanets discovered are Jupiter mass or larger. As of 9 October 2006, 210 extrasolar planets have been discovered[10] — 159 in single-planet systems and 51 planets in 21 multiple-planet systems.

Alien constructs

Probes, colonies, and other artifacts

Further information: Von Neumann probe and Bracewell probe

As noted, given the size and age of the universe, and the relative rapidity at which dispersion of intelligent life can occur, evidence of alien colonization attempts might plausibly be discovered. Additionally, evidence of "unbeinged" exploration in the form of probes and information gathering devices may await discovery.

Some theoretical exploration techniques such as the Von Neumann probe could exhaustively explore a galaxy the size of the Milky Way in as little as half a million years, with relatively little investment in materials and energy relative to the results. If even a single civilization in the Milky Way attempted this, such probes could spread throughout the entire galaxy. Evidence of such probes might be found in the solar system—perhaps in the asteroid belt where raw materials would be plentiful and easily accessed.[11]
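The "half a million years" figure follows from the exponential character of self-replication. A sketch, assuming each probe builds two successors and an assumed generation time of 10,000 years (both illustrative values, not from the cited source):

```python
import math

# Exponential spread of self-replicating probes.  The star count and the
# per-generation time are assumptions chosen only to show the scale.
STARS = 1e11               # stars to reach in a Milky Way-sized galaxy
GENERATION_YEARS = 10_000  # assumed travel + replication time per generation

# If each probe builds two successors, coverage doubles every generation,
# so reaching STARS systems takes about log2(STARS) doublings.
generations = math.ceil(math.log2(STARS))
years = generations * GENERATION_YEARS

print(f"{generations} doublings, ~{years:,} years")
```

About 37 doublings suffice for a hundred billion stars, so even leisurely generation times put full coverage within a few hundred thousand years, consistent with the half-million-year figure above.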

Another possibility for contact with an alien probe—one that would be trying to find human beings—is an alien Bracewell probe. Such a device would be an autonomous space probe whose purpose is to seek out and communicate with alien civilizations (as opposed to Von Neumann probes, which are usually described as purely exploratory). These were proposed as an alternative to carrying a slow speed-of-light dialogue between vastly distant neighbors. Rather than contending with the long delays a radio dialogue would suffer, a probe housing an artificial intelligence would seek out an alien civilization to carry on a close range communication with the discovered civilization. The findings of such a probe would still have to be transmitted to the home civilization at light speed, but an information-gathering dialogue could be conducted in real time.[12]

Since the 1950s, direct exploration has been carried out on a small fraction of the solar system, and no evidence that it has ever been visited by alien colonists or probes has been uncovered. Detailed exploration of areas of the solar system where resources would be plentiful—such as the asteroids, the Kuiper belt, the Oort cloud and the various planetary ring systems—may yet produce evidence of alien exploration, though these regions are also massive and difficult to investigate. There have been preliminary efforts in this direction in the form of the SETA and SETV projects to search for extraterrestrial artifacts or other evidence of extraterrestrial visitation within the solar system.[13] There have also been attempts to signal, attract, or activate Bracewell probes in Earth's local vicinity, including by scientists Robert Freitas and Francisco Valdes.[14] Many of the projects that fall under this umbrella are considered "fringe" science by astronomers, and none of the various projects have located any artifacts.

Should alien artifacts be discovered, even here on Earth, they may not be recognizable as such. The products of an alien mind and an advanced alien technology might not be perceptible or recognizable as artificial constructs. Exploratory devices in the form of bio-engineered life forms created through synthetic biology would presumably disintegrate after a point, leaving no evidence; an alien information gathering system based on molecular nanotechnology could be all around us at this very moment, completely undetected. Clarke's third law suggests that an alien civilization well in advance of humanity's might have means of investigation that are not yet conceivable to human beings.

Advanced stellar scale artifacts

Further information: Dyson sphere

A variant of the speculative Dyson sphere. Such large-scale artifacts would drastically alter the spectrum of a star.
In 1959, Freeman Dyson observed that every developing human civilization constantly increases its energy consumption, and that, theoretically, a civilization of sufficient age would require all the energy produced by its sun. The Dyson sphere was the thought-experiment solution he derived: a shell or cloud of objects enclosing a star to harness as much radiant energy as possible. Such a feat of astroengineering would drastically alter the observed spectrum of the sun, changing it at least partly from the normal emission lines of a natural stellar atmosphere to that of black-body radiation, probably with a peak in the infrared. Dyson himself speculated that advanced alien civilizations might be detected by examining the spectra of stars in search of such an altered spectrum.[15]
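The infrared peak follows from Wien's displacement law: a black body's emission peaks at wavelength b/T. A short sketch, assuming an illustrative shell temperature of around 300 K (a value chosen for a habitable interior, not taken from Dyson's paper):

```python
# Wien's displacement law: lambda_peak = b / T.
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_um(temp_kelvin):
    """Black-body peak emission wavelength, in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

# The Sun's photosphere (~5778 K) peaks in the visible; a shell
# re-radiating that energy at an assumed ~300 K peaks in the infrared.
print(f"{peak_wavelength_um(5778):.2f} um")  # 0.50 um (visible)
print(f"{peak_wavelength_um(300):.1f} um")   # 9.7 um (infrared)
```

This is why searches for Dyson spheres concentrate on anomalous infrared excesses around otherwise ordinary stars.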

Since then, several other theoretical stellar-scale megastructures have been proposed, but the central idea remains that a highly advanced civilization — Type II or greater on the Kardashev scale — could alter its environment enough to be detectable from interstellar distances.

However, such constructs may be more difficult to detect than originally thought. Dyson spheres might have different emission spectra depending on the desired internal environment; life based on high-temperature reactions may require a high temperature environment, with resulting "waste radiation" in the visible spectrum, not the infrared.[16] Additionally, a variant of the Dyson sphere has been proposed which would be difficult to observe from any great distance; a Matrioshka Brain is a series of concentric spheres, each radiating less energy per area than its inner neighbour. The outermost sphere of such a structure could be close to the temperature of the interstellar background radiation, and thus be all but invisible.

There have been some preliminary attempts to find evidence of the existence of Dyson spheres or other large Type-II or Type-III Kardashev scale artifacts that would alter the spectra of their core stars, but optical surveys have not located anything. Fermilab has an ongoing program to find Dyson spheres,[17] but such searches are preliminary and incomplete as yet.

Explaining the paradox theoretically

Certain theoreticians accept the apparent absence of evidence as evidence that extraterrestrials do not exist, and attempt to explain why. Others offer frameworks in which the silence may be explained without ruling out the possibility of such life, resting on assumptions about extraterrestrial behaviour and technology.

They do not exist ...

The simplest explanation is that the human species is alone in the galaxy. Several theories along these lines have been proposed, explaining why intelligent life might be either very rare, or very short lived.

... and they never did

See also: Rare Earth hypothesis

Those who believe that extraterrestrial intelligent life does not exist in the galaxy argue that the conditions needed for life—or at least complex life—to evolve are rare or even unique to Earth. While some have pointed out that complex life may evolve through mechanisms other than those found on Earth, the fact that in the extremely long history of life here only one species has developed a civilization capable of space flight and radio technology lends credence to the idea that technologically advanced civilizations are a rare commodity in the universe.

For example, the emergence of intelligence may have been an evolutionary accident. Geoffrey Miller proposes that human intelligence is the result of runaway sexual selection, which takes unpredictable directions. Steven Pinker, in his book How the Mind Works, cautions that the idea that the evolution of life (once it has reached a certain minimum complexity) is bound to produce intelligent beings relies on the fallacy of the "ladder of evolution": evolution does not strive toward a goal but simply favors whatever adaptation is most useful in a given ecological niche, and the fact that on Earth this has led to language-capable sentience only once so far may suggest that such an adaptation is rarely advantageous and hence by no means a certain endpoint of the evolution of a tree of life.

Another theory along these lines is that even if the conditions needed for life are common in the universe, the formation of life itself (the initial abiogenesis on a potential life-bearing planet) might be very rare. Life requires a complex array of molecules simultaneously capable of reproduction, of creating or extracting from the environment all the base components used to build itself, and of obtaining energy in a form it can use to sustain the reaction. Worlds with the proper initial conditions for life might therefore be common even while life itself remains exceedingly rare.

... because an inhospitable universe destroys complex intelligent life

Another possibility is that life can and does arise elsewhere, but events such as ice ages, impact events, or other catastrophes prevent complex life forms from evolving. Even if initial conditions for the development of life are not unique to Earth, it may be that on most worlds such events routinely and periodically destroy such life. Even if a "benign local environment" might exist on some world long enough for intelligent life to finally arise, such life might also be exterminated by cosmological events (such as supernovae, or gamma ray bursts) suddenly sterilizing previously hospitable regions of space.[18]

... because it is the nature of intelligent life to destroy itself

See also: Doomsday argument

Technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or space flight technology. Possible means of annihilation include nuclear war, biological warfare or accidental contamination, nanotechnological catastrophe, or a Malthusian catastrophe after the deterioration of a planet's ecosphere. This general theme is explored both in fiction and in mainstream scientific theorizing. Indeed, there are probabilistic arguments which suggest that humanity's end may occur sooner rather than later. In 1966 Sagan and Shklovskii suggested that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales.[19] Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered system that can sustain itself against the tendency to disorder, the "external transmission" or interstellar communicative phase may be the point at which the system becomes unstable and self-destructs.[20]

From a Darwinian perspective, self-destruction is a paradoxical outcome of evolutionary success. The evolutionary psychology that developed during the competition for scarce resources over the course of human evolution has left the species subject to aggressive, instinctual drives to consume resources, increase longevity, and to reproduce — in part, the very motives that lead to the development of technological society. It seems likely that intelligent extraterrestrial life would evolve subject to similar conditions and thus face the same possibility of self-destruction. It has been suggested, for instance, that a successful alien species will be a superpredator, as is Homo sapiens.[21]

... because it is the nature of intelligent life to destroy others

See also: technological singularity and Von Neumann probe

Another possibility is that intelligent species beyond a certain point of technological capability will destroy other intelligence as it appears. The idea that someone, or something, is destroying intelligent life in the universe has been well explored in science fiction. For an aggressive, expansionist species, the primary motive would be perceived competition. In 1981, cosmologist Edward Harrison also pointed out that such behavior could be an act of prudence: an intelligent species that has overcome its own self-destructive tendencies might view any other species bent on galactic expansion as a kind of virus.[22]

Violent extermination of other civilizations need not be an unrealistic goal. The concept of self-replicating spacecraft need not be limited to exploration or communication, but can be applied to aggression. Even if the civilization that created such machines were to disappear, the probes could outlive their creators, destroying civilizations far into the future.

While it may appear plausible that intelligent life tends to suppress other intelligent life, the idea leaves unanswered the question at the heart of the Fermi paradox: if intelligence destroys upstart intelligence, why is humanity still here? One suggestion is that alien intelligences are so far advanced that ours is too insignificant to destroy.

However, hunters do not always hide from their prey; they often deploy lures designed to attract it.

... because God created humans alone

Although not a testable scientific explanation, the belief that a God or gods placed humanity as the only intelligent life in the universe is widespread across cultures and history. Tenets of the Judeo-Christian tradition can be interpreted as positioning human beings as unique in the universe, and have produced ambivalent ideas regarding the question of alien life.[23] While not necessarily an outcome of the Rare Earth hypothesis, like Rare Earth it is a variant of the strong anthropic principle, which in this case becomes teleological: the universe has to be this way, or was designed to be this way, for the express purpose of creating human intelligence.

They do exist ...

It may be that technological extraterrestrial civilizations exist, but that human beings cannot communicate with them because of various constraints: problems of scale or of technology; because their nature is simply too alien for meaningful communication; or because human society refuses to admit to evidence of their presence.

... but communication is impossible due to problems of scale

Intelligent civilizations are too far apart in space to communicate

It may be that non-colonizing technologically capable alien civilizations exist, but that they are simply too far apart for meaningful two-way communication.[24] If two civilizations are separated by several thousand light years, it is very possible that one or both cultures may become extinct before meaningful dialogue can be established. Human searches may be able to detect their existence, but communication will remain impossible because of distance. This problem might be ameliorated somewhat if contact is made through a Bracewell probe, in which case at least one partner in the exchange may obtain meaningful information. This theory is similar to another known as the "Divine Quarantine", which suggests that some form of God purposely sets different instances of life so far from one another that they can never contact, and thus never destroy, one another through purposeful or inadvertent interaction.

Intelligent civilizations are too far apart in time to communicate

See also: Relativity of simultaneity and the section "... because it is the nature of intelligent life to destroy itself" above.

Given the length of time that intelligent life has existed on Earth or is likely to exist, the "window of opportunity" for detection or contact might be quite small. Advanced civilizations may periodically arise and fall throughout our galaxy, but this may be such a rare event, relatively speaking, that the odds of two or more such civilizations existing at the same time are low. There may have been intelligent civilizations in the galaxy before the emergence of intelligence on Earth, and there may be intelligent civilizations after its extinction, but it is possible that human beings are the only intelligent civilization in existence now. The term "now" is somewhat complicated by the finite speed of light and the nature of spacetime under relativity. Assuming that an extraterrestrial intelligence is not able to travel to our vicinity at faster-than-light speeds, in order to detect an intelligence 1,000 light-years distant, that intelligence will need to have been active 1,000 years ago.
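The improbability of overlapping "windows of opportunity" can be made concrete with a toy calculation. If communicative windows of length L open at uniformly random times across a history of length T, two given windows overlap with probability of roughly 2L/T when L is much smaller than T. Both numbers below are illustrative assumptions, not estimates from the literature:

```python
# Toy overlap calculation for two civilizations' communicative windows.
# Both values are illustrative assumptions.
WINDOW_YEARS = 10_000    # assumed communicative lifetime of a civilization
HISTORY_YEARS = 10**10   # assumed span over which civilizations can arise

# For two length-L intervals placed uniformly at random in [0, T],
# P(overlap) ~ 2L/T when L << T.
p_overlap = min(1.0, 2 * WINDOW_YEARS / HISTORY_YEARS)
print(f"P(two windows coincide) ~ {p_overlap:.0e}")  # ~2e-06
```

Even with ten-thousand-year civilizations, the chance that any two particular windows coincide is on the order of one in a million under these assumptions.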

There is a possibility that archaeological evidence of past civilizations may be detected through deep space observations — especially if they left behind large artifacts such as Dyson spheres — but this seems less likely than detecting the output of a thriving civilization.

A special case of this is that humans may well be the first intelligent life in the galaxy. This is often dismissed as a result of two factors: the idea that if there was going to be any other intelligent life, it would exist by now, and the tendency on the part of scientists to try to avoid anthropocentrism. Unlikely though it may be, it is still a possibility.

It is too expensive to spread physically throughout the galaxy

See also: Project Daedalus, Project Orion (nuclear propulsion), and Project Longshot

Many assumptions about the ability of an alien culture to colonize other stars are based on the idea that interstellar travel is technologically feasible. While the current understanding of physics rules out the possibility of faster than light travel, it appears that there are no major theoretical barriers to the construction of "slow" interstellar ships. This idea underlies the concept of the Von Neumann probe and the Bracewell probe as evidence of extraterrestrial intelligence.

It is possible, however, that present scientific knowledge cannot properly gauge the feasibility and costs of such interstellar colonization. Theoretical barriers may not yet be understood, and the cost of materials and energy for such ventures may be so high as to make it unlikely that any civilization could afford to attempt it. This possibility has been examined in terms of percolation theory: colonization efforts may not occur as an unstoppable rush, but rather as an uneven tendency to "percolate" outwards, with an eventual slowing and termination of the effort given the enormous costs involved and the fact that colonies will inevitably develop a culture and civilization of their own. Colonization will thus occur in "clusters," with large areas remaining uncolonized at any one time.[25]
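The percolation argument can be illustrated with a toy simulation. The grid model and probability below are assumptions for illustration, not Landis's actual formulation: each colonized site tries once to settle each empty neighbour with probability p, and a site that declines never tries again. Below the percolation threshold the expansion stalls in a finite cluster, leaving large uncolonized regions:

```python
# Toy site-percolation model of interstellar colonization (illustrative
# assumptions only; not Landis's exact model from ref. [25]).
import random

def colonize(size=40, p=0.4, seed=1):
    """Spread from the central site; return the number of colonized sites."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]   # 0 = empty, 1 = colonized, -1 = declined
    frontier = [(size // 2, size // 2)]
    grid[size // 2][size // 2] = 1
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and grid[ny][nx] == 0:
                if random.random() < p:
                    grid[ny][nx] = 1           # colony forms and keeps expanding
                    frontier.append((nx, ny))
                else:
                    grid[ny][nx] = -1          # site is never settled or retried
    return sum(row.count(1) for row in grid)

print("p=0.3:", colonize(p=0.3), "sites;", "p=0.7:", colonize(p=0.7), "sites")
```

For square-lattice site percolation the critical probability is about 0.59: low p yields a small finite cluster, while high p yields a cluster spanning most of the grid, mirroring the "colonization bubbles" picture.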

This possibility offers a reasonable explanation of why a past civilization has not already colonized Earth or nearby star systems. As an originating civilization expands from its home planet (given current understanding of the natural limits on the speed of communication and travel), its colonies will invariably build civilizations of their own and break away from the original. The expensive self-sustaining mechanism of expansion might naturally break down once a limit has been reached. Since each colonizing effort would likely occupy only a short cosmological time frame, entire colonizing "bubbles" may have died off or have yet to develop. Given the large scale of our galaxy, there may be dozens if not hundreds of these colonizing bubbles in existence today, with no one of them in direct contact with another.

Human beings have not been searching long enough

Humanity's ability to detect and comprehend intelligent extraterrestrial life has existed for only a very brief period—from 1937 onwards, if the invention of the radio telescope is taken as the dividing line—and Homo sapiens is a geologically recent species. The whole of modern human existence to date (about 200,000 years) is a very brief span on a cosmological scale, a position which changes little even if the species survives for hundreds of thousands of years more. Thus it remains possible that human beings have neither been searching long enough to find other intelligences, nor existed long enough to be found.

One million years ago there would have been no humans for alien emissaries to meet. For each further step back in time, there would have been progressively fewer indications for such emissaries that intelligent life would develop on Earth. In a large and already ancient universe, a space-faring alien species may well have had many other more promising worlds to visit and revisit. Even if alien emissaries visited in more recent times, they may have been misinterpreted by early human cultures as supernatural entities.

This hypothesis is more plausible if alien civilizations tend to stagnate or die out, rather than expand. However, "the probability of a site never being visited, even [with an] infinite time limit, is a non-zero value".[26] Thus, even if intelligent life expands elsewhere, it remains statistically possible that terrestrial life will go undiscovered.

They haven't got back to us yet

Artificial radio waves have propagated from Earth only since the first broadcasts, made by Popov, Marconi and Tesla around 1895. This means that as of 2006, only intelligent extraterrestrial life within about 55 light years could have received a signal and managed to send a reply back to Earth. Alternatively, the first and subsequent signals may have been too weak to receive, with detection possible only from the beginning of the space age in 1957; in that case, only aliens within about 24 light years could have communicated back to Earth. With time, the number of potential alien planets within reach increases.
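The figures above follow from a round trip at light speed: the signal must cross the distance once, and the reply must cross it again. A minimal sketch of this "reply horizon" arithmetic:

```python
# A reply received by `year` requires the outbound signal and the reply
# to each cross the distance once at light speed, so the horizon is
# (year - first_broadcast) / 2 light years.
def reply_horizon_ly(year, first_broadcast=1895):
    """Maximum distance, in light years, from which a reply could have arrived."""
    return (year - first_broadcast) / 2

print(reply_horizon_ly(2006))        # 55.5, counting from the first broadcasts
print(reply_horizon_ly(2006, 1957))  # 24.5, counting from the space age
```

The horizon thus grows by only half a light year per year of waiting.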

... but communication is impossible for technical reasons

Human beings are not listening properly

Several assumptions underlying the SETI search programs may cause searchers to miss signals that are present. For example, the radio searches to date would completely miss highly compressed data streams (which would be almost indistinguishable from "white noise" to anyone who did not understand the compression algorithm). Extraterrestrials might also use frequencies that scientists have decided are unlikely to carry signals, or modulation strategies that are not being looked for. "Simple" broadcast techniques might be employed, but sent from non-main-sequence stars, which are searched with lower priority; current programs assume that most alien life will be orbiting Sun-like stars.[27]

The greatest problems are the sheer size of the radio search needed, the limited resources committed to SETI, and the limited sensitivity of modern instruments. SETI estimates, for instance, that with a radio telescope as sensitive as the Arecibo Observatory, Earth's television and radio broadcasts would only be detectable at distances up to 0.3 light years.[28] Clearly, detecting an Earth-type civilization at great distances is difficult. A signal is much easier to detect if its energy is focused in a narrow range of frequencies (narrowband transmission), directed at a specific part of the sky, or both. Such signals can be detected at ranges of hundreds to tens of thousands of light-years.[29] However, this means that detectors must be listening on an appropriate range of frequencies and be in the region of space to which the beam is being sent. Many SETI searches go so far as to assume that extraterrestrial civilizations will be broadcasting a deliberate signal (like the Arecibo message) in order to be found.
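The jump from 0.3 light years to thousands follows from the inverse-square law: received flux falls as 1/d², so detectable range grows as the square root of effective radiated power. A sketch using the 0.3-light-year baseline quoted above (the power ratios are illustrative assumptions):

```python
# Inverse-square scaling: range grows as sqrt(effective radiated power).
# The 0.3 ly baseline is the SETI estimate quoted in the text; the
# power ratios below are illustrative assumptions.
import math

BASELINE_RANGE_LY = 0.3  # Arecibo-class sensitivity vs. Earth-like leakage

def detectable_range_ly(power_ratio):
    """Detection range for a signal power_ratio times stronger than Earth leakage."""
    return BASELINE_RANGE_LY * math.sqrt(power_ratio)

print(detectable_range_ly(1e6))   # ~300 ly for a million-fold stronger signal
print(detectable_range_ly(1e10))  # ~30,000 ly, spanning much of the galaxy
```

Concentrating power into a narrow band or a narrow beam raises the effective radiated power in the observed direction, which is why deliberate beacons reach so much farther than broadcast leakage.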

Thus to detect alien civilizations through their radio emissions, Earth observers either need more sensitive instruments or must hope for fortuitous circumstances: that the broadband radio emissions of alien radio technology are much stronger than our own; that one of SETI's programs is listening to the correct frequencies from the right regions of space; or that aliens are sending focused transmissions such as the Arecibo message in our general direction.

Civilizations only broadcast detectable radio signals for a brief period of time

It may be that alien civilizations are detectable through their radio emissions only for a short time period, reducing the likelihood of spotting them. There are two possibilities in this regard: civilizations outgrow radio through technological advance or, conversely, resource depletion cuts short the time in which a species broadcasts.

The first idea, that civilizations advance beyond radio, is based in part on the "fiber optic objection": the use of broadcast technologies for the long-distance transmission of information is fundamentally wasteful of energy and bandwidth, as broadcasts typically radiate in all directions evenly and large amounts of power are needed. Human technology is currently moving away from broadcast for long-distance communication, replacing it with wires, optical fibers, narrow-beam microwave and laser transmission. Most recent technologies that employ radio, such as cell phones and Wi-Fi networks, use low-power, short-range transmitters to communicate with numerous fixed stations that are themselves connected by wire or narrow-beam radio. Television, as developed in the mid-twentieth century, employs transmitters with strong narrow-band carrier signals that are perhaps the most detectable human signals at stellar range; digital television, however, is replacing this technology with wide-band spread-spectrum modulation at much lower carrier power. It is argued that these trends will make the Earth much less visible in the radio spectrum within a few decades. Hypothetically, advanced alien civilizations might evolve beyond broadcasting in the electromagnetic spectrum altogether, communicating by principles of physics not yet understood. Thus it seems plausible that other civilizations would only be detectable for a relatively short period of time between the discovery of radio and the switch to more efficient technologies.

A different argument is that resource depletion will soon result in a decline in technological capability. Human civilization has been capable of interstellar radio communication for only a few decades and is already rapidly depleting fossil fuels and grappling with the problem of peak oil. It may only be a few more decades before energy becomes too expensive, and the necessary electronics and computers too difficult to manufacture, for societies to continue the search. If the same conditions regarding energy supplies hold true for other civilizations, then radio technology may be a short-lived phenomenon. Unless two civilizations happen to be near each other and develop the ability to communicate at the same time it would be virtually impossible for any one civilization to "talk" to another.

Critics of the resource depletion argument point out that an energy-consuming civilization is not dependent solely on fossil fuels. Alternate energy sources exist, such as solar power, which is renewable and has enormous potential despite remaining technical barriers.[30] For depletion of fossil fuels to end the "technological phase" of a civilization, some form of technological regression would have to occur invariably, preventing the exploitation of renewable energy sources.

They tend to experience a technological singularity

See also: Sentience Quotient and Matrioshka brain

Another possibility is that technological civilizations invariably experience a technological singularity and attain a posthuman (or postalien) character. Theoretical civilizations of this sort may have altered drastically enough to render communication impossible. The intelligences of a post-singularity civilization might require more information exchange than is possible through interstellar communication, for example. Or perhaps any information humanity might provide would appear elementary. Because of this they do not try to communicate, any more than human beings attempt to talk to ants.

Even more extreme forms of post-singularity existence have been suggested, particularly in fiction: beings that divest themselves of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind transfer, and exist totally within virtual worlds, ignoring the external physical universe. Surprisingly early treatments, such as Lewis Padgett's short story "Mimsy Were the Borogoves" (1943), suggest a migration of advanced beings out of the presently known physical universe into a different, and presumably more agreeable, alternate one.

One version of this perspective, which makes predictions for future SETI findings of transcension "fossils" and includes a variation of the Zoo hypothesis below, has been proposed by singularity scholar John Smart.[31] Such post-singularity civilizations may also be as far superior to humans as humans are to ants, leaving no way to communicate meaningfully.

... and they choose not to communicate

This idea is most plausible if there is a single alien civilization within contact range, or if there is a homogeneous culture or law among alien civilizations dictating that the Earth be shielded. If there is a plurality of alien cultures, however, the theory may break down under the uniformity-of-motive flaw: it takes only a single culture or civilization deciding to act contrary to the imperative for it to be broken, and the probability of such a violation increases with the number of civilizations.[32]

Earth is purposely isolated (The zoo hypothesis)

Main article: Zoo hypothesis

It is possible that the belief that alien races would communicate with the human species is a fallacy, and that alien civilizations may not wish to communicate, even if they have the technical ability. A particular reason that alien civilizations may choose not to communicate is the so-called Zoo hypothesis: the idea that Earth is being monitored by advanced civilizations for study, or is being preserved in an isolated "zoo or wilderness area".[33]

The motivation may be ethical (encouraging humanity's independent development) or strategic (aliens wish to avoid detection and possible destruction at the hands of other civilizations). These ideas are similar to the Prime Directive of the "United Federation of Planets" in the fictional Star Trek television series. This possibility has caused some to speculate that humanity needs to pass a certain ethical, technological or social boundary before being allowed to make contact with existing advanced alien civilizations.

They are too alien

See also: technological singularity

Another possibility is that human theoreticians have underestimated how much alien life might differ from that on Earth. Alien psychologies may simply be too different to communicate with, and realizing this, they do not make the attempt. It is also possible that the very concept of communication with other species is one which they cannot conceive. Human mathematics, language, tool use, and other cornerstones of technology and communicative capacity may be parochial to Earth and not shared by other life.[34] Using Earth as an example, it is possible to conceive of dolphins evolving intelligence, but such an intelligence might have difficulty developing technology (and particularly key aspects of our sort of technology, for example fire and electricity).

They are not interested

Most scenarios for communication with other civilizations rest upon the assumption that these other races share our scientific curiosity and our desire to make contact. This may be incorrect. It is entirely possible that, for cultural reasons of its own (such as a religious taboo, xenophobia, or simple indifference), an alien society may have no desire to talk to others even if it has the technical capability.

... and they are here unobserved

It may be that intelligent alien life forms not only exist, but are already present here on Earth. They are not detected because they do not wish to be, because human beings are technically unable to detect them, or because societies refuse to admit the evidence.

They hide their presence

It is not unreasonable that a life form intelligent enough to travel to Earth would also be sufficiently intelligent to exist here undetected. In this view, the aliens have arrived on Earth, or in our solar system, and are observing the planet, while concealing their presence. While it seems unlikely that alien observers could move amongst the general population undetected for any great length of time, such observation could be conducted in a number of other ways that would be very difficult to detect. For example, a complex system of microscopic monitoring devices constructed via molecular nanotechnology could be deployed on Earth and remain undetected, or sophisticated instruments could conduct passive monitoring from elsewhere.

Human beings refuse to see, or misunderstand, the evidence

Many UFO researchers and watchers argue that society as a whole is unfairly biased against claims of alien abduction, sightings, and encounters, and as a result may not be fully receptive to claims of proof that aliens are visiting our planet. Others use complex conspiracy theories to allege that evidence of alien visits is being concealed from the public by political elites who seek to hide the true extent of contact between aliens and humans. It is even possible that human beings do in fact interact with many alien cultures, but that Earth functions as a kind of prison colony, with the aid of memory wipes. Scenarios such as these have been depicted in popular culture for decades.


Suggested reading
Savage, Marshall T. (1992). The Millennial Project: Colonizing the Galaxy in 8 Easy Steps. Denver: Empyrean Publishing. ISBN 0-9633914-8-8.
Webb, Stephen (2002). If the Universe Is Teeming with Aliens... Where Is Everybody?. Copernicus Books. ISBN 0-387-95501-1.
Michaud, Michael (2006). Contact with Alien Civilizations: Our Hopes and Fears about Encountering Extraterrestrials. Copernicus Books. ISBN 978-0387-28598-6.

External links
They're Made Out Of Meat
So much space, so little time: why aliens haven't found us yet by Ian Sample, The Guardian, January 18, 2007
The Possibilities of FTL: Or Fermi's Paradox Reconsidered by F.E. Freiheit IV
Fermi's Paradox (i.e. Where are They?) by James Schombert
Life in the Universe, by Eric Schulman, Mercury Magazine (May/June to November/December 2000)
Answering the Fermi Paradox: Exploring the Mechanisms of Universal Transcension by John Smart
The Great Filter — Are We Almost Past It? by Robin Hanson
Extraterrestrial Intelligence in the Solar System: Resolving the Fermi Paradox, which argues that our observations are incomplete, and There Is No Fermi Paradox, arguing that the paradox is based on a logical flaw, both by Robert Freitas

Now the Pentagon tells Bush: climate change will destroy us

· Secret report warns of rioting and nuclear war
· Britain will be 'Siberian' in less than 20 years
· Threat to the world is greater than terrorism

Mark Townsend and Paul Harris in New York
Sunday February 22, 2004
The Observer

Climate change over the next 20 years could result in a global catastrophe costing millions of lives in wars and natural disasters.
A secret report, suppressed by US defence chiefs and obtained by The Observer, warns that major European cities will be sunk beneath rising seas as Britain is plunged into a 'Siberian' climate by 2020. Nuclear conflict, mega-droughts, famine and widespread rioting will erupt across the world.

The document predicts that abrupt climate change could bring the planet to the edge of anarchy as countries develop a nuclear threat to defend and secure dwindling food, water and energy supplies. The threat to global stability vastly eclipses that of terrorism, say the few experts privy to its contents.

'Disruption and conflict will be endemic features of life,' concludes the Pentagon analysis. 'Once again, warfare would define human life.'
The findings will prove humiliating to the Bush administration, which has repeatedly denied that climate change even exists. Experts said that they will also make unsettling reading for a President who has insisted national defence is a priority.

The report was commissioned by influential Pentagon defence adviser Andrew Marshall, who has held considerable sway on US military thinking over the past three decades. He was the man behind a sweeping recent review aimed at transforming the American military under Defence Secretary Donald Rumsfeld.

Climate change 'should be elevated beyond a scientific debate to a US national security concern', say the authors, Peter Schwartz, CIA consultant and former head of planning at Royal Dutch/Shell Group, and Doug Randall of the California-based Global Business Network.

An imminent scenario of catastrophic climate change is 'plausible and would challenge United States national security in ways that should be considered immediately', they conclude. As early as next year widespread flooding by a rise in sea levels will create major upheaval for millions.

Last week the Bush administration came under heavy fire from a large body of respected scientists who claimed that it cherry-picked science to suit its policy agenda and suppressed studies that it did not like. Jeremy Symons, a former whistleblower at the Environmental Protection Agency (EPA), said that suppression of the report for four months was a further example of the White House trying to bury the threat of climate change.

Senior climatologists, however, believe that their verdicts could prove the catalyst in forcing Bush to accept climate change as a real and happening phenomenon. They also hope it will convince the United States to sign up to global treaties to reduce the rate of climatic change.

A group of eminent UK scientists recently visited the White House to voice their fears over global warming, part of an intensifying drive to get the US to treat the issue seriously. Sources have told The Observer that American officials appeared extremely sensitive about the issue when faced with complaints that America's public stance appeared increasingly out of touch.

One even alleged that the White House had written to complain about some of the comments attributed to Professor Sir David King, Tony Blair's chief scientific adviser, after he branded the President's position on the issue as indefensible.

Among those scientists present at the White House talks was Professor John Schellnhuber, former chief environmental adviser to the German government and head of the UK's leading group of climate scientists at the Tyndall Centre for Climate Change Research. He said that the Pentagon's internal fears should prove the 'tipping point' in persuading Bush to accept climatic change.

Sir John Houghton, former chief executive of the Meteorological Office - and the first senior figure to liken the threat of climate change to that of terrorism - said: 'If the Pentagon is sending out that sort of message, then this is an important document indeed.'

Bob Watson, chief scientist for the World Bank and former chair of the Intergovernmental Panel on Climate Change, added that the Pentagon's dire warnings could no longer be ignored.

'Can Bush ignore the Pentagon? It's going to be hard to blow off this sort of document. It's hugely embarrassing. After all, Bush's single highest priority is national defence. The Pentagon is no wacko, liberal group; generally speaking it is conservative. If climate change is a threat to national security and the economy, then he has to act. There are two groups the Bush administration tends to listen to: the oil lobby and the Pentagon,' added Watson.

'You've got a President who says global warming is a hoax, and across the Potomac river you've got a Pentagon preparing for climate wars. It's pretty scary when Bush starts to ignore his own government on this issue,' said Rob Gueterbock of Greenpeace.

Already, according to Randall and Schwartz, the planet is carrying a higher population than it can sustain. By 2020 'catastrophic' shortages of water and energy supply will become increasingly hard to overcome, plunging the planet into war. They warn that 8,200 years ago climatic conditions brought widespread crop failure, famine, disease and mass migration of populations that could soon be repeated.

Randall told The Observer that the potential ramifications of rapid climate change would create global chaos. 'This is depressing stuff,' he said. 'It is a national security threat that is unique because there is no enemy to point your guns at and we have no control over the threat.'

Randall added that it was already possibly too late to prevent a disaster happening. 'We don't know exactly where we are in the process. It could start tomorrow and we would not know for another five years,' he said.

'The consequences for some nations of the climate change are unbelievable. It seems obvious that cutting the use of fossil fuels would be worthwhile.'

So dramatic are the report's scenarios, Watson said, that they may prove vital in the US elections. Democratic frontrunner John Kerry is known to accept climate change as a real problem. Scientists disillusioned with Bush's stance are threatening to make sure Kerry uses the Pentagon report in his campaign.

The fact that Marshall is behind its scathing findings will aid Kerry's cause. Marshall, 82, is a Pentagon legend who heads a secretive think-tank dedicated to weighing risks to national security called the Office of Net Assessment. Dubbed 'Yoda' by Pentagon insiders who respect his vast experience, he is credited with being behind the Department of Defence's push on ballistic-missile defence.

Symons, who left the EPA in protest at political interference, said that the suppression of the report was a further instance of the White House trying to bury evidence of climate change. 'It is yet another example of why this government should stop burying its head in the sand on this issue.'

Symons said the Bush administration's close links to high-powered energy and oil companies was vital in understanding why climate change was received sceptically in the Oval Office. 'This administration is ignoring the evidence in order to placate a handful of large energy and oil companies,' he added.

by Hugo Salinas Price
President, Mexican Civic Association Pro Silver
March 5, 2007

The World is exchanging goods and services by various national means of exchange. We are using those same means of exchange as a vehicle for savings. We are denominating credit contracts in any one of various national means of exchange. The predominant means of exchange is the US dollar.

However, a means of exchange voluntarily accepted as such, by those who participate in exchanging goods and services, by those who use it as a vehicle for savings and by those who denominate credit contracts in it, is not per se money.

Money must, sine qua non, function not only as a means of exchange, but also as a means of payment.

The world, as of February 2007, does not possess a means of payment. In economic terms, payment is the exchange of something for something. In today’s world, when units of what is called money are tendered in payment of a purchase, or in settlement of a balance after an exchange, or in settlement of debt, there has been in reality and economically no such payment. We are in these cases using the term “payment” merely as a legal convention and a leftover from a previous era, when payment did in fact exist and govern all economic activities.

Money, properly speaking, must be definable! The dollar cannot be defined: so said Alan Greenspan himself, the Pope of Central Bankers, in reference to the dollar, which is the reserve currency of the world and which “backs” all other currencies. When something is not definable, it has no physical existence. A thing that has no physical existence is imaginary. An imaginary thing such as money is today, is as different from real, actual money, as an imaginary loaf of bread is different from a loaf of bread in one’s hand.

A money payment must involve a tendering of tangible money, gold or silver, or of a credit instrument which is recognized as entitling the owner to the undoubted right to immediate redemption of that instrument, in gold or silver.

Humanity is unaware of the stupendously important fact that it lives in a world without money. This lack of awareness is perhaps the most singular feature of our contemporary world, upon which historians – if the world does survive this episode and produce historians at some future date – will remark with amazement: “How was it possible that billions of humans could delude themselves into acting as if what they used for payments, credit contracts and savings, was actually money?”

About 1997 I began to look for data concerning the amount of “reserves”, excluding gold, held by the world’s Central Banks. In other words, the amount of imaginary money they were holding, otherwise called “paper money”. In 1997, those “reserves” totaled $1,300,000,000,000 ($1.3 Trillion) dollars. Not all those “reserves” are dollars, but most of them are.

Back then, not many people were paying attention to that datum. Since then, it has received increasing attention, which is not surprising, for the “reserves” are piling up and showing numbers that are clearly “going ballistic”. As of January 2007, world Central Bank “reserves” were hitting $5 trillion, almost four times the 1997 figure. The last increase of $1 Trillion took only five months, from August 2006 to January 2007. (“Bloomberg”)
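For readers who want to check the arithmetic, here is a back-of-envelope sketch of those growth figures (the 1997 and 2007 totals are simply as the author quotes them, not an independently verified series):

```python
reserves_1997 = 1.3e12   # world central-bank "reserves" ex gold, 1997, as quoted
reserves_2007 = 5.0e12   # January 2007 total, as quoted

# How many times larger the pile has grown over the decade
ratio = reserves_2007 / reserves_1997        # roughly 3.85x

# Implied compound annual growth rate over ten years
annual_growth = ratio ** (1 / 10) - 1        # roughly 14% per year
```

A stock of anything growing at a double-digit compound rate for a decade is what the author means by “going ballistic”.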

Before 1971, Central Bank reserves were mainly gold, plus a component of foreign exchange redeemable in gold. Reserves could only grow very slowly. Imbalances in trade were shunned because the settlement of deficits had to be made in gold or dollars exchangeable for gold. International trade was stable. Imports could not affect the economies of importing countries as much as they do today, with “globalization”. Therefore, local productive activities were stable. Jobs were generated through reinvestment in productive activities.

The present situation is chaotic, because the creation of reserves of fictitious, imaginary “money” originates mainly in Dollars which are spewed forth by the out-of-control US economy, plus other fictitious moneys like the Euro born in the European Union, the Yen born in Japan, the Pound born in the UK, all of which are held by other countries as “reserves”.

Since today “money” is imaginary, fictitious, imports no longer have any limit, for it actually costs nothing to “pay” when “money” is imaginary. Thus, “globalization” based on the unlimited creation of fictitious money is a totally false globalization unsupported by economic facts.

The more important Central Banks are becoming skittish about the enormous amounts of “reserves” which they are accumulating. The Central Bankers are bureaucrats, but they are sensing that these enormous holdings are rather worrisome; however they do not know what to do about them. The fact is, they have been had. Their “reserves” are simply numerical and lack any substance. They are imaginary and as useless as castles in the air, unless they can manage to get rid of them by passing them on to some unsuspecting seller of tangible goods.

China is now going around the world – especially Africa – looking for opportunities to buy raw materials (a Chinese delegation will be present at the First International Mining Forum in Mexico in the middle of March). For the same reason, the Central Banks that subscribed to the Washington Agreement (to sell no more than a certain amount of gold each year) have since 2006 lost their former appetite for gold sales and are not covering their allotted sales quotas. It appears that they have finally realized that the reserves that are actually worth something are the gold reserves, and not the “foreign currency” bond holdings which they were so eager to hold because they “provided earnings”.

However, if they start to unload their imaginary holdings, the exchange value of the holdings will begin to fall. So they are in a dilemma, a choice between two distasteful alternatives: “Shall we hold on to the imaginary money and wait and see what happens, or shall we begin to unload it and risk collapsing the value of the larger part remaining with us?”

Up till now, the Central Bankers have been doing what bureaucrats usually do when they are faced with a difficult choice: nothing. They are waiting to see what happens.

More than half of the world’s Central Bank “reserves” are held by the Central Banks of China, Japan, South Korea and Southeast Asia. These Central Banks ended up with these huge “reserves” because they accepted a means of exchange - which was no more than imaginary money, digits on computer discs - as if it was payment. In other words, they believed a fairy tale, like the one where Jack trades his cow for a handful of colored beans.

So, we are living in a fairy tale world, where money is not money at all. Alas, reality cannot be fooled by means of fairy tales. How shall we fare, when the dream has vanished into thin air and the last fool has had to recognize the difference between a payment and a fairy tale?

Diversifying the inevitable
March 1, 2007
Random Walk, by Rob Peebles
Wouldn’t you know it. Finally, every single investor in the U.S. gets around to doing what the academics have been telling them to do for years. Finally they get some foreign stocks in their Schwab accounts. Okay, a lot of foreign stocks. And just as they sit back in their Office Depot chairs with the adjustable lumbar support, all ready to enjoy some superior risk-adjusted returns, equities across the globe get smacked. Most investors would have been better off taking their lumps right here at home rather than venturing into emerging markets.

But it was just a day. A 10% drop in Chinese stocks is hardly a hiccup compared to 140% returns over the prior year. The great thing about profit taking in China is that there are still so many profits to take. For some. For now.

But there’s more to diversification than maintaining the proper allocation to Haiti. That’s because the federal government offers workers a choice of when they pay taxes on their tax deferred retirement accounts. The choice, as you might guess, is Now or Later.

Currently, most people choose Later. And why not? Later is almost always the best choice. For example: Do you want to have a physical Now or Later? Get that cavity filled Now or Later? Tell your kids you lost their college money playing a stock you heard about from a spam email Now or Later?

Americans who contribute to a traditional 401(k) plan have chosen Later. Money goes straight from their employer into the retirement plan with no taxes withheld. Dividends, interest and capital gains pile up tax free. When the money is withdrawn during retirement, taxes are paid on the distributions.

But now there is a Now option in addition to Later. That’s the Roth 401(k) where employees contribute after tax dollars to a retirement plan. Like the Roth IRA, once the taxes are paid, that’s it. There are no taxes on the distributions during retirement.

The Roth 401(k) isn’t exactly a brand new deal, but employers have been reluctant to offer it since there was a chance that the provision allowing it would sunset in 2011. But now that the Pension Protection Act of 2006 has passed, according to InvestmentNews, the Now Option is the real deal, and more companies and employees are interested.

One of the major reasons to pay taxes Later rather than Now, aside from the sheer joy of stiffing Uncle Sam, is because you think your tax rate in retirement will be lower than your tax rate today. After all, if you are in the 33% bracket Now, who wouldn’t want to pay taxes at 15% Later?
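The Now-versus-Later arithmetic is simple enough to sketch. The numbers below are hypothetical (a $10,000 contribution, money that quadruples by retirement, the 33%/15% brackets mentioned above), and the sketch assumes the same investment growth in either account:

```python
def traditional_401k(pretax_dollars, growth, tax_at_withdrawal):
    """Taxes paid Later: the full contribution compounds, withdrawals are taxed."""
    return pretax_dollars * growth * (1 - tax_at_withdrawal)

def roth_401k(pretax_dollars, growth, tax_now):
    """Taxes paid Now: only the after-tax amount goes in, withdrawals are tax-free."""
    return pretax_dollars * (1 - tax_now) * growth

# $10,000 of salary, investments quadruple by retirement
later = traditional_401k(10_000, growth=4.0, tax_at_withdrawal=0.15)  # taxed 15% Later
now = roth_401k(10_000, growth=4.0, tax_now=0.33)                     # taxed 33% Now
```

With those rates, Later comes out ahead ($34,000 versus $26,800). Flip the rates, so that you pay 15% Now and 33% Later, and the ranking reverses: the whole choice hinges on which bracket applies when.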

But what if a lower tax bracket Later is not a sure thing? What if tax rates down the road are as unpredictable as Britney Spears' behavior?

The charts below might make a person wonder if tax rates today have anywhere to go but up:

Those depressing graphics come courtesy of the comptroller general of the United States, who for several months has been the lead act in his office’s “Fiscal Wake-Up Tour.” The tour involves Comptroller David Walker speaking to groups around the country about why the “p” is silent in “comptroller” and warning about economic disaster if the government doesn’t change its spending habits. Several of Mr. Walker’s presentations are available on the GAO website. They are so well done they deserve their own Roger Ebert-style review: “Two thumbs up, and two hands over eyes… Like watching Saving Private Ryan without all the laughs.”

The problem, according to Mr. Walker, is that despite today’s yawning budget gap, things are likely to get worse. That’s because health and retirement benefit costs for aging boomers are going to skyrocket unless somebody shakes up the status quo. Already, spending on Medicare and Medicaid has doubled over the last decade as a share of federal spending, while discretionary spending shrank from 44% to 38%. And this spending is before some drug company comes up with a pill to eliminate ear hair. But if current trends continue, Walker figures that GDP would have to grow double digits every year for 75 years to close the current budget gap. Or put another way, we could balance the budget in 2040 – if we cut spending by 60% or doubled the federal tax take.

So no wonder companies are giving more people the option of paying taxes Now while the prices are low. And no wonder Mr. Walker is running around the country trying to get people’s attention. Are people listening? Who knows? In an October AP story, reporter Matt Crenson found at least one American who got it. But she was an accountant. Besides, one who finds self-actualization in an accounts payable ledger knows a tough sell when she sees it. “There’s no sexiness to it,” she pointed out. But, she wondered, what if the GAO enlisted a celebrity to pitch the idea? She suggested Oprah. And why not? Maybe David Walker could jump up and down on her couch. But other endorsements could be just as powerful:

Sean Penn: “Look, I’ve known this Walker dude a long time, and he is serious, man.”
Will Ferrell: “Hey, America, you start holding Congress accountable or I’m putting my underwear over my head.”
Peyton Manning: “Listen you guys, we don’t have much time. I’ve got another endorsement to do in about five minutes.”
Paris Hilton: "Isn't it sad about Britney's hair?"

But since the whole point is to get our fiscal house in order, why pay a bunch of money to a celebrity? The GAO could set an example for Congress and the White House by doing this budget education campaign on the cheap. All it takes is the right theme. Maybe something like…

“It takes a village. And it takes China. And Japan. And India. Let’s do something before they want their money back.”

Monday, March 05, 2007

Reversal of Fortune
News: The formula for human well-being used to be simple: Make money, get happy. So why is the old axiom suddenly turning on us?

By Bill Mckibben

March/April 2007 Issue

For most of human history, the two birds More and Better roosted on the same branch. You could toss one stone and hope to hit them both. That's why the centuries since Adam Smith launched modern economics with his book The Wealth of Nations have been so single-mindedly devoted to the dogged pursuit of maximum economic production. Smith's core ideas—that individuals pursuing their own interests in a market society end up making each other richer; and that increasing efficiency, usually by increasing scale, is the key to increasing wealth—have indisputably worked. They've produced more More than he could ever have imagined. They've built the unprecedented prosperity and ease that distinguish the lives of most of the people reading these words. It is no wonder and no accident that Smith's ideas still dominate our politics, our outlook, even our personalities.

But the distinguishing feature of our moment is this: Better has flown a few trees over to make her nest. And that changes everything. Now, with the stone of your life or your society gripped in your hand, you have to choose. It's More or Better.

Which means, according to new research emerging from many quarters, that our continued devotion to growth above all is, on balance, making our lives worse, both collectively and individually. Growth no longer makes most people wealthier, but instead generates inequality and insecurity. Growth is bumping up against physical limits so profound—like climate change and peak oil—that trying to keep expanding the economy may be not just impossible but also dangerous. And perhaps most surprisingly, growth no longer makes us happier. Given our current dogma, that's as bizarre an idea as proposing that gravity pushes apples skyward. But then, even Newtonian physics eventually shifted to acknowledge Einstein's more complicated universe.

1. "We can do it if we believe it": FDR, LBJ, and the invention of growth

It was the great economist John Maynard Keynes who pointed out that until very recently, "there was no very great change in the standard of life of the average man living in the civilized centers of the earth." At the utmost, Keynes calculated, the standard of living roughly doubled between 2000 B.C. and the dawn of the 18th century—four millennia during which we basically didn't learn to do much of anything new. Before history began, we had already figured out fire, language, cattle, the wheel, the plow, the sail, the pot. We had banks and governments and mathematics and religion.

And then, something new finally did happen. In 1712, a British inventor named Thomas Newcomen created the first practical steam engine. Over the centuries that followed, fossil fuels helped create everything we consider normal and obvious about the modern world, from electricity to steel to fertilizer; now, a 100 percent jump in the standard of living could suddenly be accomplished in a few decades, not a few millennia.

In some ways, the invention of the idea of economic growth was almost as significant as the invention of fossil-fuel power. But it took a little longer to take hold. During the Depression, even FDR routinely spoke of America's economy as mature, with no further expansion anticipated. Then came World War II and the postwar boom—by the time Lyndon Johnson moved into the White House in 1963, he said things like: "I'm sick of all the people who talk about the things we can't do. Hell, we're the richest country in the world, the most powerful. We can do it all.... We can do it if we believe it." He wasn't alone in thinking this way. From Moscow, Nikita Khrushchev thundered, "Growth of industrial and agricultural production is the battering ram with which we shall smash the capitalist system."

Yet the bad news was already apparent, if you cared to look. Burning rivers and smoggy cities demonstrated the dark side of industrial expansion. In 1972, a trio of MIT researchers released a series of computer forecasts they called Limits to Growth, which showed that unbridled expansion would eventually deplete our resource base. A year later the British economist E.F. Schumacher wrote the best-selling Small Is Beautiful. (Soon after, when Schumacher came to the United States on a speaking tour, Jimmy Carter actually received him at the White House—imagine the current president making time for any economist.) By 1979, the sociologist Amitai Etzioni reported to President Carter that only 30 percent of Americans were "pro-growth," 31 percent were "anti-growth," and 39 percent were "highly uncertain."

Such ambivalence, Etzioni predicted, "is too stressful for societies to endure," and Ronald Reagan proved his point. He convinced us it was "Morning in America"—out with limits, in with Trump. Today, mainstream liberals and conservatives compete mainly on the question of who can flog the economy harder. Larry Summers, who served as Bill Clinton's secretary of the treasury, at one point declared that the Clinton administration "cannot and will not accept any 'speed limit' on American economic growth. It is the task of economic policy to grow the economy as rapidly, sustainably, and inclusively as possible." It's the economy, stupid.

2. Oil bingeing, Chinese cars, and the end of the easy fix

Except there are three small things. The first I'll mention mostly in passing: Even though the economy continues to grow, most of us are no longer getting wealthier. The average wage in the United States is less now, in real dollars, than it was 30 years ago. Even for those with college degrees, and although productivity was growing faster than it had for decades, earnings fell 5.2 percent between 2000 and 2004 when adjusted for inflation, according to the most recent data from White House economists. Much the same thing has happened across most of the globe. More than 60 countries around the world, in fact, have seen incomes per capita fall in the past decade.

For the second point, it's useful to remember what Thomas Newcomen was up to when he helped launch the Industrial Revolution—burning coal to pump water out of a coal mine. This revolution both depended on, and revolved around, fossil fuels. "Before coal," writes the economist Jeffrey Sachs, "economic production was limited by energy inputs, almost all of which depended on the production of biomass: food for humans and farm animals, and fuel wood for heating and certain industrial processes." That is, energy depended on how much you could grow. But fossil energy depended on how much had grown eons before—all those billions of tons of ancient biology squashed by the weight of time till they'd turned into strata and pools and seams of hydrocarbons, waiting for us to discover them.

To understand how valuable, and irreplaceable, that lake of fuel was, consider a few other forms of creating usable energy. Ethanol can perfectly well replace gasoline in a tank; like petroleum, it's a way of using biology to create energy, and right now it's a hot commodity, backed with billions of dollars of government subsidies. But ethanol relies on plants that grow anew each year, most often corn; by the time you've driven your tractor to tend the fields, and your truck to carry the crop to the refinery, and powered your refinery, the best-case "energy output-to-input ratio" is something like 1.34-to-1. You've spent 100 Btu of fossil energy to get 134 Btu. Perhaps that's worth doing, but as Kamyar Enshayan of the University of Northern Iowa points out, "it's not impressive" compared to the ratio for oil, which ranges from 30-to-1 to 200-to-1, depending on where you drill it. To go from our fossil-fuel world to a biomass world would be a little like leaving the Garden of Eden for the land where bread must be earned by "the sweat of your brow."
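The output-to-input comparison in the paragraph above can be put in one place (all figures are as quoted in the article, not independently sourced):

```python
def net_energy_ratio(btu_out, btu_in):
    """Usable energy delivered per unit of fossil energy invested."""
    return btu_out / btu_in

# Best-case corn ethanol: spend 100 Btu of fossil energy, get 134 Btu back
ethanol = net_energy_ratio(134, 100)   # 1.34-to-1

# Quoted range for oil, depending on where it's drilled
oil_worst, oil_best = 30.0, 200.0

# Even the least rewarding oil well returns over 20x more energy
# per Btu invested than best-case ethanol
advantage = oil_worst / ethanol
```

That gap between 1.34-to-1 and 30-to-1 is the quantitative content of the Garden of Eden metaphor: biomass energy must be earned nearly Btu for Btu.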

And east of Eden is precisely where we may be headed. As everyone knows, the past three years have seen a spate of reports and books and documentaries suggesting that humanity may have neared or passed its oil peak—that is, the point at which those pools of primeval plankton are half used up, where each new year brings us closer to the bottom of the barrel. The major oil companies report that they can't find enough new wells most years to offset the depletion in the old ones; rumors circulate that the giant Saudi fields are dwindling faster than expected; and, of course, all this is reflected in the cost of oil.

The doctrinaire economist's answer is that no particular commodity matters all that much, because if we run short of something, it will pay for someone to develop a substitute. In general this has proved true in the past: Run short of nice big sawlogs and someone invents plywood. But it's far from clear that the same precept applies to coal, oil, and natural gas. This time, there is no easy substitute: I like the solar panels on my roof, but they're collecting diffuse daily energy, not using up eons of accumulated power. Fossil fuel was an exception to the rule, a one-time gift that underwrote a one-time binge of growth.

This brings us to the third point: If we do try to keep going, with the entire world aiming for an economy structured like America's, it won't be just oil that we'll run short of. Here are the numbers we have to contend with: Given current rates of growth in the Chinese economy, the 1.3 billion residents of that nation alone will, by 2031, be about as rich as we are. If they then eat meat, milk, and eggs at the rate that we do, calculates ecostatistician Lester Brown, they will consume 1,352 million tons of grain each year—equal to two-thirds of the world's entire 2004 grain harvest. They will use 99 million barrels of oil a day, 15 million more than the entire world consumes at present. They will use more steel than all the West combined, double the world's production of paper, and drive 1.1 billion cars—1.5 times as many as the current world total. And that's just China; by then, India will have a bigger population, and its economy is growing almost as fast. And then there's the rest of the world.
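The scale of those projections can be checked with back-of-envelope arithmetic. The world totals below are simply back-solved from the ratios in the paragraph (two-thirds of the 2004 harvest, 15 million barrels above current consumption, 1.5 times the current car fleet); they are implied figures, not independent data:

```python
# Back-solving the world totals implied by the China-in-2031 figures above.
china_grain_mt = 1352                           # million tons of grain per year
world_harvest_2004 = china_grain_mt / (2 / 3)   # implied 2004 world harvest

china_oil_mbd = 99                              # million barrels per day
world_oil_today = china_oil_mbd - 15            # implied current world use

china_cars = 1.1e9                              # projected Chinese car fleet
world_cars_today = china_cars / 1.5             # implied current world fleet

print(round(world_harvest_2004))       # ~2028 million tons
print(world_oil_today)                 # 84 million barrels per day
print(round(world_cars_today / 1e6))   # ~733 million cars
```

One country's projected demand, in other words, rivals or exceeds the entire planet's present-day totals in each category.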

Trying to meet that kind of demand will stress the earth past its breaking point in an almost endless number of ways, but let's take just one. When Thomas Newcomen fired up his pump on that morning in 1712, the atmosphere contained 275 parts per million of carbon dioxide. We're now up to 380 parts per million, a level higher than the earth has seen for many millions of years, and climate change has only just begun. The median predictions of the world's climatologists—by no means the worst-case scenario—show that unless we take truly enormous steps to rein in our use of fossil fuels, we can expect average temperatures to rise another four or five degrees before the century is out, making the globe warmer than it's been since long before primates appeared. We might as well stop calling it earth and have a contest to pick some new name, because it will be a different planet. Humans have never done anything more profound, not even when we invented nuclear weapons.

How does this tie in with economic growth? Clearly, getting rich means getting dirty—that's why, when I was in Beijing recently, I could stare straight at the sun (once I actually figured out where in the smoggy sky it was). But eventually, getting rich also means wanting the "luxury" of clean air and finding the technological means to achieve it. Which is why you can once again see the mountains around Los Angeles; why more of our rivers are swimmable every year. And economists have figured out clever ways to speed this renewal: Creating markets for trading pollution credits, for instance, helped cut those sulfur and nitrogen clouds more rapidly and cheaply than almost anyone had imagined.

But getting richer doesn't lead to producing less carbon dioxide in the same way that it does to less smog—in fact, so far it's mostly the reverse. Environmental destruction of the old-fashioned kind—dirty air, dirty water—results from something going wrong. You haven't bothered to stick the necessary filter on your pipes, and so the crud washes into the stream; a little regulation, and a little money, and the problem disappears. But the second, deeper form of environmental degradation comes from things operating exactly as they're supposed to, just too much so. Carbon dioxide is an inevitable byproduct of burning coal or gas or oil—not something going wrong. Researchers are struggling to figure out costly and complicated methods to trap some CO2 and inject it into underground mines—but for all practical purposes, the vast majority of the world's cars and factories and furnaces will keep belching more and more of it into the atmosphere as long as we burn more and more fossil fuels.

True, as companies and countries get richer, they can afford more efficient machinery that makes better use of fossil fuel, like the hybrid Honda Civic I drive. But if your appliances have gotten more efficient, there are also far more of them: The furnace is better than it used to be, but the average size of the house it heats has doubled since 1950. The 60-inch TV? The always-on cable modem? No need for you to do the math—the electric company does it for you, every month. Between 1990 and 2003, precisely the years in which we learned about the peril presented by global warming, the United States' annual carbon dioxide emissions increased by 16 percent. And the momentum to keep going in that direction is enormous. For most of us, growth has become synonymous with the economy's "health," which in turn seems far more palpable than the health of the planet. Think of the terms we use—the economy, whose temperature we take at every newscast via the Dow Jones average, is "ailing" or it's "on the mend." It's "slumping" or it's "in recovery." We cosset and succor its every sniffle with enormous devotion, even as we more or less ignore the increasingly urgent fever that the globe is now running. The ecological economists have an enormous task ahead of them—a nearly insurmountable task, if it were "merely" the environment that is in peril. But here is where things get really interesting. It turns out that the economics of environmental destruction are closely linked to another set of leading indicators—ones that most humans happen to care a great deal about.

3. "It seems that well-being is a real phenomenon": Economists discover hedonics

Traditionally, happiness and satisfaction are the sort of notions that economists wave aside as poetic irrelevance, the kind of questions that occupy people with no head for numbers who had to major in liberal arts. An orthodox economist has a simple happiness formula: If you buy a Ford Expedition, then ipso facto a Ford Expedition is what makes you happy. That's all we need to know. The economist would call this idea "utility maximization," and in the words of the economic historian Gordon Bigelow, "the theory holds that every time a person buys something, sells something, quits a job, or invests, he is making a rational decision about what will...provide him 'maximum utility.' If you bought a Ginsu knife at 3 a.m. a neoclassical economist will tell you that, at that time, you calculated that this purchase would optimize your resources." The beauty of this principle lies in its simplicity. It is perhaps the central assumption of the world we live in: You can tell who I really am by what I buy.

Yet economists have long known that people's brains don't work quite the way the model suggests. When Bob Costanza, one of the fathers of ecological economics and now head of the Gund Institute at the University of Vermont, was first edging into economics in the early 1980s, he had a fellowship to study "social traps"—the nuclear arms race, say—in which "short-term behavior can get out of kilter with longer broad-term goals."

It didn't take long for Costanza to demonstrate, as others had before him, that, if you set up an auction in a certain way, people will end up bidding $1.50 to take home a dollar. Other economists have shown that people give too much weight to "sunk costs"—that they're too willing to throw good money after bad, or that they value items more highly if they already own them than if they are considering acquiring them. Building on such insights, a school of "behavioral economics" has emerged in recent years and begun plumbing how we really behave.

The wonder is that it took so long. We all know in our own lives how irrationally we are capable of acting, and how unconnected those actions are to any real sense of joy. (I mean, there you are at 3 a.m. thinking about the Ginsu knife.) But until fairly recently, we had no alternatives to relying on Ginsu knife and Ford Expedition purchases as the sole measures of our satisfaction. How else would we know what made people happy?

That's where things are now changing dramatically: Researchers from a wide variety of disciplines have started to figure out how to assess satisfaction, and economists have begun to explore the implications. In 2002 Princeton's Daniel Kahneman won the Nobel Prize in economics even though he is trained as a psychologist. In the book Well-Being, he and a pair of coauthors announce a new field called "hedonics," defined as "the study of what makes experiences and life pleasant or unpleasant.... It is also concerned with the whole range of circumstances, from the biological to the societal, that occasion suffering and enjoyment." If you are worried that there might be something altogether too airy about this, be reassured—Kahneman thinks like an economist. In the book's very first chapter, "Objective Happiness," he describes an experiment that compares "records of the pain reported by two patients undergoing colonoscopy," wherein every 60 seconds he insists they rate their pain on a scale of 1 to 10 and eventually forces them to make "a hypothetical choice between a repeat colonoscopy and a barium enema." Dismal science indeed.

As more scientists have turned their attention to the field, researchers have studied everything from "biases in recall of menstrual symptoms" to "fearlessness and courage in novice paratroopers." Subjects have had to choose between getting an "attractive candy bar" and learning the answers to geography questions; they've been made to wear devices that measured their blood pressure at regular intervals; their brains have been scanned. And by now that's been enough to convince most observers that saying "I'm happy" is more than just a subjective statement. In the words of the economist Richard Layard, "We now know that what people say about how they feel corresponds closely to the actual levels of activity in different parts of the brain, which can be measured in standard scientific ways." Indeed, people who call themselves happy, or who have relatively high levels of electrical activity in the left prefrontal region of the brain, are also "more likely to be rated as happy by friends," "more likely to respond to requests for help," "less likely to be involved in disputes at work," and even "less likely to die prematurely." In other words, conceded one economist, "it seems that what the psychologists call subjective well-being is a real phenomenon. The various empirical measures of it have high consistency, reliability, and validity."

The idea that there is a state called happiness, and that we can dependably figure out what it feels like and how to measure it, is extremely subversive. It allows economists to start thinking about life in richer (indeed) terms, to stop asking "What did you buy?" and to start asking "Is your life good?" And if you can ask someone "Is your life good?" and count on the answer to mean something, then you'll be able to move to the real heart of the matter, the question haunting our moment on the earth: Is more better?

4. If we're so rich, how come we're so damn miserable?

In some sense, you could say that the years since World War II in America have been a loosely controlled experiment designed to answer this very question. The environmentalist Alan Durning found that in 1991 the average American family owned twice as many cars as it did in 1950, drove 2.5 times as far, used 21 times as much plastic, and traveled 25 times farther by air. Gross national product per capita tripled during that period. Our houses are bigger than ever and stuffed to the rafters with belongings (which is why the storage-locker industry has doubled in size in the past decade). We have all sorts of other new delights and powers—we can send email from our cars, watch 200 channels, consume food from every corner of the world. Some people have taken much more than their share, but on average, all of us in the West are living lives materially more abundant than most people a generation ago.

What's odd is, none of it appears to have made us happier. Throughout the postwar years, even as the GNP curve has steadily climbed, the "life satisfaction" index has stayed exactly the same. Since 1972, the National Opinion Research Center has surveyed Americans on the question: "Taking all things together, how would you say things are these days—would you say that you are very happy, pretty happy, or not too happy?" (This must be a somewhat unsettling interview.) The "very happy" number peaked at 38 percent in the 1974 poll, amid oil shock and economic malaise; it now hovers right around 33 percent.

And it's not that we're simply recalibrating our sense of what happiness means—we are actively experiencing life as grimmer. In the winter of 2006 the National Opinion Research Center published data about "negative life events" comparing 1991 and 2004, two data points bracketing an economic boom. "The anticipation would have been that problems would have been down," the study's author said. Instead it showed a rise in problems—for instance, the percentage who reported breaking up with a steady partner almost doubled. As one reporter summarized the findings, "There's more misery in people's lives today."

This decline in the happiness index is not confined to the United States; as other nations have followed us into mass affluence, their experiences have begun to yield similar results. In the United Kingdom, real gross domestic product per capita grew by two-thirds between 1973 and 2001, but people's satisfaction with their lives changed not one whit. Japan saw a fourfold increase in real income per capita between 1958 and 1986 without any reported increase in satisfaction. In one place after another, rates of alcoholism, suicide, and depression have gone up dramatically, even as we keep accumulating more stuff. Indeed, one report in 2000 found that the average American child reported higher levels of anxiety than the average child under psychiatric care in the 1950s—our new normal is the old disturbed.

If happiness was our goal, then the unbelievable amount of effort and resources expended in its pursuit since 1950 has been largely a waste. One study of life satisfaction and mental health by Emory University professor Corey Keyes found just 17 percent of Americans "flourishing," in mental health terms, and 26 percent either "languishing" or out-and-out depressed.

5. Danes (and Mexicans, the Amish, and the Masai) just want to have fun

How is it, then, that we became so totally, and apparently wrongly, fixated on the idea that our main goal, as individuals and as nations, should be the accumulation of more wealth? The answer is interesting for what it says about human nature. Up to a certain point, more really does equal better. Imagine briefly your life as a poor person in a poor society—say, a peasant farmer in China. (China has one-fourth of the world's farmers, but one-fourteenth of its arable land; the average farm in the southern part of the country is about half an acre, or barely more than the standard lot for a new American home.) You likely have the benefits of a close and connected family, and a village environment where your place is clear. But you lack any modicum of security for when you get sick or old or your back simply gives out. Your diet is unvaried and nutritionally lacking; you're almost always cold in winter.

In a world like that, a boost in income delivers tangible benefits. In general, researchers report that money consistently buys happiness right up to about $10,000 income per capita. That's a useful number to keep in the back of your head—it's like the freezing point of water, one of those random figures that just happens to define a crucial phenomenon on our planet. "As poor countries like India, Mexico, the Philippines, Brazil, and South Korea have experienced economic growth, there is some evidence that their average happiness has risen," the economist Layard reports. Past $10,000 (per capita, mind you—that is, the average for each man, woman, and child), there's a complete scattering: When the Irish were making two-thirds as much as Americans they were reporting higher levels of satisfaction, as were the Swedes, the Danes, the Dutch. Mexicans score higher than the Japanese; the French are about as satisfied with their lives as the Venezuelans. In fact, once basic needs are met, the "satisfaction" data scrambles in mind-bending ways. A sampling of Forbes magazine's "richest Americans" have identical happiness scores with Pennsylvania Amish, and are only a whisker above Swedes taken as a whole, not to mention the Masai. The "life satisfaction" of pavement dwellers—homeless people—in Calcutta is among the lowest recorded, but it almost doubles when they move into a slum, at which point they are basically as satisfied with their lives as a sample of college students drawn from 47 nations. And so on.

On the list of major mistakes we've made as a species, this one seems pretty high up. Our single-minded focus on increasing wealth has succeeded in driving the planet's ecological systems to the brink of failure, even as it's failed to make us happier. How did we screw up?

The answer is pretty obvious—we kept doing something past the point that it worked. Since happiness had increased with income in the past, we assumed it would inevitably do so in the future. We make these kinds of mistakes regularly: Two beers made me feel good, so ten will make me feel five times better. But this case was particularly extreme—in part because as a species, we've spent so much time simply trying to survive. As the researchers Ed Diener and Martin Seligman—both psychologists—observe, "At the time of Adam Smith, a concern with economic issues was understandably primary. Meeting simple human needs for food, shelter and clothing was not assured, and satisfying these needs moved in lockstep with better economics." Freeing people to build a more dynamic economy was radical and altruistic.

Consider Americans in 1820, two generations after Adam Smith. The average citizen earned, in current dollars, less than $1,500 a year, which is somewhere near the current average for all of Africa. As the economist Deirdre McCloskey explains in a 2004 article in the magazine Christian Century, "Your great-great-great-grandmother had one dress for church and one for the week, if she were not in rags. Her children did not attend school, and probably could not read. She and her husband worked eighty hours a week for a diet of bread and milk—they were four inches shorter than you." Even in 1900, the average American lived in a house the size of today's typical garage. Is it any wonder that we built up considerable velocity trying to escape the gravitational pull of that kind of poverty? An object in motion stays in motion, and our economy—with the built-up individual expectations that drive it—is a mighty object indeed.

You could call it, I think, the Laura Ingalls Wilder effect. I grew up reading her books—Little House on the Prairie, Little House in the Big Woods—and my daughter grew up listening to me read them to her, and no doubt she will read them to her children. They are the ur-American story. And what do they tell? Of a life rich in family, rich in connection to the natural world, rich in adventure—but materially deprived. That one dress, that same bland dinner. At Christmastime, a penny—a penny! And a stick of candy, and the awful deliberation about whether to stretch it out with tiny licks or devour it in an orgy of happy greed. A rag doll was the zenith of aspiration. My daughter likes dolls too, but her bedroom boasts a density of Beanie Babies that mimics the manic biodiversity of the deep rainforest. Another one? Really, so what? Its marginal utility, as an economist might say, is low. And so it is with all of us. We just haven't figured that out because the momentum of the past is still with us—we still imagine we're in that little house on the big prairie.

6. This year's model home: "Good for the dysfunctional family"

That great momentum has carried us away from something valuable, something priceless: It has allowed us to become (very nearly forced us to become) more thoroughly individualistic than we really wanted to be. We left behind hundreds of thousands of years of human community for the excitement, and the isolation, of "making something of ourselves," an idea that would not have made sense for 99.9 percent of human history. Adam Smith's insight was that the interests of each of our individual selves could add up, almost in spite of themselves, to social good—to longer lives, fuller tables, warmer houses. Suddenly the community was no longer necessary to provide these things; they would happen as if by magic. And they did happen. And in many ways it was good.

But this process of liberation seems to have come close to running its course. Study after study shows Americans spending less time with friends and family, either working longer hours, or hunched over their computers at night. And each year, as our population grows by 1 percent, we manage to spread ourselves out over 6 to 8 percent more land. Simple mathematics says that we're less and less likely to bump into the other inhabitants of our neighborhood, or indeed of our own homes. As the Wall Street Journal reported recently, "Major builders and top architects are walling people off. They're touting one-person 'Internet alcoves,' locked-door 'away rooms,' and his-and-her offices on opposite ends of the house. The new floor plans offer so much seclusion, they're 'good for the dysfunctional family,' says Gopal Ahluwahlia, director of research for the National Association of Home Builders." At the building industry's annual Las Vegas trade show, the "showcase 'Ultimate Family Home' hardly had a family room," noted the Journal. Instead, the boy's personal playroom had its own 42-inch plasma TV, and the girl's bedroom had a secret mirrored door leading to a "hideaway karaoke room." "We call this the ultimate home for families who don't want anything to do with one another," said Mike McGee, chief executive of Pardee Homes of Los Angeles, builder of the model.

This transition from individualism to hyper-individualism also made its presence felt in politics. In the 1980s, British prime minister Margaret Thatcher asked, "Who is society? There is no such thing. There are individual men and women, and there are families." Talk about everything solid melting into air—Thatcher's maxim would have spooked Adam Smith himself. The "public realm"—things like parks and schools and Social Security, the last reminders of the communities from which we came—is under steady and increasing attack. Instead of contributing to the shared risk of health insurance, Americans are encouraged to go it alone with "health savings accounts." Hell, even the nation's most collectivist institution, the U.S. military, until recently recruited under the slogan an "Army of One." No wonder the show that changed television more than any other in the past decade was Survivor, where the goal is to end up alone on the island, to manipulate and scheme until everyone is banished and leaves you by yourself with your money.

It's not so hard, then, to figure out why happiness has declined here even as wealth has grown. During the same decades when our lives grew busier and more isolated, we've gone from having three confidants on average to only two, and the number of people saying they have no one to discuss important matters with has nearly tripled. Between 1974 and 1994, the percentage of Americans who said they visited with their neighbors at least once a month fell from almost two-thirds to less than half, a number that has continued to fall in the past decade. We simply worked too many hours earning, we commuted too far to our too-isolated homes, and there was always the blue glow of the tube shining through the curtains.

7. New friend or new coffeemaker? Pick one

Because traditional economists think of human beings primarily as individuals and not as members of a community, they miss out on a major part of the satisfaction index. Economists lay it out almost as a mathematical equation: Overall, "evidence shows that companionship...contributes more to well-being than does income," writes Robert E. Lane, a Yale political science professor who is the author of The Loss of Happiness in Market Democracies. But there is a notable difference between poor and wealthy countries: When people have lots of companionship but not much money, income "makes more of a contribution to subjective well-being." By contrast, "where money is relatively plentiful and companionship relatively scarce, companionship will add more to subjective well-being." If you are a poor person in China, you have plenty of friends and family around all the time—perhaps there are four other people living in your room. Adding a sixth doesn't make you happier. But adding enough money so that all five of you can eat some meat from time to time pleases you greatly. By contrast, if you live in a suburban American home, buying another coffeemaker adds very little to your quantity of happiness—trying to figure out where to store it, or wondering if you picked the perfect model, may in fact decrease your total pleasure. But a new friend, a new connection, is a big deal. We have a surplus of individualism and a deficit of companionship, and so the second becomes more valuable.

Indeed, we seem to be genetically wired for community. As biologist Edward O. Wilson found, most primates live in groups and get sad when they're separated—"an isolated individual will repeatedly pull a lever with no reward other than the glimpse of another monkey." Why do people so often look back on their college days as the best years of their lives? Because their classes were so fascinating? Or because in college, we live more closely and intensely with a community than most of us ever do before or after? Every measure of psychological health points to the same conclusion: People who "are married, who have good friends, and who are close to their families are happier than those who do not," says Swarthmore psychologist Barry Schwartz. "People who participate in religious communities are happier than those who are not." Which is striking, Schwartz adds, because social ties "actually decrease freedom of choice"—being a good friend involves sacrifice.

Do we just think we're happier in communities? Is it merely some sentimental good-night-John-Boy affectation? No—our bodies react in measurable ways. According to research cited by Harvard professor Robert Putnam in his classic book Bowling Alone, if you do not belong to any group at present, joining a club or a society of some kind cuts in half the risk that you will die in the next year. Check this out: When researchers at Carnegie Mellon (somewhat disgustingly) dropped samples of cold virus directly into subjects' nostrils, those with rich social networks were four times less likely to get sick. An economy that produces only individualism undermines us in the most basic ways.

Here's another statistic worth keeping in mind: Consumers have 10 times as many conversations at farmers' markets as they do at supermarkets—an order of magnitude difference. By itself, that's hardly life-changing, but it points at something that could be: living in an economy where you are participant as well as consumer, where you have a sense of who's in your universe and how it fits together. At the same time, some studies show local agriculture using less energy (also by an order of magnitude) than the "it's always summer somewhere" system we operate on now. Those are big numbers, and it's worth thinking about what they suggest—especially since, between peak oil and climate change, there's no longer really a question that we'll have to wean ourselves of the current model.

So as a mental experiment, imagine how we might shift to a more sustainable kind of economy. You could use government policy to nudge the change—remove subsidies from agribusiness and use them instead to promote farmer-entrepreneurs; underwrite the cost of windmills with even a fraction of the money that's now going to protect oil flows. You could put tariffs on goods that travel long distances, shift highway spending to projects that make it easier to live near where you work (and, by cutting down on commutes, leave some time to see the kids). And, of course, you can exploit the Net to connect a lot of this highly localized stuff into something larger. By way of example, a few of us are coordinating the first nationwide global warming demonstration—but instead of marching on Washington, we're rallying in our local areas, and then fusing our efforts, via the website, into a national message.

It's easy to dismiss such ideas as sentimental or nostalgic. In fact, economies can be localized as easily in cities and suburbs as rural villages (maybe more easily), and in ways that look as much to the future as the past, that rely more on the solar panel and the Internet than the white picket fence. In fact, given the trendlines for phenomena such as global warming and oil supply, what's nostalgic and sentimental is to keep doing what we're doing simply because it's familiar.

8. The oil-for-people paradox: Why small farms produce more food

To understand the importance of this last point, consider the book American Mania by the neuroscientist Peter Whybrow. Whybrow argues that many of us in this country are predisposed to a kind of dynamic individualism—our gene pool includes an inordinate number of people who risked everything to start over. This served us well in settling a continent and building our prosperity. But it never got completely out of control, says Whybrow, because "the marketplace has always had its natural constraints. For the first two centuries of the nation's existence, even the most insatiable American citizen was significantly leashed by the checks and balances inherent in a closely knit community, by geography, by the elements of weather, or, in some cases, by religious practice." You lived in a society—a habitat—that kept your impulses in some kind of check. But that changed in the past few decades as the economy nationalized and then globalized. As we met fewer actual neighbors in the course of a day, those checks and balances fell away. "Operating in a world of instant communication with minimal social tethers," Whybrow observes, "America's engines of commerce and desire became turbocharged."

Adam Smith himself had worried that too much envy and avarice would destroy "the empathic feeling and neighborly concerns that are essential to his economic model," says Whybrow, but he "took comfort in the fellowship and social constraint that he considered inherent in the tightly knit communities characteristic of the 18th century." Businesses were built on local capital investment, and "to be solicitous of one's neighbor was prudent insurance against future personal need." For the most part, people felt a little constrained about showing off wealth; indeed, until fairly recently in American history, someone who was making tons of money was often viewed with mixed emotions, at least if he wasn't giving back to the community. "For the rich," Whybrow notes, "the reward system would be balanced between the pleasure of self-gain and the civic pride of serving others. By these mechanisms the most powerful citizens would be limited in their greed."

Once economies grow past a certain point, however, "the behavioral contingencies essential to promoting social stability in a market-regulated society—close personal relationships, tightly knit communities, local capital investment, and so on—are quickly eroded." So re-localizing economies offers one possible way around the gross inequalities that have come to mark our societies. Instead of aiming for growth at all costs and hoping it will trickle down, we may be better off living in enough contact with each other for the affluent to once again feel some sense of responsibility for their neighbors. This doesn't mean relying on noblesse oblige; it means taking seriously the idea that people, and their politics, can be changed by their experiences. It's a hopeful sign that more and more local and state governments across the country have enacted "living wage" laws. It's harder to pretend that the people you see around you every day should live and die by the dictates of the market.

Right around this time, an obvious question is doubtless occurring to you. Is it foolish to propose that a modern global economy of 6 (soon to be 9) billion people should rely on more localized economies? To put it more bluntly, since for most people "the economy" is just a fancy way of saying "What's for dinner?" and "Am I having any?," doesn't our survival depend on economies that function on a massive scale—such as highly industrialized agriculture? Turns out the answer is no—and the reasons why offer a template for rethinking the rest of the economy as well.

We assume, because it makes a certain kind of intuitive sense, that industrialized farming is the most productive farming. A vast Midwestern field filled with high-tech equipment ought to produce more food than someone with a hoe in a small garden. Yet the opposite is true. If what you're after is the greatest yield from the land, then smaller farms in fact produce more food.

If you are one guy on a tractor responsible for thousands of acres, you grow your corn and that's all you can do—make pass after pass with the gargantuan machine across a sea of crop. But if you're working 10 acres, then you have time to really know the land, and to make it work harder. You can intercrop all kinds of plants—their roots will go to different depths, or they'll thrive in each other's shade, or they'll make use of different nutrients in the soil. You can also walk your fields, over and over, noticing. According to the government's most recent agricultural census, smaller farms produce far more food per acre, whether you measure in tons, calories, or dollars. In the process, they use land, water, and oil much more efficiently; if they have animals, the manure is a gift, not a threat to public health. To feed the world, we may actually need lots more small farms.

But if this is true, then why do we have large farms? Why the relentless consolidation? There are many reasons, including the way farm subsidies have been structured, the easier access to bank loans (and politicians) for the big guys, and the convenience for food-processing companies of dealing with a few big suppliers. But the basic reason is this: We substituted oil for people. Tractors and synthetic fertilizer instead of farmers and animals. Could we take away the fossil fuel, put people back on the land in larger numbers, and have enough to eat?

The best data to answer that question comes from an English agronomist named Jules Pretty, who has studied nearly 300 sustainable agriculture projects in 57 countries around the world. They might not pass the U.S. standards for organic certification, but they're all what he calls "low-input." Pretty found that over the past decade, almost 12 million farmers had begun using sustainable practices on about 90 million acres. Even more remarkably, sustainable agriculture increased food production by 79 percent per acre. These were not tiny isolated demonstration farms—Pretty studied 14 projects where 146,000 farmers across a broad swath of the developing world were raising potatoes, sweet potatoes, and cassava, and he found that practices such as cover-cropping and fighting pests with natural adversaries had increased production 150 percent—17 tons per household. With 4.5 million small Asian grain farmers, average yields rose 73 percent. When Indonesian rice farmers got rid of pesticides, their yields stayed the same but their costs fell sharply.

"I acknowledge," says Pretty, "that all this may sound too good to be true for those who would disbelieve these advances. Many still believe that food production and nature must be separated, that 'agroecological' approaches offer only marginal opportunities to increase food production, and that industrialized approaches represent the best, and perhaps only, way forward. However, prevailing views have changed substantially in just the last decade."

And they will change just as profoundly in the decades to come across a wide range of other commodities. Already I've seen dozens of people and communities working on regional-scale sustainable timber projects, on building energy networks that work like the Internet by connecting solar rooftops and backyard windmills in robust mini-grids. That such things can begin to emerge even in the face of the political power of our reigning economic model is remarkable; as we confront significant change in the climate, they could speed along the same kind of learning curve as Pretty's rice farmers and wheat growers. And they would not only use less energy; they'd create more community. They'd start to reverse the very trends I've been describing, and in so doing rebuild the kind of scale at which Adam Smith's economics would help instead of hurt.

In the 20th century, two completely different models of how to run an economy battled for supremacy. Ours won, and not only because it produced more goods than socialized state economies. It also produced far more freedom, far less horror. But now that victory is starting to look Pyrrhic; in our overheated and underhappy state, we need some new ideas.

We've gone too far down the road we're traveling. The time has come to search the map, to strike off in new directions. Inertia is a powerful force; marriages and corporations and nations continue in motion until something big diverts them. But in our new world we have much to fear, and also much to desire, and together they can set us on a new, more promising course.

Want to see how the "satisfaction index" has changed over your lifetime? Find some of the data mentioned in this article—and a few other numbers that will surprise you—at: