Predictable improvements in lithographic methods foretell continued increases in computer processing power. Economic growth and engineering evolution continue to increase the size of objects which can be manufactured and the power that can be controlled by humans. Neuroscience is gradually dissecting the components and functions of the structures in the brain. Advances in computer science and programming methodologies are increasingly able to emulate aspects of human intelligence. Continued progress in these areas leads to a convergence which results in megascale superintelligent thought machines. These machines, referred to as Matrioshka Brains [1], consume the entire power output of stars (~10^26 W), consume all of the useful construction material of a solar system (~10^26 kg), have thought capacities limited by the physics of the universe and are essentially immortal.

A common practice in literature discussing the search for extraterrestrial life is to assume and apply human characteristics and interests to alien species. Authors limit themselves by assuming the technologies available to aliens are substantially similar to, or only somewhat greater than, those we currently possess. These mistakes bias their conclusions, preventing us from recognizing signs of alien intelligence when we see it, and misdirect our efforts in searching for such intelligence. We should instead start with the laws on which our particular universe operates and the limits they impose on us. Projections should then be made to determine the rate at which intelligent civilizations, such as ours, approach the limits imposed by those laws. Using these time horizons, laws and limits, we may be better able to construct an image of what alien intelligence may be like and how we ourselves may evolve.
The two pillars on which the MB arch rests are the extensions of current engineering trends to the largest and smallest scales. At the largest scale, in their initial stages, MB are limited by the mass and energy provided by individual solar systems. At the smallest scale, MB are limited by our ability to assemble materials atom by atom. The terms megascale engineering and molecular nanotechnology are generally used to discuss these different perspectives. The union of construction methods at the small and large scale limits allows the optimal use of locally available energy and matter and is the distinguishing feature of Matrioshka Brains.
Megascale engineering has its roots in science fiction. One of the first scientific examinations of megascale engineering was by mathematician Freeman Dyson (1960), who discussed dismantling Jupiter to construct a shell around the sun to harvest all of its energy and provide a biosphere capable of supporting large numbers of people. Writer Larry Niven addressed some of the problems of gravity in Dyson shells by changing the form of the biosphere from a shell to a rotating Niven Ring. Other examples of megascale engineering exist in fictional literature, but these are the most relevant for the discussion of MB.
Nanoscale engineering was first discussed by Richard Feynman in 1959. These ideas were extended by Eric Drexler in his 1981 PNAS paper and in Engines of Creation. Much of the engineering basis for nanotechnology is documented in Nanosystems. Progress in the development of nanotechnology continues, and no serious challenges to its ideas have been produced in the last ten years (Merkle, 1998). Estimates of its full-scale development and deployment range from 10 to 30 years in the future.
Megascale and nanoscale engineering currently do not exist. Megascale engineering would result from the progression of trends in the engineering of large-scale structures such as pyramids, oil tankers, suspension bridges, tunnels, skyscrapers and rockets. Nanoscale engineering would result from trend progressions in microelectronic lithographies, micromachining, microvolume and combinatorial chemistry, biotechnological manipulation of genes and proteins, robotics and computer science.
It is paradoxical that many people more easily envision megascale engineering than nanoscale engineering. The most logical explanation for this is that our senses are able to directly interact with megascale structures, while intermediaries such as atomic force microscopes or enzymes are required to sense and manipulate things at the nanoscale level. It is important to remember that atomic scale pumps, motors, engines, power generation apparatus and molecular manipulators (enzymes) exist in every individual reading this document. By mid-1998, the complete genomic DNA sequences (nanoscale programs) for more than 30 different bacteria and yeast (nanoscale assembly and replication machines) were known. Nanoscale technology exists and is rapidly being domesticated by humans.
As has been pointed out by Dyson [1960, 1968], Kardashev [1985, 1988, 1997], Berry [1974], and Criswell [1985], the progression of existing population and economic growth, and power and mass management trends in our society, would enable the construction of Matrioshka Brains using existing (non-nanoscale) technologies within at most a few thousand years. Nanoscale assembly per se is not required. Current trends in silicon wafer production, if continued, would allow the production of sufficient microprocessors, of current primitive designs, to create a MB by 2250. It would, however, require most of the silicon in the planet Venus as raw material. A MB built from such processors would have capabilities significantly below the limits attainable with nanoscale fabrication. Even so, a computing machine built from these primitive components would have a thought capacity in excess of a million times that of the 6 billion+ people now populating the planet! A small fraction of this thought capacity devoted to extending engineering methods should, in a brief period, develop nanoengineering and assembly to its ultimate limits.
Computer Trends and Characteristics
To discuss the computational characteristics of a MB it is necessary to understand the evolution of computers. This topic is too complex to be discussed in great detail in this paper. In general, however, we may assume that current trends in lithography (optical lithography down to 0.08 μm) and computer architecture modifications such as processor-in-memory (PIM), intelligent RAM (IRAM), content-addressable memory (CAM), etc. should provide approximately human-brain-equivalent computational capacity in desktop machines sometime between 2005 and 2010.

Lithographic methods will continue to improve, transitioning from optical to EUV, to X-ray, to e-beam or nano-imprint (soft lithography), each at smaller levels of resolution. This will culminate in nanoassembly with the ability to manipulate individual atoms. If historic trends were to continue, atomic scale manipulation would be reached by 2050. In the last 5 years, however, the introduction of decreased lithographic scales has been accelerating [SIA, 1997]. The formation of a company (Zyvex) pursuing the goal of producing a nanoassembler, and prognostications on nanotechnology development trends confirming earlier projections [Drexler, 1998], provide reasons to believe that nanoassembly may become possible in the 2010-2015 time frame. In fact, Jim Von Ehr, the president of Zyvex LLC, has publicly stated that he believes Zyvex will be able to do diamondoid nanoassembly by 2010.
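A minimal sketch of this kind of trend extrapolation, assuming an illustrative 250 nm feature size in 1997 and a fixed six-year halving time (both assumed values for illustration, not SIA roadmap figures):

```python
import math

# Illustrative trend extrapolation (assumed numbers, not SIA roadmap data):
# feature sizes halve at a fixed interval until atomic dimensions are reached.
START_YEAR = 1997
START_FEATURE_M = 250e-9   # assumed ~250 nm lithography circa 1997
HALVING_YEARS = 6.0        # assumed halving interval
ATOMIC_SCALE_M = 0.2e-9    # roughly the diameter of a small atom

halvings = math.log2(START_FEATURE_M / ATOMIC_SCALE_M)
year_atomic = START_YEAR + halvings * HALVING_YEARS
print(f"{halvings:.1f} halvings -> atomic scale circa {year_atomic:.0f}")
# ~10.3 halvings -> circa 2059; accelerating scale introduction pulls this earlier.
```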
Lithographic technologies enable the construction of very powerful computers based on a two-dimensional technology. Systems assembly methods using SIMMs and processor cards (Slot-1) effectively convert 2-D chips into 3-D systems. The addition of optical interconnects (Emcore, Opticomp and others) and high capacity cooling (Beech et al., 1992; Tuckerman, 1984; SDL Inc.) allows significant increases in communication bandwidth and processing density. Nanotechnology enables the construction of 3-D computers in which the computational, communication, power production and delivery, and cooling elements are tightly integrated into a low cost package using a single uniform assembly process. Nanotechnology will be a natural development once the limits of conventional manufacturing and assembly processes are reached. There is no known process that allows efficiencies and capabilities greater than those offered by nanotechnology, so it is reasonable to assume that nanotechnology and nanoassembly represent a significant plateau in the development of technological civilizations.
In Nanosystems, Drexler outlined the details of a rod-logic computer (essentially a nanoscale abacus). A single rod-logic nanoCPU is a very small computer which consumes very little power and has very little capacity. NanoCPUs can be assembled into parallel systems (midi-Nanocomputers) which achieve the processing capacity of current microprocessors at significantly lower power consumption. Further aggregation results in a Mega-Nanocomputer that consumes 100,000 W (~10^4 times more than a human brain) in a volume of 1 cm^3 (~10^3 times less than a human brain). The high speed, massive parallelism and reduced propagation delays in a Mega-Nanocomputer should result in computational throughput 10^6-10^7 times greater than the human brain.
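A quick check of those ratios against nominal human-brain figures (the ~20 W power draw and ~1400 cm^3 volume are assumed textbook values, not figures from Nanosystems):

```python
# Sanity-check the Mega-Nanocomputer ratios quoted above.
# Brain figures are assumed textbook values, not from Nanosystems.
BRAIN_POWER_W = 20.0        # assumed human brain power consumption
BRAIN_VOLUME_CM3 = 1400.0   # assumed human brain volume

MEGA_NANO_POWER_W = 1e5     # Nanosystems Mega-Nanocomputer power
MEGA_NANO_VOLUME_CM3 = 1.0  # Nanosystems Mega-Nanocomputer volume

print(f"power ratio : {MEGA_NANO_POWER_W / BRAIN_POWER_W:.0e}")        # ~5e3 (~10^4)
print(f"volume ratio: {BRAIN_VOLUME_CM3 / MEGA_NANO_VOLUME_CM3:.0e}")  # ~1e3
```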
Merkle and Drexler have also developed helical logic which requires nanoassembly methods to create computers based on the control of the movement of single electrons. The limits on computation are dictated by the size of the computational elements and the heat production associated with the computation. We may assume that the manipulation of single electrons and the use of reversible logic (such as in rod and helical logic) bring us close to the possible limits of computation. These topics are explored in much greater depth in Merkle & Drexler, 1996, Sandberg, 1997 and Frank & Knight, 1998.
Since the details of rod-logic computers (power consumption, size, computational capacity, etc.) are the best defined for Mega-nanocomputers, they will be used for our discussion. Beyond rod-logic, helical logic allows an improvement of 10^11 in power consumption per operation. Theoretical limits potentially allow improvements of 10^9 in cooling capacities (power density) and 10^4 in operating frequencies. If computers could be produced at these limits, computational capacities from 10^10 to 10^20 greater than those presented in this paper may be possible.
Table 1 details the characteristics of some of these computer architectures.
Architecture | Speed | Rate | Source
---|---|---|---
Circa Y2000 microprocessor (e.g. Merced) | | | Intel, Byte
Rod-logic NanoCPU | | | Nanosystems
Rod-logic Midi-Nanocomputer | | | Nanosystems
Rod-logic Mega-Nanocomputer | | | Nanosystems, pg. 370
Helical-logic computer | | | Merkle & Drexler, 1996; Drexler, 1992
Physical limits | | | Merkle & Drexler, 1996
Computer and Human Operations Equivalence
At the simplest level of abstraction, neurons can be considered to be multiplication and adding machines. A neuron multiplies the "strength" of each synaptic connection by the "weight" of the incoming signal and sums these values across a number of input synapses. If the result exceeds a certain threshold, the neuron fires and transmits a signal to other neurons connected to its network. Neurons fire very slowly, < 100 times per second. The immense power found in the human brain is due to neuron features other than speed. These include their small size, low power consumption, high interconnection levels (100-10,000 per neuron) and, to a large degree, sheer numbers. The human neocortex, the most highly developed portion of the human brain and that part which is thought to be responsible for "higher thought", contains ~21 billion (2.1×10^10) neurons [Pakkenberg, 1997]. The total number of neurons in the brain is less certain, but since the neocortex contains roughly 1/3 of the brain volume, unless neuron density is much higher in other brain regions, extrapolation from Pakkenberg's data implies a total of roughly 60 billion (6×10^10) neurons in the brain. To provide a proper perspective, if current SIA projected trends continue, microprocessors would not have 60 billion transistors until circa 2025, and even then a single transistor does not possess the computational capacity of a neuron. On the other side of the coin, a microprocessor with 60 billion transistors would occupy a volume much smaller than that of the human brain.

If we assume 6×10^10 neurons × 5×10^1 firings per second × 10^3 operations per neuron firing, we end up with a result of 3×10^15 operations per second (3,000 trillion operations per second, or 3,000 TeraOps). This is likely to be at the high end of possible computational capacities since it assumes that all neurons are being used simultaneously. This is unlikely to be true since the brain clearly has specialized structures for visual, auditory and odor input; speech output; physical sensation and control; memory storage and recall; language analysis and comprehension; and left-right brain communication. It is unlikely that all of these structures will be optimally utilized at any point in time.
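A minimal sketch of this multiply-accumulate neuron model and the capacity arithmetic above (the threshold value and input weights are illustrative placeholders, not biological data):

```python
# Threshold neuron as a multiply-accumulate unit, per the description above.
def neuron_fires(weights, inputs, threshold):
    """Multiply synaptic strengths by incoming signals, sum, and compare."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return activation >= threshold

# Illustrative placeholder values (not measured biological data).
print(neuron_fires(weights=[0.5, -0.2, 0.9], inputs=[1.0, 1.0, 0.0], threshold=0.25))

# Whole-brain capacity estimate from the text:
NEURONS = 6e10          # extrapolated from Pakkenberg's neocortex count
FIRINGS_PER_S = 5e1     # consistent with < 100 Hz firing rates
OPS_PER_FIRING = 1e3    # assumed operations per firing
print(f"{NEURONS * FIRINGS_PER_S * OPS_PER_FIRING:.0e} ops/s")  # 3e+15
```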
A high-end estimate of 3×10^15 operations per second (3,000 TeraOps) for human thought capacity does not significantly differ from those found in the literature, as outlined in Table 2.
Brain Capacity | Method | Source
---|---|---
10^13 calculations per second, 10^14 bits per second | Algorithmic equivalence | Moravec (1987)
10^14 instructions per second | Extrapolation of retina-equivalent computer operations | Moravec (1997)
10^13-10^16 operations per second | Power consumption | Merkle (1989)
10^17 FLOPS(*) | Arithmetic equivalence | McEachern (1993)
The fact that these capacity estimates, derived using different methods, compute values within a range of 10,000 demonstrates how poorly understood the brain is at this time. The numbers are, however, in general agreement. Because of the specialized structures of the brain, it is impossible to focus all of the available capacity on a single problem. Computers, unlike the brain, can devote all of their capacity to a single problem (assuming the problem fits in available memory). This implies that computers do not require the capacity of the brain to achieve equivalence with specialized areas of the brain. Developing trends in desktop computers are analogous to the multiprocessing occurring in the brain. It is not uncommon for systems to execute 10-20 processes simultaneously. These might include listening to a network, listening to human speech, recording and compressing information for permanent storage, displaying information for interpretation, and devoting intensive processing power to search, recognition or analytical processes. The available computer power is divided among the tasks at hand in the computer, just as in the brain.

Computer capacity has increased significantly in recent years. Current state-of-the-art computers achieve operating levels as follows:

- Intel Teracomputer: 1.8 Teraflops (1.8×10^12 FLOPS)
- IBM Teracomputer: 3 Teraflops (3×10^12 FLOPS)
- IBM ASCI White: 12.3 Teraflops (1.23×10^13 FLOPS)
- IBM Deep Blue (chess computer): 200 million (2×10^8) positions per second = ~3 million MIPS (3×10^12 IPS)
- GRAPE (GRAvity PipE) computers for stellar orbit calculations (Taubes, 1997):
  - GRAPE-3 (1991): 600 MFLOPS (6×10^8 FLOPS)
  - GRAPE-4 (1995): 1.08 Teraflops (1.1×10^12 FLOPS)
  - GRAPE-5 (1998): 21.6 Gigaflops (2.16×10^10 FLOPS)
  - GRAPE-6 (2000/1): 100 Teraflops (1×10^14 FLOPS)
- On the drawing board:
  - IBM Blue Gene (~2003): 1 petaflop (10^15 FLOPS)

It is clear from these numbers that computers are approaching human brain capacity and will eventually exceed it. As pointed out by Moravec (1997), the Deep Blue computer was able to defeat Garry Kasparov with only 1/30th of the estimated power of the human brain. Either the brain has less capacity than the estimates above indicate, or humans are unable to devote all of that capacity to a single task.
Computers have always been better than humans in arithmetic. They now seem to be approaching our abilities in tasks which require parallel processing. In recent years, computer systems have demonstrated 'human' abilities such as:
- Besting humans in games with random factors (Blackjack) and games based on non-random principles (Checkers and Chess). The only game where a human can currently compete against a computer is Go.
- Proving theorems previously unproved by humans.
- "Reading" documents (OCR) and "understanding" human speech.
- Driving automobiles.

The realm of activities available only to humans is becoming increasingly small, so it seems reasonable to assume that computers will match and eventually exceed human capabilities.

Solar Power
Drexler has observed [Drexler, 1992] that, with no improvements in device physics but simply the technology to fabricate small precise structures, it should be possible to construct solar collectors with a mass of ~10^-3 kg/m^2 and a power-to-mass harvesting capability at Earth orbit of ~10^5 W/kg. The power output of the sun is ~4×10^26 W, implying a mass requirement of ~10^21 kg for solar collectors in Earth orbit. This is approximately the estimated mass of the asteroids in the asteroid belt and significantly less than the mass of the Earth's moon (7×10^22 kg) or the planet Mercury (3×10^23 kg). An orbit for solar collectors between Mercury and Venus would reduce the mass requirements still further. Orbits very close to the sun could decrease the lifetime of the solar collectors, reducing the quantity of harvested energy due to the requirement for continual reconstruction of the solar cells [Landis, 1998]. Assuming our solar system is typical, harvesting the entire energy output of a star using the material present in its solar system is feasible [2].
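A minimal check of that mass estimate, using only the figures quoted above:

```python
# Collector mass needed to intercept the sun's entire output, using
# Drexler's power-to-mass figure quoted above.
SOLAR_OUTPUT_W = 4e26      # total power output of the sun
HARVEST_W_PER_KG = 1e5     # power-to-mass capability at Earth orbit

mass_kg = SOLAR_OUTPUT_W / HARVEST_W_PER_KG
print(f"collector mass: {mass_kg:.0e} kg")  # 4e+21 kg, ~asteroid-belt mass
```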
Kardashev Civilization Levels

Nikolai Kardashev, in his 1964 paper, defined three major evolutionary levels for civilizations. These are outlined in Table 3.
Level | Utilization
---|---
KT-I | The power and material resources of a planet
KT-II | The power output of a star (~4×10^26 W) and the material resources of a solar system (~10^26 kg)
KT-III | The power and material resources of a galaxy (~10^11 stars)
For KT-II and KT-III level civilizations, if the material in the stars is excluded, then the mass available is perhaps ~10^-4 of the total. If the star, and the hydrogen and helium in the solar system, are excluded, then the mass available to a civilization is ~10^-6 of the total available mass. There are reasons, discussed below, to suggest that the strength of these exclusions changes with the evolution of the civilization.
Development Trends
In short, we can see that computer power is likely to continue to evolve until it significantly exceeds human intelligence. The ability to harvest stellar power output and use it to rearrange the materials of a solar system appears feasible. Unless something occurs to intervene in the course of its evolution, a technological civilization should evolve from a KT-I to a KT-II and perhaps a KT-III level. As the greatest intelligence levels for these civilizations would be achieved by constructing nanotechnology-based supercomputers powered at stellar output levels, it is presumed that at least some civilizations would follow this path. These constructs are examined in more detail below.
The computers would typically be NanoCPUs or Mega-NanoCPUs with a large amount of nanoscale storage and high efficiency, high bandwidth (optical) communications channels to other similar devices.
A Matrioshka Brain architecture is highly dependent on the structural materials from which the energy collectors, computational elements and radiators are constructed. At high temperatures the greatest problem is the destruction of the computational elements. Three relatively abundant materials from which high-temperature rod-logic computers could be constructed are diamond (stable to ~1275°K), aluminum oxide (M.P. ~2345°K), and titanium carbide (M.P. ~3143°K). Material strength decreases roughly linearly as the operating temperature increases, and thermal expansion must also be taken into account in mechanical designs, so operating temperatures are likely to be 50-80% of the temperatures listed above. There are materials with higher melting points, particularly the elements rhenium and tungsten and the refractory compounds hafnium carbide and tantalum carbide, but these elements are relatively rare. If the computers are operating at temperatures below ~1200°K, the thermal radiation from the radiators consists primarily of low energy infrared photons (< 0.5 eV) that few materials can harvest in ways that allow direct conversion to electricity. This implies that focusing the thermal energy (via mirrors) onto heat engines using Carnot-limited cycles (e.g. Rankine, Stirling, Ericsson, etc.) is likely to be the means of generating power in the outer layers of the MB.
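A short check on that photon-energy claim, using the Wien displacement relation (the ~2.82 k_B T peak-photon-energy factor is a standard blackbody result, not a figure from the text):

```python
# Peak photon energy of thermal radiation at the quoted radiator temperatures.
# E_peak ~ 2.82 * k_B * T is the standard blackbody (Wien) result.
K_B_EV = 8.617e-5   # Boltzmann constant in eV/K

def peak_photon_ev(temp_k):
    return 2.82 * K_B_EV * temp_k

print(f"{peak_photon_ev(1200):.2f} eV")  # ~0.29 eV, well under 0.5 eV
print(f"{peak_photon_ev(300):.3f} eV")   # cooler outer layers: ~0.073 eV
```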
Elemental availability must also be given consideration. Carbon, oxygen, magnesium, silicon, iron and aluminum are useful structural materials and are much more abundant than nickel, phosphorus, fluorine or tungsten. The more abundant materials (C, Al2O3, SiO2, MgO, Fe2O3) should be the bulk construction materials for MB layers, with the most abundant being used in the larger, outermost (cooler) shells. There is the possibility, over long time periods, of using breeder reactors to convert significant amounts of a less useful element (e.g. magnesium or iron) into a more useful element (e.g. tungsten or hafnium).
[Figure: A MB compute element. The surface facing the star is a solar array; the surface facing away from the star is a radiator. The hexagonal element in the center of the array is a nanocomputer. The dark blue portion circulates cooling fluid and the light blue portion is a high-pressure turbo-pump. The red bumps are vernier control nozzles for station keeping. The nanocomputer surfaces are 2-D communication arrays of light transmitters and receivers, composed of VCSELs (Vertical-Cavity Surface-Emitting Lasers) and CCDs respectively. These arrays provide high-bandwidth communications to adjacent compute elements (see Figure 2).]
The energy requirements for disassembly of asteroids and small planets are dominated by the chemical bond manipulation requirements. In this situation, the best approach is to utilize material in locations which have the highest solar energy flux to construct ever-expanding solar collectors. The critical determinant of the time required is the solar collector thickness. Current technologies allow construction of collectors (or mirrors) with masses of 1 kg/m^2. It is envisioned that collectors for solar sails may be as thin as 0.02 kg/m^2 (Potter, 1996), while Drexler (1992) postulates structures of 0.001 kg/m^2. The energy requirements for the disassembly of the larger planets, particularly Saturn and Jupiter, are dominated by the requirement of lifting the material out of the planet's gravity well. Even if the entire energy output of the sun were used to disassemble Jupiter, it would still take hundreds of years. Faster disassembly requires supplementing solar power with fusion energy derived from manufactured thermonuclear reactors. There are clearly tradeoffs between the amount of solar energy and/or minor planetary matter used for MB computations and the amount devoted to the construction of supplementary thermonuclear reactors and gas giant disassembly. Since the computational benefits derived from the disassembly of gas giants are marginal (relative to the huge benefit derived from dismantling even a single minor planet), civilizations may choose to dismantle the larger bodies at a relatively slow rate.
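A rough check of the "hundreds of years" claim, approximating Jupiter's gravitational binding energy with the uniform-density formula (3/5)GM^2/R (the true value is somewhat higher, lengthening the time):

```python
# Time to disassemble Jupiter using the sun's entire output, approximating
# binding energy with the uniform-sphere formula (3/5) * G * M^2 / R.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_JUPITER = 1.9e27     # kg
R_JUPITER = 7.0e7      # m
SOLAR_OUTPUT_W = 4e26  # W, the figure used earlier in the text

binding_energy_j = 0.6 * G * M_JUPITER**2 / R_JUPITER
years = binding_energy_j / SOLAR_OUTPUT_W / 3.15e7
print(f"binding energy ~{binding_energy_j:.1e} J, ~{years:.0f} years")
# ~2.1e36 J, ~160 years: centuries even at full solar power.
```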
The mass requirements for the solar collectors and CPUs around the Sun are small compared with the mass available. Only small fractions of Mercury or the Earth's moon would be required for their construction. Of some concern is whether specific elements required for CPU construction, such as carbon or sulfur, would be available in sufficient quantities. If this is not the case, then one can turn to the atmosphere of Venus or the asteroids (especially carbonaceous chondrites) for further material. The radiator material is of concern since it must have high emissivity. One candidate, likely to be available in high abundance, is iron oxide (hematite). It has a high melting temperature and is highly abundant among the inner planets and asteroids.
Construction times for immature MBs are short. Exponential growth [4] of nanoassemblers would provide sufficient numbers to disassemble and reassemble planets in weeks to months. If non-nanoscale automatons are required, the time scale may be years to decades. The construction of a small number of solar collectors near the sun could provide high concentrations of beamed power to any point in the solar system. The strongest limit on construction times is likely to be the time required to move power collectors into the proper positions around the sun, or the time required to ship materials from outer solar system locations to inner solar system locations should essential elements be in short supply. Conversely, if non-star-centered MBs are desirable (see Location), the limit is on moving sufficient mass from various solar systems to gravitationally balanced or minimally disrupted points between the energy sources.
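A minimal sketch of why exponential assembler replication makes such short times plausible (the 1 kg seed mass and 1-hour doubling time are illustrative assumptions, not figures from the text):

```python
import math

# Doublings needed for a replicating-assembler population to grow from a
# seed to planetary mass. Seed mass and doubling time are assumed values.
SEED_KG = 1.0
DOUBLING_HOURS = 1.0      # assumed replication time per generation
MERCURY_KG = 3.3e23       # mass processed, ~the planet Mercury

doublings = math.log2(MERCURY_KG / SEED_KG)
print(f"{doublings:.0f} doublings -> {doublings * DOUBLING_HOURS / 24:.1f} days")
# ~78 doublings -> ~3.3 days of pure replication; energy delivery, logistics
# and heat dissipation, not replication count, dominate the real schedule.
```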
While many authors have focused on the possibility of moving comets, moons or planets for construction or terraforming purposes, it should be understood that this is not required for MB construction. First, since the elemental requirements of MB should be known, it would be better to disassemble materials on moons or planets and ship only those molecules or atoms which are absolutely necessary. Second, moving a large mass to an alternate orbit requires expending a large amount of energy and mass, or waiting a long time, or both. Instead the available energy and matter should be used to construct mass-drivers which accelerate material towards positions where optimal energy harvesting and beaming stations may be built. Once operational, these stations return an increased amount of energy to the moons or planets on which mass harvesting operations are taking place. This allows an exponential growth in material breakdown, separation, and transport capacity. Eventually the point is reached where an optimal amount of solar energy is diverted to the transport of materials for MB construction.
It can be seen that there is a tradeoff between the amount of power available and the longevity of the power source. If you want to do a lot of thinking in a short time you can construct a MB around a 10-100 Msun star. Unfortunately this massive amount of power increases the cooling requirements significantly and requires such a large diameter for the MB radiators that the amount of construction material available in an individual solar system will likely be insufficient. This then requires importing material from other solar systems or dust clouds, creating the requirement that interstellar-distance material transit times be incorporated into the construction schedule. As the lifetime of these large stars is short, presumably one would have to plan the construction and begin the material transfers while the star is still forming. This requires transferring the materials against the very strong solar wind of a large mass star during its violent and high radiation output formation stage. Even after the construction of a mega-MB, the large diameter would imply that the transit time for messages between CPUs would be hours or days. Clearly a mega-MB would only be useful if one wanted to solve well-defined problems which required a great deal of thought in a short period of time. Since stars of more than 1.5 Msun end their lives by becoming supernovae, the MB would have to be disassembled and reassembled elsewhere, unless energy and matter were considered so plentiful that the incineration of the megamind is of no concern. These difficulties all argue against the construction of MB around large mass stars.
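A rough illustration of the light-lag claim, scaling radiator radius with the square root of luminosity to hold radiator temperature fixed (the L ~ M^3.5 mass-luminosity relation and the 5 AU baseline radius are assumptions for this sketch):

```python
# Signal transit time across a mega-MB around a 10 Msun star, scaling a
# 5 AU baseline shell by sqrt(L/L_sun) to hold radiator temperature fixed.
# Assumptions: L ~ M^3.5 mass-luminosity relation, 5 AU baseline radius.
AU_LIGHT_S = 499.0            # light travel time for 1 AU, in seconds

mass_ratio = 10.0             # 10 Msun star
luminosity_ratio = mass_ratio ** 3.5          # ~3200 L_sun
radius_au = 5.0 * luminosity_ratio ** 0.5     # ~280 AU radiator shell
hours = 2 * radius_au * AU_LIGHT_S / 3600
print(f"diameter crossing: ~{hours:.0f} hours")  # ~78 hours, i.e. days
```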
However, a non-star-centered mega-MB can be constructed and powered by either externally supplied power or internal thermonuclear reactors. This avoids the stellar radiation and lifetime problems, leaving only the inter-node travel time problem. If this problem is of no concern, then one might find non-star-centered mega-MB in regions where there is a high external energy flux (for power harvesting) and relatively long-lived stars. These are the characteristics of globular clusters (GC), which consist of hundreds of thousands to millions of stars in regions of space only a few tens to hundreds of light years in size. The external light flux in a GC is many times greater than that available from a single star.
Astronomers believe that GC are at least 8 billion years old with some estimates as high as 12 billion years. These ages are based on two observations:
The first possibility is that an older external MB civilization would send a robotic nano-probe with the necessary mining plans to a GC "cloud" very early in its formation. The seed then constructs the necessary mining equipment and commences harvesting metals before they have an opportunity to be incorporated into stars. The second possibility is that if star-lifting is possible [Criswell, 1985], then the MB (or their probes) may evolve in the GC or arrive from remote locations and are recycling the stars to harvest all of the available metals. It may even be possible that a MB based civilization (KT-II+) could engineer the formation of a GC by using mega-lasers or focused redirected solar winds (essentially large ion-beams) to direct many large mass interstellar dust clouds towards a common point in space. It is questionable whether the estimated age of the universe would allow sufficient time for such construction efforts however.
A final question remains: why would a MB civilization not harvest all of the energy available in a GC for the purpose of thinking? If optimal MB architectures are constrained by specific element abundances (e.g. carbon), then the best use of these elements is in the construction of computational machines or long-term memory storage, not power harvesting apparatus. Diverting any of the rare elements to the construction of thermonuclear breeder reactors may be a suboptimal use of resources. It may be much more efficient to allow gravity to serve as the container for stellar thermonuclear reactors and harvest the heavier elements as cheaply as possible. If the GC are breeding grounds for the production of elements required for low-power data-storage devices, then the power lost from the stars in GC radiating into space is of no concern. Over many billions of years, the GC is gradually converted from lighter elements into a massive long-term solid-state memory.
There will be ultimate limits on the MB architecture due to insufficient materials. Possible examples include:
For comparison purposes, the following table outlines the elemental composition of three nanostructures designed by Eric Drexler at the Institute for Molecular Manufacturing and a familiar complex of nanomachines.
This shows clearly the variability that nanomachine compositions may have and illustrates the difficulty we will have in determining what the elemental makeup of MB may be. However, it seems reasonable to say that whatever architectures are chosen, some elements will be in excess relative to others. While carbon, silicon, metals, semiconductor dopant atoms and elements with unusual properties (melting point, hardness, density, ferromagnetism, superconductivity, etc.) are likely to be fully utilized, there may be a significant excess of hydrogen, helium, neon, and perhaps even nitrogen and oxygen. Possible uses for these materials include the construction and maintenance of biological zoos [Ball, 1973], radiation shields and controlled fusion fuel sources.
Obviously substitutions can and will occur. MB will optimize their structures to make the most efficient use of the readily available elements. Without knowing the specific material requirements for various MB components it is impossible to predict at this time which elements will be the platinum, gold and silver of a MB culture. We can presume, however, that very young MB will use all available matter within a solar system and commence the study of where additional matter should be mined, or whether the local star(s) and local MB architecture(s) should be engineered for long-term elemental transmutation activities to create element ratios better suited for optimal MB architectures. Because element transmutation on massive scales requires large amounts of energy and has long time scales, interstellar mining in high density gas clouds is likely to be the initially more rapid and less expensive route to accumulating rare and valuable materials. In the long term, as local resources are exhausted, elemental transmutation will be the only reasonable solution for producing optimal element ratios. As an example, if a highly efficient method were discovered to convert 56Fe into 184W (consuming ~4.5×10^13 J/mol) and 10 Earth masses of iron (5.9×10^25 kg) were available, it would take approximately 4000 years using the entire power output of the sun to perform the transmutation.
There exists the possibility, pointed out by Kardashev [1997] and presumably by many others, that intelligent life may have existed for 6 billion years or more. If intelligent life has existed for that long and evolves into MB architectures as postulated here, then interstellar mining activities may have been occurring for billions of years on galactic scales. This has serious consequences for astrophysical theories about the origin and history of the universe, as they depend heavily on observed abundances of metals in stars and interstellar space and assume these ratios have not been adjusted by extraterrestrial intelligences optimizing their personal element ratios.
Even very sloppy single-layer MB architectures come relatively close to the most efficient computing structures possible given the physical laws of this universe. They will also be able to utilize most of the energy produced by a local star with only a small fraction of the locally available matter. If computational throughput is the major emphasis of MB (see thought limits), then it may be much more important to construct small, hot MBs rather than large, cold MBs (whose radiators would require more material than is locally available). Thus there may be no incentive to go on interstellar mining expeditions, and the astrophysicists may still be able to sleep nights.
Dyson [1979] demonstrated that in an open universe it is theoretically possible to live indefinitely, consuming less and less power as one thinks more slowly. Current results in astrophysics lean towards an open universe structure [REF TBD]. Though Dyson did not indicate exactly what the physical nature of immortal "beings" would be, it is clear that MB which have tremendously greater thought capacity than we do will have a much longer time than our sun has existed in which to consider and solve this problem.
This is clearly seen when imagining the management of three different planetary probes: one on the moon, one on Mars and one orbiting Saturn. The moon probe may be managed from Earth in real time. The Mars probe can be given directions between cups of coffee. The Saturn probe can be given directions only several times a day. If you expect the more distant probes to do useful work in a reasonable time you have to build into them increased amounts of intelligence and autonomy.
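The one-way light delays behind this intuition (the distances are rounded typical values; Mars and Saturn vary widely with orbital geometry):

```python
# One-way signal delays to the three probes, at typical distances.
# Distances are rounded typical values; Mars and Saturn vary widely.
C_M_S = 3.0e8
DISTANCES_M = {
    "Moon":   3.8e8,    # ~1.3 light-seconds
    "Mars":   2.25e11,  # average Earth-Mars distance
    "Saturn": 1.4e12,   # ~9.5 AU
}
for probe, d in DISTANCES_M.items():
    print(f"{probe:6s}: {d / C_M_S:6.1f} s one-way")
# Moon ~1.3 s, Mars ~750 s (~12.5 min), Saturn ~4700 s (~78 min): distant
# probes need autonomy because round-trip control takes minutes to hours.
```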
If the thoughts between CPUs in a MB are independent, then the brain can be made very large with little effect. If however the MB is attempting to solve a problem which requires all of its capacity, then it must think more slowly to maintain synchronization between CPUs as their inter-node distance increases. In theory, the MBs of a KT-III civilization orbiting in the galactic halo would be able to think collectively, but their "thought" time must be on the order of tens of thousands of years or more.

The two major problems facing MB are how to think more efficiently and how to think smaller.
Thinking more "efficiently" means solving a problem with less heat generation. If the thought engines generate less heat, they can be placed closer together and can therefore solve a problem more quickly. It might be useful, for very complex problems, to devote a significant amount of thought and prototyping to the production of thought engines which are optimal for a specific problem.
McKendree [1997] discusses the possibility of nanotechnology-based engineering being able to "surge" the production of various components necessary for minimal solution times for problems which are well-defined from a computational standpoint. Using these methods, all of the CPUs in the MB would be reconstructed for a specific problem, the problem would be "thought about", and after a solution is produced the process would be repeated for other problems. Current FPGA (field programmable gate array) products from manufacturers such as Xilinx and research in configurable computing (Villasenor & Mangione-Smith, 1997) are the foundations for these MB computational methods. Alternately, CPU groups or complete MBs may have architectures designed for solving specific types of problems, e.g. galactic stellar motion computations as are now done by the GRAPE computers in Japan.
Thinking "smaller" means developing new architectures which move through the macro-atomic structural level to the sub-atomic structural level. One can begin to see hints of possible approaches in this arena in single-electron devices, optical computing and quantum computing. Compute engines built using these methods are "faster", in that more computation is done per time interval, but the limits imposed by heat removal and inter-compute-engine communication time still constrain thought capacity. These approaches may provide several orders of magnitude (10^2-10^4) improvement over macro-atomic scale MB, but increases beyond this will either be impossible or will involve magical physics that we do not currently understand well.
If it is possible for MBs to harvest significant amounts of fusionable mass (H, He, etc.) from either stellar lifting or interstellar gas cloud mining, then the construction of migrating MBs is possible. These MBs may be constructed as solid spheres and may use the harvested elements in large numbers of fusion reactors to generate power. [Question: Is it possible for a large MB with a solid shell to retain a large mass of H/He as an internal atmosphere as a potential fuel source (or will the H/He collapse into a gas planet)?] Structures such as this could be found orbiting around the galaxy in the galactic halo.
If we assume that even a single civilization makes it past the barriers to a KT-II level, then we may assume that within a few million years it may take the galaxy to a KT-III level [Newman, 1981 & others]. The path to a KT-III level may be either a single KT-II civilization colonizing the galaxy, or multiple KT-II/KT-II+ civilizations developing in local regions over a time scale of millions to billions of years, with a KT-III civilization gradually emerging. Which path is taken remains a matter of much discussion beyond the scope of this document (it requires a significant understanding of the motivations and goals of a KT-II level MB). Our galaxy is old enough for either of these situations to have occurred.
It is useful to examine some of the unexplained or poorly explained observations in astronomy and astrophysics that could bear witness to many galaxies being at a KT-III level.
Criswell [1985] defined the concept of "stellar husbandry", which consists of removing the atmosphere of a star ("star lifting") and gradually returning the stored materials which are capable of undergoing fusion reactions, allowing a significant extension of the lifespan of the star, by at least 1000 times (to ~10^14 years). This activity also provides an extensive source of materials for the construction of larger (and cooler) MBs. If possible, this activity would take tens to hundreds of millions of years. Since star lifting would eliminate many short-term material resource constraints as well as provide a greatly extended lifespan for the MB, it would likely be an important goal.
Combining these perspectives provides a reasonable concept of KT-II evolution. Initial MB construction utilizes materials from asteroids or planets with the lowest gravity in closest proximity to the star. Construction is rapid (a few years), may be inefficient in its mass utilization and produces a hot (~500-1500°K) MB relatively near the star. As more material becomes available, larger planets in more distant orbits are dismantled and the MB shell expands or additional layers are constructed which are cooler (~70-300°K). Finally, if star lifting activities are undertaken and large quantities of metals become available, the MB enters its final stages with both a large size (5 AU radius) and cool temperatures (< 30°K).
The ultimate fate of MBs is unclear. A tradeoff must be made between active thought and information storage. Material returned to the star (or consumed in thermonuclear reactors) to enable active computation cannot be utilized in information storage. A means of utilizing all of the potential energy available and gradually converting most of the mass to iron may be developed. The iron could then be arranged, perhaps utilizing other required elements, in the form of a massive static information store. The last energy available could be utilized in accelerating these information stores in the direction of untapped energy sources where they could regenerate new MBs.
If one supposes that the transition period between KT-I civilizations and KT-II civilizations is short (thousands of years) relative to the lifetime of a MB (billions of years), then it makes little sense for MBs to concern themselves with creatures which are much, much lower than insects. Perhaps they may take an interest once a civilization has progressed to the KT-II (MB) level, as one then has the equivalent of a "child" which may be rapidly educated. The mass and energy resources available to MB are so large that they may observe us quite closely for a very long time from a large distance, waiting to see if we will make the transition to a MB level.
It seems silly for them to interact with us at our current level. More likely, possible future outcomes of pre-KT-I civilizations like our own have been computed in some detail (it only takes seconds to compute thousands of thousand-year scenarios for us). We should not feel too bad, however. A single MB has the same problem relative to a KT-III civilization which we have with them. KT-III civilizations made up of 10^11 or more MBs would think on a radically different time scale than individual MBs. Since it is likely that the MBs of a KT-III civilization would be separated by light years, the propagation delays between them become a significant problem. What does one think about when you yourself can compute an answer to most questions you might transmit before the answer can be received?
Are there galactic MB Oracles which have utilized their design and simulation capacities, and mass transmutation or star lifting activities, to construct optimal architectures for solving specific types of problems? The travel time to ask a question and receive an answer from such an Oracle may be 10^4-10^5 years. Nanotechnology enables surge construction of optimal problem attack architectures [McKendree, 1997], so questions must involve problems which cannot be solved with a locally built optimal architecture in the time required to send a question and receive an answer from an Oracle. Presumably, an individual MB would perform a return-on-investment analysis to determine whether it is more efficient to ask an Oracle or to use local resources and reconstruction activities to produce an optimal architecture for thinking about the problem and producing a local solution. Obviously, utilizing one's resources to attack one problem means that those same resources cannot be used to solve another problem. There would be significant cost-benefit tradeoffs involved in asking the Oracle(s) or consuming local resources.
A single MB may use a fraction of 1% of its available mass to construct 100 billion telescopes with mirror diameters equal to that of the moon. These telescopes would fill a planar region roughly the size of Jupiter's orbit. Using this number of telescopes, a MB should be able to monitor most of the solar systems in the galaxy. If we assume some reasonable fraction of the galactic dark matter constitutes a KT-III civilization with billions of MB, then we may also assume they can monitor to a significant degree many activities occurring in the nearest old galaxies within Kardashev's (1997) "civilization window". Major activities of MBs may be the monitoring of developing local KT-I civilizations and the nearest remote KT-III civilizations, and contributing this information to the galactic gossip.
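A rough feasibility check on those numbers (the lunar diameter is a known value; the 10^-3 kg/m^2 mirror areal density reuses Drexler's collector figure from the Solar Power section and is an assumption here):

```python
import math

# Aggregate area and mass of 1e11 moon-diameter telescope mirrors, using
# Drexler's 1e-3 kg/m^2 areal density as an assumed mirror mass figure.
N_TELESCOPES = 1e11
MOON_DIAMETER_M = 3.5e6
MIRROR_KG_PER_M2 = 1e-3          # assumed, from the collector figure above
JUPITER_ORBIT_M = 7.8e11         # orbital radius of Jupiter

mirror_area = N_TELESCOPES * math.pi * (MOON_DIAMETER_M / 2) ** 2
disk_area = math.pi * JUPITER_ORBIT_M ** 2
print(f"total mirror area: {mirror_area:.1e} m^2")                    # ~9.6e23 m^2
print(f"fraction of Jupiter-orbit disk: {mirror_area / disk_area:.0%}")  # ~50%
print(f"total mirror mass: {mirror_area * MIRROR_KG_PER_M2:.0e} kg")  # ~1e21 kg
```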
Given the possible existence of a galactic MB/KT-III civilization for 3-6 billion years, there should be a large directory of problems and answers computed and stored by MBs from preceding times. There should be a large amount of information about galactic history (stellar births & deaths, civilization histories, lifeform blueprints, etc.). The galactic knowledge base is potentially huge, but it is plagued by the problems of long latency times for information retrieval, as well as bandwidth limitations if the volume of information is large. While waiting for the retrieval of answers to questions, MBs may devote their time to devising complex problems which have not been solved and can only be solved in millions of years by a dedicated MB or closely linked MB cluster. It is difficult to imagine what these problems might be, since even one MB has sufficient computational capacity to easily solve problems far beyond our current capabilities.
The huge difference between the capacities and intelligence of a MB and our feeble human minds provides an explanation for why we have no contact with "them". It also implies that CETI (SETI) of the radiowave search type is likely to fail. The pre-KT-II stages of technological civilizations are likely to be so short (e.g. hundreds of years) that there will be few of them in our galaxy. Optical SETI searches might succeed if the path of the Earth or Solar System were to transit a direct communication path between two communicating MB, but interstellar distances are so great that the probability of such a transit seems small. MB have little reason to waste time or energy transmitting signals directly to us. Searches that are likely to yield signs of MB include gravitational microlensing searches, near and far infrared searches and occultation astronomy.
Self-preservation and self-structural optimization are the only goals or activities that we may easily imagine MB pursuing. Due to speed-of-light limits, growth in MB physical size or interstellar colonization yields diminishing thought returns. Instead they are likely to focus on becoming smaller, faster, and more efficient. Their activities, based on technologies that originated with Richard Feynman's observation "There is plenty of room at the bottom", will transcend his observation because they understand that "There is more room at the bottom".
This table is based on the concept of the high heat capacity of a phase-change coolant, in which heat is absorbed by solid ice particles circulating in a fluid coolant [Nanosystems, Section 11.5.2]. The melting temperature of the "ice" should be between the melting point and the boiling point of the coolant fluid. The case of Ne in H2 does not quite meet this criterion, but may be possible with appropriate coolant pressurization.
Green rows (elements with an atomic number lower than iron) are those elements that may be produced via fusion of lighter elements with the possibility of a net gain of energy. Pink rows (*) (elements with an atomic number greater than iron) can only be synthesized with a net loss of energy. Abundances are for our solar system; other solar systems may vary significantly.