
By the time that report was published, in 2000, Sellers was in training as a NASA

astronaut, so as to observe the biosphere from the International Space Station.

The systematic monitoring of the land’s vegetation by unmanned spacecraft

already spanned two decades. Tucker collaborated with a team at Boston

University that quarried the vast amounts of data accumulated daily over that

period, to investigate long-term changes.

Between 1981 and 1999 the plainest trend in vegetation seen from space was

towards longer growing seasons and more vigorous growth. The most dramatic

effects were in Eurasia at latitudes above 40 degrees north, meaning roughly the

line from Naples to Beijing. The vegetation increased not in area, but in density.

The greening was most evident in the forests and woodland that cover a broad

swath of land at mid-latitudes from central Europe and across the entire width

of Russia to the Far East. On average, the first leaves of spring were appearing

a week earlier at the end of the period, and autumn was delayed by ten days.

At the same mid-latitudes in North America, the satellite data showed extra

growth in New England’s forests and in the grasslands of the upper Midwest.

Otherwise the changes were scrappier than in Eurasia, and the extension of

the growing season was somewhat shorter.

‘We saw that year to year changes in growth and duration of the growing

season of northern vegetation are tightly linked to year to year changes in

temperature,’ said Liming Zhou of Boston.

The colour of the sea

Life on land is about twice as productive as life in the sea, hectare for hectare,

but the oceans are about twice as big. Being useful only on terra firma, the

satellite vegetation index therefore covered barely half of the biosphere. For the

rest, you have to gauge from space the productivity of the ‘grass’ of the sea, the

microscopic green algae of the phytoplankton, drifting in the surface waters lit

by the Sun.

Research ships can sample the algae only locally and occasionally, so satellite

measurements were needed even more badly than on land. Estimates of ocean

productivity differed not by percentage points but by a factor of six from

the lowest to the highest. The infrared glow of plants on land is not seen in the

marine plants that float beneath the sea surface. Instead the space scientists had

to look at the visible colour of the sea.
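
On land, the index contrasts the red light that leaves absorb with the infrared they strongly reflect. As a minimal sketch (the normalised-difference form and the sample reflectances are my assumptions for illustration, not details given in the text):

    def vegetation_index(red: float, near_infrared: float) -> float:
        """Normalised difference of the two bands: healthy leaves absorb
        red light for photosynthesis but glow brightly in the infrared,
        so dense vegetation pushes the index towards +1."""
        return (near_infrared - red) / (near_infrared + red)

    # Illustrative reflectances (invented values):
    print(vegetation_index(red=0.08, near_infrared=0.50))  # dense forest: ~0.72
    print(vegetation_index(red=0.30, near_infrared=0.35))  # sparse cover: ~0.08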

‘In flying from Plymouth to the western mackerel grounds we passed over a

sharp line separating the green water of the Channel from the deep blue of the

Atlantic,’ Alister Hardy of Oxford recorded in 1956. With the benefit of an

aircraft’s altitude, this noted marine biologist saw phenomena known to

fishermen down the ages—namely that the most fertile water is green and

murky, and that the transition can be sudden. The boundary near the mouth of

the English Channel marks the onset of fertilization by nutrients brought to the

surface by the churning action of tidal currents.

In 1978 the US satellite Nimbus-7 went into orbit carrying a variety of

experimental instruments for remote sensing of the Earth. Among them was a

Coastal Zone Color Scanner, which looked for the green chlorophyll of marine

plants. Despite its name, its measurements in the open ocean were more reliable

than inshore, where the waters are literally muddied.

In eight years of intermittent operation, the Color Scanner gave wonderful

impressions of springtime blooms in the North Atlantic and North Pacific, like

those seen on land by the vegetation index. New images for the textbooks

showed high fertility in regions where nutrient-rich water wells up to the surface

from below. The Equator turned out to be no imaginary line but a plainly

visible green belt of chlorophyll separating the bluer, much less fertile regions in

the tropical oceans to the north and south.

But, for would-be bookkeepers of the biosphere, the Nimbus-7 observations

were frustratingly unsystematic and incomplete. A fuller accounting began with

the launch by NASA in 1997 of OrbView-2, the first satellite capable of gauging

the entire biosphere, by both sea and land. An oddly named instrument,

SeaWiFS, combined the red and infrared sensors needed for the vegetation index

on land with an improved sea-colour scanner.

SeaWiFS surveyed the whole world every two days. After three years the

scientists were ready to announce the net primary productivity of all the world’s

plants, marine and terrestrial, deduced from the satellite data. The answer was

111 to 117 billion tonnes of carbon downloaded from the air and fixed by

photosynthesis, in the course of a year, after subtracting the carbon that the

plants’ respiration returned promptly to the air.

The satellite’s launch coincided with a period of strong warming in the Eastern

Pacific, in the El Niño event of 1997–98. During an El Niño, the tropical ocean is

depleted in mineral nutrients needed for life, hence the lower global figure in

the SeaWiFS results. The higher figure was from the subsequent period of

Pacific cooling: a La Niña. Between 1997 and 2000, ocean productivity increased

by almost ten per cent, from 54 to 59 billion tonnes per year. In the same period

the total productivity on land increased only slightly, from 57 to 58 billion

tonnes of fixed carbon, although the El Niño to La Niña transition brought

more drastic changes from region to region.
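
The quoted figures tie together: the global range of 111 to 117 billion tonnes is the sum of the ocean and land totals at either end of the El Niño to La Niña swing, and the rates of increase follow directly. A quick arithmetic check, using only the numbers above:

    # Billion tonnes of fixed carbon per year, from the SeaWiFS results
    ocean_1997, ocean_2000 = 54, 59
    land_1997, land_2000 = 57, 58

    print(ocean_1997 + land_1997, 'to', ocean_2000 + land_2000)   # 111 to 117
    print(f'ocean +{(ocean_2000 - ocean_1997) / ocean_1997:.1%}')  # +9.3%, 'almost ten per cent'
    print(f'land  +{(land_2000 - land_1997) / land_1997:.1%}')     # +1.8%, 'only slightly'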

North–south differences were already known from space observations of

vegetation ashore. The sheer extent of the northern lands explains the strong

seasonal drawdown of carbon dioxide from the air by plants growing there in the

northern summer. But the SeaWiFS results showed that summer productivity

66

biosphere from space

is higher also in the northern Atlantic and Pacific than in the more spacious

Southern Ocean. The blooms are more intense.

‘The summer blooms in the southern hemisphere are limited by light and by a

chronic shortage of essential nutrients, especially iron,’ noted Michael Behrenfeld

of NASA’s Laboratory of Hydrospheric Sciences, lead author of the first report

on the SeaWiFS data. ‘If the northern and southern hemispheres exhibited

equivalent seasonal blooms, ocean productivity would be higher by some 9

billion tonnes of carbon.’

In that case, ocean productivity would exceed the land’s. Although uncertainties

remained about the calculations for both parts of the biosphere, there was no

denying the remarkable similarity in plant growth by land and by sea. Previous

estimates of ocean productivity had been too low.

New slants to come

The study of the biosphere as a whole is in its infancy. Before the Space Age it

could not seriously begin, because you would have needed huge armies and

navies of scientists, on the ground and at sea, to make the observations. By the

early 21st century the political focus had shifted from Soviet grain production to

the role of living systems in mopping up man-made emissions of carbon dioxide.

The possible uses of augmented forests or fertilization of the oceans, for

controlling carbon dioxide levels, were already of interest to treaty negotiators.

In parallel with the developments in space observations of the biosphere,

ecologists have developed computer models of plant productivity. Discrepancies

between their results show how far there is to go. For example, in a study reported

in 2000, different calculations of how much carbon dioxide was taken in by plants

and soil in the south-east USA, between 1980 and 1993, disagreed not by some

percentage points but by a factor of more than three. Such uncertainties

undermine the attempts to make global ecology a more exact science.

Improvements will come from better data, especially from observations from

space of the year-to-year variability in plant growth by land and sea. These will

help to pin down the effects of different factors and events. The lucky coincidence

of the SeaWiFS launch and a dramatic El Niño event was a case in point.

A growing number of satellites in orbit measure the vegetation index and the sea

colour. Future space missions will distinguish many more wavelengths of visible

and infrared light, and use slanting angles of view to amplify the data. The space

scientists won’t leave unfinished the job they have started well.

See also Carbon cycle. For views on the Earth’s vegetation at ground level, see

Biodiversity. For components of the biosphere hidden from cameras in space, see

Extremophiles.

Bits and qubits

On a visit to Bell Labs in New Jersey, if you met a man coming down the

corridor on a unicycle it would probably be Claude Shannon, especially if he

were juggling at the same time. According to his wife: ‘He had been a gymnast

in college, so he was better at it than you might have thought.’ His after-hours

capers were tolerated because he had come up single-handedly with two of the

most consequential ideas in the history of technology, each of them roughly

comparable to inventing the wheel on which he was performing.

In 1937, when a 21-year-old graduate student of electrical engineering at the

Massachusetts Institute of Technology, Shannon saw in simple relays—electric

switches under electric control—the potential to make logical decisions. Suppose

two relays represent propositions X and Y. If the switch is open, the proposition

is false, and if connected it is true.

Put the relays in a line, in series, then a current can flow only if X AND Y are

true. But branch the circuit so that the switches operate in parallel, then if either

X OR Y is true a current flows. And as Shannon pointed out in his eventual

dissertation, the false/true dichotomy could equally well represent the digits

0 or 1. He wrote: ‘It is possible to perform complex mathematical operations by

means of relay circuits.’
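
Shannon’s relay logic is easy to mimic in software. A minimal sketch (Python booleans stand in for the switches; the function names are mine):

    def series(x: bool, y: bool) -> bool:
        """Relays in a line: current flows only if both switches are closed (AND)."""
        return x and y

    def parallel(x: bool, y: bool) -> bool:
        """Relays on branched wires: current flows if either switch is closed (OR)."""
        return x or y

    # False/True doubles as the digits 0 and 1, as Shannon pointed out:
    for x in (False, True):
        for y in (False, True):
            print(int(x), int(y), '->', int(series(x, y)), int(parallel(x, y)))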

In the history of computers, Alan Turing in England and John von Neumann in

the USA are rightly famous for their notions about programmable machinery,

in the 1930s and 1940s when code-breaking and other military needs gave an

urgency to innovation. Electric relays soon made way for thermionic valves in

early computers, and then for transistors fashioned from semiconductors. The

fact remains that the boy Shannon’s AND and OR gates are still the principle

of the design and operation of the microchips of every digital computer, whilst

the binary arithmetic of 0s and 1s now runs the working world.

Shannon’s second gigantic contribution to modern life came at Bell Labs. By

1943 he realized that his 0s and 1s could represent information of kinds going far

wider than logic or arithmetic. Many questions like ‘Do you love me?’ invite a

simple yes or no answer, which might be communicated very economically by a

single 1 or 0, a binary digit. Shannon called it a bit for short. More complicated

communications—strings of text for example—require more bits. Just how many

is easily calculable, and this is a measure of the information content of a

message.
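
The calculation is straightforward if, as a simplifying assumption of mine, every symbol in an alphabet of N characters is equally likely: each one then carries log2(N) bits, so a message of L symbols needs L × log2(N) bits.

    import math

    def message_bits(text: str, alphabet_size: int = 27) -> float:
        """Bits needed for a message drawn from equally likely symbols
        (a toy alphabet of 26 letters plus the space)."""
        return len(text) * math.log2(alphabet_size)

    print(message_bits('do you love me'))  # ~66.6 bits for the question...
    # ...against the single bit that suffices for the yes/no answer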

So you have a message of so many bits. How quickly can you send it? That

depends on how many bits per second the channel of communication can

handle. Thus you can rate the capacity of the channel using the same binary

units, and the reckoning of messages and communication power can apply to

any kind of system: printed words in a telegraph, voices on the radio, pictures

on television, or even a carrier pigeon, limited by the weight it can carry and the

sharpness of vision of the reader of the message.

In an electromagnetic channel, the theoretical capacity in bits per second

depends on the frequency range. Radio with music requires tens of kilocycles

per second, whilst television pictures need megacycles. Real communications

channels fall short of their theoretical capacity because of interference from

outside sources and internally generated noise, but you can improve the fidelity

of transmission by widening the bandwidth or sending the message more slowly.
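
Shannon made that trade-off exact: the capacity of a noisy channel is C = B log2(1 + S/N) bits per second, rising in proportion to the bandwidth B but only logarithmically with the signal-to-noise ratio. A sketch with illustrative numbers (the telephone-line values are my assumption):

    import math

    def channel_capacity(bandwidth_hz: float, signal_to_noise: float) -> float:
        """Shannon's theoretical limit in bits per second for a noisy channel."""
        return bandwidth_hz * math.log2(1 + signal_to_noise)

    # A telephone-grade channel: ~3 kHz of bandwidth, signal 1000x the noise
    print(channel_capacity(3_000, 1_000))  # ~29,900 bits per second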

Shannon went on polishing his ideas quietly, not discussing them even with close

colleagues. He was having fun, but he found writing up the work for publication

quite painful. Not until 1948 did his classic paper called ‘A mathematical theory

of communication’ appear. It won instant acceptance. Shannon had invented his

own branch of science and was treading on nobody else’s toes. His propositions,

though wholly new and surprising, were quickly digestible and then almost

self-evident.

The most sensational result from Shannon’s mathematics was that near-perfect

communication is possible in principle if you convert the information to be sent

into digital form. For example, the light wanted in a picture element of an

image can be specified, not as a relative intensity, but as a number, expressed in

binary digits. Instead of being roughly right, as expected in an analogue system,

the intensity will be precisely right.
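
For instance, a picture element’s brightness can be rounded to the nearest of 256 levels and sent as eight binary digits, so every receiver recovers the identical number. A sketch (the 8-bit depth is my choice for illustration):

    def digitize(intensity: float, bits: int = 8) -> str:
        """Encode a relative intensity between 0.0 and 1.0 as a
        fixed-width binary number."""
        levels = 2 ** bits
        level = min(int(intensity * levels), levels - 1)
        return format(level, f'0{bits}b')

    print(digitize(0.7))  # '10110011': level 179 of 256, exact at the far end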

Scientific and military systems were the first to make intensive use of Shannon’s

principles. The general public became increasingly aware of the digital world

through personal computers and digitized music on compact discs. By the end

of the 20th century, digital radio, television and video recording were becoming

widespread.

Further spectacular innovations began with the marriage of computing and

digital communication, to bring all the world’s information resources into your

office or living room. From a requirement for survivable communications, in

the aftermath of a nuclear war, came the Internet, developed as Arpanet by the

US Advanced Research Projects Agency. It provided a means of finding routes

through a shattered telephone system where many links were unavailable. That

was the origin of emails. By the mid-1980s, many computer scientists and

physicists were using the net, and in 1990 responsibility for the system passed

from the military to the US National Science Foundation.

Meanwhile at CERN, Europe’s particle physics lab in Geneva, the growing

complexity of experiments brought a need for advanced digital links between

scientists in widely scattered labs. It prompted Tim Berners-Lee and his colleagues

to invent the World Wide Web in 1990, and within a few years everyone was

joining in. The World Wide Web’s impact on human affairs was comparable with

the invention of steam trains in the 19th century, but more sudden.

Just because the systems of modern information technology are so familiar,

it can be hard to grasp how innovative and fundamental Shannon’s ideas were.

A couple of scientific pointers may help. In relation to the laws of heat, his

quantifiable information is the exact opposite of entropy, which means the

degradation of high forms of energy into mere heat and disorder. Life itself is

a non-stop battle of hereditary information against deadly disorder, and Mother

Nature went digital long ago. Shannon’s mathematical theory of communication

applies to the genetic code and to the on–off binary pulses operating in your

brain as you read these words.
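
The opposition can be stated exactly: Shannon measured a source’s information as H = −Σ p log2 p, the same mathematical form that thermodynamics uses for entropy. A minimal sketch:

    import math

    def shannon_entropy(probabilities: list[float]) -> float:
        """Average bits per symbol from a source with the given
        symbol probabilities."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(shannon_entropy([0.5, 0.5]))  # 1.00 bit: a fair yes/no
    print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: a nearly foregone answer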

Towards quantum computers

For a second revolution in information technology, the experts looked to the

spooky behaviour of electrons and atoms known in quantum theory. By 2002

physicists in Australia had made the equivalent of Shannon’s relays of 65 years

earlier, but now the switches offered not binary bits, but qubits, pronounced

cue-bits. They raised hopes that the first quantum computers might be

operating before the first decade of the new century was out.

Whereas electric relays, and their electronic successors in microchips, provide

the simple on/off, true/false, 1/0 options expressed as bits of information, the

qubits in the corresponding quantum devices will have many possible states. In

theory it is possible to make an extremely fast computer by exploiting

ambiguities that are present all the time in quantum theory.

If you’re not sure whether an electron in an atom is in one possible energy state,

or in the next higher energy state permitted by the physical laws, then it can be

considered to be in both states at once. In computing terms it represents both 1

and 0 at the same time. Two such ambiguities give you four numbers, 00, 01, 10

and 11, which are the binary-number equivalents of good old 0, 1, 2 and 3.

Three ambiguities give eight numbers, and so on, until with 50 you have a

million billion numbers represented simultaneously in the quantum computer.

In theory the machine can compute with all of them at the same time.
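
The doubling is a one-line check, including the ‘million billion’ figure for 50 qubits:

    for n in (1, 2, 3, 50):
        print(n, 'qubits span', 2 ** n, 'values at once')
    # 50 qubits: 1,125,899,906,842,624, about a million billion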

Such quantum spookiness spooks the spooks. The world’s secret services are still

engaged in the centuries-old contest between code-makers and code-breakers.


There are new concepts called quantum one-time pads for a supposedly

unbreakable cipher, using existing technology, and future quantum computers

are expected to be able to crack many of the best codes of pre-existing kinds.

Who knows what developments may be going on behind the scenes, like the

secret work on digital computing by Alan Turing at Bletchley Park in England

during the Second World War?

A widespread opinion at the start of the 21st century held that quantum

computing was beyond practical reach for the time being. It was seen as

requiring exquisite delicacy in construction and operation, with the ever-present

danger that the slightest external interference, or a premature leakage

of information from the system, could cause the whole multiply parallel

computation to cave in, like a mistimed soufflé.

Colorado and Austria were the settings for early steps towards a practical

quantum computer, announced in 2003. At the US National Institute of

Standards and Technology, finely tuned laser beams played on a pair of

beryllium ions (charged atoms) trapped in a vacuum. If both ions were spinning

the same way, the laser beams had no effect, but if they had contrary spins the

beams made them prance briefly away from each other and change their spins

according to subtle but predictable quantum rules.

Simultaneously a team at Universität Innsbruck reported the use of a pair of

calcium ions. In this case, laser beams controlled the ions individually. All possible

combinations of parallel and anti-parallel spins could be created and read out.

Commenting on the progress, Andrew Steane at Oxford’s Centre for Quantum

Computation declared, ‘The experiments . . . represent, for me, the first hint that

there is a serious possibility of making logic gates, precise to one part in a

thousand or even ten thousand, that could be scaled up to many qubits.’
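
The conditional flip described in those experiments is the essence of a two-qubit logic gate. As a toy illustration of the arithmetic only, not a model of the ion-trap physics, here is a controlled-NOT applied to a two-qubit state vector:

    import numpy as np

    # Amplitudes for the basis states |00>, |01>, |10>, |11>
    state = np.array([0, 0, 1, 0], dtype=complex)  # the definite state |10>

    # Controlled-NOT: flip the second qubit only when the first is 1
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    print(np.abs(cnot @ state) ** 2)  # all probability lands on |11>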

Quantum computing is not just a new technology. For David Deutsch at

Oxford, who developed the seminal concept of a quantum computer from

1977 onwards, it opened a road for exploring the nature of the Universe in its

quantum aspects. In particular it illustrated the theory of the quantum

multiverse, also promulgated by Deutsch.

The many ambiguities of quantum mechanics represent, in his theory, multiple

universes like our own that co-exist in parallel with what we know and

experience. Deutsch’s idea should not be confused with the multiple universes

offered in some Big Bang theories. Those would have space and time separate

from our own, whilst the universes of the quantum multiverse supposedly

operate within our own cosmic framework, and provide a complexity and

richness unseen by mortal eyes.

‘In quantum computation the complexity of what is happening is very high so

that, philosophically, it becomes an unavoidable obligation to try to explain it,’ said Deutsch.
