Monday, December 3, 2012

Biography Of Albert Einstein

Biography Of Albert Einstein

Albert Einstein (German: [ˈalbɐt ˈaɪnʃtaɪn]; 14 March 1879 – 18 April 1955) was a German-born theoretical physicist who developed the general theory of relativity, effecting a revolution in physics. For this achievement, Einstein is often regarded as the father of modern physics and the most influential physicist of the 20th century. While best known for his mass–energy equivalence formula E = mc² (which has been dubbed "the world's most famous equation"), he received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect". The latter was pivotal in establishing quantum theory.
Near the beginning of his career, Einstein thought that Newtonian mechanics was no longer enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field. This led to the development of his special theory of relativity. He realized, however, that the principle of relativity could also be extended to gravitational fields, and with his subsequent theory of gravitation in 1916, he published a paper on the general theory of relativity. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, Einstein applied the general theory of relativity to model the structure of the universe as a whole.
He was visiting the United States when Adolf Hitler came to power in 1933, and did not go back to Germany, where he had been a professor at the Berlin Academy of Sciences. He settled in the U.S., becoming a citizen in 1940. On the eve of World War II, he helped alert President Franklin D. Roosevelt that Germany might be developing an atomic weapon, and recommended that the U.S. begin similar research; this eventually led to what would become the Manhattan Project. Einstein supported defending the Allied forces but largely denounced using the new discovery of nuclear fission as a weapon. Later, with the British philosopher Bertrand Russell, Einstein signed the Russell–Einstein Manifesto, which highlighted the danger of nuclear weapons. Einstein was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955.
Einstein published more than 300 scientific papers along with over 150 non-scientific works. His great intellectual achievements and originality have made the word "Einstein" synonymous with genius.

Albert Einstein

Albert Einstein in 1921
Born 14 March 1879, Ulm, Kingdom of Württemberg, German Empire
Died 18 April 1955 (aged 76), Princeton, New Jersey, United States
Residence Germany, Italy, Switzerland, Austria, Belgium, United Kingdom, United States
Fields Physics
Doctoral advisor Alfred Kleiner
Other academic advisors Heinrich Friedrich Weber
Spouses Mileva Marić (1903–1919); Elsa Löwenthal (1919–1936)

Types Of Seismic Waves

Types Of Seismic Waves


Seismic waves are elastic waves that propagate in solid or fluid materials. They can be divided into body waves that travel through the interior of the materials; surface waves that travel along surfaces or interfaces between materials; and normal modes, a form of standing wave.

Body waves

There are two types of body waves: P-waves and S-waves. Pressure waves or primary waves (P-waves) are longitudinal waves that involve compression and rarefaction (expansion) in the direction the wave is traveling. P-waves are the fastest waves in solids and are therefore the first waves to appear on a seismogram. S-waves, also called shear or secondary waves, are transverse waves that involve motion perpendicular to the direction of propagation. S-waves appear later than P-waves on a seismogram. Fluids cannot support this perpendicular motion, or shear, so S-waves travel only in solids. P-waves travel in both solids and fluids.[1]
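As a quick illustration of why P-waves always outrun S-waves, the short Python sketch below evaluates the standard elastic relations v_p = sqrt((K + 4/3·μ)/ρ) and v_s = sqrt(μ/ρ). The material constants for "rock" and "water" are representative assumptions chosen for illustration, not values taken from this text.

```python
import math

def body_wave_velocities(bulk_modulus, shear_modulus, density):
    """Return (v_p, v_s) in m/s from the standard elastic relations.

    v_p = sqrt((K + 4/3 * mu) / rho)   -- compressional (P) wave
    v_s = sqrt(mu / rho)               -- shear (S) wave
    """
    v_p = math.sqrt((bulk_modulus + 4.0 / 3.0 * shear_modulus) / density)
    v_s = math.sqrt(shear_modulus / density)
    return v_p, v_s

# Illustrative values, roughly representative of crustal rock (assumption).
rock = {"bulk_modulus": 50e9, "shear_modulus": 30e9, "density": 2700.0}
v_p, v_s = body_wave_velocities(**rock)
print(f"rock:  v_p ~ {v_p:.0f} m/s, v_s ~ {v_s:.0f} m/s")   # P is always faster

# In a fluid the shear modulus is zero, so v_s = 0: S-waves cannot propagate.
water = {"bulk_modulus": 2.2e9, "shear_modulus": 0.0, "density": 1000.0}
v_p, v_s = body_wave_velocities(**water)
print(f"water: v_p ~ {v_p:.0f} m/s, v_s = {v_s:.0f} m/s")
```

Because v_p depends on both the bulk and shear moduli while v_s depends only on the shear modulus, v_p always exceeds v_s in a solid, which is why the P-wave arrives first on a seismogram.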

Surface waves

The two main kinds of surface wave are the Rayleigh wave, which has some compressional motion, and the Love wave, which does not. Such waves can be theoretically explained in terms of interacting P- and/or S-waves. Surface waves travel more slowly than P-waves and S-waves, but because they are guided by the surface of the Earth (and their energy is thus trapped near the Earth's surface) they can be much larger in amplitude than body waves, and can be the largest signals seen in earthquake seismograms. They are particularly strongly excited when their source is close to the surface of the Earth, as in a shallow earthquake or explosion.[1]

Normal modes

The above waves are traveling waves. Large earthquakes can also make the Earth "ring" like a bell. This ringing is a mixture of normal modes with discrete frequencies and periods of an hour or longer. Motion caused by a large earthquake can be observed for up to a month after the event.[1] The first observations of normal modes were made in the 1960s as the advent of higher fidelity instruments coincided with two of the largest earthquakes of the 20th century - the 1960 Great Chilean Earthquake and the 1964 Great Alaskan Earthquake. Since then, the normal modes of the Earth have given us some of the strongest constraints on the deep structure of the Earth.

Seismology

Seismology
Seismology (/saɪzˈmɒlədʒi/) is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies. The field also includes studies of earthquake effects, such as tsunamis as well as diverse seismic sources such as volcanic, tectonic, oceanic, atmospheric, and artificial processes (such as explosions). A related field that uses geology to infer information regarding past earthquakes is paleoseismology. A recording of earth motion as a function of time is called a seismogram. A seismologist is a scientist who does research in seismology.

About Earthquakes-The Most Dangerous Natural Disaster

Earthquakes

An earthquake (also known as a quake, tremor or temblor) is the result of a sudden release of energy in the Earth's crust that creates seismic waves. The seismicity, seismism or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.
Earthquakes are measured using observations from seismometers. The moment magnitude is the most common scale on which earthquakes larger than approximately 5 are reported for the entire globe. The more numerous earthquakes smaller than magnitude 5 reported by national seismological observatories are measured mostly on the local magnitude scale, also referred to as the Richter scale. These two scales are numerically similar over their range of validity. Earthquakes of magnitude 3 or lower are mostly imperceptible or weak, while those of magnitude 7 and over can potentially cause serious damage over large areas, depending on their depth. The largest earthquakes in historic times have been of magnitude slightly over 9, although there is no limit to the possible magnitude. The most recent earthquake of magnitude 9.0 or larger was the 2011 Tōhoku earthquake in Japan (as of October 2012), the largest Japanese earthquake since records began. Intensity of shaking is measured on the modified Mercalli scale. The shallower an earthquake, the more damage to structures it causes, all else being equal.
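To give a feel for what these logarithmic scales imply, the sketch below uses the commonly quoted Gutenberg–Richter energy relation, log10(E) ≈ 1.5·M + 4.8 with E in joules. This relation is a rough empirical approximation added here for illustration; it is not part of the original text.

```python
def radiated_energy_joules(magnitude):
    """Approximate radiated seismic energy (J) for a given magnitude.

    Uses the empirical Gutenberg-Richter relation log10(E) ~ 1.5*M + 4.8
    (E in joules); adequate only for order-of-magnitude comparisons.
    """
    return 10 ** (1.5 * magnitude + 4.8)

for m in (3.0, 5.0, 7.0, 9.0):
    print(f"M{m}: ~{radiated_energy_joules(m):.1e} J")

# Each whole unit of magnitude is roughly a 32-fold jump in radiated energy:
print(radiated_energy_joules(8.0) / radiated_energy_joules(7.0))  # ~31.6
```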
At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.
In its most general sense, the word earthquake is used to describe any seismic event — whether natural or caused by humans — that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.

Naturally occurring earthquakes

Fault types
Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities and this leads to a form of stick-slip behaviour. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.

Earthquake fault types

There are three main types of fault, all of which may cause an earthquake: normal, reverse (thrust) and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7.
This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 degrees Celsius flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults, which may break in a single rupture, are approximately 1000 km. Examples are the earthquakes in Chile (1960), Alaska (1957) and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939) and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
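The proportionality between ruptured area and magnitude can be made concrete through the seismic moment. The following is a minimal sketch, not taken from the source, using M0 = μ·A·D (rigidity times rupture area times average slip) and Mw = (2/3)·(log10 M0 − 9.1); the rupture dimensions, slip and rigidity below are representative assumptions only.

```python
import math

def moment_magnitude(length_m, width_m, slip_m, rigidity_pa=3.0e10):
    """Moment magnitude from rupture geometry.

    Seismic moment:    M0 = rigidity * (length * width) * average slip  [N*m]
    Moment magnitude:  Mw = (2/3) * (log10(M0) - 9.1)
    The default rigidity (~30 GPa) is a typical crustal value (assumption).
    """
    m0 = rigidity_pa * (length_m * width_m) * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative ruptures (assumed dimensions, order-of-magnitude only):
print(moment_magnitude(1000e3, 100e3, 15.0))  # ~1000 km subduction rupture -> Mw ~ 9.0
print(moment_magnitude(300e3, 12e3, 4.0))     # long strike-slip rupture    -> Mw ~ 7.7
```

A long, wide subduction rupture yields a moment magnitude near 9, while a narrower strike-slip rupture of comparable length cannot, which matches the pattern described above.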

Aerial photo of the San Andreas Fault in the Carrizo Plain, northwest of Los Angeles
 
The most important parameter controlling the maximum earthquake magnitude on a fault is however not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees.[6] Thus the width of the plane within the top brittle crust of the Earth can become 50 to 100 km (Tohoku, 2011; Alaska, 1964), making the most powerful earthquakes possible.
Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust, so earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km.
In addition, there exists a hierarchy of stress level among the three fault types. Thrust faults are generated by the highest, strike-slip faults by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that 'pushes' the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, so the pushing force (the greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass 'escapes' in the direction of the least principal stress, namely upward, lifting the rock mass up, so the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.

Saturday, December 1, 2012

World Records Of Concrete

World Records Of Concrete
The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of 715 m (2,346 ft).
The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix.[42][43] The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic metres poured in 54 hours despite a severe tropical storm that required the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by a joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for the largest continuously poured concrete floor was completed on 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of 225,000 square feet (20,900 m2) of concrete placed within a 30-hour period, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed on 18 October 2010, in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,224 cubic yards of concrete placed in a 58-hour period using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the 50,180-square-foot (4,662 m2) cofferdam to be dewatered approximately 26 feet (7.9 m) below sea level to allow the construction of the IHNC GIWW Sill & Monolith Project to be completed in the dry.

Concrete Recycling

Concrete Recycling
Concrete recycling is an increasingly common method of disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.
Concrete, which must be free of trash, wood, paper and other such materials, is collected from demolition sites and put through a crushing machine, often along with asphalt, bricks and rocks.
Reinforced concrete contains rebar and other metallic reinforcements, which are removed with magnets and recycled elsewhere. The remaining aggregate chunks are sorted by size. Larger chunks may go through the crusher again. Smaller pieces of concrete are used as gravel for new construction projects. Aggregate base gravel is laid down as the lowest layer in a road, with fresh concrete or asphalt placed over it. Crushed recycled concrete can sometimes be used as the dry aggregate for brand new concrete if it is free of contaminants, though the use of recycled concrete limits strength and is not allowed in many jurisdictions. On 3 March 1983, a government-funded research team (the VIRL research.codep) approximated that almost 17% of worldwide landfill was by-products of concrete-based waste.

Concrete

Concrete
Concrete is a composite construction material composed primarily of aggregate, cement, and water. There are many formulations, which provide varied properties. The aggregate is generally a coarse gravel or crushed rocks such as limestone, or granite, along with a fine aggregate such as sand. The cement, commonly Portland cement, and other cementitious materials such as fly ash and slag cement, serve as a binder for the aggregate. Various chemical admixtures are also added to achieve varied properties. Water is then mixed with this dry composite, which enables it to be shaped (typically poured) and then solidified and hardened into rock-hard strength through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, eventually creating a robust stone-like material. Concrete has relatively high compressive strength, but much lower tensile strength. For this reason it is usually reinforced with materials that are strong in tension (often steel). Concrete can be damaged by many processes, such as the freezing of trapped water.
Concrete is widely used for making architectural structures, foundations, brick/block walls, pavements, bridges/overpasses, motorways/roads, runways, parking structures, dams, pools/reservoirs, pipes, footings for gates, fences and poles and even boats. Famous concrete structures include the Burj Khalifa (world's tallest building), the Hoover Dam, the Panama Canal and the Roman Pantheon.
Concrete technology was known by the Ancient Romans and was widely used within the Roman Empire—the Colosseum is largely built of concrete. After the fall of the Roman Empire, use of concrete became scarce until the technology was re-pioneered in the mid-18th century.
The environmental impact of concrete is a complex mixture of not entirely negative effects; while concrete is a major contributor to greenhouse gas emissions, recycling of concrete is increasingly common in structures that have reached the end of their life. Structures made of concrete can have a long service life. As concrete has a high thermal mass and very low permeability, it can make for energy efficient housing.

History
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
Concrete was used for construction in many ancient structures.
During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman Architectural Revolution, freed Roman construction from the restrictions of stone and brick material and allowed for revolutionary new designs in terms of both structural complexity and dimension.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (ca. 200 kg/cm²). However, due to the absence of reinforcement, its tensile strength was far lower than that of modern reinforced concrete, and its mode of application was also different:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The widespread use of concrete in many Roman structures has ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges have masonry cladding on a concrete core, as does the dome of the Pantheon.
Some have stated that the secret of concrete was lost for 13 centuries until 1756, when the British engineer John Smeaton pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate. However, the Canal du Midi was built using concrete in 1670, and there are concrete structures in Finland that date from the 16th century. A method for producing Portland cement was patented by Joseph Aspdin in 1824.

Friday, November 30, 2012

Taipei 101

Taipei 101
Taipei 101 (Chinese: 台北101 / 臺北101), formerly known as the Taipei World Financial Center, is a landmark skyscraper located in Xinyi District, Taipei, Taiwan. The building ranked officially as the world's tallest from 2004 until the opening of the Burj Khalifa in Dubai in 2010. In July 2011, the building was awarded LEED Platinum certification, the highest award in the Leadership in Energy and Environmental Design (LEED) rating system, and became the tallest and largest green building in the world. Taipei 101 was designed by C.Y. Lee & partners and constructed primarily by KTRT Joint Venture. The tower has served as an icon of modern Taiwan ever since its opening, and received the 2004 Emporis Skyscraper Award. Fireworks launched from Taipei 101 feature prominently in international New Year's Eve broadcasts and the structure appears frequently in travel literature and international media.
Taipei 101 comprises 101 floors above ground and 5 floors underground. The building was architecturally created as a symbol of the evolution of technology and Asian tradition. Its postmodernist approach to style incorporates traditional design elements and gives them modern treatments. The tower is designed to withstand typhoons and earthquakes. A multi-level shopping mall adjoining the tower houses hundreds of fashionable stores, restaurants and clubs.
Taipei 101 is owned by the Taipei Financial Center Corporation (TFCC) and managed by the International division of Urban Retail Properties Corporation, based in Chicago. The name originally planned for the building, Taipei World Financial Center, which it carried until 2003, was derived from the name of the owner. The original Chinese name literally meant Taipei International Financial Center (Chinese: 臺北國際金融中心).
Taipei 101
台北101

Taipei 101

Record height
Tallest in the world from 2004 to 2010
Preceded by Petronas Towers
Surpassed by Burj Khalifa
General information
Type Mixed use: communication, conference, fitness center, library, observation, office, restaurant, retail
Location Xinyi District, Taipei, Republic of China
Coordinates 25°2′1″N 121°33′54″E
Construction started 1999
Completed 2004
Opening December 31, 2004
Cost NT$ 58 billion
(US$ 1.80 billion)
Height
Architectural 509 m (1,669.9 ft)
Roof 449.2 m (1,473.8 ft)
Top floor 439 m (1,440.3 ft)
Observatory 391.8 m (1,285.4 ft)
Technical details
Floor count 101 (+5 basement floors)
Floor area 193,400 m2 (2,081,700 sq ft)
Elevators 61 Toshiba/KONE elevators (including double-deck shuttles and 2 high-speed observatory elevators)
Design and construction
Owner Taipei Financial Center Corporation
Management Urban Retail Properties Co.
Architect C.Y. Lee & partners
Structural engineer Thornton Tomasetti
Main contractor KTRT Joint Venture
Website taipei-101.com.tw

The Biography Of John Smeaton (Father Of Civil Engineering)

John Smeaton (Father Of Civil Engineering)
John Smeaton, FRS (8 June 1724 – 28 October 1792) was an English civil engineer responsible for the design of bridges, canals, harbours and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Smeaton was the first self-proclaimed civil engineer, and is often regarded as the "father of civil engineering".
He was associated with the Lunar Society.
John Smeaton

Portrait of John Smeaton, with the Eddystone Lighthouse in the background
Born 8 June 1724
Austhorpe, Leeds, England
Died 28 October 1792 (aged 68)
Austhorpe, Leeds, England
Nationality British
Occupation Civil engineer

Law and physics

Smeaton was born in Austhorpe, Leeds, England. After studying at Leeds Grammar School he joined his father's law firm, but left to become a mathematical instrument maker (working with Henry Hindley), developing, among other instruments, a pyrometer to study material expansion and a whirling speculum or horizontal top (a maritime navigation aid).
He was elected a Fellow of the Royal Society in 1753, and in 1759 won the Copley Medal for his research into the mechanics of waterwheels and windmills. His 1759 paper "An Experimental Enquiry Concerning the Natural Powers of Water and Wind to Turn Mills and Other Machines Depending on Circular Motion" addressed the relationship between pressure and velocity for objects moving in air (Smeaton noted that the table doing so was actually contributed by "my friend Mr Rouse", "an ingenious gentleman of Harborough, Leicestershire", and calculated on the basis of Rouse's experiments), and his concepts were subsequently developed to devise the 'Smeaton Coefficient'.


Over the period 1759–1782 he performed a series of further experiments and measurements on waterwheels that led him to support and champion the vis viva theory of the German Gottfried Leibniz, an early formulation of the conservation of energy. This led him into conflict with members of the academic establishment who rejected Leibniz's theory, believing it inconsistent with Sir Isaac Newton's conservation of momentum.

Civil engineering

Recommended by the Royal Society, Smeaton designed the third Eddystone Lighthouse (1755–59). He pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. His lighthouse remained in use until 1877, when the rock underlying the structure's foundations had begun to erode; it was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. He is important in the history, rediscovery and development of modern cement because he identified the compositional requirements needed to obtain "hydraulicity" in lime; this work led ultimately to the invention of Portland cement.
Deciding that he wanted to focus on the lucrative field of civil engineering, he commenced an extensive series of commissions.
Because of his expertise in engineering, Smeaton was called to testify in court for a case related to the silting-up of the harbour at Wells-next-the-Sea in Norfolk in 1782: he is considered to be the first expert witness to appear in an English court. He also acted as a consultant on the disastrous 63-year-long New Harbour at Rye, designed to combat the silting of the port of Winchelsea. The project is now known informally as "Smeaton's Harbour", but despite the name his involvement was limited and occurred more than 30 years after work on the harbour commenced.

Difficulties Of Interstellar Space Travel

The difficulties of interstellar space travel

The main challenge facing interstellar travel is the vast distances that have to be covered. This means that a very great speed and/or a very long travel time is needed. The time it takes with most realistic propulsion methods would be from decades to millennia. Hence an interstellar ship would be much more severely exposed to the hazards found in interplanetary travel, including vacuum, radiation, weightlessness, and micrometeoroids. The long travel times make it difficult to design manned missions. The fundamental limits of space-time present another challenge. Furthermore, it is difficult to foresee interstellar trips being justified for conventional economic reasons.

Required energy

A significant factor contributing to the difficulty is the energy which must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½mv², where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the required energy at least doubles, because the energy needed to halt the ship equals the energy needed to accelerate it to travel speed.
The velocity needed for a manned round trip of a few decades to even the nearest star is thousands of times greater than that of present space vehicles. This means that, because kinetic energy scales with the square of velocity, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 PJ or 4.5 × 10¹⁷ J or 125 billion kWh, not accounting for losses. This energy has to be carried along, as solar panels do not work far from the Sun and other stars.
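The 450 PJ figure quoted above is just the kinetic-energy lower bound K = ½mv²; a minimal check in Python (straightforward arithmetic, nothing beyond the numbers already stated):

```python
c = 299_792_458.0            # speed of light, m/s
mass = 1000.0                # one metric ton, kg
v = 0.1 * c                  # one-tenth of the speed of light

kinetic_energy = 0.5 * mass * v ** 2          # K = 1/2 m v^2, the lower bound
print(f"{kinetic_energy:.2e} J")              # ~4.5e17 J, i.e. about 450 PJ
print(f"{kinetic_energy / 3.6e6:.2e} kWh")    # ~1.25e11 kWh, about 125 billion kWh

# Decelerating at the destination with the ship's own engines at least
# doubles this figure, as noted earlier.
```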
There is some belief that the magnitude of this energy may make interstellar travel impossible. It has been reported that at the 2008 Joint Propulsion Conference, where future space propulsion challenges were discussed and debated, a conclusion was reached that it was improbable that humans would ever explore beyond the Solar System. Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated, "At least 100 times the total energy output of the entire world [in a given year] would be required for the voyage (to Alpha Centauri)".

Interstellar medium

A major issue with traveling at extremely high speeds is that interstellar dust and gas may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating these risks, have not been adequately assessed.

Travel time

It can be argued that an interstellar mission which cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion. On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination as the achievable travel speed derived from growth (even exponential growth) increases, there is a clear minimum in the total time to that destination from now. Voyages undertaken before the minimum will be overtaken by those that leave at the minimum, while those that leave after the minimum will never overtake those that left at the minimum.
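A minimal numerical sketch of this "wait calculation" idea follows; the growth rate, starting speed and distance are illustrative assumptions, not Kennedy's actual figures. If the achievable cruise speed grows exponentially with the launch date t, the total time to arrival T(t) = t + d / v(t) has a clear minimum.

```python
import math

def total_arrival_time(launch_year, distance_ly=4.3, v0=1 / 18000, growth=0.02):
    """Years from now until arrival for a ship launched `launch_year` years
    from now.  Cruise speed (light-years per year) is assumed to grow
    exponentially from today's v0; all parameters are illustrative, and this
    toy model ignores the light-speed ceiling.
    """
    speed = v0 * math.exp(growth * launch_year)
    return launch_year + distance_ly / speed

best = min(range(0, 2001), key=total_arrival_time)
print(best, total_arrival_time(best))   # the optimal launch date and arrival time
print(total_arrival_time(0))            # leaving immediately means arriving far later
print(total_arrival_time(best + 200))   # leaving too late also means arriving later
```

Launching before the minimum means being overtaken by later, faster ships; launching after it means arriving later than the optimal departure would have, which is the behaviour described above.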
One argument against the stance of delaying a start until reaching fast propulsion system velocity is that the various other non-technical problems that are specific to long-distance travel at considerably higher speed (such as interstellar particle impact, possible dramatic shortening of average human life span during extended space residence, etc.) may remain obstacles that take much longer time to resolve than the propulsion issue alone, assuming that they can even be solved eventually at all. A case can therefore be made for starting a mission without delay, based on the concept of an achievable and dedicated but relatively slow interstellar mission using the current technological state-of-the-art and at relatively low cost, rather than banking on being able to solve all problems associated with a faster mission without having a reliable time frame for achievability of such.
Intergalactic travel involves distances about a million-fold greater than interstellar distances, making it radically more difficult than even interstellar travel.

Interstellar distances

Astronomical distances are often measured in the time it would take a beam of light to travel between two points. Light in a vacuum travels approximately 300,000 kilometers per second or 186,000 miles per second.
The distance from Earth to the Moon is 1.3 light-seconds. With current spacecraft propulsion technologies, a craft can cover the distance from the Earth to the Moon in around eight hours (New Horizons). That means light travels approximately thirty thousand times faster than current spacecraft propulsion technologies. The distance from Earth to other planets in the solar system ranges from three light-minutes to about four light-hours. Depending on the planet and its alignment to Earth, for a typical unmanned spacecraft these trips will take from a few months to a little over a decade.
The nearest known star to the Sun is Proxima Centauri, which is 4.23 light-years away. However, there may be undiscovered brown dwarf systems that are closer. The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600th of a light-year in 30 years and is currently moving at 1/18,000th the speed of light. At this rate, a journey to Proxima Centauri would take 72,000 years. Of course, this mission was not specifically intended to travel fast to the stars, and current technology could do much better. The travel time could be reduced to a few millennia using lightsails, or to a century or less using nuclear pulse propulsion. A better understanding of the vastness of the interstellar distance to one of the closest stars to the Sun, Alpha Centauri A (a Sun-like star), can be obtained by scaling down the Earth-Sun distance (~150,000,000 km) to one meter. On this scale the distance to Alpha Centauri A would still be 271 kilometers or about 169 miles.
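These scales can be checked with simple arithmetic; the sketch below reproduces the order of magnitude of the figures quoted above (the exact values depend on how the distances and speeds are rounded, so the outputs are approximate).

```python
LIGHT_YEAR_KM = 9.4607e12        # kilometres in one light-year
AU_KM = 1.496e8                  # mean Earth-Sun distance, km

# Voyager 1 covered ~1/600 of a light-year in 30 years, i.e. ~1/18,000 of c:
voyager_speed_ly_per_year = (1 / 600) / 30
print(4.23 / voyager_speed_ly_per_year)        # ~7.6e4 years to Proxima Centauri

# Scale model: shrink the Earth-Sun distance to one metre.
metres_per_km = 1.0 / AU_KM
alpha_cen_km = 4.3 * LIGHT_YEAR_KM
print(alpha_cen_km * metres_per_km / 1000)     # ~272 km to Alpha Centauri on that scale
```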
However, more speculative approaches to interstellar travel offer the possibility of circumventing these difficulties. Special relativity offers the possibility of shortening the travel time: if a starship with sufficiently advanced engines could reach velocities approaching the speed of light, relativistic time dilation would make the voyage much shorter for the traveler. However, it would still take many years of elapsed time as viewed by the people remaining on Earth, and upon returning to Earth, the travelers would find that far more time had elapsed on Earth than had for them. (For more on this effect, see twin paradox.)
General relativity offers the theoretical possibility that faster-than-light travel may be possible without violating fundamental laws of physics, for example, through wormholes, although it is still debated whether this is possible, in part, because of causality concerns. Proposed mechanisms for faster-than-light travel within the theory of General Relativity require the existence of exotic matter.

Communications

The round-trip delay time is the minimum time between an observation by the probe and the moment the probe can receive instructions from Earth reacting to the observation. Given that information can travel no faster than the speed of light, this is about 32 hours for Voyager 1, and near Proxima Centauri it would be about 8 years. Faster reactions would have to be programmed to be carried out automatically. Of course, in the case of a manned flight the crew can respond immediately to their observations. However, the round-trip delay time makes them not only extremely distant from Earth but, in terms of communication, also extremely isolated from it (analogous to how long-distance explorers of the past were similarly isolated before the invention of the electrical telegraph).
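Since the delay is simply twice the one-way light travel time, it is easy to compute for any distance; a minimal sketch (the Voyager 1 distance used below is an approximate value):

```python
def round_trip_delay_years(distance_light_years):
    """Minimum command-response delay: a signal must travel out and back."""
    return 2.0 * distance_light_years

# Voyager 1 is roughly 16 light-hours from Earth (approximate figure):
print(2 * 16, "hours")                        # ~32 hours, as noted above
print(round_trip_delay_years(4.23), "years")  # ~8.5 years near Proxima Centauri
```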
Interstellar communication is still problematic — even if a probe could reach the nearest star, its ability to communicate back to Earth would be difficult given the extreme distance.

Prime targets for interstellar travel

There are 59 known stellar systems within 20 light years from the Sun, containing 81 visible stars. The following could be considered prime targets for interstellar missions:
Stellar system Distance (ly) Remarks
Alpha Centauri 4.3 Closest system. Three stars (G2, K1, M5). Component A similar to our sun (a G2 star). Alpha Centauri B has one confirmed planet.
Barnard's Star 6.0 Small, low luminosity M5 red dwarf. Next closest to Solar System.
Sirius 8.7 Large, very bright A1 star with a white dwarf companion.
Epsilon Eridani 10.8 Single K2 star slightly smaller and colder than the Sun. Has two asteroid belts, might have a giant and one much smaller planet, and may possess a solar system type planetary system.
Tau Ceti 11.8 Single G8 star similar to the Sun. High probability of possessing a solar system type planetary system.
Gliese 581 20.3 Multiple planet system. The unconfirmed exoplanet Gliese 581 g and the confirmed exoplanet Gliese 581 d are in the star's habitable zone.
Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Manned missions



The mass of any craft capable of carrying humans would inevitably be substantially larger than that necessary for an unmanned interstellar probe. For instance, the first space probe, Sputnik 1, had a payload of 83.6 kg, while the first spacecraft to carry a living passenger (Laika the dog), Sputnik 2, had a payload six times that, at 508.3 kg. This underestimates the difference in the case of interstellar missions, given the vastly greater travel times involved and the resulting necessity of a closed-cycle life support system. As technology continues to advance, and given the aggregate risks and support requirements of manned interstellar travel, the first interstellar missions are unlikely to carry earthly life forms.