Thursday, December 24, 2009





Ozone depletion describes two distinct, but related observations: a slow, steady decline of about 4% per decade in the total volume of ozone in Earth's stratosphere (ozone layer) since the late 1970s, and a much larger, but seasonal, decrease in stratospheric ozone over Earth's polar regions during the same period. The latter phenomenon is commonly referred to as the ozone hole. In addition to this well-known stratospheric ozone depletion, there are also tropospheric ozone depletion events, which occur near the surface in polar regions during spring.

The detailed mechanism by which the polar ozone holes form is different from that for the mid-latitude thinning, but the most important process in both trends is catalytic destruction of ozone by atomic chlorine and bromine. The main source of these halogen atoms in the stratosphere is photodissociation of chlorofluorocarbon (CFC) compounds, commonly called freons, and of bromofluorocarbon compounds known as halons. These compounds are transported into the stratosphere after being emitted at the surface. Both ozone depletion mechanisms strengthened as emissions of CFCs and halons increased.

CFCs and other contributory substances are commonly referred to as ozone-depleting substances (ODS). Since the ozone layer prevents most harmful UVB wavelengths (280–315 nm) of ultraviolet light (UV light) from passing through the Earth's atmosphere, observed and projected decreases in ozone have generated worldwide concern, leading to adoption of the Montreal Protocol, which bans the production of CFCs and halons as well as related ozone-depleting chemicals such as carbon tetrachloride and trichloroethane. It is suspected that a variety of biological consequences such as increases in skin cancer, cataracts, damage to plants, and reduction of plankton populations in the ocean's photic zone may result from the increased UV exposure due to ozone depletion.

Ozone cycle overview


Three forms (or allotropes) of oxygen are involved in the ozone-oxygen cycle: oxygen atoms (O or atomic oxygen), oxygen gas (O2 or diatomic oxygen), and ozone gas (O3 or triatomic oxygen). Ozone is formed in the stratosphere when oxygen molecules photodissociate after absorbing an ultraviolet photon whose wavelength is shorter than 240 nm. This produces two oxygen atoms. The atomic oxygen then combines with O2 to create O3. Ozone molecules absorb UV light between 310 and 200 nm, following which ozone splits into a molecule of O2 and an oxygen atom. The oxygen atom then joins up with an oxygen molecule to regenerate ozone. This is a continuing process which terminates when an oxygen atom "recombines" with an ozone molecule to make two O2 molecules:
O + O3 → 2 O2
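
Written out in full, the cycle just described (the ozone–oxygen, or Chapman, cycle) consists of the following steps, the last of which is the termination reaction shown above:

O2 + UV photon (λ < 240 nm) → 2 O
O + O2 → O3
O3 + UV photon (λ ≈ 200–310 nm) → O2 + O
O + O3 → 2 O2

The first three steps interconvert O, O2 and O3, with the two photolysis steps absorbing ultraviolet radiation; only the final step removes odd oxygen (O and O3) and closes the cycle.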

Layers of the atmosphere (not to scale).
The overall amount of ozone in the stratosphere is determined by a balance between photochemical production and recombination.



Ozone can be destroyed by a number of free radical catalysts, the most important of which are the hydroxyl radical (OH·), the nitric oxide radical (NO·), atomic chlorine (Cl·) and bromine (Br·). All of these have both natural and manmade sources; at the present time, most of the OH· and NO· in the stratosphere is of natural origin, but human activity has dramatically increased the levels of chlorine and bromine. These elements are found in certain stable organic compounds, especially chlorofluorocarbons (CFCs), which may find their way to the stratosphere without being destroyed in the troposphere due to their low reactivity. Once in the stratosphere, the Cl and Br atoms are liberated from the parent compounds by the action of ultraviolet light, e.g. ('h' is Planck's constant, 'ν' is frequency of electromagnetic radiation)

CFCl3 + hν → CFCl2 + Cl

The Cl and Br atoms can then destroy ozone molecules through a variety of catalytic cycles. In the simplest example of such a cycle,[4] a chlorine atom reacts with an ozone molecule, taking an oxygen atom with it (forming ClO) and leaving a normal oxygen molecule. The chlorine monoxide (i.e., the ClO) can react with a second molecule of ozone (i.e., O3) to yield another chlorine atom and two molecules of oxygen. The chemical shorthand for these gas-phase reactions is:
Cl + O3 → ClO + O2
ClO + O3 → Cl + 2 O2

The overall effect is a decrease in the amount of ozone: the net result of the two reactions above is 2 O3 → 3 O2, while the chlorine atom emerges unchanged and can repeat the cycle. More complicated mechanisms have been discovered that lead to ozone destruction in the lower stratosphere as well.

A single chlorine atom would keep on destroying ozone (thus acting as a catalyst) for up to two years (the time scale for transport back down to the troposphere) were it not for reactions that remove it from this cycle by forming reservoir species such as hydrogen chloride (HCl) and chlorine nitrate (ClONO2). On a per-atom basis, bromine is even more efficient than chlorine at destroying ozone, but there is much less bromine in the atmosphere at present. As a result, both chlorine and bromine contribute significantly to the overall ozone depletion. Laboratory studies have shown that fluorine and iodine atoms participate in analogous catalytic cycles. However, in the Earth's stratosphere, fluorine atoms react rapidly with water and methane to form strongly bound HF, while organic molecules which contain iodine react so rapidly in the lower atmosphere that they do not reach the stratosphere in significant quantities. Before it is removed, a single chlorine atom can destroy on the order of 100,000 ozone molecules; this, combined with the amount of chlorine released into the atmosphere by chlorofluorocarbons (CFCs) each year, illustrates how damaging CFCs are to the ozone layer.

Quantitative understanding of the chemical ozone loss process

New research on the breakdown of a key molecule in these ozone-depleting chemicals, dichlorine peroxide (Cl2O2), calls into question the completeness of present atmospheric models of polar ozone depletion. Specifically, chemists at NASA's Jet Propulsion Laboratory in Pasadena, California, found in 2007 that the temperatures, and the spectrum and intensity of radiation, present in the stratosphere could not produce the rate of chemical breakdown required to release chlorine radicals in the volume necessary to explain observed rates of ozone depletion. Instead, laboratory tests, designed to be the most accurate reflection of stratospheric conditions to date, showed the decay of the crucial molecule to be almost an order of magnitude slower than previously thought.


That result motivated further measurements by different methods, resulting in cross-sections that agree with the older, higher ones. If the new results hold up, the purported discrepancy will be resolved.

Observations on ozone layer depletion

The most pronounced decrease in ozone has been in the lower stratosphere. However, the ozone hole is usually measured not in terms of ozone concentrations at these levels (which are typically a few parts per million) but by the reduction in total column ozone above a point on the Earth's surface, normally expressed in Dobson units, abbreviated as "DU" (one Dobson unit corresponds to a layer of pure ozone 0.01 mm thick at standard temperature and pressure; typical column values are around 300 DU). Marked decreases in column ozone in the Antarctic spring and early summer compared to the early 1970s and before have been observed using instruments such as the Total Ozone Mapping Spectrometer (TOMS).

Chemicals in the atmosphere


 CFCs in the atmosphere

Chlorofluorocarbons (CFCs) were invented by Thomas Midgley in the 1920s. They were used in air conditioning/cooling units, as aerosol spray propellants prior to the 1980s, and in the cleaning processes of delicate electronic equipment. They also occur as by-products of some chemical processes. No significant natural sources have ever been identified for these compounds — their presence in the atmosphere is due almost entirely to human manufacture. As mentioned in the ozone cycle overview above, when such ozone-depleting chemicals reach the stratosphere, they are dissociated by ultraviolet light to release chlorine atoms. The chlorine atoms act as a catalyst, and each can break down tens of thousands of ozone molecules before being removed from the stratosphere. Given the longevity of CFC molecules, recovery times are measured in decades. It is calculated that a CFC molecule takes an average of 15 years to go from the ground level up to the upper atmosphere, and it can stay there for about a century, destroying up to one hundred thousand ozone molecules during that time.


Verification of observations

Scientists have been increasingly able to attribute the observed ozone depletion to the increase of man-made (anthropogenic) halogen compounds from CFCs by the use of complex chemistry transport models and their validation against observational data (e.g. SLIMCAT, CLaMS). These models work by combining satellite measurements of chemical concentrations and meteorological fields with chemical reaction rate constants obtained in lab experiments. They are able to identify not only the key chemical reactions but also the transport processes which bring CFC photolysis products into contact with ozone.
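
As a drastically simplified illustration of the kind of calculation such models build on (a toy zero-dimensional box model with made-up illustrative numbers, not SLIMCAT or CLaMS themselves, which also track transport and thousands of reactions), the Python sketch below integrates the rate law for a single catalytic loss reaction, combining assumed concentrations with an assumed rate constant:

def integrate_ozone_loss(o3, cl, k, dt, steps):
    """Forward-Euler integration of d[O3]/dt = -k*[Cl]*[O3] (concentrations in molecules/cm^3)."""
    history = [o3]
    for _ in range(steps):
        o3 = o3 - k * cl * o3 * dt   # loss of ozone over one time step dt (seconds)
        history.append(o3)
    return history

# Illustrative values only: a fixed chlorine-atom concentration and rate constant.
trace = integrate_ozone_loss(o3=5e12, cl=1e5, k=1e-11, dt=60.0, steps=1440)  # one day, 1-minute steps
print(f"O3 after one day: {trace[-1]:.3e} molecules/cm^3")

Real chemistry transport models do this for every reaction in every modelled air parcel, using measured rate constants and satellite-derived concentrations, and advect the parcels with meteorological wind fields.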

Interest in ozone layer depletion


While the effect of the Antarctic ozone hole in decreasing the global ozone is relatively small, estimated at about 4% per decade, the hole has generated a great deal of interest because:

The decrease in the ozone layer was predicted in the early 1980s to be roughly 7% over a 60 year period.[citation needed]

The sudden recognition in 1985 that there was a substantial "hole" was widely reported in the press. The especially rapid ozone depletion in Antarctica had previously been dismissed as a measurement error.[citation needed]

Many[citation needed] were worried that ozone holes might start to appear over other areas of the globe but to date the only other large-scale depletion is a smaller ozone "dimple" observed during the Arctic spring over the North Pole. Ozone at middle latitudes has declined, but by a much smaller extent (about 4–5% decrease).

If the conditions became more severe (cooler stratospheric temperatures, more stratospheric clouds, more active chlorine), then global ozone may decrease at a much greater pace. Standard global warming theory predicts that the stratosphere will cool.

When the Antarctic ozone hole breaks up, the ozone-depleted air drifts out into nearby areas. Decreases in the ozone level of up to 10% have been reported in New Zealand in the month following the break-up of the Antarctic ozone hole.

Consequences of ozone layer depletion

Since the ozone layer absorbs UVB ultraviolet light from the Sun, ozone layer depletion is expected to increase surface UVB levels, which could lead to damage, including increases in skin cancer. This was the reason for the Montreal Protocol. Although decreases in stratospheric ozone are well-tied to CFCs and there are good theoretical reasons to believe that decreases in ozone will lead to increases in surface UVB, there is no direct observational evidence linking ozone depletion to higher incidence of skin cancer in human beings. This is partly due to the fact that UVA, which has also been implicated in some forms of skin cancer, is not absorbed by ozone, and it is nearly impossible to control statistics for lifestyle changes in the populace.

 Increased UV

Ozone, while a minority constituent in the Earth's atmosphere, is responsible for most of the absorption of UVB radiation. The amount of UVB radiation that penetrates through the ozone layer decreases exponentially with the slant-path thickness/density of the layer. Correspondingly, a decrease in atmospheric ozone is expected to give rise to significantly increased levels of UVB near the surface.
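
As a rough sketch of that exponential dependence (a simple Beer–Lambert form; the symbols here are chosen for illustration rather than taken from a specific reference):

I(λ) ≈ I0(λ) · exp( −σ(λ) · N / cos θ )

where I is the UVB irradiance reaching the surface at wavelength λ, I0 is the irradiance at the top of the atmosphere, σ(λ) is the ozone absorption cross-section, N is the vertical column amount of ozone, and θ is the solar zenith angle (so N / cos θ is the slant-path column). Because the ozone column appears in the exponent, even a modest percentage decrease in N produces a disproportionately large increase in surface UVB at the most strongly absorbed (shortest) wavelengths.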

Increases in surface UVB due to the ozone hole can be partially inferred by radiative transfer model calculations, but cannot be calculated from direct measurements because of the lack of reliable historical (pre-ozone-hole) surface UV data, although more recent surface UV observation measurement programmes exist (e.g. at Lauder, New Zealand).

Because it is this same UV radiation that creates ozone in the ozone layer from O2 (regular oxygen) in the first place, a reduction in stratospheric ozone would actually tend to increase photochemical production of ozone at lower levels (in the troposphere). However, the overall observed trends in total column ozone still show a decrease, largely because ozone produced lower down has a naturally shorter photochemical lifetime, so it is destroyed before its concentration could reach a level that would compensate for the ozone reduction higher up.[citation needed]

Biological effects

The main public concern regarding the ozone hole has been the effects of increased surface UV radiation on human health. So far, ozone depletion in most locations has been typically a few percent and, as noted above, no direct evidence of health damage is available in most latitudes. Were the high levels of depletion seen in the ozone hole ever to be common across the globe, the effects could be substantially more dramatic. As the ozone hole over Antarctica has in some instances grown so large as to reach southern parts of Australia and New Zealand, environmentalists have been concerned that the increase in surface UV could be significant.[citation needed]

Effects on humans

UVB (the higher energy UV radiation absorbed by ozone) is generally accepted to be a contributory factor to skin cancer. In addition, increased surface UV leads to increased tropospheric ozone, which is a health risk to humans.[citation needed] The increased surface UV also represents an increase in the vitamin D synthetic capacity of the sunlight.


The cancer-preventive effects of vitamin D represent a possible beneficial effect of ozone depletion. In terms of health costs, the possible benefits of increased UV irradiance may outweigh the burden.

1. Basal and Squamous Cell Carcinomas -- The most common forms of skin cancer in humans, basal and squamous cell carcinomas, have been strongly linked to UVB exposure. The mechanism by which UVB induces these cancers is well understood — absorption of UVB radiation causes the pyrimidine bases in the DNA molecule to form dimers, resulting in transcription errors when the DNA replicates. These cancers are relatively mild and rarely fatal, although the treatment of squamous cell carcinoma sometimes requires extensive reconstructive surgery. By combining epidemiological data with results of animal studies, scientists have estimated that a one percent decrease in stratospheric ozone would increase the incidence of these cancers by 2%.

2. Malignant Melanoma — Another form of skin cancer, malignant melanoma, is much less common but far more dangerous, being lethal in about 15–20% of the cases diagnosed. The relationship between malignant melanoma and ultraviolet exposure is not yet well understood, but it appears that both UVB and UVA are involved. Experiments on fish suggest that 90 to 95% of malignant melanomas may be due to UVA and visible radiation[26] whereas experiments on opossums suggest a larger role for UVB.[25] Because of this uncertainty, it is difficult to estimate the impact of ozone depletion on melanoma incidence. One study showed that a 10% increase in UVB radiation was associated with a 19% increase in melanomas for men and 16% for women.[27] A study of people in Punta Arenas, at the southern tip of Chile, showed a 56% increase in melanoma and a 46% increase in nonmelanoma skin cancer over a period of seven years, along with decreased ozone and increased UVB levels.

3. Cortical Cataracts -- Studies are suggestive of an association between ocular cortical cataracts and UV-B exposure, using crude approximations of exposure and various cataract assessment techniques. A detailed assessment of ocular exposure to UV-B was carried out in a study on Chesapeake Bay Watermen, where increases in average annual ocular exposure were associated with increasing risk of cortical opacity [29]. In this highly exposed group of predominantly white males, the evidence linking cortical opacities to sunlight exposure was the strongest to date. However, subsequent data from a population-based study in Beaver Dam, WI suggested the risk may be confined to men. In the Beaver Dam study, the exposures among women were lower than exposures among men, and no association was seen.[30] Moreover, there were no data linking sunlight exposure to risk of cataract in African Americans, although other eye diseases have different prevalences among the different racial groups, and cortical opacity appears to be higher in African Americans compared with whites.

4. Increased Tropospheric Ozone -- Increased surface UV leads to increased tropospheric ozone. Ground-level ozone is generally recognized to be a health risk, as ozone is toxic due to its strong oxidant properties. At this time, ozone at ground level is produced mainly by the action of UV radiation on combustion gases from vehicle exhausts.[citation needed]

Tuesday, November 17, 2009



William Henry "Bill" Gates III (born October 28, 1955) is an American business magnate, philanthropist, and chairman of Microsoft, the software company he founded with Paul Allen. He is ranked consistently one of the world's wealthiest people[4] and the wealthiest overall as of 2009. During his career at Microsoft, Gates held the positions of CEO and chief software architect, and remains the largest individual shareholder with more than 8 percent of the common stock.He has also authored or co-authored several books.




Gates is one of the best-known entrepreneurs of the personal computer revolution. Although he is admired by many, a number of industry insiders criticize his business tactics, which they consider anti-competitive, an opinion which has in some cases been upheld by the courts (see Criticism of Microsoft). In the later stages of his career, Gates has pursued a number of philanthropic endeavors, donating large amounts of money to various charitable organizations and scientific research programs through the Bill & Melinda Gates Foundation, established in 2000.



Bill Gates stepped down as chief executive officer of Microsoft in January 2000. He remained as chairman and created the position of chief software architect. In June 2006, Gates announced that he would be transitioning to part-time work at Microsoft and full-time work at the Bill & Melinda Gates Foundation. He gradually transferred his duties to Ray Ozzie, chief software architect, and Craig Mundie, chief research and strategy officer. Gates' last full-time day at Microsoft was June 27, 2008. He remains at Microsoft as non-executive chairman.

Early life


Gates was born in Seattle, Washington, to William H. Gates, Sr. and Mary Maxwell Gates, of English, German, and Scotch-Irish descent. His family was upper middle class; his father was a prominent lawyer, his mother served on the board of directors for First Interstate BancSystem and the United Way, and her father, J. W. Maxwell, was a national bank president. Gates has one elder sister, Kristi (Kristianne), and one younger sister, Libby. He was the fourth of his name in his family, but was known as William Gates III or "Trey" because his father had dropped his own "III" suffix. Early on in his life, Gates' parents had a law career in mind for him.



At 13 he enrolled in the Lakeside School, an exclusive preparatory school. When he was in the eighth grade, the Mothers Club at the school used proceeds from its rummage sale to buy an ASR-33 teletype terminal and a block of computer time on a General Electric (GE) computer for the school's students. Gates took an interest in programming the GE system in BASIC and was excused from math classes to pursue his interest. He wrote his first computer program on this machine: an implementation of tic-tac-toe that allowed users to play games against the computer. Gates was fascinated by the machine and how it would always execute software code perfectly. Reflecting on that moment later, he said, "There was just something neat about the machine." After the Mothers Club donation was exhausted, he and other students sought time on systems including DEC PDP minicomputers. One of these systems was a PDP-10 belonging to Computer Center Corporation (CCC), which banned four Lakeside students—Gates, Paul Allen, Ric Weiland, and Kent Evans—for the summer after it caught them exploiting bugs in the operating system to obtain free computer time.



At the end of the ban, the four students offered to find bugs in CCC's software in exchange for computer time. Rather than use the system via teletype, Gates went to CCC's offices and studied source code for various programs that ran on the system, including programs in FORTRAN, LISP, and machine language. The arrangement with CCC continued until 1970, when the company went out of business. The following year, Information Sciences Inc. hired the four Lakeside students to write a payroll program in COBOL, providing them computer time and royalties. After the school's administrators became aware of his programming abilities, Gates wrote the school's computer program to schedule students in classes. He modified the code so that he was placed in classes with mostly female students. He later stated that "it was hard to tear myself away from a machine at which I could so unambiguously demonstrate success." At age 17, Gates formed a venture with Allen, called Traf-O-Data, to make traffic counters based on the Intel 8008 processor. In early 1973, Bill Gates served as a congressional page in the U.S. House of Representatives.

 Gates graduated from Lakeside School in 1973. He scored 1590 out of 1600 on the SAT[18] and subsequently enrolled at Harvard College in the fall of 1973.[19] Prior to the mid-1990s, an SAT score of 1590 corresponded roughly to an IQ of 170,[20] a figure that has been cited frequently by the press.[21] While at Harvard, he met his future business partner, Steve Ballmer, whom he later appointed as CEO of Microsoft. He also met computer scientist Christos Papadimitriou at Harvard, with whom he collaborated on a paper about pancake sorting.[22] He did not have a definite study plan while a student at Harvard[23] and spent a lot of time using the school's computers. He remained in contact with Paul Allen, joining him at Honeywell during the summer of 1974.[24] The following year saw the release of the MITS Altair 8800 based on the Intel 8080 CPU, and Gates and Allen saw this as the opportunity to start their own computer software company.[25] He had talked this decision over with his parents, who were supportive of him after seeing how much Gates wanted to start a company.

Microsoft

BASIC

After reading the January 1975 issue of Popular Electronics that demonstrated the Altair 8800, Gates contacted Micro Instrumentation and Telemetry Systems (MITS), the creators of the new microcomputer, to inform them that he and others were working on a BASIC interpreter for the platform. In reality, Gates and Allen did not have an Altair and had not written code for it; they merely wanted to gauge MITS's interest. MITS president Ed Roberts agreed to meet them for a demo, and over the course of a few weeks they developed an Altair emulator that ran on a minicomputer, and then the BASIC interpreter. The demonstration, held at MITS's offices in Albuquerque, was a success and resulted in a deal with MITS to distribute the interpreter as Altair BASIC. Paul Allen was hired into MITS, and Gates took a leave of absence from Harvard to work with Allen at MITS in Albuquerque in November 1975. They named their partnership "Micro-Soft" and had their first office located in Albuquerque. Within a year, the hyphen was dropped, and on November 26, 1976, the trade name "Microsoft" was registered with the Office of the Secretary of the State of New Mexico. Gates never returned to Harvard to complete his studies.



Microsoft's BASIC was popular with computer hobbyists, but Gates discovered that a pre-market copy had leaked into the community and was being widely copied and distributed. In February 1976, Gates wrote an Open Letter to Hobbyists in the MITS newsletter saying that MITS could not continue to produce, distribute, and maintain high-quality software without payment. This letter was unpopular with many computer hobbyists, but Gates persisted in his belief that software developers should be able to demand payment. Microsoft became independent of MITS in late 1976, and it continued to develop programming language software for various systems. The company moved from Albuquerque to its new home in Bellevue, Washington on January 1, 1979.

During Microsoft's early years, all employees had broad responsibility for the company's business. Gates oversaw the business details, but continued to write code as well. In the first five years, he personally reviewed every line of code the company shipped, and often rewrote parts of it as he saw fit.



IBM partnership

In 1980, IBM approached Microsoft to write the BASIC interpreter for its upcoming personal computer, the IBM PC. When IBM's representatives mentioned that they needed an operating system, Gates referred them to Digital Research (DRI), makers of the widely used CP/M operating system. IBM's discussions with Digital Research went poorly, and they did not reach a licensing agreement. IBM representative Jack Sams mentioned the licensing difficulties during a subsequent meeting with Gates and told him to get an acceptable operating system. A few weeks later Gates proposed using 86-DOS (QDOS), an operating system similar to CP/M that Tim Paterson of Seattle Computer Products (SCP) had made for hardware similar to the PC. Microsoft made a deal with SCP to become the exclusive licensing agent, and later the full owner, of 86-DOS. After adapting the operating system for the PC, Microsoft delivered it to IBM as PC-DOS in exchange for a one-time fee of $50,000. Gates did not offer to transfer the copyright on the operating system, because he believed that other hardware vendors would clone IBM's system. They did, and the sales of MS-DOS made Microsoft a major player in the industry.



Windows

Gates oversaw Microsoft's company restructuring on June 25, 1981, which re-incorporated the company in Washington and made Gates President of Microsoft and Chairman of the Board. Microsoft launched its first retail version of Microsoft Windows on November 20, 1985, and in August of that year the company struck a deal with IBM to develop a separate operating system called OS/2. Although the two companies successfully developed the first version of the new system, mounting creative differences undermined the partnership. Gates distributed an internal memo on May 16, 1991, announcing that the OS/2 partnership was over and that Microsoft would shift its efforts to Windows NT kernel development.



Management style

From Microsoft's founding in 1975 until 2006, Gates had primary responsibility for the company's product strategy. He aggressively broadened the company's range of products, and wherever Microsoft achieved a dominant position he vigorously defended it.

As an executive, Gates met regularly with Microsoft's senior managers and program managers. Firsthand accounts of these meetings describe him as verbally combative, berating managers for perceived holes in their business strategies or proposals that placed the company's long-term interests at risk. He often interrupted presentations with such comments as, "That's the stupidest thing I've ever heard!" and, "Why don't you just give up your options and join the Peace Corps?" The target of his outburst then had to defend the proposal in detail until, hopefully, Gates was fully convinced. When subordinates appeared to be procrastinating, he was known to remark sarcastically, "I'll do it over the weekend."

Gates's role at Microsoft for most of its history was primarily a management and executive role. However, he was an active software developer in the early years, particularly on the company's programming language products. He has not officially been on a development team since working on the TRS-80 Model 100 line, but wrote code as late as 1989 that shipped in the company's products. On June 15, 2006, Gates announced that he would transition out of his day-to-day role over the next two years to dedicate more time to philanthropy. He divided his responsibilities between two successors, placing Ray Ozzie in charge of day-to-day management and Craig Mundie in charge of long-term product strategy.

Tuesday, October 27, 2009

Wipro Limited is a $5 billion Indian conglomerate. By 2008–09 revenue, Wipro is the second-largest IT company in India. Wipro Ltd has interests spanning information technology, consumer care, lighting, engineering, and healthcare businesses. Azim Premji is the Chairman of the board.

History
Wipro started as a vegetable oil trading company in 1947, operating from an old mill at Amalner, Maharashtra, India, founded by Azim Premji's father. When his father died in 1966, Azim, a graduate in Electrical Engineering from Stanford University, took on the leadership of the company at the age of 21. He repositioned it and transformed Wipro (Western India Vegetable Products Ltd) into a consumer goods company producing hydrogenated cooking oils, laundry soap, wax and tin containers, and later set up Wipro Fluid Power to manufacture hydraulic and pneumatic cylinders in 1975. At that time, it was valued at $2 million[citation needed].

In 1977, when IBM was asked to leave India, Wipro entered the information technology sector. In 1979, Wipro began developing its own computers and in 1981 started selling the finished product. This was the first in a string of products that would make Wipro one of India's first computer makers[2]. The company licensed technology from Sentinel Computers in the United States and began building India's first mini-computers[3]. Wipro hired managers who were computer savvy and strong on business experience.[citation needed]

In 1980 Wipro moved into software development and started developing customized software packages for their hardware customers. This expanded their IT business, and the company subsequently developed the first Indian 8086 chip[4].

From 1992, Wipro began to grow its presence offshore in the United States, and by 2000 Wipro Ltd ADRs were listed on the New York Stock Exchange[5][unreliable source?].

The company's revenue grew by 450% from 2002 to 2007. This success has led to higher salaries (wages have been growing by more than 14% per year since 2005), which puts pressure on the company's margins.
Wipro Technologies deals in the following businesses:
IT Services: Wipro provides a complete range of IT services to organizations. The range of services extends from Enterprise Application Services (CRM, ERP, e-Procurement and SCM) to e-Business solutions. Wipro's enterprise solutions serve a host of industries such as Energy and Utilities, Finance, Telecom, and Media and Entertainment.
Product Engineering Solutions: Wipro is the largest independent provider of R&D services in the world. Using its "Extended Engineering" model to leverage R&D investment and access new knowledge, experience, people, and technical infrastructure across the globe, Wipro enables firms to introduce new products rapidly.
Technology Infrastructure Services: Wipro's Technology Infrastructure Services (TIS) is the largest Indian IT infrastructure service provider in terms of revenue, people and customers, with more than 200 customers in the US, Europe and Japan and over 650 customers in India.
Business Process Outsourcing: Wipro provides business process outsourcing services in areas such as Finance & Accounting, Procurement, HR Services, Loyalty Services and Knowledge Services. In 2002, Wipro acquired Spectramind and became one of the largest BPO service players.
Consulting Services: Wipro offers services in Business Consulting, Process Consulting, Quality Consulting, and Technology Consulting.

Wipro's Sister Concerns (Business Units)
Wipro Infrastructure Engineering: It has emerged as the leader in the hydraulic cylinders and truck tipping systems market in India[citation needed].
Wipro Infotech: It is one of the leading manufacturers of computer hardware[citation needed] and a provider of systems integration and infrastructure services in India. Wipro Infotech is more focused on manpower services and is not at the cutting edge of technology[citation needed].
Wipro Lighting: It manufactures and markets the Wipro brand of luminaries. Wipro Lighting offers lighting solutions across various application areas such as commercial lighting for modern work spaces, manufacturing and pharmaceutical companies, designer petrol pumps and outdoor architecture.

Timeline
1945 - Incorporation as Western India Vegetable Products Limited[8]
1947 - Establishment of an oil mill at Amalner, Maharashtra, India
1960 - Manufacture of laundry soap 787 at Amalner
1970 - Manufacture of Bakery Shortening Vanaspati at Amalner
1975 - Diversification into engineering and manufacture of hydraulic cylinders as WINTROL (now called Wipro Fluid Power) division in Bangalore.
1977 - Name of the Company changed to Wipro Products Limited
1980 - Diversification into Information Technology
1990 - Incorporation of Wipro-GE medical systems
1992 - Going global with global IT services division
1993 - Business innovation award for offshore development[citation needed]
1995 - Wipro gets ISO 9001 quality certification[9], re-certified twice for mature processes
1997 - Wipro gets SEI CMM level 3 certification, enterprise wide processes defined[citation needed]
Start of the Six Sigma initiative, defects prevention practices initiated at project level.[10]
1998 - Wipro first software services company in the world to get SEI CMM level 5[4]
1999 - Wipro's market capitalization is the highest in India[citation needed]
2000 - Wipro listed on the New York Stock Exchange.[11]
2001 - First Indian company to achieve the "TL9000 certification" for industry specific quality standards[citation needed].
Wipro acquires American Management Systems’ global energy practice.[citation needed]
Becomes world's first PCMM Level 5 company.[12]
Premji established Azim Premji Foundation, a not-for-profit organization for elementary education.
Wipro becomes only Indian company featured in Business Week’s 100 best-performing technology companies.[citation needed]
2002
World’s first CMMi ver 1.1 Level 5 company.[13]
Wipro acquires Spectramind.[9]
Ranked the 7th software services company in the world by BusinessWeek (Infotech 100, November 2002)[verification needed].
2003
Wipro acquires Nervewire.[9]
Wipro Technologies Wins Prestigious IEEE Award for Software Process Excellence[citation needed]
Wipro Technologies awarded prestigious ITSMA award for services marketing excellence[citation needed]
Wipro wins the 2003 Asian Most Admired Knowledge Enterprise Award.[citation needed]
2004
Crossed the $1 Billion mark in annualized revenues.[citation needed]
Wipro launches India’s first RFID enabled apparel store[citation needed].
Wipro Technologies named Asian Most Admired Knowledge Enterprise second year in a row[citation needed].
IDC rates Wipro as the leader among worldwide offshore service providers[14]
2005 - Wipro acquires mPower to enter payments space[clarification needed] and also acquires European System on Chip (SoC) design firm NewLogic
2006 - Wipro acquires Enabler to enter Niche Retail market
2008 - Wipro acquires Gallagher Financial Systems[citation needed] to enter mortgage loan origination space.
2009 - Wipro stops Connectivity IP and closes NewLogic Sophia-Antipolis R&D center.

Wednesday, September 9, 2009

The University of Oxford (informally Oxford University, or simply Oxford), located in the City of Oxford, Oxfordshire, United Kingdom, is the oldest university in the English-speaking world. It is also regarded as the best university in the UK according to many of the most recent league tables of British universities. The name is sometimes abbreviated as Oxon. in post-nominals (from the Latin Oxoniensis), although Oxf is sometimes used in official publications. The University has 38 independent colleges and six permanent private halls.

Although the exact date of foundation remains unclear, there is evidence of teaching there as far back as the 11th century. The university grew rapidly from 1167, after English students were banned from attending the University of Paris. After a dispute between students and townsfolk broke out in 1209, some of the academics at Oxford fled north-east to the town of Cambridge, where the University of Cambridge was founded. The two universities (collectively known as "Oxbridge") have since had a long history of competition with each other.

The University of Oxford is a member of the Russell Group of research-led British universities, the G5 Group, the Coimbra Group, the League of European Research Universities, International Alliance of Research Universities and is also a core member of the Europaeum. Academically, Oxford is ranked in the world's top 10 universities.For more than a century, it has served as the home of the Rhodes Scholarship, which brings highly accomplished students from a number of countries to study at Oxford as postgraduates.

History

The University of Oxford does not have a clear date of foundation. Teaching at Oxford existed in some form in 1096.

The expulsion of foreigners from the University of Paris in 1167 caused many English scholars to return from France and settle in Oxford. The historian Gerald of Wales lectured to the scholars in 1188, and the first known foreign scholar, Emo of Friesland, arrived in 1190. The head of the University was named a chancellor from 1201, and the masters were recognised as a universitas or corporation in 1231. The students associated together, on the basis of geographical origins, into two “nations”, representing the North (including the Scots) and the South (including the Irish and the Welsh). In later centuries, geographical origins continued to influence many students' affiliations when membership of a college or hall became customary in Oxford. Members of many religious orders, including Dominicans, Franciscans, Carmelites, and Augustinians, settled in Oxford in the mid-13th century, gained influence, and maintained houses for students. At about the same time, private benefactors established colleges to serve as self-contained scholarly communities. Among the earliest were John I de Balliol, father of the future King of Scots; Balliol College bears his name. Another founder, Walter de Merton, a chancellor of England and afterwards Bishop of Rochester, devised a series of regulations for college life; Merton College thereby became the model for such establishments at Oxford as well as at the University of Cambridge. Thereafter, an increasing number of students forsook living in halls and religious houses in favour of living at colleges.

The new learning of the Renaissance greatly influenced Oxford from the late 15th century onward. Among University scholars of the period were William Grocyn, who contributed to the revival of the Greek language, and John Colet, the noted biblical scholar. With the Reformation and the breaking of ties with the Roman Catholic Church, the method of teaching at the university was transformed from the medieval Scholastic method to Renaissance education, although institutions associated with the university suffered loss of land and revenues. In 1636, Chancellor William Laud, archbishop of Canterbury, codified the university statutes; these to a large extent remained the university's governing regulations until the mid-19th century. Laud was also responsible for the granting of a charter securing privileges for the university press, and he made significant contributions to the Bodleian Library, the main library of the university.


The university was a centre of the Royalist Party during the English Civil War (1642–1649), while the town favoured the opposing Parliamentarian cause. Soldier-statesman Oliver Cromwell, chancellor of the university from 1650 to 1657, was responsible for preventing both Oxford and Cambridge from being closed down by the Puritans, who viewed university education as dangerous to religious beliefs. From the mid-18th century onward, however, the University of Oxford took little part in political conflicts.

The mid-nineteenth century saw the aftermath of the Oxford Movement (1833–1845), led among others by the future Cardinal Newman. The influence of the reformed German university reached Oxford via key scholars such as Jowett and Max Müller.

Administrative reforms during the 19th century included the replacement of oral examinations with written entrance tests, greater tolerance for religious dissent, and the establishment of four colleges for women. Women have been eligible to be full members of the university and have been entitled to take degrees since 7 October 1920. Twentieth-century Privy Council decisions (such as the abolition of compulsory daily worship, dissociation of the Regius Professorship of Hebrew from clerical status, and diversion of theological bequests to colleges to other purposes) loosened the link with traditional belief and practice. Although the University's emphasis traditionally had been on classical knowledge, its curriculum expanded in the course of the 19th century and now attaches equal importance to scientific and medical studies.

Monday, August 31, 2009

A network switch is a computer networking device that connects network segments. The term commonly refers to a network bridge that processes and routes data at the Data Link layer (layer 2) of the OSI model. Switches that additionally process data at the Network layer (layer 3 and above) are often referred to as Layer 3 switches or multilayer switches. The term network switch does not generally encompass unintelligent or passive network devices such as hubs and repeaters.
The first Ethernet switch was introduced by Kalpana in 1990.

Function
The network switch, or packet switch (or just switch), plays an integral part in most Ethernet local area networks (LANs). Mid-to-large sized LANs contain a number of linked managed switches. Small office/home office (SOHO) applications typically use a single switch, or an all-purpose converged device such as a residential gateway providing access to small office/home office broadband services such as a DSL router or cable Wi-Fi router. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology, as in the Linksys 8-port and 48-port devices. User devices may also include a telephone interface for VoIP.
In the context of a standard 10/100 Ethernet switch, a switch operates at the data-link layer of the OSI model to create a separate collision domain for each switch port. If four computers A, B, C, and D are attached to four switch ports, then A and B can transfer data between themselves at the same time as C and D, and the two conversations will never interfere with each other. With a hub, by contrast, all ports would share the bandwidth, run in half duplex, and suffer collisions and retransmissions. Using a switch in this way is called micro-segmentation. It gives each computer dedicated bandwidth on a point-to-point connection and therefore allows it to run in full duplex with no collisions.
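
To make the per-port forwarding behaviour concrete, here is a minimal Python sketch of the MAC-learning logic a layer-2 switch applies to each frame; the class name, method names, and MAC addresses are illustrative only, not taken from any vendor's implementation.

class LearningSwitch:
    """Toy layer-2 switch: learns which MAC address sits behind which port and
    forwards each frame only to that port, flooding while the destination is
    still unknown. This per-port isolation is the micro-segmentation described above."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # learned mapping: MAC address -> port number

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC is reachable via the port the frame arrived on.
        self.mac_table[src_mac] = in_port
        # Forward: use the learned port if known, otherwise flood to all other ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [port for port in range(self.num_ports) if port != in_port]

# Example: A (port 0) and B (port 1) converse while C (port 2) and D (port 3) do the same;
# once the table is populated, the two conversations never share a port.
switch = LearningSwitch(4)
print(switch.handle_frame(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # flooded: [1, 2, 3]
print(switch.handle_frame(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned: [0]

A real switch also ages out table entries and handles broadcast and multicast addresses, but the learn-then-forward loop above is the core of how collision domains are kept separate per port.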

Role of switches in networks
Network switch is a marketing term rather than a technical one.[citation needed] Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch, although use of the term is diminishing.[citation needed]
In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While Layer 2 functionality is adequate for speed-shifting within one technology, interconnecting technologies such as Ethernet and token ring are easier at Layer 3.
Interconnection of different Layer 3 networks is done by routers. If there are any features that characterize "Layer-3 switches" as opposed to general-purpose routers, it tends to be that they are optimized, in larger switches, for high-density Ethernet connectivity.
In some service provider and other environments where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall, network intrusion detection, and performance analysis modules that can plug into switch ports. Some of these functions may be on combined modules.
In other cases, the switch is used to create a mirror image of data that can go to an external device. Since most switch port mirroring provides only one mirrored stream, network hubs can be useful for fanning out data to several read-only analyzers, such as intrusion detection systems and packet sniffers.

Friday, August 14, 2009

Network security consists of the provisions made in an underlying computer network infrastructure, the policies adopted by the network administrator to protect the network and the network-accessible resources from unauthorized access, and the consistent and continuous monitoring and measurement of their effectiveness (or lack thereof).

Comparison with information security
The terms network security and information security are often used interchangeably; however, network security is generally taken as providing protection at the boundaries of an organization, keeping intruders (e.g. black hat hackers, script kiddies, etc.) out. Network security systems today are mostly effective, so the focus has shifted to protecting resources from attack or from simple mistakes by people inside the organization, e.g. with Data Loss Prevention (DLP). One response to this insider threat in network security is to compartmentalize large networks, so that an employee would have to cross an internal boundary and be authenticated when trying to access privileged information. Information security is explicitly concerned with all aspects of protecting information resources, including network security and DLP.

Network security concepts
Network security starts with authenticating the user, commonly with a username and a password (one-factor authentication: something you know). With two-factor authentication, something you have is also used (e.g. a security token or 'dongle', an ATM card, or a mobile phone); with three-factor authentication, something you are is also used (e.g. a fingerprint or retinal scan). Once the user is authenticated, a stateful firewall enforces access policies, such as which services the network users are allowed to access.[1] Though effective at preventing unauthorized access, this component fails to check potentially harmful content such as computer worms being transmitted over the network. An intrusion prevention system (IPS)[2] helps detect and inhibit the action of such malware. An anomaly-based intrusion detection system also monitors network traffic for suspicious content, unexpected traffic and other anomalies to protect the network, e.g. from denial-of-service attacks or an employee accessing files at strange times. Communication between two hosts using the network could be encrypted to maintain privacy. Individual events occurring on the network could be tracked for audit purposes and for later high-level analysis.
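
As a toy illustration of the authentication factors just described, the following Python sketch checks a stored password hash ("something you know") together with a time-based one-time code from a token ("something you have"); the function names, record fields, and parameters are assumptions made for this sketch, not any particular product's API.

import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Simplified time-based one-time password: HMAC-SHA1 over a 30-second counter."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def authenticate(username, password, submitted_code, user_db):
    """Two-factor check: salted password hash plus a matching one-time code."""
    record = user_db.get(username)  # hypothetical per-user record for this sketch
    if record is None:
        return False
    # Factor 1 (something you know): compare hashes, never plaintext passwords.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 100000)
    if not hmac.compare_digest(digest, record["password_hash"]):
        return False
    # Factor 2 (something you have): the code generated by the user's token device.
    return hmac.compare_digest(submitted_code, totp(record["totp_secret"]))

A production system would add rate limiting, protection against code reuse, and a small allowance for clock skew, but the shape of the check (verify each factor independently and reject if any fails) is the same.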
Honeypots, essentially decoy network-accessible resources, could be deployed in a network as surveillance and early-warning tools. Techniques used by the attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis could be used to further tighten security of the actual network being protected by the honeypot.
A useful summary of standard concepts and methods in network security is given in the form of an extensible ontology of network security attacks.

Security management
Security management for networks differs for different kinds of situations. A small home or an office may only require basic security, while large businesses may require high-maintenance, advanced software and hardware to prevent malicious attacks such as hacking and spamming.

Small homes
A basic firewall like COMODO Internet Security or a unified threat management system.
For Windows users, basic antivirus software like AVG Antivirus, ESET NOD32 Antivirus, Kaspersky, McAfee, or Norton AntiVirus. An anti-spyware program such as Windows Defender or Spybot would also be a good idea. There are many other types of antivirus or antispyware programs out there to be considered.
When using a wireless connection, use a robust password. Also try and use the strongest security supported by your wireless devices, such as WPA or WPA2.
If using wireless, change the default SSID network name and disable SSID broadcast, as this function is unnecessary for home use.
Enable MAC Address filtering to keep track of all home network MAC devices connecting to your router.
Assign STATIC IP addresses to network devices.
Disable ICMP ping on router.
Review Router or Firewall logs to help identify abnormal network connections or traffic to the Internet.
Use passwords for all accounts.
Have multiple accounts per family member, using non-administrative accounts for day-to-day activities. Disable the guest account (Control Panel> Administrative Tools> Computer Management> Users).
Raise awareness about information security to children.

Medium businesses
A fairly strong firewall or Unified Threat Management System
Strong Antivirus software and Internet Security Software.
For authentication, use strong passwords and change them on a bi-weekly/monthly basis.
When using a wireless connection, use a robust password.
Raise awareness about physical security to employees.
Use an optional network analyzer or network monitor.
An enlightened administrator or manager.

Large businesses
A strong firewall and proxy to keep unwanted people out.
A strong Antivirus software package and Internet Security Software package.
For authentication, use strong passwords and change them on a weekly/bi-weekly basis.
When using a wireless connection, use a robust password.
Exercise physical security precautions to employees.
Prepare a network analyzer or network monitor and use it when needed.
Implement physical security management like closed circuit television for entry areas and restricted zones.
Security fencing to mark the company's perimeter.
Fire extinguishers for fire-sensitive areas like server rooms and security rooms.
Security guards can help to maximize security.

School
An adjustable firewall and proxy to allow authorized users access from the outside and inside.
Strong Antivirus software and Internet Security Software packages.
Wireless connections that lead to firewalls.
Children's Internet Protection Act compliance.
Supervision of network to guarantee updates and changes based on popular site usage.
Constant supervision by teachers, librarians, and administrators to guarantee protection against attacks by both internet and sneakernet sources.

Large Government
A strong firewall and proxy to keep unwanted people out.
Strong Antivirus software and Internet Security Software suites.
Strong encryption.
Whitelist authorized wireless connection, block all else.
All network hardware is in secure zones.
All hosts should be on a private network that is invisible from the outside.
Put all servers in a DMZ, or a firewall from the outside and from the inside.
Security fencing to mark perimeter and set wireless range to this.

Wednesday, July 29, 2009

The United States Army is the branch of the United States armed forces responsible for land-based military operations. It is the largest and oldest established branch of the U.S. military and is one of seven uniformed services. The modern Army has its roots in the Continental Army which was formed on 14 June 1775, before the establishment of the United States, to meet the demands of the American Revolutionary War. Congress created the United States Army on 14 June 1784 after the end of the war to replace the disbanded Continental Army. The Army considers itself to be descended from the Continental Army and thus dates its inception from the origins of that force.
The primary mission of the Army is to "provide necessary forces and capabilities ... in support of the National Security and Defense Strategies." Control and operation is administered by the Department of the Army, one of the three service departments of the Department of Defense. The civilian head is the Secretary of the Army, and the highest-ranking military officer in the department is the Chief of Staff, unless the Chairman of the Joint Chiefs of Staff or Vice Chairman of the Joint Chiefs of Staff is an Army officer. The Regular Army reported a strength of 539,675 soldiers; the Army National Guard (ARNG) reported 360,351 and the United States Army Reserve (USAR) reported 197,024, putting the combined component strength at a total of 1,097,050 soldiers (fiscal year 2008).

Origins
The Continental Army was created on 14 June 1775 by the Continental Congress as a unified army for the states to fight Great Britain, with George Washington appointed as its commander. The Army was initially led by men who had served in the British Army or colonial militias and who brought much of the British military heritage with them. As the Revolutionary War progressed, French aid, resources, and military thinking influenced the new army, while Prussian assistance and instructors also had a strong influence.
George Washington made use of the Fabian strategy and used hit-and-run tactics, hitting where the enemy was weakest, to wear down the British forces and their Hessian mercenary allies. Washington led victories against the British at Trenton and Princeton, and then turned south. With a decisive victory at Yorktown, and the help of the French, the Spanish and the Dutch, the Continental Army prevailed against the British, and with the Treaty of Paris, the independence of the United States was acknowledged.
After the war, though, the Continental Army was quickly disbanded as part of the American distrust of standing armies, and irregular state militias became the new nation's sole ground army, with the exception of a regiment to guard the Western Frontier and one battery of artillery guarding West Point's arsenal. However, because of continuing conflict with Native Americans, it was soon realized that it was necessary to field a trained standing army. The first of these, the Legion of the United States, was established in 1791.
19th century
The War of 1812 (1812-1815), the second and last American war against the British, was less successful than the Revolution had been. An invasion of Canada failed, and U.S. troops were unable to stop the British from burning the new capital of Washington, D.C. However, the Regular Army, under Generals Winfield Scott and Jacob Brown, proved it was professional and capable of defeating a British army in the Niagara campaign of 1814. Two weeks after a treaty was signed, though, Andrew Jackson defeated the British invasion of New Orleans. However, this had little effect, as the treaty returned both sides to the status quo.
Between 1815 and 1860, a spirit of Manifest Destiny struck the United States, and as settlers moved west the U.S. Army engaged in a long series of skirmishes and battles with the Native Americans whom those settlers were displacing. The U.S. Army also fought the short Mexican–American War, a victory for the United States that resulted in territory which became all or parts of the states of California, Nevada, Utah, Colorado, Arizona, Wyoming and New Mexico.
The Civil War (1861-1865) was the most costly war for the United States. After most states in the South seceded to form the Confederate States of America, CSA troops opened fire on the Union-held Fort Sumter in Charleston, South Carolina, starting the war. For the first two years Confederate forces solidly defeated the U.S. Army, but after the decisive battles of Gettysburg in the east and Vicksburg in the west, Union troops, backed by superior industrial might and numbers, fought a brutal campaign through Confederate territory, and the war ended with a Confederate surrender at Appomattox Court House in April 1865. Based on 1860 census figures, 8% of all white males aged 13 to 43 died in the war, including 6% in the North and an extraordinary 18% in the South.
Following the Civil War, the U.S. Army fought a long series of engagements with Native Americans, who resisted U.S. expansion into the center of the continent. But by the 1890s the U.S. saw itself as a potential international player. U.S. victories in the Spanish-American War (1898) and the controversial and less well-known Philippine-American War (1899-1913), as well as U.S. intervention in Latin America and the Boxer Rebellion, gained America more territory and influence abroad.

20th century
The United States joined World War I (1914-1918) in 1917 on the side of Russia, Britain and France. U.S. troops were sent to the front and were involved in the push that finally broke through the German lines. With the armistice on 11 November 1918, the Army once again decreased its forces.
The U.S. joined World War II after the Japanese attack on Pearl Harbor on 7 December 1941. On the European front, U.S. Army troops formed a significant portion of the forces that captured North Africa and Sicily. On D-Day and in the subsequent liberation of Europe and defeat of Germany, millions of U.S. Army troops played a central role. In the Pacific, Army soldiers participated alongside U.S. Marines in the "island hopping" campaign that wrested the Pacific Islands from Japanese control. Following the Axis surrenders in May (Germany) and September (Japan) of 1945, Army troops were deployed to Japan and Germany to occupy the two defeated nations. Two years after World War II, the Army Air Forces separated from the Army to become the United States Air Force on 18 September 1947, after decades of efforts to become an independent service. The Army was also desegregated beginning in 1948.
However, the end of World War II set the stage for the East-West confrontation known as the Cold War (late 1940s to late 1980s/early 1990s). With the outbreak of the Korean War, concerns over the defense of Western Europe rose. Two corps, V and VII, were reactivated under Seventh United States Army in 1950, and American strength in Europe rose from one division to four. Hundreds of thousands of U.S. troops remained stationed in West Germany, with others in Belgium, the Netherlands and the United Kingdom, until the 1990s in anticipation of a possible Soviet attack.
During the Cold War, American troops and their allies fought Communist forces in Korea and Vietnam (see Domino Theory). The Korean War began in 1950, when the Soviets walked out of a U.N. Security Council meeting, removing their possible veto. Under a United Nations umbrella, hundreds of thousands of U.S. troops fought to prevent the takeover of South Korea by North Korea and later invaded the northern nation. After repeated advances and retreats by both sides, and the People's Republic of China's entry into the war, a cease-fire returned the peninsula to the status quo in 1953.
The Vietnam War is often regarded as a low point in the Army's record due to the use of drafted personnel, the unpopularity of the war with the American public, and frustrating restrictions placed on the Army by U.S. political leaders (for example, no invasion of communist-held North Vietnam). While American forces had been stationed in the Republic of Vietnam since 1959 in intelligence and advisory/training roles, they did not deploy in large numbers until 1965, after the Gulf of Tonkin Incident. American forces effectively established and maintained control of the "traditional" battlefield; however, they struggled to counter the guerrilla hit-and-run tactics of the communist Viet Cong and the North Vietnamese Army. On a tactical level, American soldiers (and the U.S. military as a whole) never lost a sizable battle. For instance, in the Tet Offensive in 1968, the U.S. Army turned a large-scale attack by communist forces into a massive battlefield defeat of the Viet Cong (though at the time the offensive sapped the political will of the American public), which permanently weakened the guerrilla force; thereafter, most large-scale engagements were fought with the regular North Vietnamese Army. In 1973 domestic political opposition to the war finally forced a U.S. withdrawal. In 1975, Vietnam was unified under a communist government.
The Total Force Policy was adopted by Chief of Staff of the Army General Creighton Abrams in the aftermath of the Vietnam War and involved treating the three components of the Army - the Regular Army, the Army National Guard, and the Army Reserve - as a single force. Believing that no U.S. president should be able to take the United States (and more specifically the U.S. Army) to war without the support of the American people, General Abrams intertwined the structure of the three components in such a way as to make extended operations impossible without the involvement of both the Army National Guard and the Army Reserve.
The 1980s was mostly a decade of reorganization. The Army converted to an all-volunteer force with greater emphasis on training and technology. The Goldwater-Nichols Act of 1986 created Unified Combatant Commands bringing the Army together with the other four armed forces under unified, geographically organized command structures. The Army also played a role in the invasions of Grenada in 1983 (Operation Urgent Fury) and Panama in 1989 (Operation Just Cause).
By 1991 Germany was reunited and the Soviet Union was near collapse. The Cold War was, effectively, over. In 1990 Iraq invaded its smaller neighbor, Kuwait. In January 1991 Operation Desert Storm commenced, a U.S.-led coalition which deployed over 500,000 troops, the bulk of them from U.S. Army formations, to drive out Iraqi forces. The campaign ended in total victory for the Army, as Western coalition forces routed an Iraqi Army organized along Soviet lines in just one hundred hours.
After Desert Storm, the Army did not see major combat operations for the remainder of the 1990s. Army units did participate in a number of peacekeeping activities, such as the UN peacekeeping mission in Somalia in 1993, where the abortive Operation Gothic Serpent led to the deaths of eighteen American soldiers and the withdrawal of international forces. The Army also contributed troops to a NATO peacekeeping force in the former Yugoslavia in the middle of the decade.

21st century
After the September 11 attacks, and as part of the Global War on Terror, U.S. and NATO forces (Army, Navy, Air Force, Marine, and Special Operations) invaded Afghanistan in 2001, replacing the Taliban government.
The Army took part in the combined U.S. and allied invasion of Iraq in 2003. In the following years the mission changed from conflict between regular militaries to counterinsurgency, with large numbers of suicide attacks resulting in the deaths of more than 4,000 U.S. servicemen (as of March 2008) and injuries to thousands more.[7] The lack of stability in the theater of operations has led to longer deployments for Regular Army as well as Reserve and Guard troops.