Environment and Consumption






If the life-supporting ecosystems of the planet are to survive for future generations, the consumer society will have to dramatically curtail its use of resources: partly by shifting to high-quality, low-input durable goods and partly by seeking fulfillment through leisure, human relationships, and other nonmaterial avenues.

—Alan Durning, How Much Is Enough?

A man is rich in proportion to the things he can afford to let alone.

—Henry David Thoreau, Walden

The first sweetened cup of hot tea to be drunk by an English worker was a significant historical event, because it prefigured the transformation of an entire society, a total remaking of its economic and social basis. We must struggle to understand fully the consequences of that and kindred events, for upon them was erected an entirely different conception of the relationship between producers and consumers, of the meaning of work, of the definition of self, of the nature of things.

—Sidney Mintz, Sweetness and Power

All animals alter their environments as a condition of their existence. Human beings, in addition, alter their environments as a condition of their cultures, that is, by the way they choose to obtain food, produce tools and products, and construct and arrange shelters. But culture, an essential part of human adaptation, can also threaten human existence when short-term goals lead to long-term consequences that are harmful to human life. Swidden agriculture alters the environment, but not as much as irrigation agriculture and certainly not as much as modern agriculture with its use of chemical fertilizers, pesticides, and herbicides. Domesticated animals alter environments, but keeping a few cattle for farm work or cows for dairy products does far less damage than maintaining herds of thousands to supply a meat-centered diet.

The degree to which environments are altered and damaged is determined in part by population and in part by the technology in use. Obviously, the more people in a given area, the more potential there is for environmental disruption. Tractors and bulldozers alter the environment more than hoes or plows. But the greatest factor in environmental alteration (in the use of raw materials, the use of nonhuman energy, and the production of waste) is consumption. Because of our level of consumption, the average American child will do twice the environmental damage of a Swedish child, three times that of an Italian child, thirteen times that of a Brazilian child, thirty-five times that of an Indian child, and 280 times that of a Chadian or Haitian child (Kennedy 1993:32).

The United States alone uses 25 percent of the world's energy and accounts for 25 percent of the world's carbon emissions that are responsible for global warming (see Table 7.1 on p. 196). The United States and Canada have by far the highest per capita rates of energy usage and carbon emissions of any countries on earth. They are also the countries most resistant to implementing the Kyoto Protocol, an international agreement negotiated in 1997 to reduce global carbon emissions below 1990 levels.

Mathis Wackernagel and William Rees (1996) estimate that four to six hectares of land are required to maintain the consumption level of the average person from a high-consumption country. The problem is that in 1990, worldwide there were only 1.7 hectares of ecologically productive land for each person. They conclude that the deficit is made up in core countries by drawing down the natural resources of their own countries and expropriating the resources, through trade, of peripheral countries. In other words, someone has to pay for our consumption levels, and it will be either our children or inhabitants of the periphery of the world system.
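Wackernagel and Rees's footprint arithmetic can be restated in a few lines. The hectare figures come from the passage above; the per-person "deficit" is an illustrative back-of-the-envelope calculation, not a value taken from their study.

```python
# Ecological-footprint arithmetic from the Wackernagel and Rees (1996)
# figures cited above. The "deficit" (hectares overdrawn per person) is
# an illustrative calculation, not a number from their study.

required_ha_low, required_ha_high = 4.0, 6.0  # hectares per high-consumption person
available_ha = 1.7                            # productive hectares per person, 1990

deficit_low = round(required_ha_low - available_ha, 1)
deficit_high = round(required_ha_high - available_ha, 1)

print(f"Each high-consumption person overdraws {deficit_low}-{deficit_high} hectares")
```

On these figures, every person living at core-country consumption levels draws on roughly 2.3 to 4.3 hectares beyond the world's per capita share, which is the deficit the authors argue is covered by the periphery.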

Our consumption of goods obviously is a function of our culture. Only by producing and selling things and services does capitalism in its present form work, and the more that is produced and purchased, the more progress and prosperity we have. The single most important measure of economic growth is, after all, the gross national product (GNP), the sum total of goods and services produced by a given society in a given year. It is a measure of the success of a consumer society, obviously, to consume.

The production, processing, and consumption of commodities, however, require the extraction and use of natural resources (wood, ore, fossil fuels, and water) and require the creation of factories and factory complexes, which create toxic byproducts, while the use of commodities themselves (e.g., automobiles) creates pollutants and waste. Yet of the three factors to which environmentalists often point as responsible for environmental pollution (population, technology, and consumption), consumption seems to get the least attention. One reason, no doubt, is that it may be the most difficult to change; our consumption patterns are so much a part of our lives that to change them would require a massive cultural overhaul, not to mention severe economic dislocation. A drop in demand for products, as economists note, brings on economic recession or even depression, along with massive unemployment.

The maintenance of perpetual growth and the cycle of production and consumption essential in the culture of capitalism do not bode well for the environment. At the beginning of Chapter 1 we mentioned that the consumer revolution of the late nineteenth and early twentieth centuries was caused in large part by a crisis in production; new technologies had resulted in the production of more goods, but there were not enough people or money to buy them. Since production is such an essential part of the culture of capitalism, society quickly adapted to this crisis by convincing people to buy things, by altering basic institutions, and even by generating a new ideology of pleasure. The economic crisis of the late nineteenth century was solved, but at considerable expense to the environment in the additional waste that was created and the resources that were consumed. At that time the world's population was about 1.6 billion, and those caught up in the consumer frenzy were a fraction of that total.

The global economy today faces the same problem it faced one hundred years ago, except that the world population has almost quadrupled. Consequently it is even more important to understand how the interaction between capital, labor, and consumption in the culture of capitalism creates an overproduction of commodities and how this relates to environmental pollution. To illustrate, let's take a quick look at the present state of the global automobile industry.

In capitalism, competition between companies for world markets requires that they constantly develop new and improved ways to produce things and lower costs. In some industries, such as textiles, as we saw in Chapter 2, competition requires seeking cheaper sources of labor; in others, such as the automobile industry, it means creating new technologies that replace people with machines to lower labor costs. Twenty years ago the production of one automobile took hundreds of hours of human labor. Today a Lexus LS 400 requires only 18.4 hours of human labor, Ford Motor Company produces several cars with 20.0 hours of human labor, and General Motors lags behind at about 24.8 hours per car (Greider 1997:110-112).

In addition to reducing the number of jobs available to people, advanced productive technology creates the potential for producing ever more cars, regardless of whether there are people who want to buy them. In 1995 the automobile industry produced over 50 million automobiles, but there was a market for only 40 million. What can companies do? Obviously they can begin to close plants or cut back on production, which some do. In the 1980s some 180,000 American auto workers lost their jobs because of cutbacks and factory shutdowns. But the producers, of course, hope the problem of selling this surplus is someone else's problem, so they continue to produce cars.

From the perspective of the automobile companies and their workers, the preferred solution to overproduction is to create a greater demand for automobiles. This is difficult in core countries, where the market is already saturated with cars. In the United States, for example, there is one car for every 1.3 persons. However, there are places in the world where there are few cars. In China, for example, there is only one car for every 125 people, in India only one for every 142.9, and in Bangladesh only one for every 1,000 (see Table 7.1). Imagine the environmental impact if the consumption rate of automobiles in China alone, with a population of well over a billion people, even began to approach the consumption rate in the United States.

But that is exactly the goal of automotive manufacturers and the nation-states that operate to help them build and sell their products. Not only would automobile makers in the core like to enter the Chinese market, the Chinese themselves plan to build an automobile industry as large as that of the United States, to produce cars for their own market and to compete in other markets as well. If China (or India, Indonesia, Brazil, or most of the rest of the periphery) even approached the consumption rate of automobiles common in the core, the increased environmental pollution would be staggering. There would be not only massive increases in hydrocarbon pollution but also vastly increased demands for raw materials, especially oil. And the overproduction dilemma is not unique to automobiles: the steel, aircraft, chemical, computer, consumer electronics, drug, and tire industries, among others, face the same dilemma.

The environmental problem could be alleviated if consumers simply said 'enough is enough' and stopped consuming as much as they do. Indeed, there have been a number of social movements to convince people to consume less. But, as previously noted, any reduction of consumption would likely cause severe economic disruption. Furthermore, few are aware of how large a reduction would have to be to effect a change. A study by Friends of the Earth Netherlands asked what the consumption levels of the average Dutch person would have to be in the year 2010 if consumption levels over the world were equal and if resource consumption were sustainable. They found that consumption levels would have to be reduced dramatically. For example, to reduce global warming by the year 2010, people in the Netherlands would have to cut carbon dioxide emissions by two-thirds; to accomplish that, a Dutch person would have to limit the use of carbon-based fuel to one liter per day, thus limiting travel to 15.5 miles per day by car, 31 miles per day by bus, 40 miles per day by train, or 6.2 miles per day by plane. A trip from Amsterdam to Rio de Janeiro could be made only once every twenty years (see Korten 1995:34).
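The one-liter daily fuel budget implies a fuel intensity for each mode of travel, which can be back-calculated from the figures in the passage. A brief sketch (the liters-per-mile values are derived here for illustration; they do not appear in the study itself):

```python
# Fuel intensity implied by the Friends of the Earth Netherlands figures:
# one liter of carbon-based fuel per person per day, and the resulting
# daily travel allowance for each mode. The liters-per-mile values are
# back-calculated for illustration; they are not from the study.

daily_fuel_liters = 1.0
miles_per_day = {"car": 15.5, "bus": 31.0, "train": 40.0, "plane": 6.2}

liters_per_mile = {mode: round(daily_fuel_liters / miles, 3)
                   for mode, miles in miles_per_day.items()}

for mode, lpm in sorted(liters_per_mile.items(), key=lambda kv: kv[1]):
    print(f"{mode:>5}: {lpm} liters per passenger-mile")
```

Sorted this way, the figures make the study's logic visible: the train stretches the fuel budget furthest, the plane exhausts it fastest, which is why the same one-liter ration buys 40 miles by rail but only 6.2 by air.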

Thus significantly reducing our consumption patterns is no easy task. Consumption is as much a part of our culture as horse raiding and buffalo hunting were part of Plains Indian culture; it is a central element. Consequently there is no way to appreciate the problem of environmental destruction without understanding how people are turned into consumers, how luxuries are turned into necessities. That is, why do people choose to consume what they do, how they do, and when they do?

Take sugar, for example. In 1997, each American consumed sixty-six pounds of sugar in his or her soft drinks, tea, coffee, cocoa, pastries, breads, and other foods (USDA 2000). In addition, Americans consume almost two hundred grams, or fifty-three teaspoonfuls, of caloric sweeteners each day (Gardner and Halweil 2000:31). Why? Liking the taste might be one answer. In fact, a predilection for sweets may be part of our biological makeup. But that doesn't explain why we consume it in the form of cane and beet sugar and in the quantities we do. Then there is meat. Modern livestock production is one of the most environmentally damaging and wasteful forms of food production the world has known. Yet Americans eat more meat per capita than all but a few other peoples. Some environmentalists argue that we can change our destructive consumption patterns, if we desire. But is our pattern of consumption only a matter of taste and of choice, or is it so deeply embedded in our culture as to be virtually impervious to change?

To begin to answer this question, we shall examine the history of sugar and beef, commodities that figure largely in our lives but involve environmental degradation. The combination of sugar and beef is appropriate for a number of reasons:

1. The production and processing of both degrade the environment; furthermore, the history of sugar production parallels that of a number of other things we consume, including coffee, tea, cocoa, and tobacco, that collectively have significant environmental effects.

2. Neither is terribly good for us, at least not in the quantities and form in which we consume them.

3. Both have histories that closely tie them to the growth and emergence of the capitalist world economy. They are powerful symbols of the rise and economic expansion of capitalism; indeed, they are a result and a reason for it.

4. With the rise of the fast-food industry, beef and sugar, fat and sucrose have become the foundations of the American diet, accounting for more than one-half of the caloric intake of North Americans and Europeans (Gardner and Halweil 2000:15). Indeed, they are foundation foods of the culture of capitalism, symbolized in the hamburger and Coke, the hot dog and soda, and the fat-and-sucrose dessert: ice cream.

The Case of Sugar

The history of sugar reveals how private economic interests, along with the economic policies of the nation-state and changes in the structure of society, combined to convert a commodity from a luxury good believed to have health benefits into a necessity with overall harmful health consequences. In the process, it vastly increased the exploitation of labor, first in the form of slavery, then in migrant labor; converted millions of acres of forest to sugar production, in the process expelling millions from their land; and changed the dietary habits of most of the world. It illustrates how our consumption patterns are determined in capitalism and why we engage in behavior that may be environmentally unsound and personally harmful. The story of sugar is an excellent case study of how the nation-state-mediated interaction of the capitalist, the laborer, and the consumer produces some of our global problems.

Sugar Origins and Production

Sugarcane, until recently the major source of sugar, was first domesticated in New Guinea, then grown in India and the Middle East. The processing of sugarcane into sugar is complex and environmentally damaging. There are various kinds of sugar plants, most of which grow quickly after regenerating from cuttings left in the fields after harvests or from the controlled planting of cuttings. The stalk matures in nine to eighteen months and must be cut when the juice in the stalk contains the most sucrose. The juice must be extracted quickly before it rots or ferments; it is squeezed from the cane by chopping, pressing, or pounding, then heated to evaporate the liquid, leaving crystals from which centrifugal machines extract most of the molasses. The molasses may be used as a sweetener or, more importantly, processed into rum. The raw sugar that remains after the molasses is extracted can be consumed as is, turned into a liquid syrup, or processed further to obtain the granular white sugar that most Americans and Europeans favor.

Sugar production alters the environment in a number of ways. Forests must be cleared to plant sugar; wood or fossil fuel must be burned in the evaporation process; wastewater is produced in extracting sucrose from the sugarcane; and more fuel is burned in the refining process. When Spain sought to expand sugar production into the Atlantic islands in the fifteenth century, it colonized the Canary Islands, then inhabited by the Guanche. The Spaniards transformed the Canarian ecosystem, clearing the forests and hillsides to make way for cane fields and to obtain fuel for the fires of the ingenio, or sugar house. Within a few decades wood was so scarce that the government tried, in vain, to protect the forests from the lumberjacks (Crosby 1986:96). The Guanche, too, were gone within a century. When sugar production expanded in the seventeenth century, the sugar refineries of Antwerp caused so much pollution that the city banned the use of coal. Contemporary sugar production in Hawaii not only has destroyed forests, but waste products from processing have also severely damaged marine environments. 'Big sugar,' as the sugar industry is called in Florida, is largely responsible for the pollution, degradation, and virtual destruction of the Everglades.

Sugar, therefore, like virtually all commodities, comes to us at a heavy environmental cost. Yet people did not always crave sugar. For that to happen, a luxury had to be converted into a necessity; a taste had to be created.

Uses of Sugar

By A.D. 1000, when sugar was grown in Europe and the Middle East, it was a highly valued trade item and a luxury. Sugar was used largely as a spice and a medicine and was available only to the wealthy. In Arabian medical works from the tenth to the fourteenth centuries, for example, sugar was an ingredient in virtually every medicine. So useful was sugar as a medicine that one way of expressing desperation or helplessness was the saying 'like an apothecary without sugar' (Mintz 1985:101). According to one source, 'nice, white sugar' from the Atlantic islands cleaned the blood and strengthened the chest, lungs, and throat; used as a powder, it was good for the eyes, and smoked, it was good for the common cold. Mixed with cinnamon, pomegranate, and quince juice, it was good for a cough and fever (Mintz 1985:103).

Sugar was also used for decoration, mixed with almonds (marzipan) and molded into all kinds of shapes, the decorations becoming central to celebrations and feasts. And it was used as a spice in cooking and, of course, as a sweetener. It was also used as a preservative. We still use sugar to preserve ham, and it is often added to bread to increase its shelf life. But through the seventeenth century, even with its diverse uses, it was an expensive luxury item reserved for the upper classes.

The Development of the Sugar Complex

As a luxury item, sugar brought considerable profits for those who traded in it. In fact, it was the value of sugar as a trade item in the fifteenth and sixteenth centuries that led Spain and Portugal to extend sugarcane production, first to the Atlantic Islands, then to the Caribbean islands, and finally to Brazil, from which, beginning in 1526, raw sugar was shipped to Lisbon for refining.

Modern economists like to talk about the spin-off effects of certain commodities, that is, the extent to which their production results in the development of subsidiary industries. For example, production of automobiles requires road construction, oil and petroleum production, service stations, auto parts stores, and the like. Sugar production also produced subsidiary economic activities; these included slavery, the provisioning of the sugar producers, shipping, refining, storage, and wholesale and retail trade.



The increased demand for sugar in the eighteenth and nineteenth centuries represented a boon for West Indies sugar plantations and created a demand for more laborers, including slaves and children.

The slave trade was a major factor in the expansion of the sugar industry. Slaves from Europe and the Middle East were first used on the Spanish and Portuguese plantations of the Canary Islands and Madeira, but by the end of the fifteenth century slaves from West Africa were working the fields. The growing demand for and production of sugar created the plantation economy in the New World and was largely responsible for the expansion of the Atlantic slave trade in the sixteenth, seventeenth, and eighteenth centuries. From 1701 to 1810 almost one million slaves were brought to Barbados and Jamaica to work the sugar plantations.

Money was to be made also from the shipment of raw sugar to European refineries, more yet from the wholesale and retail sale of sugar, and probably more yet from the sale by European merchants of necessary provisions to the plantation owners. Investors in Europe, especially England, put money into the sugar industry, whether in the development of plantations, the sale of provisions to the colonial plantations, shipping, or the slave trade. Attorneys, grocers, drapers, and tailors invested small amounts to form partnerships to finance slave-buying expeditions to Africa and the subsequent resale of slaves to buyers from the sugar plantations of the New World. Thus it was during the sixteenth and seventeenth centuries that sugar became the focus of an industry, a sugar complex that combined the sugar plantations, the slave trade, long-distance shipping, wholesale and retail trade, and investment finance.

The Expansion of Sugar Consumption

It was not until the late seventeenth century that sugar production and sales really began to influence sugar consumption in Europe. Sugar consumption increased fourfold in England and Wales from 1700 to 1740 and doubled in the next thirty-five years. From 1663 to 1775 consumption increased twentyfold. Sugar consumption rose more rapidly than that of bread, meat, and dairy products in the eighteenth century. While the annual per capita consumption of eighteen pounds of sugar in 1809 does not compare to our present consumption of almost seventy pounds, it was more than sufficient to generate large profits.

Why did people in England begin to consume sugar in greater and greater quantities? First, increased sugar production led to reduced prices, making it accessible to more people, although its use was still largely confined to the upper and emerging middle classes of English society. One reason prices remained as high as they did was the imposition of high import tariffs on sugar produced in other countries. Planters in the British West Indies and people who invested in their enterprises were a powerful force in English politics. To protect their profits, they lobbied for and received protection from foreign competitors. They were also powerful enough to prevent British abolitionists from winning legislation to end the slave trade, at least until the beginning of the nineteenth century.

Second, the benefits of sugar were widely touted by various authorities, notably popular physicians. Dr. Frederick Slare found sugar a veritable cure-all. He recommended that women include bread, butter, milk, and sugar at their breakfast. Coffee, tea, and chocolate were similarly 'endowed with uncommon virtues,' he said, adding that his message would please the West Indian merchant and the grocer who became wealthy on the production of sugar. Slare also prescribed sugar as a dentifrice, a lotion, and, in the form of snuff, a substitute for tobacco, and recommended it for babies. Sidney Mintz (1985:107-108) said of Slare that while his enthusiasm for sugar is suspect, it is more than a curiosity because it relates to so many aspects of what was then still a relatively new commodity; furthermore, by stressing sugar's value as a medicine, food, and preservative, he was drawing additional attention to it.



Slare's enthusiasm for sugar was not an oddity; others shared it. No contemporary advertising executive could improve on the description of sugar by John Oldmixon, a contemporary of Slare:

One of the most pleasant and useful Things in the World, for besides the advantage of it in Trade, Physicians, and Apothecaries cannot be without it, there being nearly three Hundred Medicines made up with sugar; almost all Confectionery Wares receive their Sweetness and Preservation from it, most Fruits would be pernicious without it; the finest pastries could not be made nor the rich Cordials that are in the Ladies' Closets, nor their Conserves; neither could the Dairy furnish us with such a variety of Dishes, as it does, but by the Assistance of this noble Juice. (cited in Mintz 1985:108)

A third reason that sugar consumption increased in the eighteenth century was its use as a sweetener for three other substances, all bitter and all technically drugs (stimulants): tea, coffee, and cocoa. All of these were used in their places of origin without sugar, in spite of their bitterness. All three initially were drinks for the wealthy; by the time they were used by others, they were generally served hot and sweetened.

Fourth, sugar's reputation as a luxury good inspired the middle classes to use it to emulate the wealthy; sugar was a sign of status. The powerful used sugar for conspicuous consumption, as a symbol of hospitality and the like. When sugar was a luxury the poor could hardly emulate these uses, but as the price declined and its use expanded, sugar became available to the poor to use in much the same way as their social betters.

Finally, sugar consumption increased because the government increased its purchase of sugar and sugar products. After the capture of Jamaica and its sugar plantations from Spain in 1655, the British navy began to give its sailors rum rations, set in 1731 at half a pint per day and later increased to a pint per day for adult sailors. The government also purchased sugar to distribute to poorhouse residents.

Thus sugar production and consumption increased, as did the amount of land devoted to its production and the number of sugar mills and refineries, distilleries producing rum, and slaves employed in the whole process. Most important, the profits generated by the sugar trade increased dramatically.

The Mass Consumption of Sugar

By 1800, British sugar consumption had increased 2,500 percent since 1650, and 245,000 tons of sugar reached European consumers annually from the world market. By 1830 production had risen to 572,000 tons per year, an increase of more than 233 percent. By 1860, when beet sugar production was also rising, world production of sucrose increased another 233 percent, to 1.373 million tons. Six million tons were produced by 1890, another 500 percent increase (Mintz 1985:73).
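The production series cited from Mintz can be restated as growth factors between the benchmark years; this sketch only rearranges the tonnage figures given above.

```python
# World sugar production (tons per year), from the Mintz (1985:73)
# figures cited above. The growth factors below simply restate that
# series as the multiple by which production grew between benchmarks.

production_tons = {1800: 245_000, 1830: 572_000, 1860: 1_373_000, 1890: 6_000_000}

years = sorted(production_tons)
growth = {(start, end): round(production_tons[end] / production_tons[start], 2)
          for start, end in zip(years, years[1:])}

for (start, end), factor in growth.items():
    print(f"{start}-{end}: production grew {factor}x")
```

Expressed as multiples, production more than doubled in each of the first two intervals and then more than quadrupled between 1860 and 1890, making the acceleration at the end of the century plain.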

Two acts by the British government helped spur this massive increase in sugar production and consumption. First, the government removed tariffs on imports of foreign sugar. This made foreign sugar more accessible to British consumers and forced domestic producers to lower their prices, making it affordable to virtually all levels of British society. Second, England abolished slavery during the years 1834-1838 (it had abolished the slave trade in 1807). This had the effect of forcing technological improvements, but it also spurred the creation of a labor pattern that exists to the present. The freed slaves were without land and tools and were dependent on whatever labor they could get. As Mintz (1985:176) noted, while freed from the discipline of slavery, they were reduced to laborers by the discipline of hunger. The surplus supply of labor was increased when the British Foreign Office went to the aid of the planters in the West Indies by helping them import contracted laborers from India, China, and elsewhere. The freed slaves, unable to secure a livelihood independent of the sugar industry or to use collective bargaining, settled into obscurity until they reentered British consciousness as migrants to England more than a century later.

The lower price of sugar increased its use in tea and stimulated a dramatic rise in the production of preserves and chocolate. More important, it must have been apparent to sugar producers and sellers that there was a fortune to be made by increasing the availability of sugar to the working mass of England. Certainly there were those who worked hard to expand its availability.

Sidney Mintz's history of sugar reveals how much social, political, and economic power had to do with increased sugar consumption. Planters, slavers, shippers, bankers, refiners, grocers, and government officials, all profiting in one way or another from increased sugar consumption, exercised power to support the rights and prerogatives of planters, the maintenance of slavery, and the availability of sugar, its products (molasses, rum, preserves), and products associated with it (tea, coffee, cocoa), and to supply it to the people at large at prices they could afford. Thus the consumption of sugar was hardly just a matter of taste; it had to do with investments, taxes, the dispensation of sugar through government agencies, and a desire to emulate the rich, among other things. It also had to do with convenience and the changes in household structure, labor, and diet that accompanied the Industrial Revolution.

Rural workers in England in the eighteenth and nineteenth centuries typically had diets that consisted of oatmeal, porridge, milk, homemade bread, and vegetable broth. While simple, the diet was relatively nutritious. In the industrial cities, however, such a diet could be costly. Fresh food in general would have been more expensive in the city, and food preparation, especially if it required cooking, demanded fuel, which cost yet more money. Furthermore, urban women were, as noted earlier, also working in the factories twelve to fourteen hours per day, reducing the time they could spend on food preparation.

As a result, the diet of the urban working class and poor was transformed into one dominated by tea, sugar, store-bought white bread, and jam. Hot tea replaced vegetable broth. Jams (50-65 percent sugar) were cheaper than butter to put on bread, were easily stored, and could be left open on shelves for children to spread on bread in the absence of adults. In other words, the cultural and social constraints of time and cost created in the urban, industrial setting combined with the convenience of sugar and the prodding of those who profited from its sale to shape the diet of the British working class. It was an ideal arrangement, for, as E. P. Thompson (1967) noted, after providing profits to plantation and refinery investors, sugar provided the bodily fuel for the working people of Britain. Sugar is also what Sidney Mintz referred to as a 'drug food,' a category that includes coffee, tea, cocoa, alcohol, and tobacco: foods that deaden hunger pangs and stimulate effort without providing nutrition, and do so cheaply. That is one reason why they have been transformed from upper-class luxuries into working-class necessities.

Modern Sugar

Sidney Mintz (1985:180-181) suggested that the consumption of goods such as sugar is the result of profound changes in the lives of working people, changes that made new forms of foods and of eating seem 'natural,' as new work schedules, new sorts of labor, and new conditions of life became 'natural.' This does not mean that we lack a choice in what we consume, but that our choice is made within various constraints. We may have a choice between a McDonald's hamburger and a Colonel Sanders chicken leg during a half-hour lunch break. The time available acts to limit our choice, removing, for example, the option of a home-cooked vegetarian lunch.



Sugar has become, as it did for the nineteenth-century British laborer, a mainstay of the fast-food diet in the United States, the perfect complement to fat. Both fat and sugar are made more attractive by the clever use of language. The fat side of our diet is advertised with words like 'juicy,' 'succulent,' 'hot,' 'luscious,' 'savory,' and 'finger-licking good.' The sugar side is advertised as 'crisp,' 'fresh,' 'invigorating,' 'wholesome,' 'refreshing,' and 'vibrant.' And the sugar in soft drinks serves as the perfect complement to hamburgers and hot dogs, since it possesses what nutritionists call 'go-away' qualities—removing the fat coating and the beef aftertaste from the mouth.

Thus, sugar fits our budgets, our work schedules, and our psychological needs while at the same time generating monetary profits and growth. As Mintz (1985:186) put it, sugar

[s]erved to make a busy life seem less so; in the pause that refreshes, it eased, or seemed to ease, the changes back and forth from work to rest; it provided swifter sensations of fullness or satisfaction than complex carbohydrates did; it combined easily with many other foods, in some of which it was also used (tea and biscuit, coffee and bun, chocolate and jam-smeared bread). … No wonder the rich and powerful liked it so much, and no wonder the poor learned to love it.

The Story of Beef

The story of beef is very much like that of sugar, except that livestock breeding has been indicted for even greater environmental damage than sugar production, largely because of the vast amount of land needed to raise cattle. As an agricultural crop, sugar is quite efficient; while it has little nutritional value, it is possible to get about eight million calories from one acre of sugarcane; to get eight million calories of beef requires 135 acres. In addition, much of the beef we eat is grain-fed to produce the marbling of fat that makes it choice grade and brings the highest prices; as mentioned earlier, 80 percent of the grain produced in the United States is fed to livestock. Furthermore, two-thirds of U.S. grain exports go to feed livestock in other countries. Thus to the amount of land needed for rangeland, we must add the farmland devoted to animal feed; moreover, as we saw in Chapter 6, this grain production requires tons of chemical fertilizer, pesticides, and herbicides, all of which negatively alter the environment.
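The land figures above can be checked with back-of-the-envelope arithmetic. The sketch below uses only the numbers given in this passage (the variable names are ours, and the figures are illustrative, not precise agronomic data):

```python
# Land needed per ~8 million food calories, per the figures in the text.
CALORIES = 8_000_000

sugarcane_acres = 1    # one acre of sugarcane yields ~8 million calories
beef_acres = 135       # acres needed to yield ~8 million calories of beef

land_ratio = beef_acres / sugarcane_acres
cal_per_acre_beef = CALORIES / beef_acres

print(f"Beef requires {land_ratio:.0f} times the land of sugarcane per calorie")
print(f"Beef yields roughly {cal_per_acre_beef:,.0f} calories per acre")
```

On the passage's numbers, beef needs 135 times as much land as sugarcane to deliver the same calories, which is the core of the efficiency comparison being made.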

Cattle raising consumes a lot of water. Half the water consumed in the United States is used to grow grain to feed cattle; the amount of water used to produce ten pounds of steak equals the household consumption of a family for an entire year. Fifteen times more water is needed to produce a pound of beef protein than an equivalent amount of plant protein. There are also environmental problems associated with beef waste products; a feedlot steer produces forty-seven pounds of manure per day (Ensminger 1991:187), not to mention the methane gas that contributes to global warming. Even more pollution is produced by the slaughter, refrigeration, transport, and cooking of beef.

Cattle raising has also been criticized for its role in the destruction of tropical forests. Hundreds of thousands of acres of tropical forests in Brazil, Guatemala, Costa Rica, and Honduras, to name just a few countries, have been leveled to create pasture for cattle. Since most of the forest is cleared by burning, the extension of cattle pasture also creates carbon dioxide and contributes significantly to global warming. In addition, with increasing amounts of fossil fuel needed to produce grain, it now takes a gallon of gasoline to produce a pound of grain-fed beef.

Much of the rangeland of the United States has been devastated by livestock herding to the point that it has become desert. Currently 2 to 3 million cattle graze on 306 million acres of public land. According to the General Accounting Office (GAO), more plant species are being threatened by cattle grazing than by any other single factor, and populations of pronghorn antelope and elk have virtually disappeared from western rangelands. Much of this is a consequence of government policy. For example, the Bureau of Land Management, responsible for allotting land for livestock use, in one district in Oregon allots about 252 million pounds of herbage to livestock and 8 million pounds to wildlife. To protect livestock, the government also participates in the killing of thousands of coyotes and other so-called predator animals each year, as well as animals such as bison and deer that may carry disease. It is not surprising that the Bureau of Land Management reported that almost 95 million acres of rangeland are in 'unsatisfactory condition,' a condition attributed by researchers to overgrazing (Rifkin 1992:211).

The same problems are occurring in areas of Africa that have a long tradition of cattle raising. When cattle populations were managed by traditional means and for traditional consumption, there was little environmental damage. When attempts were made to introduce Western livestock-raising practices and technologies to increase the number of cattle and develop a larger beef export industry, pasture turned to desert and wild animals disappeared, largely because of overgrazing by cattle (Rifkin 1992:216).

In addition, beef is terribly inefficient as a source of food. By the time a feedlot steer in the United States is ready for slaughter, it has consumed 2,700 pounds of grain and weighs approximately 1,050 pounds; 157 million metric tons of cereal and vegetable protein are used to produce 28 million metric tons of animal protein. Finally, beef in the quantities that Americans consume it is unhealthy, being linked to cardiovascular disease, colon cancer, breast cancer, and osteoporosis.
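The conversion figures in this paragraph reduce to two ratios. A minimal sketch, using the passage's own numbers (we read the animal-protein figure as 28 million metric tons, and note that a steer's live weight is not the same as its edible yield):

```python
# Grain-to-beef conversion for one feedlot steer, per the text.
grain_fed_lb = 2_700      # pounds of grain consumed before slaughter
live_weight_lb = 1_050    # approximate live weight at slaughter
print(f"{grain_fed_lb / live_weight_lb:.1f} lb of grain per lb of live weight")

# Protein conversion at the national scale (metric tons).
cereal_protein = 157e6    # cereal and vegetable protein fed to livestock
animal_protein = 28e6     # animal protein produced (read as 28 million)
print(f"{cereal_protein / animal_protein:.1f} units of plant protein "
      "per unit of animal protein")
```

So even before counting water, land, and fuel, more than two and a half pounds of grain go into each pound of live steer, and over five units of plant protein are spent for each unit of animal protein produced.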

Americans are among the highest meat consumers in the world and the highest consumers of beef. Over 6.7 billion hamburgers are sold each year at fast-food restaurants alone. Furthermore, we are exporting our taste for beef to other parts of the world. The Japanese, who in the past consumed only one-tenth the amount of meat consumed by Americans, are increasing their consumption of beef. McDonald's sells more hamburgers in Tokyo than it does in New York City.

Marvin Harris (1986) suggested that 'animal foods play a special role in the nutritional physiology of our species.' He pointed out that studies of gathering and hunting societies reveal that 35 percent of the diet comes from meat, more than even Americans eat, and that this dietary pattern has existed for hundreds of thousands of years. Many cultures have a special term for 'meat hunger.' The Ju/wasi of Botswana, for example, say 'hunger is grabbing me,' not just because they haven't eaten but because they haven't eaten meat. Harris pointed out that meat is also an especially efficient protein source and that when people can afford it, they eat more meat.

Historically few societies, however, have made meat the center of their diet. If we look around the world, we find that most diets center on some complex carbohydrate—rice, wheat, manioc, yams, taro—or something made from these—bread, pasta, tortillas, and so on. To these are added some spice, vegetables, meat, or fish, the combination giving each culture's food its distinctive flavor. But meat and fish are generally at the edge, not the center, of the meal (Mintz 1985). Moreover, whether or not we just like meat, why do American preferences run to beef? Anthropologists Marvin Harris and Eric Ross (1987b) have some interesting answers that may help us understand why, in spite of the environmental damage our beef consumption causes, we continue to eat it in such quantities. The answers involve understanding the relationships among Spanish cattle, British colonialism, the American government, the American bison, indigenous peoples, the automobile, the hamburger, and the fast-food restaurant.

The Ascendancy of Beef

The story of the American preference for beef begins with the Spanish colonization of the New World. The Spanish, as noted earlier, introduced the so-called cattle complex to the New World, where it became established in Argentina, areas of Central America, particularly northern Mexico, and Texas. By the 1540s, cattle were so numerous around Mexico City that the Spaniards had to train Indians to handle them. Fortunes were made in cattle in the sixteenth century on meat and leather.

In Argentina, the number of feral cattle increased so rapidly on the pampas that by the seventeenth century meat was eaten three times a day, and animals were killed for their hides and the meat left to rot. One seventeenth-century traveler wrote of Argentina,

All the wealth of these inhabitants consists in their animals, which multiply so prodigiously that the plains are covered with them in such numbers that were it not for the dogs that devoured the calves … they would devastate the country. (cited in Rifkin 1992:49)

In colonial America, however, pork, not beef, was the meat of choice. Eric Ross (1980) pointed out that the preference for beef or pork is related in part to environmental factors. Pigs tend to be raised in forested areas and can be maintained in areas of relatively dense population, since they eat the same food as human beings. In densely populated West Germany in 1960, the ratio of cattle to hogs was 0.06 to 1, while in sparsely populated Argentina, with its large tracts of rangeland, it was 11.2 to 1. Another reason pork was preferred in America was that the preservation process—smoking, salting, and pickling—improved the flavor of the meat, whereas for beef it did not. In fact, pork was the meat of choice in the United States until the 1960s, when beef overtook it. While meat consumption in the United States has declined since the 1970s, beef is still preferred (see Table 7.2 on p. 208).

The Emergence of the American Beef Industry

On the eve of the Industrial Revolution, England was the beef-eating capital of the world, with 100,000 head of cattle slaughtered annually in London. But in the nineteenth century, with its population increasing and more people migrating to the factory towns and cities, England began to look toward its colonies and ex-colonies for food, especially


TABLE 7.2   U.S. per Capita Meat Consumption, 1900-1998 (in pounds)

Year    Beef    Pork      Year    Beef    Pork      Year    Beef    Pork
1998    64.9    49.1      1977   125.9    61.6      1950    63.4    69.2
1997    63.8    45.6      1975   120.1    54.8      1940    54.9    73.5
1996    65.0    49.9      1970   113.7    66.4      1920    59.1    63.5
1993    61.5    48.9      1960    85.1    64.9      1900    67.1    71.9
1990    64.0    46.4

See also Ross 1980:191; Bureau of Census, 1990, 1993, 1994; USDA/NASS Agricultural Statistics, 2000, http://www.usda.gov/nass/pubs/agr00/acro00.htm

meat. Eric Ross (1980) suggested that the motivation was not simply to get food but to keep meat prices low in order to keep wages low and allow industry in Great Britain to remain competitive with industries in other countries. As we saw earlier, the British increased cattle production in Ireland by increasing the amount of land devoted to pasture, pushing people onto smaller plots of land and increasing dependence on the potato. When the potato blight hit Ireland in 1846-1847 and millions starved, the export of Irish grain and livestock intensified. In fact, because of the massive outmigrations caused by the famine, English landlords were able to intensify cattle production even more; from 1846 to 1874 the number of Irish cattle exported to England climbed from 202,000 to 558,000, and more than 50 percent of the total land mass of Ireland was devoted to cattle raising.

England next turned to Argentina, where the development of the refrigerated steamer permitted the shipment of fresh beef to England. In the 1870s English, Scottish, and Irish colonists already owned 45 percent of the sheep and 20 percent of the cattle herds in Argentina, and English demand and English capital helped develop the Argentine beef business. One of the greatest nineteenth-century fortunes made in England, that of the Vesteys, was made by dominating the Argentine meat market.

Who was eating all this meat? It was not the working class, whose breakfast consisted of little more than bread, butter or jam, and tea with sugar and whose dinner might include a meat byproduct, such as Liebig's extract (made from hides and other residue), or inferior cuts. The gentry, however, apparently consumed vast quantities. Beef, in fact, had been for some time the choice of the British well-to-do. For example, in 1735 a group of men formed the Sublime Society of Beef Steaks, most renowned for the invention of the sandwich by one of its members. The society consisted largely of members of the British elite but also included painters, merchants, and theatrical managers. It existed until 1866. Twice a year the group would meet for dinner, at which, according to the society's charter, 'beef steaks shall be the only meat for dinner' (Lincoln 1989:85).

Here is one description, dating from 1887, of a typical breakfast table of the British nobility (Harris and Ross 1987b:35-36):

In a country house, which contains, probably, a sprinkling of good and bad appetites and digestions, breakfasts should consist of a variety to suit all tastes, viz.: fish, poultry, or game, if in season; sausages, and one meat of some sort, such as mutton cutlets, or fillets of beef; omelets, and eggs served in a variety of ways; bread of both kinds, white and brown, and fancy bread of as many kinds as can conveniently be served; two or three kinds of jam, orange marmalade, and fruits when in season; and on the side table, cold meats such as ham, tongue, cold game, or game pie, galantines, and in winter a round of spiced beef.

The army and navy consumed enormous amounts of meat, each sailor and soldier getting by regulation three-fourths of a pound of meat daily; in fact, the diet of the military was vastly superior to that of the bulk of the population. Between 1813 and 1835 the British War Office contracted for 69.6 million pounds of Irish salted beef and 77.9 million pounds of Irish salt pork; as Harris and Ross (1987b:37-38) noted, Irishmen were able to eat the meat of their own country only by joining the army of the country that had colonized theirs. Furthermore, the British military, by distributing rum and meat to their men, helped to subsidize both the sugar and meat industries.

Lying largely untapped in the latter half of the nineteenth century, by either the British or the Americans, were the American Great Plains and the vast herds of Texas longhorns. The longhorn was the remnant of the Spanish herds that ran wild. It was uniquely adapted to the extremes of heat and cold of the prairies; it could eat almost anything, including leaves and prickly pear, and by the 1830s and 1840s cowboys began to round up the strays and herd them to New Orleans. In the 1830s there were 100,000 head of cattle roaming Texas; by 1860 there were over 3.5 million.

But cattle traders faced three problems in trying to make a profit from longhorn cattle. The first problem was shipping cattle to areas in the Midwest from where they could be distributed. They could be driven overland, but that was too costly. The second problem was the availability of rangeland; in the 1860s the plains were occupied by indigenous peoples and their major food source, the buffalo. Finally, there was the quality of the beef; longhorn beef was too lean and tough for British tastes. The solutions to these problems would define the American taste for beef and a good part of the history of the American West.

The problem of transporting cattle to the Midwest and East was solved by a young entrepreneur, Joseph McCoy, who convinced the Union Pacific Railway to construct a siding and cattle pen at its remote depot in Abilene, Kansas, and pay him a commission on every animal he delivered for shipment. The animals would be driven from Texas to Abilene on the Chisholm Trail. McCoy first persuaded the governor of Kansas to lift a quarantine on Texas cattle that had been imposed because of the spread of Texas fever; he then persuaded the Illinois legislature to allow his shipment of cattle into their state. On September 5, 1867, McCoy shipped twenty railway cars of cattle east from Abilene. By 1871 he was shipping 700,000 cattle annually. Through the 1870s the Chisholm Trail was traveled by herd after herd headed for the slaughterhouses, tables, and leatherworks of the East.

But as the demand for meat, leather, and tallow grew, more land was needed for cattle; the plains, once considered the Great American Desert, were being promoted as a land of 'fairy-tale' grass that required no rain and could support millions of head of cattle (Rifkin 1992:73). Only two things stood in the way of its use—buffalo and Indians.


Cattlemen, Eastern bankers, the railroads, and the U.S. Army believed the solution to both problems could be effected by the extermination of the buffalo, and they joined in a systematic campaign to that end. In a period of about a decade, from 1870 to 1880, in one of the world's greatest ecological disasters, buffalo hunters ended the American bison's 15,000 years of continuous existence on the plains, reducing herds of millions to virtual extinction. Stationed in Kansas, Colonel Richard Irving Dodge wrote that in 1871 buffalo around the post were virtually limitless; by the fall of 1873 'there were now myriads of carcasses. The air was foul with a sickening stench, and the vast plain, which only a short twelve months before teemed with animal life, was a dead, solitary, putrid desert' (cited in Rifkin 1992:74).

Buffalo hunters were getting one to three dollars a hide, and heroes such as William F. Cody (Buffalo Bill) were entertaining European royalty on buffalo hunts, while railroad passengers, armed with rifles provided by the conductors, shot the animals from the moving trains. Not everyone approved of the slaughter, and some newspaper editorials condemned it, but to no avail. Even the bones were ground up for fertilizer and sold for eight dollars a ton. The 'white harvest,' as it was called, even engaged indigenous groups, who would bring the bones in wagons for sale at the railroad depots; meat met sugar as 'fresher bones' were made into char and used in the refining process to remove the brownish coloration of sugar. In a speech to the Texas legislature in 1877, General Philip Sheridan said of the buffalo hunters:

These men have done more to settle the vexed Indian question than the entire regular army has done in the last thirty years. They are destroying the Indians' commissary; and it is a well-known fact that an army losing its base of supplies is placed at a great disadvantage. Send them powder and lead if you will; but for the sake of lasting peace let them kill, skin, and sell until the buffalo is exterminated. Then your prairies can be covered with speckled cattle and the festive cowboy who follows the hunter as a second forerunner of an advanced civilization. (cited in Wallace and Hoebel 1952:66)

With the buffalo went the Indians of the Plains. Their major food and source of ritual and spiritual power removed, they were soon vanquished and confined to reservations, with land granted to them in earlier treaties with the U.S. government taken away.

In one of the great ironies of history, cattlemen made fortunes selling beef to the U.S. government for distribution to Indians forced onto reservations and hungry because of the buffalo slaughter. Furthermore, cattlemen grazed their animals on what remained of Indian land, paying them in beef or cash only a fraction of what the grazing rights were worth.

The final problem in the story of England and the Texas longhorn involved the toughness and leanness of plains cattle. The British liked their beef generously marbled with fat. This problem was solved by a historic bargain: western cattle would be transported to the midwestern farmbelt and fed corn until their meat was speckled with fat, then shipped by rail and steamer to English ports (Rifkin 1992:58-59). The integration of the plains and the prairie, rangeland and farmland, was so complete that to this day the price of corn is closely linked to the demand for and price of cattle.

As a consequence of the merger of cattle and corn in the 1870s, British banks were pouring millions into the American West. They formed the Anglo-American Cattle Company Ltd. with £70,000 of capital; then the Colorado Mortgage and Investment Company of London, buying 10,000 acres of rangeland north of Denver; then the Prairie Cattle Company Ltd. and the Texas Land and Cattle Company, Ltd. The Scottish-American Company invested £220,000 in land in Wyoming and the Dakotas. Cattlemen associations were formed that controlled millions of acres and often became spokespersons for foreign cattle barons. In all, the British invested some $45 million in western real estate, and by the 1880s America was responsible for 90 percent of beef imported to England (Harris and Ross 1987b:38). By the mid-1880s, 43,136 tons of fresh beef were being shipped yearly to Great Britain (Rifkin 1992:95).

The takeover of the West by the British so alarmed some Americans that in the 1884 presidential election, both parties included planks that would limit 'alien holdings' in the United States. The Republican campaign slogan of 1884, 'America for Americans,' was directed not at poor Latin American or Asian migrants or European minorities, as it would be later, but at the British elite. But the British invasion of the American cattle industry had one other long-lasting effect: it defined for the next one hundred years the American taste in beef.

In response to both British tastes and midwestern farm interests, the U.S. Department of Agriculture (USDA) developed a system of grading beef that awarded the highest grade—prime—to the beef with the most fat content, choice grade to the next fattiest, select grade to the next, and so on. Thus the state participated in creating a system that inspired cattle raisers to feed cattle grain and add fat because it brought the best price, while at the same time communicating to the consumer that, since they were the most expensive, the most marbled cuts of beef must be the best.

The federal inspection and grading standards also aided another important sector of the beef industry, the meat packers. Beef packers wanted to centralize their operations, bringing live animals to one area to be butchered, but most states had laws that required inspection of live animals twenty-four hours before slaughter. Butchers opposed the centralized slaughter of beef because most animals were slaughtered and butchered locally, and centralizing the operation would put many of them out of business. The beef packing industry lobbied successfully to convince Congress to pass a federal meat inspection system that, unlike state inspection systems, would not affect out-of-state centralized operations (Harris and Ross 1987b:202).

Aided by the government, the meat packing business proceeded to dominate the production and distribution of beef. Refrigeration technology and the new federal inspection standards allowed individuals such as George H. Hammond in 1871, Gustavus Swift in 1877, and Philip and Simeon Armour in 1882 to slaughter beef in one area of the country—Chicago—and ship it fresh to any other area of the country. Their growth and domination of the meat packing industry resulted in the concentration of production in five companies that by World War I handled two-thirds of all meat packing in the United States. By 1935 Armour and Swift controlled 61 percent of meat sales in the United States.

One of the great technological innovations in meat packing was the assembly (or, as Jeremy Rifkin [1992] called it, 'disassembly') line. Henry Ford is generally credited with developing assembly line technology in the construction of his Model T Ford in 1913. However, even Ford said that he got his idea from watching cattle hung on conveyor belts passing from worker to worker, each assigned a specific series of cuts until the entire animal was dismembered. The working conditions in the meat packing industry were then and remain among the worst of any industry in the country. At the turn of the century they prompted Upton Sinclair to produce his quasi-fictional account, The Jungle (1906), whose descriptions of the slaughterhouses provoked such public outrage that the government acted to regulate the meat packing industry.

The state has also heavily subsidized our taste for beef by allowing cattlemen to graze cattle on public lands at a fraction of the market cost of grazing on private land, thus making beef more affordable and encouraging its consumption. As early as the 1880s, cattlemen were fencing millions of acres of public land, to which they had no title, with the newly invented barbed wire. In fact, at that time most of the cattle companies grazing their cattle on public land were British. After objections to the practice were raised, the government passed the Desert Land Act of 1887, which awarded land to anyone who improved it. The Union Cattle Company of Cheyenne dug a thirty-five-mile-long ditch, called it an irrigation canal, and claimed 33,000 acres of public land (Skaggs 1976:62).

In 1934 Congress passed the Taylor Grazing Act, which transferred millions of acres of public land to ranchers if they took responsibility for improving it. In 1990, some 30,000 cattle ranchers in eleven western states grazed their cattle on 300 million acres of public land, an area equal to the fourteen East Coast states stretching from Maine to Florida (Rifkin 1992:105-106). These permit holders pay a third to a quarter less than they would pay on private lands.

The victory of beef, however, was not yet complete. To appreciate the story of American beef consumption, we need to understand the role of the American government in creating the legal definition of a hamburger and the infrastructure that encouraged the spread of the automobile.

Modern Beef

Legend has it that the hamburger was invented by accident when an Ohio restaurant owner ran out of pork sausages at the Ohio Fair in 1892 and substituted ground beef on his buns. The hamburger was the rage of the St. Louis World's Fair of 1904, and by 1921 White Castle had opened its hamburger chain in Kansas City. But the hamburger still needed an assist, and it got it from the automobile and the government.

Henry Ford's Model T began the American romance with the automobile, and the number of Americans with cars grew enormously in the twentieth century. There are now as many automobiles as there are licensed drivers in the United States. But it was the surge in highway construction after World War II (a $350 billion project to construct a network of 41,000 miles of superhighways) that made the automobile boom possible. This led to the growth of the suburbs and the fast-food restaurants that were to make beef, and particularly the hamburger, king.

Pork, as mentioned earlier, had always competed with beef for priority in American meat tastes. While beef was more popular in the Northeast and the West, pork was the meat of choice in the South; however, in the Mid-Atlantic states and the Midwest they were relatively evenly matched. But by the 1960s beef clearly became the meat of choice for most Americans.



One advantage of beef was its suitability for the outdoor grill, which became more popular as people moved into the suburbs. Suburban cooks soon discovered that pork patties crumbled and fell through the grill, whereas beef patties held together better. In addition, since the USDA does not inspect pork for trichinosis because the procedure would be too expensive, it recommends cooking pork until it is gray; but that makes pork very tough. Barbecued spare ribs are one pork alternative, but they are messier, have less meat, and can't be put on a bun.

In 1946 the USDA issued a statute that defined the hamburger:

Hamburger. 'Hamburger' shall consist of chopped fresh and/or frozen beef with or without the addition of beef fat as such and/or seasonings, shall not contain more than 30 percent fat, and shall not contain added water, phosphates, binders, or extenders. Beef cheek (trimmed Beef cheeks) may be used in the preparation of hamburgers only in accordance with the conditions prescribed in paragraph (a) of this section. (Harris 1987:125)



Marvin Harris (1987:125-126) noted that we can eat ground pork and ground beef, but we can't combine them, at least if we are to call it a hamburger. Even when lean, grass-fed beef is used for hamburger, fat must be added to bind it. The fat must come

The fast-food restaurants that line streets in virtually every American town and city represent not only the union of sugar and fat but also the fast-paced lives required in a consumer-oriented society.


from beef scraps, not from vegetables or a different animal. This definition of the hamburger protects not only the beef industry but also the corn farmer, whose income is linked to cattle production. Moreover, it helps the fast-food industry, because the definition of hamburger allows it to use inexpensive scraps of fat from slaughtered beef to make, in fact, 30 percent of its hamburger. Thus an international beef patty was created that overcame what Harris called the 'pig's natural superiority as a converter of grain to flesh.'

The fast-food restaurant, made possible by the popularity of the automobile, put the final touch on the ascendancy of beef. Ray Kroc, the founder of McDonald's, tapped into the new temporal and work routines of American labor. With more women working outside the home, time and efficiency were wed to each other as prepared foods, snacks, and the frozen hamburger patty became more popular. In many ways, the fast-food restaurant and the beef patty on a bun were to the American working woman of the 1970s and 1980s what sugar, hot tea, and preserves were to the English female factory worker of the latter half of the nineteenth century. They both offered ease of preparation and convenience at a time when increasing numbers of women worked outside the home.

As with sugar, therefore, our 'taste' for beef goes well beyond our supposed individual food preferences. It is a consequence of a culture in which food as a commodity takes a form defined by economic, political, and social relationships. We can, as many have done, refuse beef. But to do so requires a real effort, as those who try to follow a strictly vegetarian diet can attest.

In addition, as with much of what Americans do, the matter of beef does not stop in the United States. The United States produces about 9 percent of the world's beef but consumes 28 percent of it. In 1995, Americans consumed some 25,461 million pounds of beef, and while per capita consumption is declining, the total amount of beef consumed is increasing. The consumption of beef in core countries is having more of an impact on countries in the periphery, and this is likely to worsen as other countries, such as Japan and China, begin to emulate American and European dietary patterns. Let's look at just one example, in Costa Rica, of a peripheral country converting its forests into grazing land for cattle to profit from the core's demand for beef.

The Internationalization of the Hamburger

In the 1960s, with the help of the World Bank, governments in South and Central America began to convert tropical forest into pasture to raise beef for the international market. The case of Costa Rica, analyzed by Marc Edelman (1987), is illustrative. The United States began to purchase beef from Central America in the 1950s, largely because prices were about 40 percent lower than in the United States.

The state plays a role in the production and importation of foreign beef. Foreign beef suppliers must meet USDA certification of their herds and packing facilities and are subject to import quotas that are more informal than formal, since the quotas themselves are in violation of GATT. Therefore, countries from whom we import beef must 'voluntarily' restrict sales to the United States.

International financial agencies, such as the World Bank, also play a role in promoting cattle production by financing and requiring the establishment of a cattle infrastructure. For example, international lending institutions required Costa Rica's Central Bank to add a cattle technical extension division and the Banco Nacional de Costa Rica to add animal husbandry and veterinary sections to the branches of the bank located in cattle ranching areas. The loans themselves were used for such things as road building in cattle-raising regions and livestock improvement. In fact, the International Development Bank devoted 21 percent of its loans in the 1960s to the cattle sector of the economy. Furthermore, the U.S. Agency for International Development (USAID) helped develop roads and livestock-related extension and research agencies for Costa Rican cattle farmers.

A powerful livestock lobby developed in Costa Rica. The 'chambers of cattlemen,' a national federation, placed spokespersons in the legislative assembly, banks, ministries, and major political parties. This lobby convinced the government to increase exports, which had the effect of drastically reducing the beef available for local consumption. Only inferior beef that would be rejected by USDA inspectors was made available for domestic consumption.

The increase in cattle production also had environmental consequences for Costa Rica. From 1950 to 1973 the area of pasture in Costa Rica doubled, from 622,402 hectares to 1,558,053 hectares. Since brushland is also used, it may be that as much as 89.9 percent of the country's productive land is used for livestock. Cattle production has also resulted in widespread destruction of the forests. In 1950, 72 percent of Costa Rica consisted of rainforest; by 1973 only 49 percent of the country was forested, and by 1978 this had decreased to only 34 percent (Edelman 1987:554).

The expansion of cattle raising often occurred at the expense of peasant subsistence agriculture, as cattlemen evicted peasants, forced them off their land, or bought the land outright. Since cattle raising uses far less labor than agriculture, the peasants were forced into the cities, where unemployment was already high. And since cattle raising is profitable only on a large scale, the expansion led to further concentrations of wealth.

Yet if countries such as Costa Rica can escape the poverty discussed in Chapter 6, is it fair to expect them to reduce their cattle production because of concerns in core countries about rainforest destruction? Might there be ways for countries such as Costa Rica and Mexico to raise beef, reduce environmental destruction, and assist the poor? Let's examine one example of how anthropologists, working with agricultural specialists, can supply some answers.

Environmentally Sustainable Cattle Raising

Is it possible to develop ways of producing cattle that are not environmentally destructive? This is a question that some anthropologists are examining. Ronald Nigh, for example, has developed a project in Mexico in which indigenous methods of agriculture and stock raising are being applied not only to raise crops but to regenerate rainforests destroyed by stock breeding.

Mexico, along with most Central American countries, has lost vast amounts of its rainforests. At the beginning of the century Mexico had 13 million hectares (31 million acres) of rainforest. Today only 2.4 million hectares remain. Of the total destroyed, 5.5 million hectares were converted to pasture, and over half of that is in an advanced stage of degradation and erosion. Furthermore, while 60 percent of Mexico's productive land is devoted to pasture or forage for animals, more than 50 percent of its population never consumes animal products.


Nigh (1995) maintains that the destruction of the rainforest by cattle grazing is largely the result of the importation of what he called the factory model of agricultural production. The factory model is designed to produce a single product (corn, soy, beef, pork, etc.) in as short a time as possible. It tends to be technology-intensive and environmentally damaging. Furthermore, it tends to convert whole regions to a single type of agricultural production—cattle in one area, corn in another, wheat in another, and so on. In Central America the factory model of cattle raising has required clearing large tracts of land with fire and herbicides and reseeding with grasses that are not well-suited to the environment. The result is degradation of the land by uncontrolled grazing and its eventual abandonment and return to secondary vegetation.

Nigh suggested that it is far more productive and far less damaging to the environment to look at agriculture as an ecological, rather than a manufacturing, process and to adopt what S. R. Gliessman (1988; see also Posey et al. 1984) referred to as an agroecological approach. One foundation of this approach is to combine indigenous practices that have produced food yet preserved the environment with contemporary agricultural research. The major difference between a factory approach and an agroecological approach is that the latter creates a polyculture—the production of multiple crops and animals—rather than a monoculture—the growth or production of a single crop or animal. Indigenous methods of production in the rainforest create a system that enhances regeneration of land, flora, and fauna.

For example, there are sites of secondary vegetation in the Mexican rainforest left by Mayan farmers who practice swidden agriculture. The farmers clear a site, use it to grow corn for five to eight years, and then move on. These sites may soon look like the land abandoned by cattle ranchers, but Mayan farmers do not abandon them. They continue to work the garden, perhaps planting fruit trees, and use the site to attract mammals and birds to hunt. In fact, the area is designed to attract an animal crop, and the Maya refer to this practice as 'garden hunting' (Nigh 1995). Since, unlike the factory model, no herbicides are used to clear the land, plant and animal life can regenerate. Thus traditional agriculture creates an environment that mixes fields, forests, and brushlands.

The idea is to create productive modules, each a mosaic of productive spaces. The agroecological model, drawing as it does on indigenous systems developed over centuries, creates an ecologically sustainable system of production modeled after natural systems, rather than a system that displaces natural ones.

Rather than demonizing cattle, Nigh said, it is possible to design an agricultural system modeled after indigenous systems in which cattle are integrated into an agroecological model, one in which diversity rather than uniformity is emphasized. For example, one area would be used for annual crops such as corn, squash, root crops, spices, and legumes. Secondary areas, including those previously degraded by overgrazing, can be used for fruit trees, forage, and so on. Other secondary areas, using specially selected animal breeds and grasses, can be devoted to intensive grazing. In his project, for example, Nigh's team selected a breed developed in New Zealand that is small but a high milk producer. Intensive grazing frees up tropical rainforest land that should never have been converted to pasture to begin with. Nigh maintains that by using only organic fertilizers and controlled grazing, it is possible to recover aquatic areas (ponds, rivers, and lakes) and take advantage of water resources such as fish, mollusks, turtles, and birds.


Exporting Pollution

We can now see how economic, political, and social factors contribute to our patterns of consumption. The same analysis can be applied to many other commodities that we consume in great numbers and that have a detrimental effect on the environment, such as large houses, electronic devices, and appliances. Furthermore, there is a host of environmental problems caused by industrial pollution; the use, storage, and disposal of nuclear energy; and the mountains of garbage that accumulate as packaging becomes as much of a problem as the commodities it encloses. But there is a paradox here: While the consumption patterns of core countries are the primary cause of environmental pollution, resource depletion and destruction, and the production of toxic substances, the core countries suffer far less from environmental problems than peripheral countries. The United States, for example, while having its share of environmental problems, enjoys relatively cleaner air, cleaner water, and more open spaces than peripheral countries whose people consume and produce a fraction of what Americans do. How do we explain the spread of environmental destruction to the periphery?

On December 12, 1991, Lawrence Summers, then chief economist of the World Bank, later Under Secretary of the Treasury in the Clinton administration, and now President of Harvard University, sent a memorandum to some of his colleagues, intending only, the World Bank later said, to provoke discussion. The memo, in brief, argued that it made perfectly good economic sense for the United States to export its pollution and toxic waste to poor countries. The memo then appeared in The Economist, the prestigious British publication, under the headline 'Let Them Eat Pollution.'

Summers argued that the World Bank should encourage the movement of 'dirty industries' from core countries to less developed countries. He based his argument on three points: first, from an economic point of view, the cost of illness associated with pollution, measured in working days lost, is cheapest in the country with the lowest wages; second, underdeveloped countries are 'underpolluted,' and, consequently, initial increases in pollution will have a relatively low cost; finally, since people in less developed countries have a lower life expectancy, pollutants that cause diseases of the elderly, such as prostate cancer, are less of a concern. In sum, Summers argued that a clean environment is worth more to the inhabitants of rich countries than to those of poor countries; therefore, the cost of pollution is less in poor countries than in rich countries; consequently, it makes perfect economic sense to export 'dirty' industries to the less developed countries (Foster 1993).

The reaction to the memo from most environmentalists was scathing. José Lutzenberger, Brazil's Secretary of the Environment, issued a response in which he said, among other things,

[i]t was almost a pleasant surprise to me to read reports in our papers and then receive [a] copy of your memorandum supporting the export of pollution to Third World countries and the arguments you present for justifying it. Your reasoning is perfectly logical but totally insane. (cited in Rich 1994:246-248)

Yet, John Bellamy Foster (1993:12) said, there was little in the memo that had not been stated in other terms many times before, largely by economists and public policy analysts.



The memo was a perfect expression of the view of the environment and of people that emerges logically from the culture of capitalism. From an anthropological perspective, the memo is illustrative of our culture's cosmology, its view of the person and the environment. The premises of Summers's argument include:

1. The lives of people in the Third World, judged by 'foregone earnings' from illness and death, are worth less—hundreds of times less—than those of individuals in advanced capitalist countries where wages are hundreds of times higher. Therefore it makes sense to deposit toxic wastes in less developed countries.

2. Third World environments are underpolluted compared to places such as Los Angeles and Mexico City (where children had to be kept home from school for a month in 1989 because of air pollution).

3. A clean environment, in effect, is a luxury good pursued by rich countries because of the aesthetic and health standards in those countries. Thus the worldwide costs of pollution could decrease if waste was transferred to poor countries where a clean environment is 'worth' less, rather than polluting environments of the rich where a clean environment is 'worth' more.

Essentially, the memo expresses a perspective in which a monetary value can be put on both human life, based on wage prospects, and the environment, based on the value people place on a clean environment. It reveals the tendency of the culture of capitalism to commodify virtually everything, including human life and the environment. As Foster (1993:12) said, Summers's memo is not an aberration; in his role as chief economist for the World Bank, Summers's job was to help create conditions for the accumulation of profit and to ensure economic growth. Neither the welfare of the world's population, nor the health of the environment, 'nor even the fate of individual capitalists themselves—can be allowed to stand in the way of this single-minded goal.'

The Economist, in fact, went on to defend Summers, saying that governments constantly make decisions in regard to health, education, working conditions, housing, and the environment that are based on differential valuations of certain people over others. For example, in the 1980s the U.S. Office of Management and Budget (OMB) sponsored a number of studies that concluded that the value of a human life was between $500,000 and $2 million, then used those figures to argue that some forms of pollution control were cost-effective and others were not. Other economists have argued that the value of a human life should be based on earning power; thus a woman would be worth less than a man, and a Black person's life worth less than a White person's.

As shocking as that may sound, that is exactly how we, for the most part, operate. For example, three out of four off-site commercial hazardous landfills in southern states were located primarily in African American communities, although African Americans represent only 20 percent of the population. The core countries already ship 20 million tons of waste annually to the periphery. In 1987 dioxin-laden industrial waste was shipped from Philadelphia to Guinea and Haiti; in 1988, 4,000 tons of PCB-contaminated chemical waste from Italy was found leaking from drums in Nigeria.

Economists like Summers argue that it is more important to build an economic infrastructure for future generations than to protect against global warming. They compare the cost of rainforest destruction with the economic cost of conserving it, without recognizing that rainforest destruction is irrevocable. They argue that rather than halting economic development because of global warming, countries will be able to use their newly developed riches to build retaining walls to hold back the rising sea; furthermore, they argue that money spent to halt carbon dioxide output could be better spent dealing with population growth (Foster 1993:16).

Foster concluded that capitalism will never sacrifice economic growth and capital accumulation for environmental reform. Its internal logic will always be 'let them eat pollution.' Opposition will develop, as we shall see in later chapters, and some changes will be made, as they have been in the United States over the past thirty years, but environmental concerns will never be allowed to threaten the system itself. This is clearly evident if we turn again to the latest developments in the American automobile industry.

The latest development in American tastes for automobiles is the preference for so-called sport utility vehicles (SUVs). In 2000 they constituted one of every two family vehicles sold (65 million in all). These vehicles emit 57 percent more carbon dioxide than standard automobiles, are the fastest-growing source of global warming gases in the United States, and use 50-100 percent more gasoline than ordinary passenger automobiles (Bradsher 1997). Because they were used primarily on farms and construction sites, they were classified in the 1970s as light trucks, and the pollution emission standards and minimum mileage requirements that applied to other vehicles did not and do not apply to them. These vehicles are highly profitable; not only do their profit margins exceed those for other vehicles, but the American automobile industry is protected by high import tariffs on light trucks made overseas (a legacy of a trade war with Germany in 1964) that do not apply to other foreign-made automobiles. For this reason, automobile manufacturers and workers' unions have lobbied heavily to prevent Congress from extending pollution and gasoline consumption standards to SUVs. As Foster concluded (1993:19):

Where radical change is called for little is accomplished within the system and the underlying crisis intensifies over time. Today this is particularly evident in the ecological realm. For the nature of the global environmental crisis is such that the fate of the entire planet and social and ecological issues of enormous complexity are involved, all traceable to the forms of production now prevalent. It is impossible to prevent the world's environmental crisis from getting progressively worse unless root problems of production, distribution, technology, and growth are dealt with on a global scale. And the more that such questions are raised, the more it becomes evident that capitalism is unsustainable—ecologically, economically, politically, and morally—and must be superseded.

Conclusion

We began this chapter by asking why people choose to consume what they do, how they do, and when they do. We concluded that our tastes are largely culturally constructed and that they tend to serve the process of capital accumulation. There is no 'natural' reason why we engage in consumption patterns that do harm to the environment. Furthermore, we suggested that of all the contributing factors to environmental pollution, the most difficult to 'fix' is our consumption behavior, since it serves as the foundation of our culture.


To illustrate, we examined how the American taste for sugar and fat was historically constructed largely to serve the interests of sugar and beef producers. We examined how anthropological research is trying to help peripheral countries in their effort to meet western demands for products such as beef without destroying their environmental resources.

We also examined how core countries are attempting to maintain their culture by exporting the by-products—pollution and resource depletion—and how from some perspectives that seems to make perfect sense.

Most important, we examined the historical and cultural dynamic that drives the consumption of specific commodities and our attitudes about the environmental damage that results, and concluded that it is not an aberration but an intrinsic part of our way of life.









