Please choose one of the following questions:
1. Chambliss discusses several theories of state power in the reading this week. Identify and describe at least two of those theories. Would you say that U.S. governance today is characterized more by pluralism or by the concentration of power in the hands of an elite? Provide examples or evidence to support your answer.
2. Describe your vision of a model economic system – is it capitalist, socialist, or somewhere in between? In your answer be sure to compare and contrast the two major economic systems (capitalism and socialism).
3. African Americans and Latinos in the United States in general experience higher mortality rates and worse health than their White and Asian American counterparts. What sociological factors help to explain this health gap? How is this a sociological issue as well as a medical one?
The Week 7 Forum meets the following course objectives:
- Apply a sociological perspective to the social world.
- Analyze contemporary social issues using the sociological imagination and use sociological theories and concepts to analyze everyday life.
- Interpret the United States economy and politics.
- Discuss the sociological study of health and medicine in society.
Instructions for all Forums:
Each week, learners will post one initial post. This post must demonstrate comprehension of the course materials and the ability to apply that knowledge in the real world. Learners will engage with the instructor and peers throughout the learning week. To motivate engaged discussion, posts are expected to be on time, with regular interaction throughout the week. All posts should demonstrate college-level writing skills. To promote vibrant discussion as we would in a face-to-face classroom, formatted citations and references are not required. Quotes should not be used at all, or should be used sparingly. If you quote a source, quotation marks should be used and an APA-formatted citation and reference provided.
Not Participating (0%)
Comprehension of course materials
Initial post demonstrates rich comprehension of course materials. Detailed use of terminology or examples learned in class. If post includes opinion, it is supported with evaluated evidence.
Initial post demonstrates clear comprehension of course materials. Use of terminology or examples learned in class. If post includes opinion, it is supported with evaluated evidence.
Initial post demonstrates some comprehension of course materials. Specific terminology or examples learned in class may be incorrect or incomplete. Post may include some opinion without evaluated evidence.
Initial post does not demonstrate comprehension of course materials. Specific terminology or examples learned in class are not included. Post is opinion based without evaluated evidence.
No posting, post is off topic, post does not meet minimum criteria for demonstrating beginning level of comprehension. Post may be plagiarized, or use a high percentage of quotes that prevent demonstration of student’s comprehension.
Real world application of knowledge
Initial post demonstrates that the learner can creatively and uniquely apply the concepts and examples learned in class to a personal or professional experience from their life or to a current event.
Initial post demonstrates that the learner can apply the concepts and examples learned in class to a personal or professional experience from their life or to a current event.
Initial post does not clearly demonstrate that the learner can apply the concepts and examples learned in class. Unclear link between the concepts and examples learned in class to personal or professional experience or to a current event.
Initial post does not demonstrate that the learner can apply the concepts and examples learned in class. No link to a personal or professional experience or to a current event is made in the post.
No posting, post is off topic, post does not meet minimum criteria for demonstrating beginning level of application. Post may be plagiarized, or use a high percentage of quotes that prevent demonstration of student’s ability to apply comprehension.
Active Forum Engagement and Presence
Learner posts 4+ different days in the learning week.
Replies to at least one response from a classmate or instructor on the learner’s initial post to demonstrate the learner is reading and considering classmate responses to their ideas.
Posts two or more 100+ word responses to initial posts of classmates. Posts motivate group discussion and contribute to the learning community by doing 2+ of the following:
- offering advice or strategy
- posing a question
- providing an alternative point of view
- acknowledging similar experiences
- sharing a resource
Learner posts 3 different days in the learning week.
Posts two 100+ word responses to initial posts of classmates. Posts motivate group discussion and contribute to the learning community by doing 2+ of the following:
- offering advice or strategy
- posing a question
- providing an alternative point of view
- acknowledging similar experiences
- sharing a resource
Learner posts 2 different days in the learning week.
Posts one 100+ word response to initial post of classmate. Post motivates group discussion and contributes to the learning community by doing 1 of the following:
- offering advice or strategy
- posing a question
- providing an alternative point of view
- acknowledging similar experiences
- sharing a resource
Learner posts 1 day in the learning week.
Posts one 100+ word response to initial post of classmate. Post does not clearly motivate group discussion or clearly contribute to the learning community.
Responses do none of the following:
- offer advice or strategy
- pose a question
- provide an alternative point of view
- acknowledge similar experiences
- share a resource
Learner posts 1 day in the learning week, or posts are not made during the learning week and therefore do not contribute to or enrich the weekly conversation.
No peer responses are made. One or more peer responses of low quality (“good job, I agree”) may be made.
Post is 250+ words. All posts reflect widely accepted academic writing protocols like using capital letters, cohesive sentences, and no texting language. Dialogue is also polite and respectful of different points of view.
Post is 250+ words. The majority of posts reflect widely-accepted academic writing protocols like using capital letters, cohesive sentences, and no texting language. Dialogue is polite and respectful of different points of view.
Post is 175+ words. The majority of posts reflect widely-accepted academic writing protocols like using capital letters (“I am” not “i am”), cohesive sentences, and no texting language. Dialogue may not be respectful of different points of view.
Post is 150+ words. The majority of the forum communication ignores widely accepted academic writing protocols, lacking capital letters and cohesive sentences or relying on texting language. Dialogue may not be respectful of different points of view.
No posting, post is off topic and does not meet minimum criteria for demonstrating beginning level of comprehension.
CHAPTER 14: THE STATE, WAR, AND TERROR

IN THIS CHAPTER
- The Modern State
- Theories of State Power
- Power and Authority
- Forms of Governance in the Modern World
- The U.S. Political System
- War, State, and Society
- Terrorists and Terrorism
- Why Study the State and Warfare Through a Sociological Lens?

WHAT DO YOU THINK?
1. What means do states have to maintain control over their populations? What means do citizens have to maintain control over their own governance?
2. Why do states go to war? What are the functions and dysfunctions of war? Is war inevitable?
3. It is sometimes said that “one person’s terrorist is another person’s freedom fighter.” What does this statement mean? Do you agree with this statement?

THE BIRTH AND DEATH OF COUNTRIES

How is a country born? Countries come into being in a variety of ways. They may be established through armed conflict or diplomatic negotiation. They may be the products of indigenous ethnic groups striving for their own states or the results of colonial powers drawing borders that suit their political and economic interests. The 195 recognized countries in the world today are the products of a spectrum of historical times and events. The world’s newest recognized country is South Sudan, which came into being in 2011 after a decades-long civil war with the state from which it separated, Sudan. Other countries have much longer histories: Both Japan and China claim origins that date back more than 2,000 years. China marks 221 B.C. as its founding year.
European states such as France, Austria, Denmark, and Hungary are more than five centuries old. The United States as a political entity came into being only in the 18th century after separating from its colonial parent, Great Britain, putting it in the large category of fairly young states.

Just as countries are born out of the circumstances and interests of their times, they may also die, torn apart by political turmoil, economic collapse, or armed conflict. In 1991, the Soviet Union, a country that officially came into being in 1922 as the successor state to a fallen Russian Empire, split into 15 different countries. While there was some violence in the last months of the Soviet Union’s existence as those with a stake in its future fought for the continuation of the communist state, the split was largely peaceful, and the enormous country (it covered 11 time zones) was dissolved with signatures on paper rather than lethal weapons.

In 2014, the country of Iraq appeared to be on the verge of dissolution. Iraq as a recognized state came into existence in 1920, after the collapse of the Ottoman Empire. It was, like many states of its time, created by a colonial power, Britain, which drew Iraq’s boundaries based on political expediency rather than along ethnic, religious, or tribal community lines. Britain continued to administer the country until it gained independence in 1932. From 1979, Iraq was under the rule of Saddam Hussein, a brutal dictatorial leader whose Baath Party favored the interests of Sunni Muslims over the more numerous Shiite Muslims and minority Kurds in the country. In 1990, Saddam’s Iraqi forces invaded the neighboring country of Kuwait, a U.S. ally. A large-scale U.S.-led effort, now known as the First Gulf War, commenced to liberate Kuwait, and Iraqi forces were compelled to withdraw in February 1991. Relations between the United States and Iraq continued to be tense, and U.S. forces invaded Iraq in 2003 after President George W.
Bush accused the country of possessing weapons of mass destruction (which were not subsequently found and the existence of which has not been definitively shown). The U.S. occupation of Iraq followed shortly after U.S. forces invaded Afghanistan and deposed the Taliban rulers who were believed to harbor international terrorist Osama bin Laden. The military action against Iraq was commonly understood to be part of the “global war on terror” that the United States began after the terrorist attacks of September 11, 2001, on the World Trade Center and the Pentagon. The large-scale U.S. military presence in Iraq ended in 2011. Iraq, already politically fragile, remained unstable, wracked by tension between Sunni and Shiite communities that was exacerbated by an Iraqi government dominated by Shiite leaders who were reluctant to share power with Iraq’s Sunni and Kurdish populations. In 2014, the addition of a new source of strife, the aggressive presence of a radical Sunni terror organization called ISIS (Islamic State of Iraq and Syria; also known as ISIL, for Islamic State of Iraq and the Levant, or simply as the Islamic State), threatened to fragment the country into Shiite-dominated, Sunni-dominated, and Kurdish regions and, potentially, to end the existence of a governable political entity called Iraq. The world has seen the births and deaths of hundreds of states. Some have endured for centuries or even millennia—others have lasted just a few decades. Countries are core parts of the modern world, and they represent key vehicles for the exercise of power domestically and globally. They are also human-created entities and are subject to dramatic and dynamic change, which makes them a topic of interest to sociologists. We begin this chapter with a discussion of power and the modern nation-state and an examination of citizenship rights and their provision. We then look at theoretical perspectives on state power, its exercise, and its beneficiaries. 
A consideration of types of authority and forms of governance in the modern world provides the background for an examination of the U.S. political system. We then turn to a discussion of war and society and an analysis of war from the functionalist and conflict perspectives. This is followed by a critical look at the issue of terrorism, as well as the question of defining who is a terrorist. We conclude with a consideration of the question of why we study state power and its manifestations in phenomena that range from elections to making war.

THE MODERN STATE

For most of human history, people lived in small and homogeneous communities within which they shared languages, cultures, and customs. Today, however, the world’s more than 7 billion people are distributed across 195 countries (the number of countries recognized by the United States, though other entities exist that claim statehood, including Palestine and Kurdistan). On the world stage, countries are key actors: They are responsible for war and peace, the economic and social welfare of their citizens, and the quality of our shared global environment and security, among other things. Social scientists commonly characterize the modern country as a nation-state—that is, a single people (a nation) governed by a political authority (a state) (Gellner, 1983). Very few countries neatly fit this model, however. Most are made up of many different peoples brought together through warfare, conquest, immigration, or boundaries drawn by colonial authorities without respect to ethnic or religious differences of the time. For instance, many Native Americans think of themselves as belonging to the Navajo, Lakota, Pawnee, or Iroquois nation rather than only to the United States. In Nigeria, most people identify primarily with others who are Yoruba, Ibo, or Hausa rather than with a country called Nigeria. In Iraq, a shared Iraqi identity is less common than allegiance to the Sunni or Shia Muslim or the Kurdish community.
Because most political entities are not characterized by the homogeneity implied by the definition of nation-state, we will use the more familiar terms country and state rather than nation-state. While not all countries possess them in equal measure, the characteristics we list below represent what Max Weber would term an ideal-typical model, a picture that approximates but does not perfectly represent reality. Modern countries emerged along with contemporary capitalism, which benefited from strong central governments and legal systems that regulated commerce and trade both within and across borders (Mann, 1986; Wallerstein, 1974; Weber, 1921/1979). This history accounts for several unique features of modern countries that distinguish them from earlier forms of political organization:

• Underlying the social organization of the modern country is a system of law, the codified rules of behavior established by a government and backed by the threat of force (Chambliss & Seidman, 1982). The rule of law is a critical aspect of democratic governance.
• The governments of modern countries claim complete and final authority over the people who reside within the countries’ borders (Hinsley, 1986).
• People living within a country’s borders are divided between citizens, individuals who are part of a political community in which they are granted certain rights and privileges and, at the same time, have specified obligations and duties, and noncitizens, people who are temporary or permanent residents and who do not have the same rights and privileges as citizens (Held, 1989).

Obtaining citizenship is a dream for many immigrants to the United States. Every year, thousands take the Naturalization Oath of Allegiance to the United States.

Citizenship rights may take several forms.
Civil rights, which protect citizens from injury by individuals and institutions, include the right to equal treatment in places such as the school or workplace regardless of race, gender, sexual orientation, or disability. Political rights ensure that citizens can participate in governance, whether by voting, running for office, or openly expressing political opinions. Social rights, which call for the governmental provision of various forms of economic and social security, include such things as retirement pensions and guaranteed income after losing a job or becoming disabled. Citizens are afforded legal protections from arbitrary rule and in turn are expected to pay taxes, to engage in their own governance through voting or other activities, and to perform military service (with specific expectations of such service varying by country). In reality, the extent to which all people enjoy the full rights of citizenship in any given country varies. Below we discuss two specific aspects of citizenship rights—the first relates to the evolution of state provisions for ensuring social rights, the second to the degree to which citizens enjoy freedom in the form of political rights and civil liberties.

THE WELFARE STATE

In most modern countries, political and civil rights evolved and were institutionalized before social rights were realized. The category of social rights is broad and encompasses entitlements that include health care insurance, old-age pensions, unemployment benefits, a minimum wage floor for workers, and a spectrum of other benefits intended to ensure social and economic security for the citizenry. Social rights have largely been won by groups of citizens mobilizing (on the basis of the civil and political rights they enjoy) to realize their interests. Social rights are often embodied in what is termed the welfare state, a political order characterized by the broad provision of social and economic welfare benefits for the citizenry.
The welfare state has been a part of Western systems of governance in the post–World War II period. In the United States, Social Security, a key social welfare program that ensures an income for retired workers, and Medicare, which provides for at least basic health care for the elderly, are examples of the U.S. welfare state. Social Security was created through the Social Security Act of 1935, signed by President Franklin D. Roosevelt, which established a social insurance program to provide continuing income for retired workers at age 65 or older (Social Security Administration, 2013). Although the provision of basic health care coverage was the vision of President Harry S. Truman, Medicare and Medicaid did not get signed into law until a few decades later, when they were endorsed by President Lyndon B. Johnson. The welfare state has long been a hallmark of advanced and wealthy countries; most developing states have much thinner social safety nets, and few have had the resources to provide for retirees or even, often, for the unemployed or ill.

President Franklin D. Roosevelt signed Social Security into law in 1935. Social Security benefits have contributed to the reduction of poverty among seniors in the United States.

While it has come to represent a culmination of the three key rights of citizenship in modern countries, the welfare state is shrinking rather than expanding today. Factors such as global wage competition and growing debt loads borne by countries from the United States to Sweden to France and beyond have hampered the expansion of social rights.
While it is difficult to reduce benefits to already existing constituencies—for example, in the United States, discussion of reducing Social Security payments or even raising the age of eligibility evokes vehement protest from retirees—the economic crises of recent years have led to attempts by lawmakers to curb benefits to less powerful constituencies, including the poor and immigrants.

POLITICAL RIGHTS AND CIVIL LIBERTIES

Every year, Freedom House, an organization dedicated to monitoring and promoting democratic change and human rights, publishes an evaluation of “freedom” in 195 countries, as well as 14 related and disputed territories (Figure 14.1). The report includes ratings that measure political rights (based on the electoral process, political pluralism, and participation) and civil liberties (based on freedom of expression and belief, rights of association and organization, rule of law, and individual rights). Even in the United States, which earns top scores in Freedom House’s survey, problems with voting procedures have raised the question of whether some voters, especially minorities, have been denied a political voice in elections. Nationwide, more than 5 million U.S. citizens, including more than 4 million who are not in prison, are barred from voting due to felony convictions. In Alabama and Florida, for instance, nearly one third of African American men are permanently disenfranchised (Froomkin, 2010). Recently, some U.S. state legislatures have sought to implement voter identification laws, which require voters to show identification when they vote in person. In 2014, 11 states had such laws, and 8 of those required the ID to include a photograph (Figure 14.2).

Voter ID laws created controversy in the 2012 federal election campaign.
While about three quarters of the population expressed support for the laws, critics pointed out that their implementation risked disenfranchising the populations least likely to have government-issued ID, including the elderly, minorities, and the poor. The practical effect of voter ID laws, critics say, is not to combat voter fraud (which has been infrequently documented in the United States), but rather to disenfranchise minorities and the poor (Cohen, 2012). What do you think about voter ID laws? Do they bring greater benefits or costs?

Later in this chapter, we discuss more fully the issue of political voice, or the representation of a group’s interests in bodies such as state and national legislatures, and ways in which political voice may vary. In the next section we turn to broader sociological perspectives on state power, which can give us a fuller context for analyzing contemporary debates on power and politics.

FIGURE 14.1 Freedom Status Worldwide, 2014 (SOURCE: Freedom House, 2014, Freedom in the World)

THEORIES OF STATE POWER

Sociologists have developed different approaches to explaining how authority is exercised in modern states. Below we highlight two theoretical approaches, which disagree about how expansively power is shared. These theories are based largely on governance in the United States today, although we can also apply them to other modern societies.

THE FUNCTIONALIST PERSPECTIVE AND PLURALIST THEORY

Classical sociologist Émile Durkheim (1922/1956, 1922/1973b) saw government as an institution that translates broadly shared values and interests into fair-minded laws and effective policies.
Contemporary functionalist theorists recognize that modern societies are socially and culturally heterogeneous and likely to have a greater diversity of needs and perspectives. The government, they suggest, is a neutral umpire, balancing the conflicting values, norms, and interests of a variety of competing groups in its laws and actions. In the United States, most people agree on such general values as liberty, democratic governance, and equality of opportunity, but debate occurs on many more specific issues, such as abortion, the death penalty, the waging of war, funding for medical care when people are old or ill, and the size of government. Recognizing the pluralistic—that is, diverse—nature of contemporary societies, sociologists and political scientists have developed theories of government that highlight this aspect of state power. Pluralist theory tries to answer the question “Given that modern societies are pluralistic, how do they resolve the inevitable conflicts?” To answer this question Robert Dahl (1961, 1982, 1989) studied decision making in New Haven, Connecticut. Dahl (1961) concludes that power is exercised through the political process, which is dominated by different groups of leaders, each having access to a different amalgamation of political resources. Dahl argues that, in their efforts to exert political influence, individuals come together in interest groups—groups made up of people who share the same concerns on particular issues and who use their organizational and social resources to influence legislation and the functioning of social institutions. An interest group may be short-lived, such as a local citizens’ group that bands together to have a road repaved or a school built, or long-lasting, such as a labor union or a manufacturers’ association.

FIGURE 14.2 Voter Identification Laws, 2014 (SOURCE: National Conference of State Legislatures, 2014, Voter identification requirements: Photo ID laws; reprinted with permission)

Dahl’s theory asserts that interest groups serve the function of seeing that everyone’s perspectives (values, norms, interests) are represented in the government. The influence of one group is offset by the power of another. For instance, if a group of investors bands together to seek government approval to clear-cut a forest to build homes, a group of citizens concerned about the environment may coalesce into an interest group to oppose the investors’ plan. The ideal result, according to Dahl, would be a compromise: Perhaps cutting would be limited, or some particularly sensitive areas would be preserved. Similarly, big businesses and organized labor routinely face off, so neither exercises disproportionate influence on the political process. When powerful interests oppose one another, pluralists see compromise as the likely outcome, and the role of the government is therefore to broker solutions that benefit as many interests as possible. In this view, power is dynamic, passing from one stakeholder to another over time rather than being concentrated in the hands of a powerful few. This competition and fluidity of power contribute to democratic governance and society. A critical view of the pluralist characterization of political power points out that government is unlikely to represent or recognize all interests (Chambliss & Seidman, 1982; Domhoff, 2006). Critics also dispute the assumption that government is a neutral mediator between competing interests. They argue that laws may favor some groups over others: For example, when the U.S. Constitution was framed by White male property owners, only White male property owners could vote; people without property, women, Blacks, and American Indians were excluded from the political process.
In addition, some interest groups are more powerful than others. Governments do not apply the rules neutrally, as the theory claims; rather, they interpret (or even bend) the rules to favor the most powerful groups in society, including big business and other moneyed interests that finance increasingly costly political campaigns for those who favor them (Domhoff, 2006, 2009; Friedman, 1975, 1990). As a consequence of all these factors, critics contend, even if laws are passed to protect the values and interests of people who are not economically powerful, the laws may not be actively and forcefully implemented or their reach may be limited. For example, de facto racial segregation and discrimination continue to exist even though school segregation has been outlawed in the United States since 1954 and most other forms of racial discrimination became illegal in the decade that followed.

THE CONFLICT PERSPECTIVE AND CLASS DOMINANCE THEORY

Social conflict theory highlights power differences between social groups. This perspective recognizes that modern societies are pluralistic, but it argues that the interests of social groups are often incompatible with one another. Further, conflict theory posits that some groups are more powerful than others and are therefore more likely to see their interests, values, and norms reflected in government policies and laws. Groups with greater resources use their power to create systems of law, economy, politics, and education that favor them, their children, and other group members. Unlike pluralist theory, which views competing interests as having relatively equal and shifting opportunities and access to power, conflict theory sees power as being concentrated in the hands of a privileged few groups and individuals. The gains of the elite, conflict theorists suggest, come at the expense of those who have fewer resources, including economic, cultural, and social capital. The roots of conflict theory are found in the ideas of Karl Marx.
You may recall from previous chapters that Marx believed the most important sources of social conflict are economic, and that, as a consequence, class conflict is fundamental to all other forms of conflict. Within a capitalist society, in Marx’s view, government represents and serves the interests of the capitalist class or bourgeoisie, the ruling class that exerts disproportionate influence on the government. Still, a well-organized working class can effectively press government for such economic reforms as a shorter working day or the end of child labor.

Contemporary conflict theory extends Marx’s concept of the ruling class to include contemporary groups that wield considerable power. Class dominance theory, for instance, argues that power is concentrated in the hands of a relatively small number of individuals who compose a power elite (Domhoff, 1983, 1990, 2002; Mills, 1956/2000a). These individuals have often attended the same elite schools, belong to the same social organizations, and cycle in and out of top positions in government, business, and the military (the so-called revolving door). Class dominance theory complements Marx’s original ideas with a focus on the elite social networks themselves, rather than only on capitalism as a political economic system. G. William Domhoff (2002, 2006, 2009) posits that we can show the existence of a dominant class by examining the answers to several basic questions, which he terms “power indicators”: Who benefits? Who governs? Who wins?

• In terms of “Who benefits?” Domhoff asks us to consider who gets the most of what is valued in society. Who gets money and material goods? What about leisure and travel or status and prestige? Domhoff (2006) asserts that “those who have the most of what people want are, by inference, the powerful” (p. 13).
• In terms of “Who governs?” he asks who is positioned to make the important political, economic, and legal decisions in the country or community. Are all demographic groups relatively well represented? Are some disproportionately powerful? Domhoff (2006) suggests, “If a group or class is highly overrepresented or underrepresented in relation to its proportion of the population, it can be inferred that the group or class is relatively powerful or powerless, as the case may be” (p. 14).

• Asking “Who wins?” entails inquiring about which group or groups have their interests realized most often. Domhoff concedes that movements with fewer resources, including, for instance, environmental groups, may “win” desired legislation sometimes. However, we need to look at who has their desires realized most consistently and often. Is it small interest groups representing civil rights, environmental activists, or same-sex marriage advocates? Is it large corporate interests with friends in high places and the ability to write big campaign donation checks?

After examining the power indicators in his book Who Rules America?, Domhoff concludes that it is the upper class, particularly the owners and managers of large for-profit enterprises, that benefits, governs, and wins. This, he suggests, challenges the premise of pluralist theories that power is not concentrated but dynamic, moving among a variety of interests and groups. Domhoff argues that there exists a small but significant power elite, which is made up of individuals who are linked to the social upper class, the corporate community, and the policy-formation organizations that influence government policy.
Though the corporations, organizations, and individuals who make up the power elite may be divided on some issues, Domhoff contends that cooperation is stronger than competition among them: The members of the power elite are united by a common set of interests, including a probusiness and antiregulation environment, and common enemies, including environmentalists and labor and consumer activists. From a critical perspective, class dominance theory tends to overemphasize the unified nature of the “ruling class.” For instance, Domhoff highlights the fact that many members of the power elite share similar social backgrounds. Often, they attend the same private high schools and colleges, spend their vacations in the same exclusive resorts, and marry into one another’s families. They share a strong belief in the importance and value of capitalism and, as Domhoff argues, are steeped in a similar set of worldviews. However, it is difficult to show that they necessarily share the same political beliefs or even economic orientations (Chambliss & Seidman, 1982; Chambliss & Zatz, 1994). Further, government decisions sometimes appear to be in direct opposition to the expressed interests of powerful capitalist groups. For example, when faced with major conflicts between labor and management during the Depression years of the 1930s, the U.S. government passed laws legalizing trade unions and giving workers the right to bargain collectively with their corporate employers, even though both laws were strongly opposed by corporation executives and owners (Chambliss & Zatz, 1994; Skocpol, 1979; Tilly, 1975). Mark Smith (2000) found that when businesses act to influence public policy to support or oppose a given issue, they may experience backlash as labor and public interest groups organize in opposition to the perceived power seizure. Does a power elite exercise disproportionate influence in the political sphere of the United States? 
Domhoff and other conflict theorists would answer in the affirmative. A pluralist perspective might see corporations, the upper class, and policy organizations as some among many players who compete in the political power game, balanced by other groups such as unions and environmentalists, and answer in the negative. What do you think?

In Great Britain, traditional authority peacefully and functionally coexists with rational-legal authority. In this photo, British Prime Minister David Cameron, who is the head of government, shakes hands with Queen Elizabeth II, who is the constitutional monarch.

POWER AND AUTHORITY

In the section above we looked at different perspectives on how state power functions and whom it serves. We now turn to the question of how states exercise their power in practice, asking, “How do governments maintain control over their populations?” One way that states exercise power is through outright coercion—the threat or use of physical force to ensure compliance. Relying solely on coercion, however, is costly and difficult because it requires surveillance and sometimes suppression of the population, particularly those segments that might be inclined to dissent. Governments that ground their authority in coercion are vulnerable to instability, as they generally fail to earn the allegiance of their people. It is more efficient, and in the long run more enduring, if a government can establish legitimate authority, which, as you recall from Chapter 5, is power that is recognized as rightful by those over whom it is exercised. One of sociology’s founders, Max Weber (1864–1920), was also one of the first social scientists to analyze the nature of legitimate authority, and his ideas have influenced our understanding of power and authority in the modern world.
Weber sought to answer the question, “Why do people consent to give up power, allowing others to dominate them?” His examination of this question, which was based on detailed studies of societies throughout history, identified three key forms of legitimate authority: traditional, rational-legal, and charismatic.

TRADITIONAL AUTHORITY

For most of human history, state power relied on traditional authority, power based on a belief in the sanctity of long-standing traditions and the legitimate right of rulers to exercise authority in accordance with these traditions (Weber, 1921/1979). Traditional rulers claim power on the basis of age-old norms, beliefs, and practices. When the people being governed accept the legitimacy of traditional authority, it tends to be relatively stable over time. The monarchies of Europe, for example, ruled for hundreds of years based on traditional authority. Their people were considered the king’s or queen’s “subjects,” whose loyalty derived from their recognition of the fundamental legitimacy of monarchical rule, with its long-standing hierarchy and distribution of power on the basis of blood and birth. In modern Europe, however, monarchies such as those in Denmark and Sweden have little more than symbolic power—they have been largely stripped of political power. Traditional authority, in these instances, coexists with the rational-legal authority exercised by modern elected bodies, which we discuss below. Traditional authority supports the exercise of power at both the macro and micro levels: Just as reverence for traditional norms and practices may give legitimacy to a state, a religion, or other government, so too may it drive the decisions and actions of families.
If a family marks a particular holiday or date with an obligatory ritual, then even if an individual questions the need for that ritual, the reasoning “We’ve always done that” is a micro-level exercise of traditional authority that ensures compliance and discourages challenges by any member of the group.

RATIONAL-LEGAL AUTHORITY

Traditional authority, in Weber’s view, was incompatible with the rise of modern capitalist states. Capitalism is based on forms of social organization that favor rational, rule-governed calculation rather than practices grounded in tradition. As capitalism evolved, traditional authority gave way to rational-legal authority, power based on a belief in the lawfulness of enacted rules (laws) and the legitimate right of leaders to exercise authority under such rules (Weber, 1921/1979). The legitimacy of rational-legal authority derives from a belief in the rule of law. We do something not simply because it has always been done that way but because it conforms to established rules and procedures. In a system based on rational-legal authority, leaders are regarded as legitimate as long as they act according to law. Laws, in turn, are enacted and enforced through formal, bureaucratic procedures, rather than reflecting custom and tradition or the whims of a ruler. Weber argued that rational-legal authority is compatible with modern economies, which are based on rational calculation of costs and benefits, profits, and other economic decisions. Rational-legal authority is commonly exercised in the ideal-typical modern state described at the start of this chapter. In practice, we could take the United States, Canada, Japan, or the countries of the European Union as specific examples of states governed by rational-legal authority.

CHARISMATIC AUTHORITY

Weber’s third form of authority can threaten both traditional and rational-legal authority.
Charismatic authority is power based on devotion inspired in followers by the personal qualities of a leader (Weber, 1921/1979). It derives from widespread belief in a given community that an individual has a “gift” of great—even divine—powers, so it rests most significantly on an individual personality, rather than on that individual’s claim to authority on the basis of tradition or legal election or appointment. Charismatic authority may also be the product of a “cult of personality,” an image of a leader that is carefully manipulated by the leader and other elites. In North Korea’s long-standing dictatorship, power has passed through several generations of the same family, as has the government’s careful construction of a cult of personality around each leader that elevates him as supremely intelligent, patriotic, and worthy of unquestioning loyalty. Prominent charismatic leaders in religious history include Moses, Jesus Christ, the Prophet Muhammad, and Buddha. Some military and political rulers whose power was based in large part on charisma are Julius Caesar, Napoleon Bonaparte, Vladimir Lenin, and Adolf Hitler (clearly, not all charismatic leaders are charitable, ethical, or good). More recently, charismatic leaders have emerged to lead communities and countries toward democratic development. Václav Havel, a dissident playwright in what was then communist Czechoslovakia, challenged the authority of a government that was not elected and that citizens despised; he spent years in prison and doing menial jobs because he was not permitted to work in his artistic field. Later, he helped lead the 1989 opposition movement against the government and, with its fall, was elected president of the newly democratic state. (Czechoslovakia no longer exists; its Czech and Slovak populations wanted to establish their own nation-states, and in 1993 they peacefully formed two separate republics.)
Similarly, Nelson Mandela, despite spending 27 years in prison, was a key figure in the opposition movement against the racist policies of apartheid in South Africa. Mandela became South Africa’s first democratically elected president in 1994, 4 years after his release from prison. His death in 2013 marked the passing of a significant era in South African politics. Notably, Weber also pointed to a phenomenon he termed the routinization of charisma. That is, with the decline, departure, or death of a charismatic leader, his or her authority may be transformed into legal-rational or even traditional authority. While those who follow may govern in the charismatic leader’s name, the authority of successive leaders rests either on emulation (traditionalized authority) or on power that has been routinized and institutionalized (rationalized authority). Authority exists in a larger context, and political authority is often situated in a government. Governance takes place in a variety of forms, which we discuss below.

FORMS OF GOVERNANCE IN THE MODERN WORLD

In the modern world, the three principal forms of governance are authoritarianism, totalitarianism, and democracy. Below we discuss each type, offering ideal-typical definitions as well as illustrative examples. Two modern global trends in governance are the growth of rational-legal authority and the spread of representative democracy.

AUTHORITARIANISM

Under authoritarianism, ordinary members of society are denied the right to participate in government, and political power is exercised by and for the benefit of a small political elite. At the same time, authoritarianism is distinguished from totalitarianism (which we will discuss shortly) by the fact that at least some social, cultural, and economic institutions exist that are not under the control of the state. Two prominent types of authoritarianism are monarchies and dictatorships.
Monarchy is a form of governance in which power resides in an individual or a family and is passed from one generation to the next through hereditary lines. Monarchies, which derive their legitimacy from traditional authority, were historically the primary form of governance in many parts of the world, and in Europe until the 18th century. Today, the formerly powerful royal families of Europe have been either dethroned or relegated to peripheral and ceremonial roles. For example, the queens of England, Denmark, and the Netherlands and the kings of Sweden and Spain do not have any significant political power or formal authority to govern. A few countries in the modern world are still ruled by monarchies, including Saudi Arabia, Jordan, Qatar, and Kuwait. Even the monarchs of these nations, however, govern with the consent of powerful religious and social or economic groups.

According to a BBC report, “Every night, North Korea’s news bulletin begins with a song about the mythical qualities of the country’s leader Kim Jong-il” (Williamson, 2011). Political and military leaders in this isolated totalitarian state nurture the image of the late “Dear Leader” and the son who succeeded him.

In Saudi Arabia, for instance, the royal family, also known as the House of Saud, has ruled for centuries, though the country of Saudi Arabia itself was established only in 1932. The Basic Law of 1992 declared Saudi Arabia to be a monarchy ruled by the sons and grandsons of King Abdul Aziz al-Saud, making the country the only one in the world named after a family. The constitution of the country is the Koran, and, consequently, sharia law, which is based on Islamic traditions and beliefs, is in effect. The country does not hold national elections, nor is the formation of independent political parties permitted.
However, the royal rulers govern within the bounds of the constitution, tradition, and the consent of religious leaders, the ulema. A more modern form of authoritarianism is dictatorship, a form of governance in which power rests in a single individual. An example of an authoritarian dictatorship is the government of Iraqi president Saddam Hussein, who ruled his country from 1979 to 2003, when he was deposed. As this case shows, the individual in power in a dictatorship is actually closely intertwined with an inner circle of governing elites. In Iraq, Saddam was linked to the inner circle of the Ba’ath Party. Further, because of the complexity of modern society, even the most heavy-handed authoritarian dictator requires some degree of support from military leaders and an intelligence apparatus. No less important to the dictator’s power is the compliance of the masses, whether it is gained through coercion or consent. We might argue that today it would be difficult for a single individual or even a handful of individuals to run a modern country effectively for any length of time. In recent years, many dictators have been deposed by foes or ousted in popular revolutions. Some have even allowed themselves to be turned out of office by relatively peaceful democratic movements, as happened in the former Soviet Union and the formerly communist states of Eastern Europe, such as Czechoslovakia and Hungary. China, which has become progressively more capitalistic while retaining an authoritarian communist government, remains an exception to this pattern.

TOTALITARIANISM

When authoritarian dictatorships persist and become entrenched, the end result may be a totalitarian form of government. Totalitarianism denies popular political participation in government and goes one step further by seeking to regulate and control all aspects of the public and private lives of citizens. In totalitarianism, there are no limits to the exercise of state power.
All opposition is outlawed, access to information not provided by the state is stringently controlled, and citizens are required to demonstrate a high level of commitment and loyalty to the system. A totalitarian government depends more on coercion than on legitimacy in exercising power. It thus requires a large intelligence apparatus to monitor the citizenry for antigovernment activities and to punish those who fail to conform. Members of the society are urged to inform on any of their fellow members who break the rules or criticize the leadership. One characteristic shared by totalitarian regimes of the 20th century was a ruthless commitment to power and coercion over the rule of law. Soviet leader Vladimir Lenin has been quoted as stating that “the dictatorship—and take this into account once and for all—means unrestricted power based on force, not on law” (Amis, 2002, p. 33). Another characteristic of these regimes was a willingness to destroy the opposition by any means necessary. Joseph Stalin’s regime in the Soviet Union, which lasted from 1922 to 1953, purged millions of perceived, potential, or imagined enemies; Stalin’s Great Terror tore apart the ranks of even the Soviet military apparatus. Martin Amis (2002) cites the following statistics in characterizing Stalin’s “war” on his own military: From the late 1930s to about 1941, Stalin purged 3 of 5 marshals, 13 of 15 army commanders, 154 of 186 divisional commanders, and at least 43,000 officers lower down the chain of command (p. 175). An often-told story about Stalin cites him telling his political inner circle that each should find two replacements for himself. Perhaps more than any other political system, totalitarianism is built on terror and the threat of terror—including genocide, imposed famine, purges, deportation, imprisonment, torture, and murder. Fear keeps the masses docile and the dictator in power.
Torture has a long, brutal history in the dictatorships of the world, and, in trying to uncover its function, Amis (2002) makes the compelling observation that “torture, among its other applications, was part of Stalin’s war against truth. He tortured, not to force you to reveal a fact, but to force you to collude in a fiction” (p. 61).

Nazi Germany, the Soviet Union under the leadership of Lenin and later Stalin, Chile under Augusto Pinochet, and the Spain of Francisco Franco were examples of totalitarian regimes. This image of “Big Brother,” a symbol of totalitarianism’s penetration of private as well as public life, comes from the film version of George Orwell’s classic book, 1984.

Today, few totalitarian states exist. Certainly, in the age of the Internet, control of information is an enormous challenge to states that seek full control of their populations. North Korea remains one of the last remaining totalitarian states, where the citizenry is nearly fully isolated from the rest of the world and few North Koreans outside the elite have electricity or enough nutritious food, not to speak of Internet connections or computers.

DEMOCRACY

Democracy, a form of governance in which citizens are able to participate directly or indirectly in their own governance, literally means “the rule of the people.” (The word comes from the Greek demos, “the people,” and kratos, “rule.”) The concept of democracy originated in the Greek city-state of Athens during the fifth century B.C., where it took the form of direct democracy, in which all citizens fully participate in their own governance. This full participation was possible because Athens was a small community by today’s standards and because the vast majority of its residents (including women and slaves, on whose labor the economy relied) were excluded from citizenship (Sagan, 1992).
Direct democracy is rarely possible today because of the sheer size of most countries and the complexity of their political affairs. One exception is the referendum process that exists in some U.S. states, including California and Oregon. In Oregon, for instance, the signatures of a specified percentage of registered voters can bring a referendum to the ballot. In 2012, Oregonians voted on Ballot Measure 80, which sought to decriminalize personal marijuana usage. Although backers of the measure met the signature requirement to have the initiative placed on the ballot, the measure was ultimately defeated by state voters (CNN, 2012). Democracy in the modern world more typically takes the form of representative democracy, a political system in which citizens elect representatives to govern them. In a representative democracy, elected officials are expected to make decisions that reflect the interests of their constituents. Representative democracy first took hold in the industrial capitalist countries of Europe. It is now the principal form of governance throughout the world, although some parts of the populations in democratic states may be disenfranchised. For instance, only in recent years have women been legally granted the right to vote in many countries (Figure 14.3). Some countries, such as China, the largest remaining authoritarian society, claim to have free elections for many government positions, but eligibility is limited to members of the Communist Party. Thus, even though voting is the hallmark of representative democracy, the mere fact of voting does not ensure the existence of a true democracy.

THE U.S. POLITICAL SYSTEM

Politics in democratic societies is structured around competing political parties whose purpose is to gain control of the government by winning elections.
Political parties serve this purpose by defining alternative policies and programs, building their membership, raising funds for their candidates, and helping to organize political campaigns. Not only must candidates win elections and retain their offices, but also, once in office, they must make decisions with far-reaching financial and social effects. These decisions ideally reflect the needs and desires of their constituents as well as the interests of their parties and the entities that contribute to their campaigns. Some politicians argue that their constituents’ issues take priority; other observers suggest that politicians are beholden to party or donor interests.

FIGURE 14.3 When Women Won the Right to Vote in Selected Countries, 20th Century. SOURCE: Data from “Women’s Suffrage: When Did Women Vote?” Interactive Map. Scholastic.com.

In the section below we discuss electoral politics in the United States. Sociologists take an interest in electoral politics because it is an important site at which power in modern countries is exercised. Thus, key questions that sociologists ask—How is this functional for society? Who benefits from the existing social order? How do perceptions structure behaviors in the electoral process?—can be applied to electoral politics.

ELECTORAL POLITICS AND THE TWO-PARTY SYSTEM

Most modern democracies are based on a parliamentary system, in which the head of government (called a prime minister) is the leader of the party that has the largest number of seats in the national legislature (typically called a parliament). Britain, for example, has a parliamentary system. This arrangement can give a significant degree of influence to minority parties (those that have relatively few representatives in parliament), since the majority party often requires minority party support to pass legislation, or even to elect a prime minister.
In the United States, the president is chosen by popular vote—although, as happened in the 2000 presidential election, a candidate who wins the popular vote (in this case, Al Gore) cannot become president without also winning the requisite number of Electoral College votes. Voters in the United States have often chosen a president from one party and a Congress dominated by the other. The separate election of the president and Congress is intended to help ensure a separation of powers between the executive and legislative branches of the government. At the same time, it weakens the power of minority or third parties, since—unlike the case in parliamentary systems—they are unlikely to have much impact on who will be selected head of government. In Britain or Germany, by contrast, if a minority party stops voting with the majority party in parliament, its members can force a national election, which might result in a new prime minister. This gives minority parties potential power in parliament to broker deals that serve their interests. No such system exists in the United States, and, as a consequence, third parties play only a minor role in national politics. No third-party candidate has won a presidential election since Abraham Lincoln was elected in 1860.

The domination of national elections and elected positions by the Republican and Democratic Parties is virtually ensured by the current political order. Parties representing well-defined interests are ordinarily eliminated from the national political process, since there are few avenues by which they can exert significant political power. Unlike in many other democracies, there are no political parties in the United States that effectively represent the exclusive interests of labor, environmentalists, or other constituencies at the national level.
On the contrary, there is a strong incentive for political groups to support one of the two major political parties rather than to “waste” their votes on third parties that have no chance at all of winning the presidency; at most, such votes are generally offered as “protest” votes. Third parties can occasionally play an important, even a decisive, role in national politics, particularly when voters are unhappy with the two dominant parties. The presidential campaign of H. Ross Perot (who later founded the Reform Party) in 1992 was probably significant in taking votes away from Republican George H. W. Bush and helping Democrat Bill Clinton to win the presidential election. Perot, running at the head of his own party organization, won nearly 19% of the popular vote. In 2000, the situation favored Republicans, as Democrat Al Gore probably lost votes to Green Party candidate Ralph Nader; in some states, George W. Bush had fewer votes than Gore and Nader combined, but more than Gore alone. Bush won the electoral votes of those states.

VOTER ACTIVISM AND APATHY IN U.S. POLITICS

One consequence of the lack of political choices in the entrenched two-party system in the United States may be a degree of political apathy, reflected in voter turnouts that are among the lowest in the industrialized world. Among democracies, the United States scores in the bottom fifth when it comes to voter participation. Whereas many European countries typically have voter turnouts between 70% and 90% of eligible voters, in 2000, about 54% of the “voting eligible” U.S. population participated in the presidential election. The percentage has risen in subsequent presidential election years: Estimates put turnouts for 2004 at about 60% and 2008 at more than 61%. In 2012, about the same proportion of voting-eligible citizens participated, though differences exist by state: About 70% of New Hampshire voters turned out, while only about half of Arkansans opted to vote (U.S. Elections Project, 2013).
Historically, the proportion of eligible voters turning out for elections in the United States has varied by education (Figure 14.4), race and ethnicity (Figure 14.5), and age (Figure 14.6). Voters who are White, older, and more educated have historically had greater influence than other demographic groups on the election of officials and, consequently, on government policies. Interestingly, however, data suggest that President Obama’s 2012 reelection was driven in part by the votes of young people (60% of voters 18–29 cast a vote for Obama) and minorities (for example, about 70% of Latino voters cast a ballot for Obama; Pew Research Center for the People and the Press, 2012).

FIGURE 14.4 U.S. Voter Participation by Education Level, 2008–2012. SOURCE: U.S. Census Bureau. (2010). Voting and registration in the election of November 2008—Detailed tables; U.S. Census Bureau. (2011). Voting and registration in the election of November 2010—Detailed tables.

FIGURE 14.5 U.S. Voter Participation by Race and Ethnicity, 2008–2012. SOURCE: Same as Figure 14.4.

FIGURE 14.6 U.S. Voter Participation by Age, 2008–2012. SOURCE: Same as Figure 14.4.

The participation of young adults (18–29) has been higher in the last two presidential elections than in most that preceded them.
The volunteer in this picture is helping a college student register to vote so the student can vote via absentee ballot. Do you think increases in youth voting are a trend that will continue to grow? Though the votes of young people and minorities had a notable effect on the outcome of the 2012 presidential election, the lower proportions of these groups among voters prompt us to ask why people who are poor or working class, minority, and/or young are less likely to be active voters. One thesis is that voters do not turn out if they do not perceive that the political parties represent their interests (Delli Carpini & Keeter, 1996). Some of Europe’s political parties represent relatively narrow and specific interests. If lower- to middle-class workers can choose a workers’ party (for instance, the Labour Party in Britain or the Social Democratic Party in Germany), or environmentalists a Green Party (several European states, including Germany, have active Green Parties), or minority ethnic groups a party of their ethnic kin (in the non-Russian former Soviet states that are now democracies, Russians often have their own political parties), they may be more likely to participate in the process of voting. This is particularly likely if membership in the legislative body—say, a parliament—is proportionally allocated, in contrast to a “winner-takes-all” contest such as that in the United States. In the United States, the legislative candidate with the greater number of votes wins the seat; the loser gets nothing. In several European countries, including Germany, parties offer lists of candidates, and the total proportion of votes received by each party determines how many members of the list are awarded seats in parliament. 
In proportional voting, small parties that can break a minimum barrier (in Germany, it is 5% of the total vote) are able to garner at least a small number of seats and enjoy a political voice through coalition building, or by positioning themselves in the opposition. Consider the winner-takes-all system and the proportional division of electoral votes. Is one more representative of the “will of the people” than the other? What do you think? Some other reasons for low voter turnout among some demographic groups might be practical; low-wage workers may work two or more jobs and may not be able to visit polling places on the designated day of voting (for federal elections in the United States, the first Tuesday following the first Monday of November, usually between about 6:00–8:00 a.m. and 6:00–8:00 p.m.). In many European states, Election Day is a national holiday, and workers are given the day off to participate in the voting process. Recently, a growing number of U.S. states have offered early voting, extending the opportunity to vote by several days or even weeks at designated polling places. Oregon has allowed voting by mail since 1998, and other states have also begun to offer this alternative. Data suggest that these initiatives increase voter participation. However, in advance of the 2012 election, five states either passed or attempted to pass legislation that would shorten the time for early voting. Most of the laws involved curbing the number of days in which early voting could take place; for example, Florida reduced the early voting period from 14 days to 8 days. What about young people? In 1971, the Twenty-Sixth Amendment to the U.S. Constitution lowered the voting age from 21 to 18, giving 18- to 20-year-olds the right to vote. This age group has taken advantage of suffrage in relatively small numbers, however: 18- to 24-year-olds are less than half as likely as older citizens to cast ballots. 
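The contrast between winner-takes-all and proportional allocation described above can be made concrete with a short sketch. The following is a minimal illustration (not from the text) of one common proportional method, largest-remainder allocation, combined with a German-style 5% threshold; the party names and vote counts are hypothetical.

```python
def allocate_seats(votes, seats, threshold=0.05):
    """Largest-remainder seat allocation with a minimum-vote threshold."""
    total = sum(votes.values())
    # Parties below the threshold win no seats (cf. Germany's 5% barrier).
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    etotal = sum(eligible.values())
    # Each party's exact (fractional) share of the seats.
    quotas = {p: seats * v / etotal for p, v in eligible.items()}
    # Start with the whole-number part of each quota...
    result = {p: int(q) for p, q in quotas.items()}
    leftover = seats - sum(result.values())
    # ...then hand remaining seats to the largest fractional remainders.
    for p in sorted(eligible, key=lambda p: quotas[p] - result[p],
                    reverse=True)[:leftover]:
        result[p] += 1
    return result

# Hypothetical vote counts for a 100-seat chamber; party D (4%) misses
# the threshold, while in a winner-takes-all race only A would "win."
print(allocate_seats({"A": 430, "B": 390, "C": 140, "D": 40}, 100))
# → {'A': 45, 'B': 41, 'C': 14}
```

Note how party C, with 14% of the vote, still gains a parliamentary voice under proportional rules, whereas under a single winner-takes-all contest it would receive nothing.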
The 2008 presidential election, in which about 52% of those under 30 voted, was a historical exception driven largely by the enthusiasm of the young for Barack Obama’s candidacy. A 2010 survey found that young people are generally not apathetic about civic involvement; in fact, many responded that they volunteer and are eager to “give back” to their communities (cited in Center for Information & Research on Civic Learning and Engagement, 2010). Voting, however, does not appear to inspire the same commitment. The young are less likely to be courted by the parties and candidates, who tailor messages to attract the interests of groups such as the elderly, who vote in larger numbers, and other groups who are historically more likely to turn out at the polls or make campaign donations. Perhaps because of a perception that the candidates and parties do not seem to speak to or for them, younger voters reciprocate with limited participation in the voting process. Other factors have also been identified as relevant in influencing the youth vote, including the accessibility of information about where and when to vote and the level of civic education on issues and candidates. Notably, data show that states offering same-day registration on Election Day have higher youth voting rates: In 2008, youth turnout in same-day registration states was about 59%, whereas it was about 50% in states not permitting same-day registration (Center for Information & Research on Civic Learning and Engagement, 2010). Before we move on to the next section, which discusses the issue of political influence, we might pause to consider the following questions: How might our elected government and its policies be altered if people turned out to vote in greater numbers? Would higher turnouts among the poor, minorities, and young people make issues particularly pertinent to them higher priorities for decision makers? What do you think? 
POWER AND POLITICS While parts of the general public may show apathy about elections and politics, wealthy and powerful individuals, corporations, labor unions, and interest groups have a great deal at stake. Legislators have the power to make decisions about government contracts and regulations, taxes, federal labor and environmental and health standards, national security budgets and practices, and a spectrum of other important policies that affect profits, influence, and the division of power among those competing to have a voice in legislation. The shortest route to political influence is through campaign contributions. The cost of campaigning has gone up dramatically in recent years, and candidates for public office must spend vast sums of money on getting elected. The 2012 presidential campaign was by far the most expensive in history, with more than $2 billion spent altogether. President Barack Obama’s campaign, along with the Democratic National Committee and Priorities USA Action, a special kind of political action committee (PAC) known as a super PAC, spent more than $1.1 billion, while Mitt Romney’s campaign, the Republican National Committee, and the super PAC Restore Our Future spent between $1 billion and $1.1 billion. Spending reflected money raised from a range of sources, from very small donors to very large donors (Blake, 2012). A substantial proportion of the money candidates and parties raise comes from corporate donors and well-funded interest groups. In many instances, companies and well-resourced interest groups donate money to candidates of both parties in order to ensure that they will have a voice and a hand in decision making regardless of the electoral outcome. 
Clearly, while most politicians would deny that there is an explicit quid pro quo (a term that means “something for something”) with big donors, most would also admit that money can buy the time and interest of a successful candidate and determine which issues are most likely to be heard. Small interest groups and grassroots organizations lacking financial means may not be able to purchase passes into the halls of power. The 2008 presidential election offers a rather different story, however. Democratic nominee Barack Obama raised an unprecedented amount of campaign money, both from big donors and from a massive number of small donations successfully solicited via social media, demonstrating the power of the Internet in political fund-raising. Fund-raising and advertising online first became popular during the 2004 election, particularly after their successful use by Democratic contender Howard Dean and later by the Democratic National Committee. They returned as key strategies in 2012 as well. The 2012 election also ushered a new player, the super PAC, into electoral politics (as noted above in our description of spending in the 2012 election). Political action committees (PACs) are organizations created by groups such as corporations, unions, environmentalists, and other interest groups for the purpose of gathering money and contributing to political candidates that favor the groups’ interests. In 2010, the U.S. Supreme Court ruled in Citizens United v. Federal Election Commission that the government cannot restrict the monetary expenditures of such organizations in political campaigns, citing the First Amendment. This effectively means that these entities can contribute unlimited amounts of money to PACs. The term super PAC is used to describe the numerous well-funded PACs that have sprung up since the Citizens United decision. 
While super PACs cannot contribute directly to specific campaigns or parties, they can demonstrate support for particular candidates through, for instance, television advertising. Many critics of the Citizens United decision have been disturbed by the implications of unlimited corporate spending in politics. The primary drivers of this new political spending have been extremely wealthy individuals and often anonymous donors. According to the Center for Responsive Politics, the top 100 individual donors to super PACs make up less than 4% of contributors but have been responsible for more than 80% of donations made (Riley, 2012). Efforts to influence legislation are not limited to direct or indirect campaign contributions. Special interest groups often hire lobbyists, paid professionals whose job it is to influence legislation. Lobbyists commonly maintain offices in Washington, D.C., or in state capitals, and the most powerful lobbies are staffed by full-time employees. Many of the best-funded lobbies represent foreign governments. Lobbying is especially intense when an industry or other interest group stands to gain or lose a great deal if proposed legislation is enacted. Oil companies, for instance, take special interest in legislation that would allow or limit drilling on U.S. territories, as do environmental groups. In an instance like this, lobbyists from green groups and the oil and gas industry generally stand opposed to one another and seek to influence political decision makers to side with them. Many lobbyists are former politicians or high-level government officials. Since lobbyists are often experts on matters that affect their organizations’ interests, they may help in writing the laws that elected officials will introduce as legislation. Consider the example described below. 
In an article titled “A Stealth Way a Bill Becomes a Law,” the magazine Bloomberg Businessweek pointed out that several state-level bills rejecting cap-and-trade legislation (which is intended to reduce carbon dioxide emissions, believed by most scientists to contribute to climate change) used identical wording: “There has been no credible economic analysis of the costs associated with carbon mandates” (Fitzgerald, 2011). What was the source of this wording? It was supplied by the American Legislative Exchange Council, an organization supported by companies such as Walmart, Visa, Bayer, ExxonMobil, and Pfizer. In exchange for a large “membership” fee, a corporation can buy itself a seat on the bill-writing “task force,” which prepares model legislation, primarily for Republican political decision makers. The group boasts that it gets about 200 state laws passed each year. Is the interaction of private sector corporations and public sector legislators an example of fruitful and appropriate cooperation on matters of mutual interest? When, if ever, is the writing of laws by corporate sponsors appropriate? When is it inappropriate? SOCIAL MOVEMENTS, CITIZENS, AND POLITICS Well-organized, popularly based social movements can also be important in shaping public policy. Among the most important social movements of the 19th and 20th centuries was the drive for women’s suffrage, which invested half a century of activism to win U.S. women the right to vote. The movement’s leaders fought to overcome the ideas that women ought not vote because their votes were represented by their husbands, because the muddy world of politics would besmirch feminine purity, and because women, like adolescents and lunatics, were not fit to vote. The organized women’s suffrage movement was born in the United States in 1869, when Susan B. Anthony and Elizabeth Cady Stanton founded the National Woman Suffrage Association. 
It worked for decades to realize its goal: In 1920, the Nineteenth Amendment to the Constitution was ratified and women were granted the right to vote on a national level (some states had granted this right earlier). The second wave of the women’s movement, which began in the 1960s, boasted other important achievements, including the passage of laws prohibiting gender-based job discrimination and rules easing women’s ability to obtain credit independent of their husbands. The temperance movement, symbolized by Carry Nation’s hatchet attacks on saloons in the early 1900s, sought to outlaw the use and sale of alcoholic beverages. This movement eventually resulted in the 1919 ratification of the Eighteenth Amendment to the Constitution, which made it a crime to sell or distribute alcoholic beverages. The Twenty-First Amendment eventually repealed Prohibition in 1933. The labor movement grew throughout the first half of the 20th century, providing a powerful counterweight to the influence of business in U.S. politics. Labor unions were critical in getting federal and state laws passed to protect the rights of workers, including minimum wage guarantees, unemployment compensation, the right to strike, and the right to engage in collective bargaining. By midcentury, at the height of the unions’ power, roughly 25% of all U.S. workers belonged to labor unions. Today globalization and the flight of U.S. factories to low-wage areas have contributed to a decline in union membership, and just under 12% of workers belong to unions (U.S. Bureau of Labor Statistics, 2013e). At the same time, large unions, including the AFL-CIO, have retained a good deal of political power, and candidates, particularly Democrats, vie for the unions’ endorsements, which bring with them the virtual guarantee of large blocs of votes. On one hand, social movements provide a counterbalance to the power and influence in politics of large corporate donors, which we discussed above. 
These movements offer a political voice to grassroots groups representing interests contrary to those of big business, such as labor rights and environmental protection. On the other hand, if we return to Domhoff’s (2002) question, “Who wins?” we see that these groups rarely have more influence than large corporations and donors. CONSTITUENTS Wealthy individuals, interest groups, PACs, and lobbyists exert considerable political influence through their campaign contributions. Still, these factors alone are not sufficient to explain political decisions. If they hope to be reelected, elected representatives must also serve their constituents. That is why politicians and their aides poll constituents, read their mail and e-mail, and look closely at the last election results. One way politicians seek to win their constituents’ support is by securing government spending on projects that provide jobs for or otherwise help their communities and constituents. If a new prison is to be built, for instance, legislators vie to have it placed in their district. Although a prison may seem like an undesirable neighbor, it can represent an economic windfall for a state or region. Among other things, prisons provide jobs to individuals who may not have the education or training to work in professional sectors of the economy and would otherwise be unemployed or working in the poorly paid service sector. INEQUALITY MATTERS MONEY MATTERS: ENVIRONMENTALISTS VERSUS CORPORATIONS Sociologists understand interest groups as groups organized with the goal of pursuing particular interests and agendas. Environmental interest groups have historically pushed for more aggressive measures to ensure clean air and water, reduce emissions of pollutants, protect fragile ecosystems and endangered species, and the like. 
Since their efforts sometimes have the effect of regulating industrial activities or imposing antipollution measures that bring new business costs, the interests of environmental groups and those of corporations, whose key interest is the pursuit of profit, are often in conflict. Sociologist G. William Domhoff (2002) suggests that one way to measure power is to ask the question, “Who wins?” We might add, “And how much does victory cost?” The Center for Responsive Politics reports that in 2009, pro-environmental groups spent about $22.4 million on federal lobbying efforts, an unprecedented sum for these groups. At the same time, the oil and gas industry initiated an even more costly effort to block new environmental regulations: In 2009, it spent about $175 million on lobbying—fully eight times what the pro-environmental lobby had invested in its efforts. By July of that year, according to the report, “congressional debate on global warming stopped cold” (Mackinder, 2010). Lobbying efforts against climate change–related legislation have emboldened a bloc of climate change deniers and skeptics within Congress. Some representatives and senators maintain either that the scientific evidence of climate change is manufactured by big-government liberals who want the power to regulate businesses or that environmental activists have dramatically overstated the seriousness of the situation. Initially, funds backing this viewpoint could be traced to a number of conservative foundations. More recently, however, this money has been disbursed through third-party, pass-through organizations that conceal the identities of their donors. It is difficult to know exactly who is financially supporting the agenda of climate change denial and skepticism (Fischer, 2013). Despite its financial edge, this interest lobby has not necessarily won the battle for hearts and minds among the American people. 
Two-thirds of Americans believe that the evidence for climate change is solid and that there ought to be stricter carbon emissions limits on power plants. Even so, few Americans feel that climate change should be a top priority for Congress and the president, and overall opinion about the issue is sharply divided along partisan lines (Pew Research Center, 2014a). Thus, “Who wins?” is not always fully clear. Many pro-environmental groups lauded the passage in 2009 of the American Clean Energy and Security Act, which pushed forward with caps on carbon emissions that could help limit climate change. On the other hand, they lamented the loopholes that exempted large parts of the energy and coal industries from the caps, even as those industries argued vehemently that further regulations would result in job cuts and higher energy costs for Americans. After the 2010 congressional elections, in which Republicans regained a majority in the House of Representatives and gained seats in the Democratic-majority Senate, the debate over climate change moved lower on the national agenda. In the wake of the 2012 election, President Obama expressed a willingness to push for greater efforts to reduce greenhouse gas emissions linked to climate change. For the moment, at least, the “winner” is unclear, though the lobbying expenditures on both sides of the issue continue to be substantial. THINK IT THROUGH Well-funded interest groups expend vast sums of money lobbying for their causes. What avenues for expressing political opinions and exerting influence are available to individuals and less well-funded interest groups? Should there be limits on how much interest groups can spend on lobbying U.S. legislative representatives? Projects that legislators push to bring to their home districts are sometimes labeled “pork.” Pork may be superfluous or unnecessary for the macro-level economy but good for the legislator’s home district. 
On the other hand, when a government commission proposes closing military bases, cost-conscious members of Congress will support the recommendation—unless any of the bases marked for closure are in their districts. Politicians spend substantial amounts of time in their home districts. Over the course of the calendar year, the U.S. Congress is in session for an average of 103 days (Library of Congress, 2012). Congressional representatives spend much of their off-session time in their home districts because they are interested in hearing the views of their constituents—as well as in raising money and getting reelected. CONTRADICTIONS IN MODERN POLITICS: DEMOCRACY AND CAPITALISM Leaders in modern democratic capitalist societies like the United States are caught between potentially contradictory demands. They seek widespread popular support, yet they must satisfy the demands of the elites whose financial backing is essential for electoral success. On one hand, voters are likely to look to their political leaders to back benefits such as retirement income (in the form of Social Security, for instance), housing supports (affordable housing for low-income families or mortgage tax breaks for wealthier ones), and environmental protection. On the other hand, such programs are costly to implement and entail economic costs to corporations, developers, and other members of the elite. Some theorists argue that modern governments thus are caught in a conflict between their need to realize the interests of the capitalist class and their desire to win the support and loyalty of other classes (Held, 1989; Offe, 1984; Wolfe, 1977). Jürgen Habermas (1976), a contemporary theorist with a conflict orientation, argues that modern countries have integrated their economic and political systems, reducing the likelihood of economic crisis while increasing the chances of a political crisis. 
He terms this the legitimation crisis. Governments have intervened in the market and, to some degree, solved the most acute contradictions of capitalism—including extreme income inequalities and tumultuous economic cycles—that Marx argued could be addressed only in a proletarian revolution. Governments often act to keep inflation and deflation in check, to regulate interest rates, and to provide social assistance to those who have lost jobs. Thus, economics is politicized, and the citizenry may come to expect that economic troubles will be solved through state structures and social welfare. To understand Habermas’s argument more fully, imagine a postindustrial U.S. city. The loss of jobs and industries manifests itself as a crisis—thousands of jobs in auto and other manufacturing industries move abroad, local businesses suffer as the amount of disposable income held by local people plummets, and economic pain is acute. How, in modern society, does our hypothetical city (which has hundreds of authentic counterparts in the United States) respond? Does it erupt in revolutionary fervor, with displaced laborers calling for class struggle? Or do people look to their local, state, and federal governments to provide relief in the form of tax cuts or credits, unemployment benefits, and plans for attracting new industries? The citizenry of modern capitalism, says Habermas, does not widely question the legitimacy of capitalism. If there is a “crisis,” it is political, and it is “solved” with policies that may smooth capitalism’s bumpy ride. In a sense, the state becomes the focus of discontent—in a democracy, political decision makers can be changed and a crisis averted. The economic system that brings many of these crises into being, however, remains in shadow, its legitimacy rarely questioned. In the next part of the chapter, we look into some of the other challenges confronted by states and their populations, including war and terrorism. 
States are key players in modern warfare, and military conflict is an important domestic political issue and global concern. Terrorism has also become increasingly entwined with war today, as recent wars undertaken by the United States, for instance, have been part of an effort to combat the threat of terrorism. WAR, STATE, AND SOCIETY Conflict between ethnic or religious groups, states, and other social entities has a long history. War has been part of human societies, cultures, and practices in some form for millennia. In the 5th century b.c., the ancient Greeks created a game called petteia, the first board game known to have been modeled on war. In the 6th century a.d., chess, another game of strategic battle, was born in northern India; it developed into its modern form by the 15th century. Military training in ancient Greece also gave birth to the first Olympic Games. In the 20th century, war games took on far more advanced forms, ranging from battlefield exercises used to prepare for defensive or offensive war to sophisticated computer simulations used for both popular entertainment and military readiness training (Homans, 2011). Today, the countries of the world spend trillions of dollars preparing for war or fighting in wars. At the same time, armed conflict and associated casualties have declined. Goldstein (2011) suggests that the nature of armed conflict has changed, shifting from larger wars in which powerful state actors confronted one another directly (such as World War II or the Korean War of the 1950s) to asymmetrical guerrilla wars, such as those the United States fought in Iraq and Afghanistan in the past decade. He notes, “Worldwide, deaths caused directly by war-related violence in the new century have averaged about 55,000 per year, just over half of what they were in the 1990s (100,000 a year), a third of what they were during the Cold War (180,000 a year from 1950 to 1989), and a hundredth of what they were in World War II” (p. 
53) (Figure 14.7). FIGURE 14.7 Combined Military and Civilian Casualties of Some 20th- and 21st-Century Wars (*including the Soviet–Polish conflict; **including North Vietnam versus South Vietnam). SOURCES: Leitenberg, Milton. 2006. “Deaths in Wars and Conflicts in the 20th Century.” Cornell University Peace Studies Program, Occasional Paper #29, 3rd edition; Fischer, Hannah. 2010. “Iraq Casualties: U.S. Military Forces and Iraqi Civilians, Police, and Security Forces.” Congressional Research Service. Whatever the forms war has taken, it has been a key part of the human experience throughout history. What explains its existence and persistence? Recall from earlier chapters that manifest functions are intended and obvious, while latent functions are hidden, unexpected, or “nonpurposive” (in Robert Merton’s words). Functionalists look at a phenomenon or an institution that exists in society, assert that its existence presupposes a function, and ask what that function is. If something did not serve a function, it would cease to exist. Does war have a function? We begin with a functionalist perspective on war, considering its role and consequences at the macro and micro societal levels. A FUNCTIONALIST PERSPECTIVE ON WAR What are the manifest functions of war? Historically, one function has been to gain territory. The Roman Empire (27 b.c.–476 a.d.) waged war on surrounding territories, acquiring a substantial swath of the Middle East, including Cleopatra’s Egypt, and then holding it with its massive armies. Another manifest function of war is to gain control of the natural resources of another state, while a third is to prevent the disintegration of a territorial unit. The American Civil War (1861–1865) was fought in part to avert the secession of the South, which still favored slavery, from the North, which sought to abolish slavery. What about the latent functions of war? 
First, war has historically operated as a stimulus to the economy: The term war economy refers to the phenomenon of war boosting economic productivity and employment, particularly in capital- and labor-intensive sectors such as industrial production. Notably, however, the wars in which the United States took part in the 20th century were fought outside its borders, and the benefits to the U.S. economy, especially in the World War II era, were not necessarily repeated elsewhere. The Soviet Union, France, Belgium, Poland, and many other European countries on whose territory World War II was waged emerged with shattered economies. As well, the first wars in which the United States has engaged in the 21st century—the conflict in Afghanistan that began in October 2001 and the military occupation of Iraq, which began in March 2003—have arguably had negative effects on the U.S. economy and have benefited few beyond a handful of large corporations in the defense and energy sectors (Table 14.1). A second latent function of war is the fostering of patriotism and national pride. In times of war, governments implore their citizens to rally around the national cause, and citizens may display their patriotism with flags or demonstrations. Even those who oppose military action may shy away from open opposition in a climate of war-inspired patriotism: During the early years of the conflict in Iraq, officials of President George W. Bush’s administration several times chastised those who expressed criticism of the president and his actions “in a time of war,” raising the question of whether dissent was unpatriotic. A third latent function of war is its effect on family life and demographics. In the post–World War II years, the United States (and a number of other countries that had participated in the conflict) experienced a “baby boom,” partially the product of the return of men who had been away at war and of childbearing postponed in the war years. 
Of course, family life and individual lives are also prone to the deep dysfunctional consequences of war. With the long absence or loss of a father, husband, or son (or mother, wife, or daughter), families may be drawn closer together, or they may break apart. They may experience economic deprivation with the loss of an income. A spouse who has not previously worked outside the home may be compelled by circumstances to join the labor force. War may also have disproportionate effects on different socioeconomic classes, as history shows that it has often been members of the working class who bear the greatest burden in war fighting. Clearly, war has a spectrum of manifest and latent functions as well as dysfunctions. TABLE 14.1 Armed Civil and Interstate Conflicts of Recent Decades SOURCE: Pike, J. (2013). The world at war: Current conflicts. GlobalSecurity.org. A CONFLICT PERSPECTIVE ON WAR The conflict perspective suggests that some groups benefit from a given social order or phenomenon at the expense of others. We can turn a conflict-oriented lens on war to ask, “Who benefits from war? Who loses?” While we might be inclined to answer that the war’s victor wins and the defeated state or social group loses, the conflict perspective offers us the opportunity to construct a more nuanced picture. Consider the conflict in Iraq that spanned from the U.S. occupation of that country in March 2003 to the final withdrawal of troops at the end of 2011. Who benefited from this war in Iraq? Beneficiaries of any conflict include those who are freed from oppressive state policies or structures or from ongoing persecution by the defeat of a regime. Among the beneficiaries in Iraq we might count the minority Kurdish population, who were victims of Saddam Hussein’s genocidal attacks in 1987 and 1988 and were threatened by the Iraqi dictator’s presence. Were the rest of the people of Iraq beneficiaries? 
To the degree that Saddam was an oppressive political tyrant, the answer may be yes; Saddam’s ruling Ba’ath Party, composed primarily of Sunni Muslims, persecuted the majority Shiite Muslim population of the country, particularly after some Shiites sought to foment an uprising at the end of the First Gulf War (1990–1991). Ordinary Sunni Muslims as well had no voice in the single-party state that Saddam ruled with a strong hand. At the same time, the long war led to thousands of civilian and military casualties, fundamentally destabilized the country, and left it with a badly damaged economy and infrastructure. Today, Iraq continues to be plagued by sectarian violence—that is, violence between religious groups—that threatens to spiral into an existential threat to the country itself. Notably, by mid-2014, a debate had arisen about the possibility of U.S. reengagement in Iraq, though few were calling for active participation in combat. Other beneficiaries of the nearly decadelong Iraq War were corporations (mostly U.S.-based) that profited from lucrative government contracts to supply weapons and other military supplies. War generates casualties and destruction—it also generates profits. In assessing who benefits from war, we cannot overlook capitalist enterprises for which war is a business and investment opportunity. Among those who benefit directly from war and conflict are private military corporations (PMCs), which provide military services such as training, transportation, and the protection of human resources and infrastructure. The use of private contractors in war has a long history. The new U.S. government, fighting against the British in the American Revolution, paid private merchant ships to sink enemy ships and steal their cargo. The modern military term company, which refers to an organized formation of 200 soldiers, comes from the private “companies” of mercenaries who were hired to fight in conflicts during the Middle Ages in Europe. 
At the same time, until recently, wars in the modern world were largely fought by the citizens of the nation-states involved. Sociologist Katherine McCoy (2009) writes, “Scholars have long thought of fighting wars as something nation-states did through their citizens. Max Weber famously defined the modern state as holding a monopoly over the legitimate use of violence, meaning that only state agents—usually soldiers or police—were allowed to wield force” (p. 15). In today’s conflicts, governments, including the U.S. government, increasingly rely on PMCs to provide a vast array of services that used to be functions of the governments or their militaries. This situation raises critical questions about the accountability and control of these private armies, which are motivated by profit rather than patriotism (McCoy, 2009). The rise of PMCs has been driven by reductions in the size of armies since the end of the Cold War, the availability of smaller advanced weaponry, and a political-ideological trend toward the privatization and outsourcing of activities previously conducted by governments (Singer, 2003). The conflict perspective on war also asks, “Who loses?” Losers, of course, include those on both sides of a conflict who lose their lives, limbs, or livelihoods in war. Increasingly, according to some reports, they have been civilians, not soldiers. By one estimate, in World War I, about 5% of casualties were civilians (Swiss & Giller, 1993). In World War II, the figure has been estimated at 50% (Gutman & Rieff, 1999). While some researchers argue that the figure is much higher (Swiss & Giller, 1993), Goldstein (2011) suggests that the ratio of civilian to military casualties has remained at about 50:50 into the 21st century. Specific casualty figures also vary widely, often depending on the methodologies and motivations of the organizations or governments doing the counting (Table 14.2). 
This topic is examined in more detail in the Behind the Numbers box. What about the less apparent “losers”? Who else pays the costs of war? Modern military action has substantial financial costs, which are largely borne by taxpayers. In the decade between 2001, when the United States was the victim of terrorist attacks, and 2011, when the Iraq War ended and the war in Afghanistan began to wind down, the country spent an estimated $7.6 trillion on defense and homeland security. Even since the end of active U.S. engagement in Iraq and Afghanistan, the United States has continued to devote a substantial part of its federal budget to the Departments of Defense and Homeland Security. At the same time, many U.S. domestic programs in areas such as education, job training, and environmental conservation have lost funding as budgets have shrunk. Growth in the defense and security allocations of the federal budget has not been without costs.
TABLE 14.2 Casualty Estimates for Darfur and Congo Conflicts. SOURCE: Rieff, D. (2011, September/October). Millions may die… or not. Foreign Policy, pp. 22–25.
In the next section we consider the phenomena of terrorists and terrorism, which have been drivers—and consequences—of some of the world’s most recent armed conflicts.
TERRORISTS AND TERRORISM
The “global war on terror,” or GWOT, was initiated in 2001 after the September 11 attacks on the World Trade Center and the Pentagon. This term has been used to refer to the international (though U.S.-led) overt and covert military campaign against the Islamic group al-Qaeda and similar groups believed to threaten the United States and its allies. The term was first used by President George W. Bush and other officials of his administration. President Barack Obama has not adopted the term, and in March 2009, the U.S. Department of Defense dropped it officially.
Though GWOT continues to be used in some political commentary, the designation “overseas contingency operation” (OCO) has largely replaced it in official usage. The focus of U.S. military, diplomatic, and economic efforts on combating terrorism has, however, continued unabated. The unprecedented events of September 2001 brought terrorism and terrorists more fully than ever into the U.S. experience and consciousness. The political response was to refocus domestic priorities on the GWOT and homeland security, drawing resources and attention from other areas such as education and immigration reform. Terrorism became a key theme of U.S. politics, policies, and spending priorities and a subject of concern and discussion from the U.S. Congress to ordinary citizens. The concepts of terrorist and terrorism, however, are broad and may not be defined or understood in the same ways across groups or countries. Acts of violence labeled by one group as terrorism may be embraced as heroic by another. The label of terrorist may be inconsistently applied depending on the ethnicity, religion, actions, and motivations of an individual or a group. Below we examine these concepts and consider their usage with a critical eye.
WHO IS A TERRORIST?
Close your eyes and picture a terrorist. What does your imagined terrorist look like? Why do you think that particular image appeared to you? The images we generate are culturally conditioned by the political environment, the mass media, and the experiences we have, and they differ across communities, countries, and cultures. The idea of a terrorist is not the same across communities because, as we will see below, violent acts condemned by one community may be embraced by another as necessary sacrifices in the pursuit of political ends.
BEHIND THE NUMBERS: COUNTING THE CIVILIAN CASUALTIES OF WAR
Although the deaths of U.S.
and allied military personnel were carefully counted while troops were serving in Iraq, there are no official estimates of the number of Iraqi civilians killed in the long-running conflict in that country. Estimates range widely. In the U.S.-led occupation of Iraq that began in 2003, civilian casualties resulted from the actions of multiple actors, including U.S., coalition, and Iraqi forces and armed insurgents, and many resulted from the sectarian violence between Iraq’s Sunni and Shiite Muslim populations that followed. An accurate count of civilian casualties remains difficult to pinpoint. For example, the Associated Press places the civilian casualties from 2005 through 2008 at 34,832 killed and 40,174 wounded. By contrast, an article published in the medical journal The Lancet estimated the total number of deaths to be between 426,000 and 793,600 (Fischer, 2008). Clearly, the difference between 35,000 and 790,000 deaths is vast. What accounts for this disparity? One factor is the way civilian deaths are initially reported. Some may go unreported, some may be reported multiple times, and still others may only be estimated when identifying victims is difficult. Local hospital records are often not maintained accurately in conflict-ridden areas. Finally, some bodies never make it to hospitals if they are taken away by family or are rendered unidentifiable, a tragic but common problem. The Associated Press tallied the number of individuals listed as killed in news reports and identified as “civilians” (Fischer, 2008). Relying only on reports of deaths in news accounts likely led to underestimation of the total number. The Lancet estimate relied on two cluster sample surveys of Iraqi households, but the study was criticized for, among other problems, failing to gather demographic information that might have shed light on the reliability of counts.
A third casualty count, also based on cluster sample surveys, sampled 20 times as many households as the Lancet study. Sponsored by the World Health Organization and the Iraq Family Health Survey, it was nationally representative and had an 89% response rate. The result was an estimate that placed the number of civilian casualties at just over 115,000 (Fischer, 2008). These widely varying counts highlight the fact that casualty statistics can be influenced by how the data are collected and analyzed. Because states, relief agencies, and international organizations use casualty figures to make their cases for intervention, cessation of operations, and budgetary increases, it is important to recognize that different methodologies may render dramatically different figures.
THINK IT THROUGH: Why is it more difficult to count civilian casualties than military casualties? Why do different entities often produce differing casualty figures?
Terrorist acts are often intentionally dramatic and calculated to inflict harm, instill fear, and attract media attention. The attacks on the World Trade Center and Pentagon on September 11, 2001, and the Boston Marathon bombing on April 15, 2013, fit this model of terrorism. It has been said that one person’s terrorist is another person’s freedom fighter. Michael Collins was born in West Cork, Ireland, in 1890. Before he turned 20, he had sworn allegiance to the Irish Republican Brotherhood, a group of revolutionaries struggling for Irish independence from three centuries of British rule, and he worked and fought with them throughout the first decades of the 20th century. In Ireland and Northern Ireland today, Collins is widely regarded as a hero (Coogan, 2002). The 1996 film Michael Collins, starring Liam Neeson and Julia Roberts, cast him in a generally positive light: The film’s tagline declared, “Ireland, 1916.
His dreams inspired hope. His words inspired passion. His courage forged a nation’s destiny.” In Britain, however, many consider Collins to be a terrorist. In 1920, while he was director of intelligence for the Irish Republican Army (IRA), his secret service squad assassinated 14 British officers (Coogan, 2002). The British responded to the IRA with violence as well. Notably, the Continuity Irish Republican Army (CIRA) continues to be on the U.S. Department of State’s (2012b) global list of terrorist groups. Was Michael Collins a terrorist or a hero? How do we judge Britain’s violent military response? The label of terrorist is a subjective one, conditioned by whether one rejects or sympathizes with the motives and actions under discussion. As an expert on terrorism notes, “If one party can successfully attach the label terrorist to its opponent, then it has indirectly persuaded others to adopt its moral viewpoint” (Hoffman, 2006, p. 23). The issue is, arguably, more complex when it involves acts of mass violence perpetrated by domestic terrorists—for example, incidents in the United States committed by U.S. citizens or residents. On April 19, 1995, 168 people perished in the bombing of a federal building in Oklahoma City, Oklahoma. While initial media suspicion pointed to foreign perpetrators, further investigation determined that Timothy McVeigh, an American, with the cooperation of a small group of antigovernment compatriots, was responsible for the crime. In the wake of the incident, the U.S. government planned closer scrutiny of domestic threats. The terrorist incidents of September 11, 2001, which were perpetrated by Islamic radicals, shifted attention to the Middle East, including Afghanistan and, later, Iraq and Pakistan, among others.
Recent domestic incidents of violence, including shootings at two Jewish centers in Kansas in 2014 that killed 3 people, the Boston Marathon bombing of 2013 that resulted in 3 deaths and injuries to more than 260 people, and the killing of 6 people at a Sikh temple in Wisconsin in 2012, have refocused attention on internal incidents. In 2014, the U.S. Department of Justice relaunched the work of a group focused on domestic threats. As a Council on Foreign Relations publication points out, however, there is inconsistency in understandings and legal approaches to what “terrorism” is and whether domestic and international incidents of violence both fall under that term (Masters, 2011). Consider that the alleged killer in the Kansas Jewish center shootings, who is affiliated with White supremacist groups, has been charged with first-degree murder, while one of the Boston bombers (the other was killed by police) has been charged with, among other crimes, use of a weapon of mass destruction to kill. Are individuals or small groups in the United States who target government buildings, public events, or other groups for violence terrorists? What is the significance of using that term rather than using terms like criminal or even extremist?
TECHNOLOGY & SOCIETY: THE TERROR SHOW
In 2014, Islamic State (IS or ISIS) fighters occupied substantial swaths of territory in Syria and Iraq. While small, the radical Islamist group has successfully recruited thousands of fighters from Western countries like England and the United States. The Internet, and in particular social media platforms like Facebook and Twitter, has become a fundamental part of contemporary terrorism.
If, as Timothy Furnish (2005) writes, “the purpose of terrorism is to strike fear into the hearts of opponents in order to win political concession,” then social media have multiplied the effects of acts of terror, expanding the audience for horrific violence and transforming single incidents of violence into media shows that can be played over and over again. But social media appear to have a second key function for terrorism as well—to win over new followers and entice recruits. In the summer of 2014, the Middle Eastern terror group ISIS (Islamic State of Iraq and Syria; also known as ISIL, for Islamic State of Iraq and the Levant, or simply as the Islamic State) took to Twitter to disseminate images of the killing of Iraqi soldiers loyal to the state ISIS was seeking to destroy. ISIS has been a presence in the Syrian civil war and, according to reports, is an offshoot of the better-known terror group al-Qaeda. While both al-Qaeda and ISIS are adherents of an extreme brand of Sunni Islam, the two groups apparently broke over ISIS’s unfettered willingness to slaughter Muslim civilians. A writer on the Vox Media news website has pointed out that the multitude of graphic images circulated by ISIS on Twitter in June was not just “ISIS bragging about their murderousness. ISIS has a well-developed social media presence, which they’re using deliberately in this campaign to do two things: intimidate Iraqis who might oppose them and win supporters in their battle with al-Qaeda for influence over the international Islamist extremist movement” (Beauchamp, 2014). Indeed, ISIS’s use of social media appears carefully managed; it involves tweeting at regular intervals and choosing hashtags that will reach the audiences they seek (Beauchamp, 2014). Notably, Twitter responded within a day by suspending the account used by ISIS to post the photos of the slaughter, following Facebook, which earlier had suspended an ISIS fan page. 
By this time, however, the images had already been reproduced across the Internet, appearing on many mainstream media sites. ISIS continued to post on Twitter, presumably using an alternative account. Clearly, it is highly problematic when social media platforms, most of which have codes of conduct, are used to disseminate images of violence and horror. At the same time, it has been pointed out in the press that shocking images can impel the international community toward action in some cases (Chandler, 2014). They may also offer the documentation that countries and the international community need to prosecute killers for the crimes they have chosen to glorify. While fundamentalist terror groups like ISIS embrace an archaic and deeply conservative interpretation of Islam, their means of sharing their brutal battles are thoroughly modern.
THINK IT THROUGH: What functions do social media serve for contemporary terror groups? How should social media platforms like Facebook and Twitter respond to users who represent or are affiliated with terror groups?
WHAT IS TERRORISM?
There is no single definition of terrorism. The U.S. Department of Defense (2011) defines it as “the unlawful use of violence or threat of violence to instill fear and coerce governments or societies. Terrorism is often motivated by religious, political, or other ideological beliefs and committed in the pursuit of goals that are usually political.” This definition highlights the idea that terrorism is intended to provoke both fear and change. Does it then also include the bombings of war, such as the brutal Nazi air attacks on London or the U.S. nuclear bombing of Nagasaki and Hiroshima during World War II? What about genocidal acts against populations? What factors make terrorism difficult to define clearly? The U.S.
Department of State offers another definition in which terrorism is “premeditated, politically motivated violence perpetrated against noncombatant targets by subnational groups or clandestine agents” (quoted in National Institute of Justice, 2011). This definition implies that terrorism is committed not by states but by groups, and that its targets are noncombatants. While this is often true, it is not invariably true: States have also been complicit in supporting acts of terror. For example, it is alleged that Libya’s former leader Colonel Muammar Gaddafi ordered the bombing of an international civilian airliner, Pan Am Flight 103, that crashed on December 21, 1988, killing all 259 people on board. While Gaddafi rejected the allegation, in 2003 the Libyan government accepted responsibility for the act and paid compensation to families of the victims. We can also understand terrorism as performative—that is, it is violence that is intentionally dramatic, enacted for the purpose of attracting attention and publicity and spreading fear. This definition points more deliberately to terrorism as an instrument of horrific political theater whose direct victims are props on the stage of a larger political or ideological play. While such “media-oriented terrorism” does not, by one analysis, make up the majority of the terror acts of the past half century, it is widespread and has historical roots in the acts of 19th-century anarchists, who pioneered the concept of “propaganda of the deed” (Surette, Hansen, & Noble, 2009). In some sense the media do offer a “stage” for acts of atrocity, not only functioning as reporters of terror events but also conditioning terrorist groups’ selection of targets and actions (see the Technology and Society box on page 373). Media-oriented terrorism is thus particularly likely to be perpetrated in democratic rather than authoritarian states, because democracies allow the wide dissemination of information about events such as terror attacks. 
Media attention, which has expanded from the print media and television to include the Internet, the “Twitterverse,” and other new media, has a powerful multiplier effect on modern terrorism, offering a broad platform of publicity even for small and relatively weak groups whose combat and political capabilities are otherwise very limited (Surette et al., 2009). Notably, the success of some nations in building strong, centralized militaries may also have contributed to terrorism’s spread. As an effective form of asymmetric conflict, terrorism is one of the few avenues open to those who want power, attention, or change yet lack the military means to challenge dominant global powers directly. Robert Pape (2005) has pointed to terrorism as a weapon of the weak. Based on his analysis of 315 incidents of suicide terrorism between 1980 and 2003, he concluded that a consistent causal logic of these events was the attempt to exercise coercive power against a stronger democratic state perceived as a homeland occupier. Currently, the United Nations Comprehensive Convention on International Terrorism remains in draft form, subject to debate and negotiation, particularly over language that highlights terrorism’s roots in the motive to “intimidate a population, or to compel a Government or an international organization to do or abstain from doing any act.” At issue here, among other things, is what constitutes the line between an act of terrorism and an act of war. Is an act of war by a state an act of “politically motivated violence” subject to the convention’s regulation? Leaders also debate whether to include in the definition of terrorist groups national “liberation movements,” examples of which could be the Irish Republican Army, Palestinian movements such as Hamas, and Kurdish militants in Turkey. Those who support the aims of such groups say no.
In light of the fact that perceptions of a given act may differ widely across groups and countries, is a global definition of terrorism even possible? Is it necessary in order to reduce the threat of violence to states and civilian populations? What do you think?
WHY STUDY THE STATE AND WARFARE THROUGH A SOCIOLOGICAL LENS?
In the modern world, politics and the state directly affect the lives of everyone. Understanding how politics and the state work is essential to our lives as informed, active local and global citizens. In this chapter we have inquired into the processes that directly affect the functioning of the state and politics and into state decision making, including the decision to go to war. In the face of the apparently overwhelming power of the state and the seeming distance of political decision making from the lives of most people, it is easy for us to shrug our shoulders and feel powerless. Yet one of the lessons we learn from the sociological analysis of the state and politics is that both are subject to influence by ordinary citizens, especially when people are mobilized into social movements and interest groups, politically aware, and able to evaluate politics and policies critically. Public ignorance and apathy benefit those who use politics to ensure their own or their social groups’ well-being; active citizenship is an authentic instrument of power, even where it faces significant obstacles.
Understanding issues of state and politics also helps us understand the roots and consequences of armed conflict. Wars are the products of choices made by leaders—usually the civilian or military leaders of countries or empires. Wars do not just happen. In understanding war, we benefit from recognizing the ways in which it confers benefits and incurs costs. Those with power make calculations and choices. Those without power—women and children and sometimes citizens and soldiers—do not make such choices, though they may pay the cost.
While war is a reality in our world, we need to move beyond a simple understanding of war as an inevitable part of the human experience to recognize its more complex and less obvious sociological elements. Perhaps with a better understanding of war and its motivations and consequences we can help to clear a path to greater civility and peace in the world. Robert Merton (1968) posited the idea of functional alternatives. If we recognize that war has functions (as we have seen in this chapter), we might also begin to imagine functional alternatives—that is, other means of realizing those functions. For instance, if war has a manifest function of acquiring access to needed or desired natural resources such as oil or water, perhaps greater conservation of those resources would diminish the need for aggressive action to secure access. If war acts as a way of resolving territorial disputes, perhaps creative diplomatic thinking can begin to carry us toward nonmilitary alternatives. If war is also functional in fostering patriotism, perhaps a country could construct national pride and patriotism on a foundation other than the battlefield glory and sacrifice on which so many countries now build them. Social change demands imagination—a changed world must be imagined before it can be realized. While a future without war seems unimaginable, our expanded understanding of this phenomenon may give us some of the tools we need to make it less probable and less costly to civilians and soldiers alike.
WHAT CAN I DO WITH A SOCIOLOGY DEGREE?
SKILLS AND CAREERS: WRITTEN COMMUNICATION SKILLS
Written communication is an essential skill for a broad spectrum of 21st-century careers. Sociology students have many opportunities to practice and sharpen written communication skills.
Among others, sociologists learn to write theoretically, applying classical and contemporary theories to the understanding of social issues or phenomena, and to write empirically, preparing and communicating evidence-based arguments about the social world. Sociology majors write papers in a variety of forms and for a variety of audiences; these may include reaction papers, book reviews, theoretical analyses, research papers, quantitative analysis reports, field note write-ups, letters to decision makers or newspaper editors, and reflections on sociological activities or experiences. Strong written communication skills are absolutely essential in a variety of fields. In every chapter of this book, we explore the research findings of social scientists. This chapter, for instance, discussed the work of classical theorist Max Weber and contemporary sociologist G. William Domhoff. Sociologists need to be able to communicate effectively in writing in order to organize their research projects and to convey their findings to a wider audience (like you!). The success of other professionals also depends on writing skills. Consider the broad field of politics, which comprises politicians and their staffs, journalists, lobbyists, public interest advocacy groups, business coalitions, and many more actors. In politics, getting and keeping the attention of valuable constituencies or allies is often done through informed public policy ideas and the astute presentation of arguments. Writing skills are a key foundation of an individual’s ability to draft public policy, to persuade voters or donors, to explain political issues, or to report on current events. Excellent written communication is fundamental in many occupational fields, including politics, business and entrepreneurship, communications and marketing, law and criminal justice, community organizing and advocacy, journalism, higher education, and public relations.
Effective writing is critical for employees holding job titles such as social media writer or webmaster, grant writer, researcher, professor or instructor, public relations writer, journalist or reporter, English as a second language teacher, lawyer or judge, speech writer, editor, or copywriter. You may notice that quite a few of these fields and jobs require graduate or professional education; sharp and organized writing is also a key to success in postbachelor’s educational pursuits.
THINK ABOUT CAREERS: What are your strengths as a writer? What are your weaknesses? What are your goals for improvement? Consider possible career paths that might be of interest to you. What kinds of writing do you think people in those fields do? How can you gain experience in those types of writing?
SUMMARY
• The world today is politically divided into 195 nation-states. Most countries are made up of many different peoples, brought together through warfare, conquest, or boundaries drawn by colonial authorities without respect to preexisting ethnic or religious differences.
• Modern countries are characterized by governments that claim complete and final authority over their citizens, systems of law, and notions of citizenship that contain obligations as well as civil, social, and political rights.
• State power is typically based on one of three kinds of legitimate authority: traditional authority, based on custom and habit; rational-legal authority, based on a belief in the law; or charismatic authority, based on the perceived inspirational qualities of a leader.
• Functionalist theories of power argue that the role of the government is to mediate neutrally between competing interests; they assert that the influence of one group is usually offset by that of another group with an opposing view. Conflict theories of state power draw the opposite conclusion: that the state serves the interests of the most powerful economic and political groups in society.
Different versions of social conflict theories emphasize the importance of a power elite, structural contradictions, and the relative autonomy of state power from the economic elites.
• Governance in the modern world takes a number of forms, including authoritarianism (including monarchies and dictatorships), totalitarianism, and democracy.
• Democracy is one of the primary forms of governance in the world today, and most countries claim to be democratic in theory if not in practice. Most democratic countries practice representative democracy rather than direct democracy.
• The U.S. political system is characterized by low voter turnout. Voter participation varies, however, on the basis of demographic variables like age and education.
• In the United States, elected officials depend heavily on financial support to get elected and to remain in office. Fund-raising is a major part of politics, and individuals and organizations that contribute heavily do so in hopes of influencing politicians. Special interests use lobbyists to exercise influence in U.S. politics. Politicians still depend on their constituents’ votes to get elected, and so they must satisfy voters as well as special interests.
• We can examine war from various sociological perspectives. The functionalist perspective asks about the manifest (obvious) and latent (hidden) functions of war and conflict in society. The conflict perspective asks who benefits from war and conflict, and who loses.
• The global war on terror was initiated in 2001 after the September 11 terrorist attacks on U.S. soil. The GWOT encompassed the diplomatic, military, and economic actions taken by the United States and its allies to fight terrorism. The term global war on terror was officially dropped by the U.S. Defense Department in 2009.
• No single image of a terrorist threat is shared across communities and countries and cultures.
Irishman Michael Collins is an example of someone regarded as a hero by some and a terrorist by others.
• Terrorism is a calculated use of violence to coerce or to inspire fear. It is also “theater”—intended to send a powerful message to a distinct or a global audience.
KEY TERMS
nation-state, law, citizens, noncitizens, welfare state, interest groups, class dominance theory, power elite, coercion, traditional authority, rational-legal authority, charismatic authority, authoritarianism, monarchy, dictatorship, totalitarianism, democracy, direct democracy, representative democracy, politics, political action committees (PACs), lobbyists, war economy, terrorism
DISCUSSION QUESTIONS
1. In this chapter, you learned about theories of state power. Would you say that U.S. governance today is characterized more by pluralism or by the concentration of power in the hands of an elite? Cite evidence supporting your belief.
2. What is authoritarianism? What potential roles do modern technology and social media play in either supporting or challenging authoritarian governments around the world?
3. The chapter raised the issue of low voting rates for young people. Recall the reasons given in the chapter and then think about whether you can add others. Do most of the young people you know participate in elections? What kinds of factors might explain their participation or nonparticipation?
4. What are the manifest and latent functions and dysfunctions of war? Review the points made in the chapter. Can you add some of your own?
5. What is terrorism? How should this term be defined and by whom? Should domestic incidents of mass violence be labeled terrorism, or should the term be reserved for international incidents?
Sharpen your skills with SAGE edge at edge.sagepub.com/chambliss2e: a personalized approach to help you accomplish your coursework goals in an easy-to-use learning environment.
15 WORK, CONSUMPTION, AND THE ECONOMY
IN THIS CHAPTER: The Economy in Historical Perspective; Types of Economic Systems; Working On and Off the Books; Consumers, Consumption, and the U.S. Economy; Globalization and the New Economic Order; Why Study Economic Systems and Trends?
WHAT DO YOU THINK?
1. How have job market conditions changed in your lifetime? Should you expect to experience a job market similar to the one your parents or grandparents experienced?
2. Do you agree with the societal attitude that parents who are not in the paid labor force and who stay home to care for children or aging parents “don’t work”? How should “work” be defined?
3. Why has average household debt grown in recent decades?
THE LOW-WAGE U.S. LABOR FORCE
In October 2010, the Washington Post reported that a BMW automotive plant, owned by a German parent company, had created 1,000 new jobs for workers in South Carolina (Whoriskey, 2010).
In early 2014, BMW announced its intention to invest another billion dollars in its Spartanburg factory, expanding the facility to become its largest manufacturing plant and creating positions for another 800 workers (Levin, 2014). We hear much today about deindustrialization—that is, the decline of U.S. manufacturing, much of which has been automated or moved to lower-wage locations overseas. There has, however, been a small but growing movement of manufacturing operations into the United States in recent decades, including, very recently, from China. In March 2012, Xinxiang, which produces copper tubing used for air-conditioning, refrigeration, and cars, began construction on a new Chinese-owned and -operated plant in Thomasville, Alabama. The plant, which began partial operations in May 2014, is expected eventually to create 300 jobs (Kavilanz, 2012; Made in Alabama, 2014). Why, after decades of industrial job loss to countries with low labor costs, are some manufacturers moving their operations to the United States? In the case of BMW, the automobile company wants to have a manufacturing presence in the United States, its largest foreign market. Moreover, as a representative of the Labor and Industry Group at the Center for Automotive Research has pointed out, “We are a low-wage country compared to Germany.” Skilled, productive U.S. workers cost BMW about half the hourly wages of their unionized German counterparts, who earn the equivalent of about $33 per hour (Whoriskey, 2010). Another important characteristic of the new BMW jobs is that many workers at the plant do not work for BMW. They are employed through a contractor, which means the positions are less secure and less well paid than those offered directly by BMW. According to a recent Forbes magazine article on the decline of the United Auto Workers union’s influence, between 20% and 40% of autoworkers at foreign-owned factories today are temporary hires (Muller, 2014).
The phenomenon of contracted work—that is, temporary work that minimizes the commitment of both employer and employee to a long-term economic relationship—has become a key characteristic of the modern U.S. economy as businesses seek to maximize efficiency, flexibility, and profitability by reducing (or “downsizing”) the numbers of their workers who have regular “permanent” positions (Figure 15.1). Computer giant Microsoft’s use of “permatemps,” initiated in the 1990s, is a striking example. During this time, 1,500 permatemps worked alongside the 17,000 regular domestic employees of the company. While they performed comparable tasks, the permatemps, some of whom had been in their jobs for 5 years or more, not only were denied the same vacation, health, and retirement benefits as other workers but also were denied discounts at the Microsoft store, opportunities for further job training, and even use of the company basketball court. A class-action suit was filed against Microsoft, and the company agreed to an out-of-court settlement of $97 million (FACE Intel.com, 2000).

In the past 30 to 40 years, many businesses in the United States have used mass layoffs to “downsize” every aspect of their operations, from factories and production workers to managerial and professional staffs (Uchitelle, 2007). A growing proportion of American workers, even in the professional sector, find themselves engaged in short-lived jobs with little security (Greenhouse, 2008). Many more in the service sector are laboring part-time, sometimes in several jobs that lack benefits, mobility, and living wages. Wages at the bottom and in the middle tier have stagnated for a generation, despite a growing U.S. economy. In recent decades the clear majority of economic gains have streamed upward to earners already at the top of the ladder. According to the U.S. Census Bureau (2012b), today fully half of U.S.
aggregate income goes to the top 20% of earners, continuing a long-term trend toward a concentration of gains at the top. According to economist Emmanuel Saez (2013), from 1993 to 2011, real annual income growth for the bottom 99% of earners was just over 6%, while it was nearly 58% for those in the top 1%.

FIGURE 15.1 Temporary Workers in Selected U.S. Industries, 2005
SOURCE: Table 4, “Employed contingent and noncontingent workers by occupation and industry, February 2005,” in U.S. Bureau of Labor Statistics, “Contingent and Alternative Employment Arrangements, February 2005” (USDL 05-1433). U.S. Department of Labor, 2005.

In this chapter, we discuss key issues of economic sociology and examine the implications of a new globalized economy—postindustrial, technologically sophisticated, and consumption oriented—for the world and for U.S. society in particular. We begin with a brief historical overview of the three great economic revolutions that have transformed human society. We then look at capitalism and socialism, the two principal types of economic systems that dominated the 20th century and continue to influence the 21st century. Next we turn to a discussion of work in the formal and informal economies. In this context, we also discuss social and economic issues of consumption, hyperconsumption, and debt. The chapter concludes with a discussion of the changes and challenges globalization has brought to our economic system and prospects.

THE ECONOMY IN HISTORICAL PERSPECTIVE
The economy is the social institution that organizes the ways in which a society produces, distributes, and consumes goods and services. By goods we mean objects that have an economic value to others, whether they are the basic necessities for survival (a safe place to live, nutritious food to eat, weather-appropriate clothing) or things that people simply want (designer clothing, an iPad or iPhone, popcorn at the movies).
Services are economically productive activities that do not result directly in physical products; they can be relatively simple (shining shoes, working a cash register, serving cocktails) or quite complex (repairing an airplane engine or computer, conducting a medical procedure). In human history three technological revolutions have brought radically new forms of economic organization. The first led to the growth of agriculture several millennia ago, and the second to modern industry some 250 years ago. We are now in the throes of the third revolution, which has carried us into a digital and postindustrial age.

Karl Marx saw industrial workers as “instruments of labor” tethered to an exploitive system. One 19th-century British mother described her 7-year-old to a government commission: “He used to work 16 hours a day . . . I have often knelt down to feed him, as he stood by the machine, for he could not leave it or stop” (quoted in Hochschild, 2003, p. 3).

THE AGRICULTURAL REVOLUTION AND AGRICULTURAL SOCIETY
The agricultural revolution vastly increased human productivity over that of earlier hunting, gathering, and pastoral societies. This achievement was spurred by the development of innovations such as irrigation and crop rotation methods, as well as by expanding knowledge about animal husbandry and the use of animals in agriculture. A simple development such as the plow, which came into use about 5,000 years ago, had a transformational effect on agriculture when it was harnessed to a working animal. Greater productivity led to economic surplus. While the majority of people in agricultural societies still engaged in subsistence farming, an increasing number could produce surplus crops, which they could then barter or sell. Eventually, specialized economic roles evolved. Some people were farmers; others were landowners who profited from farmers’ labor.
A number of families specialized in the making of handicrafts, working independently on items of their own design. This work gave rise to cottage industries—so called because the work was usually done at home. The production of agricultural surpluses, as well as handicrafts, created an opportunity for yet another economic role to emerge—that of merchants, who specialized in trading surplus crops and crafted goods. Trading routes developed and permanent cities grew up along them, and the number and complexity of economic activities increased. By about the 15th century, early markets arose to serve as sites for the exchange of goods and services. Prices in markets were set (as they are in free markets today) at the point where supply (available goods and services) was balanced by demand (the degree to which those goods and services are wanted).

THE INDUSTRIAL REVOLUTION AND INDUSTRIAL SOCIETY
The Industrial Revolution, which began in England with the harnessing of water and steam power to run machines such as looms, increased productivity still further. Cottage industries were replaced by factories, the hallmark of industrial society, and urban areas became centers of economic activity, attracting rural laborers seeking work and creating growing momentum for urbanization. Industrialization spread through Europe and the United States, and then to the rest of the world. The change was massive. In 1810, about 84% of the U.S. workforce worked in agriculture and only 3% in manufacturing; by 1960, just 8% of all U.S. workers labored in agriculture, and fully a quarter of the total workforce was engaged in manufacturing (Blinder, 2006). Industrial society is characterized by the increased use of machinery and mass production, the centrality of the modern industrial laborer, and the development of a class society rooted in the modern division of labor.
INCREASED USE OF MACHINERY AND MASS PRODUCTION
Machines increase the productive capacity of individual laborers by enabling them to produce more goods efficiently at lower cost. New machines have historically required new sources of energy as well: Waterwheels gave way to steam engines, then the internal combustion engine, and eventually electricity and other modern forms of power. In 1913, automobile mogul Henry Ford introduced a new system of manufacturing in his factories. Mass production is the large-scale, highly standardized manufacturing of identical commodities on a mechanical assembly line. Under Ford’s new system, a continuous conveyor belt moved unfinished automobiles past individual workers, each of whom performed a specific operation on each automobile: One worker would attach the door, another the windshield, another the wheels. (The term Fordism is sometimes used to describe this system.) Mass production resulted in the development of large numbers of identical components and products that could be produced efficiently at lower cost. This linked system of production became a foundation for the evolution and expansion of productive industries that went far beyond auto manufacturing.

THE BIRTH OF THE INDUSTRIAL LABORER
With the birth of industry came the rise of the industrial labor force, comprising mostly migrants from poorer rural areas or abroad seeking their fortunes in growing cities. Often the number of would-be workers competing for available jobs created a surplus of labor. Karl Marx described this as a reserve army of labor, a pool of job seekers whose numbers outpace the available positions and thus contribute to keeping wages low and conditions of work tenuous (those who do not like the conditions of work are easy to replace with those seeking work).

If it is possible to create an assembly line on which each worker performs a single, repetitive task, why not design those tasks to be as efficient as possible?
This was the goal of scientific management, a practice that sought to use principles of engineering to reduce the physical movements of workers. Frederick Winslow Taylor’s Principles of Scientific Management, published in 1911, gave factory managers the information they needed to greatly increase their control over the labor process by giving explicit instructions to workers regarding how they would perform their well-defined tasks. While Taylor was focused on the goal of efficiency, “Taylorism” also had the consequence of further deskilling work. Deskilling rendered workers more vulnerable to layoffs, since they—like the components they were making—were standardized and therefore easily replaced (Braverman, 1974/1988).

CLASSES IN INDUSTRIAL CAPITALISM
New economic classes developed along with the rise of industrial capitalist society. One important new class was composed of industrialists who owned what Marx called the means of production—for example, factories. Another was made up of wage laborers—workers who did not own land, property, or tools. They had only their labor power to sell at the factory gate. Work in early industrial capitalism was demanding, highly regimented, and even hazardous. Workers labored at tedious tasks for 14 to 16 hours a day, 6 or 7 days a week, and were at risk of losing their jobs if economic conditions turned unfavorable or if they raised too many objections (recall the concept of the reserve army of labor). The pool of exploitable labor was expanded by migrant workers from rural areas and abroad, and even children of poor families were sometimes forced to labor for wages. Influenced by the poor conditions they saw in 19th-century English factories, Karl Marx and Friedrich Engels posited that these two classes, which they termed the bourgeoisie, or capitalists, and the proletariat, or working class, would come into conflict.
They argued in the Manifesto of the Communist Party (1848) that the bourgeoisie exploited the proletariat by appropriating the surplus value of their labor. That is, capitalists paid workers the minimum they could get away with and kept the remainder of the value generated by the finished products for themselves as profit, or as a means to gather more productive capital in their own hands. The exploitation of wage labor by capitalists would, they believed, end in revolution and the end of private ownership of the means of production. While some observers of early capitalism, including Marx and Engels, offered scathing critiques of the social and economic conditions of factory laborers, the early and middle decades of the 20th century (with the exception of the period of the Great Depression) witnessed improved conditions and opportunities for the blue-collar workforce in the United States. In the early 20th century, Henry Ford, the patriarch of Fordist production, took the audacious step of paying workers on his Model T assembly line in Michigan fully $5 for an 8-hour day, nearly three times the wage of a factory employee in 1914. Ford reasoned that workers who earned a solid wage would become consumers of products such as his Model T. Indeed, his workers bought, his profits grew, and industrial laborers (and, eventually, the workers of the unionized U.S. car industry) set off on a slow but steady path to the middle class (Reich, 2010). The class structure that emerged from advanced industrial capitalism in the United States, Europe, Japan, Canada, and other modern states boasted substantial middle classes composed of workers who ranged from well-educated teachers and managers to industrial workers and mechanics with a high school or technical education. The fortunes of blue-collar and semiprofessional workers were boosted by a number of factors. 
Among these were extended periods of low unemployment in which workers had greater leverage in negotiating job conditions (Uchitelle, 2007). Unions supported autoworkers, railroad workers, and workers in many other industries in the negotiation of contracts that ensured living wages, as well as job security and benefits. Unionization surged following the Great Depression and the 1935 passage of the Wagner Act, which “guaranteed the rights of workers to join unions and bargain collectively” (VanGiezen & Schwenk, 2001), growing to more than 27% of the labor force by 1940. At their peak in 1979, U.S. unions claimed 21 million members (Mayer, 2004). Changes in the U.S. economy, including those we saw illustrated in this chapter’s opening story, have shaken the relatively stable middle class that emerged around the middle of the 20th century. Since the 1970s, mass layoffs have grown across industries, though manufacturing has been hit hardest (Uchitelle, 2007). As a result, today the industrial laborer—and his or her counterpart in the lower and middle levels of the white-collar workforce—is less likely to belong to a union, less likely to have appreciable job security, and more likely to have experienced a decline in wages and benefits. Income gains have slipped, and, for many, membership in the U.S. middle class has become tenuous (Table 15.1).

TABLE 15.1 Selected Characteristics of Industrial and Postindustrial Societies

THE INFORMATION REVOLUTION AND POSTINDUSTRIAL SOCIETY
During the past quarter century the “information revolution,” which began with Intel’s invention of the microprocessor in 1971, has altered economic life, accelerating changes in the organization of work that were already under way. Pressured by global competition that intensified by the end of the 1970s, U.S.
firms began to move away from the inflexible Fordist system of mass production, seeking ways to accommodate rapid changes in products and production processes and to reduce high labor costs that were making U.S. products less competitive. Postindustrial economic organization is complex, so the sections below focus on just some of the key aspects, including the growth of automation and flexible production, reliance on “outsourcing” and “offshoring,” and the growth of the service economy.

AUTOMATION AND FLEXIBLE PRODUCTION
Postindustrial production relies on ever-expanding automation, the replacement of human labor by machines in the production process. Today, robots can perform tedious and dangerous work that once required the labor of hundreds of workers. While automation increases efficiency, it has also eliminated jobs. Computer-driven assembly lines can be quickly reprogrammed, allowing manufacturers to shift to new products and designs rapidly and to shorten the time from factory to buyer. “Just-in-time” delivery systems also minimize the need for businesses to maintain warehouses full of parts and supplies; instead, parts suppliers ship components to factories on an as-needed basis so they move right to the production floor and into the products. Such reliance on more flexible, less standardized forms of production is sometimes termed post-Fordism. Notably, while to this point automation has had its most visible impact on manufacturing jobs, it is becoming significant in the large U.S. service industry as well. Consider, for instance, the recent introduction of electronic tabletop ordering devices at some popular restaurants like Chili’s and California Pizza Kitchen.
According to a recent article in the Wall Street Journal, airports in New York City and Minneapolis now have some eating establishments that are “wait-staff free,” and “in 2011, McDonald’s announced that it was replacing human cashiers with touch-screen alternatives at more than 7,000 European locations” (Saltsman, 2014). While offering lower labor costs to employers and some convenience to consumers, self-ordering and self-checkout technologies are also reducing the numbers of jobs available at dining and retail establishments.

RELIANCE ON OUTSOURCING AND OFFSHORING
Businesses can perform activities associated with producing and marketing a product “in house,” or they can contract some of the work to outside firms, which in turn can do their own subcontracting to other firms. The term outsourcing often describes the use of low-cost foreign labor, but it can also mean contracting U.S. workers to do needed tasks, typically for less pay than a company employee would earn. Outsourcing has long been part of industrial production; striking today is the emergence of outsourcing across a wide spectrum of industries. For example, United Airlines used to rely on its own mechanics to service the company’s planes. The mechanics were well paid and enjoyed benefits negotiated by their union. By the late 1990s, however, United increasingly turned to nonunionized mechanics operating from lower-cost, lower-wage hangars in the South. The terrorist attacks of September 11, 2001, which temporarily halted air travel, exacerbated the financial difficulties of airlines. In spite of billions in government aid and loans, airlines have continued to struggle. Major carriers have become even more reliant on outsourcing to cut costs (Uchitelle, 2007). Offshoring refers more specifically to the practice among U.S. companies of contracting with businesses outside the country to perform services that would otherwise be done by U.S. workers.
The movement of manufacturing jobs overseas to lower-wage countries, as noted earlier in the chapter, has been taking place since the 1970s and 1980s. More recently, however, workers and policy makers have expressed concern about the offshoring of professional jobs, such as those in information technology. According to a recent Congressional Research Service paper on the topic, this trend has been fostered by the widespread adoption of technologies enabling rapid transmission of voice and data across the globe, economic crises in the United States that have created greater pressure to achieve economic “efficiencies” (such as lower labor costs), and the availability of a growing pool of well-educated and often English-speaking labor abroad (Levine, 2012).

Many of today’s highly compensated workers have specialized skills in inventing, designing, or producing innovative technological products. Facebook founder and CEO Mark Zuckerberg represents a generation of young entrepreneurs, many of whom are based in Silicon Valley, who have both led and profited from the dramatic growth of social media use across the globe.

TRANSFORMATION OF THE OCCUPATIONAL AND CLASS STRUCTURE
Among the most highly compensated workers in the modern economy are those who invent or design new products, engineer new technologies, and solve problems. They are creative people who “make things happen,” organizers who bring people together, administrators who make firms run efficiently, and legal and financial experts who help firms to be profitable (Bell, 1973; Reich, 2010). Workers in this category are sometimes called “symbolic analysts” (Reich, 1991) or “knowledge workers.” Most symbolic analysts are highly educated professionals who engage in mental labor and, in some way, the manipulation of symbols (numbers, computer codes, words).
They include engineers, university professors, scientists, lawyers, and financiers and bankers, among others. While the ranks of symbolic analysts have grown overall in recent decades and the ranks of “routine production workers” in manufacturing have been falling, most job growth has been concentrated in the service sector. Services constitute a diverse sector of the labor market. As of 2012, the service sector employed more than 116 million U.S. workers and accounted for almost 80% of the U.S. workforce (U.S. Bureau of Labor Statistics, 2013d). Service occupations include some jobs that require higher education but also include retail sales, home health and nurses’ aides, food service, and security services. Overall, the service jobs available to workers with a high school education (or less) do not pay as well as the manufacturing positions of the past, making membership in the middle class less likely and less secure for those without higher education. Many service positions do not require extensive education or training, and a growing fraction are part-time rather than full-time, are nonunionized, and have few or no benefits. Quite a few of these jobs require “people skills” stereotypically associated with females and are often viewed as “women’s jobs” (but by no means invariably, since private security guards, a growing occupation, tend to be men). By contrast, many routine production jobs in the past were manufacturing jobs that commonly employed men. The decline in manufacturing employment opportunities, along with declines in educational attainment among men (a topic we examined in Chapter 12), has made unemployment and underemployment particularly acute for some demographic groups, including minority males (Autor, 2010). Together, these diverse labor market trends point to significant shifts in the U.S. class structure. Economist David Autor (2010) argues that a polarization of job opportunities has taken place, particularly in the past two decades. 
Autor sees a modern economy characterized by “expanding opportunities in both high-skill, high-wage occupations and low-skill, low-wage occupations, coupled with contracting opportunities in middle-wage, middle-skill, white-collar and blue-collar jobs.” He views this as the basis of a split in the middle class, with those whose membership in that group was bolstered by, for instance, good manufacturing jobs now losing ground, and those who occupy the upper, professional rungs of the middle class maintaining their status amid growing opportunities (Figure 15.4). However, not all economists agree with this assessment. Economist Alan Blinder (2006) argues that “many people blithely assume that the critical labor-market distinction is, and will remain, between highly educated (or highly skilled) people and less-educated (or less-skilled) people—doctors versus call-center operators, for example. The supposed remedy for the rich countries, accordingly, is more education and a general ‘upskilling’ of the work force. But this view may be mistaken” (p. 118). Blinder suggests that the more critical social division in the future may not be between jobs that require high levels of education and those that do not, but rather between work that can be wirelessly outsourced and work that cannot. Consider the growth of online university education. Whereas a college professor may be able to accommodate 100 or even 500 students in a massive lecture hall, an online instructor can have thousands of students and teach them at a considerable cost savings to the institution—and, in some instances, to the students. Some universities, such as the Massachusetts Institute of Technology (MIT), are offering free college course lectures online (though these are not yet available for credit). While this is not “outsourcing” as we typically imagine it, trends suggest that even many highly educated workers will be vulnerable to technological changes in the decades ahead. 
BEHIND THE NUMBERS: COUNTING THE EMPLOYED AND THE UNEMPLOYED IN THE UNITED STATES
According to the U.S. Bureau of Labor Statistics (BLS; 2014b), in June 2014 the U.S. labor force participation rate (that is, the labor force as a percentage of the civilian noninstitutional population) was almost 63%, and more than 146.2 million U.S. residents were employed. At the same time, about 6.1% of U.S. workers, or 9.5 million individuals, were counted by the BLS as unemployed (Figure 15.2). What do these numbers tell us? What do they illuminate, and what do they obscure? Consider how the BLS defines the condition of being employed. In BLS statistics, employed persons are those who are 16 years of age or older in the civilian, noninstitutional population (that is, not in the military or in mental or penal institutions) who did any paid work—even as little as one hour—in the reference week or worked in their own businesses or farms. The employment figure, however, fails to capture the significant problem of underemployment, including workers forced to work part-time when they would like to work full-time or workers employed in jobs that are well below their skill level. According to the BLS, in mid-2014, there were at least 7.5 million involuntary part-time workers. The unemployed are people who are jobless, have actively looked for work in the prior 4 weeks, and are available for work. The BLS figures are based on the monthly Current Population Survey, which uses a representative sample of 60,000 households and has been conducted every month since 1940. While the BLS cannot count every U.S. household, the size of the sample and its configuration are believed to ensure a statistically accurate representation of the U.S. labor force.
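The official rates defined above follow from simple arithmetic, and the June 2014 figures can be checked directly. A minimal sketch, using the numbers cited above; the civilian noninstitutional population figure of roughly 248 million is an assumption for illustration, not a figure given in the text:

```python
# Back-of-the-envelope check of the BLS definitions described above.
employed = 146.2e6    # employed persons, June 2014 (cited above)
unemployed = 9.5e6    # jobless, actively searching, available for work

labor_force = employed + unemployed

# Unemployment rate = unemployed / labor force
unemployment_rate = unemployed / labor_force
print(f"unemployment rate: {unemployment_rate:.1%}")    # about 6.1%

# Participation rate = labor force / civilian noninstitutional population.
# The population value below is an assumed, approximate figure.
population = 248e6
participation_rate = labor_force / population
print(f"participation rate: {participation_rate:.1%}")  # almost 63%
```

Note that discouraged and marginally attached workers fall out of both the numerator and the denominator of the unemployment rate, which is why the official figure can understate joblessness.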
Official unemployment figures, however, do not include those who, after a brief or extended period of joblessness, have given up looking for work, or whose job seeking is “passive”—for instance, limited to scanning newspaper or online classified ads. Those persons are categorized as not in the labor force, because they are neither officially employed nor officially unemployed. Persons who would like to work and have searched actively for a job in the past 12 months are categorized as marginally attached to the labor force—according to the Bureau of Labor Statistics, there are about 2 million such individuals. Widely cited official unemployment statistics omit these categories and may thus underestimate the numbers of those who need and want to work. Notably as well, some researchers point out that indicators intended to measure the social and economic well-being of the population are distorted because they omit the large U.S. prison population (Western & Pettit, 2010). Prisoners are disproportionately from low socioeconomic backgrounds, yet they do not figure into important social indicators like the poverty rate or the unemployment rate. Jails and prisons are not households and therefore are not counted, though they house more than 2 million people (U.S. Bureau of Justice Statistics, 2010).

FIGURE 15.2 Unemployment in the United States, 2002–2013
SOURCE: U.S. Bureau of Labor Statistics. (2013). Labor Force Statistics from the Current Population Survey.

Consider the effect of this exclusion on employment figures. In 2008, the incarceration rate of African American men who dropped out of high school was about 37% (Western & Pettit, 2010). The distorting effect is considerable if we seek, for instance, to determine how African American men without a high school education are faring in the labor market (Figure 15.3).
Western and Pettit (2010) write that “[conventional] estimates of the employment rate show that by 2008, around 40 percent of African American male dropouts were employed”; however, when “prison and jail inmates are included in the population count (and among the jobless), we see that employment among young African American men with little schooling fell to around 25 percent by 2008” (p. 12).

FIGURE 15.3 Unemployment Rate by Educational Attainment for Blacks and Whites 25 and Older in the United States, 2012
SOURCE: U.S. Bureau of Labor Statistics. (2011). The African-American Labor Force in the Recovery, Chart 3.

Clearly, the exclusion of the incarcerated population from social indicators such as poverty, employment, and unemployment may render an incomplete picture of the economic fortunes (and misfortunes) of some demographic groups in the United States, as we learn when we look behind the numbers.

THINK IT THROUGH
Should the institutionalized population of the United States be included in socioeconomic indicators like poverty, employment, and unemployment figures? Can you make a persuasive case to support your position on this question?

Outsourcing and offshoring, automation, and the shift to more flexible workplaces characterized by temporary workers are among the factors that have reduced opportunities in the middle-level jobs that built and sustained the once-stable U.S. middle class (discussed in more detail in Chapter 7). As noted in the preceding section, while the effects of these processes have been most apparent in manufacturing, the effect on employment opportunities has also been felt in professional and service jobs. It is quite likely that the shape of the labor market in the coming decades will continue to shift as technology and economic factors drive many changes that are already under way.
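The adjustment Western and Pettit describe reduces, in effect, to a single multiplication: counting inmates enlarges the population base without adding any employed people. A minimal sketch using the figures they cite (the simplifying assumption here is that all of the incarcerated are counted among the jobless, as in their method):

```python
# Why including inmates moves the employment rate from ~40% to ~25%.
conventional_rate = 0.40   # employed share of the noninstitutional group
incarceration_rate = 0.37  # share of the group that is incarcerated

# Adding inmates to the population count (and among the jobless)
# scales the employment rate down by the noninstitutional share:
adjusted_rate = conventional_rate * (1 - incarceration_rate)
print(f"adjusted employment rate: {adjusted_rate:.0%}")  # about 25%
```

The product, 0.40 × 0.63 ≈ 0.25, reproduces the roughly 25 percent figure quoted above, which is the sense in which conventional indicators overstate employment for heavily incarcerated groups.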
THE SERVICE ECONOMY AND EMOTIONAL LABOR

Many service sector jobs today require a substantial amount of “emotional labor.” According to sociologist Arlie Hochschild (2003), emotional labor is the commodification of emotions, including “the management of feeling to create a publicly observable facial and bodily display” (p. 7). Like physical labor, the symbol of the industrial economy, emotional labor is also “sold for a wage and… has exchange value” (p. 7). Hochschild (2003) uses the example of flight attendants, who do emotional labor in the management of airline passengers’ comfort, good feelings, and sense of safety, but we could also use as examples customer service workers, retail sales associates, and restaurant servers. While these workers may enjoy their jobs, they are also forced to feign positive feelings even when such feelings are absent and to labor to evoke positive feelings in their customers. The emotional laborer is, in a sense, compelled to “sell” his or her smile in exchange for a wage, just as the industrial laborer sells his or her physical labor. The emotional laborer’s actions are programmed for profit and efficiency, as he or she is asked to perform emotions that maximize both. The strain between real and performed feelings, notes Hochschild, leads to an emotive dissonance—that is, a “disconnect”—between what the worker really feels and the emotions to be shown or suppressed. Hochschild posits that just as Marx’s proletarian laboring in a mill was alienated from the work and from him- or herself, so too is the emotional laborer alienated from work and his or her emotional life.

TYPES OF ECONOMIC SYSTEMS

Two principal types of economic systems dominated the 20th century: capitalism and socialism. Industrialization occurred in capitalist economic systems in North America and most of Western Europe, and under socialism in the Soviet Union, Eastern Europe, China, Vietnam, Cuba, and parts of Africa.
After 1989, the collapse of socialism in Eastern Europe and the (former) Soviet Union fostered the expansion of capitalist market systems. Orthodox socialism appears to be in decline elsewhere as well, notably in economically growing China. Even Cuba, which remains steadfast in its socialist rhetoric, recently introduced some capitalist-style reforms.

Even though capitalism and socialism share a common focus on economic growth and increased living standards, they differ profoundly in their ideas about how the economy should be organized to achieve these goals. The following descriptions are of “ideal-typical” (that is, model) capitalist and socialist systems. Real economies often include some elements of both.

FIGURE 15.4 Employment Change by Occupational Sector in the United States, 1979–2009. SOURCE: Autor, David. 2010. “The Polarization of Job Opportunities in the U.S. Labor Market: Implications for Employment and Earnings.” Center for American Progress and the Hamilton Project. Figure 3, p. 9. This material was created by the Center for American Progress (www.americanprogress.org).

CAPITALISM

Capitalism is an economic order characterized by the market allocation of goods and services, production for private profit, and private ownership of the means of producing wealth. Workers sell their labor to the owners of capital in exchange for a wage, and capitalists are then free to make a profit on the goods their workers produce. Capitalism emphasizes free, unregulated markets and private, rather than government, economic decision making. At the same time, governments in capitalist economies often play a key role in shaping economic life, even in countries such as the United States that have historically tended to keep the government’s role to a minimum (an approach sometimes referred to as laissez-faire capitalism—literally, “let do,” or hands-off, capitalism).
In a capitalist country, the labor market is composed of both public sector jobs and private sector jobs. The public sector is linked to the government (whether national, state, or local) and encompasses production or allocation of goods and services for the benefit of the government and its citizens. The private sector also provides goods and services to the economy and consumers, but its primary motive is profit. Because capitalists compete with one another, they experience persistent pressure from consumers to keep costs and therefore prices down. They can gain a competitive edge by adopting innovative processes such as mass production (think of early Fordism), reducing expensive inventories, and developing new products that either meet existing demands or create new demands (which sociologists call manufactured needs). One important process innovation is minimizing the cost of labor, which capitalists often do by adopting technologies that increase productivity and keep wages low. On one hand, capitalism can create uneven development, inequality, and conflict between workers and employers, whose interests may be at odds. On the other hand, it has been successful in producing diverse and desirable products and services, encouraging invention and creativity by entrepreneurs who are willing to take risks in return for potential profit, and raising living standards in many countries across the globe. Capitalist systems are based on an individualistic work ethic, which posits that, ideally, if people work hard and diligently pursue their personal goals, both society and individuals will prosper. The roles of government vary widely among different capitalist economies. In the United States and Britain, for example, there is greater skepticism about government’s role in the private sector and greater emphasis on the private sector as the means for allocating goods and services (though Britain, unlike the United States, has nationalized health care).
In contrast, in many European economies, government takes a strong role in individual lives. Sweden and France offer “cradle to grave” social supports, with paid parental leave, child allowances, national health insurance, and generous unemployment benefits. Japan, on the other hand, does not expect government to take such a major role but does expect businesses to assume almost family-like responsibility for the welfare of their employees.

Upton Sinclair’s The Jungle, chronicling immigrants’ suffering in the unregulated meatpacking industry of the early 20th century, emerged from his investigations of Chicago’s Packingtown. In 1904, Sinclair worked in the plants to gather material for a book that eventually spurred disgust and outrage, as well as the 1906 Pure Food and Drug Act.

A CASE OF CAPITALISM IN PRACTICE: A CRITICAL PERSPECTIVE

Profit is the driving motive of capitalist systems. While the desire for profit spawns creativity and productivity, it may also give rise to greed, corruption, and exploitation. Industries cut costs in order to increase profits; there is economic logic in such a decision. Cutting costs, however, can also compromise the health and safety of workers and consumers. What do such compromises look like? A case study of profit over people in the meat industry in the United States offers one example. In the first decade of the 20th century, Upton Sinclair’s novel The Jungle offered a powerful and frightening fictionalized account of the real-life problems of the meat industry. The novel chronicles the struggles of a Lithuanian immigrant family working and struggling in “Packingtown,” Chicago’s meat district. An excerpt follows:

It was only when the whole ham was spoiled that it came into the department of Elzbieta.
Cut up by the two-thousand-revolutions-a-minute flyers, and mixed with half a ton of other meat, no odor that ever was in a ham could make any difference. There was never the least attention paid to what was cut up for sausage…. There would be meat stored in great piles in rooms, and the water from leaky roofs would drip over it, and thousands of rats would race about on it…. Such were the new surroundings in which Elzbieta was placed, and such was the work she was compelled to do. It was stupefying, brutalizing work; it left her no time to think, no strength for anything. She was part of the machine she tended, and every faculty that was not needed for the machine was doomed to be crushed out of existence. (Sinclair, 1906/1995, pp. 143–145)

Sinclair was critical of capitalism and the profit motive, which he felt explained the suffering of the workers and the stomach-turning products churned out in the filthy packinghouses. His work was a stirring piece of social criticism dressed as fiction, and it spurred change. President Theodore Roosevelt’s inquiry into the conditions described by Sinclair brought about legislation requiring federal inspection of meat sold through interstate commerce and the accurate labeling of meat products and ingredients (Schlosser, 2002). The novel had little effect, however, on the conditions experienced by industrial laborers, to which Sinclair had endeavored to draw attention. As he later wryly remarked, “I aimed for the public’s heart… and by accident I hit it in the stomach.” In the years since Sinclair’s novel was published, capitalism has evolved (as has regulation), though profit and the need to cut costs have remained basic characteristics. What does cost cutting look like in today’s more closely regulated meat industry? In Fast Food Nation (2002), writer Eric Schlosser describes his experience in a modern meat plant:

A man turns and smiles at me. He wears safety goggles and a hardhat.
His face is splattered with gray matter and blood. He is the “knocker,” the man who welcomes cattle to the building. Cattle walk down a narrow chute and pause in front of him, blocked by a gate, and then he shoots them in the head with a captive bolt stunner…. For eight and a half hours, he just shoots…. When a sanitation crew arrives at a meatpacking plant, usually around midnight, it faces a mess of monumental proportions…. Workers climb ladders with hoses and spray the catwalks. They get under tables and conveyor belts, climbing right into the bloody muck, cleaning out grease, fat, manure, leftover scraps of meat. (pp. 170–171, 177)

In the socialist period in the Soviet Union and allied countries of Eastern Europe, the state controlled all production, essentially eliminating competition in the marketplace for goods and services. Instead of advertisements in public spaces, political posters and propaganda elevated the achievements and builders of socialism and denigrated the capitalist way of life.

The work is not only brutal; it is also dangerous. Profit still drives the meat industry to endanger its workers’ safety, and Schlosser notes the words of a local nurse, who says she always “[knew] the line speed… by the number of people with lacerations coming into my office” (pp. 173–174). Critics of capitalism would suggest that conditions in the meat industry past and present illuminate a fundamental problem of capitalism: Capital accumulation and profit are based on driving down the costs of production. The case of the meat industry shows that capitalists sometimes drive down the costs by compromising worker and consumer safety. The potentially high human cost is a central point of the critique of capitalism.

SOCIALISM AND COMMUNISM

Modern ideas about communism and socialism originated in the theories of 19th-century philosophers and social scientists, especially those of Karl Marx.
Communism, in its ideal-typical form, is a type of economic system without private ownership of the means of production and, theoretically, without economic classes or economic inequality. In an ideal-typical communist society, the capitalist class has been eliminated, leaving only workers, who manage their economic affairs cooperatively and distribute the fruits of their labor “from each according to his abilities, to each according to his needs.” Since Marx believed that governments exist primarily to protect the interests of capitalists, he concluded that once the capitalist class was eliminated, there would be no need for the state, which would, in his words, “wither away.” Marx recognized that there would most likely have to be a transitional form of economic organization between capitalism and communism, which he termed socialism. In a socialist system, theoretically, the government manages the economy in the interests of the workers; it owns the businesses, factories, farms, hospitals, housing, and other means of producing wealth and redistributes that wealth to the population through wages and services. The laborer works for a state-run industrial enterprise, the farmer works for a state-run farm, and the bureaucrat works in a state agency. Profit is not a driving economic imperative. Before the collapse of the Soviet Union and its socialist allies in Eastern Europe, nearly a third of the world’s population lived in socialist countries. Far from withering away as Marx predicted, these socialist governments remained firmly in place until 1989 (1991 in the Soviet Union), when popular revolutions ushered in transformations—not to the classless communist economies Marx envisioned, but to new capitalist states. The largest remaining socialist country in the world—China—has transitioned over the past 20 years into a market economy of a size and scale to nearly rival that of the United States, though the state still exercises control over large industries.
These transformations occurred in part because socialism proved too inflexible to manage a modern economy. Having the central government operate tens of thousands of factories, farms, and other enterprises was a deterrent to economic growth. Further, though the capitalist class was eliminated, a new class emerged—the government bureaucrats and communist officials who managed the economy and who were often inefficient, corrupt, and more interested in self-enrichment than in public service (Djilas, 1957). Moreover, most socialist governments were intolerant of dissent, often persecuting, imprisoning, and exiling those who disagreed with their policies. At the same time, socialist regimes were often successful in eliminating extreme poverty and providing their populations with housing, universal education, health care, and basic social services. Inequality was typically much lower in socialist states than in capitalist economies—although the overall standard of living was lower as well.

The dramatic rise in economic inequality and poverty in the newly capitalist states of the former Soviet Union and Eastern Europe has created some nostalgia for the socialist past, particularly among the elderly, who have a threadbare social safety net in many states. While few miss the authoritarian political regimes, there is some longing for the basic economic and social security that socialism offered.

A CASE OF SOCIALISM IN PRACTICE: A CRITICAL PERSPECTIVE

In theory, a driving motive of socialist systems is achievement of a high degree of economic equality. This is realized in part through the creation of full-employment economies. In the Soviet Union, full employment gave all citizens the opportunity to earn a basic living, but it also brought about some socially undesirable results. For instance, inefficiency and waste flourished in enterprises that were rewarded for how much raw material they consumed rather than how much output they produced (Hanson, 2003).
Human productivity was only partially utilized when work sites had to fill required numbers of positions but did not have meaningful work for all who occupied them. Disaffection and anger grew in workplaces where promotions were as likely to be based on political reliability, connections, and Communist Party membership as on merit. A system that theoretically ensured the use of resources for the good and equality of all workers was undermined by the realities of Soviet-style communism. In practice, socialist systems such as that of postwar Hungary were characterized by both low wages and low productivity: A popular saying among workers was that “we pretend to work and the state pretends to pay us.” Lacking a competitive labor market, workers may not have felt compelled to work particularly hard; unemployment was rare. At the same time, public sector (government) jobs, which made up the bulk of the labor market, were poorly paid; many workers sought supplementary pay in the informal economy (Ledeneva, 1998). Socialism in practice, according to sociologists Michael Burawoy and Janos Lukács (1992), was in part a performance: Painting over the sordid realities of socialism is simultaneously the painting of an appearance of brightness, efficiency, and justice. Socialism becomes an elaborate game of pretense which everyone sees through but which everyone is compelled to play…. The pretense becomes a basis against which to assess reality. If we have to paint a world of efficiency and equality—as we do in our [factory] production meetings, our brigade competitions, our elections—we become more sensitive to and outraged by inefficiency and inequality. (p. 129) In the book he coauthored with Lukács, The Radiant Past: Ideology and Reality in Hungary’s Road to Capitalism (1992), Burawoy, a U.S. 
sociologist who spent time working in socialist enterprises in Poland and Hungary as part of his study of socialist economies, recounts an instance of such a “painting ritual” when the Hungarian prime minister makes a visit to the Lenin Steel Works. Areas of the factory to be visited are literally painted over in bright hues, debris is swept up, and workers halt their productive tasks to create an “appearance” of productivity, for the prime minister “had to be convinced that the Lenin Steel Works was at the forefront of building socialism” (p. 127). For critics of socialism, the case of Hungarian steel in the socialist period highlights a fundamental problem of the system as it was practiced: Its weaknesses were made more rather than less apparent by the “painting” rituals that asked workers to pretend socialism was fundamentally efficient and equal when their own experience showed it was not. This was among the flaws that led to the collapse of socialism in Eastern Europe and the Soviet Union.

WORKING ON AND OFF THE BOOKS

Work consists of any human effort that adds something of value to the goods and services that are available to others. By this definition, work includes paid labor in the factory or office, unpaid labor at home, and volunteer work in the community. Workers include rock stars and street musicians, corporate executives and prostitutes, nurses and babysitters. Almost the only activities excluded from this definition of work are those that individuals conduct purely for their own pleasure or benefit, such as pursuing a hobby or playing a musical instrument for fun. The concept of work as consisting exclusively of labor that is sold for a wage is a relatively recent development of modern industrial society. Throughout most of human history, work was not ordinarily paid, at least in monetary terms.
In agricultural societies, subsistence farming was common: Families often worked their own plots of land and participated with others in the community in a barter economy, based on the exchange of goods and services rather than money. With the advent of industrial society, however, work shifted largely to the economic setting of a formal, paid, and regulated job. Today, work for pay occurs in two markets: the formal economy and the informal (or underground) economy. We look at each of these next.

Day laborers often work for low pay in unregulated conditions. Some of them are undocumented migrants. Their status and language barriers make it challenging for them to report dangerous or abusive conditions of work.

THE FORMAL ECONOMY

The formal economy consists of all work-related activities that provide income and are regulated by government agencies. It includes work for wages and salaries, as well as self-employment; it is what people ordinarily have in mind when they refer to work. It has grown in importance since the Industrial Revolution. Indeed, one of the chief functions of government in industrial society is regulation of the formal economy, which contributes to the shape and character of the labor market (Sassen, 1991; Tilly & Tilly, 1994). In the United States, as in most countries of the world, private businesses are supposed to register with governmental entities ranging from tax bureaus (the Internal Revenue Service) to state and local licensing agencies. Whether they work in the private or the public sector, U.S. workers must pay income, Medicare, and Social Security taxes on their earnings, and employers are expected to withhold such taxes on their behalf and report employee earnings to the government. Numerous agencies regulate wages and working conditions, occupational health and safety, the environmental effects of business activities, product quality, and relationships among firms.
As of mid-2014, about two thirds (63%) of all people in the United States over 16 years of age who were not in the armed forces (which is also, of course, an employment sector), prisons, or mental hospitals were in the labor force—about 6% were unemployed (U.S. Bureau of Labor Statistics, 2014b). When statistical indicators such as employment and unemployment are tabulated by government entities such as the Bureau of Labor Statistics, they rely on data from the formal economy. Work is also done in the informal economy, which is not included in BLS numbers.

THE INFORMAL OR UNDERGROUND ECONOMY

A notable amount of income-generating work avoids formal regulation and is not organized around officially recognized jobs. This part of the economy is termed the informal (or underground) economy; it includes all income-generating activities that are not reported to the government as required by law. Some of these sources of income are illegal, such as selling guns or pirated DVDs, drug dealing, and sex trafficking. Other work activities are not illegal but still operate under the government radar. These include selling goods at garage sales and on Internet auction sites such as eBay; housecleaning, gardening, and babysitting for employers who pay without reporting the transactions to the government; and informal catering of neighborhood events for unreported pay. In one way or another, most of us participate in the informal economy at some point in our lives.

The term underground economy may bring up images of drugs, weapons, and stolen passports, but this type of economy involves much needed products, like food. Los Angeles and other major cities have unlicensed and unregistered vendors providing food and services for residents who demand them, which has both positive and negative consequences.

Workers’ reasons for participating in the informal economy are varied.
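The two figures cited above rest on different denominators: the labor force participation rate divides the labor force (employed plus unemployed) by the civilian noninstitutional population 16 and older, while the unemployment rate divides the unemployed by the labor force alone, not by the whole population. A minimal sketch of both calculations, using made-up counts rather than actual BLS data, sized only to land near the 63% and 6% figures:

```python
def labor_force_stats(employed, unemployed, civilian_population):
    """Return (participation_rate, unemployment_rate) in percent.

    civilian_population is the noninstitutional population 16 and older,
    the base the BLS uses; note the unemployment rate is computed over
    the labor force, not over the whole population.
    """
    labor_force = employed + unemployed
    participation_rate = 100 * labor_force / civilian_population
    unemployment_rate = 100 * unemployed / labor_force
    return participation_rate, unemployment_rate

# Illustrative counts only (not actual BLS figures)
participation, unemployment = labor_force_stats(
    employed=148_000_000,
    unemployed=9_500_000,
    civilian_population=248_000_000,
)
print(round(participation, 1))  # 63.5
print(round(unemployment, 1))   # 6.0
```

Because the denominators differ, a person who stops searching for work leaves the labor force entirely, which can lower the unemployment rate without anyone finding a job.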
A worker with a regular job may take up a second job “off the books” to make up a deficit in his or her budget, or someone may be compelled to work outside the legal economy because of his or her undocumented immigration status. Others may find the shadow economy more profitable (Schneider & Enste, 2002). Many of the underground economy’s workers occupy the lowest economic rungs of society and are likely to be pursuing basic survival rather than untold riches. They are disproportionately low income, female, and immigrant, and the jobs they do lack the protections that come with many jobs in the formal economy, such as health care benefits, unemployment insurance, and job security. Among the employers in the illegal U.S. underground economy are unlicensed “sweatshop” factories that make clothing, furniture, and other consumer goods (Castells & Portes, 1989; Sassen, 1991). Factories in the United States are competing with factories around the world where workers are paid a fraction of U.S. wages. To remain competitive (or to raise profits), U.S. firms sometimes subcontract their labor to low-cost sweatshop firms in the informal sector. Requirements to comply with environmental laws, meet health and safety standards, make contributions to Social Security and other social benefit programs, and pay taxes lead some businesses to seek to establish themselves off the books. Small businesses may operate without licenses, and large firms may illegally subcontract out some of their labor to smaller, unlicensed ones. While the United States has a broad informal sector, researchers estimate that the underground economy is substantially larger in developing countries than in advanced economies such as those of Western Europe and the United States.
For example, a report commissioned by the International Monetary Fund estimates that in developing nations (Nigeria, Singapore, Bolivia, and others) the informal economy accounts for 35% to 44% of gross domestic product (Schneider & Enste, 2002).

CONSUMERS, CONSUMPTION, AND THE U.S. ECONOMY

As we have seen in this chapter, production has been an important part of the rise of modern capitalist economies. In modern industrial countries, including the United States, however, production has receded in importance. The economies of advanced capitalist states today rely heavily on consumption to fuel their continued growth. Today, an estimated 70% of the U.S. economy is linked to consumption. In this section we examine consumption and its relationship to the economy, as well as to our lives as consumers.

THEORIZING THE MEANS OF CONSUMPTION

“Cathedrals of consumption” like the casinos and hotels of Las Vegas entice consumers to spend with bright, fun, and fantastical venues.

Karl Marx is well known for his concept of the means of production (defined in Chapter 1), which forms a basis for his theorizing on capitalism, exploitation, and class. While the early industrial era in which Marx wrote influenced his focus on production, he also sought to understand consumption in 19th-century capitalism. Marx defined the term means of consumption as “commodities that possess a form in which they enter individual consumption of the capitalist and working class” (quoted in Ritzer, 1999, p. 56). Marx distinguished between the levels of consumption of different classes, suggesting that subsistence consumption (“necessary means of consumption”) characterizes the working class, whereas luxury consumption is the privilege of the exploiting capitalist class. In sum, Marx’s definition focused on the consumption of the end products of the exploitative production process.
PRIVATE LIVES, PUBLIC ISSUES

MUST WORK BE PAID TO BE ECONOMICALLY IMPORTANT?

Might the societal devaluation of the unpaid work parents (most often women) do in the home contribute to lower pay scales for “women’s work” in the paid labor market?

Being a parent means taking on a spectrum of tasks that are time-consuming and important for the economic, educational, and social care of the family: Housekeeping, child care, cooking, budget management, driving, and teaching are among the key jobs of modern parents. Much of this work is done by women. Today there are fewer than 215,000 stay-at-home fathers in the United States, while more than 5 million mothers with children under age 15 are outside the paid labor force as the primary caregivers of their children (Livingston, 2014). Their work is often treated as marginal. Historian of the family Stephanie Coontz (2012) points out that while men are increasingly sharing the burdens of housework, the “real gender inequality in marriage stems from the tendency to regard women as the default parent, the one who, in the absence of family-friendly work policies, is expected to adjust her paid work to shoulder the brunt of domestic responsibilities. Women who quit their jobs or cut their hours suffer a wage penalty that widens over the years, even if they return to the job market and work continuously for two more decades.” The sociological imagination enables us to understand how ignoring unpaid labor systematically discounts many of women’s contributions to society and fails to recognize the consequences and real economic value of work that is not done for pay. The website Salary.com offers an annual calculation of the value of the “mom job,” basing its figures on the 10 “typical” job functions of mothers (and including 40 full-time hours plus 56.5 hours of overtime).
Using a survey of more than 15,000 mothers, the site’s researchers determined that in 2014 the time spent in these tasks, if compensated, was worth $118,905 for a stay-at-home mother and $70,107 for a mother who worked outside the home (Salary.com, 2014). Unpaid work in the home or community (such as volunteering at schools) can be critical to the well-being of families and society. It also contributes, if indirectly, to the macro-level economy: Consider that a stay-at-home parent frees up the other parent to work in the formal economy.

THINK IT THROUGH
Why is it conventional in our society to say that stay-at-home parents (mothers or fathers) “don’t work”? What explains this devaluation of domestic tasks? Is there another phrase that could be used that recognizes the value of their labor?

Sociologist George Ritzer has expanded Marx’s concept. He distinguishes between the end product (that is, a consumer good such as a pair of stylish dress shoes, a new car, or a gambling opportunity) and the means of consumption that allow us to obtain the good (for instance, the shopping mall, the cruise ship, or the Las Vegas casino). For Ritzer (1999), the means of consumption are “those things that make it possible for people to acquire goods and services and for the same people to be controlled and exploited as consumers” (p. 57). For example, a venue such as a mall offers the consumer buying options and opportunities, but it is also part of a system of consumer control through which consumers are seduced into buying what they do not need, thinking they need what they merely want, and spending beyond their means.
One of Disney World’s long-standing attractions is the Jungle Cruise, described on the website of the park as an adventure cruise of the most “exotic and ‘dangerous’ rivers in Asia, Africa, and South America,” although it is a virtually danger-free boat trip on a man-made waterway populated by plastic figures. Modern consumers, George Ritzer suggests, are buying the fantasy rather than the reality of such experiences.

Ritzer’s concept of the means of consumption also integrates German sociologist Max Weber’s ideas about rationalization, enchantment, and disenchantment. Briefly, the Weberian perspective holds that premodern societies were more “enchanted” than modern societies. That is, societies or communities, which were often small and homogeneous, were grounded in ideas that he characterized as magical and mystical. Individuals and groups defined and pursued goals based on abstract teachings such as the ideals and ideas of a religion rather than on detailed, specific rules and regulations. Even early capitalism was linked to an enchanted world. Weber theorized that early Protestantism (and Calvinism in particular) embraced values of thrift, efficiency, and hard work, and viewed economic success as an indicator of divine salvation. This so-called Protestant ethic, which he identified as characteristic in Northern Europe, laid foundations for the rise of capitalism, though capitalism eventually shed its religious aspects (Weber, 1904–1905/2002). Modern capitalism lacks authentic enchantment: It is a highly rationalized system characterized by efficiency, predictability, and the pursuit of profit (rather than divine salvation!). This heavily bureaucratized and regulation-reliant environment is virtually devoid of spontaneity, spirituality, or surprise.
Ritzer argues, however, that enchantment is important for controlling consumers, because consumption is, at least in part, a response to a fantasy about the item or service being consumed. Consequently, disenchanted structures must be “reenchanted” through spectacle and simulation (Baudrillard, 1981), which draw in consumers. For instance, Disney simulates a kind of childhood dreamworld (think of the Magic Kingdom), Niketown is a sports fantasy, and Las Vegas aims to bring to a single city the dazzle of Egyptian pyramids, New York’s towering urban structures, and Paris’s Eiffel Tower. In such a context, the consumer is not just buying sneakers (say, at Niketown) but embracing a broader fantasy about athletic achievement. In sum, from Ritzer’s perspective, the means of consumption are a modern instrument of control not of the worker but of the consumer, who is enchanted, led to believe that he or she “needs” certain goods, and given optimal—sometimes nearly inescapable—avenues for acquisition, such as malls with long hallways and few exits to maximize the number of shops a consumer must pass before exiting.

A HISTORICAL PERSPECTIVE ON CONSUMPTION

Consumer society is a political, social, and economic creation. Consider, for instance, that during World War II, the U.S. government asked its citizen-consumers to serve the greater good by reducing consumption. In contrast, in the wake of the terror attacks on the United States in 2001 and the wars that followed, citizen-consumers were encouraged to spend more money to stimulate the economy. Former secretary of labor Robert Reich termed this appeal for consumption “market patriotism.” Taking a broader perspective, economist Juliet Schor (1998) argues that consumption patterns and the dramatic growth of consumption in the United States are heavily driven by Americans’ reliance on reference groups. That is, consumers compare themselves and their consumption to the reference groups in their social environments.
Significantly, says Schor, those reference groups have changed. In the 1950s, suburban middle-class consumers knew and emulated their neighbors. The substantial number of women outside the paid workforce meant that neighbors were more aware of what others were doing, wearing, and driving. By the 1970s, more women were moving into the workforce; consequently, fewer people knew their neighbors, and the workplace became an important source of reference groups. In contrast to the economically homogeneous neighborhood, however, the workplace is heterogeneous. Low five-figure wages coexist in the same space as high six-figure salaries, and coworkers may aspire upward and far beyond their means. The 1980s, 1990s, and 2000s brought further upscaling of ambitions and spending, as television sold a powerful picture of consumer decadence masked as “normal life.” In the 1990s, young consumers embraced media referents such as the television sitcom Friends, about a group of young professionals living in lavish New York City apartments, wearing ever-changing stylish wardrobes, and casually consuming the pleasures around them. Lavish consumption came to seem normal rather than unreachable for people of average incomes (Schor, 1998). A new generation is now exposed to “reality” TV shows including The Real Housewives of New Jersey (and other locales), Keeping Up With the Kardashians, Million Dollar Decorators, and Say Yes to the Dress, which emphasize the benefits of conspicuous consumption among celebrities and “ordinary” people alike. A somewhat different perspective on how the U.S. 
consumer economy has been built and sustained is offered by Robert Reich (2010), who believes the consumption-driven economy originated in a “basic bargain” between workers and employers that offered good pay in sectors such as manufacturing, creating a consumer class that could afford to spend (recall Henry Ford’s decision to pay above-average wages so his autoworkers could buy cars). Reich notes that, until about 1970, pay rose more quickly in the middle- and lower-income segments of the U.S. labor pool than it did at the top. Consumption grew with the standard of living. The real value of workers’ pay stagnated in the 1970s, however, profoundly affected by forces that included globalization and automation. While income rose at the very top of the economic ladder, in the middle and lower strata it stalled. Consumption continued to rise, however, driven not by gains in income but by the growing credit markets, which offered new ways to spend with or without cash on hand. Next we review the consequences of that shift to credit-driven spending.

CREDIT: DEBT AND MORE DEBT

Do you have a credit card? Do you carry debt? If you answered yes to either or both of these questions, you are not alone. In 2013, U.S. consumers held almost 392 million credit cards, and their combined credit card debt was about $870 billion (Ray & Ghahremani, 2014). Each credit card holder had an average of 3.7 cards and owed about $8,222, a huge increase from the 1990 average of roughly $2,966 (Hoffman, Brinker, & Roberts, 2010; Ray & Ghahremani, 2014). By one estimate, the debt payments of nearly 15% of U.S. households exceed 40% of the households’ income (Bricker, Kennickell, Moore, & Sabelhaus, 2012). Bankruptcies have also become more common. In the early years of the 21st century, “Americans were more likely to go bankrupt than to get divorced” (Quiggin, 2010, p. 26).
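The credit-card figures above can be checked for internal consistency with simple arithmetic. The sketch below uses only the numbers reported in the text; the number of card holders is derived, not reported, and is included purely as an illustration:

```python
# Figures reported for 2013 (Ray & Ghahremani, 2014)
total_debt = 870e9          # combined credit card debt, in dollars
avg_debt_per_holder = 8222  # average amount owed per card holder
avg_cards_per_holder = 3.7  # average number of cards per holder
total_cards = 392e6         # credit cards in circulation

# Derived: implied number of card holders, and the card count they imply
holders = total_debt / avg_debt_per_holder       # roughly 105.8 million people
implied_cards = holders * avg_cards_per_holder   # roughly 391.5 million cards

print(round(holders / 1e6, 1), round(implied_cards / 1e6, 1))
```

The implied card count comes out within about 1% of the reported 392 million, so the averages and totals describe the same underlying population of borrowers.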
Consumption (or overconsumption) was not the direct cause of bankruptcies, most of which were precipitated by the loss of a job or unexpected health care costs, but the “culture of indebtedness,” the widespread tendency to owe a great deal of money, left people less able to bear any added financial stress. While financial reforms passed into law in 2005 made the declaration of bankruptcy more onerous and less common, the financial crisis that began in 2007 saw another rise in bankruptcies: More than 1.5 million bankruptcy filings were made in 2010 (Administrative Office of the United States Courts, 2011). Humorist Will Rogers commented during the Great Depression that the United States was the first country to drive to the poorhouse in an automobile (cited in Sullivan, Warren, & Westbrook, 2000, p. 3). Car debt was a particularly notable burden during the late 1990s and early 2000s, when many U.S. drivers opted to purchase high-end cars, especially aggressively advertised sport utility vehicles. The massive Hummer SUV, manufactured in three different styles by General Motors, became, for many, a symbol of the trend toward bigger and more ostentatious vehicles. Rising gasoline prices through the 2000s, coupled with the economic recession that struck in 2007, reversed the trend, and consumers began opting for more fuel-efficient sedans, hybrid vehicles, and smaller crossover vehicles that combine features of cars and SUVs. As more people joined the ranks of the long-term unemployed and financial insecurities grew, many households cut back on spending. Overall, the average U.S. consumer spent nearly 3% less in 2009 than in 2008; spending was down on consumer items such as meals away from home, apparel, housing, and transportation (U.S. Bureau of Labor Statistics, 2011b). These numbers may seem insignificant, but they represent millions of dollars that would normally have flowed into small and large businesses alike. 
By the middle of 2013, the picture had changed only slightly, with overall consumer spending moderately greater than the year before, but spending on groceries, apparel, and some other services decreasing (U.S. Bureau of Labor Statistics, 2014a).

GLOBALIZATION AND THE NEW ECONOMIC ORDER

The U.S. economic order today has been powerfully affected by the emergence of a unified global economic system. In fact, some writers have argued that it no longer makes sense to think of the United States—or any other country—as an isolated economic society at all: In many respects, we can regard the world as a single economic unit (Friedman, 2005). We conclude this chapter by examining how global economic interdependence and the global labor market have affected work and economic life in the United States and by considering the possible shape of a future green global economy.

GLOBAL ECONOMIC INTERDEPENDENCE

The U.S. economy is interwoven with the economies of other countries. Many goods made in the United States are sold in foreign markets, while many goods bought by U.S. consumers are made by foreign workers. Economic integration is multidimensional and can be “shallow” or “deep” (Dicken, 1998). Shallow integration is more characteristic of the globalization of several decades past, when a single product (say, a German automobile) was made in a single country and that country’s government would regulate its export, as well as the import of other goods into the country. Countries did business with one another, but their ties were looser and less interdependent.
Deep integration is characteristic of the modern global economy, in which corporations are often multinational rather than just national, products are made of raw materials or parts from a spectrum of countries, and a corporation’s management or engineering may be headquartered in one country while the sales force or customer service contingent may reside anywhere from Denver to Delhi (Figure 15.5). Familiar companies such as Nike, Apple, and Ford are among the many with globalized labor forces.

A GLOBAL MARKET FOR LABOR

As a result of economic globalization, a growing number of U.S. workers are competing with workers all over the world. This trend may affect the job prospects of all workers, whether they hold only high school diplomas or advanced degrees. There are substantial wage differences between countries: While U.S. wages are intermediate among industrial countries, they are considerably higher than those in developing countries (Figure 15.6). Jobs will increasingly go wherever on the planet suitable workers can be found. Low labor costs, the decline or absence of labor unions, and governments that enforce worker compliance through repressive measures are all factors influencing the globalization of labor. Some sociologists call this trend a “race to the bottom” (Bonacich & Appelbaum, 2000), in which companies seeking to maximize profits chase opportunities to locate wherever conditions are most likely to result in the lowest costs. This has been the case in, for example, apparel manufacturing; much of the clothing we buy and wear today, including brands sold at popular shops such as H&M and Zara, is made by young workers abroad who labor for very low wages under poor working conditions.
A 2010 New York Times article pointed out that Bangladesh was challenging China as a low-wage destination for manufacturers and was at that time the world’s third-largest garment manufacturer (Bajaj, 2010). In 2013, a government-appointed panel in Bangladesh voted to raise the minimum wage for garment workers; under the plan, it would rise from a monthly minimum of about $38 to $66. While this represents a state response to worker protests against unsafe and exploitative conditions of work, Bangladeshi garment manufacturing wages remain the lowest in the world. Few workers in Bangladesh would have the means to purchase the products that they labor to produce.

FIGURE 15.5 Global Origins of Boeing 787 Parts

FIGURE 15.6 Hourly Compensation Among Production Workers in Select Countries, 2010. SOURCE: International Labour Organization. (2012, December 7). A snapshot of comparative pay levels around the world.

We have seen above that the emergence of an increasingly global labor market has resulted in job losses and declining wages in many U.S. industries, including auto and apparel manufacturing. We may be witnessing the emergence of a global wage, equivalent to the lowest worldwide cost of obtaining comparable labor for a particular task once the costs of operating at a distance are taken into account. For virtually any job, this wage is far lower than U.S. workers are accustomed to receiving. The global labor market is not limited to manufacturing. A global market is emerging for a wide range of professional and technical occupations as well. In fact, some of the “knowledge worker” jobs touted as the jobs of the future may be among the most vulnerable to globalization. Unlike cars or clothing, engineering designs and computer programs can move around the globe electronically at no cost, and transportation time is, for all practical purposes, nonexistent.
Electronic engineering, computer programming, data entry, accounting, insurance claims processing, and other specialized services, such as medical image reading, can now be inexpensively purchased in such low-wage countries as India, Malaysia, South Korea, China, and the Philippines, where workers communicate digitally with employers in the United States. Also among those selling their labor on the new global market are highly educated professionals from postsocialist countries such as Estonia and Hungary, both of which have full literacy, educated populations, and many individuals fluent in English and other world languages. While we often associate cheap labor with the low-wage factories of developing countries, these post-Soviet European states also advertise their educated workers (on their investment-promoting websites, for instance) as “cheap labor.” Indeed, much global labor is low in cost. In 2010, the normal hourly wage in well-educated Estonia was $6.10; in 2008 in rapidly developing China it was $1.36; in India, also a rapidly developing country, the wage per hour as of 2007 was a paltry $1.17 (International Labour Organization, 2012). While globalization has had some negative effects on earnings for American workers in the lower- to middle-income ranges, corporate executive salaries have skyrocketed. Even in the midst of a plummeting economy (2007–2009), as the federal government was distributing billions of bailout dollars to corporations, banks, and investment firms to prevent them from failing, chief executive officers (CEOs) in the United States were bringing home multiple millions of dollars in compensation (Table 15.2). While median CEO pay declined somewhat during the recession period, it has rebounded in the postcrisis period, reaching an average of more than $12 million in 2012 (Liberto, 2012).
Figure 15.7 compares average CEO compensation to the compensation of average workers in a variety of industries. Notably, in 1980 CEOs made about 42 times what the average worker earned; in 2012, average CEO pay was more than 350 times greater than that of the average worker (who was estimated to be earning just over $34,600 that year; Liberto, 2012).

IS THE FUTURE OF THE GLOBAL ECONOMY GREEN?

Businesses, consumers, and federal officials alike, eager to promote job growth and greater energy independence, are increasingly interested in developing a new “green” economic sector in the United States. In his first State of the Union address, President Barack Obama suggested that “the country that harnesses the power of clean, renewable energy will lead the 21st century.” In his second State of the Union address, he urged the creation of clean-energy facilities, rebates for those who make their homes more energy efficient, and tax breaks to encourage businesses to keep the majority of their labor forces within the United States (Peters, 2013).

TABLE 15.2 Annual Compensation of Selected CEOs, 2012. NOTE: Annual compensation includes the value of executives’ base salary, value of stock and option awards, and other financial compensation vehicles. SOURCE: Data from “20 Top-Paid CEOs,” CNN Money.

FIGURE 15.7 Ratio of Average CEO Pay to Average Worker Pay in Selected Industries, 1965–2012. SOURCE: Adapted from “Pay Gap by Industry Sector” in Disclosed: The Pay Gap Between CEOs and Employees, by Elliot Blair Smith and Phil Kuntz, Bloomberg Business Week, May 2, 2013.
The drive to develop, manufacture, and implement new energy-efficient and eco-friendly products and services has been swift and encompasses everything from hybrid, electric, and fuel cell automobiles to solar panels, wind turbines, and alternative fuels (such as ethanol, biodiesel, methanol, hydrogen, and liquefied natural gas), as well as environmentally friendly versions of consumer products such as dishwashers, stoves, washing machines and clothes dryers, cleaning products, toothpaste, paper goods, clothing, and food. The pursuit of a “greener” economy is, arguably, driven by both a desire on the part of companies to cash in on a global trend and a recognition of the need to curb the effects of pollution and climate change on the planet. The green economy also holds the potential to generate jobs. According to a 2008 report by the United Nations Environment Programme, 2.3 million new jobs in renewable energy were created in a span of several years; 406,000 of those were in the United States. The solar power system developer and installer groSolar saw its labor force increase from 2 when it was founded in 1998 to 155 in 2010; likewise, in just 3 years (2006–2009) its revenues increased 389%, from $11.5 million in 2006 to just over $56 million in 2009 (groSolar, 2010; Inc., 2010). In 2011, the U.S. Bureau of Labor Statistics found green jobs to be growing at a rate four times faster than all other industries combined (Lee, 2013). Relative to other nations, especially China, Brazil, the United Kingdom, Germany, and Spain, the United States has awakened slowly to the potential of the green economy. China currently leads the world in green (or “clean”) industry investments, with an annual commitment of more than $100 billion. 
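The groSolar growth figure above can be reproduced with a quick calculation. In this sketch, the 2009 revenue of $56.2 million is an assumption (the text says only “just over $56 million”), chosen to show how the cited 389% increase is obtained:

```python
# groSolar revenues as reported in the text
rev_2006 = 11.5e6        # 2006 revenue: $11.5 million
rev_2009 = 56.2e6        # 2009 revenue: assumed, "just over $56 million"

# Percentage growth over the 3-year span (2006-2009)
growth_pct = (rev_2009 - rev_2006) / rev_2006 * 100
print(round(growth_pct))  # prints 389
```

The same formula, (new − old) / old × 100, underlies all the percentage-change figures cited in this section.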
China is the world leader in the production of solar cells (the main component of solar panels), harnesses more hydropower than any other nation, and in the very near future is poised to surpass other countries in wind power production (Boudreau, 2010). Paradoxically perhaps, it is also a flagrant polluter—the dramatic levels of air pollution in Beijing led some athletes competing in the 2008 Summer Olympic Games, hosted by the city, to consider dropping out. In recent years, some major urban centers in China have experienced “smog emergencies” severe enough to shut down schools and airports.

TECHNOLOGY & SOCIETY: THE DIGITAL SWEATSHOP

Apple and other global companies have been under scrutiny for the hazardous working conditions in some offshore factories. Some workers have responded with strikes. A small number have committed suicide on factory grounds. The exploitation of labor is a fundamental characteristic of industrial capitalism, according to Karl Marx. While in Western countries today few industrial sites like those he and Upton Sinclair described exist, factories with strikingly poor conditions and large pools of low-wage labor flourish in other countries, including China and Thailand. Workers in developing countries bear many of the costs of the high technology we enjoy in the form of advanced computers, phones, and other necessities and amenities. In January 2012, the New York Times published an investigative article about Foxconn, one of Apple’s key suppliers in China. The article points out that while “Apple and its high-technology peers—as well as dozens of other American industries—have achieved a pace of innovation nearly unmatched in modern history…. 
the workers assembling iPhones, iPads and other devices often labor in harsh conditions, according to employees inside those plants, worker advocates and documents published by companies themselves” (Duhigg & Barboza, 2012). Among documented issues noted by workers and their advocates are excessive employee overtime, difficult work conditions that include long hours standing, and work injuries resulting from poisonous chemicals used in the manufacture and cleaning of products such as iPads. The article adds that “under-age workers have helped build Apple’s products, and the company’s suppliers have improperly disposed of hazardous waste and falsified records.” Apple representatives argue that the company has conducted investigations into these conditions and has standards to which its manufacturers must conform (Duhigg & Greenhouse, 2012). Like any modern capitalist enterprise, however, Apple exists in a deeply competitive economic environment, and holding costs down is a path to greater profit. Tightly controlled and rapid manufacturing is also key to maintaining the innovations that drive the technology marketplace. The New York Times article points out that Apple is one of the most admired U.S. brands. A survey the newspaper conducted in 2011 found that fully 56% of respondents could not think of anything negative about the company. Negative opinions were far more likely to be linked to the cost of Apple products (14%) than to its overseas labor practices (2%). Without pressure for change from consumers or companies themselves, the voices of laborers are least likely to be heard and their interests least likely to be realized. As an Apple executive interviewed for the story noted, “You can either manufacture in comfortable, worker-friendly factories, or you can reinvent the product every year, and make it better and faster and cheaper, which requires factories that seem harsh by American standards…. 
And right now, customers care more about a new iPhone than working conditions in China” (quoted in Duhigg & Barboza, 2012).

THINK IT THROUGH

Consider the interests that come into play in this environment: Apple and its manufacturers are interested in low labor costs, efficient and effective production, and high profits. Consumers are interested in new gadgets and technologies to increase their productivity and pleasure. Workers are interested in safe working conditions and good pay. Can all these interests be realized? What do you think?

The green revolution is here, and you are living in the middle of it. Today’s economy will demand that companies and countries alike invest in renewable, cleaner energy that will help reduce their carbon footprint and slow the negative effects of climate change and depletion of earth’s natural resources. In fact, China has assumed such a dominant position in the new green industrial revolution, thanks to strong policies and production benchmarks set by the nation’s powerful centralized government, that some U.S. energy experts have expressed fears that if the United States does not soon assume a more dominant position in green technologies, industries, jobs, and research and development, it will lose the opportunity to do so and become dependent on green energy imported from foreign countries, much as it now relies on the importation of oil and fossil fuels. Already 25 of the top 30 green industries are located outside the United States, in countries where “green” has become a requirement, not an option (Boudreau, 2010).

WHY STUDY ECONOMIC SYSTEMS AND TRENDS?

Whether you were born in the 1960s, the 1970s, the 1980s, the 1990s, or earlier, the U.S. economy has experienced some dramatic changes in your lifetime. In 1953, for instance, manufacturing accounted for 30% of the U.S. gross domestic product (Blinder, 2006), and about a third of the workforce was unionized (Reich, 2010).
By the mid-2000s, that share of jobs in manufacturing had declined to about 13%, and the proportion of unionized workers had fallen to 12.3%, or 15.3 million people, down from 20%, or 17 million people, in 1980, the first year for which comparable data are available (U.S. Bureau of Labor Statistics, 2013c). In the 1960s, more than one third of the U.S. nonagricultural workforce was engaged in manufacturing; today, about two thirds of American workers are in the service sector. Since that time, the service sector’s share of jobs has grown by nearly 20%, a massive change that has brought both new opportunities and new challenges, particularly to those workers without higher education. In the latter years of the 1970s, the United States experienced a steep rise in imports—U.S.-made goods and U.S. workers were increasingly forced to compete with lower-priced goods and lower-priced labor. From this period forward, the share of goods made in the United States and the number of workers making them fell (Uchitelle, 2007). Starting in the early 1980s, U.S. wages, which had increased for decades, stagnated. Former secretary of labor Robert Reich (2010) writes that “contrary to popular mythology, trade and technology have not really reduced the number of jobs available to Americans…. The real problem [is] that the new [jobs] they got often didn’t pay as well as the ones they lost” (pp. 53–54). Today, wages in the middle and lower segments of the economic hierarchy remain flat. By the 1990s, advances in computer technologies brought a new wave of outsourcing, not of manufacturing jobs, many of which had already moved offshore, but of information technology and, increasingly, customer service jobs (Erber & Sayed-Ahmed, 2005). Countries such as India, with large populations of educated and English-speaking workers, benefited from American firms’ pursuit of lower-cost labor not only in manufacturing but in service as well.
This movement of technology jobs and other jobs that, as economist David Autor (2010) points out, can be done “over the wires” continues unabated today. The shape of our economy and our economic fortunes has changed in myriad ways and continues to do so. Is the growth of “green” industries the next important shift in the world economy? How will globalization affect today’s high-skill, high-wage jobs in the United States and elsewhere? Can the fortunes of the declining middle class be reversed? Understanding economic patterns and trends of the past and present is critical to gaining a perspective on how we as individuals and as a country can both prepare for and shape our economic future.

WHAT CAN I DO WITH A SOCIOLOGY DEGREE?

SKILLS AND CAREERS: DATA AND INFORMATION LITERACY

Paradoxically, in the modern world we are surrounded by information, but we are not always truly well informed. The abilities to distinguish between credible and questionable data, to seek out solid and reliable sources of information, and to use those sources wisely are critical skills in our “information society”—and in today’s job market. Data and information literacy encompasses the skills to identify the information needed to understand an issue or problem, to seek out credible and accurate sources of information, to recognize what a body of data illuminates and what it obscures, and to apply the information to a description and analysis of the issue at hand. Consider a key topic covered in this chapter—unemployment. If you were charged with gathering data on unemployment in the United States or unemployment among particular demographic groups in the United States, where would you begin? Many of us turn to Google or other popular search engines to guide our research.
While there is value in using these large search engines, experienced researchers seeking to gather unemployment data are more likely to turn to credible and frequently updated sources of information such as the U.S. Bureau of Labor Statistics (BLS; www.bls.gov). The BLS provides solid data that can be broken out by demographic categories, states, and other variables. It also provides researchers with information on how the data are gathered, giving critical insight into what the data illuminate and what they obscure. For example, in this chapter’s Behind the Numbers box on page 388 we learned that the BLS unemployment figures do not take into account the institutionalized population in the United States. With more than 2 million people—many of them men of color—incarcerated, this exclusion actually results in an incomplete picture of the dire economic position of this demographic group. This does not mean that the data generated by the BLS are not useful. What it means is that we have collected good data—and recognized its limitations. This recognition can help us to use the data more effectively and perhaps to fill in the gaps with information gathered from other sources, such as the U.S. Bureau of Justice Statistics (www.bjs.gov), which provides information on issues related to criminal justice. As a sociology major you will be asked to do research on social issues and problems, and you will learn to use information sources to gather good data. You will have the opportunity to develop key skills that enable you to be a solid researcher and a critical consumer of information. As a student, you need the skills of data and information literacy to complete tasks such as writing research papers and preparing class presentations. As a consumer, you employ these skills to guide your decisions about the purchase of a home or a vehicle. 
As a citizen, you need the tools of data and information literacy to make informed political choices about which candidates or causes to support. Data and information literacy is no less significant in the world of work. It is a critical part of jobs that involve tasks such as research, the assessment of problems and identification of solutions, and the rigorous gathering and organization of bodies of data that inform policy, marketing, or training, among others. This is a skill of significant value in occupational fields such as business and management, marketing and advertising, health care administration, information technology, government, education, research, politics and campaign management, polling, insurance, and management consulting. Among the job titles one finds in these fields are management/data/information analyst, market researcher, consumer survey adviser, demographic analyst, mass communications analyst, job analyst/labor force and human resources analyst, life quality researcher, policy analyst, program director, research librarian, survey research technician/specialist, and social survey director.

THINK ABOUT CAREERS

How would you characterize your information literacy skills at this point? What would you like to learn in order to sharpen your skills? How would you explain the skills of data and information literacy to a potential employer in a field of interest to you?

SUMMARY

• The economy—the social institution that organizes the ways in which a society produces, distributes, and consumes goods and services—is one of the most important institutions in society.
• Three major technological revolutions in human history have brought radically new forms of economic organization. The first led to agriculture, the second to modern industry, and the third to the postindustrial society that characterizes the modern United States.
• Industrial society is characterized by automation, the modern factory, mass production, scientific management, and modern social classes. Postindustrial society is characterized by the use of computers, the increased importance of higher education for well-paying jobs, flexible forms of production, increased reliance on outsourcing, and the growth of the service economy.
• Although postindustrial society holds the promise of prosperity for people who work with ideas and information, automation and globalization have also allowed for new forms of exploitation of the global workforce and job loss and declining wages for some workers in manufacturing and other sectors.
• Capitalism and socialism are the two principal types of political economic systems that emerged with industrial society. While both are committed to higher standards of living through economic growth, they differ on the desirability of private property ownership and the appropriate role of government. Both systems have theoretical and practical strengths and weaknesses.
• Work consists of any human effort that adds something of value to goods and services that are available to others. Economists consider three broad categories of work: the formal economy, the informal (or underground) economy, and unpaid labor.
• The informal economy is an important part of the U.S. economy even though it does not appear in official labor statistics. Although in industrial societies the informal economy tends to diminish in importance, in recent years this process has reversed itself.
• In the modern economy, consumption replaces production as the most important economic process. The means of consumption, as defined by sociologist George Ritzer (1999), are “those things that make it possible for people to acquire goods and services and for the same people to be controlled and exploited as consumers” (p. 57).
A mall offers consumers buying options, but it also is part of a system of consumer control, as consumers are seduced into buying what they do not need.
• We acquire goods in part based on our consideration of reference groups. As consumption reference groups have changed in the past decades, U.S. consumers have increased spending and taken on a much larger debt load.
• Economic globalization is the result of many factors: technological advances that greatly increased the speed of communication and transportation while lowering their costs, increased educational attainment in low- and middle-income countries, and the opening of many national economies to the world capitalist market. Globalization has had profound effects on the U.S. economy.

KEY TERMS
economy, 383; goods, 383; services, 383; mass production, 384; reserve army of labor, 385; scientific management, 385; automation, 386; unemployed, 389; not in the labor force, 389; marginally attached to the labor force, 389; emotional labor, 389; capitalism, 390; public sector, 390; private sector, 390; communism, 392; socialism, 392; work, 393; barter economy, 394; formal economy, 394; informal (or underground) economy, 394; means of consumption, 396

p.405
DISCUSSION QUESTIONS
1. How is unemployment in the United States measured? What aspects of this phenomenon does the unemployment rate measure and what aspects does it fail to capture?
2. How have U.S. manufacturing jobs changed since the 1970s? What were key characteristics of those jobs in the middle of the 20th century and what are key characteristics today? How is the change socially significant?
3. What are the main differences between the formal economy and the informal economy? What are the similarities? What sociological factors explain the existence of the informal economy in the United States?
4. What are the main characteristics of a socialist economic system? Where have such systems been found in recent history? What are their strengths and weaknesses?
5.
What sociological factors explain the dramatic rise of consumer debt in the United States over the past three to four decades? Why should this be of concern to society and to policy makers?

Sharpen your skills with SAGE edge at edge.sagepub.com/chambliss2e A personalized approach to help you accomplish your course work goals in an easy-to-use learning environment.

16 HEALTH AND MEDICINE
© Richard T. Nowitz/Corbis
CHAPTER 16 Media Library
AUDIO: Internet Addiction; Ebola and the Making of Pariahs; Healthcare for Sexworkers
VIDEO: The Creation of “Madness”; NHLB Institute’s Global Health Initiative; Global Disparities in Heart Health
CQ RESEARCHER: Obesity in America; Teen Sex and Pregnancy
PACIFIC STANDARD MAGAZINE: Culture and Health Care; The Most Dangerous Idea in Mental Health
JOURNAL: Sociology and the Sick Role Concept; College Student Drug Use; Race, Pollution, and Health; Income Inequality and Health; HIV/AIDS in Poor Countries

p.407
IN THIS CHAPTER
Cultural Definitions of Health and Illness
Health Care in the United States
Sociology and Issues of Public Health in the United States
Developing a Sociology of HIV/AIDS
Global Issues in Health and Medicine
Why Should Sociologists Study Health?

WHAT DO YOU THINK?
1. Should universities and colleges regulate and punish the use of “study drugs”?
2. Do you think that use of the Internet, like the use of drugs or tobacco or alcohol, can become an addiction? If so, how should society respond?
3. Why are the poor more likely than their middle-class counterparts to be obese? What sociological factors might researchers look at to understand this correlation?

p.408
THE RISE OF “STUDY DRUG” USE AMONG U.S.
STUDENTS © Don Carstens—The Stock Connect/Science Faction/Corbis In the fall of 2011, Duke University in North Carolina added a new bullet point to its list of behaviors that constitute academic dishonesty: “the unauthorized use of prescription medication to enhance academic performance.” This policy, which so far has not been adopted at most other universities, represents Duke’s attempt to address student use and abuse of so-called study drugs, prescription medications intended to alleviate conditions such as attention-deficit/hyperactivity disorder (ADHD). Sales of prescription stimulants such as Ritalin and Adderall have surged in recent years: From 2006 to 2010, they increased from $4 billion to more than $7 billion. According to the Higher Education Research Institute, about 5% of incoming freshmen in 2011 had diagnosed ADHD (Johnson, 2011). But the proportion of students using the drugs prescribed to treat ADHD is larger. By one estimate, as many as a quarter of students on some college campuses have used the drugs in the past year (Trudeau, 2009). According to a recent study, 62% of college students will be offered such stimulants by their fourth year (Wild, 2013). Interestingly, a report on the problem argues that those using “study drugs” are more likely to perform below average academically and exhibit poor study habits. At the same time, the use of such substances—sometimes called “Ivy League crack”—is found among students at all levels of achievement. Students take the drugs to enhance their concentration and increase the time they can spend on tasks, though such use also carries the risk of irregular heartbeat, panic attacks, addiction, and even death (Johnson, 2011). With more students using them, concerns about the drugs’ legality and safety have been accompanied, as at Duke, by questions about how institutions of higher education should respond. 
Duke administrators believe that use of the drugs by students to whom they have not been prescribed constitutes cheating, a position supported by the university’s newspaper. A recent study suggests that many college students do not agree. In a survey of 1,200 male college freshmen, more respondents labeled the use of performance-enhancing drugs for sports (such as anabolic steroids) as “unethical” than condemned the use of stimulants for the purpose of improving grades (George Washington University, 2012). p.409 © Per-Anders Pettersson/Corbis Research has shown that people in every age group benefit physically and mentally from regular exercise. In this photo, an 84-year-old South African woman exercises outside her Soweto home. City authorities in Soweto have invested in parks and outdoor gyms to encourage residents to be active. Why would the use of drugs to enhance performance be judged more harshly in one context than in another? In an interview, Tonya Dodge, one of the authors of the study, suggested that “in sports there can be only one winner so misuse of a substance is less acceptable for achieving success than in academics. In academics, one’s success does not necessarily come at the expense of someone else, but in sports it does.” Do you agree with Duke University that the use of “study drugs” is a form of academic dishonesty? Or do you believe, as many students in the study suggested, that it is okay because one student’s improvements do not come at the expense of his or her peers? Should the drugs be more fully regulated because of their medical dangers—or because they constitute cheating? Or should their use be permitted for those willing to take the health risks? What kinds of policies, if any, should your university or college enact in response to this phenomenon? At one time, sociology and medicine went their separate ways. In the past half century, however, this situation has changed significantly (Cockerham & Glasser, 2000; Weitz, 2012).
Today it is widely accepted that sociology can contribute to our understanding of mental and physical health and illness, social group disparities in health, and public health issues such as smoking and obesity—and the growing use and abuse of “study drugs” by young people. In this chapter, we look at health and medicine from a sociological perspective. We focus on the important role that social forces play in health and health care in the United States, and we address issues at the crossroads of medicine, health, public policy, and sociology. We begin by distinguishing health from medicine. We then turn to an examination of the ways in which ideas about health and illness are socially constructed in culture. We look at health and safety, as well as the relationship between class status and health care and outcomes in the United States, delving into the important issue of health care access and reform in the United States. Further, we highlight sociological issues related to public health, including tobacco use, obesity, and teen pregnancy. We offer ideas about the development of a sociology of HIV/AIDS, a problem that continues to threaten lives, livelihoods, and entire communities and countries. We finish with a consideration of global issues of health and their sociological roots. CULTURAL DEFINITIONS OF HEALTH AND ILLNESS Although health and medicine are closely related, sociologists find it useful to distinguish between them. Health is the extent to which a person experiences a state of mental, physical, and social well-being. It encompasses not merely the absence of illness but a positive sense of soundness as well. This definition, put forth by the World Health Organization (WHO, 2005), draws attention to the interplay of psychological, physiological, and sociological factors in a person’s sense of well-being. It makes clear that excellent health cannot be achieved in purely physical terms. 
Health cannot be realized if the body is disease-free but the mind is troubled or the social environment is harmful. Medicine is an institutionalized system for the scientific diagnosis, treatment, and prevention of illness. It focuses on identifying and treating physiological and psychological conditions that prevent a person from achieving a state of normal health. In this effort medicine typically applies scientific knowledge derived from physical sciences such as chemistry, biology, and physics, as well as psychology. In the United States, we usually view medicine in terms of the failure of health: When people become ill, they seek medical advice to address the problem. Yet, as the above definition suggests, medicine and health can go hand in hand. The field of preventive medicine—medicine emphasizing a healthy lifestyle that will prevent poor health before it occurs—is of key interest to health professionals, patients, and policy makers. p.410 © Tino Soriano/National Geographic Society/Corbis Sociologist Talcott Parsons introduced the concept of the “sick role,” which offers sociologists the opportunity to think about the condition of being ill as not only a physical condition, but also a social status with particular characteristics and expectations. THE SICK ROLE Cultural definitions of sickness and health and their causes vary widely (Sagan, 1987). There are sick roles in every society. Sick roles are rooted in cultural definitions of the appropriate behavior of and response to people labeled as sick and are thus sociologically determined (Cockerham & Glasser, 2000; Parsons, 1951, 1975). The sick role of being mentally ill, for instance, varies enormously across time and space (Foucault, 1988). In some societies, mentally ill people have been seen as having unique spiritual qualities, while in others they have been labeled as victims of demonic possession.
In modern societies, mental illness is characterized sometimes as a disease with physiological antecedents and at other times as a sign of character weakness. One of the pioneers in the sociology of medicine, Talcott Parsons (1975), observed that, in the United States, the role of “sick person” includes the right to be excused from social responsibilities and other “normal” social roles. Parsons, whose theories reflect a functionalist perspective on social life, suggested that illness is both biologically and socially defined, because a “normal” state of functioning includes both physiological equilibrium and the ability to enact expected social roles and behaviors. Even if illness results from a lifestyle that puts a person at risk, society does not usually hold him or her accountable. On the other hand, the sick person has a societal obligation to try to get well and to seek competent medical help in order to do so. Failure to seek help can lead others to refuse to confer on the suffering individual the “benefits” of the sick role. The notion that a sick person is enacting a social role may remind us of Erving Goffman’s (1959) ideas about humans as actors on a social stage. Goffman suggested that life is like a dramatic play, with front and back stages, scripts for certain settings, costumes, and props. In order to define situations in ways that are favorable to ourselves, he argued, we all play roles on the “front stage” that conform to what is expected and that will show us in the best light. Imagine a doctor’s office as a stage: The doctor arrives wearing a “costume” (often a white lab coat and stethoscope). The patient also wears a “costume” (a cloth or paper gown rather than street clothing). The doctor is expected to greet the patient, ask questions about the illness, examine the patient, and offer advice. 
The patient is expected to assume a more passive role, submitting to an examination, accepting the diagnosis, and taking advice rather than dispensing it. Now imagine a scenario in which the doctor arrives dressed in evening attire, and the patient gives the doctor medical counsel or refuses to lie on the examining table, choosing instead to sit in the rotating “doctor’s chair.” The result would be failed expectations about the encounter, as well as an unsuccessful social and medical interaction. As Parsons pointed out, the sick person has an expected “role,” but so too do doctors, nurses, and others who are part of the “sick play.” THE SOCIAL CONSTRUCTION OF ILLNESS Parsons’s model underscores the fact that the sick role is culturally determined. Illnesses that are culturally defined as legitimate, such as cancer and heart disease, entitle those diagnosed with them to adopt the role of sick person. The afflicted are forgiven for missing time at work, spending days in bed, and asking others for consideration and assistance. A seriously ill person who persists in leading a “normal” life is given credit for an extraordinary exertion of effort. Changes in U.S. society’s response to alcoholism highlight the importance of cultural definitions of illness. In the middle of the 20th century, people addicted to alcohol were widely seen as weak and of questionable character. In 1956, however, the American Medical Association (2013) declared alcoholism an illness. With the broad acceptance of this medical model of alcoholism, alcoholics often expect and receive sympathy from family members for their illness, employers may offer programs to help them fight the disease, and the government funds research in an effort to combat the problem. While there also exists a disease model of drug addiction (Le Moal & Koob, 2007), someone addicted to illegal drugs is more likely than an alcoholic to be denied the sick role. 
Cocaine, heroin, and methamphetamine addicts, for example, face the possibility of being sent to prison if they are found in possession of the drugs, and they may or may not be referred for treatment of their addiction. In at least 19 U.S. states, women who use illicit drugs during pregnancy are subject to civil or even criminal charges. In 2014, Tennessee passed a law that permits prosecutors to charge a woman with criminal assault if she uses narcotics while she is pregnant. The first new mother was arrested and charged under this law in April 2014 (McDonough, 2014). While there are clear reasons to be concerned about the welfare of infants born to addicted mothers, it is less clear that there are greater benefits to criminally charging new mothers and separating them from their children than to supporting their recovery in treatment programs. p.411 INEQUALITY MATTERS FEMINIST STANDPOINT THEORY AND THE CONSTRUCTION OF “FEMALE” ILLS © Paris Pierce / Alamy In the 19th century, female hysteria was a commonly diagnosed “ailment” among women. Today it is not recognized as a medical category at all. In the 19th century, women’s ills were almost exclusively labeled by male physicians. Can you see a connection between these facts? Joan Jacobs Brumberg writes in The Body Project: An Intimate History of American Girls (1997): According to Victorian medicine, the ovaries—not the brain—were the most important organ in a woman’s body. The most persuasive spokesperson for this point of view was Dr.
Edward Clarke, a highly regarded professor at Harvard Medical School, whose popular book Sex in Education; Or, A Fair Chance for the Girls (1873) was a powerful statement of the ideology of “ovarian determinism.” In a series of case studies drawn from his clinical practice, Clarke described adolescent women whose menstrual cycles, reproductive capacity, and general health were all ruined, in his opinion, by inattention to their special monthly demands…. Clarke argued against higher education because he believed women’s bodies were more complicated than men’s; this difference meant that young girls needed time and ease to develop, free from the drain of intellectual activity. (pp. 7–8) Medical facts and information are powerful and, because they are cloaked in the credibility of hard science, come to be seen in society as neutral, universal, and true. Feminist standpoint theorists such as Dorothy Smith (1987, 2005) suggest, however, that because “facts,” including medical knowledge, have, until quite recently, been produced almost exclusively by men, they reflect a “male standpoint” on the world. Standpoint theorists argue that women’s standpoints may well be different from men’s, and that “facts” produced from just one standpoint cannot be seen as neutral or universal. A key part of standpoint theory is the analysis of the power that lies in the production of knowledge, and it asks, “Who has the power to produce ‘facts’?” THINK IT THROUGH Does the creation of the medical “knowledge” described above about female education and ovulation suggest that men’s and women’s research and conclusions may be conditioned by the researchers’ positions in society? How likely is it that a female physician or researcher would have come to the scientific conclusion that education interferes with women’s menses or reproductive capacity?
Some drug addiction is widely understood as “illness” while some is labeled as “deviance,” transforming the status of the individual who carries the label (Goffman, 1963b). What explains the difference? Do you think these differing definitions are justified? p.412 TABLE 16.1 Rate of Gun Deaths for Selected Countries, 2011 SOURCE: Alpers, P., & Wilson, M. (2012). Guns in the United States: Facts, Figures and Firearm Law. Sydney School of Public Health, The University of Sydney. HEALTH CARE IN THE UNITED STATES Health care can be defined as all those activities intended to sustain, promote, and enhance health. An adequate health care system includes more than the provision of medical services for those who need them—it also encompasses policies that minimize violence and the chance of accidents, whether on the highways, at work, or at home; policies that promote a clean, nontoxic environment; ecological protection; and the availability of clean water, fresh air, and sanitary living conditions. HEALTH AND PUBLIC SAFETY ISSUES By the standards noted above, few societies come close to providing excellent health care for their citizens. Some, however, do much better than others. The record of the United States in this regard is mixed. On one hand, the U.S. government spends vast sums of money in its efforts to construct safe highways, provide clean drinking water, and eliminate or reduce air and ground pollution. Laws are in place to regulate working conditions with the aim of promoting healthy and safe workplace environments: The federal Occupational Safety and Health Administration is responsible for enforcing stringent regulations intended to guard the lives and health of U.S. workers. Local health inspectors visit the premises of restaurants and grocery stores to check that food is handled in a sanitary manner, and agricultural inspectors check the quality of U.S. and imported food products.
States require drivers to use seat belts, motorcyclists to wear helmets, and children to be strapped into car seats, all of which have been shown to reduce injuries and fatalities in road accidents. While these efforts do not guarantee the safety of life, work, food, or transport, they contribute to public safety in important ways. On the other hand, compared to most other modern countries, the United States is more violent, a factor that compromises safety, in particular for some high-risk groups. Gun violence and firearm accidents leading to death are serious problems in the United States (Table 16.1). Children ages 5 to 14 are killed by guns in the United States at a rate 11 times higher than the rates of 22 comparable large, high-income countries. A recent analysis of WHO statistics on gun deaths found that fully 80% of the gun deaths in 23 industrialized countries happened in the United States (Richardson & Hemenway, 2011). Homicide is a leading cause of death among young African American males, and the majority of this violence is perpetrated using guns (Kaiser Family Foundation, 2006; Violence Policy Center, 2010). Studies have also noted that rates of homicide victimization are higher in U.S. states with high rates of gun ownership, as are rates of gun suicides (Miller, Azrael, & Hemenway, 2007). Domestic violence puts thousands of women at risk: At least 85% of victims of domestic violence are women, and an average of three women are murdered by a husband or boyfriend every day in the United States. In 2010, 38% of all female murder victims in the United States were killed by a husband or boyfriend (National Center for Victims of Crime, 2012). The abuse may start young: In one study, one in three adolescent females reported being physically and/or sexually abused by a dating partner (Davis, 2008). 
Additionally, 9% of high school students report purposeful physical abuse by a partner within the past 12 months (National Center for Injury Prevention and Control, 2014). This occurs in spite of the fact that there are myriad laws against abuse, mechanisms for securing restraining orders against would-be attackers, and shelters for battered women. Efforts to protect victims and potential victims of domestic violence may fall short because batterers are often given a pass by those hesitant to interfere and because victims lack resources to leave their abusers or fear reprisals. Different social groups experience different degrees of violence and safety. Black Americans are far more likely than Whites to be victims of homicide, and women are more likely than men to be killed by intimate partners. Why are some groups in society more vulnerable to violence? Is there a link between physical safety and the power a group has (or does not have) in society? What do you think? p.413 TECHNOLOGY & SOCIETY ADDICTION AND THE INTERNET Richard Lewisohn / Contributor/Getty Images How many hours each day do members of your family spend online? What about your friends and you? What are the costs and benefits of our increasing dependence on electronic gadgets and social media? A recent advertisement for a leading telecommunications company opens with an earnest spokesman meandering through a family’s home and declaring, “Today we live online.” He adds that in just a few years, the number of gadgets in our homes will double. Around him, four family members are engaged in their own individual electronic worlds, each interacting with someone or something other than those in his or her immediate environment. The scenario is presented as pleasant, progressive, and unproblematic. Might something be missing from this picture? Scientific studies point to a growing epidemic of technological dependency, even addiction.
A recent Newsweek article on the issue notes, “In less than the span of a single childhood, Americans have merged with their machines, staring at a screen for at least eight hours a day, more time than we spend on any other activity including sleeping” (Dokoupil, 2012a). Some Internet users neglect sleep, family, and health in favor of the virtual world: In one extreme case, a South Korean couple allowed their infant daughter to starve while they were nurturing an online “baby” for hours at a time (BBC, 2010). At least 10 cases have been documented of Internet surfers getting fatal blood clots from prolonged sitting at the computer. While most cases are not so acute in their consequences, the American Psychiatric Association considered including “Internet addiction disorder” in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the foundation of modern mental health practice, as a diagnosis “for further study” (Dokoupil, 2012a). However, when DSM-5 was published in 2013, the disorder was not included. A recent publication by psychiatric researchers in Asia, which has very high rates of heavy Internet use, particularly by gamers, points out, “A functional magnetic resonance imaging (MRI) study found that a cue-induced online gaming urge among individuals with Internet gaming abuse activated brain areas similar to those involved in craving in people with drug addiction” (Yen, Yen, & Ko, 2010). Observers suggest that while questions remain about whether brain changes lead to addictive behavior or heavy Internet use fosters brain changes, it is becoming increasingly clear that technology is linked in some way to problems that include addictive behavior, declining attention spans, increasing impulsiveness, anxiety, and depression. THINK IT THROUGH Is “living online” stigmatized in our society, or is it celebrated? 
If Internet or other technology use is addictive and vast swaths of people, particularly the young, are becoming addicted, how should society respond? p.414 SOCIAL INEQUALITIES IN HEALTH AND MEDICINE By nearly every measure, health follows the social class curve: Poor people are more likely than their better-off counterparts to suffer chronic illnesses and die earlier. Among children, poverty affects health, food security, housing stability, and maltreatment, with the former two factors playing a significant role in the likelihood of developing chronic illnesses and other negative outcomes such as malnutrition, stunted growth, and suppressed immunity (Henry, 2010). Recessions and economic slumps also hurt families, straining their ability to afford quality food and health care. Lower-income people are more likely to live in areas that have high levels of air pollution, which raises their risks of asthma, heart disease, and cancer (Calderón-Garcidueñas & Torres-Jardón, 2012). Many have a greater probability of exposure to violence and the mental and physical health problems that entails. Their work is also more likely to involve physical and health risks than is the work of middle- and upper-class people (Commission to Build a Healthier America, 2009). Figure 16.1 highlights class differences in self-perceptions of health and well-being. The poor often have less healthy diets than do their higher-income counterparts: Inexpensive foods may be highly processed, fatty, and high in sugar. Fresh fruits and vegetables and lean meats may be out of financial reach for those who struggle to make ends meet, and time pressures can limit a working man’s or woman’s opportunities to shop for and prepare healthy foods. Children in poor communities may also lack access to safe places for active outdoor play and exercise.
Another factor that affects the health of the poorer classes is the fact that they are less likely to perceive the symptoms of illness as requiring attention from a physician (Keeley, Wright, & Condit, 2009). FIGURE 16.1 Self-Reported Health Status by Income, 2011 SOURCE: Centers for Disease Control and Prevention. (2012). Table XIII. Crude percent distributions of health status among persons aged 18 years and over, by selected characteristics: United States, 2010. Summary health statistics for U.S. Adults: National Health Interview Survey, 2010. FIGURE 16.2 Life Expectancy in the United States, 2011 SOURCE: Hoyert, D. L., & Xu, J. (2012). Deaths: Preliminary data for 2011. National Vital Statistics Reports, 61(6). Washington, DC: Centers for Disease Control and Prevention. Because race and class closely intersect in the United States, racial minorities, on average, suffer from poorer health than Whites, which is reflected in the discrepancies in life expectancy between the two groups (see Figure 16.2 above). In 2010, the overall life expectancy at birth in the United States was 78.7 years. White women’s life expectancy was higher at 81.3, while Black women’s was lower at 78.2. White men had a life expectancy of 76.6 years, but their Black counterparts had a life expectancy of just 72.1 years (Centers for Disease Control and Prevention [CDC], 2011a). The differences are linked to, among other factors, higher rates of death among Blacks due to heart disease, cancer, diabetes, and homicide. Inequalities, however, start even before birth, since poor mothers are less likely to have access to prenatal care. A report prepared for the Annie E. Casey Foundation suggests that “at any age, and at any income, education or socioeconomic level, an African American mother is more than twice as likely to lose her infant as a white woman” (Shore & Shore, 2009, p. 6). 
These findings are supported by 2008 data (Table 16.2) showing a mortality rate of 5.52 per 1,000 for White babies and a far higher rate of 12.67 deaths per 1,000 live births for babies of Black mothers (Hoyert & Xu, 2012). The medical establishment in the United States has also used poor minorities to advance the frontiers of science. In the infamous Tuskegee study, which ran from the 1930s to the 1970s, Black males who had contracted syphilis were intentionally left untreated so researchers could study the progress of the disease (Brandt, 1983; Washington, 2007). During the Cold War, U.S. government agencies funded and conducted hundreds of research tests on unwitting citizens to assess the effects of radiation and other by-products of war (Budiansky, Goode, & Gest, 1994). These tests were usually conducted on the poor and disproportionately on minorities (Washington, 2007). This chapter’s Global Issues essay further explores the issue of medical testing on human beings and the questions that inevitably arise about who benefits from such activities and who loses. p.415 GLOBAL ISSUES RICH COUNTRIES AND POOR PATIENTS The Washington Post / Contributor/Getty Images Controls on drug testing in many countries of the developing world are less stringent than in countries such as the United States and Canada. Clinical drug trials run in countries such as Nigeria (shown here) have led to criticism of pharmaceutical company practices. Who benefits from disease and poverty in developing countries? Your initial response is likely to be a decisive “nobody!” But considered from a conflict-oriented perspective, the question may elicit a different response. As noted in this chapter, U.S.
medical science “benefited” from the diseased bodies of Black men in the Tuskegee study, as well as from the subjects’ relative powerlessness and lack of knowledge about what was happening to them in the study (Washington, 2007). Recall that sociologist Herbert Gans (1972) asserted that the nonpoor benefit from having a class of poor people (see Chapter 7). He was not arguing in favor of poverty—he meant that because poverty is positively functional for the nonpoor, its elimination could be costly for more economically well-off and powerful groups. Gans’s work offers us a way of understanding why poverty exists and persists, even in a wealthy country like the United States. We can construct a similar conflict-oriented argument around global poverty and disease. Who benefits from global inequality and poverty—for instance, from the poor health that often accompanies economic marginality? In terms of health and medicine, the existence of poor, undereducated populations in developing states benefits the West by offering pharmaceutical companies “guinea pigs” on whom to test new medicines where fewer restrictions on testing with human subjects are in force. Examples were cited by a Washington Post investigation in 2000: “An unapproved antibiotic was tested by a major pharmaceutical company on sick children during a meningitis epidemic in Nigeria. The country’s lax regulatory oversight, the sense among some doctors that they could not object to experiment conditions for political or economic reasons, the dearth of alternative health care options, combined with the desire of the company to rapidly prepare for market a potential ‘blockbuster’ drug underpinned a situation in which disease victims were treated as test subjects rather than patients” (quoted in Eglitis, 2010, p. 203). 
A 2012 report on India noted that in one hospital that serves India’s lowest societal caste, the Dalits, pharmaceutical trials conducted by British and German companies had resulted in injuries and deaths. Some of the hospital patients or their families were illiterate and signed consent forms they could not understand. Others claimed they were never asked for consent. “Over the past seven years, some 73 clinical trials on 3,300 patients—1,833 of whom were children—have taken place at Indore’s Maharaja Yeshwantrao Hospital. Dozens of patients have died during the trials, however no compensation has been paid to the families left behind” (Lloyd-Roberts, 2012). The poor patients in the trials were, according to the report, hesitant to question what was happening to them. Many felt grateful that they were getting access to drugs they would not be able to afford themselves. Few understood that some of those drugs were untested. According to the report, in the 7 years leading up to 2012, almost 2,000 drug trials had taken place across the country.

THINK IT THROUGH

Who benefits from global poverty—and who loses? Is it possible for Western companies to conduct drug safety trials in poor developing countries in a way that benefits the companies, the patient participants, and Western consumers?

TABLE 16.2 Infant Mortality and Birth Weight per 1,000 Live Births Among Black and White Infants in the United States, 2008
SOURCE: Centers for Disease Control and Prevention. (2012). Infant Mortality Statistics from the 2008 Period Linked Birth/Infant Death Data Set. National Vital Statistics Reports, 60(5).

ACCESS TO HEALTH CARE

One important reason the poor—as well as some families in the working and middle classes—in the United States are less likely to experience good health is that a notable proportion are unable to access regular care for prevention and treatment of disease.
In the fall of 2010, 3 years after the start of the Great Recession and shortly before the Patient Protection and Affordable Care Act (which we discuss below) was signed into law, the U.S. Census Bureau reported that more than 16% of people in the United States were without health insurance, the highest figure in 23 years (Kaiser Family Foundation, 2010a). Key sources of this decline in coverage were the economic crisis and the associated rise in unemployment; most U.S. adults get health insurance coverage from their employers. Workplace coverage is variable, however, and ranges from full benefits requiring little or no financial contribution from the employee to partial benefits paid for through shared employer and employee contributions. Cost-saving measures in U.S. workplaces in recent decades have shifted a greater share of the cost of these benefits from employers to employees. As the economic picture has improved since 2010, many people have gone back to work, but millions of employees are still uninsured or underinsured. This problem has been worsened by a changing labor market and economic structure that favors part-time or contractual employment, with fewer benefits such as the employer-based health insurance coverage that has traditionally applied to full-time employees. A substantial number of Americans have access to health care through government-funded programs such as Medicare, an elder insurance program that covers most of those ages 65 and over (about 41.5 million in 2012) and some younger residents with disabilities (about 9.4 million in 2012) (Centers for Medicare and Medicaid Services, 2012). Medicaid, a shared federal and state insurance program that provides coverage for many poor adults and children, reached an enrollment of 54.1 million in June 2012 (Kaiser Family Foundation, 2013b). Medicare was created in 1965 to serve as a federal health insurance program for people age 65 and older, regardless of income or medical history.
It covers very diverse populations, since most people over the age of 65 and those with permanent disabilities are entitled to coverage (Kaiser Family Foundation, 2014). Medicaid, on the other hand, is the country’s major health insurance program designed to assist low-income people of all ages with their health care needs, but it is not available to everyone who needs long-term services; to be eligible for Medicaid, individuals must meet stringent financial qualifications (Kaiser Family Foundation, 2012a). A contemporary issue related to Medicare is the fact that members of the post–World War II baby-boom generation (those born between about 1946 and 1964) are now entering the 65+-year-old cohort. As the “boomers” reach eligibility age, their massive numbers will have an effect on the nation’s need for health care dollars and resources. The U.S. Census Bureau reports that between 2000 and 2010, the 65+ age cohort grew at a faster rate than the total population; the total population of the United States increased by less than 10%, while the population of those 65 and older grew by more than 15% (Werner, 2011). The increase in eligible Medicare recipients, medical advancements that extend the lives of the elderly, and a relatively smaller tax base are the ingredients of a debate over care and government spending that will grow more acute in the years to come (Antos, 2011). The effect of the Obama administration’s health care legislation on Medicare is not yet fully clear. At the opposite end of the age spectrum, the State Children’s Health Insurance Program (SCHIP) was created in the late 1990s in an aggressive effort to cover more uninsured children. Because individual states administer SCHIP in partnership with the federal government, state governments largely dictate its implementation, so the comprehensiveness of coverage and eligibility standards vary from state to state. While the care that the poorest U.S. 
adults can access through Medicaid is limited, it is often the working poor and other low-income employees who are shut out of insurance coverage altogether. They are most likely to be working in economic sectors such as the service industry (fast-food restaurants, retail establishments, and the like) that provide few or no insurance benefits to employees, while earning too little to afford self-coverage but too much to qualify for government health coverage. The fact, as noted above, that low-income people are more likely to have health problems has also affected their ability to get insurance coverage in the past, because insurers were allowed to exclude those with “preexisting conditions” such as diabetes, high blood pressure, and other illnesses and disabilities.

The Patient Protection and Affordable Care Act (known simply as the Affordable Care Act, or ACA), signed into law by President Barack Obama in 2010, endeavors to expand insurance coverage to most people in the United States at a time when the numbers of the uninsured had been rising. The goal of this health care overhaul is to bring more people into the insurance fold by making coverage more broadly accessible and affordable, in part by requiring that everyone buy insurance and that private insurance companies offer coverage under new terms that extend benefits to those who may have had difficulty purchasing insurance in the past, such as those with preexisting conditions. While the purchasing mandate (or “individual mandate”) went into effect in 2014, other parts of the ACA were already in place when the U.S. Supreme Court ruled on the constitutionality of the mandate in June 2012. Among these provisions was the requirement that insurance companies permit young people up to age 26 to remain on their parents’ health insurance policies if they do not have other coverage. Since its passage, the ACA has been the source of heated political debate.
President Obama and other supporters of the act argue that the new law is expanding insurance coverage to a broader swath of people, many of whom had been locked out of the insurance market due to preexisting conditions or unaffordability of individual insurance policies. They suggest that the law supports this expansion of coverage through the operation of new state-level insurance markets (or exchanges) that keep prices down by enabling purchasers to buy insurance as part of a group. Those with low incomes are eligible for federal subsidies to support their insurance purchases. Supporters also note that as the population of uninsured people declines, so will taxpayer-borne costs, including those incurred when the uninsured seek medical care at emergency rooms, which are obligated by law to treat everyone regardless of their ability to pay.

FIGURE 16.3 Per Capita Health Care Spending for Selected Developed Countries, 2010
SOURCE: Kaiser Family Foundation (2013) Health Expenditure Per Capita.

Opponents argue that the U.S. government is overstepping the limits of its powers in requiring that people purchase health insurance or pay a penalty tax for failing to do so. Many see the individual mandate as an infringement on their freedom to choose whether or not to purchase insurance. There have also been attempts to portray the ACA as a path to “socialized medicine,” though most people will still receive their insurance through private insurance companies rather than through the government. Both supporters and opponents of the ACA have expressed concerns about the costs of the U.S. health care system. Indeed, the United States spends more per capita on health care than most economically developed states (Figure 16.3), though many of its health indicators compare poorly to those of its peers. Opponents of health care reform argue that the ACA will drive up costs by, for instance, requiring insurers to cover those who have costly health conditions.
Supporters of the law point out that having a large pool of uninsured contributes to higher costs when they fail to get preventive care and must resort to far more costly emergency room care. The cost trajectory of health care is not yet clear. Certainly, an aging U.S. population will likely need more, not fewer, health care services in the future. The effect of the ACA on both U.S. health indicators and access to care will become clearer with the passage of time.

CAN TECHNOLOGY EXPAND HEALTH CARE ACCESS?

Would you like to have a “doctor on demand”? Some technological innovations are bringing health care into people’s homes, opening the door for greater access to medical care as well as a potential reduction in unneeded doctor’s office visits. As a recent Time magazine article on the technological expansion of access to medical care points out, such technology was “previously reserved mostly for luxe private practices or rural communities that lack access to health care” (Sifferlin, 2014). Today, it may be coming to an app near you. New technological innovations like Doctor on Demand, Health Tap, and AskMD offer a range of services, from the opportunity to ask physicians medical questions by text and receive free responses to online appointments that require payment for consultations. Beneficiaries of these technologies include both patients, who have new avenues to reach medical professionals, and doctors, as online consultations can help them build their public profiles and earn some extra income.

Communications researcher Jean Kilbourne (1999) says female-targeted cigarette ads often contain subtexts about female thinness, using “thin,” “slim,” or “light” in the product name. Ads also imply that smoking can help women lose weight; in the past Lucky cigarettes urged, “Reach for a Lucky instead of a sweet.”

Are there potential pitfalls to the use of these technologies as well? Are there potential losers?
Those patients who have acute or urgent needs are still best served by personal visits to physicians. As well, those who do not own computers or smartphones or cannot pay the fees for online consultations may still be locked out of these opportunities. “Doctor on demand” technology may, however, offer a potential vehicle for bringing medical advice to both advantaged and underserved communities. Can you think of ways that technological innovations like these could be used to address medical needs across the income spectrum?

SOCIOLOGY AND ISSUES OF PUBLIC HEALTH IN THE UNITED STATES

Public health is the science and practice of health protection and maintenance at a community level. Public health officials try to control hazards and habits that may harm the health and well-being of the population. They have long sought to educate the public about the hazards of tobacco use, for example, and to prevent young people from taking up smoking. More recently, they have warned that obesity is becoming an ever more serious problem for young and old alike. The issue of teen pregnancy has also garnered attention, though rates of pregnancy among teenagers have fluctuated.

SMOKING

One of the largest and most profitable industries in the United States is the manufacture and sale of tobacco products, estimated to be a $47.1 billion industry. At the same time, tobacco is the number one cause of premature death in the United States, claiming more than 443,000 lives each year and surpassing the toll from alcohol, homicide, suicide, drugs, auto accidents, and AIDS combined (CDC, 2011c). Even nonsmokers are at risk. The CDC (2011c) estimates that secondhand smoke exposes 88 million nonsmokers to measurable levels of toxic chemicals associated with cigarette smoke. About 90% of men and 80% of women who die of lung cancer are smokers (U.S. Department of Health and Human Services, 2004).
According to the CDC (2014b), smoking-related lung cancer deaths averaged 74,300 per year among men and 53,400 per year among women from 2005 to 2009. While the smoking rate in the United States fell between 2000 and 2005, it has since stalled at about 21%, a figure that translates to more than 45 million smokers (CDC, 2011c). While statistics on morbidity, meaning the rate of illness, and mortality, the rate of death, highlight important medical aspects of cigarette smoking, we can also use sociological analysis to illuminate this public health issue. Why do so many people continue to smoke and so many young people take up smoking despite the evidence of its ill effects? Why do more men than women smoke? Why are young women the fastest-growing population of new smokers? (See Figure 16.4.) Why does the government not regulate the production and sale of such an addictive and dangerous product more stringently? Sociology offers us some insight into these questions. Among other things, cigarette advertising both constructs and reinforces gender stereotypes (Kilbourne, 1999). Male smoking has been associated with independence, ruggedness, and machismo (think of the Marlboro Man, an iconic figure in U.S. advertising). On the other hand, female smoking has been associated with images that are elegant, chic, and playful or carefree. A symbolic interactionist might highlight the way in which a cigarette is more than just tobacco rolled in paper. To a young teen, it might be a symbol of maturity; to an older teen, it might represent being cool or rebellious. In what other ways do cigarettes function as symbols of self in our society?

FIGURE 16.4 Cigarette Smoking in the United States, 2012
SOURCE: Centers for Disease Control and Prevention. (2012). Adult Cigarette Smoking in the United States: Current Estimate. Smoking & Tobacco Use.
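Morbidity and mortality, as defined above, are expressed as rates: events per 1,000 (or per 100,000) people at risk. A minimal sketch of that arithmetic, using the infant mortality rates reported earlier in the chapter (Table 16.2); the birth counts below are hypothetical, chosen only so the rates match the published figures:

```python
def rate_per_1000(events, population):
    """Deaths (or cases) per 1,000 people at risk."""
    return events / population * 1000

# Hypothetical birth counts, scaled so the rates equal Table 16.2's
# 2008 figures (5.52 and 12.67 deaths per 1,000 live births).
white_rate = rate_per_1000(5_520, 1_000_000)
black_rate = rate_per_1000(12_670, 1_000_000)

# Health disparities are commonly summarized as a rate ratio.
disparity = black_rate / white_rate
print(f"Black/White infant mortality rate ratio: {disparity:.2f}")  # prints 2.30
```

Under these figures, Black infants died at roughly 2.3 times the rate of White infants, which is the kind of gap the sociological factors discussed in this section seek to explain.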
TWO THEORETICAL PERSPECTIVES ON PUBLIC HEALTH: THE CASE OF CIGARETTES

The conflict perspective offers some insight into why cigarettes are not regulated more stringently despite their addictive properties: Who benefits from the existence of a large population of smokers? Who loses? Smoking may give pleasure to smokers, but its benefits are largely outweighed by its consequences, which include poorer health and a thinner wallet. Smoking does, however, bring profit to the tobacco companies, which have tenaciously defended their product for decades. Tobacco companies are generous contributors to candidates for political office. They are advantaged by wealth and access to the halls of government, where their voices are heard. While the smoker gets a mixed bag of benefits (pleasure) and consequences (addiction, disease, financial cost) from smoking, cigarette companies clearly benefit from purchases of their goods and the recruitment of new smokers—men and boys, women and girls—to replace those who die or quit. Is the easy availability of cigarettes also functional? A functionalist might suggest that, in fact, it is positively functional in its creation of jobs, which range from tobacco farming to marketing and lobbying for the tobacco cause, and in its contribution to rural economies that depend on income from farming tobacco. The highly coveted plant has been subject to human cultivation and use for hundreds of years. Consider its historical functions: Tobacco became a major influence in the development of the economy of early America. During the Revolutionary War, profits from the tobacco trade helped to finance the Revolution by serving as collateral for loans provided to Americans by France (Randall, 1999). In contemporary times, according to the Center for Responsive Politics (2013), the tobacco lobby employed about 133 lobbyists and spent more than $17 million on behalf of 25 clients in 2011 alone.
The modern tobacco industry is a multibillion-dollar enterprise with a strong political influence despite increased awareness of the deleterious effects that tobacco products have on the human body. Viewing cigarette smoking through a theoretical lens lets us see it as more than just an individual choice or action. Rather, cigarettes and smoking are social symbols and phenomena with profound effects on public health, as well as sources of profit for some and pain for others.

OBESITY

The CDC identifies obesity in the United States as a national health problem: It is a major cause of mortality, second only to smoking. According to the CDC (2012d) and the Kaiser Family Foundation (2011), more than 34% of adults in the United States between the ages of 20 and 74 are obese, and more than 63% are overweight (this statistic includes those who are classified as obese). The rates of being overweight/obese vary by gender and ethnicity, as we see in Figure 16.5. The rate of obesity in American children has risen even faster and is twice what it was in the late 1970s. Children who are much bigger than their peers sometimes experience social ostracism. Further, they may suffer serious health effects. Very obese children have been observed to suffer health problems once believed to affect only older adults, including heart attacks and type 2 diabetes (CDC, 2012e). With the popularity of sedentary activities such as video games, participation in social media, and television viewing, society will likely see this problem increase.

FIGURE 16.5 Rates of Obesity and Overweight in the United States by Race and Ethnicity, 2012
SOURCE: The Kaiser Family Foundation. (2011). Overweight and Obesity Rates for Adults by Race/Ethnicity, 2011. Statehealthfacts.org

PRIVATE LIVES, PUBLIC ISSUES
POVERTY, MALNUTRITION, AND OBESITY

Obesity is not exclusively about overconsumption of food.
Factors including the higher cost of healthy meals and the lack of access to recreational spaces also play a key role in understanding why many Americans, particularly the poor, struggle with obesity. Obesity and being overweight are, in important ways, what many of us would see as “private troubles,” reflecting choices individuals make about nutrition, exercise, and health. Clearly, however, these public health problems affect millions in the United States. No less important to sociologists is the fact that the risk of falling into the categories of obese or overweight is not evenly distributed across social groups in this country. Think for a moment about the problems of hunger and malnutrition. What kinds of images enter your mind? Are you picturing the heartrending scenes of starvation in the world’s least developed countries that are brought to us by the media? Yet hunger and malnutrition are also present in our own country, though their manifestation is often quite different. Poor access to nutritious food in the United States is more likely to be manifested in obesity than in emaciation. Consider, for example, that some of the country’s poorest states have the highest obesity rates. In Kentucky 30.4% of adults are obese; in Louisiana, 33.4%; and in Mississippi, 34.9% (CDC, 2012d). The demographic groups most likely to be poor are also those most at risk of obesity; fully half of African American women are obese, as are 45% of Hispanic women (CDC, 2012d). Those without a high school education are more likely to be obese (32.9%) than those who complete high school (over 29%) or college (nearly 21%; Ogden, Lamb, Carroll, & Flegal, 2010). According to the Handbook on Obesity, “In heterogeneous and affluent societies like the United States, there is a strong inverse correlation of social class and obesity” (quoted in Critser, 2003, p. 117). Can we use the sociological imagination to examine the relationship between poverty and obesity?
What sociological factors are pertinent for understanding this phenomenon? Though individual factors such as bone structure, genes, appetite, and personal choices have an important influence, obesity is also a product of social environment and socioeconomic conditions. Those who are poor are more likely to consume less nutritious food for a host of reasons. Nutritionally poor food is generally less expensive than high-quality goods, and large grocery stores with wide selections and competitive pricing are disproportionately located in suburbs, while convenience stores plying overpriced, processed foods serve inner-city communities (Critser, 2003), though some cities, including Washington, D.C., have increased the incentives they offer for big grocery stores to locate in poor neighborhoods. The poor are also more likely than those in other socioeconomic groups to have limited access to recreational facilities such as safe playgrounds and sports fields that offer opportunities for exercise. Facing budgetary pressures, some schools in poor neighborhoods have cut important physical education programs. Good nutrition, healthy lifestyles, and healthy weight are privileges of class in ways we may not have imagined.

THINK IT THROUGH

What kinds of social programs or policies might a sociologist design to address the prevalence of overweight and obesity in poor communities?

Among the factors to which the rise in size has been attributed is that families in the United States eat more meals outside the home than in the past, and many of these meals are consumed at fast-food establishments. As well, the portions diners are offered in restaurants are growing because many ingredients have become very inexpensive. In Fast Food Nation, Eric Schlosser (2001) notes that “commodity prices have fallen so low that the fast food industry has greatly increased its portion sizes, without reducing profits, in order to attract customers” (p.
243), a point supported by mathematician and physicist Carson C. Chow, who argues that the obesity epidemic in the United States is an outcome of the overproduction of food since the 1970s (cited in Dreifus, 2012). Federal subsidies for food production favor meat and dairy, which soak up almost three-quarters of these funds. Just over 10% support the production of sugar, oils, starches, and alcohol, and less than a third of 1% support the growing of vegetables and fruits. These data show that the U.S. Congress has opted to subsidize the production of foods that contribute to obesity rather than those, including fruits and vegetables, recommended in the government’s own nutrition guidelines (Rampell, 2010). Physician and scientist Deborah A. Cohen (2014) argues in her book A Big Fat Crisis that “obesity is primarily the result of exposure to an obesogenic environment” (p. 191), and she points to three key components of that environment. First, she notes (consistent with Chow) that factors like agricultural advances have led to an abundance of cheap food. Second, she suggests that the availability of food, particularly junk food, has grown: More than 41% of retail stores, including hardware stores, furniture stores, and drugstores, offer food. Third, food advertising has vastly expanded. Cohen notes that grocery stores today earn more from companies paying for prime display locations than from consumers buying groceries. Damage to health is not the only harmful effect of obesity. In 2010, a study found that, on average, the annual individual cost of being obese in the United States was $4,879 for women and $2,646 for men. Obese women were also more financially disadvantaged than were obese men and suffered 38% more job-related costs, such as absenteeism (Dor, Ferguson, Langwith, & Tan, 2010). Clearly, obesity is a complex phenomenon driven by a variety of factors—biological, genetic, environmental, social, and economic. 
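The subsidy shares reported above imply a starkly lopsided dollar split. A small sketch makes the proportions concrete; the percentage shares are the chapter's (approximated from its wording), while the total subsidy amount is a hypothetical figure invented purely for illustration:

```python
# Total is a hypothetical stand-in, NOT a figure from the text.
total_subsidies = 20_000_000_000  # $20 billion (assumed)

# Shares approximated from the chapter's wording.
shares = {
    "meat and dairy": 0.735,                   # "almost three-quarters"
    "sugar, oils, starches, alcohol": 0.107,   # "just over 10%"
    "vegetables and fruits": 0.003,            # "less than a third of 1%"
}

for category, share in shares.items():
    dollars = total_subsidies * share
    print(f"{category}: ${dollars / 1e9:.2f} billion")

# Under these shares, meat and dairy receive roughly 245 times the
# support that fruits and vegetables do (0.735 / 0.003).
```

Whatever the true total, the ratio between categories is what matters sociologically: the foods the government's own guidelines recommend receive a fraction of a percent of the support.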
As you will see below, poverty is also an important factor in the prevalence of obesity. From a sociological perspective, we consider the connection between the personal trouble and the public issue of obesity and overweight. That is, if one individual or a handful in a community are obese, that may be a personal trouble, attributable to genetics, illness, eating habits, or any other set of factors. However, when more than one-third of the U.S. population is obese (Ogden, Carroll, Kit, & Flegal, 2012), including majorities in some communities, this is a public issue and one that, to paraphrase C. Wright Mills, we may not hope to explain by focusing just on individual cases. Rather, we need to seek out its sociological roots. Consider how this issue might look through the conflict lens. Who benefits, and who loses? While “losers” in this instance are surely those whose health is compromised by excessive weight, there are also macro-level effects such as lost productivity when employees miss work due to obesity-linked illnesses such as diabetes. In fact, the CDC has estimated that the medical care costs associated with obesity in the United States total about $147 billion annually (Finkelstein, Trogdon, Cohen, & Dietz, 2009). Who benefits? The food industry, particularly fast-food companies, arguably benefits when consumers prioritize quantity over quality. By offering bigger portions (which cost only a bit more to provide), restaurants draw bigger crowds and bigger profits. The $60 billion weight-loss industry (Clark, 2011; Marketdata Enterprises, 2011) also benefits, since the rise in obesity exists in the presence of widespread societal obsession with thinness. Often, the same companies that market high-fat, unhealthy foods also peddle “lite” versions (Lemonnier, 2008).

TEEN PREGNANCY

About 750,000 teenage girls become pregnant in the United States each year, and an estimated 444,690 give birth (Kost & Henshaw, 2012).
Most of the young mothers (about 81%) are unmarried when they give birth (Henshaw, 2002; Turner, 2003). Figure 16.6 shows changes in the birthrate among teens across recent decades.

FIGURE 16.6 Birthrates for Teens of Different Races Ages 15–19 in the United States, 1993–2012
SOURCE: Data from Martin, J.A., Hamilton, B.E., Osterman, M.J.K., Curtin, S.C., & Mathews, T.J. (2013). “Births: Final Data for 2012.” National Vital Statistics Reports, Vol. 62, No.

Early parenthood is a leading reason that teen women drop out of school. About a third cite this reason for leaving high school. Staying in school, however, is key to job prospects that enable families to stay out of poverty. What might schools do to encourage young mothers to graduate?

Pregnancy and births among teenagers are public health issues because young women who conceive or give birth before their bodies are fully developed put themselves and their babies at risk. Compared to older mothers, teen mothers have worse health, more pregnancy complications, and more stillborn, low-weight, or medically fragile infants. But teen pregnancy and birth are of more than medical concern. They are also associated with another public health problem: poverty. Giving birth early and outside marriage compounds the risk that young women and their children will become or remain poor; about 34% of all female-headed households in the United States live below the poverty line, compared with about 7% of married-couple families (DeNavas-Walt, Proctor, & Smith, 2012). Parenthood is a leading cause of dropping out of school among teenage women; teen mothers are at greater risk than their peers of not completing high school—only about 40% of women who become mothers before the age of 18 earn a high school diploma, and fewer than 2% earn a college degree by age 30 (National Campaign to Prevent Teen Pregnancy, 2010).
The relationship between teen pregnancy and birth and poverty is complicated. On one hand, as noted, early and unwed motherhood compounds the risk of poverty. On the other hand, poverty is itself a risk factor for teenage motherhood: An estimated 80% of teen mothers grew up in low-income households (Shore & Shore, 2009), and poor teens have a higher incidence of early sexual activity, pregnancy, and birth than their better-off peers (National Campaign to Prevent Teen Pregnancy, 2010). In her book Dubious Conceptions: The Politics of Teenage Pregnancy (1996), sociologist Kristin Luker suggests that poverty is a cause as well as a consequence of teen pregnancy and birth. She argues that poor young women’s probability of early motherhood is powerfully affected by “disadvantage and discouragement” (p. 111). By disadvantage she means the social effects of poverty, which reduce opportunities for a solid education and the realization of professional aspirations. Consider, for instance, a high school senior from an affluent household: She may spend her 18th year contemplating whether to begin college immediately or take a year off for travel abroad. A young woman who hails from a poor household in rural Louisiana or the Bronx’s depressed Mott Haven neighborhood may have received an inferior education in her underfunded school, and, having little money, has no hope for college. Travel beyond her own state or even city is unthinkable. Local jobs in the service industry are an option, as is motherhood. Discouragement, according to Luker, is the effect of poverty that may prevent poor young women from exercising agency in confronting obstacles. In an impoverished situation, the opportunity costs of early motherhood—that is, the educational or other opportunities lost—may seem relatively low. Notably, a study by Kathryn Edin and Maria Kefalas (2005) found that many poor young women embrace early motherhood as an honorable and even desirable choice. 
Some of the women the researchers interviewed also saw it as something that “saved” them from trouble with drugs or the law and “matured” them. Most of the women Edin and Kefalas interviewed expressed a desire to marry and embark on a career in the future. At the same time, discouraged by what they perceived as a limited pool of stable partners, whose marriageability was compromised by poor employment prospects and problems such as alcohol and drug use, the women did not put marriage ahead of motherhood, though many retained hopes for marriage at a point when they felt financially independent. In neighborhoods where early motherhood was the norm, many expressed a preference to have their children while young, a preference shared by the young men with whom they had relationships. While few of these young women’s pregnancies were planned, many couples took no steps to avoid pregnancy. Though rates of teen motherhood remain higher in the United States than in many other economically advanced countries, they have declined in some groups. Among other factors, the use of condoms has increased markedly (U.S. Department of Health and Human Services, 2013), perhaps due to a desire to protect against both pregnancy and sexually transmitted infections. There have also been small drops in the numbers of teenagers approving of and engaging in premarital sexual activity, and the rate of births among teenage women has dropped compared to the rate in earlier decades (Ventura & Hamilton, 2011). Teen pregnancies and births are social facts, or phenomena that, as Émile Durkheim put it, we can explain only by using other social facts. That is, to understand sociologically both the rise and the fall of the rates of teen pregnancies and births, we must recognize that these are not just “personal troubles” or individual issues, but that they are fundamentally tied to other economic, social, and cultural issues in society. 
DEVELOPING A SOCIOLOGY OF HIV/AIDS

The case of acquired immunodeficiency syndrome (AIDS) and the virus that causes it, human immunodeficiency virus (HIV), is another example of the importance of understanding the social construction of illness. Perceptions of HIV/AIDS and those who contract HIV have varied across time, depending on who the most visible victims have been. Moreover, the infection—which is a global pandemic—demands a sociological approach because it is closely intertwined with a host of sociological issues, including gender inequality, poverty, violence and conflict, and the pursuit of both medical breakthroughs and profits in a globalizing world. It is estimated that more than 1 million persons in the United States have HIV/AIDS. Of these, about 236,000, or roughly 1 in 5, have not been diagnosed and likely do not know they are infected (CDC, 2011b).

GENDER AND HIV/AIDS

We can better understand the spread of sexually transmitted diseases, including HIV/AIDS, if we examine how these diseases are related to gender and inequality. Globally, the number of women with HIV/AIDS has risen: Fully half of new infections are now diagnosed among women. In some regions, women’s infection rates outpace men’s: In sub-Saharan Africa, of the 22.5 million people living with AIDS, 60% are women (UNAIDS, 2010b). Norms and traditions in many regions reinforce women’s lower status in society. In some traditional communities in Africa, for example, it is socially acceptable—or even desirable—for men to have multiple sexual partners both before and after marriage. In this case, marriage itself becomes a risk factor for women. Many women also still lack accurate knowledge regarding sexually transmitted diseases, a problem made more acute by widespread female illiteracy in poor regions. Women who are uninfected may not know how to protect themselves, and women who are infected may not know how to protect their partners.
FIGURE 16.7 Estimated New HIV Infections in the United States by Subpopulation, 2010. NOTE: MSM stands for “men who have sex with men”; IDU stands for “injecting drug use.” SOURCE: The Kaiser Family Foundation. (2009). Estimated Numbers of Persons Living with an AIDS Diagnosis, All Ages, by Race/Ethnicity, 2009. Statehealthfacts.org.

FIGURE 16.8 HIV/AIDS Prevalence by Race and Ethnicity in the United States, 2009. SOURCE: Centers for Disease Control and Prevention.

Gender stereotypes and vulnerability to HIV/AIDS are also pertinent. In a New York Times Magazine article examining the phenomenon of Black men who present themselves to the outside world as heterosexual but engage in homosexual activity “on the down low,” Benoit Denizet-Lewis (2003) writes about a culture of Black masculinity in which Black male bisexuality and homosexuality are little discussed and little accepted. Hence, Black males who want to have sexual relationships with males are often compelled to put on a facade for their families and society. In the words of one man on the down low, “If you’re white, you can come out as an openly gay skier or actor or whatever. It might hurt you some, but it’s not like if you’re black and gay, because it’s like you’ve let down the whole black community, black women, black history, black pride” (quoted in Denizet-Lewis, 2003). An important consequence is that some men who are having sex with other men are also having sex with women—wives and girlfriends. Notably, CDC (2010a) data show that Blacks made up about half of those found to have HIV in 2008, but only about three in five had ever been tested for HIV. This suggests that many men who are HIV-positive are likely unaware of their status, which puts them, as well as their partners, whether male or female, at risk.

FIGURE 16.9 HIV/AIDS Prevalence Worldwide, 2010. SOURCE: Data from UNAIDS. (2010). HIV Prevalence Map. United Nations Programme on HIV/AIDS Global Report 2010.
POVERTY AND HIV/AIDS

Across the globe, there is a powerful relationship between the risk of HIV/AIDS and poverty. China, for instance, has experienced a rise in new cases in the past decade. A serious epidemic was detected in central China’s Henan Province, where tens of thousands of rural villagers have been infected in the past decade through selling their blood for money under unsafe and unsterile conditions. In China as a whole, it is estimated that at the end of 2005 there were 55,000 commercial blood and plasma donors infected with HIV (“AIDS in China,” 2007). In developing countries, economic insecurity and the lack of gainful employment sometimes drive workers (particularly men) to seek work far from home. For example, migrant workers from surrounding countries toil in the mines of South Africa. Away from their families and communities, some of these men seek out the services of prostitutes, who may be infected (UNAIDS, 2010b). The sex workers themselves are often victims of dire and desperate economic circumstances. Women in the sex trade, some of whom have been trafficked and enslaved, are highly vulnerable to HIV/AIDS. They have little protection from robbery or rape and limited power to negotiate safe sex with paying customers, though some countries, such as Thailand, have sought to empower sex workers to demand condom use (UNAIDS, UNFPA, & UNIFEM, 2004).

Poor states, as well as poor individuals, are vulnerable to the ravages of disease. Consider the cases of many southern African states: HIV prevalence among adults ages 15–49 is about 25% in Botswana, 23% in Lesotho, and 13% in Zimbabwe (Lesotho, 2012; Republic of Botswana, 2012; Zimbabwe, 2012). The high rates of infection and death among young and middle-aged adults also mean that countries are left with diminished workforces. Without productive citizens, the state of a country’s economy declines, further reducing the resources that might be put into HIV/AIDS prevention or treatment.
Even those who are training the next generation of workers have been hard-hit by the disease: In Zambia, the number of teachers dying of AIDS outpaces the number graduating as teachers (Oyoo, 2003). While HIV/AIDS is far from limited to poor victims or poor countries, poverty clearly increases the risk of disease at both the individual and the national level.

VIOLENCE AND HIV/AIDS

Women’s risk of contracting HIV/AIDS is increased by situations of domestic violence. Data gathered by the United Nations suggest that up to half the women in the world may experience violence from a domestic partner at some point; this includes forced sex, which is not likely to take place with a condom (UNAIDS et al., 2004). The rape of men by other males, not uncommon in prison settings, can also be implicated in the spread of the infection. In many countries, the incidence of HIV/AIDS in prisons is significantly higher than the incidence of the disease in the noninstitutionalized population. Part of this phenomenon is linked to the sharing of needles among drug-injecting prisoners or to consensual male sexual activity, but part is also linked to the underreported sexual violence behind bars.

© Marco Baroncini/Corbis. Much progress has been made in developing medicine that helps keep HIV/AIDS under control and maintains one’s quality of life, and even greater advances have been made in prevention, awareness, and education on how to avoid contracting HIV. Nevertheless, it continues to be a global epidemic that claims millions of lives. In this photo, a woman infected with HIV who has been ostracized from her village sits outside her small hut.

HIV/AIDS is a medical issue. It is also a sociological issue. Vulnerability to infection is compounded by factors such as gender stereotypes and poverty.
At a time when hope of new treatments and prevention strategies has materialized but the pandemic continues to ravage communities and countries, a sociological perspective can help us to identify the social roots of HIV/AIDS and to seek the most fruitful paths for combating its spread.

GLOBAL ISSUES IN HEALTH AND MEDICINE

Ever since human beings first began to migrate from their African origins, taking their illnesses with them, the spread of disease has known no global boundaries. Plagues and epidemics have traveled from populations that have developed some degree of biological immunity to others that have not. During the 14th century, the bubonic plague, known as the Black Death, arrived in Europe by way of Asia and eliminated a third of the European population in only 20 years. The European conquerors of the Americas brought with them syphilis and other diseases that virtually eliminated the indigenous population in many areas (Thornton, 1987). U.S. soldiers returning from Europe at the end of World War I carried previously unknown influenza strains that killed an estimated 20 million people worldwide. Today, tuberculosis, once all but eliminated from the industrialized nations, is making a comeback, with new treatment-resistant strains brought by immigrants from poor nations. In 2010, the United States publicly apologized to the nation of Guatemala when it was discovered that in the 1940s, U.S. government researchers deliberately infected hundreds of Guatemalan mental patients with gonorrhea and syphilis for observational purposes and encouraged them to transfer their infections to others (Bazell, 2010). Such unethical experiments endanger larger populations by introducing diseases that can erupt in outbreaks.
Overall, however, the 20th century witnessed a striking global triumph over many diseases, as sanitation, clean water, sewage systems, knowledge about the importance of diet, and other public health and medical practices and treatments spread throughout the world. For example, in only a few years, the WHO’s plan for “Health for All by the Year 2000” succeeded in immunizing half the world’s children against measles, polio, and four other diseases (Steinbrook, 1988). Successes have continued into the 21st century. In 2004, the Bill and Melinda Gates Foundation, working with the Global Alliance for Vaccines and Immunization, was able to vaccinate an estimated 78% of children in the world against diphtheria, tetanus, and whooping cough (Bill and Melinda Gates Foundation, 2006). Today, the Bill and Melinda Gates Foundation (2013) reports that it is 99% of the way toward eradicating polio and that a new vaccine will save the lives of an additional 400,000 children per year on a global scale. Successes such as these have produced a sharp decline in death rates in most of the world’s countries (Andre et al., 2008).

The AIDS epidemic is the most recent example of the global spread of a fatal disease. What makes it unique is the rapidity with which it spread around the world, to industrialized and less developed nations alike. HIV/AIDS is also a global issue in terms of treatment and prevention. Globalization is both functional and dysfunctional for real and potential victims of the infection. On one hand, HIV/AIDS was global in its path of spread, and it appears likely that its defeat will also be global, as it was for other once-deadly and widespread diseases such as smallpox, polio, and malaria (Steinbrook, 1988). There is a concerted global effort to combat the disease. Doctors across the globe work together to share information and knowledge on HIV/AIDS and their efforts to stop it.
International organizations including the United Nations are also instrumental in leading information and empowerment campaigns. On the other hand, globalization has thrown obstacles in the path of those who seek to expand the reach of therapeutic drugs that lengthen health and life for those with the infection. The global market in HIV/AIDS treatment has been dominated by Western pharmaceutical companies, most of which have jealously guarded their patent rights on the drug therapies shown to be most effective for treatment. Their fierce desire to protect patents and profits has made it more difficult for drug makers in developing states to manufacture less expensive generic versions that could save more lives in poor countries.

Together with HIV/AIDS, one of the most threatening diseases in developing countries is malaria: According to some estimates, malaria is a threat to no less than half the global population. It kills more than 800,000 people every year. The most vulnerable populations are children and pregnant women in Africa, which has the most malaria deaths (CDC, 2012b).

The toll taken by malaria is felt at the individual, community, and national levels. For individual families, malaria is costly in terms of drugs, travel to clinics, lost time at work or school, and burial, among other expenses. For governments, malaria means the potential loss of tourism and productive members of society and the cost of public health interventions, including treatments and mosquito nets, which many individuals are unable to pay for themselves (CDC, 2012b). Malaria, together with HIV/AIDS and tuberculosis, has attracted a substantial proportion of available funding from international and national donors and governments seeking to improve the health of populations in the developing world.
Critics of international health spending priorities point to a growing threat in the developing world that has not received substantial funding or attention: chronic disease. Heart disease, stroke, and cancer have long been chronic maladies associated with the habits of the populations of developed countries, such as overeating, lack of exercise, and smoking. One scientist notes that while 80% of global deaths from chronic diseases take place in low- and middle-income countries, those illnesses receive the smallest fraction of donor assistance for health. Of the nearly $26 billion allocated for health in 2009, just 1% targeted chronic disease (Lomborg, 2012).

Chronic disease, however, is a growing threat in the developing world, driven by a dramatic rise in both obesity and smoking. According to the World Health Organization, global obesity rates doubled between 1980 and 2008. The WHO estimates that about half the adult populations of Brazil, Russia, and South Africa are overweight. In Africa, around 8% of adults are obese. While these figures are low compared to those in the United States, where two thirds of adults are estimated to be overweight and one third are obese, the numbers are rising. A variety of factors contribute to this phenomenon, including growing incomes in many parts of the developing world, which enable more consumption, economic changes that shift work from physical labor to indoor and sedentary labor, and the movement of fast-food restaurants into new regions where people can now afford to splurge on burgers and soda (Kenny, 2012).

While smoking has decreased in many developed countries in recent decades, it has grown dramatically in some parts of the developing world. Today, about 80% of smokers live in the developing world (Qian et al., 2010). By some estimates, China has 350 million smokers (which is more people than live in the United States), and about 60% of Chinese men smoke.
Tobacco use has grown fourfold in China since the 1970s and has become a key component of the nation’s growing prosperity. Cigarettes, and particularly expensive brands of cigarettes, are given as gifts to friends and family; red cigarettes are special presents for weddings, bringing “double happiness.” China also has its own tobacco manufacturing industry, which is run by the government. This creates a conflict of interest, since the same entity that regulates tobacco and might be interested in promoting better public health is profiting from the large number of tobacco users (PBS, 2010). Since 2001, when China joined the World Trade Organization and its markets opened to new goods, Western cigarette makers have also been aggressively marketing their products there, targeting relatively untapped consumer categories such as women, who are otherwise less likely than men to smoke (Qian et al., 2010).

Growing income and improvements in the standards of living in developing countries represent important changes. For the most part, these changes are positive and include growing opportunities for education, health care, and access to technology, among others. At the same time, the chronic diseases long associated with the developed world threaten populations in new ways. Whether and how the international community, national governments, and local institutions react to these problems today will have an enormous impact on the health of populations in the decades to come.

WHY SHOULD SOCIOLOGISTS STUDY HEALTH?

Even as our medical and technological knowledge grows, threats to the goal of a healthy society and world continue to expand. In a globalizing world, no one is isolated from diseases spawned in distant places; we are all part of the same community, linked by communications, travel, and commerce. Neither are we isolated from the far-reaching consequences of health dangers that threaten to destabilize regions far from our own.
In a world where the very poor exist together with the very wealthy and billions are seeking to scramble up the ladder of prosperity, the acute illnesses of poverty can be found alongside the chronic maladies of affluence. Sociology offers us the tools to examine the sociological antecedents of a spectrum of public health problems. By using a sociological perspective, we can recognize the ways that medical issues such as HIV/AIDS intersect with social phenomena such as gender inequality, gender stereotypes, violence, and poverty. We can examine the global obesity epidemic through new eyes when we see that individuals’ choices about food and fitness are made in social and economic environments that profoundly affect those choices. While medicine and technology clearly have an enormous amount to contribute to reducing the consequences of serious health issues, including HIV/AIDS, obesity, and tobacco-related illnesses, sociology too has a role to play in discovering the social roots of and imagining creative, constructive responses to health problems that threaten many lives and livelihoods.

WHAT CAN I DO WITH A SOCIOLOGY DEGREE?

SKILLS AND CAREERS: COMMUNITY RESOURCE AND SERVICE SKILLS

Community resource competencies link knowledge of nonprofit, government, and private community resources with the skills to access appropriate services and funding to best serve clients, organizations, and communities. Resources in communities take multiple forms, including individual donors, volunteers, politicians, business owners, religious leaders, schools, libraries and community centers, and public and private service agencies. Service skills may be developed through the study of and active participation in community organizations that engage with local populations and issues. In this chapter, we discussed problems of public health, including the local and global challenge of HIV/AIDS.
While public health and medical workers worldwide are doing a commendable job getting effective treatments to more people than ever, much remains to be done. In many communities, HIV/AIDS is stigmatized, and its carriers are regarded with suspicion. Reaching out to individuals and groups who may fear seeking help, or even being tested, for HIV/AIDS requires the services not only of medical personnel but also of those who have a deep cultural understanding of an affected community and the knowledge to link populations in need with resources that can help prevent and treat HIV/AIDS.

As a sociology major, you will develop important intercultural competencies and understandings of diversity. You will also learn important occupational skills, such as the ability to gather and summarize data in order to characterize community needs effectively, and develop the habits of mind to be resourceful in addressing problems in ways that take account of different perspectives. Many educational institutions offer opportunities for service learning or volunteering that enable students to become familiar with the particular needs and resources of their own communities. Knowledge of community resources and practice in community service are assets in a variety of occupational fields, ranging from social work and counseling to organizational development, nonprofit management, and criminal justice. Job titles in these and related fields include family, school, and health care social worker; volunteer coordinator; psychologist; counselor; social or human service assistant; community resources specialist; city or regional planning aide; and vocational counselor in mental health or rehabilitation.

THINK ABOUT CAREERS

What kinds of opportunities for community service are offered at your college or university? Take some time to research available options for volunteering or service learning, and consider how sharpening your skills and knowledge in this area might be of value to your career plan.
SUMMARY

• Health is the degree to which a person experiences a generalized state of wellness, while medicine is an institutionalized approach to the prevention of illness. Although the two are clearly related, they are not the same thing.

• Notions of illness are socially constructed, as are the social roles that correspond to them. The sociological concept of the sick role is important to an understanding of societal expectations and perceptions of the ill individual.

• Not all forms of addiction are treated the same in society. Some, including alcoholism, are medicalized, while others, including drug use, are criminalized.

• The U.S. health care system does not serve all segments of the population equally. Good health and good health care are still often privileges of class and race.

• Public health issues such as smoking, obesity, and teen pregnancy can be examined through a sociological lens. The sociological imagination gives us the opportunity to see the relationship between private troubles (such as being addicted to tobacco, being obese, or becoming a teen mother) and public issues ranging from the relentless drive for profits in a capitalist country to the persistent poverty of generations.

• The global pandemic of HIV/AIDS demands a sociological approach as well as a medical approach. The mass spread of the infection is closely intertwined with sociological issues. Gender inequality makes women vulnerable to infection. Poverty renders both individuals and countries more vulnerable to the disease. Violence and war are pathways for the spread of HIV/AIDS.

• Rising standards of living in many parts of the developing world have had many positive effects, but the accompanying sedentary lifestyles and access to fast food and tobacco have also contributed to an increase in chronic diseases associated with obesity and smoking.
KEY TERMS

health, 409
medicine, 409
preventive medicine, 409
sick roles, 410
health care, 412
public health, 418
morbidity, 418
mortality, 418

DISCUSSION QUESTIONS

1. What is the sick role as defined by sociologist Talcott Parsons? What are our expectations of the ill in contemporary U.S. society? Do the responsibilities of the sick role vary by community or culture?

2. The chapter discussed the argument that some addictions are “medicalized” while others are “criminalized.” What is the difference? How might we explain why different addictions are labeled and approached in varying ways?

3. African Americans and Latinos in the United States experience worse health and higher mortality rates than their White and Asian American counterparts. What sociological factors help to explain this health gap?

4. The chapter looked at cigarettes and smoking through a sociological lens. Recall how we applied the functionalist and conflict perspectives to this topic, and try applying those perspectives to junk food, such as soda, candy, and fast food.

5. How is HIV/AIDS a sociological issue as well as a medical one? What are key sociological roots of the spread of this disease in communities and countries?

Sharpen your skills with SAGE edge at edge.sagepub.com/chambliss2e