UNESCO and the Strange Career of Multiculturalism

By Jordan Sand
August 28, 2016

IN OCTOBER 2015, the Yale Intercultural Affairs Committee circulated a memo to faculty and students urging “cultural sensitivity” when dressing up for Halloween events. A faculty member who served as associate master at one of the university’s residential colleges responded with an email to the residents of her college suggesting that there should be room for young people to be “a little bit obnoxious” and that guidance from the university on costumes might constitute bureaucratic overreach. Student protest erupted following the email, which protesters argued was itself insensitive. This being the era of the viral video, one student, seen shouting angrily at the master of the college (the husband of the controversial email’s author), came to stand in for the rest. An acerbic editorial in The Wall Street Journal called the demonstrators “Yale’s little Robespierres.”

Yale is not the only place where trouble over costumes surfaced in 2015. White students at a UCLA fraternity party caused a scandal by mimicking African-American celebrity Kanye West, and administrators at the University of Louisville were compelled to issue a public apology after donning sombreros and large fake moustaches for a photograph. At Boston’s Museum of Fine Arts, a small but vocal group of Asian Americans garnered media attention with a costume-related protest. They targeted an exhibit about Victorian appropriations of Japanese aesthetics in which museum visitors were encouraged to participate in the appropriation by having their picture taken in kimonos. The museum responded with a self-reflective symposium on the theme of cultural appropriation. As the Yale administration’s Halloween memo suggests, the issue of borrowed costumes had already been a matter of public concern for several years, particularly on college campuses. Protesters’ objections were encapsulated in 2013 by a widely known poster campaign emanating from a student group at Ohio University. The posters read, “We’re a Culture, not a Costume.”

Some time after it came into fashion in the 1990s, the phrase “cultural appropriation,” which had once seemed edgy and exciting, became a term of opprobrium. When subalterns did the appropriating — for example, when third-world artists repurposed the detritus of first-world capitalism — it upended stereotypes. Now, however, the issue was authenticity, and appropriation was linked to the act of stereotyping. It had become a shorthand for when members of a dominant group adopted “ethnic” trappings and pretended to be someone they weren’t.

As reflected in The Wall Street Journal’s editorial, public discussion of the Yale protests revolved around the conflict between freedom of expression and so-called “political correctness.” The issues raised by the protest were folded into the larger left-right struggle in American society over race. But we should pause before the particular nature of these costume protests is absorbed into a familiar political framework. Although many of the protesters spoke of racism (the Ohio student group, for example, was called Students Teaching about Racism in Society), most of the stereotyping that offended them did not concern denigration of supposed biological traits but the playful or parodic use of clothing. The perceived offense was as much a kind of theft as it was discrimination. Calling for sanctions against obnoxious cultural appropriations implied the existence of authentic cultures that belonged properly only to those who could say “we’re a culture.”

The Wall Street Journal’s label “little Robespierres” called to mind the French Revolution without naming the revolution that lay behind the Yale incident. But it is easy enough to extrapolate: these recent protests were the latest phase in the long revolution that has unfolded since the 1970s under the name of multiculturalism — a revolution whose effects are now so widespread that we take much of it for granted. Like earlier revolutions, the multicultural revolution began with popular liberation and the birth of new political possibilities, then took unexpected directions. Now that it is mired in conflict, this may be a good time to review the four-decade career of multiculturalism and ask what it has yielded and what it has occluded. The question concerns global movements and institutions, not simply the politics of American campuses. And no global institution has embodied the revolution and its contradictions more vividly than the United Nations Educational, Scientific and Cultural Organization (UNESCO).

¤


In the middle decades of the 20th century, culture came to replace race as the dominant rubric for talking about human groups. The horror of Nazism had made treatment of races as fixed categories anathema. The collapse of colonial empires in the decades after the war delegitimized the remaining state rationalizations for rule by race. With the triumph of the civil rights movement in the United States during the 1960s, only South Africa persisted in championing an explicit logic of racial hierarchy at the foundation of government, and apartheid increasingly made the country a pariah.

“Culture” was a murky category, but it offered a flexible alternative to race, mixing nature and nurture and including an element of choice. Thanks to “culture,” biology was removed from acceptable discourse about social inequality, or at least rendered obscure. Even anthropologist Oscar Lewis’s controversial concept of the “culture of poverty,” which many condemned for blaming the victims of poverty, expressed liberal optimism when first popularized in the 1960s. After centuries of justifications for inequality on the basis of heredity, Lewis’s theory asserted that the social traits that disadvantaged the underclass were learned and could therefore be unlearned. After the United States began conducting the national census partially by mail in 1960, race itself was culturalized, since respondents were now allowed to choose the racial category in which they wished to be identified. Census forms in 1970 and 1980 added the ethnically ambiguous “Hispanic” and other categories that further blurred the distinction between heredity and culture.

Multiculturalism as a concerted program entered legislation and the classroom in the early 1970s. In a speech to parliament in 1971, Canadian Prime Minister Pierre Trudeau announced his “policy of multiculturalism” and promised government aid to “cultural groups” and the “encouragement of interchange” among them. In the United States, “culture” began to appear in policy discussions following Oscar Lewis’s work and the 1965 Moynihan Report (published as The Negro Family: The Case for National Action), which drew inspiration from Lewis. Rejecting the implication in these studies that nonwhites were “culturally deprived,” minority advocates pushed for school materials to represent people of color as more than just token minorities. The response was quick. By 1974, roughly half of all public primary and secondary schools in the country reported having introduced some form of multicultural curriculum or teacher training.[1] The movement that followed was international but developed predominantly in Anglophone countries with white majorities. In Britain and Australia, as in the United States and Canada, schoolchildren were taught that each of them had a unique cultural heritage, equally to be valued. Native costumes and foods of various countries were introduced in many schools on special occasions. Some teachers put little flags representing the children’s different national origins out on desks to display cultural diversity. Canada had occupied the vanguard of diversity rhetoric with descriptions of the nation as a “mosaic” dating back to the 1930s. In the 1970s, the United States followed suit, as the assimilationist metaphor of the “melting pot” gave way to mosaics, rainbows, and salads: images of mixing without fusion.

The other arena in which the multicultural revolution took place was historic preservation, or “cultural heritage protection” as it came to be called increasingly after the 1972 UNESCO World Heritage Convention. In contrast to the rapid transformation in grade-school pedagogy, this was a gradual revolution. UNESCO emerged in the wake of World War II as an organization dedicated to world peace through education. In its first two decades, it promoted the idea of culture as the expression of a universal civilization. UNESCO’s definition of culture subsequently evolved from an elite one, built around “masterpieces,” to one that was more encompassing and relativist. In the process, it came to champion cultural difference over cultural confluence.

The evolution is glimpsed in the relationship between UNESCO and structural anthropologist Claude Lévi-Strauss. In a pamphlet called Race and History, published by UNESCO in 1952, Lévi-Strauss urged equal recognition for the contributions of all cultures to human progress, while proclaiming, in language that underscored the value of UNESCO’s civilizing role, that mankind’s greatest achievements had been the result of collaboration and exchange among cultures. Invited to address the organization for the International Year for Action to Combat Racism in 1971, he shifted emphasis, this time stressing the value of keeping people apart. The world had become overcrowded, creating friction among cultures, he claimed, and threatening some with extinction. Preserving cultural diversity was essential to the perpetuation of the human race, even if it meant allowing some cultures to remain parochial and xenophobic. This argument met with strong disapproval from UNESCO members at the time, since it conflicted with the United Nations’s development agenda. Yet eventually it would become integral to UNESCO’s worldview. In 2005, Lévi-Strauss, now 96 years old, addressed a UNESCO gathering with much the same argument he had presented in 1971, and enjoyed an extended ovation. By this time, both Lévi-Strauss’s view of the world as an “archipelago” of cultures, and his anxiety that individual islands in the archipelago were threatened by globalization had become firmly rooted at UNESCO.[2]

The Venice Charter of 1964, the organization’s first agreement concerning the preservation of buildings and historic sites, was a European document, prompted by the war’s devastation in Europe. The experts who drafted the World Heritage Convention eight years later also gave little thought to the different ways that cultural heritage might be understood beyond Europe’s shores. Steadily over the next three decades, however, parties outside Western Europe challenged UNESCO’s approach to heritage. The first challenge came less than a year after the convention had been adopted, when Bolivia’s Minister of Foreign Affairs wrote to UNESCO demanding protection of the “intangible” heritage of indigenous peoples from the threat of appropriation by others. His complaint was spurred by Simon & Garfunkel’s use of the Andean folk song “El Condor Pasa” on their best-selling album Bridge over Troubled Water. The letter articulated a conception of cultural heritage protection reaching well beyond the concern for historic architecture and natural monuments that had informed the World Heritage Convention. Although it produced no immediate change to UNESCO policy, by raising the issue of indigenous cultural property rights, it placed the politics of multiculturalism at the center of heritage debates.[3]

The irony of this story was that under the authoritarian government of General Hugo Banzer, Bolivia was at the same time declaring all indigenous cultural products state property. Far from advancing the rights of indigenous peoples, Bolivia was co-opting them. Yet regardless of whether an authoritarian government or an indigenous group made the claim, the cultural heritage of indigenous peoples and national minorities had clear political valence. Indigenous rights depended on asserting the historical continuity of a bounded community maintaining unique and autochthonous traditions, even if this involved what Gayatri Spivak has called “strategic essentialism”: the exaggeration of internal cohesion to suppress signs of outside influence.

When states don’t hold a monopoly on indigenous claims the way Bolivia did in 1973, they have often treated the political nature of indigenous heritage as a threat. Claims of heritage rooted in the land are a short step from demands for territorial autonomy. An indigenous people’s council on world heritage was proposed in 2000, but the proposal was rejected the following year by representatives of member states on the World Heritage Committee, who feared challenges to their sovereignty from an international body of indigenous groups, even if that body were only advisory. Nevertheless, both the idea of indigeneity defined in cultural terms and the concrete problem of indigenous rights issues within UN member states have persisted in world heritage management.

¤


Following the 1972 Convention, UNESCO broadened its World Heritage List to incorporate historic cities (1978) and cultural landscapes (1995), while the organization issued numerous declarations of multicultural ideals, culminating in the Convention on the Protection and Promotion of the Diversity of Cultural Expressions in 2005. As in Lévi-Strauss’s anthropology and in the multicultural classroom, culture in UNESCO’s heritage framework came in the process to be conceived on the archipelago model. Cultures were discrete and holistic objects, and the nations or ethnic groups associated with them were their natural possessors and custodians. As a corollary to multiculturalism’s celebration of diversity, heritage management protected (or “safeguarded,” to use the organization’s preferred term) culture from the threat of global homogenization.

The Convention had established that UNESCO was protecting things of universal value, however, and inclusion in the World Heritage List, which became much sought after, reaffirmed the high-cultural assumption that some relics of the past were more valuable than others. Non-Europeans applied continuous pressure to rebalance the list away from its European locus, where the majority of listed sites still resided. But regardless of where the heritage site was located, it represented a cultural essence under threat from globalization; hybridization among cultures was usually understood as vitiating. Since representatives of UN member states did the nominating and negotiated the language of the conventions, declarations, and guidelines, the model of a world of national and subnational cultures was structurally built into the process. Lecturing in Denmark in 2004 about the importance of the new convention for intangible heritage, UNESCO Director General Koichiro Matsuura asked his audience, “What is at the core of Danish identity and culture?” Matsuura’s question implied that the convention’s purpose was to make sure Denmark remained sufficiently Danish and to guarantee the same to other member states.

The Nara Conference on Authenticity, hosted by Japan in November 1994, marked a turning point in UNESCO’s multicultural revolution. Japan had held off from signing the World Heritage Convention while Japanese conservation bureaucrats sought to modify UNESCO’s definition of what constituted an authentic historic building. All of the significant buildings more than a century old in Japan were post-and-lintel wood structures, and the standard practice of Japanese experts was to dismantle nationally designated buildings completely and reassemble them in something approaching their original form. This practice conflicted with the Ruskinian ethos reflected in UNESCO’s Venice Charter, which called for preserving “contributions of all periods” of a building’s life.

Japan’s decision to sign the Convention in 1992 provided the impetus to reconsider the guidelines for assessing authenticity. Reaching for a familiar tool of cultural relativism, several conference participants from Japan and other non-European countries began their speeches with the observation that there was no equivalent for the word “authenticity” in their languages. None sought to reject the concept entirely, however. The document that emerged from the Nara meeting, which would later become the basis for a sweeping reform of world heritage guidelines, declared that “values attributed to heritage […] differ from culture to culture,” and that authenticity could therefore derive from a list of sources ranging from “form and design” to “spirit and feeling.” It thus left as wide a space as possible open for interpretations suited to different settings.

After Nara, claims based on the custodians’ spiritual investment in an artifact or place could be used to support nomination to the list. As it had often done in the past, Japan had played the wedge for the entry of the non-West into a Western-dominated field. Revolutionary leveling had not been the Japanese heritage bureaucrats’ intention. The Japanese were like the French Revolution’s Girondists, moderates in the National Convention who wanted to transform the state without executing the king. They did not question the founding notions of authenticity and excellence that informed monument protection; they simply wanted recognition for their own monuments. Yet the door had been thrown open, and as in earlier revolutions, a cascade of increasingly radical claims followed.

¤


Whatever its merits or shortcomings for the protection of monuments, the multicultural reinterpretation of UNESCO heritage was a move toward global distributive justice. It had become evident in the years since the 1972 Convention that a UNESCO listing was worth a lot to the host country in prestige and tourist dollars. However, the burgeoning market value of the list was an awkward fact that went largely unacknowledged by UNESCO. When tourism appeared on the agenda at UNESCO meetings, it was framed as part of the general threat of globalization. The problem of the organization’s own enabling role was shunted aside, unresolved.

To maintain the ideal — or the illusion — of a purely cultural agenda, untainted by economic or political motives, all parties to the ongoing revolution focused on pursuing greater pluralism and more encompassing conceptions of cultural value. The year after Nara, the organization issued Our Creative Diversity, a report heavily influenced by the ideas of Lévi-Strauss. One decade later, after more than 50 regional meetings to discuss implementation of the Nara Document, experts gathered again in Japan and issued the Yamato Declaration, calling for greater integration of the treatment of tangible and intangible cultural heritage. At a meeting marking the 20th anniversary of Nara, participants called for further expansion of the concept of authenticity. In these gatherings the Japanese tended to be the conservative voices — a sign that the revolution they had helped initiate had succeeded beyond their goals.

In 2003, 30 years after the Bolivian letter, 30 member states signed the Convention on Intangible Cultural Heritage, the first binding agreement to extend UNESCO protection to culture in nonmaterial forms, including not only oral traditions and performances but “knowledge and practices concerning nature and the universe.” Inscription of particular cases began in 2008. Early listings included the “cultural space” of the old market square in Marrakech, several forms of Japanese classical theater, Chinese calligraphy, and Uyghur Muqam, the music of the Uyghur minority in Xinjiang. The number of ratifying member states subsequently increased to 167.

This treaty moved UNESCO into genuinely new territory, but Japan and Korea provided national-level precedents. Since 1950, Japanese law had allowed for the designation of intangible cultural heritage primarily to recognize bearers of traditional craft techniques (referred to colloquially as “human national treasures”). Korea’s law, introduced in 1962, had been applied to a broader expanse of practices, including folk music and dance from each region of the country. Before these models could be adopted, however, UNESCO members clashed over the question of whether to have a list at all. The potential number of candidates was infinite and the criteria for excellence were hard to imagine. A coalition of Caribbean nations campaigned to create a “registry” to which nominating parties could name as many instances as they wished without UNESCO weighing the merit of one over another. Here too the Japanese advocated the more conservative position, calling for limiting the list to “masterpieces.” The compromise on which the body ultimately settled was a “representative list,” limited by a nomination and selection process like the World Heritage List but without claims of universal excellence. Instead of the model of a museum or hall of fame preserving the heritage of greatest value, the intangible heritage list thus resembled a Noah’s Ark of Lévi-Straussian salvage ethnography, in which examples of every species of cultural tradition would be perpetuated.

Despite the hesitancy of Japan and some other nations to abandon excellence in favor of representativeness, in the effort to include all nations and consider all cases, UNESCO experienced what might be called multicultural mission creep, as members expanded the framework and criteria for “heritage” with each new declaration or revision to convention guidelines. Nothing represents the unintended consequences of this mission creep better than the designation of national cuisines as intangible cultural heritage. France took the lead, seeking and gaining UNESCO listing for the “French gastronomic meal” in 2010. Since UNESCO’s heritage programs targeted cultures under threat, the French could not argue for a listing simply because French cuisine was excellent; they had to claim that it was in danger. Materials submitted with the application omitted mention of particular dishes or techniques. Instead, they focused on scenes of ordinary life in France, presenting food as the vehicle of hallowed social rituals in the village and the home. The implication was that like villages in more remote places, villages in France too were steeped in tradition. The listing of French food — which stood in for the French joie de vivre itself — announced that a traditional French way of life was as much in need of protection from global market forces (McDonald’s comes to mind) as the lifestyle of any indigenous minority. If cultural globalization is a threat everywhere, then we are all indigenes.

Mexico and Japan followed suit, both of them carefully skirting general claims about “cuisine” (the word itself was too redolent of cosmopolitanism) and targeting regional traditions with their nominations. Yet, of course, these countries’ national cuisines and food industries were the winners. In its application to UNESCO, Japan used the Japanese term washoku to designate the particular local New Year’s festival preparations for which it sought designation. Washoku in fact means simply “Japanese food.” Through this linguistic slippage, the Japanese government and an expanding public-private apparatus for promoting the “Japan brand” were now able to use the UNESCO designation as if it were a Michelin ranking.

As long as there is a list, there is no escaping the fact that the list confers status, even if it purports only to identify “representative” culture. And if we are all indigenes, we are also all tourists and restaurant-goers. Thus, one outcome of an effort to open UNESCO to the broadest possible spectrum of cultural value was that two of the wealthiest nations in the world were able to promote two of the most expensive cuisines through a system originally proposed to protect Andean folklore.

¤


In the same years as the multicultural revolution in pedagogy and heritage conservation, cultural anthropologists abandoned the archipelago model of culture. The field was eager to shed the vestiges of its past as a colonial science, one that had treated non-Western peoples as static and bounded groups. Scholars sought something more dynamic. The most widely read essays in the discipline beginning in the 1970s systematically dismantled cultural anthropology’s object of inquiry. Clifford Geertz redefined culture as the web of signs and signifiers (or “twitches, winks, fake-winks,” and “burlesqued winks”) that envelop and entangle us. Writing in 1986, George Marcus and James Clifford argued that culture was a set of political strategies inseparable from the contexts in which they are deployed, therefore implicating anthropologists in the very “cultures” they seek to explain. By 1991, anthropologist Lila Abu-Lughod was questioning whether scholars could use the word “culture” rigorously at all without being implicated in a politics that exoticized non-Western others. In his influential Modernity at Large (1996), Arjun Appadurai asserted that the adjectival form “cultural” remained useful but that the countable noun, “culture/cultures,” should be rejected as a dangerous reification.

In literature and other humanities departments, poststructuralist cultural studies in the 1980s and ’90s celebrated hybridity. This was particularly the case in North America. Superficially, this celebration accorded with the multiculturalist perspective in pedagogy and policy, but fundamentally the two conflicted. Despite a shared liberal politics, advocates of hybrid identities rejected the natural identity between ethnicity and culture that predominated in the popular multiculturalist agenda. From the 1980s on, students nurtured on multiculturalism in primary and secondary schools were thus apt to encounter professors in college who sought to undermine the cultural identities they had fostered prior to college. Some social theorists pronounced talk of hybridity a luxury of cosmopolitan elites, pointing out that for the vast majority of people border-crossing was involuntary and fraught with danger. Others responded that human groups had constantly blended and accused multicultural policies of “boundary fetishism.”[4] The term “multiculturalism” was capacious enough that in academic journals, the debate over hybridity could be pursued under its broad rubric. In the practical realm of policymaking and school curricula, however, the model of an archipelago of ethnic and national “cultures” offered a more manageable prescriptive tool. Dismantling stable categories threatened to undermine the programs of minority recognition that those categories sustained.

Several things follow from the poststructuralist reconception of culture, which is now generally accepted in anthropology and the humanities. It tells us that culture is not an autonomous sphere that can be separated from politics and economy; that it is always strategically constructed; and that it is not a thing someone can possess. Poststructuralist anthropology has made visible the forces at work in every claim based on culture. Conversely, as seen in the campus protests, consequences also follow from the political mobilization of the older holistic view of culture as discrete, natural, and possessable. The clearest danger in reifying culture lies in what Appadurai termed “primordialism”: the belief that present-day community loyalties and antagonisms among groups have ancient tribal roots. Amid the post–Cold War proliferation of violence explained as “ethnic conflict,” there has been good reason to fear reifications of ethnicity and culture. It would obviously be a mistake to portray Yale students, indigenous rights activists, and promoters of French cuisine all as the agents of ethnic nationalism — and the claims of each need to be assessed for their intentions and outcomes in their separate contexts — but the theoretical logic of all of them potentially limits new possibilities of both cultural creativity and cosmopolitanism.

Since the 1990s, “heritage studies” has emerged as a new academic field at the intersection of anthropological critique and conservation practice. With its own graduate programs, journals, and conferences, the field has grown exponentially, paralleling the growing prominence of the World Heritage program, which now dominates the work of UNESCO. Heritage studies scholars have sought to absorb the poststructuralist reconception of culture, but they find themselves in a paradoxical position, since heritage practice, shaped by national and local bureaucracies for the protection of distinct cultures, is constitutionally unable to digest its implications. Following the multicultural turn, critical heritage studies targeted UNESCO’s Eurocentrism. Since the Intangible Heritage Convention this line of argument has become difficult to sustain. The critique has recently shifted to a more generalized view of the heritage enterprise as an instance of “governmentality,” in which hegemonic control is extended ever deeper into daily life by legitimating institutions (like UNESCO), which privilege particular cultural practices. Ultimately, such an approach leads to the conclusion that “heritage” is an illusion manufactured by heritage institutions. We find, for example, folklore scholar Dorothy Noyes, in her afterword to UNESCO on the Ground, proposing in a Kafkaesque vein that both UNESCO and the localities whose heritage is being safeguarded may in fact exist for the purpose of perpetuating the cultural bureaucracies that depend on maintaining the traffic between them. Valdimar Hafstein, author of some of the most mordant and witty critical analysis of intangible heritage protection, takes a similar approach. It is a sign of UNESCO’s intellectual openness that Hafstein served on the delegation from Iceland at the meetings where the Intangible Heritage Convention was drafted. Yet as Kafka would have recognized, and as Hafstein himself appears to realize, this sophisticated internal critique does nothing to weaken the edifice of heritage. The machinery of UNESCO bureaucracy absorbs it and grinds on.

None of this scholarship or bureaucratic practice is maleficent. Indeed, like so much of what UNESCO has attempted over its seven-decade career, the world heritage program suffers, if anything, from a surfeit of good intentions. But it continues to be incapable of coping with two elephants in the room: the nation-states that determine what gets designated as cultural heritage and the global tourism industry that profits from the designations. And so, states and their tourism bureaus use the World Heritage lists to burnish their national images, while in places from Marrakech to minority regions in China, local people who lack the power to counter these twin forces find their towns overrun by tourists, their lives reshaped and, in some circumstances, their rights as citizens trampled by state-led heritage agendas. The framing of the world’s “cultural heritage” as an object for UNESCO protection and the constitution of “culture” as a pure space cordoned off from the political and economic spheres have thus had the perverse effects of enhancing the market value of the very things being protected and removing them from the control of the people most connected to them.

Meanwhile, in classrooms and on university campuses, the well-intentioned project of multiculturalism did more to promote ethnic identities than it did to address real disparities of power. In the United States, despite the gains of the civil rights movement, African Americans continue to be disenfranchised and incarcerated in shocking numbers, and economic inequality has expanded to the point of crisis. Poverty and violence have increased in many parts of the Global South, precipitating mass migrations northward to former imperial metropoles. Faced with these increasingly visible injustices, multiculturalism offered the vision of a state-protected utopia of unique, independent cultures living in harmony with one another. At the same time, an insatiable consumer culture appropriated the superficial trappings of ethnic difference and delivered them back to young people in the form of cartoon stereotypes, parodies, and Halloween costumes.

As misdirected as their protest may have been, the Yale students intuited correctly that despite the promises of harmony, the real disparities of race and power were still intact. Calling them “little Robespierres” misapprehends what they were doing, however. This was not the Terror — rather, they were like Bonapartists, asking the university administration, and implicitly society at large, to act as a strong protector. This wish in turn tapped into a multiculturalist presumption that has been shared by the heritage establishment since the 1980s: that “culture” is the marker of identities perpetually under threat. Whether on campus or in UNESCO, the vision of justice guaranteed by transcendent institutions protecting the cultural distinctiveness of each ethnic group or nation had radical potential when part of a minority insurgency, but is now looking both more statist and more co-opted by consumer capitalism. It is unlikely that we can manage without the useful term “culture,” but the arrival of this Bonapartist moment on American campuses suggests that multiculturalism as a concept has lost sight of its revolutionary origins.

¤


Jordan Sand is professor of Japanese History at Georgetown University in Washington, DC.


¤


[1] David E. Washburn, Neil L. Brown, and Robert W. Abbott, Multicultural Education in the United States (Philadelphia: Inquiry International, 1996), 10.

[2] Wiktor Stoczkowski, “Claude Lévi-Strauss and UNESCO,” UNESCO Courier, no. 5 (2008): 5–9. The term “archipelago” here comes from Thomas Hylland Eriksen, “Between Universalism and Relativism, a Critique of the UNESCO Concept of Culture,” in Culture and Rights: Anthropological Perspectives, ed. Jane K. Cowan, Marie-Bénédicte Dembour, and Richard A. Wilson (Cambridge University Press, 2001), 127–148.

[3] Here and throughout this essay I draw on recent publications in the large body of scholarship in heritage studies. These include Janet E. Blake, International Cultural Heritage Law (Oxford University Press, 2015); Michael Dylan Foster and Lisa Gilman, eds. UNESCO on the Ground: Local Perspectives on Intangible Cultural Heritage (Indiana University Press, 2015); Lynn Meskell, ed. Global Heritage: A Reader (Wiley Blackwell, 2015); Sophia Labadi and Colin Long, eds. Heritage and Globalisation (Routledge, 2010); and Laurajane Smith and Natsuko Akagawa, eds. Intangible Heritage (Routledge, 2009). I have also drawn on the critique of reifications of culture in Joana Breidenbach and Pál Nyíri, Seeing Culture Everywhere: from Genocide to Consumer Habits (University of Washington Press, 2009).

[4] See, for example, Jonathan Friedman, “The Hybridization of Roots and the Abhorrence of the Bush,” in Spaces of Culture: City—Nation—World, ed. Mike Featherstone and Scott Lash (London: Sage, 1999), and Jan Nederveen Pieterse, “Hybridity, So What? The Anti-Hybridity Backlash and the Riddles of Recognition,” Theory, Culture & Society 18, no. 2 (2001).
