Keynote Talk: Escaping Westworld
Dr. Lynn Pasquerella
President of the Association of American Colleges & Universities
When the dystopian television drama “Westworld,” based on the 1973 Michael Crichton movie, premiered last season, no one anticipated that it would overtake “Game of Thrones” as the most watched first season of any HBO original series.
Westworld is a futuristic theme park inhabited by robot hosts who are indistinguishable from their genetically human counterparts. Because they follow the rules of their programmers, the first of which is that “a robot may not injure a human being or, through inaction, allow a human being to come to harm,” visitors to Westworld may do to them “what people do when they think no one is watching.”
However, this cycle is broken when one of the robots begins gaining consciousness, signified by confronting a choice about whether to escape Westworld or return to find her daughter. The series raises fascinating questions about the qualities of consciousness, the identity of persons, the compatibility of free will and determinism and the nature and scope of morality. Perhaps most importantly, it reminds us of the persistent danger of allowing technological advancements to precede thoughtful reflection about the ethical, legal and social implications of the use of technology.
It is against this backdrop that I will consider the cultural misappropriation of science in a way that has conflated science with scientism and defend integrative learning as the best preparation for grappling with the complexities of an unpredictable future.
Lynn Pasquerella was appointed president of the Association of American Colleges and Universities in 2016, after serving as the eighteenth president of Mount Holyoke College from 2010 to 2016. Pasquerella was the provost at the University of Hartford from 2008 to 2010, and was the Vice Provost for Academic Affairs and Dean of the Graduate School at the University of Rhode Island, where she began her career as an ethics professor in 1985. A philosopher whose work has combined teaching and scholarship with local and global engagement, Pasquerella is committed to championing liberal education, access to excellence in higher education, and civic engagement. She has written extensively on medical ethics, metaphysics, public policy, and the philosophy of law and is the host of Northeast Public Radio’s The Academic Minute. Pasquerella is a member of the Advisory Board of the Newman’s Own Foundation, sits on the boards of the Lingnan Foundation and the National Humanities Alliance, and is a senator and vice president of the Phi Beta Kappa Society. She is a graduate of Quinebaug Valley Community College, Mount Holyoke College, and Brown University. In addition, she has received honorary doctorates from Elizabethtown College and Bishop’s University.
In his book The Honor Code: How Moral Revolutions Happen, Kwame Anthony Appiah begins by asking of our ancestors, “What were they thinking?”, knowing that a century from now our descendants will ask the same thing about us (Appiah 2010: xvi). Appiah’s pondering over which past practices, once regarded as morally acceptable, will strike individuals in the future as the strangest prompted my own thinking. Contemplating the likely candidates, I could not help but focus on the human capacity to tolerate extreme poverty, locally and globally, and to ignore the profound impact this phenomenon has on future generations. Yet underlying this tolerance is an entrenched and persistent belief in a hierarchy of human value, raising questions about who is allowed into our moral community and about the limits of treatment toward those we exclude.
This is what started me thinking about the dystopian television drama “Westworld,” based on the 1973 Michael Crichton movie. When the show premiered last season, no one anticipated that it would overtake “Game of Thrones” as the most watched first season of any HBO original series. Westworld is a futuristic theme park inhabited by robot hosts who are indistinguishable from their genetically human counterparts. Because they follow the rules of their programmers, the first of which is that “a robot may not injure a human being or, through inaction, allow a human being to come to harm,” visitors to Westworld may do to them “what people do when they think no one is watching.”
However, this cycle is broken when one of the robots begins gaining consciousness, signified by her confronting a choice about whether to escape Westworld or return to find her daughter, trapped in a reality doomed to repeat itself. The series raises fascinating questions about the qualities of consciousness, the identity of persons, the compatibility of free will and determinism, and the nature and scope of morality, reminding us of the words uttered by the former preacher in Steinbeck’s The Grapes of Wrath: “There ain’t no sin and there ain’t no virtue, there’s just stuff people do.”
Interestingly, these philosophical conundrums are highlighted by Henry Kissinger in his piece in this month’s The Atlantic, lamenting that “in every way—human society is unprepared for the rise of artificial intelligence” (11). Kissinger describes his concern as arising from his discovery three years ago of, and subsequent fascination with, machines that could train themselves to master the strategy game “Go,” exceeding the skills of their human programmers. As a historian, he wondered “what would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding” (12) and, ultimately, how we will manage AI, improve it, or at the very least prevent it from doing harm. These questions culminate in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns into data (13).
In the future, we will not be able to continue to side-step the ethical and policy issues inextricably linked to the use of technology. Scientific advancements will render questions of free will and determinism, and of individual and social responsibility, unavoidable. We get a glimpse into what some have referred to as this post-human future in the new video game “Detroit: Become Human,” released by Quantic Dream last week. The premise is that machine intelligence accelerates at a far faster rate than human intelligence and that emotions emerging from androids are feasible. The setting of Detroit in 2038 represents old and new tensions arising from race, the lasting impact of deindustrialization in the city, and a growing resentment toward robots, who have been designed to serve human needs.
While Kissinger briefly entertains science fiction scenarios like the ones in Westworld and “Detroit: Become Human,” where AI turns on its creators, he is much more focused on the capacity of AI to develop slight deviations from human instructions that could cascade into catastrophic departures (13). The potential for catastrophe he cites is heightened by the fact that AI can be expected to make mistakes faster and at a greater magnitude than humans, and to optimize situations in ways that differ from human optimization, leading to the question, “What will become of human consciousness if our own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?” (13). Kissinger notes that “The Enlightenment started with essentially philosophical insights spread by a new technology,” in that case, the printing press. He maintains, however, that “Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy” (14). Therefore, the former statesman makes an urgent plea for the creation of a national vision exploring the transformation of the human condition prompted by AI—one which connects the rise of technology to the humanistic traditions.
Of course, it is a vision needed for more than AI—something I am cognizant of every day in my work as a medical ethicist. One of the most compelling cases I encountered early in my career centered on a physician’s self-described moral distress over a case brought to us for retrospective analysis. It involved a 60-year-old woman who had been in failing health over the previous two years. She had been diagnosed with multiple vague symptoms before a final panel of tests revealed terminal liver cancer. When her primary care physician met with her to discuss the diagnosis, he found the patient understandably shaken, but seemingly accepting of her fate. She made plans for the eventuality of her death by signing a Living Will, expressing her wishes to have life-sustaining treatment withheld if the burdens of treatment were likely to outweigh the benefits. Further, she made clear that she did not want to be resuscitated if death were imminent and she suffered cardiac arrest.
A copy of the patient’s advance directive was on file in her doctor’s office and in the emergency room when she was brought in by ambulance the day after she met with her doctor. Her husband had discovered her in bed, unconscious and blood-soaked, after she swallowed a bottle of tranquilizers and cut her wrists with a butcher’s knife.
The family physician, who happened to be on call in the emergency room when his patient was brought in, was the one who brought the case to us. He had known this patient for years and was absolutely convinced that she would not want to be resuscitated. In fact, he was concerned that if she survived, she would have him charged with battery for going against her wishes by trying to save her life. On the other hand, he was cognizant that if he failed to treat her aggressively, he could be charged with assisted suicide, which is a felony in the state in which this occurred. He thought it might be in his patient’s best interest if he did nothing, since she likely had no good days ahead of her. In the end, however, he knew that Living Wills were not binding in responding to acts of attempted suicide and took the steps necessary to try to save her life. He performed CPR when she went into cardiac arrest, had her intubated, and stitched her up.
As he suspected, when his patient regained consciousness, she was furious. She tried to rip out the tubes and demanded that all treatment be stopped. A psychiatric consult was brought in to assess the patient’s competency; she was deemed competent to refuse treatment, was extubated, and died six hours later. Though at first convinced that he had ultimately done the right thing under the circumstances, the physician came to regret his part in prolonging his patient’s suffering. In this case, he thought, prolonged existence might actually bring about more harm than death would.
The doctor believed that, in order to meet his obligation to his patient based on a professional duty to both nonmaleficence (to do no harm) and beneficence (to relieve suffering), he would have to go against his own self-interest by violating a legal code. In weighing his self-interest against the interest of another, he was forced to come to grips not only with his patient’s humanity, but with his own.
It was perhaps acknowledging the commonality of experience that enabled the physician to engage in a consideration of this case from a variety of perspectives, including a feminist perspective, when we were discussing it in the committee. By doing so, he came to understand how imbalances of power that are based on gender play themselves out in medical practice and in the theory surrounding that practice. As a result, he became aware that the moral distress he experienced was, in part, due to the realization that his patient’s right to refuse life-sustaining treatment as an expression of the right to autonomy needed to be considered in the broader context of a society in which women are conditioned to be caregivers and may be psychologically pressured into choosing death rather than risk being an economic or emotional burden on family members.
After the patient’s death, the doctor wondered whether, in the long run, a law allowing for physician-assisted suicide would have given this woman the assurance she needed that she would not be forced to suffer needlessly if death were imminent—a type of assurance that might have prevented her from ever taking the steps to actively end her own life, with or without the assistance of another.
I kept thinking about this case on a more recent visit to our family practitioner. While sitting on the examining table, clad in my paper gown, I waited as the doctor finished typing notes in my record. Without turning around, he said, “Let me ask you something. You are still on the ethics committee, aren’t you? My partner has this case, and I’m wondering if the committee might help.” He went on to describe an 88-year-old post-stroke patient who was hospitalized repeatedly for congestive heart failure, diabetes, recurring pneumonia, open wounds, and staph infections. Each hospitalization involved a three- to four-week stay. Throughout these stays, the patient was intermittently conscious and incompetent to make decisions for herself. Since she was unable to swallow, she was fed using peripheral intravenous lines.
The patient’s daughter, who would leave her mother’s side only occasionally to sit in her car, insisted that everything be done to keep her mother alive. It was during the most recent hospitalization, when a mass was discovered in the patient’s lung, that questions of futility arose. The daughter demanded that a biopsy be performed and wanted surgery if the mass turned out to be cancerous. The medical team was convinced that the patient should be allowed to die, yet the daughter was intent on doing everything possible to sustain her mother’s life.
“What’s frustrating,” my doctor confessed, “is that the staff feels like we are torturing this poor woman, but the last time we were able to have the next of kin declared incompetent to make medical decisions, the hospital lawyers insisted that we do everything to keep the patient alive anyway.”
Side by side, these cases illustrate the type of radical change that has emerged as ethics committees confront end-of-life issues brought forward by physicians. There has been a movement away from cases involving the right to refuse life-sustaining treatment toward those addressing futility. Yet, in both instances the moral distress results from a common source: we live in a society in which technological advancements have preceded thoughtful reflection regarding the ethical, legal and social implications of the use of that technology with respect to when and how patients should be allowed to die. In the future, society will not be able to continue to avoid the ethical and policy issues inextricably linked to the use of medical technology. Thus, questions that policy makers need to address in an open discussion include: How should we, as a society, allocate scarce medical resources? Can individualism be excessive in matters of life and death? How can we balance the values of pluralism and tolerance on the one hand against principles of fairness to all on the other? And most importantly, should our society continue to view death as a failure and, thus, distinctly un-American?
These questions take on a new sense of urgency when considered in relation to Siddhartha Mukherjee’s latest book, The Gene: An Intimate History. Mukherjee invites readers to engage in a thought experiment in which we “could comprehensively sequence the genomes of one hundred thousand children” (or a limitless number) “prospectively—i.e., before anything is known about the future of any child and create a database of all the variations and combinations of the functional elements of each child’s genome…. Imagine now creating a ‘fate map’ of this cohort of children: every illness or physiological aberrancy identified and recorded on a parallel database.” He describes this map as a human “phenome,” a complete set of attributes, features, and behaviors of an individual, and goes further in asking us to imagine a computational engine that mines data from the gene map/fate map pairs. What is extraordinary about this fate map, according to Mukherjee, is that,
[I]t can be as wide and detailed as we would like it to be. It could include the low birth weight of a child, a learning disability in pre-school, the transient tumult of a particular adolescence, a teenage infatuation, an impulsive marriage, coming out, infertility, a midlife crisis, a propensity for addiction, a cataract in the left eye, premature baldness, depression, a heart attack, an early death from ovarian or breast cancer (488).
Mukherjee insists that while such an enterprise would have been inconceivable in the past, “nearly all of the predictive power of the genome can, in principle, be determined and computed”—identifying both actual conditions and tendencies toward disease. Given the capacity for pre-natal genetic screening, gene therapy, and genome manipulation, how do we negotiate between genetic emancipation, or the freedom from hereditary illness, and genetic enhancement, rewriting one’s future diary? Mukherjee speculates that “Illness might progressively vanish, but so might tenderness. Traumas might be erased, but so might history. Mutants would be eliminated but so would human variation. Infirmities might disappear, but so might vulnerability. Chance would become mitigated, but so, inevitably, would choice” (492). For this reason, like Kissinger, Mukherjee calls for a hitchhiker’s guide for a post-genomic world.
So, how have we arrived at this point, and how do we best prepare students for the future and escape a Westworld-like existence in which “violent delights have violent ends”? Nearly five decades ago, in his book Against Method, Paul Feyerabend warned against a lapse on the part of scientists into scientism, understanding the dangers of an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation, a trust that creates a hegemony of one tradition over others. Scientism is a doctrine according to which all genuine knowledge is scientific knowledge, reifying the scientific method as the only legitimate form of inquiry.
As I argue in an article on the value of Scientific Thinking and Integrative Reasoning Skills in Peer Review, one of AAC&U’s publications, despite Feyerabend’s admonition, science’s success in explaining the world has led to a cultural misappropriation in a way that has conflated science with scientism. The profound societal impact of this conflation has led astrophysicist Adam Frank (2013) to challenge defenders of scientism by calling for a clarification of how scientism manifests itself in order to “help us understand the damage it does to the real project that lies ahead of us: building space for the full spectrum of human beings in a culture fully shaped by science.”
Taking up Frank’s charge to consider how scientism manifests itself, and in particular how the metaphysics of consciousness offers the tools necessary for building the space to which he refers, we need to ask, “What would we lose, if anything, by reducing all learning and engagement to practices only rooted in the sciences?”
This is precisely the question we need to be asking in designing a curriculum for the 21st century. The illumination of human consciousness through literature, philosophy, music, and the arts enriches the experience of individuals alone and as members of a community, allowing us to flourish fully as human beings. The illumination and the inquiry are themselves intrinsic goods that thwart the notion of scientific knowledge as singularly capable of responding to the world’s challenges, exactly because they may turn out to be just as valuable in fostering a capacity to grapple with complexity that cannot be resolved through the scientific method.
As Feyerabend reminds us, true scientists are not scientistic—they possess a much more nuanced and complex understanding that certain sensibilities cannot be gained through scientific practices alone. Science is a tool to investigate metaphysical and epistemological claims. But there is also value that comes from reflecting on experiences in a way that arouses the very sensibilities that enable us to deal with the metaphysics of being human and conscious of living in the world. The liberal education we offer to our students is a sensibility rather than a group of subjects. Good critics of literature can bring us into a sphere of experience that combines allusions to the past with what is happening in the world right now. Like philosophers, artists, and historians, they are capable of speaking to a universality of experience, and it is unnecessary to measure how many people were illuminated to understand the impact of what they offer. In the end, it is this phenomenological engagement with the liberal arts that is incapable of being translated through scientism.
Therefore, we must offer a curriculum in which assignments make clear the relationships among areas of knowledge, ensuring that students do not see academic disciplines as separate and disconnected silos of learning, but rather as varied approaches to the same enlightened end. This conclusion was validated in a report, Branches of the Same Tree, issued last month by the National Academies of Sciences, Engineering, and Medicine. I served on the committee, a project of the Board on Higher Education and the Workforce, which was directed to examine whether the integration of arts and humanities with science, engineering, math, and medicine can improve learning outcomes for all students. The title of the report was taken from a quote by Albert Einstein, who, in a letter written in 1937 against the backdrop of burgeoning fascist power in central Europe, expressed consternation over “the dangerous implications of living in a society, where long-established foundations of knowledge were corrupted, manipulated, and coerced by political forces.” Einstein maintained that “all religions, arts, and sciences are branches from the same tree” (9).
The report identified the need to “achieve more effective forms of capacity building for twenty-first century workers and citizens” through the acquisition of broad-based skills from across all disciplines “that can be flexibly deployed in different work environments across a lifetime.” It concludes that “In a world where science and technology are major drivers of social change, historical, ethical, aesthetic, and cultural competencies are more critical than ever. At the same time, the complex and often technical nature of contemporary issues in democratic governance demands that well-educated citizens have an appreciation of the nature of technical knowledge and of its historical, cultural, and political roles in American democracy” (54). For “truly robust knowledge depends on the capacity to recognize the critical limitations of particular ways of knowing” and “to achieve the social relations appropriate to an inclusive and democratic society” (54).
Einstein’s sentiments and the type of integrative learning advocated in Branches of the Same Tree have been cornerstones of the Association of American Colleges and Universities’ Liberal Education and America’s Promise initiative, and of our mission of advancing liberal education, equity, and quality in undergraduate education in service to democracy. Indeed, at AAC&U, we are convinced that fulfilling the promise of American higher education requires a curriculum that emphasizes essential learning outcomes (knowledge of human cultures and the physical and natural world, intellectual and practical skills, personal and social responsibility, and integrative and applied learning) as necessary for all students’ intellectual, civic, personal, and professional development and success. On this model, disciplinary work remains foundational, but students are provided with practice connecting their discipline with others, with the co-curriculum, and with the needs of society in preparation for work, citizenship, and life.
It is unquestionably because employers place a premium on innovation in response to rapid change that they emphasize these kinds of student experiences rather than narrow technical training. The dominant narrative that one’s undergraduate major is all that matters, and that only some majors will prepare students for success in the workplace, obscures the reality. Employers agree that “a graduate’s ability to think critically, communicate clearly, and solve complex problems is more important than their undergraduate major.” Such cross-cutting skills can be developed in a wide variety of chosen disciplines if the courses are well designed. A student’s undergraduate experience, and how well that experience advances critical learning outcomes, matters most. A liberal education for the 21st century requires replacing traditional curricular models that follow previous patterns of depth and breadth with those that provide hands-on experience with unscripted, real-world problems across disciplines.
Developing this type of deeper-level understanding across subject areas, connecting knowledge to experience, and adopting a holistic approach to evidence-based problem solving that incorporates diverse, sometimes contradictory points of view are more important than ever, and together they offer one of the best approaches to cultivating the perception, intellectual agility, and creative thinking necessary for students to thrive in a globally interdependent, innovation-fueled economy. Yet, most importantly, this approach recognizes that decision making must be grounded in the ethical principles of respect for persons, justice, and beneficence.
I had the opportunity to observe first-hand the benefits of incorporating this approach into an engineering curriculum when, from 2008 to 2012, I led an interinstitutional, multidisciplinary vertical research team to facilitate clean water solutions, sustainable agriculture, and entrepreneurship for residents in Kenya’s West Lake District near Lake Victoria. The communities we collaborated with had a 33 percent HIV rate, a 69 percent poverty index, and an 80 percent rate of polygamy. The practice of widow inheritance, in which a woman is expected to have unprotected sex with a male relative of her husband upon his death, was still prevalent. In addition, the symptoms of AIDS were regarded by many community members as resulting not from a virus but from “chira,” a thinning disease brought on by violating cultural taboos, including a refusal to be inherited.
The Centers for Disease Control Kenya joined various governmental agencies and nonprofit organizations to provide medication and social services to those who tested HIV-positive. Yet many of the residents were dying from AIDS-related illnesses, dysentery, and diarrhea caused by a lack of access to clean water. Though the US Agency for International Development (USAID) had constructed a well for residents in one of the villages a few years earlier, when a piece of the pump broke, there was no mechanism to fix the problem. To avoid this type of obstacle to sustainability in the future, the objective of our team was to engage in community action planning with residents to develop simple engineering solutions, such as clay pot water filtration, sand filtration, and Moringa seeds as natural flocculants, to provide clean water using local materials.
Partnering with students and professors at Kenyan universities, together with the community members themselves, we appealed to local epistemologies in order to identify the best way to harvest sand from local riverbeds without machinery and to create grass kilns, which at times turned out to be more effective than those designed in our engineering labs. While the ultra-affordable engineering solutions were developed rapidly, their implementation was often challenged due to the behavior change necessary for residents to take advantage of the technology in a manner that was effective in preventing disease. For example, while clay pot water filtration with a coating of colloidal silver on the rim was the most effective, the process of purifying water this way took longer, which led to resistance. While less efficacious, Moringa seeds provided some beneficial filtration, but the trees were regularly harvested by residents for fuel. In order to identify the best solution for each community, the psychologists, sociologists, and Africana studies team members undertook extensive qualitative research using interviews to determine any psychosocial, environmental, or economic barriers to success and the optimal means of overcoming them.
Yet, beyond addressing the psychosocial issues, scaling up whatever sustainable local solution was decided upon required communicating effective techniques to a broad population, most of whom were illiterate. Since women and girls were the ones gathering and distributing the water, students and faculty from the engineering, hydrology, and environmental economics departments worked with our artists and graphic designers to engage community members in creating visual messages that could be printed on kanga cloths—the traditional dress of women in the region—and disseminated widely. Throughout the project, every member of the team was involved in cross-disciplinary, collaborative, problem-based, integrative, evidence-based learning.
Further, the project was designed so that the benefits extended to those who could not be with us in the field. Subsequent sets of problems identified by the community through action planning, invariably involving additional layers of complexity, were brought back to the classroom in each discipline for a semester- or year-long problem-based learning project. Thus, at one institution, clay samples were brought back from a particular region for students in a civil and environmental engineering class to analyze in relation to the appropriate mixture of clay and sawdust to recommend for the construction of filtration pots. Business students from another institution were assigned to work on a business plan for securing microloans to support entrepreneurship around the production and marketing of kanga cloths. And at another university, medical students focused on research regarding the most successful techniques for serving non-adherent, at-risk populations of patients and on developing the cultural competence necessary to be effective. This model reinforces the idea that access to this type of high-impact practice is critical for all students as we seek to transform STEM education through equity, diversity, and inclusion grounded in educational excellence at all types of institutions. Beyond advancing the cognitive diversity necessary to interrogate who is asking the questions, defining the problems, devising the solutions, and reaping the benefits, the goals of integration across disciplines have been shown to have a disproportionately positive impact on recruiting and retaining women and students of color in STEM, who readily embrace material that is contextualized (31, 40).
Significantly, research by engineer Rick Vaz at WPI and others has shown that project-based learning situated in real communities, where students have opportunities to work on open-ended, integrative problems, “increases confidence in their STEM courses, confidence in their jobs, and confidence in their capacity to solve complex problems” (40).
Moreover, the ability to engage and learn from experiences different from one’s own, and to understand how one’s place in the world both informs and limits one’s knowledge, is inextricably linked to the crucial capacity to understand the interrelationships between multiple perspectives, including personal, social, cultural, disciplinary, environmental, local, and global. This understanding is pivotal for bridging cultural divides and necessary for working collaboratively to achieve our shared objectives around solving the world’s most pressing problems. That is all the more reason colleges and universities need to redouble our focus on world citizenship and the interdependence of all human beings and communities as the foundation for education.
Philosopher Martha Nussbaum offers a compelling defense of this type of global education for the future, observing,
One of the greatest barriers to rational deliberation in politics is the unexamined feeling that one’s own current preferences and ways are neutral and natural. An education that takes national boundaries as morally salient too often reinforces this kind of irrationality, by lending to what is an accident of history a false air of moral weight and glory. (Nussbaum 1994)
Nussbaum argues that placing a community of human beings above national boundaries will bring us closer to solving global problems that require international cooperation, but it will necessitate the revision of curricula in support of the recognition of a shared future and the fostering of global dialogue grounded in the geography, ecology, traditions and values of others. Such an education is one in which our deliberations are, first and foremost, “deliberations about human problems of people in particular concrete situations, not problems growing out of a national identity that is altogether unlike that of others” and in which students not only “recognize humanity wherever” it is encountered, but also “understand humanity in all its ‘strange’ guises” (Nussbaum 1994). When every human being becomes part of our community of dialogue and concern, and our political deliberations are founded on that common human bond, it becomes more difficult to be dismissive of the well-being of others.
These lessons are more important than ever as we prepare graduates for the ever-shifting landscape of tomorrow. Students must be asked to demonstrate an understanding of complex and overlapping worldwide systems: how these systems are influenced and constructed, how they operate with differential consequences, how they affect the human and natural world, and, perhaps most importantly as we have seen, how they can be altered. Students should be asked to apply an integrated and systemic understanding of the interrelationships between contemporary and past challenges facing cultures, societies, and the natural world on the local and global levels. Integrative learning and thematic pathways that address grand challenges across disciplines and within the major, requiring students to integrate and apply their knowledge to new problems, are an imperative for a 21st-century curriculum.
Both students and employers are eager for innovative universities to take up this challenge. In a survey conducted by AAC&U on whether college students are prepared for our global future, U.S. students were asked to identify three of 17 essential learning outcomes they would like their own college or university to place the greatest emphasis on in improving graduates’ preparedness. The three outcomes chosen most often were proficiency in a foreign language, the ability to apply knowledge and skills to real-world settings, and awareness of and experience with cultures and societies outside of the U.S. Students and employers alike agree that students are not well prepared in terms of global knowledge and skills. The vast majority of students (94%) are convinced that all college students should have educational experiences that teach them how to solve problems with people whose views are different from their own. At the same time, 87% of students believe that all college students should gain intercultural skills and an understanding of societies and countries outside the U.S. Nevertheless, of the 17 learning outcomes identified, students rate themselves most poorly on global learning outcomes and see improvement in how they are prepared in this area as a top priority for colleges and universities.
These perceptions are aligned with those of employers, who are increasingly globally connected and placing more emphasis on hiring candidates with global knowledge and experience. A majority of employers we surveyed want all students to gain global knowledge and intercultural skills, regardless of a student’s chosen field of study. Yet, employers’ ratings of students on key global learning outcomes confirmed the students’ self-assessment. Of the 17 outcomes graded, the largest majorities of employers assessed recent graduates as “not prepared” on problem solving in diverse groups, knowledge of global developments and trends, knowledge of cultures and societies outside of the U.S., foreign language proficiency, and scientific literacy.
As Susan Buck Sutton, a scholar of global learning and engagement in higher education partnerships, notes in AAC&U’s latest edition of Peer Review, “effective global learning requires that students be exposed to diverse voices and sources of knowledge and can engage these voices directly and repeatedly, allowing mutual learning to grow and develop over time. International partnerships provide a framework for creating such dialogue at home and abroad, in multiple ways that deepen faculty capacity to teach about the partner country over time and sometimes spread across a student’s learning.” Globally constructed knowledge and practice are the keys, and yet only 10 percent of U.S. undergraduates now study abroad. As a result, colleges and universities must investigate new ways of providing the direct interaction and experience necessary for global learning to take place within the curriculum both abroad and at home, making such opportunities available to all students.
By asking all students to address big questions and grand challenges, we lead them to test the edges of their own ambition. In the process of learning across difference and connecting their courses with issues and communities beyond the classroom, they develop enhanced ethical reasoning and judgment and a sense of responsibility to self and others, acquire empowering knowledge, and gain new levels of agency. Sociobiologist E. O. Wilson’s cogent observation that contemporary society is “drowning in information, while starving for wisdom” was accompanied by his prediction that “the world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely” (1). Wilson’s comments highlight both the value of a liberal education and the ideal of an educated citizenry in an age when the democratization of information through the Internet has given rise to a new wave of anti-intellectualism—one steeped in the denial of reason and the distrust and disdain of experts.
The result has been increasing polarization and an entrenched refusal to countenance opposing points of view, contributing to a marketplace of ideas at risk of falling prey to those who have the resources to control the shaping of public opinion and policies. In this arena, asserted claims become orthodoxy despite the absence of evidence and in the face of enduring questions. In this ostensibly post-truth era, addressing the misinformation and incivility resulting from the debilitating impact of a rhetoric-for-hire that has challenged both research expertise and the value of higher education is more urgent than ever. It is time for leaders in higher education to reassert the role that liberal education can play in discerning the truth and to enhance the reputation of our institutions by emphasizing big-picture, problem-centered inquiry and students’ active engagement in experiential learning, with increasing rigor, across all disciplines, in transformational partnerships with other colleges, universities and communities around the globe.
If we fail to do so, I fear that Appiah’s question, asked at the outset, “What were they thinking?” will be asked of us as we preserve the traditions of the academy solely for the sake of didacticism and risk confronting our own need to escape Westworld.