

The MIT Press Podcast
The MIT Press
Interviews with authors of MIT Press books.
Episodes

Oct 15, 2013 • 1h 8min
Tadeusz Zawidzki, “Mindshaping: A New Framework for Understanding Human Social Cognition” (MIT Press, 2013)
Social cognition involves a small bundle of cognitive capacities and behaviors that enable us to communicate and get along with one another, a bundle that even our closest primate cousins don’t have, at least not to the same level of sophistication: pervasive collaboration, language, mind-reading, and what Tadeusz Zawidzki, Associate Professor of Philosophy at The George Washington University, calls “mindshaping”. Mindshaping includes our capacities and dispositions to imitate, to be natural learners, and to conform to and enforce social norms, and in Mindshaping: A New Framework for Understanding Human Social Cognition (MIT Press, 2013), Zawidzki defends the idea that mindshaping is the basic capacity from which the rest of social cognition evolves. Most researchers hold that mind-reading – our “theory of mind” – is the linchpin of the rest: our ability to ascribe to one another mental states with propositional content is necessary for sophisticated language use and for mindshaping. Zawidzki argues, in contrast, that our ability to “homogenize” our minds via mindshaping is what makes sophisticated mind-reading and language possible. On his view, language didn’t evolve so that we could express thought; it evolved so that we could express our commitment to cooperative behavior. Zawidzki’s innovative approach centers on reinterpreting and extending Daniel Dennett’s intentional stance to explain the social-cognitive development of the species and of individuals.

Jul 29, 2013 • 1h 9min
David Munns, “A Single Sky: How an International Community Forged the Science of Radio Astronomy” (MIT Press, 2012)
How do you measure a star? In the middle of the 20th century, an interdisciplinary and international community of scientists began using radio waves to measure heavenly bodies and transformed astronomy as a result. David P. D. Munns’s new book charts the process through which radio astronomers learned to see the sounds of the sky, creating a new space for Cold War science. A Single Sky: How an International Community Forged the Science of Radio Astronomy (MIT Press, 2012) uses the emergence of radio astronomy to upend some of the commonly held assumptions about the history of the modern sciences. Munns emphasizes the relative freedom of radio astronomers, which stands in contrast to the popular meta-narrative of Cold War scientists bound by the interests of the military-industrial complex. He also shifts our focus from the more commonly studied individual local and national contexts of science to look instead at scientific communities that transcended disciplinary and national boundaries, blending accounts of Australia, the UK, the Netherlands, and the US into a story that emphasizes the importance of cooperation (not competition) in driving scientific development. In addition, A Single Sky pays special attention to the importance of material culture (especially that of big radio telescopes) and pedagogy in shaping modern radio astronomy. It’s a fascinating story. Enjoy! For more information about The Dish, a film that Munns mentioned in the course of our conversation, see here.

Jul 1, 2013 • 51min
Anne Cutler, “Native Listening: Language Experience and the Recognition of Spoken Words” (MIT Press, 2012)
One of the risks of a telephone interview is that the sound quality can be less than ideal, and sometimes there’s no way around this and we just have to press on with it. Under those conditions, although I get used to it, I can’t help wondering whether the result will make sense to an outside listener. I mention this now because Anne Cutler’s book, Native Listening: Language Experience and the Recognition of Spoken Words (MIT Press, 2012), is an eloquent and compelling justification for my worrying about precisely this issue. In particular, she builds the case that our experience with our native language fundamentally shapes the way in which we approach the task of listening to a stream of speech – unconsciously, we attend to the cues that are useful in our native language, and use the rules that apply in that language, even when this is counterproductive in the language that we’re actually dealing with. This explains how native speakers can typically process an imperfect speech signal, and why this sometimes fails when we’re listening to a non-native language. (But I hope this isn’t going to be one of those times for anyone.) In this interview, we explore some of the manifestations of the tendency to use native-language experience in parsing, and the implications of this for the rest of the language system. We see why attending to phonologically ‘possible words’ is useful in most, but not quite all, languages, and how this helps us solve the problem of embedded words (indeed, so effectively that we don’t even notice that the problem exists). We consider how the acquisition of language-specific preferences might cohere with the idea of a ‘critical period’ for second-language learning. And we get some insights into the process of very early language acquisition – even before birth – which turns out to have access to richer input data than we might imagine.

Jun 10, 2013 • 54min
Patrick Hanks, “Lexical Analysis: Norms and Exploitations” (MIT Press, 2013)
It’s tempting to think that lexicography can go on, untroubled by the concerns of theoretical linguistics, while the rest of us plunge into round after round of bloody internecine strife. For better or worse, as Patrick Hanks makes clear in Lexical Analysis: Norms and Exploitations (MIT Press, 2013), this is no longer true: lexicographers must respond to theoretical and practical pressures from lexical semantics, and this lexicographer has very interesting things to say about that discipline too. Hanks’s central point is perhaps that the development of huge electronic corpora poses enormous problems, as well as exciting challenges, for the study of word meaning. It’s no longer tenable to list every sense of a word that is in common currency; and even if we could, it would be a pointless exercise, as the vast output would tell us very little about what meaning is intended in a given instance of usage. However, these corpora provide us with the opportunity to say a great deal about the way in which words are typically used, and the theory that Hanks develops in this book represents an attempt to make that notion precise. In this interview, we discuss the impact of corpus-driven work on linguistics in general and lexical semantics in particular, and explore the analogy between definitions and prototypes. In doing so, we find for Wittgenstein over Leibniz, and tentatively for ‘lumpers’ over ‘splitters’, but rule that both parties are at fault in the battle between Construction Grammar and traditional generative syntax.

May 6, 2013 • 1h 3min
Jonathan Bobaljik, “Universals of Comparative Morphology” (MIT Press, 2012)
Morphology is sometimes painted as the ‘here be dragons’ of the linguistic map: a baffling domain of idiosyncrasies and irregularities, in which Heath Robinson contraptions abound and anything goes. In his new book, Universals of Comparative Morphology: Suppletion, Superlatives, and the Structure of Words (MIT Press, 2012), Jonathan Bobaljik reassesses the terrain and argues that there are hard limits on the extent to which languages can vary in the morphological domain. The book is a comparative study of comparatives and superlatives with a broad typological base. Bobaljik’s contention is that, at an abstract cognitive level, the representation of the comparative is contained within that of the superlative. From this hypothesis, couched within the theoretical framework of Distributed Morphology, a number of generalizations immediately follow: for instance, in a language which, like English, has forms of the type “good” and “better”, the superlative cannot be of the type “goodest”. As he shows, these generalizations are solid candidates for the status of exceptionless linguistic universals. In this interview, Jonathan outlines the generalizations and their evidential basis, and we go on to discuss apparent counterexamples (including the mysterious Karelian quantifiers), why the comparative should be contained within the superlative, how the generalizations extend to change-of-state verbs, and how similar generalizations can be found in domains as diverse as verbal person marking and pronominal case.

Apr 30, 2013 • 1h 14min
Alexandra Hui, “The Psychophysical Ear: Musical Experiments, Experimental Sounds, 1840-1910” (MIT Press, 2013)
In The Psychophysical Ear: Musical Experiments, Experimental Sounds, 1840-1910 (MIT Press, 2013), Alexandra Hui explores a fascinating chapter in the history of sound: a period when musical aesthetics and natural science came together in the psychophysical study of sound sensation in nineteenth-century Germany. Though we tend to consider the performing arts and sciences as occupying different epistemic and disciplinary realms, Hui argues that the scientific study of sound sensation not only was framed in terms of musical aesthetics, but became increasingly so over time. The book traces a series of arguments by practitioners of the study of sound sensation as they sought to uncover universal rules for understanding the sonic world: How much epistemic weight ought to be placed on the experiences of an individual listener? What sorts of expertise were relevant or necessary for a sound scientist’s experimental practice? Did musical training matter? Was there a proper way to listen to music? The Psychophysical Ear follows sound scientists as they grappled with these and other questions, struggling with the consequences of understanding the act of listening as a practice fundamentally grounded in particular historical contexts, even as phonographic technology and the growing number of performances of non-Western music were transforming the sonic world of Europe. Hui’s account often engages the reader’s own sensorium, urging us to imagine or play sequences of musical notes that prove crucial to some of the arguments of the actors in the story. Enjoy!

Apr 13, 2013 • 1h 3min
Stephen E. Nadeau, “The Neural Architecture of Grammar” (MIT Press, 2012)
Although there seems to be a trend towards linguistic theories getting more cognitively or neurally plausible, there doesn’t seem to be an imminent prospect of a reconciliation between linguistics and neuroscience. Network models of various aspects of language have often been criticised as theoretically simplistic, custom-made to solve a single problem (such as past tense marking), and/or abandoning their neurally inspired roots. In The Neural Architecture of Grammar (MIT Press, 2012), Stephen Nadeau proposes an account of language in the brain that goes some way towards answering these objections. He argues that the sometimes-maligned Parallel Distributed Processing (PDP) approach can genuinely be seen as a way of modelling the brain. Combining theoretical, experimental and biological perspectives, he proposes a model of language function that is based upon these principles, proceeding concisely all the way from concept meaning to high-level syntactic organisation. He argues that this model offers a plausible account of a wealth of data from studies of normal language functioning and, at the same time, a convincing perspective on how language breaks down as a consequence of brain injury. Within an hour, it’s hard to do justice to the full complexity of the model. However, we do get to discuss much of the background and motivation for this approach. In particular, we talk about the emergence of PDP models of concept meaning and of phonological linear order. We consider the relations between this concept of meaning and the increasingly well-studied notion of ‘embodied cognition’. And we look at the aphasia literature, which, Nadeau argues, provides compelling support for a view of language that is fundamentally stochastic and susceptible to graceful degradation – two automatic consequences of adopting a PDP perspective. We conclude by touching on the potential relevance of this type of account for treatments for aphasia.

Feb 26, 2013 • 1h 9min
Matthew Wisnioski, “Engineers for Change: Competing Visions of Technology in 1960s America” (MIT Press, 2012)
In his compelling and fascinating account of how engineers navigated new landscapes of technology and its discontents in 1960s America, Matthew Wisnioski takes us into the personal and professional transformations of a group of thinkers and practitioners who have been both central to the history of science and technology and conspicuously under-represented in its historiography. Between 1964 and 1974, engineers in America wrestled with the ethical and intellectual implications of an “ideology of technological change.” Engineers for Change: Competing Visions of Technology in 1960s America (MIT Press, 2012) explores the debates among engineers over their responsibilities for crafting a future in a world where nuclear weapons and chemical pollutants were now facts of life, as citizens were rising in support of environmental and civil rights, and in protest of war and violence. Wisnioski introduces us to the changing resonances of and debates over key concepts in the print culture of engineers in mid-century America, key experiments in the pedagogy and training of engineers at major US institutions, and key efforts to promote creativity in the profession by collaborating with artists, social activists, and others. The book situates all of this within a wonderful introduction to the classic historiography of social studies of technology and engineering, and is illustrated with striking images from the visual culture of engineering in the 1960s. Readers interested in how these issues extend into the more recent history of technology will also find much of interest in Wisnioski’s accounts of Engineers Without Borders and the Engineering, Social Justice, and Peace (ESJP) Network. Enjoy!

Sep 15, 2012 • 1h 6min
Kristin Andrews, “Do Apes Read Minds?: Toward a New Folk Psychology” (MIT Press, 2012)
The ability to figure out the mental lives of others – what they want, what they believe, what they know – is basic to our relationships. Sherlock Holmes exemplified this ability by accurately simulating the thought processes of suspects in order to solve mysterious crimes. But folk psychology is not restricted to genius detectives. We all use it: to predict what a friend will feel when we cancel a date, to explain why a child in a playground is crying, to deceive someone else by saying less than the whole story. Its very ubiquity explains why it is called folk psychology. But how in fact does folk psychology work? On standard views in philosophy and psychology, folk psychology just is the practice of ascribing or attributing beliefs and desires to people in order to explain and predict their behavior. A folk psychologist is someone who has this “theory of mind”. In her new book, Do Apes Read Minds?: Toward a New Folk Psychology (MIT Press, 2012), Kristin Andrews, associate professor of philosophy at York University in Toronto, argues that the standard view is far too narrow a construal of what’s going on. It leaves out a wide variety of other mechanisms we use to understand the mental lives of others, and a wide variety of other reasons we have for engaging in this social competence. Moreover, what’s necessary to be a folk psychologist is not a sophisticated metacognitive ability for ascribing beliefs, but an ability to sort the world into agents and non-agents – an ability that greatly expands the class of creatures that can be folk psychologists. Andrews draws on empirical work in psychology and ethology, including her own field work observing wild primates, to critique the standard view and to ground her alternative, pluralistic account.

Aug 15, 2012 • 1h 12min
Lee Braver, “Groundless Grounds: A Study of Wittgenstein and Heidegger” (MIT Press, 2012)
Ludwig Wittgenstein and Martin Heidegger are both considered among the most influential philosophers of the twentieth century. Both were born in 1889 in German-speaking countries; both studied under leading philosophers of their day – Bertrand Russell and Edmund Husserl, respectively – and were considered their philosophical heirs; and both ended up critiquing their mentors and thereby influencing the direction of thought in both the Analytic and Continental traditions. In Groundless Grounds: A Study of Wittgenstein and Heidegger (MIT Press, 2012), Lee Braver, associate professor of philosophy at Hiram College, attempts to build what he calls a “load-bearing bridge” between these often polarized traditions. He argues that both thinkers offer similar arguments for similar conclusions on similar fundamental issues. Both blame the disengaged contemplation of traditional philosophy for confusion about the nature of language, thought, and ontology, and both hold that attention to normal, ongoing human activity in context yields alternative fundamental insights into their nature. The “groundless grounds” of the title is the idea that finite human nature gives us everything we need to understand meaning, mind, and being, and that to insist that this ground itself requires justification betrays confusion.


