
The Convivial Society

Latest episodes

Sep 13, 2021 • 25min

Notes from the Metaverse (Audio Version)

A week after the fact, here is the audio version of the last installment: “Notes from the Metaverse.” Ordinarily, the audio version accompanies the essay, but in this case you’re getting it a bit later. Nothing new in this version, so if you’ve already read the essay, feel free to disregard. But for those of you who do prefer listening, here you go, with my apologies for not getting this out sooner.

I will add in this short note that since I posted “Notes from the Metaverse” last week, Facebook and Ray-Ban announced the release of a set of smart glasses. They don’t do much; at least, they appear to have disappointed some. The camera might appear to be their main feature, but, in fact, I’d argue that their main feature is that they manage to look pretty normal and perhaps even stylish, depending on your tastes. This, it seems to me, is the point at this juncture. Smart glasses, especially their cameras, need to be normalized before they can become critical metaverse infrastructure. In that light, it’s worth noting, too, that the glasses bear absolutely no Facebook branding. We’ll see if they fare any better with the public than Google Glass did several years ago. Needless to say, much of the same criticism about the way that a camera enables surreptitious recording, thus more completely objectifying others as fodder for the digital spectacle, applies here as well.

Cheers,

Michael
Aug 26, 2021 • 17min

Nice Image You've Got There. Shame If It Got Memed.

Welcome to The Convivial Society. In this installment, you’ll find some thoughts on the cultural consequences of digital media. A big chunk of new readers means that I’m being a bit more careful not to presume familiarity with themes that I’ve written about before, so some of what follows may be old news for some of you. Either way, you can file this installment under my ongoing effort to think about the social and political fallout of digital media. And it really is an effort to think out loud. Your feedback and even pushback are welcome.

“I can’t express how useless these old school Sunday shows are,” Sean Illing recently tweeted, “and it blows my mind that a single human person still watches any of them.” Illing is here referring to the gamut of Sunday morning political talk shows—This Week, Meet the Press, etc.—and I can’t imagine that many people would step up to bat for the value of these shows. But their waning status may be worth a moment’s reflection as an entry point into a discussion about the ongoing digitization of our media ecosystem. And, to be clear, I’m not thinking here of “the media,” as in journalists, newspapers, and cable talk shows. What I have in mind is media as the plural of a medium of communication, such as writing, print, telephony, radio, television, etc.

Sometimes the advent of a new medium of communication can have striking social and psychic consequences. Here’s how Neil Postman once put it:

A new medium does not add something; it changes everything. In the year 1500, after the printing press was invented, you did not have old Europe plus the printing press. You had a different Europe. After television, America was not America plus television. Television gave a new coloration to every political campaign, to every home, to every school, to every church, to every industry, and so on.

So, with this approach in mind, here was my initial response to Illing’s tweet. Regarding the Sunday morning shows, I suggested that “their use, such as it is, is as content or fodder for the newer medium, rather than as an important medium of information in their own right. The assumptions and values of their form are irrelevant compared to the assumptions and values of the new form into which they are absorbed.”

This is basically just media ecology 101. For those of you still getting your bearings around here, I’ll mention that along with channelling the strain of old tech criticism that includes Lewis Mumford, Jacques Ellul, Ivan Illich, and the like, I also occasionally slip into media ecology mode. Marshall McLuhan, the aforementioned Neil Postman, Walter Ong, and Harold Innis are some of the leading lights, and you can read more about media ecology on the website of the Media Ecology Association. The few lines above from Postman give you a good sense of the approach. The general idea is that a medium of communication matters as much as, if not sometimes more than, the content that is being communicated through it. McLuhan, in another memorable formulation, put it this way: “Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the ‘content’ of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.”

This can be a counter-intuitive claim if we’re used to thinking about a medium of communication (and technology in general) as an essentially neutral form. Spoiler alert: It’s not.
As McLuhan famously put it, “The medium is the message,” which, as he went on to explain, means, for example, that “the ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.” A change, I want to stress, that is largely irrespective of the nature of the content that the medium is used to communicate.

Media ecology, then, offers a helpful set of critical tools we can apply to questions about digital media and the public sphere. So, for example, in saying that the Sunday talk shows are fodder for the new medium, I’m recalling McLuhan’s observation that “the ‘content’ of any medium is always another medium.” In this case, digital forms of communication constitute the newer medium. And the point applies not only to the Sunday shows, but really to all pre-digital media. Television, film, radio, print — all are taken up and absorbed by digital media, either because they themselves become digital artifacts (digital files rather than, say, rolls of film) or because they become the content of our digital media feeds.

This means, for example, that a movie is not only something to be taken whole and enjoyed on its own terms; it is also going to be dissected and turned into free-floating clips and memes and gifs. What’s more, the meaning and effect of these clips, memes, and gifs may very well depend more on their own formal structure and their use in our feeds than on their relation to the film that is their source. It means, too, both that the nature of the television show changes when it is made for digital contexts and that we watch Ted Lasso, in part, so that we can post about it. And it means that every other Sunday morning or so, Chuck Todd is trending on Twitter not because he is an authoritative and influential media figure, but because he is not. It means the more savvy guests know the point of their appearance is not to inform or debate, but to generate a potentially viral clip, or, alternatively, to assiduously avoid doing so. It means, ultimately, that the habits, standards, and norms of the older medium are displaced by the habits, standards, and norms of the newer medium. However seriously the Sunday morning hosts take their job, it just doesn’t matter in the way they think it does. It all starts to feel a bit like a modern version of Don Quixote. The ostensible protagonists have not realized that the world has moved on, and they unwittingly strike a tragicomic note. Woe to those who fail to grasp this point, and woe to all of us when the point is best grasped by the most unscrupulous, as is too often the case.

In other words, as a good friend of mine is fond of asking, “What frames what?” It’s a good diagnostic question, and it applies nicely here. So we might ask, “What frames what, the televisual medium or the digital?” I’d argue that the answer is pretty straightforward: increasingly, digital media frames all older forms, and it is the habits, assumptions, and total effect of the digital medium that matters most. I won’t pretend to know all of what this will mean, of course.
Digital media is a complex, multi-faceted, ever-shifting phenomenon, and we’re still sorting out the “symbolic fallout.” But I can tell you that many of the disorders of our moment, particularly with regard to what we sometimes quaintly call the public sphere, will make more sense if we see them, at least in part, as a function of this ongoing transition into a digital media ecosystem.

Let me take another angle on this theme by commenting on an observation Ari Schulman, the editor of The New Atlantis, recently made when he tweeted that he was “struck by the odd sense that America is post-pseudo-event. If the same disastrous scenes of Afghanistan pullout happen ten years ago, you can practically feel them preparing the reels for a Newseum exhibit. Now events are events, yet it doesn't feel like a relief.”

I think I understand what Schulman is getting at here. It’s the sense, especially pronounced the more online one happens to be, that events can’t sustain the kind of attention that was once lavished on them in an older televisual media environment. Or, to put it another way, I might say that it’s the sense that nothing seems to get any durable traction in our collective imagination. I’ll provisionally suggest that this is yet another example of the medium being the message. In this case, I would argue that the relevant medium is the social media timeline. The timeline now tends to be the principal point of mediation between the world out there and our perception of it. It refracts the world, rather than reflecting it. And its relentless temporality structures our perception of time, and with it our sense of what is enduring and significant. And, at the risk of becoming pedantic, let me say again that this happens regardless of the nature of the content we encounter or even how that content is moderated.

But let’s come back to the idea of being post-pseudo-event. “Pseudo-event” was a term coined by Daniel Boorstin in his 1962 book, The Image: A Guide to Pseudo-events in America. Pseudo-events were a function of a regime of the imagination ruled by the image, or, more specifically, the manufactured image. According to Boorstin, the pseudo-event “is not spontaneous, but comes about because someone has planned, planted, or incited it.” It was Boorstin, too, who in this same book gave us the category of someone who is famous for being famous, or, in his words, “a person who is known for his well-knownness,” a similarly artificial and vacuous dynamic. An event, by contrast, has a certain integrity to it; it lacks the artificial character of the pseudo-event that exists chiefly to be noted and commented upon.

So, are we, as Schulman suggested, post-pseudo-event? Yes and no, I’d say, but first a little bit of clarification may be in order. It’s important to note that on the ground, of course, the withdrawal from Afghanistan is not a pseudo-event; it is the site of very real danger, desperation, and suffering. It’s also worth distinguishing between the cultural power of the image and a pseudo-event. Images are the currency of the pseudo-event, but an event captured by an image does not thereby necessarily become a pseudo-event. The questions we’ve been hovering around have to do with how different media of documentation and dissemination translate a concrete event into the realm of symbolic cultural exchange. The power of a medium lies in this work of mediation, which in turn shapes the public sphere and the experience of the self in relation to it.
It is in this way that the advent of new media technology—from writing to print and then radio, television, and the internet—can have far-reaching consequences.

So let’s come back to Schulman’s observation. I’d say that we are not quite post-pseudo-event. In fact, from a certain perspective, pseudo-events have multiplied exponentially in the so-called attention economy. But something is different, and one way to see this is to recognize what has happened to the image, which, again, is the substrate of the pseudo-event. Simply put, images aren’t what they used to be, and, as a consequence, the character of the pseudo-event has changed as well. As we approach the 20th anniversary of the September 11th attacks, I’m tempted to suggest that the image of the towers burning might be the last example of an iconic public image with longstanding cultural currency. There are undoubtedly some more recent examples of powerful and memorable images, but their power is subverted by the digital media environment that now frames them. I’m tempted to argue that 9/11 marked the beginning of the end for the age of the image in the sense that Boorstin meant it. Specifically, it is the end of the age of the manufactured image that speaks compellingly to a broad swath of society.

If this is generally near the mark, then one explanation is simply that not long after 9/11, the image economy began to collapse when the market was flooded with digital representations. As a simple thought experiment, consider how different the documentary evidence of 9/11 would be if the event had occurred ten years later, after smartphones had saturated the market.

There’s another observation by McLuhan that applies here. McLuhan, together with his son Eric, developed a tetrad of media effects as a useful tool for analyzing the consequences of new technologies. The four elements of the tetrad were enhancement, obsolescence, retrieval, and reversal. Or, to put these as questions:

What does a technology enhance?
What does it make obsolete?
What does it retrieve that had been obsolesced earlier?
What does it flip into when pushed to extremes?

In response to Ari’s observation, the fourth of those effects came to mind. “When pushed to the limits of its potential,” McLuhan claimed, “the new form will tend to reverse what had been its original characteristics.” In this case, the massive proliferation of images leads to a degradation of the cultural power of the image. Boorstin himself noted the roots of the pseudo-event in the “graphic revolution” of the late 19th century, and, indeed, it was a revolution powered by new tools and techniques. But the ability to generate and circulate digital images represents yet another revolution in scale, although scale is not the only factor. One characteristic of the digital image is the ease with which it can be not only reproduced and disseminated but also manipulated. I don’t mean this only in the more nefarious sense; I mean simply that just about anyone with internet access and a computer can set about to fiddle with an image in countless ways, yielding everything from artifacts of what used to be called re-mix culture to the more notorious case of deepfakes. (Beyond this, of course, there is the more recent development of synthetic images generated by a variety of high-powered computing processes.)
With the same “graphic revolution” that Boorstin referenced in mind, Walter Benjamin argued in a well-known early 20th-century essay that the work of art lost its aura, or a kind of authority grounded in its unique materiality, in the age of its mechanical reproducibility. The Mona Lisa, in other words, loses something of its cultural power when any of us can slap a passable reproduction over our toilet if we so desire. Perhaps we can extend this by saying that, in turn, the image loses its own cultural standing in the age of its digital manipulability. Mechanical reproduction, photography especially, collapsed the distance necessary for a work of art to generate its aura, its commanding presence. Digital manipulability has had an analogous effect on the image. It’s no longer received from a class of professional storytellers who have monopolized the power to produce and interpret the symbolic currency of cultural exchange. The image-making tools have been democratized. The image itself has been demystified. Every image we encounter now invites us to manipulate it to whatever end strikes our fancy.

I took to Twitter while I was writing this to do a little bit of unscientific research about the power of images. I asked what the more recent examples of iconic images might be, suggesting that I wondered whether there were such images. I don’t remember the exact wording because I deleted the tweet after it was quote-tweeted into the white supremacist corners of the platform. But, while it was up, the responses were instructive. While some genuine examples came up, most notably, I think, the image of the Syrian child who tragically drowned fleeing his homeland, most were, in fact, memes or decidedly ironic images.

It’s probably too simplistic to put it this way, but perhaps we might say that the age of the image has yielded to the age of the meme. Again, this is not to say that powerful, moving images no longer appear, but they are framed by the ethos of digital media, which has, presently at least, given us the meme as its ideal form. The image could sustain a degree of earnestness; the meme is much too self-aware for that. The image could inspire; the meme, powerful in its own right, cannot. The image could be managed; the meme resists all such efforts. And as the image has yielded to the meme, the pseudo-event now manifests chiefly as the Discourse—ceaseless, self-referential, demoralizing, and ultimately untethered from the events themselves. Ultimately, the image as it fed the pseudo-event became a tool to manage public opinion, and now it is broken. Some people get this; others obviously do not.
Aug 18, 2021 • 17min

Outsourcing Virtue

“What is fundamental to a convivial society is not the total absence of manipulative institutions and addictive goods and services, but the balance between those tools which create the specific demands they are specialized to satisfy and those complementary, enabling tools which foster self-realization. The first set of tools produces according to abstract plans for men in general; the other set enhances the ability of people to pursue their own goals in their unique way.”

— Ivan Illich, Tools for Conviviality

Welcome to the Convivial Society, especially the many of you for whom this is the first actual installment to hit your inbox. If you signed up in the last week or so, you may want to check out the brief orientation to the newsletter I sent out recently to new readers. Below you’ll find a full installment of the newsletter, which contains an essay followed by links to a variety of items, some of them with a bit of additional commentary from me, and a closing note. Read on. Share promiscuously.

In lines he composed for a play in the mid-1930s, T. S. Eliot wrote of those who “constantly try to escape / From the darkness outside and within / By dreaming of systems so perfect that no one will need to be good.” That last line has always struck me as a rather apt characterization of a certain technocratic impulse, which presumes that techno-bureaucratic structures and processes can eliminate the necessity of virtue, or maybe even human involvement altogether. We might just as easily speak of systems so perfect that no one will need to be wise or temperate or just. Just adhere to the code or the technique with unbending consistency and all will be well. This dream, as Eliot put it, remains explicitly compelling in many quarters. It is also tacitly embedded in the practices fostered by many of our devices, tools, and institutions. So it’s worth thinking about how this dream manifests itself today and why it can so easily take on a nightmarish quality.

In Eliot’s age, increasingly elaborate and byzantine bureaucracies automated human decision making in the pursuit of efficiency, speed, and scale, thus outsourcing human judgment and, consequently, responsibility. One did not require virtue or good judgment, only a sufficiently well-articulated system of rules. Of course, under these circumstances, bureaucratic functionaries might become “papier-mâché Mephistopheles,” in Conrad’s memorable phrase, and they may abet forms of what Arendt later called banal evil. But the scale and scope of modern societies also seem to require such structures in order to operate reasonably well, although this is certainly debatable. Whether strictly necessary or not, these systems introduce a paradox: in order to ostensibly serve human society, they must eliminate or displace elements of human experience. Of course, what becomes evident eventually is that the systems are not, in fact, serving human ends, at least not necessarily so.

To take a different class of example, we might also think of the modern fixation with technological fixes to what may often be irreducibly social and political problems. In a prescient 2020 essay about the pandemic, Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” No need for good judgment, responsible governance, self-sacrifice, or mutual care if there’s an easy technological fix to ostensibly solve the problem.
No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Most recently, I encountered the case of NarxCare, reported in Wired. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis, whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. You can read the article for further details, which are both fascinating and disturbing, but here’s the pertinent part for my purposes: “Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.”

This is an obviously complex and sensitive issue, but it’s hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility, and CYA dynamics that have characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim the algorithm made me do it, and it becomes so, in part, because the existing bureaucratic dynamics all but require it.

This technocratic impulse is alive and well, and we’ll come back to it in a moment, but it occurs to me that we might also profitably invert Eliot’s claim and apply it to our digital media environment, in which we experience systems so imperfect that it turns out everyone will need to be really, really good. Let me explain what I mean by this. The thought occurred to me when I read yet another tweet advocating for the cultivation of digital media literacy. You should know that, at one level, I think this is fine and possibly helpful under certain circumstances. However, I also think it underestimates or altogether ignores the non-intellectual elements of the problem. It seems unrealistic, for example, to expect that someone who is likely already swamped by the demands of living in a complex, fast-paced, and precarious social milieu will have the leisure and resources to thoroughly “do their own research” about every dubious or contested claim they encounter online, or to adjudicate the competing claims made by those who are supposed to know what they are talking about. There’s a lot more to be said about this dynamic, of course. It raises questions about truth, certainty, trust, authority, expertise, and more, but here I simply want to highlight the moral demands, because searching for the truth, or a sufficient approximation, is more than a merely intellectual activity. It involves, for example, humility, courage, and patience. It presumes a willingness to break with one’s tribe or social network, with all the risks that may entail.
In short, you need to be not just clever but virtuous, and, depending on the degree to which you live online, you would need to do this persistently over time, and, recently, of course, during a health crisis that has generated an exhausting amount of uncertainty and a host of contentious debates about private and public actions.

This is but one case, the one which initially led me to invert Eliot’s line. It doesn’t take a great deal of imagination to conjure up other similar examples of the kind of virtue our digital devices and networks tacitly demand of us. Consider the discipline required to responsibly direct one’s attention from moment to moment rather than responding with Pavlovian alacrity when our devices beckon us. Or the degree of restraint necessary to avoid the casual voyeurism that powers so much of our social media feeds. Or how those same platforms can be justly described as machines for the inducement of petty vindictiveness and less-than-righteous indignation. Or, alternatively, as carefully calibrated engines of sloth, greed, envy, despair, and self-loathing. The point is not that our digital media environment necessarily generates vice; rather, it’s that it constitutes an ever-present field of temptation, which can require, in turn, monastic degrees of self-discipline to manage. I’m reminded, for example, of how years ago Evgeny Morozov described buying a timed safe in which to lock his smartphone, and how, when he discovered he could unscrew the timing mechanism, he locked the screwdriver in there, too. Under certain circumstances and for certain people, maintaining a level of basic human decency, or even psychic well-being, may feel like an exercise in moral sainthood. Perhaps this explains the recent interest in stoicism, although we do well to remember Pascal’s pointed criticism of the stoics: “They conclude that we can always do what we can sometimes do.”

We alternate, then, between environments that seek to render virtue superfluous and environments that tacitly demand a high degree of virtue in order to operate benignly. Both engender their own set of problems, and, not surprisingly, there’s a reciprocal relationship between these two dynamics. Failure to exhibit the requisite virtue creates a demand for the enhancement of rule-based systems to regulate human behavior.

Speech on social media platforms is a case in point that comes readily to mind. The scale and speed of communication on social media platforms generate infamously vexing issues related to speech and expression, which are especially evident during a volatile election season or a global pandemic. These issues do not, in my view, admit of obvious solutions beyond shutting down the platforms altogether. That not being a presently viable option, companies and lawmakers are increasingly pressured to apply ever more vigilant and stringent forms of moderation, often with counterproductive results. This is yet another complex problem, of course, but it also illustrates the challenge of governing by codes that seek to manage human behavior by generating rules of conduct with attendant consequences for their violation, which, again, may be the only viable way of governing human behavior at the numeric, spatial, and temporal scale of digital information environments. In any case, the impulse is to conceive of moral and political challenges as technical problems admitting of engineered solutions. To be clear, it’s not that codes and systems are useless.
They can have their place, but they require sound judgment in their application, precisely to the degree that they fail to account for the multiplicity of meaningful variables and goods at play in human relations. Trouble arises when we are tempted to make the code and its application coterminous, which would require a rule to cover every possible situation and extenuating circumstance, ad infinitum. This is the temptation that animates the impulse to apply a code with blind consistency, as if this would be equivalent to justice itself.

The philosopher Charles Taylor has called this tendency in modern liberal societies “code fetishism,” and it ought to be judiciously resisted. According to Taylor, code fetishism “tends to forget the background which makes sense of any code—the variety of goods which rules and norms are meant to realize—as well as the vertical dimension which arises above all of these.” Code fetishism in this sense is not unlike what Jacques Ellul called technique, a relentless drive toward efficiency that eventually became an end in itself, having lost sight of the goods for the sake of which efficiency was pursued in the first place.

As an aside, I’ll note that code fetishism may be something like a default setting for modern democratic societies, which have a tendency to tilt toward technocracy (while, of course, also harboring potent counter-tendencies). The tilting follows from a preference for proceduralism, or the conviction that an ostensibly neutral set of rules and procedures is an adequate foundation for a just society, particularly in the absence of substantive agreement about the nature of a good society. In this way, there is a longstanding symbiosis between modern politics and modern technology. They both traffic in the ideal of neutrality—neutral tools, neutral processes, and neutral institutions. It should not be surprising, then, that contemporary institutions turn toward technological tools to shore up the ideal of neutrality. The presumably neutral algorithm will solve the problem of bias in criminal sentencing or loan applications or hiring, for example. And neither should it be surprising to discover that what we think of as modern society, built upon this tacit pact between ostensibly neutral political and technological structures, begins to fray and lose its legitimacy as the supposed neutrality of both becomes increasingly implausible. (Okay, I realize this paragraph calls for a book of footnotes, but it will have to do for now.)

As it turns out, Charles Taylor also wrote the Foreword to Ivan Illich’s Rivers North of the Future. (And—caveat lector, new readers—at the Convivial Society, we eventually come around to Illich at some point.) In his Foreword, Taylor explored Illich’s seemingly eccentric arguments about the origins of modernity in the corruption of the Christian church. It’s an eccentric but compelling argument; however, I’ll leave its merits to one side here in order to home in on Taylor’s comments about code fetishism, or, to recall where we began, the impulse to build systems so perfect no one will need to be good. [There’s an excellent discussion of Taylor, code fetishism, and Illich in Jeffrey Bilbro’s wonderful guide to the work of Wendell Berry, Virtues of Renewal: Wendell Berry’s Sustainable Forms.]

“We think we have to find the right system of rules, of norms, and then follow them through unfailingly,” Taylor wrote.
“We cannot see any more,” he continued, “the awkward way these rules fit enfleshed human beings, we fail to notice the dilemmas they have to sweep under the carpet [….]”

These codes often spring from decent motives and good intentions, but they may be all the worse for it. “Ours is a civilization concerned to relieve suffering and enhance human well-being, on a universal scale unprecedented in history,” Taylor argued, “and which at the same time threatens to imprison us in forms that can turn alien and dehumanizing.” “Codes, even the best codes,” Taylor concludes, “can become idolatrous traps that tempt us to complicity in violence.” Or, as Illich argued, if you forget the particular, bodily, situated context of the other, then the freedom to do good by them exemplified in the story of the good Samaritan can become the imperative to impose the good as you imagine it on them. “You have,” as Illich bluntly put it, “the basis on which one might feel responsible for bombing the neighbour for his own good.”

In Taylor’s reading, Illich “reminds us not to become totally invested in the code … We should find the centre of our spiritual lives beyond the code, deeper than the code, in networks of living concern, which are not to be sacrificed to the code, which must even from time to time subvert it.” “This message,” Taylor acknowledges, “comes out of a certain theology, but it should be heard by everybody.” And, for what it’s worth, I second Taylor on that note.

My chief aim in this post has been to suggest that the code fetishism Taylor described manifests itself both intellectually and materially. Which is to say that it can be analyzed as a principle animating formal legal codes, and it can be implicit in our material culture, informing the technologies that shape our habits and assumptions. To put it another way, dealing with humanity’s imperfections through systems, tools, and techniques is a longstanding strategy. It has its benefits, but we need to be mindful of its limitations, especially when ignoring those limitations can lead to demoralizing and destructive consequences.

As I was wrapping up this post, I caught a tweet from Timothy Burke that rather nicely sums this up, and I’ll give him the last word. Commenting on an article arguing that “student engagement data” should replace student recommendations, Burke observed, “This is one of those pieces that identifies a problem that's rooted in the messy and flawed humanity of the systems we make and then imagines that there is some metric we could make that would flush that humanity out--in order to better judge some kind of humanity.” It will be worth pondering this impulse to alleviate the human condition by eliminating elements of human experience.

News and Resources

* Clive Thompson (I almost typed Owen!) on “Why CAPTCHA Pictures Are So Unbearably Depressing”: “They weren’t taken by humans, and they weren’t taken for humans. They are by AI, for AI. They thus lack any sense of human composition or human audience. They are creations of utterly bloodless industrial logic. Google’s CAPTCHA images demand you to look at the world the way an AI does.”

* And here is Thompson again on productivity apps in an article titled “Hundreds of Ways to Get S#!+ Done—and We Still Don’t”: “To-do lists are, in the American imagination, a curiously moral type of software. Nobody opens Google Docs or PowerPoint thinking ‘This will make me a better person.’ But with to-do apps, that ambition is front and center.
‘Everyone thinks that, with this system, I’m going to be like the best parent, the best child, the best worker, the most organized, punctual friend,’ says Monique Mongeon, a product manager at the book-sales-tracking firm BookNet and a self-admitted serial organizational-app devotee. ‘When you start using something to organize your life, it’s because you’re hoping to improve it in some way. You’re trying to solve something.’”

There’s a lot I’m tempted to say in response to the subject of this piece. I’m reminded, for example, of a quip from Umberto Eco: “We make lists because we don’t want to die.” I think, too, of Hartmut Rosa describing how modernity turns the human experience of the world into “a series of points of aggression.” And then all sorts of Illichian responses come to mind. At one point Thompson mentions how, “quite apart from one’s paid toil, there’s been an increase in social work—all the messaging and posts and social media garden-tending that the philosopher and technologist Ian Bogost calls ‘hyperemployment,’” and I’m immediately reminded of what Illich called shadow work, a “form of unpaid work which an industrial society demands as a necessary complement to the production of goods and services.” So here we are dealing with digitized shadow work, except that we’re now serving an economy based on the accumulation of data. And, finally, I’m tempted to ask, quite seriously, why anyone should think that they need to be productive at all. Of course, I know some of the answers that are likely to be given, that I would give. But, honestly, that’s just the sort of question that I think is worth taking seriously and contemplating. What counts as productivity anyway? Who defines it? Who imposes the standard? Why have I internalized it? What is the relationship among productivity and purpose and happiness? The problem with productivity apps, as Thompson suggests at one point, is the underlying set of assumptions about human well-being and purpose that are themselves built into the institutions and tools of contemporary society.

* Speaking of shadow work, here is a terrific piece on some of the lesser known, but actually critical, themes in Ivan Illich’s later work, written by Jackie Brown and Philippe Mesly for Real Life: “Meanwhile, the economy’s incessant claims on our time and energy diminishes our engagement in non-commodified activities. According to Illich, it is only the willing acceptance of limits — a sense of enoughness — that can stop monopolistic institutions from appropriating the totality of the Earth’s available resources, including our identities, in their constant quest for growth.”

* From an essay by Shannon Vallor on technology and the virtues (about which she quite literally wrote the book): “Humanity’s greatest challenge today is the continued rise of a technocratic regime that compulsively seeks to optimise every possible human operation without knowing how to ask what is optimal, or even why optimising is good.”

* Thoughtful piece by Deb Chachra on infrastructure as “Care at Scale”: “Our social relationships with each other—our culture, our learning, our art, our shared jokes and shared sorrow, raising our children, attending to our elderly, and together dreaming of our future—these are the essence of what it means to be human. We thrive as individuals and communities by caring for others, and being taken care of in turn.
Collective infrastructural systems that are resilient, sustainable, and globally equitable provide the means for us to care for each other at scale. They are a commitment to our shared humanity.”

I confess, however, that I did quibble with this line: “Artificial light compensates for our species’ poor night vision and gives us control over how we spend our time, releasing us from the constraints of sunrise and sunset.” Chiefly, perhaps, with the implications of “control” and “constraints.” Nonetheless, this was in many ways a model for how to make a public case for moral considerations in regard to technical systems.

* Podcast interview with Zachary Loeb on “Tech criticism before the Techlash” (which is the best tech criticism), focusing on Lewis Mumford and Joseph Weizenbaum. Loeb knows the tradition well, and I commend his work.

* A 2015 piece from Adam Elkus exploring the relationship between algorithms and bureaucracies: “If computers implementing some larger social value, preference, or structure we take for granted offends us, perhaps we should do something about the value, preference, or structure that motivates the algorithm.”

* An excerpt in Logic from Predict and Surveil: Data, Discretion, and the Future of Policing by Sarah Brayne, looking at the use of Palantir in policing: “Because one of Palantir’s biggest selling points is the ease with which new, external data sources can be incorporated into the platform, its coverage grows every day. LAPD data, data collected by other government agencies, and external data, including privately collected data accessed through licensing agreements with data brokers, are among at least nineteen databases feeding Palantir at JRIC. The data come from a broad range of sources, including field interview cards, automatic license plate readings, a sex offender registry, county jail records (including phone calls, visitor logs, and cellblock movements), and foreclosure data.”

* With data collection, facial-recognition technology, and questions of bias in mind, consider this artifact discussed in a handout produced by Jim Strickland of the Computer History Museum. It is a rail ticket with a primitive facial recognition feature: “punched photographs,” generic faces to be punched by the conductor according to their similarity to the ticket holder. These inspired the Hollerith machine, which was used to tabulate census data from 1890 to the mid-20th century.

* “Interior view of the Central Social Insurance Institution showing men working in mobile work stations used to access the card catalog drawers, Prague, Czechoslovakia.” Part of a 2009 exhibition, “Speed Limits.”

* A review of Shannon Mattern’s new collection of essays, A City Is Not a Computer: Other Urban Intelligences. Mattern’s work is always worth reading. If you recognize the name but are not sure why, it might be because I’ve shared her work in the newsletter on a number of occasions.

* “In Ocado's grocery warehouses, thousands of mechanical boxes move on the Hive.”

* For your amusement (I was amused, anyway), a historian of naval warfare rates nine Hollywood battle scenes for accuracy. The professor’s deadpan style makes the video.

Re-framings

— “Another Time,” by W. H. Auden (1940):
For us like any other fugitive,
Like the numberless flowers that cannot number
And all the beasts that need not remember,
It is today in which we live.

So many try to say Not Now,
So many have forgotten how
To say I Am, and would be
Lost, if they could, in history.

Bowing, for instance, with such old-world grace
To a proper flag in a proper place,
Muttering like ancients as they stump upstairs
Of Mine and His or Ours and Theirs.

Just as if time were what they used to will
When it was gifted with possession still,
Just as if they were wrong
In no more wishing to belong.

No wonder then so many die of grief,
So many are so lonely as they die;
No one has yet believed or liked a lie,
Another time has other lives to live.

— I stumbled upon an essay by Wendell Berry written circa 2002 titled “A Citizen’s Response to the National Security Strategy.” It struck me as a piece worth revisiting:

AND SO IT IS NOT WITHOUT REASON or precedent that a citizen should point out that, in addition to evils originating abroad and supposedly correctable by catastrophic technologies in “legitimate” hands, we have an agenda of domestic evils, not only those that properly self-aware humans can find in their own hearts, but also several that are indigenous to our history as a nation: issues of economic and social justice, and issues related to the continuing and worsening maladjustment between our economy and our land.

Thanks for reading this latest installment of the newsletter, which was, I confess, a bit tardy. As always, my hope is that you found something useful, encouraging, or otherwise helpful in the foregoing. In case you’ve not seen it yet, my first essay with Comment Magazine is now online: “The Materiality of Digital Culture.” The key point, more or less, is this: “The problem with digital culture, however, is not that it is, in fact, immaterial and disembodied, but that we have come to think of it as such.” By way of reminder, comments are open to paid subscribers, but all are always welcome to reach out via email. Depending on what happens to be going on when you do, I’ll try to get back to you in relatively short order.

Finally, I read a comment recently about the guilt someone felt unsubscribing from newsletters, especially if they thought the author would be notified. Life’s too short, folks. I’d be glad for you to give the newsletter time to prove itself, but I absolve you of any guilt should you conclude that this just isn’t for you. Besides, I’ve turned that notification off, as any sane person would. I’ll never know!

Trust you all are well.

Cheers,

Michael
Jul 20, 2021 • 18min

Ill With Want (Audio Version)

I sent out an installment titled “Ill With Want” a couple of days ago, but was unable at the time to include the audio version. I know that many of you find the audio useful, so, now that I’ve been able to record it, I wanted to get that to you. Nothing new here if you have already read the text version.

I will, however, take the opportunity to pass along a link to a podcast I recorded (not the one referenced in the essay) with Justin Murphy and Nina Power on the life and work of Ivan Illich. You can listen to it here. The occasion for the conversation was an upcoming 8-week course on Illich, which Power will be teaching in a couple of weeks. You can learn more about that course here. My thanks to both Justin and Nina for a delightful conversation.

As per usual, my thanks for reading. If you find the newsletter valuable, consider becoming a subscriber or sharing the Convivial Society with others who may also find it helpful.
Jul 6, 2021 • 12min

Thresholds of Artificiality

Welcome to the Convivial Society, a newsletter about technology and culture, broadly speaking. This post began as part of a recent feature I’ve titled “Is this anything?”: one idea for your consideration in less than 500 words. It spilled over 500 words, however, so just consider it a relatively brief dispatch. My writing is an exercise in thinking out loud, so I’m never quite sure where it will lead. Of course, I do hope my thinking out loud is helpful to more than just myself. Finally, the newsletter is public by design, but the support of those who are able to give it is encouraged and appreciated.

In ordinary conversation, I’d say that the word artificial tends to be used pejoratively. To call something artificial is usually to suggest its inferiority to some ostensibly natural alternative. For example, the boast “No artificial sweeteners!” comes to mind. And when applied to people, the word suggests a lack of authenticity or sincerity. But if we recall the word’s semantic relation to artifice or art, then we might come to see artificiality in a different light. In one sense, artificiality is just another way of speaking about what historian Thomas Hughes simply called the “human-built world.”

So, for example, in Orality and Literacy, Walter Ong wrote, “To say writing is artificial is not to condemn it but to praise it.” A bit further on he added, “Technologies are artificial, but – paradox again – artificiality is natural to human beings. Technology, properly interiorized, does not degrade human life but on the contrary enhances it.”

I’d phrase that last line a bit differently. It would be better to say, “Certain technologies, properly interiorized, do not degrade human life but on the contrary enhance it.” In other words, technology is not one thing, and we should take care to discriminate. There are various forms of artificiality, and they are not all equal. Listening to a trained human voice, a musical instrument, a recording of a musical instrument, a recording of AI-generated sounds—these are all distinct activities. Alternatively, human artifice can work with humble regard for the non-human world, or it can operate with what Albert Borgmann has called “regardless power,” that is, power that takes no thought of how it disrupts the non-human or, for that matter, the human world. Historically, there have been (and will be) a variety of techno-social configurations.

I often cite writers such as Jacques Ellul and Ivan Illich, who are often (poorly) read as reactionary romantics pining for some lost pre-technological idyll. Ellul, it is true, was rather explicit about the eclipse of nature in modern technological societies. But neither of them was opposed to human artifice or technology per se. Indeed, Illich, especially, sought to encourage the development of what he called convivial tools. Illich also supplied us with the eminently useful concept of thresholds or limits beyond which practices, technologies, or institutions become counterproductive and even destructive. This seems like a useful concept to apply to the question of artificiality.

So I find myself wondering if there is a threshold of artificiality beyond which human artifice becomes counterproductive and destructive. I’m not thinking principally of particular technologies, which might be turned toward destructive ends. I’m thinking, rather, of an aggregate degree of artificiality distancing us from the non-human world to such an extent that — paradox again (!) — our capacity to flourish as human beings is diminished.
What are the consequences of so structuring our necessarily artificial environment that we find ourselves largely indifferent to the rhythms, patterns, and textures of the non-human world? What are the physical consequences? What are the emotional or psychological consequences? At what cost to the earth is our artificial world purchased?

I found myself also reflecting on the Prologue to Hannah Arendt’s The Human Condition. “The earth is the very quintessence of the human condition,” Arendt observed in the mid-20th century, “and earthly nature, for all we know, may be unique in the universe in providing human beings with a habitat in which they can move and breathe without effort and without artifice. The human artifice of the world separates human existence from all mere animal environment, but life itself is outside this artificial world, and through life man remains related to all other living organisms.”

“For some time now,” Arendt went on to say, “a great many scientific endeavors have been directed toward making life also ‘artificial,’ toward cutting the last tie through which even man belongs among the children of nature.” She was not sanguine about the prospects. “This future man,” she observed, “[…] seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.”

Arendt’s reflections were spurred by the launch of Sputnik, the first artificial satellite to orbit the earth. She noted that scientists on both sides of the Cold War had already speculated about humanity’s destiny being extra-terrestrial (anticipating more recent pronouncements by notable tech moguls). She also speculated about the impact of automation on human labor and the consequences of biological engineering. In other words, her concerns have aged well. I think, for example, of the recent announcement about the first human-monkey chimeras, a rather striking example of what Bruno Latour described as the modern constitution. Latour argued that modernity consisted of a double movement of purification and hybridization. On the surface, the modern world is constructed through a series of differentiations, which Latour calls purifications. Science is purified of faith, politics of religion. We might also add the separations of body and mind, nature and the human. Of course, Latour’s point was that we have never been modern in this sense. All the while, under the cover of this project of purification, all manner of hybridizations were underway. Human beings must first be distinguished from nature in order to then have their way with nature.

Now, while these hybridizations continue apace and the artificiality Arendt feared is alive and well, digital culture presents us with novel forms of artificiality that pose a different set of challenges. Consider, for example, Marc Andreessen’s recent response to a question about the possibly detrimental consequences of “constant, instantaneous contact” enabled by digital technology. “Your question is a great example of what I call Reality Privilege,” Andreessen claimed. He went on to elaborate as follows:

This is a paraphrase of a concept articulated by Beau Cronin: "Consider the possibility that a visceral defense of the physical, and an accompanying dismissal of the virtual as inferior or escapist, is a result of superuser privileges."
A small percent of people live in a real-world environment that is rich, even overflowing, with glorious substance, beautiful settings, plentiful stimulation, and many fascinating people to talk to, and to work with, and to date. These are also *all* of the people who get to ask probing questions like yours. Everyone else, the vast majority of humanity, lacks Reality Privilege — their online world is, or will be, immeasurably richer and more fulfilling than most of the physical and social environment around them in the quote-unquote real world.

The Reality Privileged, of course, call this conclusion dystopian, and demand that we prioritize improvements in reality over improvements in virtuality. To which I say: reality has had 5,000 years to get good, and is clearly still woefully lacking for most people; I don't think we should wait another 5,000 years to see if it eventually closes the gap. We should build -- and we are building -- online worlds that make life and work and love wonderful for everyone, no matter what level of reality deprivation they find themselves in.

There’s obviously a great deal worth contesting in these two paragraphs, but, setting most of that aside, consider it in light of Arendt’s observations. Much of this seems to be quite different from the concerns that animated Arendt’s thinking nearly 70 years ago, but, in fact, I’d say the pattern is similar, except that Andreessen is defending a degree of digital artificiality that Arendt would almost certainly find questionable. In both framings, human artifice risks attenuating the relationship between the earth and the human condition. What is striking in both cases, however, may be how they reveal a structurally similar double movement to the one Latour described: one story veils another. The story of a human retreat from this world, either to the stars above or the virtual realm within, can mask a disregard for, or resignation about, what is done with the world we do have, both in terms of the structures of human societies and the non-human world within which they are rooted. Put another way, we might say that imagining the digital sphere as a realm hermetically sealed off from the so-called “real world” gave cover to momentous analog-digital hybridizations that were already well underway throughout human society. The digital world is not the analog world; neither is it separate from it.

That seems like a good way to frame the broader question of artificiality. The trick is not to collapse the apparent paradox or tension. The human-built world is not equivalent to the non-human world, but neither is it separate from it. It is critical that we recognize the distinctive features of each realm while also reckoning with their myriad points of interrelationship and interdependence. I would argue that there are, in fact, thresholds of artificiality beyond which human artifice becomes counterproductive, but also that we ought to think about this in more than merely human terms.

It often seems that a critique of artificiality generates a desire for the “natural.” Most of the time in these discussions, “nature” remains in the realm of standing reserve, raw material for the sake of human use. More to the point, it is commodified. When human artifice, in the modern techno-capitalist mode, has enclosed the non-human world, nature is always returned to us at a price, one that increasingly few are able to afford.
Jun 5, 2021 • 11min

The Questions Concerning Technology

A few days ago, a handful of similar stories or anecdotes about technology came to my attention. While they came from different sectors and were of varying degrees of seriousness, they shared a common characteristic. In each case, there was either an expressed bewilderment or an admission of obliviousness about the possibility that a given technology would be put to destructive or nefarious purposes. Naturally, I tweeted about it … like one does. I subsequently clarified that I was not subtweeting anyone in particular, just everything in general. Of course, naiveté, hubris, and recklessness don’t quite cover all the possibilities—nor are they mutually exclusive.

In response, someone noted that “people find it hard to ‘think like an *-hole’, in @mathbabedotorg's phrase, because most aren’t.” That handle belongs to Cathy O’Neil, best known for her 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. There’s something to this, of course, and, as I mentioned in my reply, I truly do appreciate the generosity of this sentiment. I suggested that the witness of history is helpful on this score, correcting and informing our own limited perspectives.

But I was also reminded of a set of questions that I had put together back in 2016 in a moment of similar frustration. The occasion then was the following observation from Om Malik: “I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”

Malik went on to write that “it is time to add an emotional and moral dimension to products,” by which he seems to have meant that tech companies should use data responsibly and make their terms of service more transparent. In my response at the time, I took the opportunity to suggest that we needn’t add an emotional and moral dimension to tech; it was already there. The only question was as to its nature. As Langdon Winner had famously inquired, “Do artifacts have politics?” and answered in the affirmative, I likewise argued that artifacts have ethics. I then went on to produce a set of 41 questions that I drafted with a view to helping us draw out the moral or ethical implications of our tools. The post proved popular at the time, and I received a few notes from developers and programmers who had found the questions useful enough to print out and post in their workspaces.

This was all before the subsequent boom in “tech ethics,” and, frankly, while my concerns obviously overlap to some degree with those of the most vocal and popular representatives of that movement, I’ve generally come at the matter from a different place and have expressed my own reservations about the shape more recent tech ethics advocacy has taken. Nonetheless, I have defended the need to think about the moral dimensions of technology against the notion that all that matters are the underlying dynamics of political economy (e.g., here and here). I won’t cover that ground again, but I did think it might be worthwhile to repost the questions I drafted then. It’s been more than six years since I first posted them, and, while some of you reading this have been following along since then, most of you picked up on my work in just the last couple of years.
And, recalling where we began, trying to think like a malevolent actor might yield some useful insights, but I’d say that we probably need a better way to prompt our thinking about technology’s moral dimensions. Besides, worst-case malevolent uses are not the only morally significant aspects of our technology worth our consideration, as I hope some of these questions will make clear. This is not, of course, an exhaustive set of questions, nor do I claim any unique profundity for them. I do hope, however, that they are useful, wherever we happen to find ourselves in relation to technological artifacts and systems. At one point, I had considered doing something a bit more with these, possibly expanding on each briefly to explain the underlying logic and providing some concrete illustrative examples or cases. Who knows, maybe that would be a good occasional series for the newsletter. Feel free to let me know what you think about that. Anyway, without further ado, here they are:

* What sort of person will the use of this technology make of me?
* What habits will the use of this technology instill?
* How will the use of this technology affect my experience of time?
* How will the use of this technology affect my experience of place?
* How will the use of this technology affect how I relate to other people?
* How will the use of this technology affect how I relate to the world around me?
* What practices will the use of this technology cultivate?
* What practices will the use of this technology displace?
* What will the use of this technology encourage me to notice?
* What will the use of this technology encourage me to ignore?
* What was required of other human beings so that I might be able to use this technology?
* What was required of other creatures so that I might be able to use this technology?
* What was required of the earth so that I might be able to use this technology?
* Does the use of this technology bring me joy? [N.B. This was years before I even heard of Marie Kondo!]
* Does the use of this technology arouse anxiety?
* How does this technology empower me? At whose expense?
* What feelings does the use of this technology generate in me toward others?
* Can I imagine living without this technology? Why, or why not?
* How does this technology encourage me to allocate my time?
* Could the resources used to acquire and use this technology be better deployed?
* Does this technology automate or outsource labor or responsibilities that are morally essential?
* What desires does the use of this technology generate?
* What desires does the use of this technology dissipate?
* What possibilities for action does this technology present? Is it good that these actions are now possible?
* What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
* How does the use of this technology shape my vision of a good life?
* What limits does the use of this technology impose upon me?
* What limits does my use of this technology impose upon others?
* What does my use of this technology require of others who would (or must) interact with me?
* What assumptions about the world does the use of this technology tacitly encourage?
* What knowledge has the use of this technology disclosed to me about myself?
* What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
* What are the potential harms to myself, others, or the world that might result from my use of this technology?
* Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
* Does my use of this technology encourage me to view others as a means to an end?
* Does using this technology require me to think more or less?
* What would the world be like if everyone used this technology exactly as I use it?
* What risks will my use of this technology entail for others? Have they consented?
* Can the consequences of my use of this technology be undone? Can I live with those consequences?
* Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
* Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?
May 27, 2021 • 28min

Surviving the Show: Illich And The Case For An Askesis of Perception

“It appears to me that we cannot neglect the disciplined recovery, an asceticism, of a sensual praxis in a society of technogenic mirages. This reclaiming of the senses, this promptitude to obey experience, the chaste look that the Rule of St. Benedict opposes to the cupiditas oculorum (lust of the eyes), seems to me to be the fundamental condition for renouncing that technique which sets up a definitive obstacle to friendship.”
— Ivan Illich, “To Honor Jacques Ellul” (1993)

I don’t usually write multi-part posts, but I did conclude an earlier installment with the promise of addressing one more development in the way I’ve come to think about attention. The essay here will (finally) pick up that thread. This post is a long one, so here’s the executive summary. Attending to the world is an embodied practice involving our senses, and how we experience our senses has a history. The upshot is that we might be able to meet some of the challenges of the age by cultivating an askesis of perception.

As I explained last time around, I’ve been rethinking some aspects of how I talk about attention, a topic that has generated a great deal of interest in the age of digital media. I argued that we ought to reconsider the way the problem of attention tends to be framed by the logic of scarcity, which naturally lends itself to economic categories, and I suggested, too, that we proceed on the assumption that we have all the attention we need so long as that attention is properly ordered. What I have to say in this installment doesn’t exactly depend on those earlier reflections, but it’s probably worth mentioning that what follows picks up where I had left off in that essay. The additional line of thought, which I want to pursue here, involves the relationship between attention and the body. The reflections that led me down this path began with the realization that attention discourse tends to abstract attention away from the body. When we talk about attention, in other words, we tend to talk as if this faculty had no particular relationship to the activities of the body. This is not altogether surprising if we think about attention as the capacity to focus our thinking on a particular object. In this mode, attention is a strictly mental activity. We might even close our eyes in order to do it well. In this sense, attention becomes nearly synonymous with the activity of thinking itself. Or, alternatively, with prayer. It was Simone Weil, for example, who claimed that “absolutely unmixed attention is prayer.” But this is not the only mode of attention, of course. More often than not, when we talk about attention in relation to digital media we are talking about our capacity to attend to something or someone out there in the world. What we are doing in such cases is somewhat different from what we do when we attempt to think deliberately about a problem, say, or when we are concentrating on a memory of the past. When we attend to the world beyond our head, to borrow the title of Matthew Crawford’s book from a few years back, we are doing so through the mediation of our perceptual apparatus: we are looking with our eyes, smelling with our noses, listening with our ears, feeling with our fingertips, or tasting with our mouths. In other words, attention discourse tends to make a mental abstraction out of an ordinarily embodied practice.
(And, I’ll mention in passing that it’s probably worth reflecting on the fact that attention, if we link it to the senses at all, is almost always linked to either seeing or hearing.) Consider, for example, this rather well-known paragraph from the American philosopher William James’s Principles of Psychology, published in 1890: “Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies a withdrawal from some things in order to deal effectively with others.”

This is all fine and good, of course. It accords with how most of us think about attention. I’m not suggesting that attention is anything less than this, only that we might improve our understanding of what is happening when we attend to the world if we also attend to what our senses are up to when we do so. A determination to see may only get us so far if we do not also think about how we see or, and this will be a critical point, recognize that our seeing can be trained. Moreover, James’s definition of attention leaves little room for the role that beauty or desire or ethics might play in the dance between our consciousness and the world. Attention is reduced to the exercise of raw mental will-power. As with my earlier discussion of the rhetoric of scarcity that treats attention as a resource, I don’t want to overstate my point here. I’m not suggesting that one should never speak about “attention” per se or that connecting attention more closely to our bodily senses will resolve our issues with attention. But I do think there may be something to be gained in both cases. And here again I’m going to proceed with a little help from Ivan Illich, who, in the last phase of his intellectual journey, devoted a great deal of attention to the cultural history of the body and, specifically, sense experience. I’ll start with a proposal Illich wrote for David Ramage, who was then president of McCormick Theological Seminary in Chicago. What I’m calling a proposal, Illich titled “ASCESIS. Introduction, etymology and bibliography.” The short seven-page document details a plan for a five-year sequence of lectures that Illich wanted to give on the role ascetic disciplines might play in higher education. The proposed courses would each take up a focal point of bodily sense experience. To my knowledge, these lectures were never delivered. Nonetheless, the proposal and some of Illich’s other work around this time remain instructive. To be clear at the outset, Illich was not calling for a return to the old ascetic disciplines we might associate with earlier monastic traditions, for example. “The asceticism which can be practiced at the end of the 20th century,” Illich explained, “is something profoundly different from any previously known.” Nor is it a specifically religious mode of asceticism that Illich has in mind. In his view, the tradition he is reviving and re-working includes pagan philosophers as well as monastic scholars. Illich thought that a rupture in this tradition had occurred sometime around the 12th century. This rupture had regrettably obscured the body’s importance to learning, including the affections. “Learning presupposes both critical and ascetical habits; habits of the right and habits of the left,” Illich claimed.
He added: “I consider the cultivation of learning as a dissymmetrical but complementary growth of both these sets of habits.” It wasn’t immediately obvious to me what Illich meant by habits of the right and the left, but in the same paragraph he goes on to mention “habits of the mind,” or the critical habits celebrated and cultivated in the humanist tradition of learning, and the consequent neglect of the “heart’s formation.” Interestingly, the latter task, in his estimation, had been lately relegated to “the media.” “I want to explore the demise of intellectual asceticism as a characteristic of western learning since the time it became academic,” Illich went on to explain. “In this historical perspective,” he continued, “I want to argue for the possibility of a new complementarity between critical and ascetical learning. I want to reclaim for ascetical theory, method and discipline a status equal to that the University now assigns to critical and technical disciplines.”

Reading Illich’s proposal from 1989, I find it all the more relevant thirty years later. Confronted with the challenges of information superabundance, a plague of misinformation and digital sophistry, the collapse of public trust in traditional institutions, and algorithmically manipulated feeds, the “solutions” proffered, such as increased fact checking, warning labels, or media literacy training, seem altogether inadequate. From Illich’s perspective, we might say that they remain exclusively committed to the habits of the mind or the critical habits. What difference might it make for us to take Illich’s suggestion and consider the ascetic habits or habits of the body, holistically conceived? Might we do better to think about attention not as a resource that we pay or squander at the behest of the attention economy and its weaponized digital tools but rather as a bodily skill that we can cultivate, train, and hone? To explore these questions, let’s walk through parts of two other pieces by Illich, “Guarding the Eye in the Age of Show” and “To Honor Jacques Ellul.” The former is one of the last things Illich wrote, just two years before his death in 2002. It is a 50-page distillation of his research in the cultural history of visual perception, or the ethics of the gaze. The latter was, as the title suggests, a brief 1993 talk given in honor of the great critic of modern technology whose work had deeply informed Illich’s perspective. “Guarding the Eye in the Age of Show,” were it written today, might be classified as a contribution to attention discourse, except that the word attention is used in this sense only once, and the same is true of its nemesis, distraction. Instead, Illich speaks of the ethical gaze, of what we do with our eyes and also how we conceive of vision itself, which, as it turns out, has a very interesting history (for more about that history, see another paper by Illich, “The Scopic Past and the Ethics of the Gaze”). So, here is Illich’s mention of attention in a way that echoes contemporary attention discourse: “Even today, I feel guilty if I find my attention distracted from a medieval Latin text by the afterglow of the MTV to which I exposed my eyes.” (Who amongst us has not …) Setting aside the question of why Illich was watching MTV, let’s consider a bit more carefully what Illich has to say here. He goes on to explain how “until quite recently, the guard of the eyes was not looked upon as a fad, nor written off as internalized repression.
Our taste was trained to judge all forms of gazing on the other. Today, things have changed. The shameless gaze is in.” Illich was quick to add that he was not speaking of gazing at pornographic images. He was interested in recovering the idea that seeing was an action and not merely a passive process on the model of a lens receiving visual data. And, as an action, it had an ethical dimension (Illich was very much indebted to the philosopher Emmanuel Levinas on these matters). He was concerned, too, with the way the gaze was captured or trapped by what he termed “the show.” Illich used show to distinguish the object of perception from the image, which had played such a critical if evolving role in traditional western philosophy and religion. At one point, he puts the question he wants to address this way: “What can I do to survive in the midst of the show?” A question that, I suspect, still resonates today.

Surviving the Show

So what exactly was the show? I’m tempted to say that we can think of it simply as what we take in when we glance at any of our screens. I don’t think Illich would disagree with that assessment, but that’s obviously not a very helpful definition, and it’s clear that Illich thought the show was a broader phenomenon. The truth is that I find it difficult to precisely pin down what Illich had in mind, but let me at least try to fill out the concept a bit more. Illich says at one point that “the distinction between image and show in the act of vision, though subtle, is fundamental for any critical examination of the sensual ‘I-thou’ relationship. To ask how I, in this age and time, still can see you face-to-face without a medium, the image, is something different from asking how I can deal with the disembodying experience of ‘your’ photographs and telephone calls, once I have accepted reality sandwiched between shows.”

Here, two things are clear. Illich is striving to preserve the possibility of “seeing” the person before our eyes, and the show as he understands it, the pervasive field of technological mediations that shapes our perception of the other, threatens to obscure our ethical vision. I think of the way this very language has emerged to describe the act of beholding the person in a morally substantive way. We hear, for example, of the desire to “be seen,” by which, of course, something much deeper is in view than merely appearing within someone’s visual field. Or, somewhat less seriously, we joke about “feeling seen,” which is to say that some meme has come uncomfortably close to capturing some aspect of our personality. Further on in the paper Illich writes, “I argue that ‘show’ stands for the transducer or program that enables the interface between systems, while ‘image’ has been used for an entity brought forth by the imagination. Show stands for the momentary state of a cybernetic program, while image always implies poiesis. Used in this way, image and show are the labels for two heterogeneous categories of mediation.”

My sense, deriving from the passing reference to cybernetics, is that the distinction between the image and the show tracks with the distinction, critical to Illich’s later work, between the age of instruments and the age of systems. While I don’t think that Illich rigorously developed this distinction anywhere in writing, one key element involved the manner in which the system, as opposed to the mere instrument, enveloped the user. It was possible to stand apart from the instrument and thus to attain a level of mastery over it.
It was not possible to likewise stand apart from the system. Which may explain why Illich, as we’ll see shortly, concluded, “There can be rules for exposure to visually appropriating pictures; exposure to show may demand a reasoned stance of resistance.” Elsewhere he says that in our present media ecosystem our gaze is sometimes “solicited by images, but at other times it is mesmerized by show.” The difference between solicitation and mesmerization seems critical. It is in this context that he also writes, “An ethics of vision would suggest that the user of TV, VCR, McIntosh and graphs protect his imagination from overwhelming distraction, possibly leading to addiction.” Extrapolating a bit, then, and even taking the word show at face value, we might say that there was something dynamic and absorbing about the show that distinguished it from the image. (Let me say at this point that I’m not getting into Illich’s discussion of the image, which takes up classical philosophy and medieval theology. Click through to read the whole paper for that discussion.)

Things then get a bit more interesting toward the tail end of the article as Illich brings his historical survey of the gaze into the modern era. If we thought that Illich was connecting the show exclusively to the age of electronic media or even the proliferation of images in the late nineteenth century, we might be taken aback when he claims that “the replacement of the picture by the show can be traced back into the anatomical atlases of the late eighteenth century.” It’s a good reminder that, as eclectic as Illich’s talents and interests were, he always remained, in some fundamental sense, a historian.

“With the transition from the age of pictures to the age of show,” Illich had just written, “step by step, the viewer was taken off his feet. We were trained to do without a common ground between the registering device, the registered object and the viewer.” I hear echoes in these lines of Baudrillard, but not quite. It is not an image that has no referent in the world but rather a way of relating to the world that has no referent in our experience, a mediation of the world that displaces the ordinary or even carefully trained mediation of the human sensorium. Citing the work of his frequent collaborator, Barbara Duden, Illich writes, “the anatomists looked for new drawing methods to eliminate from their tables the perspectival ‘distortion’ that the natural gaze inevitably introduces into the view of reality.” “They want a blueprint of the object,” he adds, and “They want measures, not views. They look at the world, more architectonico, according to the layout of architectural drawings.” The new scopic regime, as he calls it, spills out from anatomy to geology and zoology. “Thanks to the new printing techniques,” Illich concludes, “the study of nature increasingly becomes the study of scientific illustrations.”

This incipient form of the show appears to involve a means of representation that abstracts perception from the bodily frame of reference. It presents us with a view of the world that, while highly generative in many respects, may nonetheless leave us ill-prepared to see the world as it is available to us through sense experience. One of the more compelling bits of evidence Illich marshals in his historical investigations involves the shrinking diversity of words to designate varieties of sense experience. Summarizing the work of a variety of scholars, Illich noted, “Dozens of words for shades of perception have disappeared from usage.
For what the nose does, someone has counted the victims: Of 158 German words that indicate variations of smell, which Dürer's contemporaries used, only thirty-two are still in use. Equally, the linguistic register for touch has shriveled. The see-words fare no better.”

Sadly, this accords with my own experience, and I wonder whether it rings true for you. Upon reflection, I have a paucity of words at my disposal with which to name the remarkable variety of experiences and sensations that the world offers. Picking up the storyline again, the incipient form of the show was then reinforced, or perhaps Illich would say popularized, by the advent of the stereoscope. The stereoscope is one of several widely used nineteenth century devices for manipulating visual experience. The Claude glass is another example that comes to mind. Illich described the stereoscope as follows: “Two simultaneous exposures are made next to each other on the same photographic plate through two lenses distanced from each other by a few inches. The developed picture postcard is placed into a box and viewed through a pair of special spectacles. The result is a ‘surrealist’ dimensionality. The foreground and background that lie outside the focus are fuzzy, while the focused object floats in unreal plasticity.” He noted that his grandmother, in the early twentieth century, was still bringing back these stereo cards from her travels. Speaking of the emergence of the scopic regime of the show in the early nineteenth century, Illich concluded, “New optical techniques were used to remove the picture of reality from the space within which the fingers can handle, the nose can smell and the tongue can taste it, and show it in a new ‘objective’ isometric space into which no sentient being can enter.”

It may be helpful to draw in Illich’s evolving critique of medicine for an example of the show that is not directly related to what we typically think of as media technology. During the 1980s, as the consequences of the shift from instruments to systems were dawning on Illich, he came to see that one of the harms of modern institutionalized medicine was the implicit displacement of the lived body by the body that is a system apprehended by diagnostic tools. It is the body reduced to one’s chart, health as conformity to statistical averages and patterns. The individual and the particularities of their body are lost. I don’t know that Illich ever puts it this way, but it seems clear to me that this can be understood as medicine in thrall to the show. It is not, of course, that such information is useless; rather, it is that something is lost when our vision of the human is thus reduced to data flows, and that loss, difficult or perhaps impossible to quantify, can have profound consequences. In the case of medicine, it can, paradoxically, generate greater forms of suffering.

Ocular Askesis

As he made clear at the outset of this paper, Illich undertook this examination of the history of visual perception in order to explore the ethics of the gaze and how “seeing and looking is shaped by personal training (the Greek word would be askesis), and not just by contemporary culture.” Or, as he also put it, “My motive for studying the gaze of the past is a wish to rediscover the skills of an ocular askesis.”

In other words, Illich invites us to consider what it might mean to discipline our vision, and I’m inviting us to consider whether this is not a better way of framing our relationship to the digital media ecosystem.
The upshot is a recognition of the additional dimensions of what is often framed as a merely intellectual problem and thus met with laughably inadequate techniques. Perceptual askesis would involve our body, our affections, our desires, and our moral character as well as our intellect. The first step would be to recognize that vision is, in fact, subject to training, that it is more than the passive reception of sensory data. Indeed, our vision is always being disciplined. Either it happens unwittingly as a function of our involvement with the existing cultural structures and patterns, or we assume a measure of agency over the process. Illich’s historical work, then, denaturalizes vision in order to awaken us to the possibility of training our eyes. Our task, then, would be to cultivate an ethos of seeing or new habits of vision ordered toward the good. And, while the focus here has fallen on sight, Illich knew, and we should remember, that all the senses can be likewise trained.

“Guarding the Eye in the Age of Show” is a long and scholarly article. Illich’s comments in honor of Jacques Ellul, delivered a few years earlier, cover much of the same ground in a more direct, urgent, and prophetic style. It will be worth our time, I think, to close by considering some of these comments because they might give us a better idea of the nature of the good, in Illich’s view, toward which a perceptual askesis should be ordered. In one of the clearest statements of the concerns that were animating his work during this time, Illich declared that “existence in a society that has become a system finds the senses useless precisely because of the very instruments designed for their extension. One is prevented from touching and embracing reality.” And, what’s more, “it is this radical subversion of sensation that humiliates and then replaces perception.”

In a similarly dire vein, Illich continued: “We submit ourselves to fantastic degradations of image and sound consumption in order to anesthetize the pain resulting from having lost reality.” He then added: “To grasp this humiliation of sight, smell, touch, and not just hearing, it was necessary for me to study the history of bodily acts of perception.”

What is evident here is that Illich wanted to defend a way of being in the world that took the body as its focal point. He spoke of this as a matter of “touching and embracing reality.” While I’m deeply sympathetic to Illich’s point of view here, I think I might put things a bit differently. Reality is a bit too elastic and protean a term to be helpful in this case. Better, in my view, to make the case that we risk missing out on a fuller experience of the depth and diversity of things, along with the pleasures and satisfactions such an experience might yield. It seems to me relatively uncontroversial to observe that, for example, looking can be distinguished from seeing. I might look at a painting, for instance, and fail to see it for what it is. The same is true of a landscape, a single tree, a bird, an elegant building, or, most significantly, a person. If my vision is trained by the show, will I be able to see the person before me who cannot match the show’s dynamic, mesmerizing quality? And, from Illich’s perspective, it is not only that I would fail to accord my neighbor the honor they are owed but that I would lose myself in the process, too.
Eyes trained by the show would be unable “to find joy in the only mirror in which I can discover myself, the pupil of the other.”

News and Resources

* A review of Frank Pasquale’s New Laws of Robotics, “A World Ruled by Persons, Not Machines”: “Frank Pasquale’s thought-provoking and deeply humanist New Laws of Robotics: Defending Human Expertise in the Age of AI pledges that another story is possible. We can achieve inclusive economic prosperity and avoid both traps of mass technological unemployment and low labor productivity. His central premise is that technology need not dictate our values, but instead can help bring to life the kind of future we want. But to get there we must carefully plan ahead while we still have time; we cannot afford a ‘wait-and-see’ approach.”

* Douglas Rushkoff on “why people distrust ‘the Science’”: “By disconnecting science from the broader, systemwide realities of nature, human experience, and emotion, we rob it of its moral power. The problem is not that we aren’t investing enough in scientific research or technological answers to our problems, but that we’re looking to science for answers that ultimately require human moral intervention.”

* Chad Wellmon is always insightful on the history of higher education. Here he is answering the question, “What must one believe in to be willing to borrow tens of thousands of dollars in order to pursue a certification of completion — a B.A.?”: “Atop this pyramid scheme sit institutions like my own, the University of Virginia, which masks its constant competition for more — more money, more status, more prestige — as a belief in higher learning. Given the goals they set for themselves, UVA and other wealthy institutions need the system of higher education to continue just as it is. They profess to do so out of a faith that meritocracy’s hidden hand will watch over their graduates, ensuring the liberal, progressive order. And they hire professionals to manage that faith, such as UVA’s recently appointed vice provost for enrollment, who will ensure the most efficient use of students’ hopes in higher education to maximize revenues.”

* An essay adapted from The Filing Cabinet: A Vertical History of Information by Craig Robertson: “The filing cabinet does not just store paper; it stores information; and because the modern world depends upon and is indeed defined by information, the filing cabinet must be recognized as critical to the expansion of modernity. In recent years scholars and critics have paid increasing attention to the filing systems used to store and retrieve information critical to government and capitalism, particularly information about people — case dossiers, identification photographs, credit reports, et al. But the focus on filing systems ignores the places where files are stored. Could capitalism, surveillance, and governance have developed in the 20th century without filing cabinets? Of course, but only if there had been another way to store and circulate paper efficiently. The filing cabinet was critical to the infrastructure of 20th-century nation states and financial systems; and, like most infrastructure, it is often overlooked or forgotten, and the labor associated with it minimized or ignored.”

* Opening of a review of Entangled Life: How Fungi Make Our Worlds, Change Our Minds and Shape Our Futures by Merlin Sheldrake: “Try to imagine what it is like to be a fungus.
Not a mushroom, pushing up through damp soil overnight or delicately forcing itself out through the bark of a rotting log: that would be like imagining the grape rather than the vine. Instead try to think your way into the main part of a fungus, the mycelium, a proliferating network of tiny white threads known as hyphae. Decentralised, inquisitive, exploratory and voracious, a mycelial network ranges through soil in search of food.”

* On the story of Robert McDaniel, who in 2013 was identified as a potential victim or perpetrator of a violent crime by a predictive policing algorithm: “He invited them into his home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a ‘party to violence,’ but it wasn’t clear what side of the barrel he might be on. He could be the shooter, he might get shot. They didn’t know. But the data said he was at risk either way.”

Increasingly, it seems to me that we are presented with two paths along which we might make our way in the world. These two paths can be characterized in any number of ways. One path is marked by the desire to control experience, even the experience of others, and predictive technologies serve this purpose. The other path is marked by a greater degree of openness to experience in the interest of freedom, with the risks this entails, and, as I’ve suggested elsewhere, relies on promise rather than prediction.

* In light of the turn toward privacy in the tech industry, Evgeny Morozov wonders whether it was a mistake to put so much emphasis on matters of privacy in an effort to meet the challenges posed by big tech corporations: “Yet I wonder if these surprising victories for the privacy movement may, in the end, turn out to be pyrrhic ones – at least for the broader democratic agenda. Instead of reckoning with the broader political power of the tech industry, the most outspoken tech critics have traditionally focused on holding the tech industry to account for numerous violations of existing privacy and data protection laws.”

While not specifically focusing on the privacy critique, three years ago I wrote in my first piece for The New Atlantis that whatever we make of the so-called tech backlash, it was not a serious critique of contemporary technology. I tend to think that piece holds up pretty well.

* Interesting essay in Real Life by Lauren Colleen: “The technologies that evoke synaesthetic fallacy expedite the easy translation of all experience into data, and all data into capital. At the same time they mystify this process, cloaking it in the language of scientized magic. Synaesthetic fallacy is wielded as a tool for re-branding the interfaces that serve the data economy as essential mediators of a broken relationship between screen-obsessed humans and the external world. It seduces us into the illusion that tech can ever function as simply a neutral translator.”

Re-framings

From an interview with Suzanne Simard in Emergence Magazine. By the way, Emergence is a really interesting and beautifully put together publication. You should check it out.

EM: Throughout your work, you’ve kind of departed from conventional naming in many ways—“mother,” “children,” “her.” You use very unscientific terms—and, it seems, quite deliberately, as you just described—to create connection, relationship.
But it’s not the normal scientific practice.

SS: No, it’s not, and I can hear all the criticisms going on, because in the scientific world there are certain things that could kill your career, and anthropomorphizing is one of those things. But I’m at the point where it’s okay; that’s okay. There’s a bigger purpose here. One is to communicate with people, but also—you know, we’ve separated ourselves from nature so much that it’s to our own demise, right? We feel that we’re separate and superior to nature and we can use it, that we have dominion over nature. It’s throughout our religion, our education systems, our economic systems. It is pervasive. And the result is that we have loss of old-growth forests. Our fisheries are collapsing. We have global change. We’re in a mass extinction.

I think a lot of this comes from feeling like we’re not part of nature, that we can command and control it. But we can’t. If you look at aboriginal cultures—and I’ve started to study our own Native cultures in North America more and more, because they understood this, and they lived this. Where I’m from, we call our aboriginal people First Nations. They have lived in this area for thousands and thousands of years; on the west coast, seventeen thousand years—for much, much longer than colonists have been here: only about 150 years. And look at the changes we’ve made—not positive in all ways.

Our aboriginal people view themselves as one with nature. They don’t even have a word for “the environment,” because they are one. And they view trees and plants and animals, the natural world, as people equal to themselves. So there are the Tree People, the Plant People; and they had Mother Trees and Grandfather Trees, and the Strawberry Sister and the Cedar Sister. And they treated them—their environment—with respect, with reverence. They worked with the environment to increase their own livability and wealth, cultivating the salmon so that the populations were strong, the clam beds so that clams were abundant; using fire to make sure that there were lots of berries and game, and so on. That’s how they thrived, and they did thrive. They were wealthy, wealthy societies.

I feel like we’re at a crisis. We’re at a tipping point now because we have removed ourselves from nature, and we’re seeing the decline of so much, and we have to do something. I think the crux of it is that we have to re-envelop ourselves in our natural world; that we are just part of this world. We’re all one, together, in this biosphere, and we need to work with our sisters and our brothers, the trees and the plants and the wolves and the bears and the fish. One way to do it is just start viewing it in a different way: that, yes, Sister Birch is important, and Brother Fir is just as important as your family.

Anthropomorphism—it’s a taboo word and it’s like the death knell of your career; but it’s also absolutely essential that we get past this, because it’s an invented word. It was invented by Western science. It’s a way of saying, “Yeah, we’re superior, we’re objective, we’re different. We can overlook—we can oversee this stuff in an objective way. We can’t put ourselves in this, because we’re separate; we’re different.” Well, you know what? That is the absolute crux of our problem. And so I unashamedly use these terms.
People can criticize it, but to me, it is the answer to getting back to nature, getting back to our roots, working with nature to create a wealthier, healthier world.

The Conversation

Reminder: the previous post announced the results of my experiment with much shorter, “is this anything?” posts. The feedback was overwhelmingly positive, so you can expect one or two of those in the coming days. Also in the last post, I suggested that if you were not keen on supporting the Convivial Society directly through this platform, you could name a price on my ebook. The response indicated that it might be worthwhile to offer a standing alternative, so I created some subscription options through Gumroad. And, finally, if you value the newsletter, please consider yourself duly encouraged to share it with others. Cheers, Michael
Apr 28, 2021 • 0sec

The Answer Is Not More Information

Some have argued that one benefit of the new newsletter ecosystem is a return to the older conventions of blogging in its halcyon days. I don’t know about that. I doubt you or I really want two or three dispatches from every newsletter we’ve subscribed to arriving in our inbox every day. That said, I do occasionally feel the attraction of that older form. I won’t say that this installment is blog-ish in that way, but I sat down to compose it in that old, familiar frame of mind: aiming at something relatively brief and discursive, suggestive rather than fully developed. As always, I hope you find it useful.

There is a genre of tweet that begins thus: “I don’t know who needs to hear this, but …” Well, it’s in that spirit, and un-ironically, that I write what follows. Yesterday, I listened to a radio program with two thoughtful scholars and mental health practitioners on the subject of doom scrolling, the habit of thoughtlessly and compulsively scrolling through our infinite feeds with no clear sense of direction or purpose, often late at night when we ought to be sleeping, and with the result of inducing greater degrees of anxiety and unease. One of the guests explained doom scrolling as an effect of having to navigate unfamiliar and potentially threatening situations with inadequate information and little clarity about what one ought to do. Understood this way, doom scrolling is just a function of our need for decent maps of the world. This is an entirely plausible account of doom scrolling, even if it does not account for every dimension of the practice. For example, I’d say that what we’re often after is not information per se but an affective fix. Nonetheless, the term did explode onto the lexical scene last year as the pandemic wildly reconfigured the way most of us live, transforming everyday decisions we used to make rather carelessly into matters of complex actuarial decision making: when to go out, whom to see, at what distance, for how long, with what precautions. Etc., etc. Some handled these conditions better than others, but it’s easy to see how under such circumstances one would cast about for any bit of news or information that would help clarify matters and relieve the acute uncertainty, anxiety, and fear we might have been experiencing. Of course, in this case, as in so many others, the pandemic merely revealed and heightened an already existing pattern. While the term was popularized under pandemic conditions, it pre-dates the public health crisis by at least two years and certainly describes a phenomenon that was common long before then. And, I would add, whatever its precise relationship to the pandemic, the practice of doom scrolling will persist independently of the uncertainties and anxieties generated by the pandemic. Previously, I’ve characterized the activity we call doom scrolling as structurally induced acedia, and the conditions that thus tempt us aren’t going anywhere.

What struck me about the characterization of doom scrolling in the interview, however, was the implicit assumption embedded in the terms of the analysis, an assumption which acts as the mechanism linking the experience of uncertainty to the practice of doom scrolling. That assumption, simply put, is that what we need in order to better navigate uncertainty is more information. I grant that this is obviously true to some extent. Good information can helpfully inform our choices.
And in the total absence of good information, we would rightly feel altogether adrift and at the mercy of forces beyond our ken. Yet, it is also the case that our problem with information is not that we have too little of it but rather that we have too much. Granted, in saying this, I am assuming a distinction between information and knowledge, which is to say that an abundance of information does not necessarily imply an abundance of knowledge. My point turns out to be relatively straightforward: maybe you and I don’t need more information. And, if we think that the key to navigating uncertainty and mitigating anxiety is simply more information, then we may very well make matters worse for ourselves. Believing that everything will be better if only we gather more information commits us to endless searching and casting about, to one more swipe of the screen in the hope that the elusive bit of data, which will make everything clear, will suddenly present itself. From one angle, this is just another symptom of reducing our experience of the world to the mode of consumption. In this mode, all that can be done is to consume more, in this case more information, and what we need seems always to lie just beyond the realm of the actual, hidden beyond the horizon of the possible. And, once again, this mode of being, with regard to navigating uncertainty, has the paradoxical effect of sinking us ever deeper into indecision and anxiety because the abundance of information, especially if it is encountered as discrete bits of under-interpreted data, will only generate more uncertainty and frustration.

One alternative to this state of affairs is to ditch the idea, should we be under its sway, that what we need to make our way in the world is simply more information. For one thing, there are practical difficulties: even in cases where more information might be genuinely helpful, it may not be forthcoming when we need it. But, more importantly, some matters cannot be adequately decided simply by gathering more information and plugging it into some sort of value-neutral formula. Indeed, we might even say that what we need to make is not merely a decision but more like a commitment with all the risk, responsibility, and promise that this entails. What we might truly need, then, is not information but something else altogether: courage, patience, practical wisdom, and, perhaps most importantly, friendship. Of course, these can be harder to come by than mere information, however valuable it may be.

I trust that there is no need to further clarify why what we might really need in the face of uncertainty might be courage, patience, and wisdom. Lacking these, I might add, it is easy to see how we might take refuge in the idea that we lack sufficient information. The claim that I’m holding out for more information can neatly mask my lack of courage to do what I know needs to be done.

But it’s worth reflecting for just a moment on the last of these: friendship. I was thinking here of how isolation and loneliness, which I would sharply distinguish from solitude, can warp and disfigure our cognitive faculties.
The more isolated we find ourselves, the more harrowing and disorienting the experience of uncertainty.

Moreover, if we do proceed, as we often must, without the benefit of certainty, venturing forth and assuming the real risks that must accompany our action in this world—especially once we renounce the imperative to control, manage, and master—then it would be a far better thing to do so in the affectionate and heartening company of friends who will sustain us in our failures and celebrate our triumphs. After all, it is easier by far to take a step into the unknown with another walking alongside us than it is to do so alone. If I must bear the consequences of my choices alone, if there is no one whose counsel I trust, then it becomes especially tempting to seek both perfect knowledge and certainty before acting, and to find myself paralyzed in their absence. Unfortunately, the patterns of our techno-social order tend toward the fracturing of community and the isolation of the person. We are offered an array of tools that promise to assuage the resulting economic and psychic precarity, but, more often than not, their real aim, implicit in their design, is to perpetuate and accelerate social fragmentation and cultivate deeper degrees of dependency in their users. They tend to inhibit the enduring satisfaction of our genuine needs in order to perpetuate our dependence on their services. They distract us from attending to the roots of our disorders in order to continue trading in superficial and counterproductive “solutions.”

In a talk Ivan Illich gave late in his life, he made the following observation: “Learned and leisurely hospitality is the only antidote to the stance of deadly cleverness that is acquired in the professional pursuit of objectively secured knowledge.” Then he added, “I remain certain that the quest for truth cannot thrive outside the nourishment of mutual trust flowering into a commitment to friendship.”

I come back to these lines often, especially in light of the debilitating epistemic consequences of becoming too dependent on digital media to mediate our perception of the world. It may seem counter-intuitive to say that, in the face of the profound challenges our society faces, what we most need is the deliberate cultivation of friendship. But I also find myself thinking that this conclusion is, from one angle, inescapable. At the very least, it seems to me that we need such friendships as an anchor and a refuge from the disorienting tumult of the digitized public sphere and the precarity of our social world.
Apr 1, 2021 • 19min

Your Attention Is Not a Resource

“Attention discourse” is how I usually refer to the proliferation of essays, articles, talks, and books around the problem of attention (or, alternatively, distraction) in the age of digital media. While there have been important precursors to digital age attention discourse dating back to the 19th century, I’d say the present iteration probably kicked off around 2008 with Nick Carr’s essay in the Atlantic, “Is Google Making Us Stupid?” And while disinformation discourse has supplanted it in the public imagination over the past few years, attention discourse is alive and well. I don’t intend for the label “attention discourse” to come off pejoratively or dismissively. In fact, I’ve made my own minor contributions to the genre. It was a recurring theme on the old blog (e.g.), and it was the subject of an installment of this newsletter just last summer. And I still think that the fate of attention in digital culture is a topic worthy of our considered reflection. More recently, however, I’ve been reconsidering the approach I’ve taken in the past to the question of attention. You can think of what follows as a report on the fruits of this reconsideration as it now stands.

As a point of departure, let’s begin with a recent column in the Times by Charlie Warzel, who, I should note, comments frequently on the dynamics of the attention economy. In this particular piece from February, Warzel profiles Michael Goldhaber, whom he calls “the internet prophet you’ve never heard of,” which, I confess, was certainly true for me. As Warzel recounts the story, Goldhaber was a physicist who sometime in the 1980s had an epiphany about the nature of attention in an age of information glut (and do note that this epiphany predates the rise of the commercial internet).

The epiphany, simply put, was that we live in an attention economy, a term Goldhaber did not coin but which he seems to have done a great deal to popularize, chiefly with a 1997 essay that appeared in Wired. Here is a key paragraph from that essay:

Yet, ours is not truly an information economy. By definition, economics is the study of how a society uses its scarce resources. And information is not scarce - especially on the Net, where it is not only abundant, but overflowing. We are drowning in information, yet constantly increasing our generation of it. So a key question arises: Is there something else that flows through cyberspace, something that is scarce and desirable? There is. No one would put anything on the Internet without the hope of obtaining some. It's called attention. And the economy of attention - not information - is the natural economy of cyberspace.

A bit further on, Goldhaber notes that “the attention economy is a star system, where Elvis has an advantage. The relationship between stars and fans is central.” But the average person is not altogether cut out of the attention economy, far from it. “Cyberspace,” Goldhaber explains, “offers a much more effective means of continuing and completing attention transactions, as well as opening up more possibilities to almost everyone. Whoever you are, however you express yourself, you can now have a crack at the global audience.”

Goldhaber goes on to argue that attention will, quite literally as I read him, become the real currency of the internet age, by which he means that it will eventually displace money. This portion of his argument seems to have missed the mark, although the entanglement of money and attention has certainly been borne out.
Others will be better qualified to judge the financial aspects of Goldhaber’s vision of attention as currency. Needless to say, developments in the NFT markets suggest some interesting lines of inquiry, something Warzel explored in his most recent column. There, Warzel walks us through some of the most recent trends among those who are, in fact, earning their livelihood by transforming attention into money. At the frontiers of attention monetization we find, for example, “a platform called NewNew, which wants to build a ‘human stock market,’ where fans can vote to control mundane decisions in a creator’s day-to-day life.” As I was composing this piece, a story about an artist who sold a portion of their skin as an NFT also crossed my feed [correction: it was a tennis player].

Clearly, there is a profoundly dark side to this vision of unfettered monetization. Frankly, it’s hard to read this as anything other than a form of voluntary indentured servitude. Beyond that, as Anil Dash explained to Warzel, “The gig economy is coming for absolutely everyone and everything […] The end game of that is the GoFundMe link posted beneath a viral tweet so they can pay for their health care. Being an influencer sounds fun until it’s ‘keep producing viral content to literally stay alive.’ That’s the machine we’re headed toward.”

In his 1997 essay, Goldhaber had anticipated some of these disturbing dynamics. “Already today,” he observed at the time, “no matter what you do, the money you receive is more and more likely to track the recognition that comes to you for doing what you do. If there is nothing very special about your work, no matter how hard you apply yourself you won't get noticed, and that increasingly means you won't get paid much either.”

Coming back to the piece with which we started, Warzel summarized Goldhaber’s thinking about the attention economy this way:

Every single action we take — calling our grandparents, cleaning up the kitchen or, today, scrolling through our phones — is a transaction. We are taking what precious little attention we have and diverting it toward something. This is a zero-sum proposition, he realized. When you pay attention to one thing, you ignore something else.

And, as we noted at the outset, Goldhaber was especially keen to point out that the value of attention is defined by its scarcity. Not that long ago, I would’ve pretty much assented to this whole line of thought. Indeed, I know that I have also spoken and written about attention as a scarce resource that we ought to take great care in allocating. I would have had no problem at all with Howard Rheingold’s principle, cited by Goldhaber, that attention is a limited resource, so we should pay attention to where we pay attention.

While I remain quite sympathetic to the spirit of this line of thought, it now seems to me that the framing of the problem is itself part of the problem. To begin with, we might do well to stop thinking about attention as a scarce resource. After he published a burst of spirited and prophetic works of social criticism in the early and mid-70s, Ivan Illich decided that it was time to re-evaluate his own critical approach. Despite their obvious faults, the industrial age institutions Illich targeted in his scathing critiques proved to be more resilient than he anticipated, and not necessarily because they were, in fact, useful, just, and sustainable enterprises.
Rather, Illich came to the conclusion that we remained locked into these inevitably self-destructive institutional structures because they were, as David Cayley explained, “anchored at a depth that ‘rabble-rousing’ could not reach, even if it were as lucid and rhetorically refined as Illich’s critiques had been.” Illich began referring to the “certainties” upon which modern institutions rested. These certainties were assumptions of which we are barely aware, assumptions which lend current institutional structures a patina of inevitability. These certainties generated the sense that people couldn’t possibly do without such tools or institutions, even if they were, in fact, relatively modern innovations. And in this next phase of his career, Illich set out to trace the origins of these certainties, which, in his view, were anything but.

Chief among these certainties was the presumption of scarcity. In fact, in 1980, Illich announced his intention to write a history of scarcity. That history never materialized, but a number of pieces of that larger project were published in a variety of contexts. The presumption of scarcity undermined the more hopeful possibility of a convivial society, which Illich had outlined in Tools for Conviviality. However, by the time he published Shadow Work in 1981, Illich had been encouraged by the growth of small pockets of conviviality in a variety of communities across the globe. But he also worried that these outposts of more convivial social arrangements were threatened by the encroachment of formal economic structures, which were necessarily premised on the idea of scarcity. As Illich put it, he wanted to defend “alternatives to economics” not simply “economic alternatives.” What Illich sought to defend was what he called the “vernacular domain.” Of his use of the word vernacular, Illich explained, “I would like to resuscitate some of its old breath to designate the activities of people when they are not motivated by thoughts of exchange.” The term, as he meant to use it, “denotes autonomous, non-market related actions through which people satisfy everyday needs—the actions that by their own true nature escape bureaucratic control, satisfying needs to which, in the very process, they give specific shape.” As Cayley explains in his invaluable guide to Illich’s thought,

By naming a vernacular domain, Illich hoped to do two things: to endow activities undertaken for their own sake with a specific dignity and presence and to distinguish these activities from things done in the shadow of economics. He wanted to highlight that portion of social life that had been, remained, or might become immune to the logic of economization. By offering a name, he hoped to secure for those pursuing alternatives a place to stand—a respite from management, economization, and professionalization where new commons could take shape.

At this point, I suspect you have a good sense of where this is going. Attention discourse proceeds under the sign of scarcity. It treats attention as a resource, and, by doing so, maybe it has given up the game. To speak about attention as a resource is to grant and even encourage its commodification. If attention is scarce, then a competitive attention economy flows inevitably from it. In other words, to think of attention as a resource is already to invite the possibility that it may be extracted.
Perhaps this seems like the natural way of thinking about attention, but, of course, this is precisely the kind of certainty Illich invited us to question.

I can hear the rejoinders taking shape, of course: But attention is scarce. Right now I’m giving it to your writing and not to something else. I have only so many waking hours, and so much to which I must or would like to give my attention. At any given moment, I’m likely to find my attention divided and fragmented. Etc.

Given the intuitive force of these claims, further variations of which I suspect you can readily supply, is the claim that we have all the attention we need even plausible? As I’ve thought about it, I’ve come to think that it is, but we may need to reevaluate more than just how we think about attention in order to see it as such.

So here is a proposition for you to consider: you and I have exactly as much attention as we need. In fact, I’d invite you to do more than consider it. Take it out for a spin in the world. See if proceeding on this assumption doesn’t change how you experience life, maybe not radically, but perhaps for the better.

And the implicit corollary should also be borne in mind. If I have exactly as much attention as I need, then in those moments when I feel as if I don’t, the problem is not that I don’t have enough attention. It lies elsewhere. (There is an additional consideration, which is that I may have failed to cultivate my attention, but, again, this is not a question of scarcity.) In any case, I obviously can’t make any promises, but you may find, as I have of late, that refusing the assumption of scarcity can be surprisingly liberating.

There are obstacles, of course, force of habit chief among them. And, as we all know, attention feels scarce to the degree that we yield to the imperatives of telepresence, which is to say the imperatives of being digitally dispersed, of anchoring attention to a plane other than or in addition to that of bodily presence. Relatedly, resisting the assumption of scarcity probably presumes the acceptance in principle of benevolent limits, limits that, far from amounting to constraints to be overcome, are, in fact, the necessary parameters of our well-being. It is also true, of course, that I’m presuming a measure of agency that is simply not available to everyone in equal measure, but even in such cases the problem is not one of attention scarcity but of unjust or unequal social arrangements. All of this said, however, it still seems to me that one could get rather far in the right direction by refusing to think of attention as a scarce resource.

But it may be, too, that my initial proposition requires a qualification. Let’s put it this way: you and I have exactly as much attention as we need at any given moment, provided that at that moment we also know what it would be good for us to do.

This qualification also stems from Illich’s insights, so allow me to elaborate. His crusade against the colonization of experience by economic rationality led him not only to challenge the assumption of scarcity and defend the realm of the vernacular, but also to studiously avoid the language of “values” in favor of talk about the “good.” He believed that the good could be established by observing the requirements of proportionality or complementarity in a given moment or situation. The good was characterized by its fittingness. Illich sometimes characterized it as a matter of answering a call as opposed to applying a rule.
The idea of the vernacular, the preference for the language of the good rather than values, and resistance to the presumption of scarcity are all related in Illich’s thinking. In 1988, in a conversation with David Cayley, Illich says that he has “become increasingly aware of the question: What happened when the good was replaced by values?” “The transformation of the good into values,” he answers, “of commitment into decision, of question into problem, reflects a perception that our thoughts, our ideas, and our time have become resources, scarce means which can be used for either of two or several alternative ends. The word value reflects this transition, and the person who uses it incorporates himself in a sphere of scarcity.”

A little further on in the conversation, Illich explains that value is “a generalization of economics. It says, this is a value, this is a nonvalue, make a decision between the two of them. These are three different values, put them in precise order.” “But,” he goes on to explain, “when we speak about the good, we show a totally different appreciation of what is before us. The good is convertible with being, convertible with the beautiful, convertible with the true.”

Interestingly, when Cayley asks Illich if the language of the good is recoverable, Illich responds, “Between the two of us, at this moment, yes!”

Between the two of us. At this moment. There is a personal, even intimate quality about the apprehension of the good. Illich goes on to explain how, in that specific case, it is borne out of their mutual friendship and the conditions of their conversation. To draw this more directly into our present theme, Illich and Cayley each have just as much attention as they need in the moment. The joy of their conversation, the resonance of their encounter, to borrow another formulation, may tacitly derive from the sense that there is nothing else to which they ought to be giving their attention in that moment because their attention is ordered toward the good at that moment.

If I may, in turn, speak about this in a rather personal vein: given the contours of my own life, I feel the tensions we’ve been exploring most acutely in relationship to my children. My guess is that if you are a parent yourself, you may be nodding along in agreement. This has especially been true when I have, like so many others, found myself “working” from home. What better illustration, in my experience anyway, of the contrast between the realm of value and the realm of the good?

But the line cannot be as clearly drawn as one might suspect. It is not simply that my work lies in the realm of value and my children in the realm of the good. At a given moment it may be good for me to attend to my work, and at another the good requires that I set my work aside to attend to my children. My point all along has been that I have just as much attention as I need in either case, so long as I can be responsive to what is good and my circumstances enable me to be responsive in this way. The myriad factors that complicate matters do not entail a scarcity of attention.

Re-framings, of course, only get us so far, and I would be happy to hear how this particular reframing of attention serves you, or how it falls short in your view. There is one other path that I wanted to pursue, but my sense is that it would be good (!) to bring this installment to a close.
Next time, however, I will come back to this discussion and consider how attention discourse fails precisely by talking about attention as if it were an abstract resource rather than something that is intimately tied to our bodily senses.
Mar 23, 2021 • 17min

Impossible Silences

Over the years, I’ve thought on and off about silence in the context of digital media. Mostly, this has taken the form of commending what came to be called strategic silence, the idea being that, given the dynamics of the attention economy, it is sometimes better to pass over certain developments in calculated silence than it is to comment on them or even to speak out against them. At other times I’ve commented on how the structure of social media generates an imperative to speak, and how in times of crisis and tragedy this imperative to speak feels especially disordered.

More recently, however, I’ve come to think that it is impossible to be silent online. I don’t mean that it’s really hard and that we lack the willpower to be silent. I mean that it is, quite literally, impossible.

“No, it’s not,” you may be thinking just now. “I do it all the time. It even has a name: it’s called lurking.” I would propose, however, that we distinguish between the mere absence of speech and what might properly be called silence. Perhaps it would be more precise to say that it is impossible to enter into silence online. Saying nothing, in other words, is not the same thing as silence. Silence is felt. It is meaningful. It is not mere negation. In fact, it can be, as we shall see, eloquent. But, and here I suppose is the crux of the matter, this kind of silence presupposes bodily presence. Silence, in the way that I’m encouraging us to think of it, emanates from the body taken whole.

Consider, for example, how even the word we’ve landed on to describe the practice of being online but saying/posting nothing, the word lurking, ordinarily suggests something rather untoward, even sinister, precisely because in non-digitized settings it conveys the sense of watching without being seen, that is, of a presence that hides itself, that fails to materialize in the sight of others.

To be clear, my interest here is not to police how the word silence gets used. You are perfectly within your rights to go on using the word silence however you please. Rather, I’m taking the opportunity presented by digital media to reconsider an aspect of analog experience that might not have been fully appreciated—a generally fruitful exercise, in my view, the point of which, I should add, is simply to see both digital and analog forms of communication more clearly for what they are or for what they can and cannot be.

I have lately been helped in this direction by a couple of short, less well-known pieces by Ivan Illich, a couple of deep cuts, if you will. In the first, “Silence is a Commons,” Illich reflects upon the moment he was brought, within weeks of being born, to his grandfather’s estate off the coast of Dalmatia. He noted that little had changed on the estate since its establishment in the late middle ages. Then, however, something did change. “On the same boat on which I arrived in 1926,” Illich explained,

the first loudspeaker was landed on the island. Few people there had ever heard of such a thing. Up to that day, all men and women had spoken with more or less equally powerful voices. Henceforth this would change. Henceforth the access to the microphone would determine whose voice shall be magnified. Silence now ceased to be in the commons; it became a resource for which loudspeakers compete. Language itself was transformed thereby from a local commons into a national resource for communication.
As enclosure by the lords increased national productivity by denying the individual peasant to keep a few sheep, so the encroachment of the loudspeaker has destroyed that silence which so far had given each man and woman his or her proper and equal voice. Unless you have access to a loudspeaker, you now are silenced.

Illich would retain a lifelong aversion to using a microphone. And, before proceeding any further, let me say that I don’t think Illich is suggesting that there had never been other, more heavy-handed ways of silencing people before the advent of loudspeakers.

The idea of silence as a commons, as Illich described it here, suggests to me a shared space into which you and I might enter and have just as much of a chance of being heard as anyone else. Technologies that augment the human voice empower those who possess them at the expense of those who do not, setting off the escalatory dynamics that eventually generate counterproductivity. To put it that way, as long-suffering readers know by now, is to use standard Illichian terminology. Put otherwise, we might say that tools that augment speech trigger an arms race for ever more powerful tools that do the same until finally everyone is thus provisioned, with the result being that human speech itself begins to break down. To give everyone a loudspeaker is to assure that no one can be heard.

Illich’s anecdote is, of course, a provocative reversal of the usual way that new media tend to be presented as a necessarily democratizing and empowering force, and it seems closer to the mark as the events of the last decade or so have illustrated. The ostensible promise of social media was that anyone’s voice could now be heard. Whether anyone would be listening has turned out to be another matter altogether, as have the society-wide consequences. Naturally, this is not to deny that in some cases social media can serve the interest of marginalized communities. It is only to suggest the rather obvious point that, taken as a whole, its consequences are more complicated, to put it mildly.

“Just as the commons of space are vulnerable and can be destroyed by the motorization of traffic,” Illich went on to argue, “so the commons of speech are vulnerable, and can easily be destroyed by the encroachment of modern means of communication.” Illich was presenting these comments in 1982, and he had emerging electronic and computerized modes of communication chiefly in mind. The issue he proposed to his audience for discussion was this:

how to counter the encroachment of new, electronic devices and systems upon commons that are more subtle and more intimate to our being than either grassland or roads - commons that are at least as valuable as silence.

“Silence, according to western and eastern tradition alike,” he went on to add,

is necessary for the emergence of persons. It is taken from us by machines that ape people. We could easily be made increasingly dependent on machines for speaking and for thinking, as we are already dependent on machines for moving.

There seem to be two distinct concerns for Illich. The first is that we lose the commons of which silence is an integral part and thus a measure of freedom and agency. The second, concurrent with the first, is that you and I may find it increasingly hard to be heard even as we are given more and more tools with which to speak. Alternatively, we might also distinguish between silence as a space of possibility and silence as itself a good to be defended, something we need for its own sake.
Illich is reminding us yet again that what we may need is not more of something, in this case words, but less. I would say, too, that the temptation to be resisted, if I may put it that way, is that of reducing human interaction to a matter of information transfer, something that can be readily transacted without remainder through technological means. This is the message of the medium, in McLuhanist terms: that, becoming accustomed to electronic or digitized forms of communication, I forget all that is involved in being understood by another and which cannot be encoded in symbolic form.

A second early essay by Illich helped me to think about silence from yet another perspective: silence as an indispensable element of human conversation. In the 1960s, Illich was involved in leading intensive language courses in Mexico, mainly for clergy interested in learning Spanish. Illich prepared brief talks for participants, and one such talk has been published as “The Eloquence of Silence.”

The title itself tells you much of what Illich will go on to say: silence can be eloquent; it can, paradoxically, “speak.” As Illich puts it in his opening remarks, “Words and sentences are composed of silences more meaningful than sounds.” Thus, as Illich explains, “It is … not so much the other man’s words as his silences which we have to learn in order to understand him.” And a bit further on, he adds, “The learning of the grammar of silence is an art much more difficult to learn than the grammar of sounds.”

Illich is presenting his students with a view of language learning that renders it an ethical as well as a cognitive undertaking. He says, for example, that “to learn a language in a human and mature way is to accept the responsibility for its silences and for its sounds.” In his brief comments introducing the piece, Illich goes so far as to argue that “properly conducted language learning is one of the few occasions in which an adult can go through a deep experience of poverty, of weakness, and of dependence on the good will of another.”

Illich goes on to identify three different kinds of silences as well as their destructive and degrading counterfeits. I won’t walk us through each of these, but I will draw your attention to a few portions of Illich’s discussion. The primary context for what Illich has to say here is the face-to-face encounter, and we would do well, in my view, to take to heart what Illich has to tell us, even if we’re not presently involved in language learning. It’s obvious that Illich is not just commending a technique for learning a language but a way of becoming a better human being. Recalling where we began, however, I would also encourage us to consider how the forms of silence Illich commends fare in the context of social media and how we might adapt our use and expectations of social media accordingly.

“First among the classification of silences,” Illich tells his students, “is the silence of the pure listener … the silence through which the message of the other becomes ‘he in us,’ the silence of deep interest.” Illich adds that “the greater the distance between the two worlds, the more this silence of interest is a sign of love.” Here Illich is encouraging a silence grounded in humility, a silence that arises not from a desire to be heard but from a desire to hear and to understand.

A second kind of silence is the silence that precedes words, a silence that is a preparation for speech.
It involves a patience that deeply considers what ought to be said and how, one that troubles itself over the meaning of the words to be used and proceeds with great care. This silence is opposed, according to Illich, by all that would have us rush to speak. “The silence before words,” Illich adds, “is also opposed to the silence of brewing aggression which can hardly be called silence—this too an interval used for the preparation of words, but words which divide rather than bring together.”

There was a third kind of silence (“this is the silence of love beyond words,” Illich explains), but I will leave it to you to read what Illich has to say about that.

In both of the former cases, Illich commends virtuous and meaningful silences that pass between people as they seek to be understood, indeed silences upon which meaning may very well depend. While Illich’s comments focus on speakers of different languages, it seems to me that what he has to say should inform how we relate to others even when we share a common language.

But, and here again is the point I’ve been driving at throughout this post, such silences can take shape only when we are in the presence of another, although even then, of course, they may fail to materialize. They seem to me to be the kind of silences that are mutually felt and acknowledged, that are a function not merely of the ceasing of sound but of a body at ease or eyes that remain fixed. These are silences which assure the other that they are being heard, not ignored. Silences that, if attended to closely and with care, disclose rather than veil, clarify rather than obfuscate. They gather rather than alienate.

The frequency with which the kinds of silences we do encounter in the context of digital media occasion anxiety and misunderstanding and invite hostility is striking by contrast. And let me say again that this is not intended as a brief against social media. It is rather a way of exploring at least one reason why the experience of social media can take on such a dispiriting quality. My suggestion here is that it does so, in part, because we are forced to make do without meaningful silences. Maybe it is possible to bring the spirit of such silences to bear on exchanges that unfold on social media, but it seems that we are then working against the grain of the medium, seeking a fullness of experience in the absence of the materiality that sustains it. But perhaps that is all that we can do. To remember, in its absence, the silence that is not merely an absence. At least it seems to me that we are in a better position to proceed if we are aware of what we are missing when we seek to speak and to be heard online.

Coda: Most of this was written over the last day or two. It was finished, however, in the immediate aftermath of yet another tragic shooting, this one in Boulder, Colorado. The precise number of the dead has not been disclosed, but it appears that several lives have been lost. One of my first reflections on silence and social media was occasioned by the shooting at Sandy Hook Elementary in 2012. Nearly ten years later, it seems that we’ve not come very far. At least it seems as if I am still trying to say more or less what I sought to say then:

We know that our words often fail us and prove inadequate in the face of the most profound human experiences, whether tragic, ecstatic, or sublime.
And yet it is in those moments, perhaps especially in those moments, that we feel the need to exist (for lack of a better word), either to comfort or to share or to participate. But the medium best suited for doing so is the body, and it is the body that is, of necessity, abstracted from so much of our digital interaction with the world. With our bodies we may communicate without speaking. It is a communication by being and perhaps also doing, rather than by speaking.

[…]

Embodied presence also liberates us from the need to prematurely reach for explanations and solutions — for an answer. If I can only speak, then the use of words will require me to search for sense. Silence can contemplate the mysterious, the absurd, and the act of grace, but words must search for reasons and solutions. This is, in its proper time, not an entirely futile endeavor; but its time is usually not in the aftermath. In the aftermath of the tragic, when silence or “being with” or an embrace may be the only appropriate responses, then only embodied presence will do. Its consolations are irreplaceable.
