Part I, from yesterday.
Before jumping into part II, I note that Gene Veith has a piece up this morning about baptizing robots!
Here is part II then, taken from my library technology presentation from last year.
I Think, Therefore You Aren’t?: Philosophical Issues
“The machine does not isolate man from the great problems of nature but plunges him more deeply into them”.
— Antoine de Saint-Exupéry[i]
I suggest we are slowly becoming one with the Mechanical Muse – surprisingly alluring – which, like a physical automaton, can serve as a symbol – a microcosm – of what the MSTM sees (at the very least in practice)[ii] as the cosmic machine, our “final frontier”. But we may ask: is this really a bad thing? Sherry Turkle, for example, may be warning us about the ways machines can seduce us, but on a panel at a recent QUT Robotronica event Dr Christy Dena spoke excitedly about just such a phenomenon, stating: “All you have to do is put two eyes on a robot and people will treat it in a certain way”.[iii]
What is she really getting at here? Let me suggest this: when it comes to determining what is alive, what is a “person”, or what is at the very least equivalent to a human being, all that is felt and thought to matter is what we notice with our senses. As one of the other distinguished panelists[iv] at this conference suggested, we as human beings will discern a robot to be an intelligent and self-aware entity when we say “I would have done that” (“What else do we have?”, even the sophisticated person today asks).[v]
Assuming this is true, what are the practical implications of this? I see two fundamental and related issues here: first, robots and how they “know” us. Second, and following from the first, how human beings are increasingly coming to “know” other human beings through technology.
How do robots really “know” us? Up to this point, I think it is easy to see how. Recently, in an interview with the New York Public Library, Jaron Lanier, when asked to share seven words that might define him, answered in a joking but semi-serious way, “our times demand rejection of seven word bios.”[vi] Doing that, Lanier explained, is a form of disempowerment because “you are creating database entries for yourself [i.e. “putting yourself in standardized forms”] that will put you into somebody’s mechanized categorization system.”[vii] As stated in Don DeLillo’s award-winning 1985 novel White Noise: “…you are the sum total of your data. No man escapes that.”[viii]
This is how robots “know” us. The “useful fiction” for the robot – or, more accurately, for the one programming the robot – is that through a combination of some information about yourself – culled from structured and unstructured data sources – and some workable mathematical models and algorithms, you can be understood insofar as necessary – for the goals they think best (and how can you doubt that they care?[ix]). Yes of course, maybe the maker can’t really understand you on a deep level, but the maker, through the robot, can see evidence of what you do – perhaps even noticing things about your behavior that neither you nor anyone else has.[x] And that is all he needs: taking account of this “works” for him regarding the things he wants to do: sell things to you, prevent terrorism, perhaps even genuinely help you, etc.
I simply note what happens to the maker – and the users – as this kind of technology is embraced more and more: we choose to understand others through the limitations of the robot. Anyone who knows something about the origin of computers should not find it surprising that some who use powerful computers are tempted to reduce what is complex into a false simplicity. Alan Turing invented the computer based on his own idea – his own model – of how the brain operated and how human beings communicated. After the computer began to dominate our lives, it became more and more common to think about the brain – and our own communication as human beings – in terms of the computer itself and computer networks. In academia, this happened in the sciences as well as the humanities. Jaron Lanier even talks about how words like “consciousness” and “sharing” have been “colonized” by Silicon Valley nerd culture.[xi]
Is this a cause for concern? Is this perhaps a major frog-in-the-kettle situation? Can we say that as we increasingly give ourselves to the technology, we see that it is not so much that the robots resemble us, but that we resemble the robots?[xii] Am I wrong to suggest that technology – perhaps particularly computer technology – offers us powers that appear to enable us – like never before – to avoid really knowing and loving persons and things, or at least to avoid knowing them very well?[xiii] Rather, with other human beings, we are ever more tempted to operate by force – applied more lightly or heavily as the case may be – aiming to attain what we want now in a “good enough” fashion, and supposedly with few or no consequences.[xiv]
To many, this evidently does not seem to be something to be overly concerned about. After all, perhaps it is only fair – at the very least it makes sense – that robots might be people too (another “useful fiction” for now?)! And even though you don’t necessarily understand them, you do, after all, use their services, which “work” for you.[xv] Do unto others as they do unto you, you know.
But if “good enough” increasingly becomes the one ring to rule them all, how will our human relationships be affected – and will we be able to keep going on like this? Perhaps that is the question for us. As technology becomes more and more ubiquitous, we see more and more “smart technology” leading to more and more things becoming automatic and robotic. What does this mean for each one of us? “Thank you for becoming a part of the machine?” The evisceration of our souls?
Is it not clear that those who give themselves over to the lure of these kinds of “power tools” – seeking the powers afforded by the technology apart from technology’s rightful purposes – in fact yield to the same pragmatism and reductionism those wielding them are captive to? In other words, are they not ultimately nullifying themselves philosophically, politically, and economically – their value increasingly being only the data concerning their persons… and its perceived usefulness?
The MSTM seems to increasingly be the water in which we swim – are we concerned?
Lanier’s wager and privatized humanism
Jaron Lanier is concerned and here is where his ideas again come into play. I see his book You are Not a Gadget, for example, as very much going against the flow – even the flow of what is generally thought to be knowledge.[xvi] In a recent interview with KCRW’s Matt Miller, Lanier stated that “we are better off believing we are special and not just machines”.[xvii] I see this point as critical in addressing another point he has made: “Clout must underlie rights, if rights are to persist”.[xviii]
And I will call this “Lanier’s wager”, drawing the analogy from Pascal’s more famous one. For me, such words from Lanier resonate with my soul but are, in the end, only slightly encouraging. It seems to me that he is fighting a losing battle with weak weapons.[xix] For these are not the days when most persons know of any real grounding for “the inalienable rights of man”, or even the days of Ralph Waldo Emerson and Henry David Thoreau, when transcendentalism held some sway in the land. After all, even Lanier says, “we are the only measure that we have of the world”.[xx] These are the days when it is at least somewhat reasonable to talk about rights for animals[xxi] (is Lanier a “speciesist”[xxii]?), plants, and yes, even robots.[xxiii] These are the days when there are movements headed by serious intellectuals among the elite of the elites that go by the names of “posthumanism” and “transhumanism”[xxiv].
For in the end, it comes down to this: Jaron Lanier’s humanism, however much better it might be than the views of many, is only a “privatized humanism”.
I submit that the fight cannot be won with a “privatized humanism” but can only be won when hearts and the habits of the heart are fundamentally changed.[xxv] Certainly, there are many who still believe that there is something more foundational about life’s essence than the simplest particles of physics and nature’s laws.[xxvi] That said, even here, the temptation is great even for those who try to hold onto traditional views. For example, in his review of the new book The Second Machine Age, David Brooks says “essentialists will probably be rewarded” in the machine-dominated economy. But whatever Brooks might mean by “essence”, it does not seem to be connected with any classical notion of permanence – i.e. something that is intrinsic, real, and lasting: “creativity can be described as the ability to grasp the essence of one thing, and then the essence of some very different thing, and smash them together to create some entirely new thing.”[xxvii] Sounds rather violent and un-conservative to me![xxviii]
The firm conviction that there really are essences in the world that ought not or cannot be changed – i.e. that there are some boundary lines that should not and in some cases cannot be crossed (at the very least in the long run) – may certainly be seen as confining and suffocating.[xxix] But on the other hand, it can be comforting as well to know that there really are some things we all have in common – and that we can count on.[xxx]
Along these same lines, reading about the all-important topic of education, I recently came across these wise words from one Robin Lewis: “Appreciating [that] some artifacts are good in themselves, and not merely because of what they do for us, is the first step towards a proper appropriation of the liberal arts”.[xxxi]
Indeed. And if that goes for things in general, it goes double for other human beings in particular. Big data[xxxii] and information technology must bow to higher principles – held by human beings who sincerely believe in them. “Good enough” is not good enough for the library’s soul.
So let’s talk now about libraries, technology and the classical liberal arts…
[i] Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton, p. 249.
[ii] Again, historically it has not been uncommon to see the universe, or cosmos, as a machine – early on, in the days leading up to modern science, as a clock, and later on as an automaton. See Cohen, John. 1967. Human Robots in Myth and Science. South Brunswick [N.J.]: A.S. Barnes, pp. 76-78. Nowadays, it is not uncommon to hear serious physicists talking about the possibility of the universe being a computer, or a quantum computer, or a computer program. The transhumanist Hans Moravec (note that Nick Bostrom is a professor of philosophy at Oxford University and is the chairman of the World Transhumanist Association) believes that the whole of our reality is a simulation created by machine intelligences from the future. In the introduction to his book Is God a Mathematician?, Mario Livio talks about how the fastest way to get rid of most pesky persons who want to share their theory of the universe with him is to tell them that they need to be able to express it mathematically, because no theory of the universe is worth anything unless this can be done. It is said in jest, but there is much more to it, I think. The mechanical and mathematical seem to go hand in hand to me.
[iii] Barclay, Paul. 2013. Morals and the Machine. Big Ideas. podcast radio program. Sydney: ABC Radio National, October 3. http://www.abc.net.au/radionational/programs/bigideas/morals-and-the-machine/4881302.
[iv] Professor Gordon Wyeth, Head of School Science and Engineering, Queensland University of Technology.
[v] Barclay, Paul. 2013. Morals and the Machine. Big Ideas. podcast radio program. Sydney: ABC Radio National, October 3. http://www.abc.net.au/radionational/programs/bigideas/morals-and-the-machine/4881302.
Hence the famous Turing test, in which participants discern whether they are dealing with a computer or another human being by taking part in a simple conversation, exchanging messages back and forth.
[vi] This can simply be summed up as imperfect models not representing the world well – or as well as the context demands you should know it – but being forced on it nonetheless. Google made some big claims not long ago, saying that it could trace flu outbreaks with the big data it had, but was humbled later on when it wasn’t able to do so in real time. A Dr. Hansen said the problem was “data without context” and summed the situation up with a quote from the playwright Eugène Ionesco: “Of course, not everything is unsayable in words, only the living truth.” http://bits.blogs.nytimes.com/2013/02/24/disruptions-google-flu-trends-shows-problems-of-big-data-without-context/
I understand the power behind Ionesco’s critique and yet, as a Christian, my view of words is that they are meant to be living and active, life-giving and life-forming. Even when put on paper for safeguarding – perhaps then especially so. For I believe there is nothing less than human about the “technology” of writing. After all, one might memorize the love poems of the beloved, or even better, the Beloved. Yes, [living] context is key.
[viii] “What is most unfortunate about this development is that the data body not only claims to have ontological privilege, but actually has it. What your data body says about you is more real than what you say about yourself. The data body is the body by which you are judged in society, and the body which dictates your status in the world. What we are witnessing at this point in time is the triumph of representation.” (Critical Art Ensemble, The Electronic Disturbance, 1993; quoted in Gitelman, Lisa. 2013. “Raw data” is an Oxymoron. Cambridge, Massachusetts: MIT Press, p. 121.)
[ix] Are computers just not able to love perhaps because they do not have bodies – i.e. that they do not have an “embodied mind” – or is there something else that separates us from them? See http://www.abc.net.au/radionational/programs/philosopherszone/minds-and-computers/3290844
[x] If this sounds cryptic, read this short blog post by Phil Simon about “Big Data Lessons From Netflix”: http://www.wired.com/insights/2014/03/big-data-lessons-netflix/ In short, Netflix knows what kinds of colors are likely to get your attention in the movie and TV series posters they show you.
[xi] We can add the word “ontology” as well.
[xii] One might hope that when it comes to any technological development we would first focus on coming to deeply know and love the world – and to find the best ways to work with it to the mutual benefit of all. In other words, that we would exist in an environment where any technological development is slow, flexible, and constrained. “Permaculture” is a good metaphor here. More often than not however, it seems that we must operate in an environment where technological development cannot be slow. It cannot be flexible. It cannot be constrained.
[xiii] In a November 2013 email message to the author, the now retired University of Chicago librarian David Bade commented: “If we reorient our understanding of knowledge to be what the lover alone knows of the beloved, and that precisely because that knowledge is freely and joyfully shared, knowledge as power is seen to be the lie that it is.” Compare this to Lord Kelvin: “When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.” (quoted on p. 57 of Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton) Also note that if such scientism is a god of this age, it is eros, not phileo or agape (that is, that love which Bade spoke of above), that is another. See the highly insightful essay by philosopher Simon May, “The irresistible appeal of the romantic ideal”, in this Financial Times article: http://www.ft.com/intl/cms/s/0/bf810484-9255-11e3-8018-00144feab7de.html#axzz2w38ZEGCe ; also this fascinating piece featuring a letter from J.R.R. Tolkien to his 21-year-old son: http://www.albertmohler.com/2014/03/11/from-father-to-son-j-r-r-tolkien-on-sex/
[xiv] Even seemingly more humanistic endeavors might seem to occasionally fall prey to language that, in effect, makes human beings and data about human beings equals: “Ribes and Jackson [chapter 8] show the surprising complexities in something as apparently simple as collecting water samples from streams, while they challenge readers to think of scientists and their data as evolved and evolving symbionts, mutually dependent species adapted amid systems ecological and epistemic”. Gitelman, Lisa. 2013. “Raw data” is an Oxymoron. Cambridge, Massachusetts: MIT Press, p. 11 (introduction).
In a world where big data increasingly seems to rule I wonder if this kind of language helps…
[xv] It seems to me that many of us are like fish in the fish tank where all we know is “what works” and “useful fictions”.
[xvi] In other words, to say that the pragmatic approaches that we are discussing here are shortsighted is the least of our problems. Fundamentally, it seems to me that there is a crisis here in belief regarding any true knowledge.
Any real ontology (what is, period) and teleology are gone, and even epistemology (the mind’s apprehension of reality… an analysis of the contents of consciousness… not what is but what is known and how), perhaps kept alive in a post-Christian age by movements like romanticism and historicism, has been eclipsed by a more or less pure and perpetually skeptical naturalism – which means we are left only with the pragmatism that must accompany this naturalism en route to our increasingly unreflective pursuits of happiness (and a little bit of social justice too, of course).
[xvii] Miller, Matt. 2013. Will Google and Facebook Destroy the Middle Class? This…Is Interesting. podcast radio program. Santa Monica: KCRW News, Jun 5. http://www.kcrw.com/news/programs/lr/lr130605will_google_and_face
Lanier makes a similar, but not identical statement in p. 196 of his new book. There he states that while he can’t prove that people are special, “I can argue that it’s a better bet to presume we are special, for little might be lost and much more might be gained by doing so”. Lanier, Jaron. 2013. Who owns the future? New York: Simon & Schuster.
Here is where I must ask: how can this be enough? It seems what is being said here is that we simply need “useful fictions” in order to survive and thrive as human beings. Lanier’s wager seems to me a house of cards – not having the requisite foundation. In other words, it seems to me that Lanier has some very good and true insights, but the intellectual superstructure that can actually buttress them at a deep and satisfying level has been removed. Lanier’s account – while perhaps being more compelling, personal, holistic, and “everyone has a voice”-ish than most – still seems to leave human beings in their position of being just another “cog in the machine”.
I particularly find Lanier’s wager to be severely undercut by this statement from his book: “You are the reverse image of inconceivable epochs of heartbreak and cruelty. Your would-be ancestors in their many species, reaching back into the phylogenetic tree, were eaten, often by disease, or sexually rejected before they could contribute genes to your legacy. The genetic, natural part of you is the sum of the leftovers of extreme violence and poverty. Modernity is precisely the way individuals arose out of the ravages of evolutionary selection.” (p. 131)
Later on, he also makes this statement: “Belief in the specialness of people is a minority position in the tech world, and I would like that to change. The way we experience life – call it ‘consciousness’ – doesn’t fit in a materialistic or informational worldview. Lately I prefer to call it ‘experience,’ since the opposing philosophical team has colonized the term consciousness. That term might be used these days to refer to the self-models that can be implemented inside a robot.” (p. 195)
Lanier talks about predominant Silicon Valley forms of faith on pp. 193-195 of his book.
So some hard questions to think about: other than getting some basic facts on the ground right to ensure survival, what are the non-transcendence-minded person’s strongest incentives (I would say Lanier seems to be a transcendence-minded person) to be as accurate as possible regarding all questions of significance persons have, or care to have, about what is true?
[xviii] Ibid., p. 205. This is the title of chapter 17 of his book.
[xix] Note that “the German philosopher Martin Heidegger developed the theory that technology, as it gradually comes to dominate our world, forces us to see the world in a defined way; a world view in which everything must necessarily be seen as a means to an end and where it is not possible to see anything as valuable in itself… This is in line with the German sociologist Max Weber’s view of development during industrialization. Here he speaks about more and more areas, beginning with working life, but with increasing ripples out to the ‘social world’, being dominated by a rational logic that stems from technology.” Danish Council of Ethics, “Technology in Human Development,” The Danish Council of Ethics, last date of modification not listed, http://www.etiskraad.dk/en/Temauniverser/Homo-Artefakt/Artikler/Kulturhistorie/Teknologien%20i%20menneskets%20udvikling.aspx, accessed Mar. 13, 2014. I have heard about and listened to lectures on both of these men, but have not read any of their works. I am not aware of whether or not they used the same arguments that I have to arrive at their conclusions. In any case, I note that in spite of the power of Heidegger’s critique, there really is nothing positive – not to mention firm and confidence-inducing – that he has to put in its place. One wonders whether that could explain why a man like Heidegger – widely recognized as one of the most influential and brilliant philosophers of the 20th c. – ended up throwing in his lot with the Nazis, a fact whose full extent has only come to light in recent years.
[xx] Miller, Matt. 2013. Will Google and Facebook Destroy the Middle Class? This…Is Interesting. podcast radio program. Santa Monica: KCRW News, Jun 5. http://www.kcrw.com/news/programs/lr/lr130605will_google_and_face (with guest Jaron Lanier)
[xxi] To consider human beings no differently than animals seems to me an extreme position to take. Even radical environmentalists in effect treat human beings as special, because they believe we are uniquely responsible for being good stewards of the world. In any case, when one looks closely at the practices of various kinds of factory farming, such an extreme position becomes, at the very least, more understandable.
[xxii] See the works of the highly regarded and respected Princeton ethicist Peter Singer.
[xxiii] Serious technologists talk about robots having rights. At the Robotronica conference mentioned above, almost all of the panelists talked about how we must be forward-thinking about this from a legal point of view. Yes, there were those who spoke of this simply from a legal perspective: one noted that for liability reasons ships and companies are defined as legal persons, and another pointed out that we have laws that protect companion animals for the sake of their owners (because they are attached to them) and that we should therefore also have such laws for companion robots. And yet another panelist argued that insofar as robots have the potential to be like human beings, they should be afforded that kind of status – we should think about them as a new kind of species. Barclay, Paul. 2013. Morals and the Machine. Big Ideas. podcast radio program. Sydney: ABC Radio National, October 3. http://www.abc.net.au/radionational/programs/bigideas/morals-and-the-machine/4881302.
[xxiv] “In posthumanism, the association of humanity with a “natural” (unenhanced) mind and body is reduced to an ‘accidental’ ‘biological substrate.’ Elsewhere, Hayles argues that by viewing the human as an existence without essence, ‘as a pattern rather than a presence,’ the body can be disposed of, and the mind uploaded to a database; the body, replaced with a cybernetic prosthesis; the mind, enhanced and ‘improved’ using computer software. The line that separates humans and machine, mind and computer is dissolved, and can become anything the designer wishes it to be.” Justin Everett, “The Borg as Vampire in Star Trek”, in Browning, John Edgar, and Caroline Joan Picart. 2009. Draculas, Vampires, and Other Undead Forms: Essays on Gender, Race, and Culture. Lanham, Md: Scarecrow Press, p. 79. For more on transhumanism, see this excellent web article from the Danish Council of Ethics: http://www.etiskraad.dk/Temauniverser/Homo-Artefakt/Artikler/Kulturhistorie/Transhumanisme.aspx
[xxv] Here is where I can only point to something outside of ourselves: transcendence, and particularly the Christian faith centered on the grace of God (I suggest a good study bible, and you can read my blog, theology like a child, for more from me – or feel free to contact me using the “about” page there). At the very least, I am sure that many would agree that we should be curious about the nature of being and consciousness!
Here’s a start in that direction:
In the dawn of life we sense with a perfect immediacy, which we have no capacity or inclination to translate into any objective concept, how miraculous it is that—as Angelus Silesius (1624-1677) says—”Die Rose ist ohne warum, sie blühet, weil sie blühet”: “The rose is without ‘why’; it blooms because it blooms.” As we age, however, we lose our sense of the intimate otherness of things; we allow habit to displace awe, inevitability to banish delight; we grow into adulthood and put away childish things. Thereafter, there are only fleeting instants scattered throughout our lives when all at once, our defense momentarily relaxed, we find ourselves brought to a pause by a sudden unanticipated sense of the utter uncanniness of the reality we inhabit, the startling fortuity and strangeness of everything familiar: how odd it is, and how unfathomable, that anything at all exists; how disconcerting that the world and one’s consciousness of it are simply there, joined in a single ineffable event. … One realizes that everything about the world that seems so unexceptional and drearily predictable is in fact charged with an immense and imponderable mystery. In that instant one is aware, even if the precise formulation eludes one, that everything one knows exists in an irreducibly gratuitous way: “what it is” has no logical connection with the reality “that it is”; nothing within experience has any “right” to be, any power to give itself existence, any apparent “why.” The world is unable to provide any account of its own actuality, and yet there it is all the same. In that instant one recalls that one’s every encounter with the world has always been an encounter with an enigma that no merely physical explanation can resolve. Hart, David Bentley. 2013. The Experience of God: Being, Consciousness, Bliss. New Haven: Yale University Press, pp. 88-89.
This sounds rather intelligent, reasonable and erudite, does it not? And yet, nowadays it seems to me that it is becoming ever more fashionable to speak as Charles Blow does in this New York Times op-ed:
“I don’t personally have a problem with religious faith, even in the extreme, as long as it doesn’t supersede science and it’s not used to impose outdated mores on others.”
He goes on: “But some people see our extreme religiosity itself as a form of dysfunction. In a 2009 paper in the journal Evolutionary Psychology, Gregory Paul, an independent researcher, put it this way: “The level of relative and absolute societal pathology in the United States is often so severe that it is repeatedly an outlier that strongly reinforces the correlation between high levels of poor societal conditions and popular religiosity.” Charles M. Blow, “Indoctrinating Religious Warriors,” The New York Times, January 3, 2014, accessed Mar. 14, 2014, http://www.nytimes.com/2014/01/04/opinion/blow-indoctrinating-religious-warriors.html?ref=charlesmblow&_r=1
[xxvi] There are many who hold to sincere materialist / reductionist positions, and even for many who don’t believe this – or perhaps hold to these beliefs lightly (keeping it as one of their spheres of knowledge that may or may not overlap that much with the others) – it is easy to act like a “functional” reductionist. What I mean is that we start functioning largely with a view to assert ourselves over and against all those we must deal with because we feel we can’t trust them to be all that concerned about us – even if we don’t believe that life’s fundamental essence can be reduced to the smallest individual particles physics is able to discern.
FDR said, “No country, however rich, can afford the waste of its human resources. Demoralization caused by vast unemployment is our greatest extravagance. Morally, it is the greatest menace to our social order”, quoted in Brynjolfsson, Erik, and Andrew McAfee. 2012. Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington, Mass: Digital Frontier Press, p. 65. I would say that there is an even greater menace that but few realize and take seriously – even as all manner of social science can be drawn upon to support this assertion. That is, that having strong natural families – nuclear or extended – is absolutely critical to having a healthy society. Christianity has bequeathed to us an understanding of the individual as having infinite value, and yet the Enlightenment, stealing from Christianity, made the individual important in its own way, and it seems clear to me the natural family has been dissolved largely by this Enlightenment acid. The individual replaces the family as the fundamental unit of social organization, and society cannot ultimately bear such atomization.
[xxvii] Brooks, David. “What Machines can’t do.” New York Times, Feb 04, 2014, Late Edition (East Coast). The full quote about essentialists is this: “essentialists will probably be rewarded. Any child can say, “I’m a dog” and pretend to be a dog. Computers struggle to come up with the essence of “I” and the essence of “dog,” and they really struggle with coming up with what parts of “I-ness” and “dog-ness” should be usefully blended if you want to pretend to be a dog.”
I note that Nicholas Carr talks about essence (actually substance) in his review of Andrew Keen’s book, Digital Vertigo, cleverly noting that “substance is more important than being transparent”. And yet here too, in this context, substance, or essence, does not necessarily refer to stable things that last, but rather to the matter of personal integrity.
This quote from a Stanford humanist is telling: What I want to say… is that there is probably no way to end the exclusive dominance of interpretation, to abandon hermeneutics… in the humanities without using concepts that potential intellectual opponents may polemically characterize as “substantialist,” that is concepts such as “substance” itself, “presence,” and perhaps even “reality” and “Being”. To use such concepts, however, has long been a symptom of despicably bad intellectual taste in the humanities; indeed, to believe in the possibility of referring to the world other than by meaning has become synonymous with the utmost degree of philosophical naivete – and until recently, few humanists have been courageous enough to deliberately draw such potentially devastating and embarrassing criticism upon themselves. We all know only too well that saying whatever it takes to confute the charge of being “substantialist” is the humanities on autopilot (Gumbrecht, Hans Ulrich. 2004. Production of Presence: What Meaning Cannot Convey. Stanford, Calif: Stanford University Press, quoted in Armin Wenz. 2013. “Biblical Hermeneutics in a Postmodern World: Sacramental Hermeneutics versus Spiritualistic Constructivism.” Logia 22, no. 3: pp.?)
This is someone who is in the belly of the beast, so to speak, and these seem to be his conclusions about what is necessary to counter the more pernicious and reductive aspects of what has been called the "technological imperative" (if it can be done, it will be done, and should be done).
[xxviii] “In his essay ‘Farewell to the Information Age,’ linguist Geoffrey Nunberg notes the shift in the nineteenth century from understanding information as the productive result of the process of being informed to a substance that could be morselized and extracted in isolated bits.” Nunberg, Geoffrey, in The Future of the Book, ed. Geoffrey Nunberg, Berkeley: University of California Press, 1996, 103-138, quoted in Garvey, Ellen Gruber, “Facts and FACTS : abolitionists’ database innovations”, in Gitelman, Lisa. 2013. “Raw data” is an Oxymoron. Cambridge, Massachusetts: MIT Press, p. 91
[xxix] Can we at least agree that all people everywhere are universally endowed with at least some shared concepts: e.g. “thirsty”, “clouds”, “tears”, “sad”, “food”, “mother”, “father”, etc. – and that this has great significance for us as human beings?
[xxx] This is similar to the dilemma faced by the secular Jew, Arthur Leff, who said the following:
"I want to believe – and so do you – in a complete, transcendent, and immanent set of propositions about right and wrong, findable rules that authoritatively and unambiguously direct us how to live righteously. I also want to believe – and so do you – in no such thing, but rather that we are wholly free, not only to choose for ourselves what we ought to do, but to decide for ourselves, individually and as a species, what we ought to be. What we want, Heaven help us, is simultaneously to be perfectly ruled and perfectly free, that is, at the same time to discover the right and the good and to create it." Leff, Arthur Allen. "Unspeakable Ethics, Unnatural Law". Duke Law Journal. 1979 (6): 1229-1249, p. 1229.
And Nicholas Carr talks more about matters of essence, or nature, in a different, but perhaps related, context: "One of the advantages of embedding culture in nature, of requiring that works of reason and imagination be given physical shape, is that it imposes on artists and thinkers the rigor of form, particularly the iron constraints of a beginning and an ending, and it gives to the rest of us the aesthetic, intellectual, and psychological satisfactions of having a rounded experience, of seeing the finish line in the distance, approaching it, arriving at it. When we're in the midst of the experience, we may not want it to end, we may dream of being launched into the deep blue air of endlessness, but the dream of endlessness is only possible, only has meaning, because of our knowledge that there is an end, even if it is an arbitrary end, the film burning in the projector…
The inventors and promoters of hypertext and hypermedia systems have always celebrated the way they seem to free us from the constraints of form, the way they seem to reflect the open-endedness of thought itself and of knowledge itself. Said Ted Nelson: ‘Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial.’ He did not mean that as a compliment.
But even though we read ‘forced’ and ‘artificial’ as negative terms, there’s much that’s praiseworthy about the forced and the artificial. Civilization is forced and artificial. Culture is forced and artificial. Art is forced and artificial. These things don’t spring from the ground like dandelions. And isn’t one of the distinctive glories of the human mind its ability to impose beginnings and endings on its workings, to carve stories and arguments out of the endless branching flow of thought and impression? Not all containers are jails. Imposing form on the formless may be artificial, but it’s also liberating (not least for giving us walls to batter).”
Nicholas Carr, “No Exit,” Rough Type (blog), October 29, 2012, 10:28 AM, http://www.roughtype.com/?p=2019.
[xxxi] Phillips, Robin. 2013. "More Than Schooling: The Perils of Pragmatism in Christian Attitudes Toward the Liberal Arts". Touchstone, Sept/Oct 2013, accessed Mar. 2014, http://www.touchstonemag.com/archives/article.php?id=26-05-028-f
[xxxii] When it comes to big data, Lanier sees the fundamental issue as one of honesty: we can't really be honest about what all the big data out there means when the powerful Siren Servers that control that data have a vested interest in using it for their own purposes. Barclay, Paul. 2013. Jaron Lanier: Reconstructing the Digital Economy. Big Ideas. Podcast radio program. Sydney: ABC Radio National, July 10. http://www.abc.net.au/radionational/programs/philosopherszone/minds-and-computers/3290844 I think that is a noteworthy point, but I also think that there is something even deeper going on here, as I have argued – something to be aware of, and honest about.