On the KFUO radio show Reformation Rush Hour, Pr. Donavon Riley joined Pr. Donofrio for a show
“to discuss vocation and social media. First the importance of understanding vocation and how we all serve our neighbor in our work and in this, we serve God.
Then the pastors go on to discuss how we should carry our vocations into our interactions in person and on social media.”
It was a great show and I highly recommend it.
It also put me in mind of a chapter that I wrote for my paper on libraries and new technologies, which I gave last March. In the chapter, I discuss some of the deleterious effects of the new internet technologies that we should be aware of. Pastor Riley also talked a bit about Sherry Turkle, and I might do a few more posts in the coming weeks that deal more with her insights (the chapters from the paper that follow this one).
Here it is:
Ethical Issues with Information Technology
Big data is just people in disguise…. Even friendly, consumer-facing Siren Servers ultimately depend on spreading costs to the larger society…. – Jaron Lanier [i]
At this point, we will begin to address some of the ethical issues related to the use of modern “information technology” – in general, is information technology being used in accordance with technology’s rightful purposes, as described above? In passing, I note that technologies always present certain temptations to us, but that with “information technologies” – many arising from those immersed in the MSTM [modern scientific and technological mindset] – the temptation is simply more powerful (much more on this in the next section). Also note that since concerns about privacy[ii], facial recognition software[iii], and all-around “dataveillance” have been covered extensively elsewhere[iv], I will not focus on these issues. Later on in the presentation, we will apply what is discussed here in general to libraries in particular.
Ethical issue #1 – information technology tempts us to overly simplify everything

Jaron Lanier
Jumping off Croll’s comment from the last section: it is true that the optimal three-wheeled device is indeed something that can be determined – by mathematical engineering and testing – and it can thus be labeled a convergent problem. In other words, a variety of solutions are proposed and tested until finally a design emerges which is “the answer” and remains amazingly stable over time.[v] I will take his word for it that much reliable optimization of this kind can occur via computer, without humans doing on-the-ground empirical testing.
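To make the idea of a convergent problem a bit more concrete, here is a minimal sketch of my own (in Python – my illustration, not anything from Croll or Schumacher, and the “optimal radius” is invented): wherever the search starts, an iterative optimizer settles on the same stable answer, which is exactly what makes such problems tractable for machines.

```python
# A toy convergent problem: minimize a single, well-defined cost function.
# Different starting points all converge to the same stable "answer."

def cost(wheel_radius: float) -> float:
    """Hypothetical engineering cost: penalize deviation from an optimum."""
    return (wheel_radius - 0.35) ** 2  # 0.35 is an invented optimal radius

def optimize(start: float, step: float = 0.01, iters: int = 10_000) -> float:
    """Naive hill-descent: move in whichever direction lowers the cost."""
    x = start
    for _ in range(iters):
        if cost(x - step) < cost(x):
            x -= step
        elif cost(x + step) < cost(x):
            x += step
        else:
            break  # neither neighbor is better: we have converged
    return x

# Wildly different starting guesses all arrive at (approximately) 0.35:
print([round(optimize(s), 2) for s in (0.1, 0.5, 2.0)])  # [0.35, 0.35, 0.35]
```

A divergent problem, by contrast, gives us no single cost function to write down in the first place – which is where the next paragraph picks up.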
That said, life is full not only of convergent problems – where, evidently, the need for human creativity and activity is in decline – but of divergent ones as well, where human creativity is definitely needed. What is an example of a divergent problem? To give one easy example, one cannot ask whether more discipline or more freedom in education is the best thing, because the answer must be a complicated mix of both. E.F. Schumacher, in his enlightening little 1977 book A Guide for the Perplexed, looks at life in a rather broad fashion and puts it this way:
“Justice is a denial of mercy, and mercy is a denial of justice. Only a higher force can reconcile these opposites: wisdom. The problem cannot be solved, but wisdom can transcend it. Similarly, societies need stability and change, tradition and innovation, public interest and private interest, planning and laissez-faire, order and freedom, growth and decay. Everywhere society’s health depends on the simultaneous pursuit of mutually opposed activities or aims. The adoption of a final solution means a kind of death sentence for man’s humanity and spells either cruelty or dissolution, generally both… Divergent problems offend the logical mind (italics mine).”[vi]

Many recent analyses resonate with Lanier’s observations
From where I sit, it seems clear that a focus on the technological tends to crowd out the nuances here. Much of this is simply the nature of computer technology: since Turing’s invention of the computer, for example, it has become popular to think of our minds merely as computers – albeit as “wetware” instead of hardware. And as Jaron Lanier points out about computer software, it will always necessarily constrict the world it creates – one must use what has been built into the tools.[vii] I think it is clear that Big data popularizers and proponents Kenneth Cukier and Viktor Mayer-Schonberger are “constricting the world” when they ask, “The possession of knowledge…. is coming to mean an ability to predict the future… the data will take center stage… In a world where data shapes decisions more and more, what purpose will remain for people… or for going against the facts?” and answer with “the spark of invention” – because algorithms would have said that Ford’s potential customers wanted a “faster horse”.[viii] Here, I suggest that truly divergent problems are being passed over – for even with given facts that are not in dispute (sometimes a rare thing!), plenty of problems besides a lack of innovation remain. This is a simplistic approach that sees convergent problems where there is in fact no convergence. And no, I do not think we could say that “facts and inventiveness” would belong in Schumacher’s list above!
Ethical issue #2 – information technology tempts us to push real costs onto everyone else
Lanier also points out another major ethical issue that he sees with modern computer technology, and this has to do with its capacity to concentrate wealth and power. Let me share some of his observations that may not immediately seem to be relevant, but certainly are.
![Lanier: “[terms of agreement] basically do not exist, except for setting the basic rule everyone understands, which is that the server takes no risks, only the users of the server.”](https://infanttheology.files.wordpress.com/2015/02/pic2.png?w=300&h=243)
Lanier: “[terms of agreement] basically do not exist, except for setting the basic rule everyone understands, which is that the server takes no risks, only the users of the server.”
Based on his own personal experiences consulting for various institutions, Lanier notes that some health insurance companies, for example, have always wanted to insure only persons who did not need insurance. With Big data and better computers, those who possess them can now act on this temptation – in his words, you can now “create the perfect insurance company…” Lanier points out that some companies have bigger and better computers that run faster, are better connected, and are built and maintained by the best mathematicians. Those who have the means are thus able to create “intense approximations of wealth and power around giant computers”. He notes that what has happened in the financial world (with disaster-causing things like bundled derivatives), the consumer communications world (Google and YouTube, Facebook), and the political world (election winners all use big computers) go hand in hand: the same story is at work in each.
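To see the mechanism Lanier is gesturing at, consider a deliberately crude sketch of my own (the applicants, probabilities, and cutoff below are all hypothetical – this is not from Lanier’s book): given a good enough predictive model, the “perfect” insurer simply declines everyone likely to file a claim, and the declined risk does not vanish – it lands on the rest of society.

```python
# A crude sketch of the "perfect insurance company": insure only the people
# your model predicts won't cost you anything. The risk doesn't disappear --
# it is simply borne by everyone who was declined.

applicants = [
    # (name, model's predicted probability of filing a claim) -- invented data
    ("A", 0.02), ("B", 0.45), ("C", 0.01), ("D", 0.70), ("E", 0.05),
]

RISK_CUTOFF = 0.10  # with bigger computers and better data, push this lower

insured = [(n, p) for n, p in applicants if p <= RISK_CUTOFF]
declined = [(n, p) for n, p in applicants if p > RISK_CUTOFF]

print("insured: ", [n for n, _ in insured])   # the company's "perfect" book
print("declined:", [n for n, _ in declined])  # society absorbs this risk
print("expected claims kept by insurer:  %.2f" % sum(p for _, p in insured))
print("expected claims pushed elsewhere: %.2f" % sum(p for _, p in declined))
```

The better the data and the computers, the lower that cutoff can be pushed – and the more risk is radiated outward, which is exactly the dynamic described next.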
According to Lanier, if the course of this ship is not somehow adjusted, the end results will be disastrous. And his explanation is convincing: when you try to radiate all of your risk onto someone else, you run up against the fact that society is not infinitely large and cannot absorb it all. “One might think that all the risk has been loaded on to someone else but eventually you will have to pay for it”. “There is no free lunch”, Lanier asserts, drawing an analogy between what he sees happening here and Maxwell’s demon, the famous thought experiment about a supposed perpetual motion machine. “This is what happened with big finance… [this is what happened with] Enron. There is not an infinitely large society that can absorb the risks from these ‘perfect’ schemes.”

Maxwell’s demon: it does not work, because energy is expended in the very act of discernment. No free lunch.
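For those curious about the physics behind the caption, the standard resolution of the demon paradox can be put in one line (my addition – Lanier does not spell this out): the demon must record its measurements and eventually erase them, and by Landauer’s principle erasing each bit of information at absolute temperature T has a minimum energy cost:

```latex
% Landauer's principle: minimum energy dissipated to erase one bit of
% information at absolute temperature T (k_B is Boltzmann's constant).
E_{\text{erase}} \geq k_B T \ln 2
```

The demon’s bookkeeping thus costs at least as much as the work it extracts – discernment is never free, which is precisely the force of Lanier’s analogy.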
In sum, Lanier’s whole book is about how “information should be free” – at least when applied broadly beyond libraries (see the Schumacher quote above)! – is a nice and understandable-sounding slogan that eventually leaves us in the lurch.[ix] He notes that because idealists like him had “insisted that information be demonetized online”, “services about information, instead of the information itself would [inevitably] become the main profit centers”.[x] Therefore, even those who don’t want to play the game have a hard time avoiding it: once everyone else is on Facebook, it is a constant battle to explain why you don’t care to be (as Croll points out,[xi] in the future persons who do not participate in things like this may well be considered suspicious).
Lanier’s own answer to this issue is that people who contribute valuable information on the web should be compensated for the value they contribute.[xii] Noting again that the efforts of real human translators underlie all machine translation, he states that “the rise of inequality isn’t because of people not being needed — more precisely, it’s because of an illusion that they aren’t even there” and that “Big data is just people in disguise”.[xiii] Lanier also takes issue with the idea that companies like Google and Facebook are simply getting revenue from advertising. Rather, we are their product: they sell anonymized data – which is calculated “off the books” – to companies, who then create behavioral models to subtly manipulate “what steps are put in front of you”, rather than communicating with us through what we have traditionally called advertising.[xiv] Again, I will not even get into the tangled thicket that is privacy issues related to Big data here, even though these are certainly at issue as well.[xv]
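Lanier’s “people in disguise” point about machine translation can be made concrete with a toy sketch of my own (drastically simplified compared to real statistical translation systems, with an invented phrase table): the “machine” below contributes no linguistic knowledge at all – every output word traces back to a phrase pair some human translator produced.

```python
# Toy "machine translation": the machine only recombines phrase pairs that
# human translators produced -- the "people in disguise" behind the output.

# Hypothetical phrase table harvested from human-translated documents:
human_translations = {
    "good morning": "buenos dias",
    "my friend": "mi amigo",
    "thank you": "gracias",
}

def machine_translate(text: str) -> str:
    """Greedy longest-phrase lookup over the human-built phrase table."""
    words = text.lower().split()
    out = []
    i = 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try the longest phrase first
            phrase = " ".join(words[i:j])
            if phrase in human_translations:
                out.append(human_translations[phrase])
                i = j
                break
        else:
            out.append(words[i])  # no human ever translated this: pass through
            i += 1
    return " ".join(out)

print(machine_translate("Good morning my friend"))  # -> buenos dias mi amigo
```

Real systems are statistical rather than simple lookups, but the dependency is the same: remove the human-translated corpus, and the “intelligence” disappears with it.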

Play nice and be a “friend”
Ethical issue #3 – information technology tempts us to be more self-centered and to increasingly “commodify” the world
Of course, another issue presented by current information technologies is that their anonymity and ease of use can readily help us “enhance” our self-centeredness and self-justifying tendencies. We live in what seems to be not only an increasingly quantified but also commodified world, where it is easier to treat ourselves and one another like commodities and accessories – where, as a self-help book of 15 years ago put it, we are all about “getting what you want in your relationships”. Going hand in hand with this, it seems to me, is a recent book titled Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts by Carol Tavris. I can’t tell you why she thinks this happens, but I will simply say that the internet – removing us from the real awkwardness of having to physically face other flesh and blood (yes, like the temptations posed by paper before it!) – certainly has the ability to exacerbate things.[xvi]
Speaking of “getting what you want”, Lanier writes in the N.Y. Times that
“A Siren Server gains influence through self-effacement. There is a Zen quality to it. A big computational-finance scheme is most successful when the proprietors have no idea what they finance. The whole point is to make other people take risks, and knowledge means risk. The new idea is to have no idea whether the security you bundled is fraudulent or not.”[xvii]
This is a concrete example of avoiding responsibility, and of “mistakes being made”, but “not by me”. He goes on to say: “The point is to be a computational actor — the more meta, the better — but without seeming, or behaving like, an actor. The digital pursuit of reward without risk happens automatically, at arm’s length. Documents are signed by ‘robosigners,’ and prices are set by ‘price bots.’”
Obviously, any idea of “doing unto others as you would have them do unto you” can be kept at a comfortable arm’s length here as well. Is it too much to say that, increasingly, the subtle message is that others must earn our respect – not only to be seen as a decent human being, but to be treated like one, period? I submit that the increasing necessity for us to “prove our right to exist”[xviii] in each and every sphere of life is a problem that is exacerbated by digital technologies – for example, a recent Pediatrics study talks about how digital devices negatively affect many parents around dinnertime.[xix] Yes, online technologies may occasionally “bring out our best”, but more often than not this is not the case (more in the next section about what might be the deeper reasons for this).
Ethical issue #4 – information technology tempts us to forget how to do traditional yet valuable tasks – and tempts us to avoid attention-developing practices in general

Nicholas Carr
There is one more issue I think should be addressed here, and it can be summed up in the terms “memory” and “attention”. Technology and culture watcher Nicholas Carr has recently been addressing the mass “externalization of knowledge”[xx] – both the “know-that” and the “know-how”. He addresses the concern that much knowledge – perhaps some of it necessary – is being lost as we continue to rely on devices and Big data to help us do our work. As an illustration, he shares the interesting example of Inuit hunters who can no longer rely on the repository of tracking skills hitherto passed on by tradition, due to their current reliance on GPS systems. What kind of responsibility does each one of us have to reflect on which human abilities can be lost and which should not be? We may worry – rightly – about the crippling effects that a terrorist attack could have on the technological infrastructure we rely on, but as Carr reminds us, there are other ways that “all can be lost”.[xxi] Carr warns: “Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.” Of course, it goes without saying that businesses, as opposed to things like schools, have a vested interest in making things as easy as possible for their customers – even if this necessarily means that their customers will thereby miss out on learning skills that would be valuable and beneficial to them. As regards our ability to sustain attention when it comes to things like reading books, Carr also notes in his popular work The Shallows that “deep and concentrated cognitive exercise changes the synapses between neurons and the structures of the neurons themselves”.
And fighting distraction en route to self-discipline is not only a practical issue, but a deep moral one as well – and one we can’t outsource. That said, there are those who think even our conscience can be outsourced – or, to be more fair, supplemented via technical means. Ariel Garten, the inventor of Muse, a wearable, computerized biofeedback device, talks about some of the possibilities of this kind of technology: perhaps in the future, your iTunes device, reading your brain’s electrical signature, will say to you, “I saw you were depressed… would you like this song played for you?”[xxii] Even better, certain devices might be able to convince us, through their gentle nudging,[xxiii] that “I cannot yell at my kids…”. She says that we will have knowledge of ourselves via technology and be able to make better choices. As her interviewer said, “better parenting through thought-controlled computing.”[xxiv]
Why not? Why should I not hope for help and guidance from a technology (or, more specifically, from the ones who created and programmed it) rather than from myself and those who love me? Human life is so messy, after all. As I have been noting, the temptation is very real and very alluring, especially when the MSTM dominates our way of thinking. More on that in the next section.
FIN
[i] Lanier, Jaron. “Fixing the Digital Economy.” New York Times, Jun 09, 2013, Late Edition (East Coast). More from his book: “Automation can always be understood as elaborate puppetry….It turns out… that big data coming from vast numbers of people is needed to make machines appear to be ‘automated.’ Do the puppeteers still get paid once the whole audience has joined their ranks?” Who Owns the Future? New York: Simon & Schuster, 2013, 123-124.
[ii] See Cumbley, R., and P. Church. 2013. “Is ‘Big Data’ Creepy?” The Computer Law and Security Report. 29 (5): 601-609: “Whilst the mere collection of this information can be intrusive, the privacy risks are multiplied when multiple pools of data are combined. However, data combination is one of the central aims of Big Data analysis. It creates valuable datasets, even when purportedly anonymized… One prominent example is Google….” A footnote says: “See, for the example, the analysis of data about users of Everything Everywhere’s network by Ipsos Mori as discussed in Switch on and you become a goldmine, Richard Kerbaj and Jon Ungoed-Thomas, The Sunday Times, 12 May 2013 and Ipsos MORI’s response on 12 May 2013, http://www.ipsos-mori.com/newsevents/latestnews/1390/Ipsos-MORI-response-to-the-Sunday-Times.aspx”
[iii] See http://www.businessinsider.com/advertisers-using-facial-recognition-technology-2013-5?op=1.
[iv] See Cumbley, R., and P. Church. 2013. “Is ‘Big Data’ Creepy?” The Computer Law and Security Report. 29 (5): 601-609, and for information on the NSA see https://firstlook.org/theintercept/2014/02/24/jtrig-manipulation/ Interestingly, this is an issue that unites persons of different political extremes. Lanier notes that the public library is the last place you can learn without being watched, without your data being aggregated. “There’s a remarkable thing about the public library,” he said. “If you go to the public library to learn about something, and you do it with paper books, it’s the only instance in which you can learn in our society today…[where] you aren’t under observation.” “Jaron Lanier on Big Data”. 2013. Library Journal – New York. 138 (19): 18.
[v] For a nice example of this see Schumacher, E. F. 1977. A Guide for the Perplexed. New York: Harper & Row, p. 121. “Various solutions are offered which gradually and increasingly converge until, finally, a design emerges which is ‘the answer’ – a bicycle – an answer that turns out to be amazingly stable over time… because it complies with the laws of the Universe – laws at the level of inanimate nature.”
[vi] Ibid., p. 127. More excellent, and I would say very ethical, observations from pp. 5 and 125: “What we have to deplore… is not so much the fact that scientists are specialising, but rather the fact that specialists are generalizing…. Convergent problems relate to…where manipulation can proceed without hindrance and where man can make himself ‘master and possessor,’ because the subtle, higher forces – which we have labeled life, consciousness, and self-awareness – are not present to complicate matters. Wherever these higher forces intervene to a significant extent, the problem ceases to be convergent”.
[vii] The New York Public Library. 2013. “Jaron Lanier | LIVE from the NYPL.” YouTube video, October 10. https://www.youtube.com/watch?v=aFW9qxKojrE.
[viii] Cukier, K., and V. Mayer-Schoenberger. 2013. “The Rise of Big Data: How It’s Changing the Way We Think About the World”. Foreign Affairs – New York. 92 (3): 28-40, pp. 39-40.
[ix] Barclay, Paul. 2013. Jaron Lanier: Reconstructing the Digital Economy. Big Ideas. podcast radio program. Sydney: ABC Radio National, July 10. http://www.abc.net.au/radionational/programs/philosopherszone/minds-and-computers/3290844
[x] Schumacher, E. F. 1977. A Guide for the Perplexed. New York: Harper & Row, 207.
[xi] OCLCVideo. 2013. “Alistair Croll: Implications and Opportunities of Big Data.” YouTube video, March 13. http://www.youtube.com/watch?v=Ic_BlPesEls.
[xii] I simply cite his ethical point. How this could be done is another issue, and some have strenuously argued that Lanier’s prescription for the solution is untenable and unworkable. I wish I could disagree with them.
[xiii] Again, Lanier constantly notes “the persons behind the curtain”. As regards machine translation, AI in the classic sense does not work, but big data does – and big data is just people in disguise. Lanier notes that we can’t unlock the “formulas” of translation the way Einstein did for space and time. Therefore, the only question is whether or not we acknowledge the amount of value that persons put into the economy. Otherwise, wealth and power will continue to be concentrated around those with the biggest computers.
[xiv] More complete picture (from my notes): We have decided that the only business plan that’s viable in the information space – because we believe information should be free – is to use behavior models of people or behavior models of the world to manipulate the world….that’s a much better description of what companies like Google and Facebook sell than the term advertising…. Manipulating the options in front of you is not like advertising – it is not a communications act – it’s a subtle manipulation of what steps are put in front of you.
[xv] Notes from Davis, Kord, and Doug Patterson. 2012. Ethics of Big Data. Sebastopol, CA: O’Reilly (p. 16): identity (Christopher Poole says it is multifaceted; Zuckerberg says having more than one demonstrates a “lack of integrity”), privacy (funny 1993 New Yorker cartoon: “no one knows you’re a dog on the internet”…. “what right do others have to make [information about one’s identity] public?…” “Can the creation of data about ourselves be considered a creative act? Does our mere existence constitute a creative act? If so, then do not all the legal protections associated with copyright law naturally follow?” [17] “Why do we expect the ability to self-select and control which facets we share with the world online to be the same as it is offline?” [18]), reputation (the ability to manage this online is growing farther and farther out of individual control [18]), and ownership (“do we, in the offline world, ‘own’ the facts about our height and weight?”; does info about us or what we can do “constitute property that we own? Is there any distinction between the ownership qualities of that information?” “As open data markets grow in size and complexity, open government data becomes increasingly abundant, and companies generate more revenue from the use of personal data, the question of who owns what – and at what point in the data trail – will become a more vocal debate” [19]).
[xvi] Fate has always been a way of avoiding personal responsibility, but perhaps it can now be supercharged with technology.
[xvii] Lanier also notes that “YouTube doesn’t take responsibility for checking if a video, before it’s uploaded, violates a copyright. Facebook isn’t culpable if a tormented teenager is driven to suicide.” Lanier, Jaron. “Fixing the Digital Economy.” New York Times, Jun 09, 2013, Late Edition (East Coast).
[xviii] I admit that I fight being cynical about the world, even as I, for religious reasons, have great hope. In a world increasingly focused on the efficient acquisition of commodities to meet our desires, all are expected to prove their worth, and perhaps, in some cases, their case for continuing to be able to exist, presuming that their lives are not ended early on. Real love, as opposed to a “love” rooted only in feelings of what the other does for us, has left the building.
The new Pope has some very insightful words here as well, it seems:
The joy of living frequently fades, lack of respect for others and violence are on the rise, and inequality is increasingly evident. It is a struggle to live and, often, to live with precious little dignity. This epochal change has been set in motion by the enormous qualitative, quantitative, rapid and cumulative advances occurring in the sciences and in technology, and by their instant application in different areas of nature and of life. We are in an age of knowledge and information, which has led to new and often anonymous kinds of power. (p. 45)…
Human beings are themselves considered consumer goods to be used and then discarded. We have created a “throw away” culture which is now spreading. It is no longer simply about exploitation and oppression, but something new. Exclusion ultimately has to do with what it means to be a part of the society in which we live; those excluded are no longer society’s underside or its fringes or its disenfranchised – they are no longer even a part of it. The excluded are not the “exploited” but the outcast, the “leftovers”. (p. 46)
“We have created new idols. The worship of the ancient golden calf (cf. Ex 32:1-35) has returned in a new and ruthless guise in the idolatry of money and the dictatorship of an impersonal economy lacking a truly human purpose. The worldwide crisis affecting finance and the economy lays bare their imbalances and, above all, their lack of real concern for human beings; man is reduced to one of his needs alone: consumption.” (p. 47)
“In the prevailing culture, priority is given to the outward, the immediate, the visible, the quick, the superficial and the provisional. What is real gives way to appearances. In many countries globalization has meant a hastened deterioration of their own cultural roots and the invasion of ways of thinking and acting proper to other cultures which are economically advanced but ethically debilitated.”
Catholic Church. 2013. The Joy of the Gospel (Evangelii Gaudium): Apostolic Exhortation, accessed November 2013, http://www.vatican.va/evangelii-gaudium/en/files/assets/basic-html/page46.html
[xix] See http://well.blogs.nytimes.com/2014/03/10/parents-wired-to-distraction/
[xx] I think that it is not wrong to talk about “externalizing knowledge” per se, even as I think it is better to talk about “knowledge” that does not reside in actual human beings as information, simply in order to highlight the importance of the personal aspect (here, see Michael Polanyi’s Personal Knowledge). The question simply revolves around what we want to make sure remains in our “working memories” as well. Along these lines, E.F. Schumacher offers some general observations about this kind of internal “mapmaking” that seem to me most helpful:
Mapmaking is an empirical art that employs a high degree of abstraction but nonetheless clings to reality with something akin to self-abandonment. Its motto, in a sense, is “Accept everything: reject nothing.” If something is there, if it has any kind of existence, if people notice it and are interested in it, it must be indicated on the map, in its proper place… What is the value of a description if it omits the most interesting aspects and features of the object being described?
Schumacher, E. F. 1977. A Guide for the Perplexed. New York: Harper & Row, 7, 118.
Is all “pattern recognition” valid? Is it reasonable to think that we can creatively “synthesize information” any way we intuit? I submit that reality is not infinitely malleable, i.e., “it can’t be carved up just any way”, as David Weinberger said a few years ago. The University of Chicago sociologist Andrew Abbott shares the interesting observation that [even] in library-based work [history, English literature, etc.], there is “a taste for reinterpretation that is clever and insightful but at the same time founded in evidence and argument.” If we go with Schumacher’s map-making analogy above, it becomes clear that we human beings need very complex maps, because as Abbott says: “Meaning has an extraordinary multiplicity that cannot be easily captured by the rigidly limited vocabularies of variables and standard methods”. Quotes from Andrew Abbott, “The Traditional Future: A Computational Theory of Library Research” [pre-print], eventually published in College & Research Libraries vol. 69, no. 6, November 2008, pp. 524-545.
[xxi] Carr, Nicholas, “All Can Be Lost: The Risk of Putting All Our Knowledge in the Hands of Machines,” The Atlantic, November 2013, http://www.theatlantic.com/magazine/archive/2013/11/the-great-forgetting/309516/. Carr starts out the article talking about the difficulty pilots have had working with automatic pilot in some situations: unaccustomed to flying the plane themselves, some have been unable to recover when the automatic pilot fails, and this has recently led to some major crashes. Perhaps this bit from the article is a preview of his upcoming book, The Glass Cage: “The first automatic pilot, dubbed a ‘metal airman’ in a 1930 Popular Science article, consisted of two gyroscopes, one mounted horizontally, the other vertically, that were connected to a plane’s controls and powered by a wind-driven generator behind the propeller. The horizontal gyroscope kept the wings level, while the vertical one did the steering. Modern autopilot systems bear little resemblance to that rudimentary device. Controlled by onboard computers running immensely complex software, they gather information from electronic sensors and continuously adjust a plane’s attitude, speed, and bearings. Pilots today work inside what they call ‘glass cockpits.’ The old analog dials and gauges are mostly gone. They’ve been replaced by banks of digital displays. Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators….” For more on the difficulties encountered with the automation of fighter jet cockpits, see Bade, David. 2012. “IT, That Obscure Object of Desire: On French Anthropology, Museum Visitors, Airplane Cockpits, RDA, and the Next Generation Catalog”. Cataloging & Classification Quarterly. 50 (4): 316-334.
[xxii] Miller, Matt. 2013. Thought-Controlled Computing. This…Is Interesting. podcast radio program. Santa Monica: KCRW News, July 31. http://www.kcrw.com/news/programs/in/in130731thought-controlled_c (with guest Ariel Garten)
[xxiii] Carr: “I don’t have a microchip in my head – yet,” says the man charged with transforming Google’s relations with the technology giant’s human users. But Scott Huffman does envisage a world in which Google microphones, embedded in the ceiling, listen to our conversations and interject verbal answers to whatever inquiry is posed.
Ceilings with ears. A dream come true.
It’s clear now that Google and Microsoft have to bury the hatchet, if only to collaborate on a system combining the Microsoft Nudge Bra with the Google Ambient Nag. So when the Nudge Bra picks up a stress-related eating urge, the Ambient Nag will be able to say something like, “Do you really want those Twizzlers?”
The voice from the ceiling is only the beginning. Eventually, Huffman suggests, the Ambient Nag will become indistinguishable from the voice of your conscience:
Google believes it can ultimately fulfil people’s data needs by sending results directly to microchips implanted into its user’s brains. … “If you think hard enough about certain words they can be picked up by sensors fairly easily. It’ll be interesting to see how that develops,” Mr Huffman said.
Nicholas Carr, “Voice From Above,” Rough Type (blog), December 12, 2013, 4:01 PM, http://www.roughtype.com/?p=4095.
[xxiv] Miller, Matt. 2013. Thought-Controlled Computing. This…Is Interesting. podcast radio program. Santa Monica: KCRW News, July 31. http://www.kcrw.com/news/programs/in/in130731thought-controlled_c (with guest Ariel Garten)