Regarding the question of whether we can trust AI, I will observe that its trustworthiness is an artifact of its relationship with the truth. I wrote a lengthy piece last summer, an adaptation of which will be published in Salvo Magazine in the spring. I contend that language models are different from other forms of AI because their facility with language evokes a spellbound response in human beings. Here are a couple of snippets from my original piece that are relevant to your question:
"So the Judeo-Christian worldview holds that God uses language to create and to reveal, while Satan uses language to distort and to deceive. And it is on this point, the distinction between truth and falsehood, that the question of AI's malevolence hinges...More troubling than whether models are actually even able to always tell us the truth is the spellbinding effect of their facility with language, and how it is being actively exploited by those with an interest in encouraging a sense of awe and wonder directed toward the models themselves...All who perceive that human life is more than material and mechanistic must recognize how profoundly mistaken it is to encourage the notion that something so plainly mechanistic can nevertheless possess motives or agency. The cultural battle over what it means to be human is red hot at just this particular moment. This is no time to inadvertently affirm the delusion that machines are sentient beings."
My original post about all of this can be found here:
https://keithlowery.substack.com/p/is-ai-demonic
"...language models are different from other forms of AI because their facility with language evokes a spellbound response in human beings." I think "Spellbound" is key here. Even when people know that AI is merely sophisticated statistics, they are cast under a spell by its seemingly uncanny powers. Looking forward to reading your piece!
Thank you, Keith, for this excellent excerpt and link. We need more of this kind of insight to turn the narrative in a fresh direction.
I like the use of the word “spellbinding” -- though in a way all human technology does that. We are entranced by extensions of ourselves (that is, technology or media) and are thus open to their effects without realizing it. Only by understanding, thinking, examining, can we stop the action or effect tech has on us. Only by asking what message the medium or media delivers can we parse whether or not the medium itself is good or bad or somewhere in between. Marshall McLuhan liked to say that technology is not neutral; and he’s right (though of course it’s complicated). Think of TikTok for example: even if every video was positive, it would still have a negative effect on us (attention) and preach a false word to us (human relationship and knowledge can happen almost instantaneously).
As far as resisting AI: Does anyone put it better than Wendell Berry? Do something that doesn’t compute. I’ve recently gotten rid of the internet at my house and the feeling that “I cannot be reached here” is wonderful. That and the time and attention to learn piano (from books) and practice singing (with a teacher) and to draw (thanks to printed art lessons) and to play with my kids is so life-giving. We don’t have to play by the rules of our technology-ruled society. We can choose to live differently, choose to find what gives each of us life. Do something that doesn’t compute!
"Do something that doesn't compute" seems to be a great act of immunization against the spell that is being cast. Not sure if you have already come across our post "Sowing Anachronism", but in the comments section you will find hundreds of like-minded people who have found small acts of resistance.
"Resisting" reminds me of going to a protest march. There is something valuable about getting out of the house and expressing simple ideas in public. 'Ceasefire NOW in Gaza!'; 'Abolish the police!' But reality seems more complex, and solutions are rarely found in binaries. Getting rid of the internet and making music instead is wonderfully humanistic. Yet, I am not attracted. If I gave up the internet, I could no longer be a CASA (Court Appointed Special Advocate for a Child). I could not attend meeting for the children I represent. I could not write my court reports. I could not investigate the case without access to the documents found only online. I have to deal the messy reality of the world as it is. I agree that the attempt to find an internet/reality life balance is probably a quixotic goal, but I cannot walk away from modern, and yes, urban society.
I struggle with the complexity too, both positive and negative. The balance is especially difficult as the technologies keep advancing, sometimes by the day. If only we had to deal with the equivalent of say, a steam engine or something like that, for half a century, before having to adapt to something else!
"I’ve recently gotten rid of the internet at my house and the feeling that “I cannot be reached here” is wonderful."
Every time I leave home without my phone I feel like I'm getting away with something. Heh.
Every time I read something by you and Ruth I want to grab a community of people and sit in a room and talk about it... but so many people I know, in person, seem to be edging comfortably and unknowingly towards Machinehood -- and I feel entirely at a loss of what to do about it.
I have a very similar instinct, and our dinner table often seems like "Substack live". We'll continue to consider how we can support people in forming real-life community.
We often feel the same thing, Kristine. Finding that community is, of course, the hard part, and the vital part.
One might try equipping yourself with knowledge like that presented here in this excellent article, and try, try, try to be persuasive (and humble) with those edging toward machine-hood! They are eternal beings right now, and we must bear our cross in being "that friend" who is always keen to speak about the direction of our world!
Yep, right there with you. It's ironic that it's so much easier to find others who want to resist the technology march through our modern technology.
We hear you Alex! This is an issue that has been raised by many readers and will deserve some deeper thought and concrete suggestions.
"starts to lose faith in the experts who are supposed to be guiding us. The panel that night included a speaker from NASA, an industry rep, and two people from academia, but it should have also included, maybe, a plumber, a farmer, a nurse, a mother, a priest, an elderly person, a teacher, a writer, or an artist. People who specialize in real people or real things. People who work with hands or hearts."
This is so important; how much would be made clear if regular people on the receiving end of expertise could reflect it back to the experts. I would like to see a series called something like "Not the Experts" (or maybe something more creative), where the real visionaries of their fields, looking to apply their work to other people's lives, face a layperson's opinion and interpretation.
I recall the Einstein quote, "If you can't explain it simply, you don't understand it well enough." This AI conference sounds like a perfect representation.
The term motivated somnambulism is a lovely representation too.
Thank you for sharing. I find articles by you and Ruth stick with me for some time and show up in my own little creative writings, which, who knows, may one day make it somewhere outside of my notebooks.
Ha, that's a great idea, "Not the Experts" - this is something that should be suggested to the institutions that host these events. Glad to hear that you find our writing thought-provoking, and we hope that it can bear fruit in your own writings as well :)
I am studying for my PhD in social psychology. I come from a world of theory, of words, of coloured graphs on whiteboards. My fellow PhD students' - and many of the professors' - knowledge of human nature is derived from scientific studies, most of which are not replicable. Most have very limited personal experience, and have life stories that involve receiving a lot of As at school and staying in on the weekends. Few have got drunk in another country. Few have been addicted to drugs or alcohol. Few enjoy going on adventures, like hiking or skiing. Few do anything really, aside from talking about psychology and politics, and studying psychology and politics. They live in a perfect little ideological bubble, where utopia comes true in their minds, because it cannot be invaded by the 'common man'.
I believe strongly that these people do not understand human nature. How could someone who couldn't hold a conversation with a random person on the street? Who might even feel disgust at talking about the weather, let alone religious or spiritual ideas?
So no, I was not surprised to see the experts fail wholeheartedly to understand a common citizen's worries.
Another fantastic article, by the way. Keep up the great work. You, as a family, are inspiring. I am glad to have found a small community that shares my fears about technology. Back home, I really am the weird guy with the flip phone. It's nice there are like-minded people out there who have taken far greater steps than me.
Thank you for the encouragement, and reflections. I was a student once as well—alas, I was actually among those who spent too much time in their heads…which is something I am slowly trying to undo, even while technological life keeps pushing me in the opposite direction.
What did you study if you don’t mind?
Wendell Berry has been looming large in my imagination over these last days and weeks. His thinking on the creature vs. machine dichotomy is becoming less and less a hypothetical question and increasingly a choice we must make each moment of every day.
May we choose wisely, also realizing that being a creature does not mean we must fully reject machines, but that we must thoughtfully consider how and why we are using them and be aware of the ways they might be using us.
Agreed Josh. The choices may seem trivial at times (such as whether we take a cell phone along on a walk or not), but each small choice shapes us. It will require wide open eyes and continued recognition, questioning, and discernment to keep our feet firmly grounded in the human realm. Thanks for your articles - I appreciate your thinking as well as the links you provide.
Exactly right. These little choices add up over time—for good or ill. What matters most is what we do most often.
Thanks for the kind words about my writing. I'm glad that you have found some of what I've shared helpful. I continue to enjoy what you are doing here and find our various perspectives on some of the same core, important issues to be encouraging.
"Other experts we chatted with struggled to initially understand even what we were talking about—and once they did, didn’t have much to say."
I attended a talk by a NatGeo VP a few years ago about the new AR/VR stuff the company is implementing, and found that about half the room had the same impression of "expert" understanding. We were told all about the wonders of virtual reality storytelling and augmented reality communication and all these fun-sounding technological advancements being used to make new National Geographic content, but this VP of content and the few AI-tech users in the room didn't actually seem concerned about, or even able to understand, the couple of questions that went any deeper than basic fascination with new tech for its own sake. It's disturbing that this stuff is mostly pushed forward by the average, everyman foot soldier endlessly repeating "but isn't it so cool?" in the face of every single concerning thing the tech can do. Sure, my basic perception of reality is warped and I can use AI media to make up lies convincing enough to destabilize countries, but isn't it kinda cool that it can do that?
There's no reasoning with the logic because there is no logic to reason with. Most people just don't have a notion of the depths we're wading into. It's all facts you can spy from the surface, while the real questions lie beneath.
"It's disturbing that this stuff is mostly pushed forward by the average, everyman foot solider endlessly repeating "but isn't it so cool?" in the face of every single concerning thing the tech can do."
Well said.
"“Attention is the currency of worship”.5 The thing we most give our attention to is, by default, the thing we elevate as the highest and most important of our lives."
The observation that struck me most was that the most compelling (to me) quotes were from early twentieth-century authors, George Orwell and G.K. Chesterton, writing well before the threat/promise of AI entered our conversation. The second was that those who are quickest to adopt it do so with religious fervor, even while fervently opposing traditional religion.
You ask, "Can we trust AI?” Obviously, we can; some do already. Your second question, "What questions do you think need to be asked with regard to AI?" The most important is, "Should we?" To do what? To make us Godlike? Impossible! At least not without the highly unlikely transformation of our moral fiber. Third, "What actions can we take to resist AI harvesting our language, images, art, etc.?" Another highly unlikely outcome. Its only current ability is, maybe, to correct the abuses we have made of them, mainly from having paid them scant worshipful attention lately.
The more I read Chesterton, the more surprised I am by his foresight and insight.
I have the same reaction to him.
I'm a photographer on the side, and the AI question right now is massively important as generative AI engines grow in capability (video is not far behind). There are a great number of simultaneous concerns - the authenticity of what one sees, the harvested data that feeds it (and gets regurgitated back into itself in a sort of "computer centipede" fashion - sorry for that image), the rights to what the AI spits out (the person who gave it the textual prompts? the AI owners?), and the replacement of actual photographic work by concocted fantasies.
One need only see how many, many YouTube channels have started using AI thumbnails and video screens in the last year to begin to understand the impact. The video titles are created, then remade again and again after a video is posted, too. Stock photographers, whose work has fed this beast, are now undone by it, unable to sell their "perfect beach sunset" photos due to easier (and seemingly "free") AI... stuff.
And, as is usual with the tech industry, the Libertarians are closely allied with the techno-zealots, with assurances that "The Market" will somehow "find" ways to re-employ people "liberated" from yet more work - never mind the cost. You can already hear the repetition of old slurs like the "best buggy whip maker". As if "The Market" is some sort of hyper-rational god of fairness and wealth.
How to fight back? How to resist? I wish I knew. Amy Winfrey had one idea:
https://www.youtube.com/watch?v=9vUtgO2dTBs
VFX Firm Corridor, instead, is using AI to help make original things:
https://www.youtube.com/watch?v=_9LX9HSQkWo&pp=ygURY29ycmlkb3IgY3JldyBycHM%3D
The Winfrey video was hilarious. It also made me wonder if something loosely similar to this could happen naturally to some AI systems, if they started to feed on AI-based data (for efficiency’s sake) rather than real-world data. I’m thinking of a kind of data self-cannibalism, where AI feeding off AI-made content could create weird new errors in its system, i.e., images and ideas that don’t really correspond properly to anything real or useful.
Hi Peco, it's very much a thing, and it's called "model collapse".
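A deliberately over-simplified sketch of the mechanism, for anyone curious (this is a toy resampling loop, nothing like an actual training pipeline): if each "generation" learns only from the previous generation's output, rare examples keep dropping out and the data pool steadily loses its diversity.

```python
# Toy illustration of "model collapse" (not a real training pipeline):
# each "generation" is trained only on samples drawn from the previous
# generation's output, so rare examples keep disappearing and the pool
# of data becomes less and less diverse.
import numpy as np

rng = np.random.default_rng(0)

# "Real-world" data: 10,000 observations, essentially all distinct.
data = rng.normal(0.0, 1.0, 10_000)

for generation in range(10):
    distinct = len(np.unique(data))
    print(f"generation {generation}: distinct examples = {distinct}, std = {data.std():.3f}")
    # The next generation sees only a bootstrap resample of the previous
    # generation's output -- no fresh real-world data ever enters the loop.
    data = rng.choice(data, size=data.size, replace=True)
```

The very first resampling step already loses roughly a third of the distinct examples, and the pool keeps narrowing from there. Real model collapse is subtler than this, but the loss-of-diversity mechanism it illustrates is the same basic idea.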
Sounds like a new novel- sequel to Exogenesis? Exploring AI cannibalism could uncover some humor in the dystopia. I’ve heard mocking ludicrous assumptions can help disarm the Machine.
I hadn't thought about that as a sequel, but hmmm...
A couple of what I see as significant AI questions:
If we can replace a large proportion of our work with AI, are there activities that would be a more constructive use of our time?
What work should be done by machines? What work should be reserved for humans?
What happens to people made unemployable by AI?
Is there a certain amount of suffering we need in our lives that we should not eliminate with machines? Are there difficult jobs we need to do ourselves for our own good?
If there is no difference in quality, are we degraded by using intellectual or artistic products – articles or images, say – made with AI?
If we choose to continue to do work that can be done by a machine for the first time in history, what do we tell ourselves about the meaning of our work?
Thank you for the PDF print out. One of the many who prefers reading to scrolling.
You are welcome :) It takes a bit of extra formatting, but it is so much more pleasant to read off paper!
One of my pet peeves about modern online media. It could be solved with a simple line in their programming; it can be done, and you offer proof with your PDF. Formatting for printing that uses less paper and less ink would mean I buy less paper and less ink. Maybe that is the discouraging point of it all: profit.
I try to save space, making fonts and images smaller and margins narrower. No one likes printing off large images that use up loads of ink...
Well done.
It seems humans are either born with or develop a foundational belief system to guide their actions. Whether good or bad, its influence on human societal development depends on persuasion through language or force of power. In the natural realm there is always an opportunity to revisit it, to reinforce or discard it, on human and nature's terms.
Perhaps AI needs a foundational system also: a human/nature/spiritual bill of rights, with a robust feedback and revision system fenced off from human and machine corruption - one that is transparent and evokes our trust. I think I just described the "Prime Directive" lol
What I see now is technology, and particularly AI, rushing forward, rationalized by anti-human and unnatural agendas. And it appears we are powerless to change it, especially if it has evolved beyond its creators' ability or desire to understand it. "Trust us" just doesn't cut it.
Isaac Asimov. The Foundation Trilogy. The Three Laws of Robotics.
There are many things about AI that I find unsettling, and my general response is to want to avoid it as much as possible, even to the extent that it means rejecting the use of certain technologies altogether.
That said, I am curious to know how others are approaching their attempts to keep AI from harvesting their work, their images, etc.
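For what it's worth, one small, partial measure people discuss is asking the known AI training crawlers to stay away via a site's robots.txt file. It only helps against crawlers that voluntarily honor it, and the user-agent names below are just the commonly published ones (worth checking each company's current documentation before relying on them):

```
# robots.txt - request that common AI training crawlers skip this site.
# Only effective for crawlers that choose to honor robots.txt.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```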
Your experience at this conference brings my mind sharply back to the headquarters of the N.I.C.E. in Lewis' That Hideous Strength, and the laughable confusion of Mark Studdock as he does his duty "in service to science." On the one hand it's funny to see loads of highly educated men and women stumble about talking nonsense that they think quite clever; on the other hand it can be a bit frightening to think what such lack of understanding of the deep things of life might cause.
I’ve a couple of thoughts on this, and I’ll try to articulate some of them here.
It seems to me that the fundamental issue that is coming up as the world grapples with these questions around AI and related technologies, is not whether we can trust such technologies, but what we actually trust anything for. (A point well-raised by Frodo Baggins’ friends when he attempts sneaking out of the Shire without them. He wonders if anyone can be trusted, and Merry says that it depends on what he’s trusting them to do. They won’t allow him to go off into the wild by himself, but they can be trusted to stick with their dear hobbit friend through thick and thin.) I trust my phone to ring when my mother calls me; I don’t trust it to initiate any sort of meaningful relationship with her or anyone else on my behalf, nor do I think it would be a beneficial thing if it could.
“All things are permissible. But not all things are beneficial.”
The response to the question of whether or not AI is a good thing all depends on the respondent’s view of good. And in a world where sheer capability and power is often held in highest esteem, it’s no wonder that the “great men” and powers in the world would see technological transcendence as an ultimate good — a way, potentially, to be liberated of the lordship of God, as so many have mistakenly viewed science throughout the years. If the ultimate aim is ultimate power, anything that brings humanity closer to that will be seen as a net positive thing, and questions of “oughts,” as Lewis wrote of in Mere Christianity, will be passed off as irrelevant.
In the garden narrative of Genesis, humanity is offered the choice to seize knowledge simply as one eats a fruit. Rather than trusting in the One who placed them in the garden, who would surely lead them where and at the pace that they should go, they take the easy path — but the knowledge it grants does not bring greater life, only a heavier burden.
Thank you both for sharing your experience of this conference with us, and continuing to shed light on a rather dimly lit subject.
Thanks for these many thoughts, and especially for pointing out That Hideous Strength. I read and thoroughly enjoyed that book last year, although I didn't think of it in connection with the talk we attended, probably because nobody seemed nefarious (as some of the characters in Lewis's novel are). My sense, if anything, was that people were well-intentioned, but just deeply unaware of certain issues.
Yes, I would assume that would be the case; in general, I tend to assume positive intent on the part of just about everyone. The crossover for me would be in the forces behind the N.I.C.E., which understand at a much deeper level the things which the unsuspecting scientists were trying to pry into. (Namely, there being spiritual beings trying to manipulate human activity to their own ends.) I do think that there are far fewer nefarious characters at play on the human side of the situation we find ourselves in.
Re. chalk, the classic chalk essay is T. H. Huxley's (AKA Darwin's bulldog) essay On a Piece of Chalk. http://aleph0.clarku.edu/huxley/CE8/Chalk.html
"A great chapter of the history of the world is written in the chalk. Few passages in the history of man can be supported by such an overwhelming mass of direct and indirect evidence as that which testifies to the truth of the fragment of the history of the globe, which I hope to enable you to read, with your own eyes, to-night... I weigh my words well when I assert, that the man who should know the true history of the bit of chalk which every carpenter carries about in his breeches-pocket, though ignorant of all other history, is likely, if he will think his knowledge out to its ultimate results, to have a truer, and therefore a better, conception of this wonderful universe, and of man's relation to it, than the most learned student who is deep-read in the records of humanity and ignorant of those of Nature."
Thanks for that link Rosie! Will add this to my reading list :)
This is very thought-provoking. I dislike the term AI, as I find it mystifies and anthropomorphises the technology. The same goes for terms like learning, hallucinating, chatting. The essence of the technology I find interesting: it could help facilitate inter-species communication, which could reduce cruelty to animals and temper our anthropocentric view of the world. However, I am with you on how it absorbs our attention away from the numinous and the people around us. I was, and still am somewhat, part of the London hacking community (not breaking into computers, but subverting technology). I still favour the approach of getting into the code and finding beauty in it, but I understand that this is personal. Like everyone observing this change, my views are evolving. I see our technological malaise as stemming from a materialistic, oligarchic philistinism more than from anything intrinsic to the technology. I still believe the utopian notion that technology could liberate us from drudgery to pursue more artistic, intellectual, communal, and spiritual lives. Uploading our consciousness to some digital utopia strikes me as sci-fi bs that is actively dangerous. Believe in a God, or don't, but don't go inventing one. We saw last century what that led to.
Btw, you should check out Jamie Bartlett’s latest podcast on social media. He really does his research and spins a good yarn.
Thanks for the podcast reference. I will check it out. Also thanks for your thoughts, including, "Believe in a God, or don’t, but don’t go inventing one. We saw last century what that led to. "
As usual we're asking the wrong questions, which you highlight very well.