Nothing vast enters the life of mortals without a curse. – Sophocles
I wouldn’t normally write an article on a Sunday evening when I really ought to be lounging on the couch with a mug of green tea, but something’s come up and it can’t wait.
Anyone who’s written an article on Substack knows how much time it can take, between the idea, the research, the writing, the editing, and the photos or artwork. I’m fortunate because I often work together with my Substack partner (and wife, Ruth), but even so, the process is slow and consumes much free time that might otherwise have been usefully spent with my children, reading, folding mountains of laundry, or cleaning out the straw in the chicken coop.

This particular Sunday evening, as I casually pondered ideas for my next article, I discovered a new app from Google for creative content generation. We’ve all heard about AI platforms that generate creative text outputs, like essays and stories, but I hadn’t yet heard about NotebookLM, which can do something remarkable: create podcasts out of uploaded text.
About a year ago, Ruth and I published an article on the 3Rs of Unmachining, which is about how to manage digital technology in our lives through the three basic principles of Recognizing the harms of technology, Removing unwanted tech from our environment, and Returning to more human ways of living.
The 3Rs was one of our most popular pieces.
So out of curiosity I uploaded the PDF of this 4000-word article into NotebookLM, and then I went for a quick bathroom break. By the time I’d flushed the toilet and returned to my computer (I think there’s some weird symbolism in that), NotebookLM had produced a lively 8-minute podcast in which two hosts, a man and a woman, discuss the article. There’s bantering, reflection, little expressions of surprise, clever turns of phrase, and all sorts of other things you might expect when two people are chatting online about a cool idea.
I was floored. I played it to Ruth, not letting on where the podcast had originated. She was curious how the hosts had come across our writing and asked what show it was. When I revealed that this was created in a couple of minutes by an AI program, she was repulsed (I think she actually said “That’s disgusting!”).
If you want to give it a listen, here it is. Remember, this took about 120 seconds to make. The voices are all AI.
How ironic that two machine-generated entities are talking about strategies for being more human in a machine world. At the 4:04 mark the male AI host even recalls a childhood memory, and at 4:49 talks about how he wrote his grandmother a letter this week rather than texting. You can even hear him take an in-breath at 6:46.
Sure, if you know ahead of time that the voices are AI, you can start to pick up the shimmers of unreality. It’s all a bit too polished, a bit too clean. But I’m confident that our friends at Google will figure out how to rough up the edges in a year, or maybe by tomorrow, and make it so realistic that it will not just pass the Turing Test but smash the Turing Test.
And it could also smash the creator economy.
Just as unskilled writers can get AI to produce passably good writing, NotebookLM is an easy way to create marketable podcasts with almost no human effort.
If you’ve spent any time at all on Substack, you soon notice that some authors can publish a couple of times a week. If you’re a decent enough writer (or podcaster), that might be enough to attract a following and generate a small income stream for your hard work. You probably won’t get rich, but it’s a nice reward for the toil.
AI content generation tools bring the risk of flooding the market with a lot of cheap goods that can undercut even the most productive human creators. And in the case of writers like myself—who publish fairly infrequently—the risk is greater. Invariably, the question that haunts me, and that ought to haunt all of us, is this: Why should anyone go through all the work to create a deep and eloquent essay or podcast, when somebody will use AI to do the same thing more eloquently and deeply—and in the same time it takes them to go on a bathroom break?
Some writers, Ruth and I among them, along with several others, have taken a clear stand on rejecting the use of AI in all aspects of writing and creating. In a recent note Ruth commented:

In Switzerland bakers in large grocery stores prepare bread in full sight, kneading, braiding, and scoring it by hand. Would it be faster and more efficient to do by machine? Yes. But they decided to uphold tradition because people care about provenance - it matters where things come from and how they were made. This is as true for bread as it is for words.
Language is what makes us human. Once you abdicate part of the writing process to a soulless machine you compromise your voice. As a reader I want to read words and ideas that have been woven and crafted by a human. If I know that AI was used in the process, not only do I lose all interest, but the writer loses credibility in my eyes.
For me this door is firmly shut. I draft all articles by hand, type them up, print them out, read them through together with my husband Peco, and edit them by hand again. 100% human-made. (Even our logo was hand-drawn by my then 10-year-old son.)
Substack is growing fast, yet the tools for creating the equivalent of Rolex knock-offs of its main products are growing even faster.
Substack needs to act on this issue quickly if it wants to protect the authenticity of its content and safeguard the trust of its readership. Maybe we need to get authors to sign a declaration indicating whether they use AI in the generation of their content? Or maybe Substack can introduce an app to help detect whether an author’s works appear to have been AI-generated?

I’m not suggesting that people who use AI shouldn’t be allowed to publish on Substack; rather, that without some form of intervention, the power of AI will undermine many people’s faith in the platform.
No matter how good AI gets, many of us will always want articles and podcasts created by human beings. You might call it “artisanal writing”, or “artisanal content”—though it’s also a bit ridiculous that we might have to label something to identify it as authentically human. Like putting a sticker on it that says “Organic”, or in this case “Made by an actual Soul”.
Substack has effectively upheld freedom of speech. The question now is whether it will also stand up to protect uniquely human speech.
The AI curse is coming. Are we ready to safeguard the creator economy?
I'm a photographer, and over the last two years Adobe, whose software tools dominate the industry, has on the one hand pandered to photographers about how its tools can not only improve the photo editing process but even automate it, doing in seconds what used to take hours in Photoshop - and on the other hand run endless commercials to other businesses about how Photoshop can eliminate the need for actual photographers entirely. Their terms of service have changed too: increasingly their tools run not on local machines but on Adobe's servers, and they have said they have the right to review your entire catalog of work the moment it touches those servers. This violates any number of corporate and client confidentiality agreements, though Adobe defends the practice by claiming it is to stop child porn and similar things, and they pinkie swear that your own work isn't being used to train their AI and won't be stolen. I don't think anyone believes them, but many feel they have no choice, having used nothing but Adobe tools for 20 years; moving to another system is difficult (especially when so many systems now, like Adobe's, only rent the software to you and likewise are suspected of siphoning off work).
Adobe's two-faced advertising certainly put a lot of photographers on edge - it would be like Ford offering to sell you smart cars on the one hand, but then lobbying the feds to ban private car ownership in hopes that Ford will be granted a national monopoly on building automated taxis.
Jonathan Pageau has been saying we're entering "Clown World", where we increasingly are unable to tell what is real and what is fake, which is dangerous turf. That world breeds cynicism on the one hand, and desperation to believe *something* on the other.
Worse yet, the need to "touch grass" as it were, and know that at least something somewhere is real, is driving tech towards a universal centralized identity system. I can see this in how increasingly I cannot actually conduct my business without having my phone on me, since my phone has biometric ID verification with my face. I am always being pressed to use "passkeys", which require me not only to log in with the usual name and password on a website, but then ALSO to show my face to my phone in some companion ID system, just to prove I am who I say I am. I cannot even file certain legal documents with my state government online without using my phone now.
This is extending to the next generation of digital cameras too, where Leica and a few others are starting to use authentication chains that begin with cryptographic hashes embedded in photo files, allowing a photographer to prove a photo's historical provenance - without a hash traceable to one camera and one photographer, you'll not be able to sell your work as "authentic". This will start with photo-journalism, where fakery is already a massive problem, but I expect it to spread to other outlets over time, except where fantasy itself is the desired "product". I expect, too, that this will soon be tied to the ubiquitous phones to further lock down identities.
And this is where it is going long-term: to shut down "clown world" and AI spoofing where it is not desired, you will have to be entirely visible and (worse yet) your entire history will be exposed and auditable, not just like the bakers behind the counter, but always and to all - especially the government. I'm not sure how to escape this.
[Apologies if this comment is too long. The fourth paragraph sums up my thoughts.]
I read this last night and then had a long conversation with my wife about the economic and creative ramifications of AI's inevitable domination of "content creation." [I hate the term, but so far haven't been able to come up with a better one that covers the whole gamut from "fine art" to "mere consumables."] My wife was far less concerned about my economic fears (I work as a hand-craft repairman, so my trade was destroyed by machines about a century before I was even born). Her concern was more for what AI will do to our ability to distinguish truth from falsehood online.
Lo & behold, this very morning my wife stormed into the breakfast room in a lather because she had just visited a permaculture forum that she frequents (yes, we're crunchy). The topic à la mode was an article on Substack purporting to have found a way to make crossbreeding of certain plants easier with applications of concrete dust and/or MSG powder during pollination. My wife was incredulous, as many people were chiming in and saying what an amazing breakthrough this could be without having first tried the steps outlined in the article. The idea seemed far-fetched. A quick look at the original stack did not pass the smell test: poor compositional structure, no external sources, and instructions that were simply impossible to follow because of logical inconsistencies. A detailed web search provided no corroboration for the technique, or similar paths of research in any agricultural circles. And the image on the post was AI-generated.
I looked at the "stack" in question and it seems to exhibit all the superficial signs of being AI generated content. Not being a computer person, I can't forensically prove that it's so, but either way the content is not only useless to serious permaculturists, it's downright fallacious. The part that scares me is how much traction it was getting on a web forum that claims to be for serious, rational practitioners of an agricultural science. It will all blow over soon, as do all fads on the interwebs, but I think this reveals an underlying issue.
When we no longer take the time to consider the origins of the content we consume, we have already devalued the quality of the content to the point where AI's inevitable domination of "content creation" is redundant. There are AI-generated substacks and podcasts out there, and they exist because the majority of the public no longer cares where they get their information. I am a bookbinder. I do not make "new" books because I cannot economically compete with publishers printing thousands of cheap volumes by assembly line overseas. Instead, the vast majority of my business comes from individuals who want rare volumes preserved, or books and bibles with sentimental value rebound and kept "alive" for a few more decades. I hope we never get to the point where human-created content is viewed as a "bespoke service" only for those who can afford to pay for something that is made by a person. But I can foresee a world where the majority of consumable culture is made by algorithm and consumers are perfectly happy to have nothing else.
Hats off to Peco and Ruth for reminding us of what we should value, and what - God forbid - we may soon be losing. Apologies if this seems too much doom-laden cynicism. I can't help it. I see nothing in human history that would give me hope for a reversal against the onward rolling juggernaut of The Machine.