According to recent research, populists are gullible and inclined to believe in conspiracy theories. When I mentioned this to a populist I know, she replied, “Maybe there’s a conspiracy behind it?”
She was joking, sort of.
Her actual concerns, when I explored them, turned out to be reasonable. For instance, she wondered:
- Who were the researchers behind the study?
- What were their social and political values?
- Were they able to see their own bias when they designed the study?
These are good questions, of course, and rather important, because bias is built into the field of psychology, with only 6% of social and personality psychologists identifying as conservative—and that statistic is from 2011, before Brexit, before Trump, before a lot of things that changed the world.
The situation is so bad that in “decisions ranging from paper reviews to hiring, many social and personality psychologists said that they would discriminate against openly conservative colleagues. The more liberal respondents were, the more they said they would discriminate.”
In a world like that, who needs conspiracy theories?
And in a world like that, maybe it’s gullible to believe the research on gullibility?
Well, I wouldn’t go that far, but it’s a caveat to keep in mind. Bias infects everybody’s thinking to some extent, and bias, in turn, can both lead to gullibility and feed it to others.
Serpents and intuitions
The first recorded conversation in the Bible is a lie told by a serpent to a woman in a garden, and the second recorded conversation is a man talking to God and blaming the woman for the problem caused by believing the lie.
We live in an age of lies, but it isn’t a new thing. The garden is gone, but the world remains, wrapped in a boa constrictor we call the Internet, whose main occupation is to strangle our awareness of anything but what it wants us to see.
And we are complicit in the strangulation. A long time ago, if you heard a dirty little rumor, say, about what the prince did with the scullery maid, you might whisper it to your friend over a flagon of beer, and it might not have gotten much further than a chuckle. Today, Internet whispers are the butterfly wings that give rise to conspiracy theories and worldwide virtue-signaling movements.
Information breeds the distortion of information; the more of the first, the more of the second. What kind of distortion should we fear? All distortion, you might say, but especially lies. But not all lies are the same. A lie that feels familiar is harder to spot than one that feels unfamiliar. It’s easier to see through a story that’s half true and half false than one that’s 90% true and 10% false.
I fear leaders who say obviously outrageous things, but I fear more the smooth-talkers who empathically assure me they have my best interests at heart. The first sort of person wants to sell me something dumb or dangerous; the second wants to get deep in my soul, and twist it.
Still, despite my wariness, I’m gullible to an extent, and we all are. According to the research I mentioned earlier, populists are more gullible because they tend to rely on their intuitions, rather than on reason or knowledge. The authors point out:
This is consistent with the general notion that people’s first intuitive response after comprehending a proposition is to believe it, and unbelieving a proposition requires mental effort.
Daniel Kahneman makes much the same point about human beings in general. Our intuition is a fast-thinking system, with a tendency to race ahead of the rest of the mind, uncritically accepting anecdotal and personal information as true. Familiar information also moves more easily through this system, as does repeated information.
We can think of most social media as a fast system of collective human thought: quick brief messaging, charged with emotion, getting flicked into life by billions of twitching thumbs and fingers. Social media, therefore, is a global gullibility generator, inviting us to believe an endless stream of repeated propositions, and overwhelming the mind’s capacity to think critically and to unbelieve.
The blind and the paranoid
Liberals and conservatives may differ in the kinds of information they accept and believe, and this may be due to differences in how they experience and perceive the world. Liberals tend to score higher on a personality trait known as openness to experience; they are more open to change and novelty, whereas conservatives are more guarded and sensitive to threat.
Conservatives, on the other hand, tend to score higher in conscientiousness, meaning they are more prone to efficiency, orderliness, stability, and diligence.
These are generalizations, of course. We can all probably think of people who don’t fit the mold or who possess elements of both groups. There’s also recent research suggesting conservatives aren’t uniformly more sensitive to threat, and that threat sensitivity depends on context.
With those caveats, let’s explore these themes a bit.
If conservatives are indeed more vigilant to threat, they might also serve as society’s radar, equipped to foresee dangers and prepare defenses. The downside is that the conservative radar can be over-sensitive, resulting in an elevated number of false alarms (“false positives”). As a result, they may identify threats that aren’t truly threats, such as viewing certain immigrant groups as inherently suspicious.
In contrast, if liberals are more open to experience, it means they’re less prone to perceive threat. Their radar is weaker. They are susceptible to false negatives. They might feel that’s okay, because that way people who aren’t really threats—newcomers to the country, marginalized identity groups, etc.—don’t end up being labelled as threats and therefore victimized.
By the same token, liberals may be less likely to anticipate and foresee true threats. Conservatives think liberals are blind, and liberals think conservatives are paranoid. They are like two halves of a brain that might work well together, if they would listen to each other; but the political corpus callosum that ought to connect them has grown thin, and increasingly they function like a split-brain patient, each hemisphere experiencing the world through its own consciousness.
Conservatives and liberals, then, are both gullible, only in different ways. They diverge in how they process information and in the kinds of stories they are inclined to believe. The real problem for each group is not the other group, but the failure to communicate and cooperate with the other, turning further inward instead into their own social and political hemispheres.
Spinning stories
By retreating inward, liberals and conservatives make themselves susceptible to being manipulated by people who recognize what resonates with them. The media knows this. Politicians know this. Stories get spun, and the masses start spinning with them.
Perhaps the media and the politicians hope that if a story spins fast enough, it’ll be strong enough to pull in people from the other side; strong enough, even, for one hemisphere to swallow up the other, so that only one side gets to tell the story, only one side gets to define and control social and political reality.
I admit that sounds like a conspiracy theory. In fact, what I’ve described is one of the essential elements of populist thinking, which assumes that there is no difference between political parties, and that they’re really an oligarchy of “evil elites”—or possibly just “greedy elites”—trying to control “the people”.
Certainly, conspiracy theories of one kind or another are a way to account for the often-coordinated messaging we seem to see in politics, media, and culture. But is it the only way?
Conspiracies or triangles?
A few articles ago I wrote about Mattias Desmet, author of The Psychology of Totalitarianism, and his theory of mass formation narratives—the powerful ideological stories that unite people in a heroic struggle against an object of anxiety.
Desmet also proposes an alternative to conspiracy theories to explain the coordinated behavior of the masses. His analogy is the Sierpinski triangle, which you can make like this:
Put three dots far apart on a sheet of paper. Randomly put a fourth dot on the sheet, anywhere you like. Then take a ruler, measure the distance between this fourth dot and any of the three other dots and divide it by two; put a new dot there. Measure the distance between this new dot with any of the three initial dots (randomly indicated) and divide the distance again by two, put a new dot there. Repeat this process a few hundred times and you will witness an astonishing phenomenon. You will see that, from the nebula of points, a Sierpinski triangle will arise—a fractal pattern that, from its overall composition to its tiniest detail, shows an identical pattern…
Desmet points out that a naive viewer who saw such a triangle without knowing how it was made might suppose that the person who drew it had a plan for what the drawing was going to look like. The naive viewer might never guess that, in fact, the emergence of such a seemingly coordinated design could occur through the repeated random application of very simple rules (i.e., “measure the distance between the new dot with any of the three initial dots…” etc.).
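Desmet’s recipe is a well-known construction sometimes called the “chaos game”, and it is simple enough to sketch in a few lines of Python (the function and variable names here are my own, not Desmet’s):

```python
import random

def chaos_game(corners, steps=500, seed=0):
    """Play the 'chaos game': start from an arbitrary dot, then
    repeatedly move halfway toward a randomly chosen corner.
    The points settle onto a Sierpinski triangle."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()  # the arbitrary fourth dot
    points = []
    for _ in range(steps):
        cx, cy = rng.choice(corners)       # pick one of the three dots at random
        x, y = (x + cx) / 2, (y + cy) / 2  # halve the distance to it
        points.append((x, y))
    return points

# Three dots placed far apart on the "sheet of paper"
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
pts = chaos_game(triangle, steps=500)
```

Each new dot obeys only one trivial local rule, yet plotting `pts` (with a few hundred or thousand steps) reveals the fractal pattern Desmet describes; no planner is required.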
By analogy, Desmet argues that the same thing can happen in a society if enough people believe and follow a narrative. For example, the narrative might be: “This virus is very dangerous for everybody, therefore everybody wear masks and stand two meters apart”. If enough of the population adheres to these simple rules, then a consistent pattern of behavior will arise across individuals, groups, communities, and nations—patterns embedded within patterns—so that it seems somebody is behind the scenes, coordinating the entire event with meticulous control.
Desmet notes that Gustave Le Bon made the same observation a century ago, long before the age of mass communication:
In the crowd, the individual soul is replaced by a common group soul…The crowd acts in a coordinated way and repeats the same slogans. It engages thoughts and expressions that spread through its ranks at lightning speed (Le Bon described the “contagiousness” of thoughts in a crowd). Every segment of society participates in that pensée unique [conformism to the ideology]—politicians, academics, the press, experts of all kinds, judges, and police officers. In this way, the masses give the impression of a highly organized phenomenon.
The Sierpinski phenomenon doesn’t disprove the possibility that conspiracies exist. But it does suggest that coordinated messaging on a mass scale, whether in physical crowds or Internet crowds, is often driven by more mundane forces, specifically a zealous conviction in a particular ideology that has virally spread through a large group of people.
It also means, contrary to our intuitions, that the leaders of ideological movements are less powerful than we imagine. It’s often supposed, for instance, that populist leaders are the cause of ideologies; in fact, it’s more likely these leaders are attuned to the stories people are already inclined to believe, and simply echo those stories back to them.
That’s why, for instance, replacing Trump with some other presidential candidate in 2024, or replacing Biden with some other candidate, won’t change much at all. These individuals came to power riding on the waves of pre-existing ideological narratives. If Trump or Biden weren’t in the picture, then another individual would emerge, riding on the same ideological waves.
This observation has also been made in connection with human gullibility. Research suggests that people are not strongly gullible to the stories told to them by influential authorities, whether religious leaders, demagogues, TV anchors, or celebrities. Rather, when people are gullible to a false or unfounded message, it’s more often because the message is consistent with something they already believe or want to be true. It’s a type of confirmation bias.
Desmet puts it this way: The ultimate master is the ideology, not the elite.
The ultimate master: Machine ideology
Again, Desmet’s theory doesn’t preclude the possibility that a secret “conspiracy” might be behind the spread of an ideology. But conspiracies aside, there are obviously powerful interests in the world who can leverage their power for profit or influence. Examples are easy to come by—like the fact that, this past year, 75% of the Food and Drug Administration’s (FDA’s) drug division budget was funded by the pharmaceutical industry, or that Mark Zuckerberg’s various Facebook-related companies (Meta) have 3.59 billion core product users whose online activities can be tracked in order to target ads at them.
We might do well to pay attention to the unhealthy concentrations of power we can identify, and to aim our criticisms and political efforts at them, rather than to dwell on the secret power centers we can only speculate about. The latter fuels paranoia; the former fuels constructive action.
The Meta and pharma industry examples also reveal something else. They come with an identifiable story. It’s not necessarily a story they tell us. They themselves may not even be aware of the story. But it’s their story, and one shared with other tech industries and influencers: the emerging ideology of our time. This ideology sees life mechanistically, as a collection of intricate machine parts—lovely and wondrous perhaps, but parts nonetheless—which must be rationally understood and manipulated.
And it’s all thickly veneered with optimism. Smart cities will make life more convenient. Smart agriculture will ensure a nutritious food supply. The Internet of Things will keep us all connected. Biotech will ensure we’re in good health. Automation will take over the boring jobs, leaving us free to pursue our passions.
Of course these sub-narratives have a dark side, a shadow potential: lives under continuous surveillance, GMO-and-insect diets, vast farmlands controlled by powerful individuals, sensors under the skin, coercive health care, the atrophy of human virtue.
But the shadow side doesn’t get talked about much. Except for dystopian books and films, what we tend to hear is the optimism and the hope, and the urgency. If we don’t do these things, then humanity will keep suffering and might even be doomed! Machine ideology is rooted in a TINA attitude: There Is No Alternative.
The difficulty is, even if we reject Machine ideology, our lives have become so entangled with technology that our criticisms can seem vapid or hypocritical. We’re like Facebook users complaining about Facebook on Facebook.
Mattias Desmet uses the global response to the coronavirus pandemic as an illustration of the dominance of Machine ideology. The coordinated worldwide policies to manage the pandemic were not the result of any secret conspiracy but of experts and leaders who were (and still are) in the grip of an overarching ideology that emphasizes solutions through the technological and biomedical control of society.
If you think that’s overstating the case, consider that Scotland has identified an alarming increase in baby deaths following the rollout of the Covid-19 vaccines. An expected scientific response would be to investigate all the potential causes of the deaths, including the vaccines. But PHS (Public Health Scotland) has decided not to probe that possibility, because
identifying the vaccination status of the mothers, even at aggregate level, would result in harm to those individuals and others close to them, through actual or perceived judgement of the effects of their personal vaccination decision…[and further]… the outcomes of such analysis, whilst being uninformative for public health decision making, had the potential to be used to harm vaccine confidence at this critical time.
Of course it’s entirely possible the Covid vaccines had nothing to do with the baby deaths, and indeed that is the official line of PHS. But the refusal to explore this possibility in order to protect “vaccine confidence” demonstrates Desmet’s point perfectly. The narrative about the vaccine matters so much that, by default, any counter-narrative must be dismissed a priori.
Are liberals more gullible?
Earlier I mentioned that liberals tend to score higher in openness to experience, and are less vigilant to potential threats, than conservatives. For the same reason, we might expect liberals to be more open than conservatives to novel technologies within society, and more trusting of those technologies.
Some research does support this possibility. For instance, with few exceptions, Democrats are more likely than Republicans to use social media. There’s also evidence that, within the US anyway, Republicans are less likely to get the Covid-19 vaccine (a biomedical technology) than Democrats.
And let’s add a bit more nuance. When it comes to openness to experience, the differences between the two political groups might be more around the experiential and perceptual aspects of life (rather than intellectual experiences).
So, we might expect conservatives to be even more reluctant to accept technologies that directly intrude on their felt experiences and perceptions, such as drugs and implants, whereas we would expect liberals to be relatively more open to experience-transforming technologies.
What does all this mean for Machine ideology? Broadly, it means that liberals will be more susceptible to believing in ideological narratives that promise technological solutions for society’s problems. It means the base of support for Machine ideology will tend to be more liberal than conservative.
Correspondingly, the liberal tendency to accuse conservatives of “conspiracy theories” whenever they highlight the power of the elites is, in some cases, a consequence of relatively poorer liberal radar for genuine threats associated with new technology.
Although conservatives may be less susceptible to Machine ideology, that doesn’t mean they aren’t susceptible at all. They just might lag in the uptake of new technology and the ideological values underlying them.
The Musk Dilemma
With the recent purchase of Twitter by Elon Musk, there will surely be a flood of new conservative users, who might celebrate their freedom to tweet without the threat of censorship—especially now, in the run-up to the US midterms.
Musk’s motivations for the purchase also speak to the liberal-conservative political divide. In a tweet, Musk wrote:
The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently [a] great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.
In the relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in the money, but, in doing so, the opportunity for dialogue is lost.
That is why I bought Twitter. I didn’t do it because it would be easy. I didn’t do it to make more money. I did it to try to help humanity, whom I love.
The wish to create a space for healthy debate between left and right is a good thing. Musk, it seems, wants to thicken the corpus callosum between the two political hemispheres, to help them communicate and cooperate as a whole rather than as a split-brain society.
But there’s also a dilemma. The use of social media to create a better society carries the risk of drawing us deeper into the belief that technology will save us.
Musk, of course, is a public advocate for Machine ideology. The goal of his company Neuralink is to ultimately augment the human mind with neural lace implants—an ultra-thin mesh that can interact with our neurons—to improve our cognitive abilities and ensure we’re competitive with artificial intelligence.
I don’t believe Musk is part of any conspiracy group, trying to take control of the masses for nefarious reasons. I think he has sincerely held (if mistaken) beliefs that human-machine symbiosis will be a positive development.
If we can accuse Musk of anything, it’s that his beliefs are gullible. In 2017, he even remarked that human symbiosis with machines “is not a mandatory thing. It is a thing you can choose to have if you want.”
In a post-pandemic world, where coercion by government and business has been standard practice to maximize mandate compliance, we all know what “not mandatory” really means.
The Musk dilemma is that we have genuinely benefited from many technologies, and sincerely want to make life better through more technology, but at the same time fail to appreciate that our increasing dependence on tech is insidiously distorting what it means to be human.
I’m not blaming Musk. He simply embodies the dilemma clearly—a dilemma we’re all caught up in. In a world increasingly enmeshed in Machine ideology, new technological solutions will seem increasingly natural. Conservatives, with their sharper radar for threat, and preference for order and stability, might be a little more resistant to these changes, but for the most part we’re all in this together. The gullibility virus has infected us all.
Gullible’s Travels
Is there a way to navigate around our gullibility? No, not entirely, but a few things might help:
Test thy intuitions: Don’t assume something is true because it feels familiar or true. If your instinct is to believe a proposition, then practice un-believing it, temporarily, while you gather more information to verify the claim.
Know thy peculiar mind: Others can take advantage of us by spreading false or unfounded messages that nevertheless seem plausible because they “fit” with how our mind works. Our manipulators see what our confirmation bias is, and feed that bias. We can limit the negative impact of this bias by paying more attention to the natural inclinations of our personalities—for instance, whether we are more open or closed to experiences, or prefer stability over change—while noting how others may try to play on these inclinations.
Remember the Musk dilemma: The belief that technology is the solution to all of life’s important problems is the default assumption of our age, even though, at the same time, we can foresee ways in which our growing dependence on technology is distorting and inverting our humanity. We need to be continuously vigilant to this dilemma, to be critical of the “solutions” we are offered by governments and corporations, and to explore alternative solutions.
Thanks for reading! Hit the like button if you liked it, and share and talk about it.
Image Credits: Gulliver tied down by the Lilliputians by Unknown artist, 1883; Sierpinski triangle by Beojan Stanislaus