Memes about climate change occupy a strange and powerful middle ground. At their best, they make an overwhelming crisis legible, compressing complex science into images that are emotionally resonant, shareable, and easy to digest. For audiences who might never read an IPCC report, a single visual can spark curiosity, care, or even action.
But that same efficiency is what makes memes risky. When climate content is reduced to an image, the line between explanation and persuasion can blur.
In exploring how people interpret climate misinformation, researchers at the University of Southern California conducted a study built around a set of climate-related memes. Every meme conveyed false information, but each used a different visual strategy.
These memes were shown to over 800 participants, who were asked to rate how accurate, authentic, and believable each one seemed.
Before analyzing the data, I had a clear hypothesis. I expected that users of platforms widely associated with misinformation (I'm looking at you, Facebook) would be more likely to rate these false memes as credible. My assumption was that older platforms known for weak moderation would leave users more vulnerable.
The data told a different story. Across all measures, frequent Pinterest users consistently rated the fake climate memes as more credible than infrequent users did. Below, I compared average perceived authenticity scores between frequent and infrequent users across five major platforms.
[Chart: Memes Across Social Media Users — average perceived authenticity of the fake climate memes, frequent vs. infrequent users, by platform]
The gap in perceived authenticity between frequent and infrequent Pinterest users was the largest of any platform tested.
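To make that comparison concrete, here is a minimal sketch of how the group means and gaps could be computed with pandas. The column names (platform, usage, authenticity), the frequent/infrequent split, the 1-7 rating scale, and the simulated ratings are all assumptions for illustration; they are not the study's actual schema or results.

```python
import numpy as np
import pandas as pd

# Illustrative participant-level data (not the study's real responses).
# One row per participant per platform: how often they use the platform,
# and their mean authenticity rating across the fake memes (1-7 scale assumed).
rng = np.random.default_rng(0)
platforms = ["Pinterest", "Facebook", "Instagram", "X", "TikTok"]
n = 800
df = pd.DataFrame({
    "platform": rng.choice(platforms, size=n),
    "usage": rng.choice(["frequent", "infrequent"], size=n),
    "authenticity": rng.normal(loc=4.0, scale=0.8, size=n).clip(1, 7),
})

# Average perceived authenticity for each platform x usage group.
means = df.groupby(["platform", "usage"])["authenticity"].mean().unstack()

# Frequent-minus-infrequent gap per platform, largest first.
gap = (means["frequent"] - means["infrequent"]).sort_values(ascending=False)
print(means.round(2))
print(gap.round(2))
```

On the real survey data, the Pinterest row topped this gap ranking.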
I myself use Pinterest to vision-board my future, to collect creative ideas, to feel inspired. But I've also encountered countless casually designed infographics about health and science there. Seen in that context, these climate misinformation memes could easily blend in.
Would I, too, mistake one for authentic?
In just one quarter of 2024, Pinterest removed over 20,000 Pins for spreading climate misinformation, a dramatic increase from fewer than 300 earlier that year, when its newer moderation systems were first introduced. Pinterest reports that nearly 80% of these Pins were removed before reaching any users, thanks to improved machine-learning detection.
Even as algorithms catch overt misinformation more quickly, the deeper issue remains: the same visual language that fuels creativity can also lend authority to misinformation.
This dynamic aligns closely with the Elaboration Likelihood Model, developed by psychologists Richard E. Petty and John T. Cacioppo. The model suggests that we process persuasive messages through two routes:
- The central route, when we carefully analyze arguments and evidence
- The peripheral route, when we rely on surface cues to decide what feels true
Platforms rich in visual content naturally invite more peripheral processing. This doesn't make them inherently dangerous. But it does mean they cultivate a mode of interpretation where how something looks can matter as much as what it claims.
This project is a work in progress.