Why Dem Messaging Sounds Like It Was Written by A.I.
And why A.I. can't think.
Trickle up? Middle out? Floors and ceilings? Fair shots? Why do we create messaging that fails to connect with voters? A.I. fails because it can’t do what humans do: build mental models of the physical world and use them to reason about abstract concepts. Our messaging fails in a similar way when we throw words together without evoking the mental models people use to make sense of our world.

“Strong Floor, No Ceiling”
You may have heard of Hakeem Jeffries’ latest attempt at a winning bumper sticker for Democrats: “Strong Floor, No Ceiling.” This bothers me on so many levels, starting with it being a blatant sop to the corporate class. On the messaging level, it reveals a cluelessness about language that makes me want to bang my head against a wall. I hope to remedy that lack of understanding today with my absolutely favorite cognitive science. Understanding how humans build and use mental models explains both why A.I. tools can’t think and why our slogans fail.
Thank you for reading Reframing America! My work is completely financed by subscribers like you! All content is free, but many people choose to become paying subscribers to help support this mission of improving how we communicate! Use this link to subscribe or upgrade! Thank you!
Mental Models
Human beings grow mental models of our experience living in the physical world. I say “grow” because they are made out of networks of neurons. This is straightforward enough for things that actually exist in the real world, like chairs, your neighborhood, or interactions with people.
In an evolutionary leap (that may have involved peyote) we developed the ability to use these same mental models to reason about abstract ideas like progress, economics, and government. The saying "Difficult roads lead to beautiful destinations" blurs the lines between the physical model and the abstract concept, but metaphors are far more than just pretty language. They are literally how human beings do all abstract thinking. The mental model for traveling down a road is the tool that we use to reason about the passage of time and progress through life.
Here is the coolest bit! The abstract idea has to function the same way as the physical reality we based it on, because we’re using the same tool - the same neural circuitry - to visualize both things. In this way, what we think of as the “rules of logic” are actually the structures of physical reality as we experience it, and those structures make it possible for us to comprehend something, reason about it, and communicate about it.
If you’re driving and you reach a fork in the road, you have to choose which road to take. If you are going through life and you hit a “fork in the road,” you can’t choose to take both. That would be illogical. Why? Because in the physical world, you can’t be in two places at one time. Mental models both facilitate and constrain our thinking. Sometimes we have to switch mental models (reframe the situation) to allow ourselves to visualize a different set of options. We often reason about decisions using “scales” as a mental model, by “weighing” one option against the other. If they really “come out even,” you might choose both. You might then have to “balance” those competing priorities, but at least, in this model, choosing both was a conceivable option.
This is why A.I. can’t think.
Unlike humans, A.I. tools can’t grow and use internal models of the world because they don’t experience living in the world. All they know about “chairs” is that they are depicted in shapes and lines. They can’t develop internal models for “chairs” as they exist in the physical world, so they can’t reason that a chair needs to be structured to support the weight of a person. This is why they can’t reliably create images of chairs that work. We humans use our lived experience with things like chairs to develop a mental model for physical support, which we can then use as a tool to reason about emotional support. As with their nonfunctional chairs, A.I. tools can put words together that sound a lot like emotional support, but they can’t reason about what it means to support someone or understand what would happen if they took that support away.
How Messaging Doesn’t Work
If we use language that doesn’t follow the rules of physical reality, it doesn’t work. People hear it, but it doesn’t make sense.
Because Gravity
“Trickle-down” economics suggests that money, like water, will always find its way to the lowest point. We should give money to the people at the top of the economic ladder because it will naturally trickle down to those of us at the bottom. The “money = water” metaphor persists despite being total bullsh*t, because in addition to triggering automatic comprehension, it is cognitively fruitful: it generates a great variety of ways to think about money based on the real-world behavior of water.
At one point, strategists tried to use “trickle-up economics” to get us to visualize economic growth being driven by wages and consumer demand (which is true). Why did this not work? Because gravity. We have a deeply ingrained mental model for gravity. If money = water, and we have no viable mental model for water going up of its own accord, then “trickle-up” makes no sense. It defies the rules of logic/physics.
“Bottom Up and Middle Out”
If you want the economy to grow, you need to invest in the people at the bottom and in the middle. “Bottom-up” is not terrible wording, but it is surface-only. It does not provide you with a mental model of anything in the real world that goes up if you add something to it at the bottom. We have no tools with which to visualize it.
“Middle-out,” on the other hand, is just awful. We envision our economy as vertical, with ladders and upper and lower classes. Are you trying to suggest that if you invest in the middle class, the economy will grow in ways that benefit those both above and below them? Wouldn’t that be “up and down” as opposed to “middle-out”? Do things even grow down? What grows from the middle out? A waistline? Does this suggest that the economy will get fat if we invest in the middle class? What does that even mean? Ugh.
“Strong Floor, No Ceiling”
Completely setting aside the donor-conciliating implications of whatever policy might be suggested by this, it really does sound like it was written by ChatGPT in that, while it uses words that often appear together, it does not make sense. At best, it’s a mixed metaphor, like “It ain’t rocket surgery.” Is the “strong floor” in our house? If so, does having “no ceiling” leave us unprotected from sh*t coming down from above? Are we talking about the career and achievement ladder? If so, how do we all benefit from standing on that nice strong floor? It is the messaging equivalent of a chair with only two legs, or perhaps, four legs but all in a straight line. Nope. Just nope.
Everybody deserves more than just one shot.
Consciously or unconsciously, much of the Democratic world seems to have settled on the phrasing that “everybody deserves a shot” either “at a decent life,” or “to make a good living,” or “at the American Dream.” This is a scarcity/competition model. Are we the guy on the half-court line who gets one chance to lob the basketball into the net? The archer or marksman shooting for the bullseye? Are we playing the rigged games at the county fair?
Why just the one shot? This is actually terrifying. No mistakes. No second chances. Even Stephen Curry doesn’t make every free throw. I feel the same about the term “opportunity” as in, the opportunity to “compete” for a “chance” at a decent life. It’s like having to buy a raffle ticket for a chance to win a lottery ticket. And what good does it do to have a “fair playing field” if there are 100 people applying and only one job? No, this model is not compatible with our promise to build an economy that “works for everyone.”
Moral Accounting and the American Deal
We want our economic messaging to make sense to people, to make American voters feel secure in putting their economic futures in our hands. To do that, we need to evoke real-world-based mental models that both trigger automatic comprehension and provide ample material for deeper analogical reasoning.
Human beings comprehend fairness and societal obligation using a mental model we call “moral accounting.” I give you three chickens; you give me three bags of wheat. You always drive me to the airport; I will help you move. Moral accounting is baked in at a young age. If you tell a seven-year-old that if they do their homework before dinner, they can play video games afterwards, you had better live up to your end of that deal or you will find out what it looks like for someone to be mortally wounded by grave injustice.
The concept of the contract – formal or handshake, keeping your word, holding up your end of the deal – is sacrosanct in American culture. Applying the model of moral accounting to the promise of the American Dream, with emphasis on the “promise” part, is a rich and fruitful metaphor that we can use to reframe our economic policy under the banner of “economic fairness” and summon the kind of moral righteousness that would satisfy seven-year-olds and MAGA truckers alike. You can learn all about it here:
If this seems like a challenge, it is. We have let conservatives dominate our economic debate for fifty years, so much so that we have almost no mental models to communicate our beliefs about how the economy works and who it works for. We can’t just make up slogans. We have to build these mental models, because we won’t defeat “free-market” economics until we can give people a better way to understand their economic world.
Thank you so much for reading this. I hope it is of use to you in your work and activism!
In solidarity, always,
Like this post, but not ready to become a paying subscriber? Leave me a tip of any amount you like at my tip jar! Thank you!
Contact me at antonia@antoniascatton.com or (202) 922-6647
NOTE:
Learn more about embodied metaphor theory here:
Metaphors We Live By, by George Lakoff and Mark Johnson, 1980