Hyperstition and the "Casting Couch" Pornography Genre
In which I clickbait you by mentioning "porn" in the title
I.
TIL1 about “hyperstition”. The term seems to have been coined by English philosopher Nick Land and is a portmanteau of “hyper” and “superstition”. It refers to memetic ideas that “bring about their own reality”.
I’m not a fan of Nick Land’s writing style. While you can read about the idea in the original author’s own terms, I find he uses the obtuse (and often “mystical woo”-leaning) style that I associate with pseudo-intellectual philosophers2. To give you a flavor of what I mean, the primary source that Wikipedia gives for Land coining “hyperstition” is an article titled “Hyperstition: An Introduction” containing an interview with Land. The name of the publication containing the article is “0(rphan)d(rift>)” which… you know… it’s fine. But it sort of gives you a sense of the aesthetic they’re going for. And there are correlations between that particular aesthetic and where one’s values likely lie on the spectrum between clarity and mystical-woo-ism.
Here is how the article/interview introduces Hyperstition, so judge for yourself whether it’s worth your time directly reading Land as a primary source:
What is concealed (the Occult) is an alien order of time, which betrays itself through ‘coincidences’, ‘synchronicities’ and similar indications of an intelligent arrangement of fate. An example is the cabbalistic pattern occulted in ordinary languages – a pattern that cannot emerge without eroding itself, since the generalized (human) understanding and deliberated usage of letter-clusters as numerical units would shut down the channel of ‘coincidence’ (alien information). It is only because people use words without numerizing them, that they remain open as conduits for something else. To dissolve the screen that hides such things (and by hiding them, enables them to continue), is to fuse with the source of the signal and liquidate the world.
All that said, I think the idea of “an idea that brings about its own reality” is an interesting one, so I will try to give my own introduction of hyperstition in a more down-to-earth and hopefully clearer tone.
It’s possible that I’ve misunderstood Land and that the thing that I’m describing is not what he intended. That’s fine. It’s the “Nebu-hyperstition” concept that I think is interesting and worth exploring further. If it so happens that Nebu-hyperstition and Land-hyperstition are the same thing, great: then I am also introducing Land’s concept. If it turns out that they’re different, also great: I am providing a different and original idea that is worth further exploration, independent of Land’s idea.
II.
So how does an idea bring about its own reality?
I actually stumbled upon this concept from porn actress Sophia Locke giving a frank, behind-the-scenes description of the Casting Couch genre (linked video is SFW, but I can’t guarantee everything else on her profile is SFW). For those unfamiliar, in the Casting Couch genre, you have a young lady who wishes to become a famous star. The diegetic3 camera man interviews the actress (how old is she? does she have a boyfriend? etc.) who is almost invariably seated on a couch, before he and/or other men have sex with her. The subtext is that she must perform these sexual favors before the camera man—who is acting as the gatekeeper—will allow her to become a porn star. There are themes of power dynamics and often of abuse: occasionally, at the end of the video, it is revealed that the camera man actually has no connections in the porn industry (and thus, no power to help her career prospects), and so she was tricked into having sex for no compensation at all.

As with almost all porn, this is, of course, entirely fictional. The actress is paid to perform this scene. Contracts will have been signed (both by her and by someone representing the studio). She (and the male performers) would have undergone STI testing. Etc. It’s a business, people can be sued, money is involved, lawyers are involved, and paperwork is kept proving that everything that occurred was consensual and legal.
With all that said, in all likelihood, the casting couch video would have been that actress’s very first experience performing in a porn video, and many actresses are—”forced” is much too strong a word, but “financially encouraged”?—to film a Casting Couch porn before moving on to do more general pornography work in the future.
The reason is that if you are a regular consumer of porn, and of Casting Couch porn in particular, and you see an “experienced” actress in a Casting Couch video, the illusion is shattered. It’s akin to a professional wrestler hanging out with his rival for lunch, revealing that the feud between them is fake. Most adult enjoyers of porn/wrestling “know” on some level that what they’re watching is fake. But it’s easier to immerse yourself in the fantasy if the kayfabe is maintained. And so the performers are expected to maintain the kayfabe. If they violate that social norm, the viewers become upset, which means less consumption and thus less profits.
And so when a woman expresses interest in becoming a porn actress, the casting agent or the producer has a strong financial and logistical incentive to have her perform a Casting Couch porn as her first gig: There’s a limited supply of actresses that can play the role. Any actress that has already done porn previously is risky (because a viewer might recognize her from her earlier work), so you want someone new to the industry. Symmetrically, once an actress has done a Casting Couch scene, you can’t use her again for that type of scene, especially if she goes on to become very popular.
And so we have hyperstition: There’s the fictional idea that this particular woman “needs” to do casting couch porn before she can become a “real” porn actress, and the socio-mechanics of the incentive structures make it so that this fictional idea becomes reality.
III.
I became fascinated with the idea after hearing Sophia Locke’s description of this phenomenon. She didn’t use the term “hyperstition”. Instead, I tried to find more examples of the phenomenon, found some, and eventually stumbled upon the term coined by Land.
I’m trying to come up with a model for explaining why some ideas are hyperstitious and others are not. I’ve got some ideas, but I’m not super solidly confident on them, so I’d love some feedback and further collaborative development. I think there are three primary components to a hyperstition:
Kayfabe Equilibrium: There exists a (large?) group of people that benefits as long as nobody calls the bluff. That means the participants tend to either stay (willfully or unwittingly) ignorant, or if they “know”, they choose to “play along” anyway.
Performative Belief-Acts: This is a reference to “performative utterances” from philosophy of language and speech-act theory. A “performative utterance” or “speech-act” refers to the situation where someone says something, and by saying that thing, they have changed social reality. The prototypical example is when a priest says “I hereby declare you man and wife.” The thing that actually made them man and wife was the priest performing that utterance. Here, “Performative Belief-Acts” refers to the phenomenon where, if you act “as if” something were true, this actually causes it to become “more true”. An example of this is “acting as if a given form of currency (e.g. dollar bills) has value”. A non-example of this is “acting as if you can fly by flapping your arms”.
Resource Flywheel: Some sort of valuable resource, often social-proof but can also be financial, for which a critical mass needs to be maintained to fuel the socio-incentive structures.
It’s unclear to me which ones are necessary and which ones are sufficient. Perhaps there are actually three subtypes of hyperstition, with the possibility of overlaps. And of course, it’s also possible that I’m missing some aspects.
It’s also not clear to me how hyperstitions come into being in the first place. Right now, my mental model is that they occur randomly, like the random mutations in natural selection. Society is constantly generating new ideas. Most of them are not hyperstitious. Some of them are, but for whatever reason, the environment isn’t right for them to take off and so they die. But every so often, the environment is just right, and the random idea manages to bootstrap itself into self-sustainability and reshapes social reality in some way.
IV.
I listed the Casting Couch porn genre as an example of hyperstition. What are some other examples?
Cyberpunk
This was once an entirely fictional aesthetic invented by 1980s scifi authors like William Gibson. While there is some degree of “prediction” going on, there is also some degree of direct causation. Today’s tech entrepreneurs and venture capitalists are disproportionately scifi nerds, and they tend to invent and fund the technologies that inspired their childhood dreams.
Note that it is not a tautological given that society’s childhood dreams will eventually become social reality. Noah Smith wrote an excellent article describing how the scifi of other generations didn’t come true, e.g. the Buck Rogers cowboys-in-space aesthetic never came to fruition. There are a few rich influencers (Bezos, Musk) who are moving space travel tech forward, but for whatever reason, they haven’t hit the critical mass resource flywheel necessary to make this fiction become a reality the way that cyberpunk has.
COVID Toilet Paper
Without the mechanics of hyperstition, there was no reason that there would have been any toilet paper shortage in North America during COVID.
Yes, there were logistical difficulties, and there was a lot of uncertainty about what the on-the-ground impacts would be. But there were no, for example, automobile shortages, and cars are a significantly more complex product to manufacture than toilet paper: they require vastly more inputs, and thus are much more sensitive to breakdown in the logistics chain. If any one input fails (belts, gears, screws, bolts, computer chips, rubber), the whole thing grinds to a halt. But nobody spread a rumor that we might run out of cars during COVID, even though cars are, for many people, much more essential to their livelihood than toilet paper.
Instead, a rumor started that there might be a toilet paper shortage. In response to this, people started hoarding toilet paper, which then caused a toilet paper shortage.
Even if you were a “rational actor” who understood that the idea of a COVID-caused toilet paper shortage is ridiculous, empirically, you could observe the empty store shelves all around you. So you really did need to “act as if” there were a shortage, if you didn’t want to run out of toilet paper!
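The feedback loop can be made concrete with a minimal toy simulation (every number here is invented purely for illustration): supply never changes, but the rumor changes buying behavior, and the resulting empty shelves then reinforce the panic.

```python
# Toy model of a hyperstitious shortage: restocking is constant throughout,
# but a rumor changes buying behavior, which empties the shelves,
# which in turn "confirms" the rumor.

def simulate(days=30, restock=100, rumor_day=5):
    stock = 200          # rolls on the shelf
    panic = 0.0          # fraction of shoppers hoarding
    history = []
    for day in range(days):
        stock += restock                     # supply side never falters
        if day == rumor_day:
            panic = 0.3                      # the rumor lands
        # each of 100 shoppers buys 1 roll normally; hoarders buy 5
        demand = int(100 * (1 - panic) * 1 + 100 * panic * 5)
        sold = min(stock, demand)
        stock -= sold
        if stock == 0:                       # empty shelves breed more panic
            panic = min(1.0, panic + 0.2)
        history.append(stock)
    return history

h = simulate()
```

Before the rumor, the shelves are stably stocked; after it, they stay empty indefinitely, even though the restocking rate never changed. The "rational actor" who sees those empty shelves is then rational to hoard.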
Modern Speakeasy Bars
Due to psycho-social reasons I won’t get into here, some modern-day North American adults romanticize the Prohibition era. In response to the demand, there are modern “speakeasy bars” that avoid traditional advertising strategies and signage, and enforce pseudo-passwords before allowing entry. All this despite alcohol (and all other activity occurring on the premises) being completely legal.
It’s basically LARPing4, which, again, fine. Consenting adults are free to engage in this fantasy if they wish to do so. It’s just surprising how strong the kayfabe equilibrium is here, relative to, say, the LARPing that occurs at Anime conventions. The Anime con-goers are both more ready-to-admit and more cognizant-in-the-first-place that what they are doing is, in some sense, “fake”.
And obviously, the business owner has a strong financial incentive not to break the illusion.
US Banks
There is some overlap with the COVID toilet paper example here: Normally banks are fine, but if a rumor starts that a bank might run out of cash, then everybody is going to rush to withdraw their cash, and then the bank really will run out of cash!
But there is a more fundamental hyperstition going on with the entire concept of banking, as it is practiced in the United States (and perhaps many capitalistic societies in general): Banks take short-term deposits (i.e. you deposit your money at the bank, but you can withdraw it at any time you want) and use those to make long-term loans (i.e. you borrow money from the bank, perhaps in the form of a mortgage, and you are not required to return the money until several years later). At a surface-level reading, this is fundamentally impossible! There is no way the bank can give you back your money at any moment you request it, if that money has been loaned out and won’t be back for several years. And yet, somehow it all works out anyway.
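The surface-level impossibility is easy to sketch as a toy balance sheet (the `ToyBank` class and all the numbers are invented for illustration; real banks have many more moving parts):

```python
# Toy maturity transformation: the bank holds only a small cash reserve
# against deposits that are all redeemable on demand; the rest is
# locked up in long-term loans it cannot recall today.

class ToyBank:
    def __init__(self, deposits=1000, reserve_ratio=0.1):
        self.deposits = deposits
        self.cash = deposits * reserve_ratio   # kept on hand
        self.loans = deposits - self.cash      # tied up for years

    def withdraw(self, amount):
        """Returns True if the bank can pay out; False means a failed run."""
        if amount > self.cash:
            return False                       # loans can't be recalled today
        self.cash -= amount
        self.deposits -= amount
        return True

bank = ToyBank()
quiet_day = bank.withdraw(50)    # a few depositors want cash: fine
bank_run  = bank.withdraw(950)   # everyone at once: impossible
```

On a quiet day the reserve easily covers withdrawals; the moment everyone “acts as if” the money might not be there, the demand exceeds the reserve and the promise fails. The system works precisely as long as depositors don’t all test it at once.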
Matt Levine often talks about how banking is a “pro-social sleight of hand” and how we all need to play-pretend in order for it to work. But most of his writing is behind a paywall, so instead here’s a similar idea articulated by Steve Randy Waldman, though his presentation has a more conspiratorial tone (i.e. the bankers are colluding to trick us, though I think Waldman is being fanciful for rhetorical effect here and does not actually believe this) whereas my presentation emphasizes decentralized incentives (i.e. there’s no secret conspiracy, everyone is just acting in their own self-interest, but we can analyze predictable emergent trends). All emphasis is added by me, and it gestures towards one or more of “kayfabe equilibrium”, “performative belief-acts” and “resource flywheel”:
One purpose of a financial system is to ensure that we are, in general, in a high-investment dynamic rather than a low-investment stasis. In the context of an investment boom, individuals can be persuaded to take direct stakes in transparently risky projects. But absent such a boom, risk-averse individuals will rationally abstain. Each project in isolation will be deemed risky and unlikely to succeed. Savers will prefer low risk projects with modest but certain returns, like storing goods and commodities. Even taking stakes in a diversified basket of risky projects will be unattractive, unless an investor believes that many other investors will simultaneously do the same.
[…]
A banking system is a superposition of fraud and genius
[…]
Bankers make the world a more prosperous place precisely by making promises they may be unable to keep.
[…]
Investors know that bankers don’t have a magic wealth machine, that resources put in bankers’ care are ultimately invested in the same menu of projects that each of them individually would reject. Those risk-less returns cannot, in fact, be riskless, and that’s no secret. So why is this little white fraud sometimes effective? Why do investors believe empty promises, and invest through banks what they would have hoarded in a world without?
Like so many good con-men, bankers make themselves believed by persuading each and every investor individually that, although someone might lose if stuff happens, it will be someone else. You’re in on the con. If something goes wrong, each and every investor is assured, there will be a bagholder, but it won’t be you. Bankers assure us of this in a bunch of different ways. First and foremost, they offer an ironclad, moneyback guarantee. You can have your money back any time you want, on demand. At the first hint of a problem, you’ll be able to get out. They tell that to everyone, without blushing at all. Second, they point to all the other people standing in front of you to take the hit if anything goes wrong. It will be the bank shareholders, or it will be the government, or bondholders, the “bank holding company”, the “stabilization fund”, whatever. There are so many deep pockets guaranteeing our bank! There will always be someone out there to take the loss. We’re not sure exactly who, but it will not be you! They tell this to everyone as well. Without blushing.
[…]
This is the business of banking. Opacity is not something that can be reformed away, because it is essential to banks’ economic function of mobilizing the risk-bearing capacity of people who, if fully informed, wouldn’t bear the risk. Societies that lack opaque, faintly fraudulent, financial systems fail to develop and prosper. Insufficient economic risks are taken to sustain growth and development.
[…]
The analogy I would choose is finance as placebo. Financial systems are sugar pills by which we collectively embolden ourselves to bear economic risk. As with any good placebo, we must never understand that it is just a bit of sugar. We must believe the concoction we are taking to be the product of brilliant science, the details of which we could never understand. The financial placebo peddlers make it so.
(Side note: Placebos “work” even when the patients know that they are receiving a placebo. This suggests that placebos are also a hyperstition, and that they demonstrate that the kayfabe equilibrium is not necessary; perhaps Performative Belief-Acts are the central component.)
The kayfabe equilibrium is particularly strong in financial systems:
Most people don’t understand how finance works, and believe in the magic claims being made.
If you try to point out that these claims are magical and impossible (turning “short, liquid, safe, and simple” assets into “long, illiquid, risky, and complicated” assets, as Nick Rowe describes it), they’ll shrug and continue to “act as if” the magic is real, because… I mean, just look around you: Banks exist, I can physically walk up to one and touch it. If I type my credit card number into Amazon, they’ll deliver physical objects to me, etc. What more proof do you need?
If you try to explain that no, really, all of this is impossible, and here’s the financial background knowledge to see how this is all fake, their eyes will glaze over and they’ll get bored and forget everything you say.
Even if you, personally and specifically, understand that it’s all fake, you’ll still “act as if” it’s real anyway. So what even is “real”?
Meme Stocks, NFTs, Crypto, etc
This is just the obvious extension of the previous example.
Meme stocks, NFTs and crypto more generally are often traded “as if” they have financial value, with transactions involving millions of dollars. They go up in value because people “act as if” they’ve gone up in value. The arrow of causation is utterly confused here.
Indeed, we act “as if” money itself (whether that’s US dollars or other currencies) has value. And in doing so, this causes it to have value. And there is a critical mass of people who stand to benefit from perpetuating this shared fiction.
Copyright, Patents, Intellectual Property, etc.
Is it possible to “own” an idea?
If you practice your imagination skills, you might be able to imagine a society (perhaps cavemen or goat herders or whatever) where it’s not even clear what it would mean to own an idea. It’s a nonsensical premise.
But here, in 2025, we certainly “act as if” owning an idea is a real thing.
Someone had to come up with the idea that owning an idea is possible. Then that person had to convince other members of society to “play along” and “act as if”. And then they played along until a critical mass of value and wealth was tied into the ideas themselves, such that there are now strong incentives to perpetuate this system (sure would suck for Sony if they paid $7 million USD for the rights to “Spider Man” only to later find out that “owning ideas” isn’t actually a thing).
V.
Perhaps due to the Baader-Meinhof phenomenon, having learned about hyperstition, I seem prone to seeing it everywhere. So let’s dial it back a bit and talk about limitations of the idea.
In an earlier draft of this post, the DeepSeek LLM asked me what the difference was between hyperstition and a self-fulfilling prophecy (SFP). In my view, an SFP is a prediction about the future (which, by the speech-act of having said the prediction out loud, becomes true). This is distinct from traditional speech-acts, which are not presented as predictions: When a priest says “I now declare you man and wife”, he is not predicting that you will become man and wife at some point in the future. He is saying that at this point in time, as he is making those utterances, and directly causally due to him making those utterances, you have now become man and wife.
Similarly, a hyperstition is not a prediction about the future. It is an idea, or perhaps more accurately, a belief about the state of the world. The truth-value relationship between the idea and the state of the world can be complex and ill-defined (e.g. was it “possible” to “own an idea” before someone came up with “the idea of ‘owning an idea’”?). Hyperstition is similar to SFPs in that in both cases, their presence in people’s minds affects their behavior so as to cause them to become “more true” over time. However, with an SFP, the person carrying the SFP in their mind in some sense “knows” that the SFP isn’t true “yet”, but may become true in the future. In contrast, the person carrying the hyperstition tends to believe that (or at least “doesn’t question whether”) the hyperstition is, in fact, true. The hyperstition may be, in some sense, “objectively false”, but the belief that the hyperstition is true causes the person to behave in such a way that it becomes more true. In contrast, an SFP “becomes true” regardless of whether the person believes in the SFP or not, and indeed many stories involve a protagonist “fighting against” the SFP (implying strong disbelief in the SFP), only for the act of the fight itself to cause the SFP to come true. The analogous situation would be incoherent for hyperstition.
DeepSeek also asked about the difference between hyperstition and plain old memes (in the modern sense, not in the Richard Dawkins sense). Memes are ideas that are more likely to spread from mind to mind, perhaps because they are interesting or amusing. Not necessarily because they are “true”.
In a strict sense, an idea does not need to “spread” to become a hyperstition. “Spreading” is an instrumentally convergent goal in that ideas that spread tend to be “more powerful” and thus more likely to “reshape reality”. But in principle, if an idea only exists in one person’s mind and causes itself to become true, I would say that that counts as a (non-central example of) hyperstition.
This leads to a whole new rabbit hole/can of worms, which is predictive coding: the idea that the brain creates a prediction of the world, and then the body acts to make that prediction come true. I won’t get into that in this article, but I do think the connection could be interesting.
ChatGPT o3 looked at my example with IP law and copyright, and asked whether all social norms are hyperstitions under my definition. I think that’s a fair criticism hinting that my definition may be overly broad, and I am open to suggestions for tweaking the definition that would make it much crisper as to what exact concept I’m gesturing at. That said, I do believe that any reasonable definition of hyperstition is going to have some overlap with social norms. Perhaps social norms are a subset of hyperstition, and so my answer to ChatGPT would be “yes, all social norms are hyperstitions”. I haven’t come up with any counterexamples, but it’s possible that I’ve missed some.
On the other hand, not every hyperstition is a social norm, so these are indeed distinct concepts. For example, “Cyberpunk” is not a “social norm”.
Claude didn’t have any questions for me, but presented this synthesis which I found very interesting, though I am too cautious to pronounce it “true”:5
What separates hyperstition from mere prediction or influence is its reflexive quality - the way it creates conditions for its own fulfillment through social mechanisms. These "fictions that make themselves real" operate as a peculiar form of social technology, invisible infrastructure that shapes our economic and cultural landscapes.
The most powerful hyperstitions share three characteristics: they're partially self-concealing (maintaining their kayfabe equilibrium by discouraging too much examination), they're behaviorally contagious (incentivizing performative belief-acts that spread the pattern), and they're resource-generating (creating value that sustains their propagation).
It’s definitely another interesting perspective to keep in mind when empirically collecting examples of hyperstition and checking how well each satisfies the three criteria I listed.
Claude also emphasized that hyperstitions are neither good nor bad. As Waldman argued in the banking example, hyperstition appears to be a necessary component of driving innovation in society (Waldman suggests that without it we never would have invented the automobile and we’d still be stuck as goat herders), but it also “enables a great deal of theft by those placed at the center of the shell game. But surely that is a small price to pay for civilization itself. No?”
Claude goes even further:
By recognizing hyperstitions as emergent social technologies rather than mere falsehoods or truths, we can better evaluate their utility, sustainability, and consequences. Some hyperstitions may be worth cultivating despite their fictional foundations; others might demand more conscious collective management or reformation. Either way, hyperstition reveals how deeply our social reality depends not just on what is objectively true, but on what we collectively act as if were true.
In a world increasingly shaped by narrative and information flows, the power to identify, analyze and perhaps even consciously design beneficial hyperstitions may become an essential skill for navigating our complex social environment - enabling us to distinguish between fictions that empower and those that merely entrap.
Again, perhaps true, perhaps not—I’m not confident enough to say yet. But I do think hyperstition is severely underexplored. It’s unclear to me whether we, as in humanity, are capable enough to intentionally engineer and deploy “socially beneficial” hyperstitions. Money and legal systems are certainly things we-as-in-humanity deployed and they are hyperstitions, but I have my doubts as to how much of the effects of “money”, to focus on one example, were predicted-by and intentional-outcomes-of whoever it is that invented that idea.
But perhaps intentionally engineering and deploying hyperstitions is one of the things Super Intelligent AIs will be able to do. If so, it’s important for us to understand how they work so that we have a better chance of predicting whether their net effect will be good or bad for humanity.
“Today I Learned”
As opposed to philosophers like, say, Daniel Dennett, who I feel has put tremendous effort into making sure his writing was very clear. The problem with “unclear philosophers” is that by not communicating their ideas clearly, they make it more difficult to know whether you disagree with them or, indeed, whether they are saying something that is factually and objectively wrong. That’s the incentive structure for why many philosophers enjoy using unclear language. And that’s why Dennett (among others) is admirable for fighting against that incentive structure and making it easier for his critics to identify where, if anywhere, he said something that turned out to be wrong.
A thing is “diegetic” if it exists in the fictional universe being presented. In most movies, the camera man is non-diegetic: i.e. the actors pretend that there is no camera man present.
“Live Action Role Playing”. The stereotypical image is of grown men wearing chain mail and carrying rubber swords, speaking in a pop-culture (i.e. inaccurate) form of “Olde English”, but “LARPing” is being used here much more generally and vaguely.
Completely off topic, but I can’t help but point out that Claude used a normal dash where I would have used an em-dash. I’m just providing this as evidence that the presence of an em-dash in text is not conclusive proof of AI-generated text. I regularly use em-dashes in my writing, and here is an LLM that chose not to use an em-dash where it was clearly warranted.


