In March 2023, Silicon Valley Bank collapsed in the fastest bank run in US history. It did not collapse because of fraud, nor because of insolvency (though that followed), but because of speed. In a single day, depositors withdrew almost $42 billion – roughly a quarter of the bank’s total deposits. This wasn’t triggered by the traditional signals of financial markets – an earnings call, updated financials, a public statement, or even a leak. It was the result of a digitally accelerated belief cascade. No single physical event or official statement served as a trigger. It was sentiment and messaging momentum spreading through Twitter, Slack channels, private WhatsApp groups, and SMS – statements and re-statements by venture capitalists, founders, and the general public. There was no single authoritative source of information, and no fact-checking could catch up in time. One founder reportedly pulled $3 million out “… after seeing three other founders do the same in Slack.” Within a few hours, the idea that the bank was unsafe became a shared certainty. Whether that idea was true in any objective sense no longer mattered: the belief itself became a self-fulfilling prophecy. A modern bank run unfolded at the speed of attention. It wasn’t insolvency that caused the run; it was the run that caused the insolvency. SVB’s collapse was not just a financial event. It was a failure of epistemic infrastructure.
This was not an anomaly. It was a preview of what’s to come. When ideas propagate faster than they can be verified, and amplification is divorced from epistemic soundness, the information ecosystem can become unstable. Rumors move faster than rebuttals. Current incentive structures reward virality over validity and nuance. And the more powerful the tools for communication become, the faster and more costly this dynamic gets. The SVB collapse is just one example. We have already seen similar patterns in the spread of conspiracy theories, vaccine misinformation, election denialism, October 7th distortions, and other manipulative AI-generated content. What these cases share is not ideology but infrastructure: systems that reward engagement, stickiness, and emotional resonance regardless of truth. There is a reason modern media has been called an “outrage machine”. As we shall see, what happened with SVB is not unique to finance. It is a symptom of a deeper problem: we now live in a world that allows ideas to spread and gain influence far faster than they can be tested, contextualized, or corrected. The resulting beliefs do not need to be true to have consequences – they only need to be contagious.
In earlier times, harmful ideas might have taken years to propagate, constrained by the pace of communication and the gate-keeping functions of traditional institutions. Today, the infrastructure of information flow – social media platforms, recommender algorithms, and the resulting virality dynamic – has all but eliminated those constraints. Belief propagation now operates at the scale and speed of computation, while the mechanisms we use to evaluate truth, assign credibility, or challenge falsehood remain fundamentally human: slow and fragile. The spread of beliefs follows feedback loops shaped by reward structures, trust signals, amplification mechanisms, and social incentives. And when those loops reward emotional provocation over accuracy, the resulting system doesn’t just fail to deliver truth – it can actively suppress it.
This creates a dangerous asymmetry. Coordinated disinformation campaigns, viral hoaxes, conspiracy theories, and emotionally manipulative content exploit this infrastructure to gain disproportionate influence. The result is not just individual confusion or disagreement – it is systemic epistemic harm. The erosion of our collective capacity to distinguish truth from falsehood, relevance from noise, and credible expertise from performative influence has far-reaching consequences, as we have already observed. The architecture of digital communication now enables and amplifies epistemically unsound content at scale – and does so by design. The incentives are deeply embedded: they range from emotional engagement and identity signaling to algorithmic amplification and financial reward. These systems are not broken. They are working exactly as they were designed to.
To understand the stakes, consider how this architecture has played out across different domains – from public health to financial markets to violent conflict. During the COVID-19 pandemic, misinformation about vaccines, treatments, and transmission spread faster than public health authorities could respond, eroding trust and costing lives. The perverse incentives themselves aren’t new. In the 1980s, fossil fuel companies funded disinformation campaigns to delay climate action. Decades earlier, tobacco companies knowingly suppressed and obfuscated research on the health effects of smoking. The intent behind these campaigns has not changed, but the infrastructure certainly has. Today, the cost of producing persuasive, misleading content has collapsed, while its reach has expanded exponentially. What once required years of lobbying and expensive media manipulation can now be accomplished in hours by a handful of coordinated actors – or, increasingly, by individuals employing modern generative AI tools. The information ecosystem has already been radically restructured by machine learning and AI. Attention is now one of the most valuable and contested resources in the world, and the infrastructure that governs its flow is optimized for engagement, not accuracy. This creates a structural vulnerability: systems that amplify whatever captures attention will, over time, select for content that is emotionally evocative, polarizing, and frequently false. The mechanisms we have for deciding what is real – our epistemic infrastructure – are no longer adequate to the demands placed upon them. We are past the point where conventional moderation, fact-checking, or appeals to reason can contain the damage. We need better systems – not just better content, better ethics, or better users. We need infrastructure-level interventions that reshape how attention, trust, and epistemic weight are assigned in the digital environment. We are not building a system to declare what is true for everyone. We are building infrastructure to make disagreement navigable – to help societies reason through complexity without collapsing into chaos or drowning in noise. This is done by engineering, not by force: by altering the substrate on which ideas compete for visibility and influence.
Belief is no longer a binary state of being convinced or unconvinced. It is a dynamic, socially mediated process that emerges from interactions between information, trust, incentives, and attention. In a digital world, beliefs do not simply settle in individuals – they spread, mutate, and amplify across networks, social and otherwise. This makes belief formation fundamentally different from how it worked in earlier, slower media ecosystems. An idea, once seeded and rewarded with engagement, can propagate rapidly before its truth value is assessed or even noticed. An idea turns into belief through a complex cognitive uptake process involving attention, prior knowledge, identity alignment, and trust – but the average consumer is now exposed to orders of magnitude more content, both per unit time and in absolute terms, overwhelming most people’s cognitive capacity. Because falsehoods tend to be more novel – and thus carry higher information entropy – they exploit the well-documented human bias for novelty. Their improbability and violation of expectations make them intrinsically attention-grabbing and more share-worthy. Anyone who has used social media has had experiences that corroborate this.
And belief is not harmless. In systems terms, belief is a strategy. It encodes assumptions about the world, guides actions, and signals group alignment. When beliefs misrepresent reality – especially on a grand scale – they become a kind of epistemic pollution. This pollution doesn’t just affect the individuals who hold the beliefs. It spills over, altering the information environment for everyone else. This makes misinformation a systemic risk, not just a personal failure. The real harm, then, is not simply that false beliefs exist. It is that they shape collective decisions: about vaccines, elections, markets, and climate. And unlike individual error, which can be self-correcting, systemic epistemic failures accumulate. They perpetuate flawed narratives, erode institutional trust, and make coordinated action – in an increasingly complex, interdependent world – more and more difficult just as we face new challenges.
The mechanisms behind these failures are not random. They follow recognizable patterns and can be analyzed through the lens of economic and game-theoretic models. Belief propagation is a dynamical-systems problem. Classical accounts of communication often assume good-faith exchange between rational agents with aligned goals. But the digital epistemic environment operates under very different conditions. It resembles a continuous, multi-level principal–agent problem, distributed across a network of misaligned incentives and asymmetric information. Users (principals) delegate epistemic labor – deciding what to see, read, and, increasingly, believe – to platforms and curators (agents). But those agents do not necessarily optimize for accuracy or soundness. They more often optimize for engagement, retention, and revenue. Their proxy for success is not truth but time-on-site. Content creators, in turn, act as strategic players anticipating what the platform will reward, producing content optimized for virality rather than veracity. Thus, the incentive chain is broken at every level.
This recursive misalignment can be modeled more formally as an iterated, nested Stackelberg game – an abstraction that helps illustrate how each actor in this ecosystem anticipates the next, producing perverse outcomes driven by the structure of incentives. In such a game, strategic leaders (content producers) make moves anticipating the responsive amplification behavior of platforms (algorithmic curators), which themselves react to the behavior of followers (users), whose preferences have been shaped by prior rounds of exposure. Each agent optimizes its behavior under bounded rationality and partial information. The resulting feedback loops distort behavior, and the result is not convergence to shared truths but epistemic volatility. It is no surprise, then, that there are no stable equilibria, and attention becomes path-dependent. Outrage leads to clicks; clicks lead to amplification; amplification incentivizes imitation. A positive feedback loop emerges, selecting not for coherence or epistemic integrity, but for emotional salience, novelty, and tribal alignment.
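To make the nesting concrete, here is a minimal formal sketch. The notation and utility functions are purely illustrative – our own shorthand, not drawn from any specific platform or reference: a producer chooses content c, the platform chooses an amplification policy a, and users allocate attention e, with each level anticipating the best response of the level below.

```latex
\begin{align}
  e^{*}(c,a) &= \arg\max_{e}\; U_{\mathrm{user}}(e;\, c, a)
    && \text{attention allocated under cognitive constraints} \\
  a^{*}(c)   &= \arg\max_{a}\; U_{\mathrm{platform}}\bigl(a;\, c,\, e^{*}(c,a)\bigr)
    && \text{proxy objective: engagement, not accuracy} \\
  c^{*}      &= \arg\max_{c}\; U_{\mathrm{producer}}\bigl(c;\, a^{*}(c),\, e^{*}(c, a^{*}(c))\bigr)
    && \text{content shaped by anticipated amplification}
\end{align}
```

When the platform’s utility rewards time-on-site and the producer’s utility rewards reach, the chosen content drifts toward whatever maximizes engagement regardless of truth; iterating the game makes user preferences themselves endogenous to prior rounds of exposure, which is the path dependence described above.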
Importantly, this is not a simple story of irrational actors. Even if each node in the network acts “locally sensibly” – that is, rationally within its constraints – the global outcome can be catastrophic. This system exhibits all the features of a complex adaptive process: a high degree of interdependence (high connectivity, in graph-theoretic terms), non-linearity, and emergent dynamics. Small perturbations can therefore cascade, and attempts at correction can backfire. These are hallmarks of what behavioral economics and planning theory call a “wicked problem”. There is no obvious objective function, no universal metric of success, and no central actor capable of enforcing coherence across the system. Every intervention changes the problem itself; it resists solution. This means that the solution, if there is one, must operate on a different plane entirely and change the rules of the game itself.
With the advent of powerful generative AI, we now inhabit a world where synthetic content can be produced on demand: plausible, increasingly personalized, and hyper-persuasive – but divorced from truth. The marginal cost of producing high-quality disinformation has collapsed. The cost of validation, meanwhile, remains stubbornly high. This asymmetry introduces a systemic risk: in any market, when the cost of producing counterfeits drops to zero but the cost of verifying authenticity remains high, the system is flooded with noise. We’ve seen this in economics, in spam detection, and now in knowledge itself. Epistemic value, previously inferred through trust signals – source credibility, institutional filtering, human effort – degrades and becomes indistinguishable from virtue signaling, posturing, and manipulation. If this continues unchecked, we risk a collapse of the entire epistemic infrastructure: a breakdown of the informal but essential scaffolding that allows truth to function as a signal. In such a world, it no longer matters whether something is true. What matters is whether it spreads, whether it provokes, whether it fits the emotional priors of its target audience. The result is an accelerating degradation of public reasoning and collective sense-making. Compounding this problem is the architecture of the platforms through which most people now consume information: these systems are not neutral. They are optimized, relentlessly, for engagement – clicks, shares, likes, and comments – measured in real time and tuned by reinforcement-learning loops virtually indistinguishable from slot machines. What is rewarded is not insight but novelty; not truth but traction.
This creates a kind of adversarial attention economy, where content is rewarded for its ability to provoke, not inform. We have inadvertently constructed an environment where viral garbage is monetizable and epistemically grounded analysis is punished almost by design. Note that this is not a consequence of individual malice or institutional failure. It is the logical outcome of incentive structures built without regard for epistemic health. The result is a feedback loop eerily reminiscent of addiction science. Dopamine-driven engagement begets more of the same, distorting not just what we see but what we believe. And over time, what we believe shapes how we act – in markets, in elections, in pandemics. In this model, as shared trust decays, the very concept of truth becomes contested territory. The collective consequence is a form of systemic epistemic harm: not just being wrong, but losing the ability to distinguish what is true from what is false.
This is not just a theoretical concern. The real-world consequences are well documented and can easily be devastating. In the years leading up to the Rohingya genocide, Facebook’s algorithm amplified divisive, incendiary content in Burmese – much of it manufactured, and much of it false. Human moderators lacked even the language skills, let alone the local context, to respond effectively. The platform functioned as an accelerant, spreading hate speech faster than it could be removed and ultimately facilitating real-world violence. A UN fact-finding mission singled out Facebook’s algorithmic amplification as a key enabler of the disaster.

The GameStop short squeeze, propelled by forums like r/WallStreetBets, revealed how belief propagation in retail investor communities – often driven more by memes than by forecasting or financial modeling – could destabilize financial markets. Amplified by media attention and low-friction trading apps, a kind of financial flash mob formed. While many participants saw it as a joke or an expression of protest, some institutional funds lost billions, and retail investors were left exposed to enormous volatility – all driven by a feedback loop of hype, crowd psychology, and algorithmic momentum.

And of course, during the COVID-19 pandemic, vaccine skepticism – often seeded in Facebook groups, on Twitter, and in niche forums – rapidly metastasized into broad anti-vaccine sentiment, partly because platform recommendation systems surfaced sensationalist content. Algorithms promoting “related” videos or posts often led users from legitimate health information to conspiracy content, creating epistemic rabbit holes with real and severe public health consequences. Multiple studies have confirmed that platforms like YouTube and Facebook played a central role in the growth of these clusters of misinformation.

A more recent example, the internal conflict within Facebook’s Responsible AI team, offers a stark illustration of the failure modes of centralized interventions. In 2021, the team built a model that successfully reduced anti-vaccine misinformation. But it was blocked from deployment – not because it didn’t work, but because it flagged more “conservative” users than “liberal” ones. In the name of fairness, the tool was made ineffective. The result was a policy that neutralized the detector’s impact, allowing known misinformation to persist in the feed. As J.E. Gordon said (in relation to a rather different set of engineering failures, but if the shoe fits, wear it): “People do not become immune from the classical or theological human weaknesses merely because they are operating in a technical situation, and several of these catastrophes have much of the drama and inevitability of Greek tragedy. … Under the pressure of pride and jealousy and political rivalry, attention is concentrated on the day-to-day details. The broad judgements, the generalship of engineering, end by being impossible. The whole thing becomes unstoppable and slides to disaster before one’s eyes. Thus are the purposes of Zeus accomplished.”
Each of these cases reflects a broader failure of design: an inability – or unwillingness – to prioritize epistemic integrity in systems built for speed and scale. To be fair, some platforms have attempted to address these failures. The last few years have seen a wave of modest yet telling interventions – often reactive, often belated, but instructive nonetheless. For example, Twitter’s “Read Before Retweet” prompt gently nudged users to open an article before sharing it, decreasing blind reshares by significant margins. This was a small architectural tweak – a kind of infrastructural nudge – showing that behavior can be shaped without coercion. YouTube’s down-ranking of conspiracy content began after intense public pressure and academic scrutiny. But while such efforts reduced exposure to certain categories of misinformation, they were uneven, opaque, and susceptible to reversal. TikTok introduced screen-time nudges and content filters for teenagers after mounting concern about mental health harms. However, these kinds of nudges are fragile, and can be circumvented or ignored. In a more powerful example, Apple’s shift toward privacy-respecting ad models has fundamentally altered the data economy, forcing a substantial reconfiguration of the targeted advertising market. While driven by brand reputation considerations and regulatory pressure, it shows that systemic defaults can be re-engineered, even at global scale.
Each of these interventions reveals something important: it is possible to reshape engagement architecture in ways that reduce harm. But these are patches, not principles. They are vulnerable to leadership changes, political pressure, and shifts in public sentiment. They remain top-down and proprietary, built into centralized platforms that retain full control over what is amplified and why. And as long as platform incentives remain aligned with engagement over epistemic health, these fixes are unlikely to endure or to be as effective as we would like. If the digital epistemic crisis is structural and not just moral or individual, then so must be our response. The solution is not to legislate belief or suppress speech, but to redesign the underlying infrastructure: change the pipes, not the water. As with urban planning or transportation safety, the aim is to reshape incentives, reduce systemic risk, and make catastrophic failure less likely. We envision a set of tools that individuals can use to improve their information diet and make it more epistemically sound. The remainder of this essay sketches design principles for a healthier information environment.
The first principle we propose is that interventions ought to be non-coercive. No one should be silenced by fiat, nor excluded from participation on ideological grounds. But non-coercion does not mean neutrality: infrastructure always encodes values. To forestall the inevitable libertarian critique: our current systems already shape attention; they just do so in ways that reward what we want to minimize – manipulation and outrage. Replacing those defaults is not censorship – it is a systemic redesign of attention sinks. The proper analogy is not to free-speech laws but to traffic management. Roads have lanes, signals, and signs. We do not ban cars, but we do use design to shape traffic flows, prevent collisions, and protect pedestrians. We don’t dictate destinations, but we can – and must – build guardrails, friction, and feedback loops that help epistemic traffic flow in a more constructive direction. The goal, then, is not suppression of specific content but a change in how influence accumulates. A system that slows the spread of unvetted claims, elevates long-term reputation over short-term virality, and gives users better tools to shape their own feeds is not a form of censorship; it is a new kind of epistemically aware infrastructure. And – perhaps – it can be built in a decentralized, client-side fashion, where each user or community can tune its own filters and standards without requiring a central authority.
The second principle is that trust must be reimagined as a systems property, not a personal feeling or a centralized decree. The current ecosystem often defaults to one of two problematic extremes: popularity as a proxy for epistemic value (likes, shares, followers), or top-down curation by platforms or “experts.” As we have argued, neither is particularly robust, especially in a polarized, high-pressure information space. Instead, we can draw from existing trust architectures that already work, however imperfectly. PageRank, Google’s original algorithm, used the link structure of the web to infer importance in a decentralized manner. It did not evaluate content directly, but used the transitive properties of the web graph to infer value. Critically, it decayed influence with distance – preserving locality and preventing runaway cascades. Another pertinent example, academic peer review, despite its obvious flaws, remains the dominant trust mechanism in science. It works not because it guarantees truth, but because it allows recursive vetting by domain-literate peers. This structure of nested trust – reviewers reviewing each other – creates a form of epistemic scaffolding we would like to generalize. A further example, from finance, is credit scoring. While opaque and often unfair, credit scores show that evolving, behavior-based proxies for reliability can function at scale. Their flaws – centralization and lack of recourse – underscore what not to replicate. Finally, OpenReview, a more transparent and dynamic system for academic vetting, introduces features like public comments, revision history, and distributed review weight. It allows for challenge, evolution, and accountability – important features in a world where the truth evolves rapidly and context shifts unpredictably. The key takeaway from these examples is that trust must be earned, transparent, and revisable. It should decay with time, recover with demonstrated integrity, and travel with the user across communities. It must be contestable. This makes it robust to error, fraud, and change.
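To make the “decayed influence with distance” point concrete, here is a minimal power-iteration sketch of PageRank. The toy graph and parameter values are illustrative only; the damping factor is what attenuates influence at each hop and prevents runaway cascades.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
    """Minimal power-iteration PageRank over an endorsement graph.

    adj[i][j] = 1 means node i links to (endorses) node j. The damping
    factor attenuates influence at every hop, so endorsements that are
    far away in the graph contribute exponentially less.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    # Nodes with no outgoing links distribute their weight uniformly.
    transition = np.where(out_degree > 0, adj / np.maximum(out_degree, 1.0), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1.0 - damping) / n + damping * transition.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Toy graph: everyone eventually points at node 3; node 0 receives no endorsements.
graph = [[0, 1, 1, 1],
         [0, 0, 0, 1],
         [0, 0, 0, 1],
         [0, 0, 1, 0]]
print(pagerank(graph).round(3))  # node 3 accumulates the most rank, node 0 the least
```

The property worth generalizing is that influence is transitive but attenuated, and must flow through an explicit endorsement structure – exactly the behavior we would want from a trust graph rather than a hyperlink graph.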
How might these principles be implemented in practice? There are a few concrete, feasible interventions that could be layered onto existing platforms – or, preferably, built into new, decentralized infrastructure. Slower sharing for low-trust sources would introduce delays or added friction for newly created accounts or domains with no established reputation. The core premise is that algorithmic verification and fact-checking are possible – given enough time. This limits the velocity of bad actors without silencing them. “Read-before-share” nudges, as Twitter (pre-Elon) implemented, showed that a simple prompt can dramatically reduce the blind resharing that feeds viral cascades. This preserves freedom while discouraging impulsive amplification. Ultimately, we would like to move toward user-configurable feeds: instead of opaque ranking algorithms maintained by platforms and tweaked at will – sometimes in response to belief cascades, sometimes by impulsive individuals – users ought to have control over which attributes of content they wish to prioritize: novelty, source credibility, factual consensus, diversity of viewpoint, and so on. Another critical feature is trust-weighted amplification: rather than defaulting to virality, we would like to use a kind of epistemic inertia. Posts from low-trust sources start with limited reach and only scale as they accrue endorsements from verified or high-reputation nodes – think of it as a reputational rate limiter (a toy sketch follows below). Lastly, whatever system we design must provide for recourse and transparency. Trust dynamics have to be visible and explainable, users ought to be able to contest their own standing, and communities should be able to define their own trust heuristics, linked together via shared accounts and structured feedback mechanisms. These interventions do not require banning anyone. They require sensible defaults, friction, and structure – just as any other well-designed system does. They aim not to prevent dissent, but to disincentivize manipulation, virality for its own sake, and the weaponization of outrage.
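Here is a deliberately simplified sketch of what a reputational rate limiter might look like. The names, numbers, and functional form are hypothetical, chosen only to show the shape of the mechanism, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class Post:
    source_trust: float    # 0.0 (new/unknown source) .. 1.0 (long-standing, high-reputation)
    endorsements: float    # trust-weighted endorsements accrued from high-reputation nodes
    raw_engagement: float  # clicks/shares a conventional ranker would optimize for

def reach_cap(post: Post, base_reach: float = 1000.0,
              endorsement_gain: float = 5.0) -> float:
    """Illustrative reputational rate limiter.

    Reach starts small for low-trust sources ("epistemic inertia") and grows
    with trust-weighted endorsements. Raw engagement deliberately does not
    enter the calculation at all.
    """
    inertia = 0.1 + 0.9 * post.source_trust
    earned = 1.0 + endorsement_gain * post.endorsements
    return base_reach * inertia * earned

# A viral post from an unknown source vs. a quieter post from a trusted one.
unknown_viral = Post(source_trust=0.05, endorsements=0.0, raw_engagement=50_000)
trusted_calm = Post(source_trust=0.90, endorsements=2.5, raw_engagement=300)
print(reach_cap(unknown_viral))  # 145.0: capped despite massive raw engagement
print(reach_cap(trusted_calm))   # 12285.0: reach earned through trust
```

The post from the unknown source can still spread, but only after trusted nodes vouch for it; the one design choice that matters is that reach is a function of trust and endorsement rather than raw engagement.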
Having outlined the principles of epistemically grounded infrastructure, we can now turn to the most common objections – and to the path forward for practical implementation. It should be clear that this proposal is not a utopian blueprint or a recipe for new coercive mandates; it is a set of infrastructural interventions designed to reduce systemic epistemic risk and improve hygiene at scale while preserving pluralism and voluntary participation. A predictable critique – especially from libertarian quarters – is that any constraint on information flow is tantamount to censorship. But this critique rests on a category error: all infrastructure constrains behavior. Constraints are not inherently coercive; they are the price of functioning systems. Traffic laws don’t dictate where you go, but they make it more likely you’ll get there alive. Building codes don’t tell you how to live, but they ensure the roof doesn’t collapse after a particularly bad snowfall. Browser ad blockers, spam filters, and parental controls shape what content is visible by default, yet no one mistakes them for thought police. What matters is not whether there are constraints, but who controls them, how transparent they are, and whether they can be contested or opted out of. The current environment is already shaped by invisible, unaccountable constraints – algorithmic promotion, revenue dynamics, and frequently opaque moderation. Our proposal is to make the incentives legible and tunable, so that individuals and communities can align them with their own values.
We believe the most promising route to implementation is client-side: a user-controlled intelligent assistant, perhaps implemented as a browser extension and/or a local LLM-based co-pilot. Instead of mandating changes at the platform level – which would inevitably turn into a political and logistical quagmire – this approach empowers users to filter and prioritize content according to the principles they endorse. Such a system might include a local filter layer, powered by fine-tuned language models, that evaluates incoming content for factuality, provenance, emotional charge, or other metrics. User-configurable knobs for adjusting feed characteristics (e.g., novelty vs. credibility, speed vs. accuracy, diversity vs. consensus) would let users customize what they choose to engage with. An optional trust-network graph, where users opt into decentralized trust signals – much like a social web of citation or endorsement – would replace or augment platform feeds. Because the implementation is client-side, adoption is not coerced – it is driven by outcomes and user experience. If this hypothetical cognitive assistant helps users find higher-quality content, avoid spam and outrage bait, and feel more in control of their information diet, adoption will spread. If not, it won’t. That is the bet: voluntary adoption is key. Those uninterested in such tools can continue using platforms as-is. But the availability of an alternative creates competitive pressure for better defaults – just as ad blockers eventually pushed platforms toward less intrusive monetization schemes.
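As a toy illustration of the user-configurable knobs, a transparent client-side ranking function might be nothing more than a weighted sum the user can inspect and edit. The attribute names, scores, and weights below are hypothetical placeholders produced by the imagined local filter layer, not a proposed standard.

```python
# Hypothetical per-item scores emitted by the local filter layer (all names illustrative).
item = {
    "novelty": 0.90,
    "source_credibility": 0.20,
    "factual_consensus": 0.30,
    "viewpoint_diversity": 0.60,
    "emotional_charge": 0.95,
}

# User-configurable knobs: each user or community sets (and can share) their own weights.
weights = {
    "novelty": 0.10,
    "source_credibility": 0.40,
    "factual_consensus": 0.30,
    "viewpoint_diversity": 0.20,
    "emotional_charge": -0.30,  # negative weight penalizes outrage bait
}

def rank_score(item: dict, weights: dict) -> float:
    """Transparent ranking: every term is visible, tunable, and contestable."""
    return sum(w * item.get(attr, 0.0) for attr, w in weights.items())

print(round(rank_score(item, weights), 3))  # 0.095 with the weights above
```

A community could publish its own weight profile, and switching profiles would be a one-line change – the opposite of an opaque, platform-owned ranking model.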
A common fear is that any attempt to formalize trust or truth will lead to a new orthodoxy – a centralized arbiter of reality, or a new “digital religion.” The concern is legitimate, but it misreads the proposal. The danger is real: history shows that institutions tasked with guarding truth can ossify into gatekeepers. That is why open protocols, decentralized architecture, and transparency are essential. The system should not rely on a central authority to determine what is true. Instead, it should expose the structure of trust judgments – who endorses what, under what assumptions, with what caveats – so that truth becomes an emergent property of a contested but accountable process. This means interoperable trust layers: communities define and share their own trust heuristics, which can coexist or compete in a broader ecosystem. Transparency of ranking and filtering criteria is non-negotiable: the system must not consist of black boxes. And these features enable portability of trust: a reputation or trust profile built in one context should be exportable to others.
This approach does make room for epistemic pluralism. It does not require that everyone agree on facts, values, or authorities. But it does demand that those disagreements be transparent – traceable, challengeable, and subject to ongoing feedback. The challenge we face is not simply that falsehoods exist or that people sometimes believe them. It is that we have built and continue to optimize digital systems that systematically reward amplification over accuracy, outrage over coherence, and virality over truth. This is not just an individual problem of critical thinking or media literacy. It is a form of systemic epistemic drift, driven by misaligned incentives and unchecked feedback loops, now accelerated by the new capabilities of generative AI. The result is an information environment where truth loses its signaling function, where coordination becomes impossible, and where shared reality itself begins to degrade. We argue that we cannot fix this by yelling louder, fact-checking faster, or debating more virtuously. The problem is not just ethical or political – it is infrastructural. It is rapidly getting worse. And it demands infrastructural solutions.
The proposal laid out here is not a doctrine, a new dogma, or a recipe for a new centralized authority over truth. It is a set of design principles and technical options aimed at strengthening epistemic resilience: decentralized, non-coercive tools that can help shape how information flows, how trust forms, and how digital publics reason together. It does not prescribe what to believe, but it does offer a mechanism for helping us evaluate belief – together, contextually, and with feedback. Information is now infrastructure. It deserves the same care as critical physical infrastructure – planes, roads, railways, and bridges.
This is also a call to action. Consider one concrete positive outcome: wouldn’t it be great if, by the time of the next US election, every American voter had access to tools to help them make a more informed, grounded decision and more effectively determine what is true for themselves? There are people and organizations working on such tools – models tuned for scientific discovery, fact-checking, and verification – but these efforts are disjoint. Engineers, civic leaders, technologists, researchers, and policymakers must begin to prioritize the design space of epistemic infrastructure. It is time to stop treating the flow of digital belief as a cultural artifact or a free-for-all, and to begin treating it as the substrate upon which coordination, democracy, and our collective sanity depend. If we fail to act, we argue, the cost is not just confusion: it is systemic breakdown. In the age of AI, reality itself becomes a contested object. What collapses next will not be a bridge or a bank, but our shared capacity to know what is real. This is not an ethics problem. It is a systems problem with profound ethical consequences.
References:
https://arstechnica.com/ai/2025/06/the-resume-is-dying-and-ai-is-holding-the-smoking-gun/
https://www.nature.com/articles/s41562-025-02194-6
https://www.acpjournals.org/doi/10.7326/ANNALS-24-03933
https://www.latimes.com/business/story/2023-05-08/california-regulator-cites-social-media-digital-banking-as-key-factors-in-silicon-valley-banks-failure
https://www.fdic.gov/analysis/cfr/bank-research-conference/annual-22nd/papers/cookson-paper.pdf
https://www.reuters.com/article/business/bank-runs-in-the-twitter-age-svbs-collapse-poses-new-challenges-for-firms-idUSKBN2VW18W
https://arxiv.org/abs/2506.05373
https://iris.who.int/bitstream/handle/10665/43153/9241592907_eng.pdf
https://en.wikipedia.org/wiki/Stackelberg_competition
https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem
https://en.wikipedia.org/wiki/Wicked_problem
https://www.reuters.com/article/world/un-investigators-cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2Q4
https://time.com/6217730/myanmar-meta-rohingya-facebook
https://en.wikipedia.org/wiki/GameStop_short_squeeze
https://www.cato.org/cato-journal/fall-2021/gamestop-episode-what-happened-what-does-it-mean
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9359307
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8914400
https://www.washingtonpost.com/technology/2021/07/22/facebook-youtube-vaccine-misinformation
https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation
https://www.cis.upenn.edu/~mkearns/teaching/NetworkedLife/pagerank.pdf
https://www.dfs.ny.gov/system/files/documents/2021/03/rpt_202103_apple_card_investigation.pdf
https://openreview.net/about
I am deeply grateful to Ted C, Konstantine Arkoudas, Will N, Sergei Z, Anna B, Vlada B and others for insightful comments and feedback on earlier versions of this essay.