Newcomb’s Paradox as Information Revelation: Opening Schrödinger’s Box

Abstract

Newcomb’s Problem is standardly treated as a conflict between causal and evidential reasoning about what to do. I propose reframing it as a problem about what the agent learns. When you enter the room and experience your first instinctive pull toward one box or both, you undergo something structurally analogous to opening Schrödinger’s box: you receive new information that collapses an epistemic superposition. Your gut reaction is not a cause of the box’s contents but a readout of them, since the near-perfect predictor modelled the very dispositions now manifesting as your impulse. This reframe dissolves the two-boxer’s central argument—“I cannot causally change what is in the box”—by showing that the relevant question was never about causation. It was about observation. The practical upshot is a robust case for one-boxing that holds regardless of what the gut initially says.

1. The Problem That Will Not Die

Few thought experiments in the philosophy of decision have proved as durable as Newcomb’s Problem. The setup is by now familiar: a near-omniscient predictor has placed either one million dollars or nothing in an opaque box, depending on its forecast of your choice: the million if it predicted you would take only the opaque box, nothing if it predicted you would take both. A transparent box always contains one thousand dollars. You may take both boxes or only the opaque one. The predictor has already made its placement, and its track record is near-perfect.

Two camps have formed around the problem and have been arguing, in various registers of exasperation, for over half a century. Proponents of evidential decision theory (EDT) hold that since your choice is so strongly correlated with the prediction, one-boxing is the rational play: the expected value conditional on your choice overwhelmingly favours it. Proponents of causal decision theory (CDT) reply that the boxes are already set by the time you choose; nothing you do now changes their contents. By dominance reasoning, taking both always nets you an extra thousand dollars, so two-boxing is strictly rational.
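To fix ideas, here is a minimal sketch of the two calculations in Python. The predictor accuracy of 0.99 is an illustrative assumption; the problem itself fixes no exact figure, and any near-1 value yields the same verdicts.

```python
# Toy expected-value comparison for Newcomb's Problem.
# Illustrative assumptions: predictor accuracy 0.99, the standard
# payoffs of $1,000,000 (opaque box) and $1,000 (transparent box).
ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

# EDT conditions on the act: choosing one box is strong evidence
# that the predictor foresaw one-boxing and filled the opaque box.
edt_one_box = ACCURACY * MILLION                   # ~$990,000
edt_two_box = (1 - ACCURACY) * MILLION + THOUSAND  # ~$11,000

# CDT treats the contents as fixed, with some prior probability p
# that the million is present; the act cannot alter p.
def cdt_expected_values(p):
    one_box = p * MILLION
    two_box = p * MILLION + THOUSAND  # dominance: always $1,000 more
    return one_box, two_box

print(edt_one_box, edt_two_box)      # EDT: one-boxing wins decisively
print(cdt_expected_values(0.5))      # CDT: two-boxing wins for every p
```

The disagreement falls out of a single modelling choice: whether the probability of the million is allowed to depend on the act.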

Neither camp has won. The literature is a graveyard of attempted dissolutions, each of which turns out to be a restatement of the original impasse in new clothing. I want to suggest that the deadlock persists because both sides have been answering the wrong question. The dispute between CDT and EDT is framed as a question about what you should cause or what your action signals. What has been overlooked is a prior question: what do you learn when you walk into the room?

2. The Existing Landscape: Common Causes and Neural Predictability

Before presenting my reframe, it is worth surveying the terrain. Several authors have gestured toward relevant ideas without, I think, arriving at the destination I have in mind.

Wolfgang Spohn and John Burgess have explored “common cause” structures in which a shared prior state—the agent’s psychological dispositions—produces both the prediction and the eventual choice. This is an important observation, and my account builds on it. But the common-cause framing remains entangled in causal vocabulary and does not foreground the moment of epistemic update that I take to be crucial.

Peter Slezak has connected Newcomb’s Problem to empirical findings on readiness potentials—EEG experiments showing that unconscious neural activity precedes conscious choice by several hundred milliseconds. Slezak’s work makes the predictor’s task physically plausible: if neural correlates of a decision are detectable before the agent is consciously aware of deciding, then a sufficiently sophisticated scanner could in principle read the outcome in advance. I take Slezak’s point as background support for the scenario’s coherence rather than as a resolution of the paradox.

Two contributions explicitly invoke quantum mechanics. Eric Cavalcanti constructs a quantum analogue of Newcomb’s Problem using Bell’s Theorem and entanglement to argue that the paradox reveals a tension between locality and free choice akin to Bell-inequality violations. Ghislain Fourny uses the label “Schrödinger” in a discussion of Newcomb but focuses on the arrow-of-time problem—whether the prediction’s causal direction runs forwards or backwards. Neither of these is doing what I propose. Cavalcanti’s apparatus requires the full machinery of quantum entanglement. Fourny’s concern is temporal, not epistemic. What no one, to my knowledge, has done is frame the agent’s entry into the room—the moment of encountering the boxes and experiencing a gut-level pull—as an observational event that collapses an epistemic superposition.

3. The Schrödinger Reframe

Consider your epistemic state at each stage of the scenario.

Before entering the room.
The predictor has already made its placement. The million dollars either is or is not in the opaque box, but you do not know which. From your perspective the system is indeterminate: not physically, but epistemically. You assign some probability to each state. This is structurally analogous to the state of affairs before Schrödinger’s box is opened, on an ignorance reading of that thought experiment: the cat’s fate is already settled, but from the observer’s standpoint the system must be described as a superposition of both outcomes.

The moment of entry.
You walk in. You see the two boxes. And you experience, immediately and involuntarily, a gut-level pull—an instinctive inclination either toward taking the opaque box alone or toward taking both. This reaction is not deliberated. It arises from the very dispositional structure—personality, risk tolerance, philosophical commitments, temperament—that the predictor modelled when it made its placement.

Here is the key claim: your gut instinct is an observation that collapses the epistemic superposition. The predictor was near-perfectly accurate. Its accuracy means there is an extremely tight correlation between your dispositional makeup and the box’s contents. Your first instinctive reaction is a surface expression of that dispositional makeup. It is, therefore, a readout of what the predictor already decided. Just as opening Schrödinger’s box reveals a pre-existing state, registering your own gut reaction reveals the pre-existing state of the opaque box.
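The evidential weight of the reaction can be made precise with a simple Bayesian sketch. The reliabilities below are illustrative assumptions (the scenario stipulates only “near-perfect” prediction), as is the flat prior over dispositions:

```python
# Posterior probability that the opaque box contains the million,
# given the observed gut reaction. All numbers are illustrative;
# the problem itself fixes no exact figures.
def posterior_million(gut_says_one_box,
                      prior_one_boxer=0.5,   # assumed prior over dispositions
                      predictor_acc=0.99,    # P(million | one-boxing disposition)
                      gut_reliability=0.95): # P(gut matches disposition)
    # P(disposition = one-boxer | gut reaction), via Bayes' rule
    if gut_says_one_box:
        num = gut_reliability * prior_one_boxer
        den = num + (1 - gut_reliability) * (1 - prior_one_boxer)
    else:
        num = (1 - gut_reliability) * prior_one_boxer
        den = num + gut_reliability * (1 - prior_one_boxer)
    p_disp_one = num / den
    # Marginalise over dispositions to get P(million in the box)
    return p_disp_one * predictor_acc + (1 - p_disp_one) * (1 - predictor_acc)

print(posterior_million(True))   # ~0.94: gut pull toward one box
print(posterior_million(False))  # ~0.06: gut pull toward both
```

Even a moderately reliable gut reading shifts the posterior dramatically in one direction or the other; the predictor’s accuracy does most of the work.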

Note carefully what this claim is and is not. It is not a claim about backwards causation. Your gut feeling does not cause the million dollars to appear or disappear. The contents were fixed before you entered. It is, rather, a claim about information. Your instinctive reaction carries evidential weight about a hidden state because both your reaction and the hidden state spring from a common source: your deep dispositional profile. The gut instinct is a signal, not a lever.

4. The Practical Upshot: One-Box Either Way

If the foregoing is correct, the rational strategy becomes transparent. There are two cases to consider.

Case 1: Your gut says “take only the opaque box.”
Since the predictor modelled you with near-perfect accuracy, this instinct is strong evidence that the predictor predicted one-boxing. The opaque box almost certainly contains one million dollars. Take the opaque box. You walk away with a million.

Case 2: Your gut says “take both.”
By the same logic, this instinct is strong evidence that the predictor predicted two-boxing, and hence that the opaque box is empty. If you follow the instinct and take both, you get only the thousand. But you now hold new information: you know, with high probability, that the box is empty, and you know why. The rational response is to override the gut and take only the opaque box. The instinct, after all, is evidence about the prediction only insofar as it foreshadows your final choice, and the predictor modelled your entire deliberation, override included; an agent who reliably overrides in this situation is an agent the predictor would have classified as a one-boxer. If the predictor was nonetheless wrong about you, you lose a thousand dollars you would have had. If the predictor was right that you are a two-boxer and you break the correlation it relied on by overriding, you walk away with nothing; but the predictor’s near-perfect accuracy over final choices makes this scenario exceptionally unlikely. The expected value still overwhelmingly favours one-boxing.
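The override can be checked with the same toy numbers. What matters, on this reframe, is that the predictor’s accuracy attaches to the final choice rather than to the first impulse; a sketch under that assumption:

```python
# Case 2 check: the gut says "take both", but the predictor modelled
# the whole deliberation, override included. Illustrative accuracy
# 0.99 over *final choices*; payoffs as before.
ACC, MILLION, THOUSAND = 0.99, 1_000_000, 1_000

# Follow the impulse: the predictor very likely foresaw two-boxing,
# so the opaque box is almost certainly empty.
ev_follow = (1 - ACC) * MILLION + THOUSAND  # ~$11,000

# Override and one-box: the predictor very likely foresaw the
# override too, so the opaque box is almost certainly full.
ev_override = ACC * MILLION                 # ~$990,000

print(ev_follow, ev_override)  # overriding wins by ~$979,000
```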

In both cases, the answer is the same: take one box. The information-revelation framing yields a stable, univocal recommendation.

5. Dissolving the Two-Boxer’s Objection

The two-boxer’s strongest card is the dominance argument: “Whatever is in the boxes, taking both gives me a thousand dollars more. The prediction is already made. My choice cannot change the past.” This argument has considerable intuitive force, and on many accounts of the problem, it is difficult to refute without contesting the causal structure of the scenario.

The information-revelation reframe sidesteps it entirely. Grant everything the two-boxer says about causation. You cannot change the past. Your choice does not alter the box’s contents. All of this is true. But it is beside the point. The question was never “what should I cause?” The question is “what have I just learned?”

When you walk in and feel the pull toward both boxes, you have not received an instruction to obey. You have received a diagnosis. The gut reaction tells you, with high reliability, what the predictor decided. And what the predictor decided, given its near-perfect track record, tells you what is in the box. You are not choosing between actions that affect different outcomes. You are choosing how to respond to a piece of newly acquired information about a fixed state of the world.

6. The Recursive Wrinkle

An obvious objection presents itself. The predictor modelled your full reasoning chain—including, presumably, any Schrödinger-inspired reframe you might apply. If the predictor anticipated that you would read this paper, reason through the information-revelation argument, and decide to one-box, then the predictor would have predicted one-boxing and placed the million in the box. Does this not render the whole exercise circular?

It does render it circular. But the circle is a virtuous one...

7. Relation to Functional Decision Theory

Readers familiar with the recent literature may notice a surface resemblance to Functional Decision Theory (FDT), developed by Yudkowsky and Soares. FDT also recommends one-boxing...

8. Conclusion: From Decision to Discovery

Newcomb’s Problem has resisted resolution for over fifty years because the debate has been conducted on a playing field that guarantees a draw...

One-box. Your gut has just told you what is inside the box.