Your watch trims a microdose of insulin while you sleep. You wake up steady, never knowing there was a decision to make. Your car eases off the gas a block early and you miss a crash you never saw. A parental app softens a friend’s harsh message so a fight never starts. Each act feels like care arriving before awareness, the kind of help you would have chosen if you had the chance to choose.
Now the edges blur. The same systems mute a text you would have wanted to read, raise your insurance score by quietly steering your routes, or nudge you away from a protest that might have mattered. You only learn later, if at all. You approve some outcomes after the fact, resent others, and cannot tell where help ends and shaping begins.
The conundrum
When AI acts before we even know a choice exists, what counts as consent? If we would have said yes, does approval after the fact make the intervention legitimate, or does the loss of the moment itself matter? If we would have said no, was the harm averted worth the loss of authorship, or did the pattern of unseen nudges change who we become over time? The same preemptive act can be both protection and control, depending on timing, visibility, and whose interests set the default. How should a society draw that line when the line only becomes visible after the decision has already been made?