Permission Economy
The EU will roll out mandatory biometric screening for non‑EU travelers this October: fingerprints, facial recognition, and other data capture at the border. In the same week, TikTok announced “Footnotes,” a tool meant to “protect” users by labeling sensitive content and encouraging wellness prompts for younger audiences.
On the surface, these developments appear to prioritize safety. Safer borders. Safer screens. But there is a growing tension worth noting: when does protection stop being about safety and start being about permission?
And if safety comes with conditions that serve the system more than the individual, is it still safety, or just management?
The Soft Sell of Control
Modern systems rarely present themselves as controlling. They frame themselves as “streamlining,” “protecting,” and “making things safer.” A passport scan at a border is safety. A wellness nudge on a teenager’s TikTok feed is care.
Yet this framing obscures the trade-off. Control wrapped in the language of care often escapes scrutiny. Gratitude for being protected can mask the gradual loss of autonomy.
Autonomy Shrunk by Inches
Biometrics and platform safety tools are not inherently negative. The issue lies in the creep: the quiet expansion of what is collected, controlled, and categorized.
Borders are no longer simply checkpoints; they are data collection hubs, storing faces and fingerprints indefinitely (Miltgen et al., 2024). Social platforms aren’t just labeling content; they are shaping what audiences can see, how long they can see it, and what qualifies as “safe.”
If systems decide what is “safe” to see, do, or become, how much of that safety is truly ours—and how much of it belongs to them?
The Permission Economy
This dynamic reflects a broader cultural shift: the rise of the permission economy. Movement, expression, and even leisure are increasingly contingent on silent approval from institutional and algorithmic systems.
Platforms, in particular, claim to protect vulnerable users but often optimize “safety” in ways that protect their own interests first (Wisniewski et al., 2021). For example, if TikTok can create algorithms to encourage healthier scrolling for teens, why isn’t there a robust system to detect and flag adult accounts interacting with minors’ content? Safety measures tend to expand where they benefit the platform, not necessarily where users need them most.

Why It Matters
The point isn’t to reject safety measures or disconnect from digital life. It’s to remain aware of when protection morphs into containment. Perhaps the harder question isn’t whether safety works, but who it ultimately works for.
Whose version of safety are we living in? And what are the unseen costs of being “approved” at every border, every click, every scroll?
Because safety isn’t free. And if it always comes with conditions, it’s worth asking—protected from what, or from whom? (Sage et al., 2023).
Related Reads:
The Villain Era Is a Marketing Lie, But That Doesn’t Mean It’s Powerless
Hearts, Minds, and Hashtags: How War Propaganda Took Over Your Feed
How TikTok, Food Rituals, and Sunscreen Became Coping Mechanisms