
Welcome to the Age of Algorithmic Determinism
And the quiet outsourcing of human agency to platforms.
by Ali Rahman
Friends, Peers, Colleagues,
I am worried. As we become increasingly dependent on algorithmic systems to navigate the world, we are undermining our capacity to make choices. We are losing the ability to explore, to discover, and to encounter possibilities that are not already presented to us. Put simply, we are forfeiting our human agency, and with it, any real hope for a human future.
What am I talking about? I’m talking about how much of what we see, think about, and act upon is now mediated by systems designed to shape our attention and behavior. In a remarkably short time, we’ve grown comfortable outsourcing large parts of our judgment to algorithms and LLMs, and we’ve invited these systems into nearly every facet of our lives, from relationships to mental health, under the banner of optimization, without ever pausing to ask what, exactly, we are optimizing for.
All of this erodes our ability to make decisions, choose paths and determine our own futures. In interacting with interfaces all day we are presented with the illusion of choice; we’re always selecting things, but most of the choices we make online are inconsequential, and confined to a narrow set of options preselected for us, rather than defined by us.
I’m calling this phenomenon algorithmic determinism.
In philosophy, determinism describes a world where the future is already written, fixed by a continuous chain of cause and effect. Algorithmic determinism is different. It does not lock in outcomes, but steadily narrows the inputs, while eroding our capacities for curiosity, creativity, spontaneity, and judgment. Over time, it limits what we are able to see, imagine, and choose, until choice itself becomes little more than a formality.
In the rest of this essay, I want to explain how this kind of determinism actually takes shape. Not through a single technology or platform, but through a set of overlapping mechanisms that work together to limit our sense of the possible, and direct us towards predictable outcomes.
In brief, I will look at how:

- platforms rig their environments to make our behaviour more predictable;
- algorithmic curation saps our will and capacity to discover;
- platforms govern visibility, narrowing what can be seen, said, and imagined;
- LLMs and cognitive offloading erode critical thinking and authorship;
- self-optimization tools train us into dependence and conformity;
- predictive systems anchor us to the past, foreclosing open futures.

Taken together, these forces do not determine outcomes outright. They produce a closed epistemic environment that steadily erodes human agency, even as we continue to experience our lives as a series of choices.
Question: if a system can predict your behaviour with near-perfect accuracy, are you living in a deterministic universe? How can you have free will if your every action can be anticipated?
The systems we’re talking about here can’t actually predict your future actions with any meaningful degree of accuracy. In spite of the millions of behavioural signals they collect from you (without consent), they can only infer what you might do next based on correlative, historical data. They are wrong far more often than they are right. And the quality of their predictions craters when users move offline. It’s one thing to predict the next click, it’s a whole other thing to anticipate how someone will behave in the real world, with all its rich texture and variability. And that is a problem for platforms, because their value proposition to advertisers and data brokers depends on being able to offer reliable prediction.
Their solution? Rig the game. Reshape the environment to manipulate your behaviour and make it more predictable. Keep you in a state of heightened emotional arousal to hold your attention and diminish your capacity for critical reflection. Control what you see, and what vanishes from your feeds. Determine which actions are rewarded and which are frictionless, while burying any mechanisms that might give you even a modicum of actual agency over what appears in your feeds behind labyrinthine menu systems.
Shoshana Zuboff predicted all of this in her 2019 masterpiece, The Age of Surveillance Capitalism, which tracks how the economic logic of platforms (and their responsibilities to shareholders) forces them to move from prediction to manipulation to control.
Their value lies in being able to confidently trade “behavioural futures”, and the best way to predict the future is to constrain it and channel it down predetermined paths. They are engineering their environments to engineer you.
Do you remember combing through bins at record stores, browsing shelves in bookstores or libraries, talking to opinionated video store clerks about new releases, making mixtapes for friends? It’s easy to romanticize the physicality of vinyl or paper, but what we lost wasn’t material. It was friction. Discovery once required effort. You had to search, compare, tolerate uncertainty, and sit with things you didn’t immediately like. That work wasn’t an inconvenience to be optimized away. It was a cognitive and social practice that trained judgment, curiosity, and an internal sense of direction.
Today, culture is increasingly served to us rather than discovered by us. Content arrives passively, without effort or intent on our part. And to be fair, much of what we’re served is perfectly acceptable. It aligns with our existing tastes. But this convenience comes at a cost. When discovery is frictionless, curiosity is never asked to overcome resistance. The skills required to explore, wander, and decide for ourselves begin to atrophy.
Algorithmic curation flattens culture by prioritizing the familiar and discarding difference. When we are served only the inoffensive and the recognizable, we lose the discomfort essential for growth. As Kyle Chayka argues in Filterworld, this machine-guided seamless curation has turned us into “docile consumers” of culture, trading our individual tastes for a homogenized “average”, while atrophying our tolerance for the unfamiliar.
This homogenization is paired with a radical isolation. Hyperpersonalization fragments culture into discrete, private streams, weakening the collective spaces where meaning was once negotiated and shared and where bonding and bridging social capital flourished. The thrill of social discovery, of discussing, debating and dissecting a piece of art vanishes from our lives.
Over time, this narrows our exposure to ideas, aesthetics, perspectives and people that might disrupt us or push us somewhere new. More importantly, it weakens the mental muscles we use to explore and orient ourselves in the world. We need those muscles. Free will is impossible without them, because choice requires more than selection. It requires the ability to seek, to tolerate uncertainty, and to move toward what we do not already recognize.
Moreover, culture has always functioned as a kind of sandbox, a low-stakes training ground where we could experiment with ideas and disagreement together. When discovery becomes individualized and frictionless, we lose that shared space for learning how to live with difference.
While curation saps our will to explore, the boundaries of what we can explore are increasingly governed by algorithms and their opaque rules. This brings us to the question of visibility.
The early internet was messy and imperfect, but it was open. Anyone could publish. Communities formed around shared interests rather than metrics. Discovery happened through links, forums, blogs, and collaborative projects. You could get lost. You could stumble into worlds you didn’t know existed.
Today, access to information, culture, and one another is mediated through a small number of platforms (including search engines) that present themselves as neutral connectors, but are in fact gatekeepers of the web. As researcher Tarleton Gillespie has shown, platforms govern visibility through ranking, recommendation, moderation, and monetization, determining what is legible, relevant, and reachable in the first place.
Rather than resort to censorship, platforms govern visibility by downranking, deprioritizing, demonetizing, and shadowbanning content that doesn’t conform to their logic. We in turn respond not by demanding more transparency or control over what platforms render visible, but by altering our own behaviour to satisfy the requirements of the platform.
Put simply, we learn what gets traction and adapt accordingly. Certain formats, tones, rhetorical moves, and ideological positions circulate more reliably than others. To remain visible, expression becomes narrower and more predictable.
Over time, not only the form of what we say changes, but also its substance. We stop experimenting and deviating, staying within the bounds the system can recognize and reward. Think about the impact these systems have on our civic structures when they determine what news, beliefs, and ideas enter the public sphere.
This matters, because what we cannot see, we eventually cannot imagine. And what we cannot imagine, we can neither choose nor work toward.
This is where algorithmic determinism becomes fully social. Futures are not closed off through force or prohibition, but through absence. Possibilities that do not appear cannot be debated, pursued, or collectively worked toward. The horizon closes not because the exits are locked, but because they have been rendered invisible.
To no one’s surprise, emerging research suggests that prolonged use of LLMs weakens critical thinking. Cognitive offloading is not a new phenomenon, nor is it inherently bad. Any technology, from the calculator to the stone tablet, can extend our mental space, but there is always a cost. For example, the Google Effect describes how we tend not to remember information we can easily search for online. But scanning, sorting, and selecting search results still required a measure of discernment. There was friction. There was variation. There were outliers. There was the need to judge.
As we increasingly bypass the search engine and bring our queries straight to LLMs (or have them answered in Google AI summaries), we eliminate almost all cognitive effort between question and answer. The system collapses multiplicity into a single, confident output. And because that output is delivered in smooth, authoritative language, it feels settled in a way that discourages further inquiry.
But critical thinking is a process, not an output. It is formed through hesitation, comparison, uncertainty, responsibility and conviction. LLMs allow us to leapfrog over this process, delivering seemingly coherent answers without demanding our deliberation or accountability. As we become more dependent on these tools, we find ourselves validating outputs rather than working through questions.

There’s also the phenomenon of automation bias: the propensity for humans to favor suggestions from automated decision-making systems even in the face of contradictory evidence from human sources. LLMs put us face to face with automation bias multiple times a day. Repeated exposure erodes our cognitive confidence, until trusting the machine feels safer than trusting ourselves.

Over time, authorship erodes. When ideas, language, and structure are generated externally, expression becomes less reflective and more derivative. Learning follows the same path. Knowledge is retrieved rather than worked through. Without effort, our understanding of the world becomes brittle, and without process, wisdom cannot form.
This is where algorithmic determinism turns inward. The more we rely on these systems, the less we practice critical thinking, which in turn makes us all the more dependent on these systems. We stop experiencing ourselves as the authors of our own thoughts and begin to distrust our ideas when they contradict machine-generated outputs.
This is how algorithmic determinism completes its circuit. Not by forcing decisions upon us, but by making it easier to stop making them at all. When judgment is outsourced, agency is not taken. It is given away.
In his works The Burnout Society and Psychopolitics, philosopher Byung-Chul Han asserts that as we internalize the logic of neoliberalism (laissez-faire market capitalism), we are “becoming entrepreneurs of our own lives”, seeing ourselves as “projects” rather than “people”. He laments the frantic, restless momentum of contemporary life as it leaves virtually no room for stillness.
For Han, it is in rest, ease and leisure that creativity springs forth, and new possibilities become visible. All of this self-optimization, this managerial pursuit of efficiency in our own lives, is a form of social control. He references Foucault, who described how in a surveillance society one is kept in line by the threat of being caught; in our society, we subjugate ourselves by constantly striving, grinding, pursuing improvement without ever meaningfully questioning what we are optimizing for. Rather than resist the system or demand change to pursue a more equitable society where we might all enjoy more leisure time, we instead focus on perfecting ourselves within it.
Put simply: when liberation is off the table, optimization takes its place.
It is within this context that we invite algorithms into our lives, ostensibly to give us an advantage in our pursuit of self-optimization. The body becomes a site of biometric data production, and our human experience is reduced to micro-metrics that can all be tweaked through consistent behavioural changes. Whether it’s our body mass index or our moods, our sleep or our sexual health, there’s an app for that. Algorithms decide which potential romantic partners are visible to us on dating apps; they decide who among our friends is worthy of visibility, and who fades entirely from view.
We literally stand up when our watches tell us to. We chastise ourselves when our watches say “check your rings” because we haven’t reached our fitness goals that day. Is this self-improvement, or is it domestication?

Couple this with automation bias, the slow demise of critical thinking, and the fact that we spend on average 2.5 hours daily on social platforms that dictate the boundaries of what is conceivable and what is invisible, all while atrophying our drive to discover, and we come to realize that these algorithms aren’t helping us achieve our goals; they are training us to be subservient. We are conforming to their logic, not the reverse.
As these systems keep us accountable to our goals, we become dependent on them as a source of external validation. And while we might derive some short-term benefit from using them, in an economy like ours their primary accountability is not to us, but to their shareholders. Their overarching goal is not to serve us, but to keep us engaged. So there is always more growth, more hacks, bigger lifts, more complex routines. We are never done with these apps, we are never enough, and we become incapable of simply living our lives, confident and grateful for who we are.
Add to this the fact that as AI tools become increasingly adept at pantomiming human interaction, they are displacing a class of allied health professionals (therapists, trainers, dieticians) whose primary accountability was to their patients. These professionals were trained to be compassionate, and to adapt their guidelines to fit the lives of their patients. Moreover, they are part of our communities. When we use these tools we bypass human interaction, driving ourselves deeper into social isolation, which is itself a kind of domestication. We are, after all, easier to control when we are alone and our primary gateway to other people is a platform.
Philosopher Mark Fisher wrote extensively about the “slow cancellation of the future”: the sense that at some point after the proverbial end of history in the 90s, meaningful progress seemed to grind to a halt as neoliberalism (the belief that every aspect of life and nature is a commodity to be monetized in markets) became the dominant ideology in society, with governments becoming protectors of private wealth rather than stewards of the public good.
For Fisher, this was most visible in culture, where ever-quickening cycles of retro, remix and reboot displaced genuine innovation. The shift is also evident in technology, where clean energy went from being an inevitability to an impossibility, or more visibly, where each generation of iPhone is not meaningfully different from the last. In spite of increasingly dense computational power, standards of living are not improving, lifespans are flatlining, quality-of-life indicators are decreasing and energy consumption continues to rise. The solar system remains out of reach, cancer still kills people and Keynes’ 15-hour work week has not come to pass, with people working more hours per week than ever before.
For Fisher, this all stems from the failure of imagination inherent to neoliberalism, which is incapable of conceiving of anything but the accumulation of private wealth. As a result, we find ourselves stuck in a perpetual present, with no sense that a better tomorrow is possible. We come to believe that now is all there is, all there ever was and all that ever will be.
Platforms and predictive systems hyper-accelerate this sense of presentism. How? It is fundamental to their structures. These systems learn exclusively from the past. They draw on historical, correlative data to anticipate what is likely to happen next, and they are engineered to steer behaviour toward outcomes that can be predicted and repeated.
On its own, this might not seem especially significant. But when systems are designed both to predict behaviour and to shape the conditions under which behaviour occurs, the future stops being open-ended. It becomes derivative. What has already happened is treated as the most reliable guide to what should happen again, while deviation, novelty, and radical experimentation are increasingly filtered out as inefficiency, risk, or noise.
At sufficient scale, this produces a closed temporal loop. The system does not need to dictate a single outcome to function deterministically. It only needs to ensure that the range of possible futures remains narrow, predictable, and aligned with what has already been observed. The future is no longer something we move toward through imagination or collective choice. It is continuously derived from the past, optimized in advance, and quietly foreclosed. All of the mechanisms we described above are past-bound. And the more dependent we become on them, the more we are anchoring ourselves to a closed-loop version of ourselves.
The data that algorithmic platforms use to infer our future actions represents only the thinnest sliver of human experience. It stretches back no more than 15 years, and in most cases far fewer. That sliver doesn’t come close to capturing the richness of human experience, not least because it accounts for nothing that can’t be quantified. Our emotional, sensory and social experiences are vastly more complex and nuanced than anything that can be reduced to metrics.
LLMs meanwhile, are trained on vast amounts of data, but of a very specific kind, which they constantly recycle. Without heavy and deliberate prompting, they generate slop that is polished to take the shape of an insight but is little more than a string of words that simulates meaning, without inherently possessing it.
And so, the more we use these tools to plan for and envision our futures, the more we are anchoring ourselves to a very limited, constrained, predictable version of ourselves.
Thank you for reading this far. This is the opening salvo in a series I will be writing exploring algorithmic determinism and how we can resist it. The first step is being aware that it is happening.
I hope, after reading this, you can see how these systems control what we see and how we behave, while atrophying our capacities to critique, resist or challenge the burgeoning dominance of big tech in every aspect of our lives. These systems want to define what matters and what doesn’t. They want us to believe that real life can only be encountered on their platforms.
Now you know. Now you can see it. That’s the first step. The next is recognizing what spaces cannot be controlled. Those are the ones that are not easily monetized. They are offline. They are third spaces. They are conversations with friends in real life, in real time. They are encounters with nature. They are relationships. These are places where we can reconnect with ourselves as free people: uncontrolled, unobserved, unoptimized.
Have a difficult communications problem? Reach out, we’re always happy to chat.