Be it through the visions and implementations of big corporations or the passion of individuals, the reality projected by software has taken deep root in the daily lives of most people, sometimes inconspicuously. Software shapes the way we see and act in the world.
Platforms - the making of a shared reality
The algorithm - the good, the bad, and the biased
Looking at the online landscape, some landmarks are unmistakable. What used to be a highly decentralized place has accreted around a few giants, but instead of pearls we're left with platforms. Instead of wild fields of weeds and flowers we have the sanitized agricultural landscape of uniformity. I think a lot of us will recognize our own online activity in these analyses of web traffic share:
We're talking of course about YouTube, the modern-family children of Meta, TikTok, etc. Big corporations' occupation of the land-and-mind territory. Within each of these walled gardens are debates around the algorithm: it is good, bad, biased; it rewards interactions, short content, long content, certain tags, and a lot of other things. The name always carries a halo, an aura of omnipotence - the algorithm rules over the platform's content and its consumers.
There it is: content. For the platform is nothing without it, nor is the content anything without the platform. A weird symbiotic relationship sets in, in which the platform invites content to colonize it. An imbalanced relationship, in which the platform has the power to crush and replace its symbiote whenever it feels like it. Stripped of the flair, a platform is nothing but private, for-profit infrastructure trying to foster an economy. The algorithm is both carrot and stick: it signals and shapes what is and what is not desired as content.
A diluted reality
These platforms and their algorithms compose an overwhelming share of our collective digital experience and drive the construction of many individual realities.
The goal, however, is not to lift people up and foster their development. It is to get users and keep them forever. A good way to do so is to identify content that's as broadly palatable as possible and push it on as many people as possible. Economy of scale applied to our cultural nutrition. If YouTube wants us to watch as many videos as possible to serve as many ads as possible, it has to recommend stuff that will get that one more click - even after we've watched the same mind-numbing ad twenty times already (for those who have not yet moved to browsers that still support ad-blocking). Recommendation systems are incentivized to steer users into the blandest, most average content available. There's no shortage of it. Instead of building up its users like a good mentor would, curating its pupils' education, the algorithm herds us back down to the average.
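To make that incentive concrete, here is a minimal sketch of an engagement-maximizing ranker. Everything in it is hypothetical - the catalog, the audience segments, the appeal numbers - and real recommenders are vastly more complex, but the arithmetic of the incentive is the same: summed over a whole audience, mild appeal to everyone beats deep appeal to a niche.

```typescript
// A toy engagement-maximizing ranker. All numbers and titles are made up.

interface Video {
  title: string;
  // Fraction of each audience segment likely to click (hypothetical values).
  appealBySegment: number[];
}

// Expected clicks across the whole audience for one video.
function expectedClicks(video: Video, segmentSizes: number[]): number {
  return video.appealBySegment.reduce(
    (sum, appeal, i) => sum + appeal * segmentSizes[i],
    0,
  );
}

const segmentSizes = [1000, 1000, 1000];

const catalog: Video[] = [
  // Deeply loved by one niche, ignored by everyone else: 920 expected clicks.
  { title: "3h lecture on phenomenology", appealBySegment: [0.9, 0.01, 0.01] },
  // Mildly palatable to everyone: 1200 expected clicks. Blandness wins.
  { title: "Top 10 oddly satisfying moments", appealBySegment: [0.4, 0.4, 0.4] },
];

const ranked = [...catalog].sort(
  (a, b) => expectedClicks(b, segmentSizes) - expectedClicks(a, segmentSizes),
);
console.log(ranked.map((v) => v.title));
// => ["Top 10 oddly satisfying moments", "3h lecture on phenomenology"]
```

No malice is required anywhere in those few lines; optimizing the average is enough to bury the niche.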
The other idea is to keep the user within the platform (caution, might induce nausea), removing any opportunity for people to seep through unkempt hedges into other walled gardens, or worse, an open one. What is the creator economy if not a remake of feudalism, applied to what humans do best only when free of judgement and incentives?
The point I want to make is that none of the software tools we use are neutral artifacts. Some are consciously designed to limit the need for users to think; some wear the ideology of their creator, willingly or not. Whether we're aware of it or not, we are subjected to their influence.
It would be very surprising that this level of cognitive normalisation, replacing or mediating billions of idiosyncratic explorations, did not have a deleterious impact on collective cognition. - Robin Berjon
A post-truth and post-realism reality
It’s hard to tell precisely when we stopped having lives and started, in earnest, to have stories. Perhaps since the story became a major form of currency
Laurence Scott - Picnic Comma Lightning
Laurence's book speaks of lives lived and perceived as stories, of collected data being woven into narratives, of a world where “fake news” and “alternative facts” are coexisting concepts. If we think about it that way, the very trendy “Data Analyst” job might be renamed “Story-teller” without losing its meaning. The making of a narrative that describes our continuous world from discrete sets of data is bound to require blank-filling, and that blank-filling will bear the narrator's own biases and preconceptions.
Another major point of the book is that our relationship to the digital world has propelled us into a post-realism world. As an increasing proportion of our media consumption, communication, artistic production - our lives, really - happens online, we learn to be human through these experiences more than we draw from everyday interactions with a physical world and fellow human beings trying to make sense of their lives. Laurence calls post-realism this life where content that was carefully thought out and crafted as an entertaining - or worse, an engaging - experience becomes the model on which we learn to be people. The artificial starts bleeding into the real as we learn how to handle relationships from TV shows and reality TV, or worse, where people feel relieved to be able to follow and identify with gits who are “telling it like it is”. We can learn how to manage work-life balance from hustle-bros. Understand the current political landscape and the requirements of modern life from the heavily summarized thoughts of a few techno-plutocrats. Are we going to finally forget to water the plants of our own relationships to tend to garden hedges?
In essence, life imitates art becomes life imitates content, but the mindset that produces content is not the one that creates art.
As we feed on that derivative material, we internalize not only the content that is consumed, but also the modality through which it is delivered.
Software as a reality filter
Software: the encoding of that which appears, filtered through historical, cultural, and social context
Through this overview of the rather well-known content-platform-advertising trifecta, I hope I have convinced you of something that runs deeper: the reality-bending capability of platforms does not come from content only, but also from the form itself - software.
Phenomenology studies reality as the experience of “that which appears” filtered through historical, cultural, and social context. I propose we define software as “the encoding of that which appears, filtered through historical, cultural, and social context”.
Software is made of models conceived by humans. By definition those models don't grasp the full range of complexity that is reality, and it is just as impossible for them to capture a universal understanding of it. The embedding of software in our daily lives gives many other opportunities for reality-bending, bias-reinforcing, behavior-inducing details. These reality-gaps can be carefully studied and introduced to favor addictive behavior, or introduced through careless generalizations and optimizations.
Here are a couple of examples, in no particular order, with which we'll get a bit closer to the point I am verbosely trying to make: software engineers are bias-programmers because they are biased programmers.
Infinite lists
From convenient UX to soothing mindlessness. This one needs little explanation. It is probably one of the most common convenience trade-offs of modern UIs. It is easy to use and to reason about. It is also a good way to have your brain forget that it should stop at some point.
How desperate do we have to be to click through to the second page of Google's search results? How neat to have that stopper trigger us into refining our search rather than plodding through. Near-invisible, infinite lists are a staple of attention-seeking platforms.
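The pattern is also disarmingly cheap to build, which is part of why it is everywhere. A minimal sketch, assuming hypothetical fetchNextPage and renderItems application functions: a sentinel element sits at the bottom of the feed and silently loads more whenever it scrolls into view.

```typescript
// A minimal infinite list. fetchNextPage and renderItems are hypothetical
// application functions; IntersectionObserver is a standard browser API.

declare function fetchNextPage(): Promise<string[]>;
declare function renderItems(items: string[], into: HTMLElement): void;

function setUpInfiniteScroll(feed: HTMLElement, sentinel: HTMLElement): void {
  const observer = new IntersectionObserver(async (entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      // No "page 2" link, no end marker: the stopping cue is gone by design.
      renderItems(await fetchNextPage(), feed);
    }
  });
  observer.observe(sentinel);
}
```

A dozen lines, and the list never ends.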
Turn left after 200m to arrive at McDonald’s
I've just outed myself as a filthy metric system user, an artifact of my own historical, cultural, and social contexts.
GPS applications are a massively convenient piece of technology. But when they double as recommender systems, things get blurry. How does Google Maps choose which businesses to display when we're scrolling around a destination point? Is there a preference for certain shops or restaurants that are more likely to cater to a broad audience? Is it bound to visitors' reviews, and are any of them real? Is it tied to some obscure Google search we made a while ago and completely forgot about? I'm not trying to push a conspiracy theory - my point is that being presented with that kind of content influences the way we perceive a place before we're even there. We then carry that preconception with us if we ever actually visit. I have no backing research or numbers, so let me pull this claim out of nowhere: I'm pretty sure we're more likely to notice places in real life after having read their name or seen their store-front in an application, GPS or other.
More transparently, Waze straight up advertised McDonald's on its map application.
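To illustrate what that kind of opacity can look like, here is a purely hypothetical scoring function for which businesses a map might surface. None of this reflects Google Maps' or Waze's actual code; every weight below is invented. The point is that a handful of knobs users never get to see or question decides what we see of a place.

```typescript
// A purely hypothetical ranking of map results. The weights are invented;
// they stand in for the knobs users never get to inspect.

interface Place {
  name: string;
  distanceKm: number; // distance from the viewed location
  avgRating: number;  // crowd-sourced reviews, real or not
  adBoost: number;    // 0 for organic results, 1 for paying advertisers
}

function visibilityScore(p: Place): number {
  const proximity = 1 / (1 + p.distanceKm); // closer is better
  const reputation = p.avgRating / 5;
  return 0.5 * proximity + 0.3 * reputation + 0.2 * p.adBoost;
}

const places: Place[] = [
  { name: "Family-run bistro", distanceKm: 0.2, avgRating: 4.8, adBoost: 0 },
  { name: "Global burger chain", distanceKm: 0.5, avgRating: 3.9, adBoost: 1 },
];

// The paid boost flips the ranking despite worse reviews:
// bistro ~0.70, burger chain ~0.77 (only ~0.57 without the boost).
[...places]
  .sort((a, b) => visibilityScore(b) - visibilityScore(a))
  .forEach((p) => console.log(p.name, visibilityScore(p).toFixed(2)));
```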
AI and LLMs
The introduction of AI in software engineering will do nothing to alleviate the data sampling and interpretation issue, as current state-of-the-art LLMs are essentially bias-machines trained on unfathomably large amounts of biased data. Are the biases going to cancel out? Which of them will remain? It is hard to forget the half-terrifying, half-hilarious demise of Tay, Microsoft's racist chatbot. LLMs also had their moment when they generated racially diverse Nazis.
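The mechanism is easy to feel with a toy: a tiny "model" that only counts which word follows which in its training corpus. Real LLMs are incomparably more sophisticated, but they too can only echo the statistics of whatever data they were fed - skew the corpus and the model faithfully reproduces the skew. The corpus and sentences below are made up for illustration.

```typescript
// A toy next-word "model": bigram counts over a tiny, deliberately
// skewed corpus.

function train(corpus: string[]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (const sentence of corpus) {
    const words = sentence.toLowerCase().split(/\s+/);
    for (let i = 0; i < words.length - 1; i++) {
      const next = counts.get(words[i]) ?? new Map<string, number>();
      next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
      counts.set(words[i], next);
    }
  }
  return counts;
}

function mostLikelyNext(model: Map<string, Map<string, number>>, word: string): string {
  const followers = model.get(word.toLowerCase());
  if (!followers) return "?";
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// Two out of three sentences pair "said" with "he".
const corpus = [
  "the engineer said he would fix it",
  "the engineer said he was busy",
  "the engineer said she would help",
];

console.log(mostLikelyNext(train(corpus), "said")); // => "he"
```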
I won't dive too deep into generative AI as a technology itself; a lot of other people have already done so in many brilliant ways. Notably:
I mainly want to mention another layer of the generative AI conundrum: inevitably, LLM-generated software is going to be a new wave of human-reality interfaces, with a deep and inscrutable embedding of our collective biases.
A last thought: trust issues
Be it a book, a newspaper, a scientific article, or a YouTube video, information and learning call for careful examination, cross-checking, and challenging to be meaningful. This requires a tremendous effort. Seeing the list of cited works at the end of a well-sourced book is reassuring, but it also makes it clear that reading all that material would take a time investment that might be impossible, or uncalled for. So we trust. We trust an author, their list of sources, their outlet, the general reputation of the newspaper we are reading.
Digital media and the volume of information they deliver call for immediacy: moving on to the next video, article, link. As we went from the age of information to the age of content, keeping up with the sheer amount of produced material leaves us with less time to reflect on what is consumed, less time for critical thinking, less drive to exert a conscious internal restructuring of the material. We forego the all-important agency part of being an active agent in the information flow. Do we accept that information as true? Is it relevant at all? Should we share and spread it? Oh! Someone's calling, an email arrived, the Teams meeting started. That information will be trivia for a future conversation.
When reasoning about the substrate on which all that information is presented or used, we need to trust even harder. There is little to no transparency on who built a program or application, for what purpose, and based on which assumptions. Sometimes we feel the inadequacy: the UI is all wonky, it lacks the tool we need to make it work for our own needs, it is buggy or blatantly misrepresents our reality. But when everything works smoothly, how can we know whether a piece of software deserves the attention and trust we credit it with?
Beyond the dodgy terms of service and privacy notices, there is a lot of room for biases to be imposed on us - and sometimes accepting them is beyond our control.
Some nice readings
If you want to read about this stuff without suffering my own biases and shortcomings.
https://berjon.com/stewardship/ - just do yourself a favor and binge-read the whole blog