As an avid fan of the Three Body Problem trilogy, I found Yancey Strickler’s Dark Forest Theory of the Internet surprisingly apt. For those who haven’t read the series yet, the Dark Forest theory is an analogy made in the second book that compares interplanetary relations to life in a literal “dark forest.” Though the forest is filled with life, no animal makes a sound, for fear of being hunted down by the predators that lurk freely in the darkness. In the time it would take an animal to cry out for peace, it could just as easily be killed, so silence reigns.
Strickler argues the internet has become like such a forest, and points to events surrounding the 2016 election as a pivotal moment for public expression:
“The internet of today is a battleground. The idealism of the ’90s web is gone. The web 2.0 utopia — where we all lived in rounded filter bubbles of happiness — ended with the 2016 Presidential election when we learned that the tools we thought were only life-giving could be weaponized too. The public and semi-public spaces we created to develop our identities, cultivate communities, and gain knowledge were overtaken by forces using them to gain power of various kinds (market, political, social, and so on).”
Whether or not the election marks an effective demarcation between internet eras, it is nearly certain that the public’s awakening to disinformation has changed the internet, as a medium of communication, for the worse. There’s an instinctual second-guessing of the content we publish online now, one that has substantially raised the activation energy required to put legitimate, high-signal content out there. I believe this has made us all a little more silent, and a little less willing to speak out in, and shed light on, our once peaceful forest of ones and zeroes.
Creating illumination in today’s dark forest invites a level of skepticism, offense, and public scrutiny that many independent authors may never be prepared for. To shield against this, journalists have adopted a style of defensive writing common in academic literature, anticipating counter-arguments and rebuttals at every turn. As a result, the presentation of publicly published arguments is smothered by preparation for an inevitable and overwhelming opposition, while our true reservations remain reserved for those we can trust to pull punches. Steadily, we are entering an age of bicameralism in public literature, where the fear of veritable backlash forces the apologist and the revolutionary within each of us to manifest in tandem.
In recent years, I believe people have begun to select their audiences more carefully, opting instead for “intranet” solutions that allow for selective exposure to opinions and content. A steady increase in the usage of, and reliance on, subscription-based newsletters, group chats, and private social networks evidences this, with user counts for messaging applications having long since surpassed those of popular social networks.
(Figure from Visual Capitalist. Numbers of reported monthly active users in millions.)
Perhaps the most dangerous side effect of this trend is the decentralization of information that accompanies a shift toward communicating with fewer people. As smaller and smaller groups and communities precipitate out of the internet proper, the allure of echo chambers and perfect worlds becomes quite enticing, and can influence and inform our lives on levels we may not even be fully conscious of.
Describing the surprising permeation of group chats into his lifestyle, Max Read writes in Intelligencer:
As feeds grew hostile, though, the rise of the smartphone, with its full-screen keyboard and its array of free messaging options, gave us a new, context-specific, decentralized social network: the group chat. Over the last few years, I and most of the people I know have slowly attempted to extricate our social lives from Facebook. Now it’s the group chat that structures and enables my social life. I learn personal news about friends from group chats more often than I do on Facebook; I see more photos of my friends through group chats than I do on Instagram; I have better and less self-conscious conversations in group chats than I do on Twitter.
In a follow-up to his original post, Strickler pinpoints two particular issues that accompany transitions like Read’s:

1. Departing from public spaces creates more room for malicious actors to gain visibility and wreak havoc at scale.

2. The selective consumption of information that follows isolation is arguably more harmful than confronting toxicity and disinformation head-on.
While these threats to our internet’s well-being are legitimate, is there anything we can really do about them? Nobody wants to deal with the toxicity that could accompany most meaningful public posts, and it’s unreasonable to expect casual modes of communication to cover every base and meet the standards of academic-quality writing, so what options remain?
Strickler suggests “relearning” our online presence as his solution for breaking back into public spaces. The approach he posits uses brute-force content creation as a tool to minimize the barrier between our true selves and our online personas. While this did end up working for him, I believe many of us would find his approach quite difficult to replicate. Driven by what he viewed as a societal obligation, Strickler rigorously habitualized public engagement with a discipline few have, and fewer still are willing to commit to the world of social networking.
I believe much of the stigma associated with interaction over public channels stems in particular from recent increases in polarization around controversial issues and in unproductive disagreement. Now more than ever, public communication of any kind carries some intrinsic risk of personal unhappiness and reputational damage, a risk that is largely eliminated with a curated audience. At the end of the day, people just don’t like being wrong.
Mitigating this risk is non-trivial, as the problem is rooted in the human psyche, not in the way public spaces online are structured. Traditional community feedback mechanisms were designed to address this by helping people self-correct the content they generate to achieve better community fit, but research suggests that simple, voting-based feedback mechanisms actually tend to make matters worse. Rather than returning with content more likely to garner up-votes after making heavily down-voted posts, subjects in this study consistently returned with content that performed even worse. Furthermore, these same subjects tended to return with more content than subjects who were positively rated, suggesting that down-voting accomplishes nearly the opposite of its intended effect.
Still, I believe the solution to revitalizing public communication can be found through experimentation with alternative feedback systems, as baked within them lie incentive schemes that appeal to the very same risk/reward mechanisms I think are responsible for our collective movement away from mainstream social networks. While I’m not sure what the optimal outcome of such a process would look like (if one even exists), I encourage networking platforms to be more creative in this endeavor, and to test worlds in which feedback mechanisms are entirely non-existent, or limited in unique ways (length, features, etc.). It is through such an experimental process that I believe we may not merely shine light upon the dark forest our internet has become, but set it on fire, burning away the very obstacles within us behind which we hide from our changing world.