Learning from boring things

A case for flagging down the unremarkable

Technology is a word that lives on the bleeding edge, and images of fire, language, or ballpoint pens don’t exactly come to mind when we use it now.

Perhaps I'm a pessimist in some sense, but the things we bucket into “technology” today seem to earn the title because they are not quite right yet, whether due to insufficiency in practical application or a lack of the scale required to lay the industry's opinions and uncertainties to rest. The best stuff we’ve built routinely disappears from the surrounding conversation, graduating from the dynamism of it all on some side of cause and effect. Overwhelmingly, it tends to remind us less of technology than of our own humanity.


An old Times article on the humble beginnings of the cell phone dates the first call made from a hand-held, wireless phone to 1973, placed by Martin Cooper, a Motorola executive. Now over ninety years old, Cooper recollects that first call from a perspective few people today can relate to:

Remember, this was the first public call ever made and I only cared about one thing: Was the phone going to work? This thing was a handmade prototype — thousands of parts carefully wired together by an engineer, not a production guy — and there were only two in existence.

When we hit the green call icon now, we rarely wonder whether our phones will work. Our thoughts sit a discrete step ahead of the call connecting; we are focused on the subject of the call before it is even placed. This shift in how we think about the cell phone is quite powerful, however natural it has become, because it suggests our relationship with our phones has become invisible. The cell phone is no longer just a tool; it is a modern human faculty, and very likely one that future generations will continue to grow up expecting to incorporate.

After his call, Cooper worried his creation would prove too much of a distraction for people to handle, and made sure to advertise that these devices would come with “off switches.” Now, there is little talk about whether the fundamental ability to call another person is the curse of distraction or the symbol of social status early adopters of the cell phone once thought it would become. Our phones might be silenced, but, from the moment we first get our hands on them, we never really “turn off” our ability to make phone calls.


All this said, I believe envisioning what our solutions would look like when they are not merely useful but so well integrated into our lives that they are hard to notice is an underutilized technique, one that lets us treat technology as a relationship rather than a point of focus. It pushes us to interrogate the methodologies we use and the design decisions we make from a speculative perspective, one I find has remarkable power to teach us valuable things not only about the solutions we create, but about the roles they could actually play in our day-to-day lives.

Go, Fight, Win.

Playing the devil’s advocate against the value of failure in a world far too full of it

For the last couple of decades, messaging surrounding the value of entrepreneurial failure has provided insurance for innovators to pursue risky ideas with little reservation. A ridiculous amount of lore around failure in modern academia and entrepreneurship has precipitated from the fallout, with some authors going so far as to describe success as a story entirely contingent on failure. Before anyone could really second-guess the whole thing, we had surrounded ourselves with a great deal of literature reifying failure as a surprisingly desirable outcome.

Why has failure become so popular? For me, three reasons immediately come to mind:

  1. Educational value: It’s rational to believe humans substantively improve after failure. We’re logical beings, so, more often than not, we tend to avoid making the same mistakes more than once.

  2. Emotional value: There is a certain gratification in conducting postmortem analyses. Detailing the when, how, and why of our failures helps us feel more confident in tackling future problems.

  3. Cultural value: Narratives of redemption are uniquely inspiring. We remember successful individuals who capitalized on lessons from past failures because these stories can be relatable and romantic.

While there are probably some arguments I’m missing here, these reasons alone suggest failures can make for some incredibly good writing. Try hard enough, and in nearly every failure one can find a compelling story to tell, an interested audience to sell it to, and a nominal amount of education to provide other people with.

Despite the quality of the stories that tend to precipitate from failure, I believe much of our obsession with extracting value out of failure is flawed. To me, the belief that we are able to “learn from failure” presumes that the world in which a failure occurs remains consistent across repeat encounters with a specific problem, a large assumption that doesn’t necessarily hold true today. As a result, I think we’re much worse at tracing back and root-causing failures than we believe we are, an effect Peter Thiel refers to as “over-determination” in an old podcast with Tim Ferriss:

 “Most businesses fail for more than one reason. So when a business fails, you often don’t learn anything at all because the failure was over-determined.

“You will think it failed for Reason 1, but it failed for Reasons 1 through 5. And so the next business you start will fail for Reason 2, and then for 3 and so on.”

Although this critique clearly doesn’t hold true in some highly defined, rule-based systems, failure elsewhere does present quite an unpredictable enemy, and it is exactly because we lack all the information required to eliminate its possibility that we begin hallucinating in our pursuits to find quick and simple answers to hard, multitudinous questions.

Ultimately, the issues above stem from the fact that the problems we fail at now tend to be too complex for much useful reflection, however attractive such reflection may seem when this is the case. While many reflections on failure are nuanced and probably do carry legitimate lessons, we rarely find ourselves explicitly citing lessons derived from past failure as critical reasons for later success.


Even if there is value to be had from reflecting upon failure, it feels to me massively overrated, at least relative to the value of success. I believe a large part of this effect can simply be attributed to a disparity in empathy, especially when a failure seems inadvertent and plays out publicly. We’ve all made mistakes before, so more often than not we hope for others the recovery we once hoped for ourselves. By contrast, there’s a celebration in every success, and the occasion calls for surprisingly little further commentary. Beyond just this “win”, however, success is also a fundamentally more productive experience. Jason Fried puts this quite bluntly in Rework:

Another common misconception: You need to learn from your mistakes. What do you really learn from mistakes? You might learn what not to do again, but how valuable is that? You still don’t know what you should do next.

Contrast that with learning from your successes. Success gives you real ammunition. When something succeeds you know what worked — and you can do it again. And the next time, you’ll probably do it even better.

A study on performance persistence that Rework later cites substantiates this. After reviewing thousands of entrepreneurs’ track records*, researchers at Harvard Business School concluded that while already-successful entrepreneurs are more likely to go on to found successful businesses, entrepreneurs whose first companies failed are no more likely to do better in future ventures than entirely new entrepreneurs. These results suggest that (with respect to the problem of starting a new venture) failure had little to no effect on future success, while the experience of initial success did. Other, more recent studies have arrived at similar conclusions, with one study of nearly ten thousand German venture-backed companies even identifying entrepreneurs who have experienced failure as more likely to fail in the future.

*: Only venture-backed companies were surveyed, and success in the study is defined as going public or filing for a public offering. Some argue that conclusions drawn exclusively from venture-backed companies may not be representative of the entire entrepreneurial population.

Perhaps the greatest issue with reporting failure as a predictor of success for new ventures is not merely its factual inaccuracy, but the way it encourages the evaluation of business prospects to be rooted in an artificial narrative rather than in people and ideas. For the aspiring entrepreneur, I believe this framing is dangerous. On the scale of individual actors, mantras like “fail early, and fail often” encourage innovation without prudence. They lower the critical thinking required to commit time, skill, and capital to building a product, in a mechanical, brute-force search for good ideas. When such expenditures prove fruitless, as they often do, the same mantras rehabilitate the people involved, enabling them to swiftly return to the culture of “reckless failure” they once embraced. Within this circular, self-fulfilling prophecy, our very definition of failure gradually morphs, turning prior failures into justification for future success and present failures into “necessary costs” they never were. I think we really need to escape this cycle, however purposeful or healthy the concept of a trivial failure appears. Instead, we should despise failure and study our successes, doing whatever we can (within our power and aligned with our beliefs) to avoid “learning the hard way”, lest we end up with a broken set of incentives for putting thought before action.

When a failure does demand retrospection, I strongly recommend scoping the exact failure being addressed as narrowly as possible. Addressing failures from the most logically constrained perspective possible is the most productive way to develop postmortems that could prove useful in the future, while contextualizing failures by exaggerating their nature and the stories behind them leads to aporetic chaos, potentially even causing further failure.


The phrase “Go, Fight, Win” was something an old debate coach would tell my team shortly before we entered competitions against teams we had a particularly slim chance of beating. Regardless of the severity of our losses (of which there were many), or the gap in experience between other teams and ourselves, this phrase, simple as it may be, made crystal clear to us that we weren’t just at the tournament to gain experience; we were there to win. In forgetting about the possibility of failure, we immersed ourselves more fully in our efforts to succeed, and, when it counted most, doing so made all the difference. It focused our minds and placed our eyes on the prize, where they really belonged.

Fire in the Dark Forest

Why the "more public, more problems" attitude is killing the internet

As an avid fan of the Three-Body Problem trilogy, I found Yancey Strickler’s Dark Forest Theory of the Internet surprisingly apt. For those who haven’t read the series yet, dark forest theory is an analogy made in the second book comparing interplanetary relations to a literal “dark forest.” Despite the forest being filled with life, no animal makes a sound, for fear of being hunted down by predators lurking freely in the darkness. In the time it would take an animal to cry out for peace, it could also be killed, so silence reigns.

Strickler argues the internet has become like such a forest, and points to events surrounding the 2016 election as a pivotal moment for public expression:

“The internet of today is a battleground. The idealism of the ’90s web is gone. The web 2.0 utopia — where we all lived in rounded filter bubbles of happiness — ended with the 2016 Presidential election when we learned that the tools we thought were only life-giving could be weaponized too. The public and semi-public spaces we created to develop our identities, cultivate communities, and gain knowledge were overtaken by forces using them to gain power of various kinds (market, political, social, and so on).”

Whether or not the election marks an effective demarcation between internet eras, it’s a near certainty that the reckoning with disinformation has transformed the internet as a medium of communication for the worse. There’s an instinctual second-guessing of the content we publish online now that has substantially raised the activation energy required to put legitimate, high-signal content out there, and I believe this has made us all a little more silent, and a little more unwilling to speak out in, and shed light on, our once peaceful forest of ones and zeroes.

Creating illumination in today’s dark forest invites a level of skepticism, offense, and public scrutiny many independent authors may never be prepared for. To shield against this, journalists have adopted a style of defensive writing common in academic literature, anticipating counter-arguments and rebuttals at every turn. As a result, the presentation of the arguments that do get published is smothered by preparation for an inevitable and overwhelming opposition, while our true reservations are reserved for those we can trust to pull punches. Steadily, we are entering an age of bicameralism in public literature, where the fear of veritable backlash forces the apologist and the revolutionary within each of us to manifest in tandem.


In recent years, I believe people have begun to select their audiences more carefully, opting instead for “intranet” solutions that allow selective exposure to opinions and content. A steady increase in the usage of and reliance on subscription-based newsletters, group chats, and private social networks evidences this, with user counts for messaging applications having long topped those of popular social networks.

(Figure from Visual Capitalist. Numbers of reported monthly active users in millions.)

Perhaps the most dangerous side effect of this trend is the decentralization of information that accompanies communicating with fewer people. As smaller and smaller groups and communities precipitate out of the internet proper, the allure of echo chambers and perfect worlds grows, influencing and informing our lives on levels we may not even be fully conscious of.

Describing the surprising permeation of group chats into his lifestyle, Max Read writes in Intelligencer:

As feeds grew hostile, though, the rise of the smartphone, with its full-screen keyboard and its array of free messaging options, gave us a new, context-specific, decentralized social network: the group chat. Over the last few years, I and most of the people I know have slowly attempted to extricate our social lives from Facebook. Now it’s the group chat that structures and enables my social life. I learn personal news about friends from group chats more often than I do on Facebook; I see more photos of my friends through group chats than I do on Instagram; I have better and less self-conscious conversations in group chats than I do on Twitter.

In a follow-up to his original post, Strickler pinpoints two particular issues that accompany transitions like Read's:

  1. Departing from public spaces creates more room for malicious actors to gain visibility and wreak havoc at scale

  2. The selection of information that follows isolation is arguably more harmful than confronting toxicity and disinformation

While these threats to our internet's well-being are legitimate, is there anything we can really do about them? Nobody wants to deal with the toxicity that could accompany most meaningful public posts, and it’s unreasonable to expect casual modes of communication to cover every base and publish academic-quality writing, so what options remain?

Strickler suggests “relearning” our online presence as his solution for breaking back into public spaces. The approach he posits uses a brute-force commitment to content creation as a tool to minimize the barrier between our true selves and our online personas. While this did end up working out for him, I personally believe many of us would find the approach quite difficult to replicate. Driven by what he viewed as a societal obligation, Strickler rigorously habitualized public engagement with a discipline few have, and fewer still are willing to commit to the world of social networking.


I believe much of the stigma associated with interaction over public channels stems from recent increases in polarization around controversial issues and in unproductive disagreement. Public communication of any kind now, more than ever, carries some intrinsic risk of personal unhappiness and damage to one’s reputation, risk that is largely eliminated with a curated audience. At the end of the day, people just don’t like being wrong.

Mitigating this risk is non-trivial, as the problem is rooted in the human psyche, not in the way public spaces online are structured. While traditional community feedback mechanisms were designed to solve this problem by helping people self-correct the content they generate to achieve better community fit, research suggests that simple, voting-based feedback mechanisms actually tend to make matters worse. Instead of returning with content more likely to garner up-votes after making posts with high down-vote ratios, subjects in this study consistently returned with increasingly worse-performing content. Furthermore, these same subjects tended to return with more content than subjects who were positively rated, suggesting that down-voting posts accomplishes nearly the opposite of its intended effect.

Still, I believe the solution to revitalizing public communication can be found through experimentation with alternative feedback systems, as baked within them lie incentive schemes that appeal to the very same risk/reward mechanisms I think are responsible for our collective movement away from mainstream social networks. While I’m not sure what the optimal outcome of such a process (if one exists) would look like, I encourage networking platforms to be more creative in this endeavor and to test worlds in which feedback mechanisms are entirely non-existent, or limited in unique ways (length, features, etc.). It is through such an experimental process that I believe we may not merely shine light upon the dark forest our internet has become, but set it on fire, burning away the very obstacles within us behind which we hide from our changing world.
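To make that suggestion slightly more concrete, here is a minimal sketch, in Python, of how a platform might bucket users into alternative feedback regimes and keep each person's experience consistent while the results are compared. The policy names and fields below are purely hypothetical illustrations of the kinds of limits I describe (no votes, length-capped replies, no feedback at all), not any real platform's design or API.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical feedback-policy variants a platform might trial. The names and
# fields are illustrative assumptions, not drawn from any real platform.
@dataclass(frozen=True)
class FeedbackPolicy:
    name: str
    allow_votes: bool               # public up/down voting enabled?
    allow_replies: bool             # threaded replies enabled?
    max_reply_chars: Optional[int]  # None means replies are unlimited in length

VARIANTS = [
    FeedbackPolicy("control",       allow_votes=True,  allow_replies=True,  max_reply_chars=None),
    FeedbackPolicy("no_votes",      allow_votes=False, allow_replies=True,  max_reply_chars=None),
    FeedbackPolicy("short_replies", allow_votes=True,  allow_replies=True,  max_reply_chars=280),
    FeedbackPolicy("no_feedback",   allow_votes=False, allow_replies=False, max_reply_chars=None),
]

def assign_variant(user_id: str) -> FeedbackPolicy:
    """Deterministically bucket a user into one variant by hashing their id,
    so the same person always experiences the same feedback rules."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

if __name__ == "__main__":
    # Example: see which regime a few (hypothetical) users would land in.
    for uid in ("alice", "bob", "carol"):
        print(uid, "->", assign_variant(uid).name)
```

Hashing the user id keeps assignment deterministic, which matters if the goal is to observe long-run changes in what, and how often, people choose to post publicly under each regime.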

Simple answers, Strictly typed

What it really means to "break up big tech"

The collision of tech and politics voiced in Warren’s antitrust campaign masks an absurdly complicated task in four deceptively simple words: break up big tech. Her actual plan is twofold: to unwind prior mergers and to force the divestiture of large tech platforms. The moves Warren proposes are based on the notion that price is now an outmoded indicator of abuse, and that injustices in a connected world instead take the form of data collection, manipulation, and aggregation. So while politics “takes aim” at “big tech”, what we’re really doing is assessing the impact that various forms of data sharing and usage actually have on industry competition and individual consumer agency.

In many ways, I do believe data should be treated like, and accounted for as, capital, since the monetary value it holds for advertising agencies and other interested third parties is very real. Unlike capital, however, the value data can have for different actors varies wildly, and information that is completely useless in one application can prove invaluable elsewhere. To this end, I find the phrase “big tech” fairly misleading: it’s not tech’s size that is at issue here, but its interdependence and its connectivity. The issue with how we’re approaching it, however, is that we’re targeting its size anyway, when size and data are not actually all that functionally dependent. Tech can be small and wield obnoxious amounts of personal data (like many weather apps), and tech can be large and store no personal data at all. In current market conditions, tech is free to reap the best of both worlds, but is not constrained to hold positions in both.

I believe the “edge” data provides has been overestimated across the board: in tech, in pop culture, and in politics. Nobody has the right story, because baked within each perspective are incentives and narratives that justify actions on every side. Nobody is truly free from data. Technologists see in data hidden relationships they hope to translate directly into quick value adds and capital flow. Pop culture sees in the technologists’ use of data a plethora of doomsday scenarios, each “AI takeover” more abrupt than the last. Politicians see in the technologists’ use of data an opportunity to appeal to the masses by showing them the limits of their agency and painting the perils big tech poses to them. At the end of the day, overvaluing our personal data may let politics entwine our lives more deeply with its branding and let tech companies raise money faster at ever higher valuations, but what’s in it for us? A more secure sense of self? A schadenfreude high from a rebellion against the tech elite?


I’m not saying the antitrust probes into tech are baseless. While I believe we’ve exaggerated the impact of data moats on competition in the tech sector, partly due to a unique alignment of incentives between tech and politics to do so, tech giants armed with data do have substantive, unfair advantages over their competitors, and it’s probably a good time, if not a little late in the game, to revisit exactly how these corporations are cashing in on our information. While the legislative branch alone may be somewhat poorly equipped to correct the tech sector where it actually matters, there certainly exist plenty of domain experts, journalists, and third-party tech companies that can bring useful, insider insights to developing healthy data-usage policies and arbitration mechanisms. It is my hope that these groups rise to the occasion when the time comes, helping the law use not the sledgehammer, but the scalpel, when dealing with tech.

foci

A little bit of who I am and what this is really about.

Hi! I'm Sid, and if you're reading this, you’ve just landed on foci, my newsletter. Here's a little bit about me, and why I plan on writing content here:

I'm currently a student at Olin College, a very small engineering school near Boston that aims to reinvent the way engineering is taught elsewhere through a rigorous implementation of project-based education. At the college, I'm studying computer science, because I believe software can have particularly sizable downstream impacts on society. I have opinions on how beneficial, if at all, some of these impacts are (more on this in what may become a later post).

More specifically, I am interested in productivity. From the noise of the information era, certain tools, software or otherwise, seem to have emerged not as victors, but as part of the very fabric constituting the modern world, my world. Many of these tools have a singular purpose and accomplish it well. Together, they've revolutionized the way we collectively think and help us make important decisions. To me, improving and enabling people's interactions with computers is an important goal, both achievable and worth prioritizing. While it is all too easy to glorify the purpose and intent of software, both before and after the product or the dent it may make, this is something I really believe I could do to leave the world a little better off.

I have slowly been working toward this goal, albeit in small ways. I spearheaded automation engineering at Stanford's financial management services, reducing the hours they need to test their software with every new release. At Infinira Software, I researched expense reporting and found a neat way to apply clustering algorithms to white-on-white scans of multiple documents. As it turned out, no one had really looked into this before, so I documented the technique in a journal. A while later, at Dataxu, I analyzed the business value of a new software deployment technology and presented my findings to their developers.

The present is a fascinating time for me, as I am now working a couple of jobs in finance, a field I've recently taken a deep interest in. As I see things, there is a big difference in the type of value addition that accompanies software and finance, so bridging this gap has presented me with a unique opportunity to reexamine where I want to focus my efforts in the first place. One of the reasons for writing this newsletter is my belief that through documentation and reflection I will be better able to capitalize on the valuable time and experiences these fleeting, formative years have brought and will, for the near future, continue to bring me. In doing so, I also hope to provide my readers with content they may find insightful, engaging, or, at the very least, amusing.
