Fear of Opportunity Cost

The fear of opportunity cost (FOOC) ironically costs the most. Hesitating to commit to X out of fear of missing out on Y makes decision paralysis inevitable. The constant weighing of options and potential outcomes keeps us from committing to any single path, and trying to optimize for the most “efficient” future trajectory results in overanalyzing everything, accomplishing nothing, and never taking action. Of course, given infinite time and resources this wouldn’t be a problem: we could search the entire space of options and then choose the best course of action. The major bottleneck is that our reserves are tightly constrained, yet we still need to make sure they’re spent on something worthwhile. And that’s a scary thought. Sometimes, the anticipation of working on the “wrong” things can feel worse than not working on anything at all. And for some people, trying and failing feels worse than failing without trying, because 1) you can at least say you didn’t really try anyway, 2) you didn’t actually sacrifice anything in the failure, and 3) the opportunity cost went unrealized.

So I’ve been thinking: how can one overcome FOOC? Perhaps the first step is to recognize that any action is better than no action, because inaction yields no new information (generally speaking, of course). Even if we end up taking the worst possible action, we at least gain new information with which we can update our beliefs. Like in reinforcement learning: if an agent never takes an action, it will remain in the same state forever. But once it starts acting on some initial priors, even though some moves incur immediate penalties, the exploration ultimately helps it maximize the expected total reward over time. Progress is not about making the perfect choice, but about making a choice and learning whatever you can from the outcome. I don’t even think the perfect choice exists, even in hindsight. Some of the most successful outcomes resulted from things that seemed random in the moment. If Steve Jobs had never taken a college calligraphy class, the Macintosh might never have gotten its beautiful typography. You never know how hidden interactions between different factors will unroll the causal chain of events. So instead of aiming to minimize “uncertainty,” we should learn to harness whatever randomness comes our way and extract the most we can from it. Weighing each individual pro and con in an attempt to predict the future sometimes yields worse results than taking a bold leap first and assessing your environment afterwards. Maybe calculated, small, incremental steps aren’t always as useful as large, sporadic jumps.
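To make the reinforcement-learning analogy concrete, here’s a minimal ε-greedy bandit sketch (the payoffs and parameters are invented for illustration). The agent can only learn a path’s value by actually trying it; the occasional “suboptimal” pull is the price of that information.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three "paths" with unknown expected payoffs; the agent only
# learns a path's value by actually trying it.
true_means = np.array([0.2, 0.5, 0.8])

def pull(arm):
    # Noisy reward: even the best path sometimes pays off badly.
    return true_means[arm] + rng.normal(0, 0.5)

estimates = np.zeros(3)  # initial belief: every path looks the same
counts = np.zeros(3)

for t in range(1000):
    # epsilon-greedy: mostly exploit the best-looking path, but keep
    # taking "suboptimal" actions 10% of the time to gather information
    arm = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = pull(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print("estimated payoffs:", estimates.round(2))  # should roughly recover true_means
print("best path found:", int(np.argmax(estimates)))
```

With ε = 0 (pure exploitation), the agent would lock onto whichever path happened to look good first and never discover the better ones; the exploration term is what converts early penalties into information.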

A funny analogy I thought of: when minimizing a neural network’s loss function, stochastic gradient descent (SGD) generally works better than full-batch gradient descent (GD). Rather than computing the exact gradient over the whole dataset, SGD estimates it from a random mini-batch, so each step is cheap but noisy. In practice this usually means faster progress per unit of compute, and the noise has a side benefit: it can shake the optimizer out of shallow local minima and saddle points where GD would settle, improving the odds of finding a better (maybe even the global) minimum. Perhaps these insights apply to life: whereas cautiously calculated steps may lead us to settle for something suboptimal, spontaneity may be a better driver for escaping local optima and reaching more impactful opportunities.
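Here’s a toy sketch of that intuition, not a real training loop: I stand in for SGD’s mini-batch sampling with an explicit noise term on the gradient, annealed over time the way a learning-rate schedule would be. The double-well function, step size, and noise scale are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A double-well "loss" with a shallow local minimum near x ≈ 1.13
# and a deeper global minimum near x ≈ -1.30.
def loss(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def descend(x0, lr=0.02, steps=3000, noise=0.0):
    # noise > 0 crudely mimics SGD's gradient noise; noise == 0 is plain GD.
    x = x0
    for t in range(steps):
        g = grad(x) + noise * (1 - t / steps) * rng.standard_normal()
        x -= lr * g
    for _ in range(500):  # a few clean steps to settle into the nearest basin
        x -= lr * grad(x)
    return x

# Start both runs inside the basin of the shallow local minimum.
for label, noise in [("GD ", 0.0), ("SGD", 5.0)]:
    x = descend(1.5, noise=noise)
    print(f"{label} ends at x = {x:+.2f}, loss = {loss(x):+.2f}")
# GD stays in the shallow well; the noisy run can hop into the deeper one.
```

Whether the noisy run actually escapes depends on the seed and the noise scale, which is sort of the point: randomness buys you a chance at a better basin, at the cost of a bumpier ride.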