Wants & Needs

I don't believe building great products is necessarily about solving problems. I'd categorise a 'great product' as one people (increasingly) want and that also works well. We're not as rational as we'd like to believe, and sometimes want things for the sake of it. Other times, we don't want things despite how useful they appear.

I think we'd all agree loneliness is a pretty massive problem, but it's one I'm yet to see addressed sufficiently through technology. Many—including myself—have tried to address this but have been unsuccessful. One could argue it's about the way it's previously been addressed. This is plausible. Perhaps the technology required to optimally address this doesn't exist yet. But I highly doubt it in this case. 

I believe in the potential of technology to improve people's lives, but I also recognise we're driven by incentives, and there's little incentive to address loneliness. After all, money is the strongest incentive, and addressing loneliness—through an app, for example—seems difficult to monetise. I'm unsure whether this is due to stigma, compartmentalisation, or a bit of both. That said, perhaps this is one problem better addressed 'offline'.

There's no succinct way to close this, but I saw a pretty apt quote on Hacker News: "You can make something people want, or you can make people want something."

Notes:

  1. I think a lot of startup advice—including that on solving 'painful problems'—is B2B-focused and overlooks many companies which succeeded with the B2C model. Many of these B2C products upgraded existing solutions, e.g. Spotify, Dropbox, Zoom, Slack.
  2. I typically shy away from writing about startup principles as it's both overdone and often generalised.


Visionary Capital

Startup culture heavily relies on certain narratives and messaging. The stories of near-death fundraises, office-floor sleeping bags, and last-ditch pivots are retold with increasing drama on every podcast and at every fireside chat. The implication that most founding stories need to be interesting—containing Shakespearean elements of struggle, drama or betrayal—is especially harmful for young entrepreneurs.

The 'visionary' archetype has subsequently emerged as a natural idol created by the VC/startup ecosystem. This mythologised figure aligns almost perfectly with the overarching narrative: contrarian, eccentric, egotistical and slightly controversial. The archetype has since become aspirational: for VCs to spot, and for first-time founders to become.

It's certainly important to dream big. Many of our greatest innovations exist not just because certain people saw what others couldn't, but because they pursued bold and daring ideas with high conviction. You would feel on top of the world too if a big bet paid off, made you millions, and changed how entire industries operate, especially if it did so despite others doubting you. However, pioneers were previously deemed visionaries only after achieving success. As a result, there were fewer visionaries, and thankfully fewer thought leaders on podcasts without a leg to stand on.

Today, we're experiencing technological innovations of an unprecedented nature. Most notably, the cost of building products – financially and otherwise – has decreased massively, and we're seeing a rise in competition across multiple sectors. In the same way thousands of products are launched each month, hundreds of thousands of founders are emerging. It's easier than ever to start a company, but building products is no longer enough to stand out. So now we have more self-proclaimed visionaries than ever, either over-dramatising aspects of startup culture and/or simply regurgitating dictums by industry veterans – or attempting to debunk them for the sake of engagement, i.e. 'ragebaiting'. Perhaps the real artificial intelligence we should be worried about already exists on tech Twitter.

There's nothing inherently wrong with thought leadership, but when it primarily serves to validate the pursuit of ego over impact, we're creating a dangerous template for the next generation. This isn't new either – for the longest while, this idealised archetype has involved grandiose thinking, an inflated sense of special destiny, and godlike power excused as necessary quirks of visionary leadership. Oftentimes, these are revealed to be nothing more than a facade, masking fragile egos that have somehow amassed outsized influence.

However, unlike other industries where unchecked hubris might only affect a company, technology's unprecedented reach means ego-driven founders can affect billions of lives across healthcare, education, journalism, and beyond. The same personality traits that might be harmless (or even helpful) in building a photo-sharing app become concerning when applied to reshaping democracy or developing artificial intelligence.

While previous generations of industrialists could influence how we work or travel, today's tech leaders can reshape how we think, connect, and understand reality itself. Perhaps it's time to acknowledge that the stakes are too high for ego to lead the way.

Discontent

This is another one from the Short Decades series.

In their original forms, the likes of MySpace, Facebook and Twitter provided users with platforms for connection-building through growing their network, adding friends, and maintaining relationships. Hence, social networks. These days, TikTok, Instagram and others are primarily content vehicles, optimised not for connection but for consumption and, increasingly, monetisation. Hence, social media.

There's a powerful feedback loop between language and behaviour, and the terministic shift from social network to social media certainly transformed user behaviour. Once platforms became framed as 'media', the focus fundamentally shifted from "who do I know?" to "what can I share/engage with?" and success became measured in views/likes rather than meaningful connections. The semantic reframing didn't just describe the transformation of these platforms, but instead helped enable and accelerate it; in the same way calling something a "news feed" rather than "friend updates" subtly shapes how we approach and consume that information.

More than ever, we're in a state of perpetual performance through meticulously engineered attempts at authenticity. The 'photo dumps' are ironically often more curated than the traditional posts we once knew and loved Instagram for. The 'morning routine' videos, where people wake up, set up their tripods, and crawl back into bed, bemuse me. These seemingly "authentic" moments require extensive preparation, multiple takes, careful curation and constant editing.

In a constant cycle of content creation and consumption lies a pressure to participate in the aspirational theatre of digital performance, one which extends beyond our 'online presence'. These very interactions fuelled my support for digital monism: the belief that the 'online' and 'offline' worlds are interchangeable and interdependent. On social media, I specifically liken this to an agora where users occupy the roles of both performer and audience, simultaneously watching and being watched, often internalising this gaze even when offline. This digital visibility creates a continuous self-monitoring that transcends online spaces, affecting how we behave, dress, and interact in physical environments with the assumption of potential documentation. I would argue that such a persistent state of visibility blurs the boundary between performing and authentic living.

I believe the constant pressure to perform, alongside the abundance of consumable content, has enabled the commodification of human experience. I think some types of 'content' are 1) worth monetising and 2) of little long-term consequence. Specifically, 'content' which decentres the self: comedy, fitness, career, and food come to mind. We've instead extended beyond these into a blurry genre which doesn't just exist on screens but has become interwoven with all aspects of existence, further cementing that we're not dealing with two separate realms but a synthesised reality. Why? Simply because various aspects of the ordinary human lifestyle now demand documentation and distribution. It's rather homogenous, and subsequently quite exhausting.

The issue at hand isn't merely the existence of social media platforms, but their impact on our lives offline. There are now more 'Instagrammable' spaces than ever: focused on aesthetics and faux-ambience, devoid of natural human insight. More notably, I find it interesting how intimate moments have turned from sacred and personal into opportunities for documentation and distribution. Are we not exhausted from the constant curation?

We're privileged to be alive at a time where we're able to capture various moments and share them with such ease. I simply worry about the long-term consequences of perpetuating a digital main-characterism which extends offline. I suppose I wonder if we're heading towards a future focused on creating content rather than actually being content.

Twenty

I briefly wrote a weekly Substack series titled Short Decades. This is still one of my favourite essays, although it's a bit different to what and how I typically write.

In Yoruba culture, proverbs (òwe) reflect a worldview of pragmatic realism through encoded insights about human nature, virtues and cosmic order. These sayings serve as vessels of generational wisdom which transform lived experience into actionable guidance. This philosophy aligns with a broader African saying: "An elder sees sitting down what a child cannot see standing up."

Recently, I've been contending with "Ogún ọmọdé kìí ṣeré ogún ọdún"; in other words, twenty children cannot play together for twenty years. I suppose it can be considered a simple (but harsh) truth.

I heard that saying for the first time just months following my elementary school graduation. At the time, I understood it as nothing more than a poignant observation about how time and circumstances naturally separate even the closest bonds. My friends and I had branched out to different schools across the country, many of those bonds would not remain the same, and we would inevitably each form new ones. In many ways, I no longer resonate with the proverb.

If you've been following these essays, you might've noticed a thematic interest in how language shapes thought. In this case, I believe the specific juxtaposition of 'children' and 'play' does more than just mark time's passage; it also reinforces the disparity in age/wisdom – perhaps to trivialise the nature of child-like companionship compared to 'adult-style' social bonding.

While I appreciate Yoruba philosophy's staunch realism, I find myself questioning the absence of a counter-narrative to this seemingly hyperbolic adage. Yes, twenty children cannot play together for twenty years – I understood this even as a child. But perhaps the more interesting question is whether a handful of connections, carefully chosen and deliberately maintained, can defy the proverb's implied inevitability.

Many of my friends are also from Lagos, Nigeria, but are scattered across the world, and this dispersion only amplifies the difficulties in nurturing friendships. Hence, this paradigm is extremely intriguing on a personal level – more so considering the timely festive season when many of us return home. Although my friends and I have been dispersed across different countries over the years, I am glad to still have friendships spanning 5, 10, and even 20 years. I am therefore able to provide some firsthand perspective.

I believe all adult relationships are fundamentally rooted in choice: we actively choose to begin, maintain, or terminate relationships. Relational endings typically stem from either situational or personal changes. I believe this best aligns with Derek Parfit's 'Relation R', which holds that psychological continuity and connectedness matter more than strict personal identity in what binds us over time. When applied to friendship, particularly in the context of the diaspora, this framework illustrates why maintaining long-term connections is so complex. It's not necessarily about staying in touch, but about maintaining connection with someone who, like you, is constantly evolving under vastly different influences.

As explored in one of my earlier essays, today's social media platforms are content vehicles for consumption instead of network ecosystems for connection-building. Instagram, for example, is particularly sticky: most people use it daily, but not to see their friends anymore. Instead, our friends' story updates compete with brand ads and influencer posts for our dwindling attention. By contrast, seeing 'friends' daily in real-life school or university settings was different: it was just us & them. Now, the dominance of social platforms has degraded the friend-friend paradigm to mulch.

This shift has also created a peculiar paradox in how we maintain relationships beyond the digital sphere. Social media promises constant connection but delivers something more insidious: an environment characterised by endless identity performances which prevent authentic and tangible connection.

The result? A striking irony where we possess extensive means for connection, yet meaningful connection feels increasingly elusive. The ease of maintaining surface-level digital connections has paradoxically raised the activation energy required for deeper engagement. From doomscrolling short-form content to the proliferation of hookup culture, we've become accustomed to treating things as consumable rather than preservable. Our relationships increasingly mirror our consumption patterns – quick, disposable, and optimised for hedonistic pursuits rather than lasting value. Perhaps this is another feature of late-stage capitalism manifesting in our social dynamics. Who knows?

I suppose it's only natural to resist accepting potentially harsh truths, especially when the outcome isn't inevitable. But I fail to understand how we've elevated this into a virtue by rationalising distance over effort, independence over interdependence, and loss over preservation. On one level, I suppose there's no benefit in deeply valuing something that society no longer seems to. But in an ever-changing world—politically, socially, and technologically—the one thing you want to be able to count on is each other.

I know twenty children can't play together for twenty years; I've known that since I was a child myself. Yet somehow it seems we've rebranded naivety as believing relationships can endure, and wisdom as expecting them to fade – embracing "natural drift" as an inevitable occurrence rather than a genuine possibility. I fear this self-fulfilling prophecy not only justifies our passive disconnection from existing bonds, but potentially leaves us perpetually guarded in new connections we're told are temporary by design.

Yeah, Write

I started my first blog at 7. Tumblr was pretty big at the time, and I randomly found myself writing about the football transfer market a few times a week. For some reason, people liked what I had to say. To this day, I can't understand why, and I wonder if they would've been interested had they known my age at the time. I didn't particularly enjoy writing, nor did I have any intention of becoming a sports analyst. Truthfully, I don't think I knew what I was even doing. I simply enjoyed something, found myself writing about it, and enjoyed that I was able to write.

Each day, most of us tend to write more words than we speak. Consider emails, texts, tweets, comments, to-do lists, meeting and/or class notes, etc. In most cases, we don't necessarily enjoy writing (or rather, typing); it can feel tiresome and possibly annoying — emails probably top the list. In the near future, we can expect AI-powered tools to write emails for us automatically. This isn't a bad thing by any means – emails can be tiresome, and I'm a huge advocate of finding tools to make one's day-to-day far more efficient. Also, the advent of smarter email tools probably has relatively insignificant consequences in the long run.

Assuming these succeed—and they likely will, as emails are a burden—the obvious next step is more tools to make writing even 'easier'. We're already seeing many of these for various 'formal' use cases, but I expect we'll see more tools for autonomous daily use. Consider a text-messaging tool which has been trained on your personality, writing style and relational context. Or a speech-to-text tool which turns your messy thoughts into clearer ones for your daily journalling. On the surface, tools like these would likely be helpful and save time. But I worry we'll simply find ourselves writing less and less for ourselves and each other.

If we do write less, it won't be because of these new tools themselves; rather, the tools have emerged from a perception of daily writing as burdensome or laborious. I think we'll write less in general, and there's enough to suggest we'll use brain-computer interfaces for many daily tasks over the next two decades. Of course, the rate at which this occurs depends on the technology itself. As it stands, I believe we neither enjoy nor appreciate a) the process of writing and b) written text itself enough. If enough of us cease to write, we may eventually evolve past the need for it, as the selective pressure for written communication relaxes. Should this occur, I don't believe we will completely forgo writing over time, nor will we risk the evolutionary regression or cognitive atrophy that would follow. Instead, we will likely 'write' with more advanced means.

My concern essentially involves the dichotomy between generated text and written text; we use both on a daily basis. The former implies I/O processes: email responses, meeting notes, to-do lists, etc. These are systematic processes which prioritise precision, accuracy and efficiency. It's easy to see why these are the first wave of productivity tools. Written text, on the other hand, requires a level of cognition. We produce it when we're required to think about what we're writing: tweets, DMs, journalling. Written text requires creativity, personality, and an ineffable sense of 'humanness'. It's also easy to see why this isn't as easy to automate as the former.

There is something extremely agentic and intentional about consciously written text. We appreciate consciously written text because we possess consciousness, and therefore can fathom the inner workings required to produce such text. To Kill A Mockingbird is a classic because of Harper Lee's experiences in the racially segregated South. Things Fall Apart would not have been the same without Chinua Achebe's experiences with cultural conflict in colonial Nigeria. Frankly, it's the same with most art forms: there are entire disciplines dedicated to interpreting the consciousness displayed in film, text, visual art, etc. Beyond art forms and communicative mediums, writing has played an integral role in our anthropological evolution: the documentation of religious texts, important historical events, scientific discoveries, and so on.

There is a strong relationship between consciousness, agency, and freedom. We're simply in the earliest stages of astronomical technological advancements, and while I'm eager, I think it's important we retain those three core elements. I simply believe writing is a fairly unadorned way to do so.

This writing needn't be overly serious or formal, but I believe writing effortlessly is one way to write more. I've often likened Twitter—and I suppose now Bluesky—to J.S. Mill's marketplace of ideas, and I suppose that's one way to maintain agentic writing. In my experience, I write more—and hopefully better—about things which feel natural. The more we enjoy it, the more we'll do it. It's pretty important we begin enjoying it in order to preserve its quality.

I implore you to write something slightly different each week. For instance, yesterday I wrote a mini script of The Office: Nigeria. Next week, I might write my first thread on Twitter. Who knows? Either way, the freedom to consciously express my own thoughts and exercise my agency is not one I take for granted, and not one any of us should.

Writing is an extremely cognitive process, and we'll still require degrees of cognition in the future — whether for prompt engineering with chatbots or text conversion with neural implants (a stretch). LLMs have fundamentally reshaped how we process and interact with information, and we're still in the early days. It's extremely necessary for us to see writing as agency and therefore begin writing with agency. Writing regularly, whether it's fiction, social media, or personal reflections, helps us retain our distinctly human ability to demonstrate our consciousness with words. And in every moment of writing, we affirm a fundamental truth: to write is to choose.

African Intelligence

Afrobeats has grown exponentially in the decades since its origin. Multiple artists sell out arenas globally, the genre consistently garners billions of streams each year, and the Recording Academy recently created a new category specifically for African music.

However, Afrobeats has changed significantly, leaving most long-time listeners of the genre with mixed feelings. Why? Afrobeats is quite personal to our culture, and the trade-off between authenticity and commercialisation feels antinomical. The evolution of any genre is inevitable, but it becomes trickier once it demands trading authenticity for mainstream appeal. I'm not surprised by this, and I'm sure others aren't either. Nigerian culture has generally become more popular over the last two decades, with our film, cuisine, and art gaining traction in global spaces.

Therefore, it makes sense that Afrobeats follows suit within the ongoing globalisation of Nigerian culture. Although this increased exposure should be a good thing, I remain sceptical. Our artists are more able than ever to achieve market fit thanks to this increased recognition, which has played a critical role in cementing our positioning as the 'cool kids' of Africa. However, it is worth noting that this rapid growth was not serendipitous or purely random.

For the longest time, the core lever for Afrobeats' commercialisation has been features and collaborations with other artists. On the surface, it's a seemingly innocent 'quid pro quo' which promises growth for both parties. Instead, I feel it's been an exploitative dynamic akin to the resource extraction observed in former colonial states. We could discuss the symbolism of Wizkid's 'collaboration' with Drake on One Dance, where his voice can barely be heard. Or perhaps Beyoncé's The Gift, which featured primarily Afrobeats artists and yet conveniently included no African countries on the tour.

One could argue that the tables seem to have turned, as we now see many Afrobeats artists featuring foreign artists on their own projects. Case in point: global acts like Justin Bieber, Nicki Minaj, and Chris Brown have been featured on multiple Afrobeats projects over the last 5 years. I believe the initial quid pro quo stands, but we falsely believe we're in the driver's seat. Afrobeats is trendy, and this time it conveniently benefits the foreign (primarily American) artists to be featured within the genre, instead of vice versa.

I am not equally concerned about every aspect of Nigerian culture losing its authenticity. In many cases, some degree of cultural synthesis can be beneficial. Nollywood, for example, could benefit from raising its standard to match the production quality of its counterparts. Likewise, I've long dreamt of Nigerian-fusion dishes, which are finally becoming possible through increased exposure. However, I am primarily sceptical regarding the influence of cultural convergence on Afrobeats, due to the propensity for its assetization.

Assetization is a key aspect of technoscientific capitalism wherein non-financial entities are transformed into financial assets which allow for investment and subsequent returns. The newly converted asset thus becomes a mechanism which can be controlled, traded, and capitalised through its revenue stream.

Many art forms have been commodified for centuries, and music is no different. However, music has become easier to both commodify and assetize due to its low barrier to entry for consumption and increased accessibility through digital streaming services. We've seen this happen through increased vehicles for investment within the music industry, primarily through artists' catalogues and royalties. Over time, the artists' output becomes the means for rent extraction, in both the short and long run.

My concerns regarding Afrobeats deepen with the influx of labels such as UMG, Atlantic, Def Jam and Warner Music, which have all signed, partnered with, or acquired Nigerian record labels or artists. Given Afrobeats' growing global popularity, and Africa's increasing recognition as an emerging market in terms of technology and culture, I wouldn't be surprised to see artists like Rema or Burna Boy sell their catalogues over the next 7-10 years.

This isn't inherently problematic, as the process is voluntary, requires artists' consent, and gives the artists a chance to benefit. However, it is especially significant given our colonial history, riddled with exploitation and control. For many of our artists, global success represents far more than growth: it is the shattering of barriers and the creation of pathways for future stars. It signals a bold representation of our country, despite its tumultuous past and seemingly bleak future. Even though Fela partnered with labels such as EMI and MCA, he would be rolling in his grave if he saw our artists signing away ownership of their catalogues. I'm aware this hasn't happened yet, but it looms on the horizon: is Afrobeats yet another resource to be extracted or, even worse, appropriated?

Altering our sound for mass appeal is bittersweet in itself, but signing away ownership of our music raises profound questions about who truly owns our culture. While the financial incentives are evident and enticing, the artists are obliged to hold onto what we can still call ours. We must remember that this isn't a level playing field, and the rules which apply to our global counterparts carry different implications for us. Lest we forget, these were our literal masters less than a century ago; the word means something very different to our people, carrying a weight of history we cannot ignore.

Cide Effects

TW: suicide

It's World Suicide Prevention Month and this year's theme is "changing the narrative on suicide". I found this problematic, not because there isn't a dire need, but for two reasons. For one, 'raising awareness' appears futile without (social, if not political) change. Also, I believe the term itself enables and reinforces the stigma it aims to eradicate.

Earlier this year, I began examining whether the word 'suicide' influenced its stigma. It stems from the Latin suicidium (sui - "of oneself" and caedere - "to kill") and replaced the far more accusatory 'self-murder.' The term 'suicide' had become established in the English language by the mid-18th century, resonating with earlier terms like suicist and suicism which were rooted in notions of selfishness—a prejudice that persists today.

Today, the term 'suicide' falsely associates ending one's life with despair, futility, and failure. The stigma is unsurprising considering how the '-cide' suffix ostensibly aligns suicide with words like homicide, infanticide, and genocide, which all denote murder rather than mere death.

Catherine Ruff provides some useful perspectives on individuals who ended their lives during the Stoic era. At first glance, Stoicism appears at odds with suicide through its teachings of resilience, dignified endurance of hardships, and apatheia. Yet Zeno, the founder of Stoicism, strangled himself to death. Historical literature has likewise depicted self-inflicted deaths without stigma, and in some instances as acts of heroism: from the biblical Saul to Shakespeare's Ophelia. The deaths of Cato the Younger, Socrates, Brutus, Cassius, and Mark Antony were similarly viewed as acts of martyrdom.

It's worth asking what has since changed and why. I believe it's for two reasons.

On one level, I suppose the moral condemnation intensified with the rise of Christianity and other Abrahamic religions. By framing life as sacred and suffering as redemptive, taking one's own life came to be viewed not just as a personal tragedy, but as a spiritual transgression. Given the church's historical influence over laws, this theological judgment filtered into law and culture, where those who ended their lives were posthumously shamed. Why? The act was stripped of its context and meaning, flattened into a symbol of weakness or wrongdoing. This significant—and arguably regressive—shift likely emerged through a semantic narrowing, where a once-complex (and accepted) act became reduced to a singular, negative interpretation.

I suppose it was also significantly influenced by the advent of modern psychiatry. By framing the desire to end one's life as a symptom of mental illness, specifically depression, the focus inevitably shifted from morality to pathology. So, where the advent of religion produced semantic narrowing, the emergence of modern psychiatry likely caused a cognitive narrowing which reduced complex social, existential, and philosophical factors to diagnoses. The person became a patient, and their decision—a malfunction.

It's ultimately worth remembering that suicide remained a crime in England and Wales until 1961.

These days, there is little space left for conversations about autonomy, dignity, or moral reasoning in the decision. Instead, dominant narratives insist on prevention at all costs, reinforcing the binary that to want to die is either irrational or sick. Although this framework may save lives, it can also silence those for whom the struggle is not illness but meaning.

I suppose we must begin with language if we are to change the narrative, and ideally for good. Perhaps future prevention means replacing the outdated term itself; doing so may eradicate its stigma once and for all.