Weekend Food For Thought (WFFT)
On Today's Menu: The USA and Its Debt, The Greater Bay Area, Who's Winning the Model Race, The Energy Security Fallout, AI and the Human Workforce, Lessons from Outlier Founders, and much more...
Hello from Lisboa,
I hope you had an interesting and productive week.
Nate Silver stated: “Distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict.”
May you find both signal and serenity in the flow below…
1 Getting Visual
2 If You Read One Thing Today - Make Sure it is This
3 Consequential Thinking about Consequential Matters
4 Big Ideas
5 Big Thinking
6 Living the Dream
1 Getting Visual
The Big Picture: The basic geography of world trade is centered on three poles: #1 EU, #2 China, #3 US…
Spotlight: The US’ need to roll a lot of debt, likely at higher rates - “Ten trillion dollars in existing US government debt will need to be refinanced over the coming 12 months. The budget deficit this year is about $2 trillion. Total gross corporate bond issuance in 2026 is likely to be around $2 trillion because of increased supply from hyperscalers. Adding it all up, the total amount of investment grade supply coming to the market this year is around $14 trillion. The bottom line is that the growing supply of investment grade fixed income product is putting upward pressure on rates and credit spreads.” - Apollo Research
Spotlight: Uncomfortable math - For every $5 the US government receives in tax revenue, $1 is spent on servicing the national debt.
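The debt figures above are easy to sanity-check. A back-of-envelope sketch (using the rough magnitudes quoted, not precise Treasury data):

```python
# Back-of-envelope check of the US debt figures quoted above
# (all numbers are the approximate magnitudes cited, in trillions of USD).

refinancing = 10.0   # existing debt to be rolled over the coming 12 months
deficit = 2.0        # this year's budget deficit (new issuance)
corporate = 2.0      # expected gross investment-grade corporate issuance

total_ig_supply = refinancing + deficit + corporate
print(f"Total investment-grade supply: ~${total_ig_supply:.0f}T")  # ~$14T

# "For every $5 in tax revenue, $1 goes to servicing the debt"
interest_share_of_revenue = 1 / 5
print(f"Debt service as share of tax revenue: {interest_share_of_revenue:.0%}")
```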
Key Trend - The US is betting on AI infrastructure and industrial policy, increasingly financed with debt - AI capex likely accounted for all of US GDP growth in Q4 2025… what will carry it in 2026?
Mega Trend: Electrification - Global power demand is projected to rise by nearly 50% by 2040…
Spotlight - Tech and Geopolitical Trends converge in driving the world to electric…75% of the world lives in fossil fuel importing countries. Every $10 oil price rise adds $160bn/year to the global import bill. The single biggest fix: EVs replacing imported oil in transport could save importers $600bn/year.
Spotlight - ASEAN countries leaping into EVs…
Key Trend - Renewable and Nuclear Energy - China leads renewable + battery innovation, manufacturing, and implementation, and unlike most countries it has also managed to control the rising costs of building nuclear plants. It is doing so through indigenization, i.e. building a domestic supply chain and a skilled domestic workforce, and by harnessing the dynamics of Wright’s Law…
2 If You Read One Thing Today - Make Sure it is This
Grace Shao left the “Greater Bay Area” behind to travel to San Francisco and the Bay Area - I just got back from there myself and her observations resonated and are worth pondering - go do it here:
Some Takeaways
“I haven’t been to San Francisco in nearly a decade. Driving in from the airport at 11 pm, it was mostly pitch black, but you could see the glimmer of billboards with flashes of words like “inference,” “agent,” “to do and to be done.”
Then the next day, I looked closer, something you’d only see in San Francisco. A city in an echo chamber of hype and doomism, but also just raw (love/hate) passion for technology.”
“For a city so full of contradictions, at least from headlines, it’s home to some of the wealthiest self-made people, while homelessness continues to plague the city. But that’s also what I realized: headlines often only capture the extremes. The majority of our lives aren’t reflected in any of that. And I wondered, do the headlines around China-hate/fear also showcase the reality? How much of the U.S.–China story is in the headlines, and how much is lived reality within the ecosystem?
I went in with curiosity and openness. As someone who’s covered tech and worked in tech for a big chunk of my career, I’ve never actually lived in SF. I’ve visited a few times, mostly when I was young, when my father would drive us through California on long road trips along the Pacific Coast Highway.
As I’ve lived in BJ, NY, and now in HK, I’ve realized my circle was more with people in public investing, allocating capital to businesses, or covering stories of capital movement, rather than with builders. So I was most keen to hear directly from founders and early-stage investors.”
“A few weeks ago, I had lunch with Dan Wang in Hong Kong over some spicy Hunanese.
He asked me, “Why do you choose to live in HK?
I confidently said: the nature, the city, the safety, the convenience, the proximity (and distance) to the mainland, the ability to be bicultural so naturally, and the education that allows my kids to be trilingual (quadrilingual) easily, at least easier than when I was raised in North America.
He nodded and paused in that way of his, a thoughtful thinker, and said: I see, but nowhere compares with San Francisco for its innovation and its belief in meritocracy.
I nodded. Yes. That, maybe, Hong Kong lags.
In some instances, the city feels like the opposite of Hong Kong. With ample space, as everything is so far apart from each other. I naively first thought I could just buffer 30 mins between each meeting as everything is about 15mins away in Hong Kong via MTR or taxi. In reality, everything was about an hour away as I traveled from SF downtown to Mountain View, Santa Clara, Cupertino, Palo Alto for meetings…
While multi-millionaires are in Patagonia vests and dressed no differently from the average man on the streets, in Hong Kong, Van Cleef necklaces and Hermès purses have somehow become the first aspirational purchases, eroding the intended desired “exclusivity” and taste.
In that sense, oh how I love the culture and vibes here. Every coffee shop I walked into had this mix of energy and ease. People were in comfortable shoes, relaxed, but animated, talking about ideas”
“As I’ve gotten older, I’ve kind of realized how little I know, and I’ve become more comfortable saying I don’t know or not commenting on things I do not have insight into. For years, I’ve read people who justified writing about China because they have an East Asian studies degree, studied abroad in Shanghai during uni, or lived in China in the 1990s. Or worse, “I’ve read a lot of books about China.”
Surely we can see how some of those views can become tunnel-visioned, outdated, or out of touch. It proves again that knowledge can be obtained through 远程教育, but insights, intuition, judgment, and understanding need human experience. You have to see, hear, and experience that life for yourself to understand the nuance.
So it’s with that in mind that I never really write about Silicon Valley. I compare tech companies’ business models and read the literature, but I don’t usually write first-person commentaries. Even for this piece, I feel ample impostor syndrome.
Alas, today, I’m going to take a stab at it and share some takeaways.”
“From conversations with public investors, the first obvious interest I detected is that American investors have been watching China’s IPO market in Hong Kong revive, and USD capital has been finding ways to return to the Chinese market. But then I realized how fragmented and unrepresentative that group and interest were.
Venture capitalists
The VCs I met mainly fell into two camps, and there was no in between: the ones that didn’t really care about China, and the ones actively exploring avenues to invest in Chinese companies, looking for the next Manus, the ones that hope to go global from the very start. The bet is on young teams that still want to find an exit abroad.
Founders and builders
The builders, though, that was the most interesting bit.
Many Chinese AI startups, meanwhile, are looking to sell to the US from day one. Even a Chinese-heritage founder seemed not to take the Chinese market seriously. As he showed me his products, I asked, wouldn’t this be too expensive to compete in China? He simply shrugged it off. ‘It’ll sell here in the U.S.’
“…what you’re seeing is that quality-of-life math no longer makes sense. It’s not the 1990s anymore, when immigrating to the West was objectively an improvement in quality of life/future prospects for most Chinese families.
Of course, there are those who choose to stay in America and usually choose it for the space, lifestyle, multiculturalism, politics, or ideals.”
“For big tech, complacency seems to remain. ‘We’re the best, because America is’ - well, that attitude persists. The grind will have to wait while I find work-life balance. All very fair and warranted, but even the German Chancellor came back triggered from his China visit, saying Western workers do not realize that four-day work weeks will not get them ahead - that complacency may be a hindrance to fast growth; not growth per se, but fast-paced growth.
Overall, only the NVIDIA team seemed extremely on top of the latest happenings and seriously aware of Chinese competitors and the ecosystem’s development.
Most others were completely indifferent to Chinese models and Chinese AI.”
Chinese Big Tech in the US
The obvious takeaway here is that the ambition of global expansion remains, but the strategies are shifting. A decade ago, many of the BBATs were looking to hire local teams, work with local IPs, and ship consumer products natively for locals.
That delusion has burst because 1/ many managers sent here simply have trouble managing local staff - the work culture clash is just too challenging, 2/ local staff are too expensive, and on the creative level many don’t see eye to eye, or simply dismiss HQ, 3/ costs are high, but returns are not.
So the more proven business model is to continue R&D and product design in China, and then ship the product with the help of locally hired sales and support teams. TikTok paved the way, and there is no economic sense in not doing so now. Thus, we’re seeing offices being closed down, and talents facing two choices: return to China? Stay and assimilate? Ok, maybe there is a third choice, but a non-mainstream one - move to Singapore?”
“Recently, the investment world seems to have somehow suddenly woken up to China’s industrial strengths. Investors from Bill Gurley to John Arnold have joined Patrick O’Shaughnessy on Invest Like the Best to discuss how NIO’s factories are far more advanced than American car manufacturing plants, which are decades behind in terms of technological innovation. And that understanding has now extended into all physical AI and embodied AI realms.
It’s said that the former Google executive Eric Schmidt, who now teaches at Stanford, has become something of an advocate on campus and in classrooms, urging students to just jump on a plane and see what is happening in China for themselves instead of listening to hearsay.
As he wrote in an op-ed for TIME, “As AI becomes integrated into our physical world, we’re hurtling into a new chapter of embodied intelligence. Unlike the past few years, where China has been playing catch-up in AI models, China is pulling ahead of the U.S. in physical AI.”
While his focus has been heavily on hardware, he also touches on the fact that, beyond the robotics hype, we’re neglecting anything physical - that includes consumer wearables. It is no secret that China has had a huge hardware lead. It’s not just drones, industrial robots, and humanoids. What people are missing is that the innovation in what hardware can look like is also increasingly coming out of China.”
“For VCs, the question on wearables is brutally simple: even if the demand is there, if your hardware supply chain isn’t in China, how are you cost-efficient? Wearables are unforgiving. You’re fighting BOM costs, tooling, firmware quirks, certification, returns, and a user who won’t tolerate “almost works.” Margins are thin, iteration speed matters, and the entire game becomes: who can ship, learn, tweak, and ship again without bleeding out.”
“Schmidt and Xu’s argument is that the next chapter of AI is physical, and China is pulling ahead not necessarily on chatbots, but on embodied systems that leave the screen and show up in real life. They even point out that Chinese startups showed up at CES with AI-enabled hardware across everything from smart home appliances and wearables to a variety of robots. The wearable angle is the “everyday consumer” version of that same story: deployment, iteration, and a hardware ecosystem that tightens the loop.”
“Six days of intense AI-ing and no care for any jet lag, I was finally relaxed, looking around. Everyone was sunbathing and seemingly enjoying one of those perfect California afternoons — clear sky, warm sun, the kind of day that finally makes you understand why people still put up with the rent and chaos.
Just before that, I was having coffee with a VC who has been investing in ‘national defense’ tech, a hot sector that seemingly popped up in recent years amid the back-and-forth of the trade war and the rise of China’s soft influence globally. The conversation kept circling back to one word: winning. “So who’s going to win?” “Do you think they’ll win?” “We must win.”
Who’s winning the model race?
Whether Anthropic is winning against OpenAI. Whether open-source is winning. Who’s winning at inference cost? Is China winning? Who’s winning in China? Will China win? In what way and why will they win (or not)?
The word kept coming up, and it threw me off a little. Not because it was entirely wrong, but because it felt a bit narrow.
On a business level, the AI market has moved past the point where “who has the smartest model” is the most useful question.
The more interesting question now is where value is actually being captured, and where it is already being competed away.
That is what brought me back to an analogy I heard earlier that week from another VC investor: AI is starting to look a lot like the oil business. And the ‘four-year Albertan’ in me could not resist that one.
Tokens are crude oil
Models produce raw intelligence. They generate tokens. But tokens by themselves are not the end product that any (most) customer actually wants. What customers pay for is legal work completed, code shipped, claims processed, research synthesized, and decisions supported. They pay for refined output.
A lot of investor attention has naturally gone to the infrastructure layer.
But once the model is understood as an intermediate good rather than the final product, the analytical center of gravity shifts.
The question is no longer just who can produce intelligence, but who can turn it into something usable, trusted, repeatable, and economically legible. In other words, who can refine it?
At the bottom of the chain are the token producers: the frontier labs and open-weight model builders – think OpenAI, Anthropic, Google DeepMind, Meta, DeepSeek, Qwen. They produce the raw capability. This layer is expensive to build, technically impressive, and still moving fast. But the model is not the final product, any more than crude oil is gasoline. And what most enterprises/consumers are paying for is gasoline.
Of course, the first form of refinement occurs within the labs themselves, where raw model capabilities are turned into products like ChatGPT, Claude, and Gemini. This is already a real business. It may not yet fully fund the drilling, but it is clearly one viable business model.
The labs are packaging intelligence into interfaces, workflows, and, increasingly, work products that users consume directly. This is why the “labs will be commoditized immediately” argument has always felt too simplistic.
The best labs are not only producing crude; they are trying to capture the downstream markup themselves.
External third party refinement occurs when AI is embedded in specific workflows, industries, and customer relationships. Harvey is doing that in legal. We see AI inside defense software. AI inside clinical workflows, compliance, finance back offices, customer support, and vertical SaaS.
This layer takes cheap model output and turns it into something more valuable by leveraging better context, process, service, trust, industry know-how, and distribution. That’s why software cannot be just all “vibe coded”.
This model also most closely resembles where software value was historically captured. AWS built many important first-party products, but the cloud era’s great value creators were often third-party companies, such as Datadog, ServiceNow, and Workday, that sat atop shared infrastructure and added enough domain value to justify an independent, premium-priced existence.
That pattern now looks increasingly relevant to AI as well.
A model can answer a question. A product can route the answer, compare it with prior cases, attach it to a record, push it to the right person, maintain an audit trail, and make it fit into an institution’s existing way of working.
In the coming year, the focus is shifting away from AGI, at least for businesses; it is the conversion of raw intelligence into finished economic output where much of the real business value lies.”
The spread is the business
The easiest way to understand any AI company is to ask one question: how wide is the spread between its token input cost and the value of the output it delivers?
In much of today’s knowledge work, service professionals still charge based on labor scarcity, credentialing, and process complexity. AI changes the cost structure underneath that. If a model can perform meaningful parts of that work at very low marginal cost, the battle shifts to who captures the markup between cheap intelligence and high-value delivered outcome.
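As a toy illustration of that spread (all figures here are invented purely for the sketch, not drawn from any real company or pricing):

```python
# Toy model of the "spread" between token input cost and delivered value.
# Every number is hypothetical, chosen only to illustrate the unit economics.

tokens_per_task = 200_000        # tokens consumed to complete one unit of work
cost_per_million_tokens = 5.00   # USD, assumed blended inference price
price_per_task = 40.00           # what the customer pays for the finished outcome

token_cost = tokens_per_task / 1_000_000 * cost_per_million_tokens  # $1.00
spread = price_per_task - token_cost                                # $39.00
margin = spread / price_per_task                                    # 97.5%

print(f"token cost ${token_cost:.2f}, spread ${spread:.2f}, margin {margin:.1%}")
```

The spread is wide precisely because the customer is paying for the delivered outcome, not the tokens; the question in the text is what stops competitors from bidding that $39 away.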
But what protects the spread from collapsing?
So I think there are a few factors to examine that can help separate genuinely durable businesses from products that merely look exciting in the current moment.
The first is workflow ownership.
If the AI product is deeply embedded in a complex, vertical-specific, high-consequence workflow, it becomes much harder to replace with a generic model plus a thin wrapper. Customers don’t just buy the output; they buy the reliability that it appears in the right place, in the right format, connected to the right systems, with the right approvals and audit trails. That extra layer is the moat. Once a product becomes part of the enterprise’s operating system or process, the competitive question changes. It is no longer simply whether another model is cheaper or smarter. It is whether the entire surrounding workflow can be rewired without cost, risk, or disruption - which it usually cannot.
The second is accumulated context.
Clever prompting at this point gets absorbed into the base model’s capabilities surprisingly quickly (they suck that data in like Kirby). Real accumulated context is different: customer records, prior work product, compliance history, integration into adjacent systems, operational memory. The more useful the product becomes because it sits inside a growing body of context, the less interchangeable it becomes. From closely following the Chinese AI market, one pattern is clear: anything engineering-related — tooling, prompts, harnesses — is usually not sustainable for very long.
Non-engineering advantages like domain expertise, user data, distribution, and user habits are more durable. The things people can demo most easily are often the things that travel fastest across the market.
The things that stay sticky are usually quieter. They sit in behavior, habit, data exhaust, and institutional memory.
The third is captive customers and limited competition.
So this is why the viral X post is relevant. If 50 startups are all refining tokens into generic customer support chatbots, the spread collapses quickly. If you are one of a small number of companies trusted to operate in medicine, national defense, or other regulated environments, the spread can hold for longer because access itself is scarce.
The opportunity to arbitrage the gap between low token input cost and high output value is simply too attractive — which is why so many refineries are gravitating toward the high-value sectors first.
Over time, competition narrows the gap, but if there is structural scarcity of competitors — regulatory barriers, security clearances, deep institutional trust — the spread persists. In those cases, what is scarce is not intelligence itself, but permission, trust, and workflow access.
This is why the word “wrapper” has become too dismissive to be analytically useful. There is a massive difference between a generic chatbot product and a company embedded in a regulated workflow with years of context and trust built into the product. This is also why the language has started to shift from “wrapper” to “harness.” The market is already trying to distinguish between a thin interface and something more deeply embedded.
The next phase of the market will probably focus on categorizing thin refinement versus deep refinement. One is easier to copy because it is mostly a presentation layer. The second kind is harder to copy because it is intertwined with the base infrastructure and processes.”
China already shows what happens when crude oil gets cheap
In the U.S., the value chain still appears to have two visibly distinct profit models.
There is one at the model layer because frontier labs still command attention, capital, and, in many cases, a real commercial premium. And then there is one at the application layer, where companies try to build products on top of that intelligence, whether it is in the form of a wrapper or a harness.
In China, that picture already looks slightly different. Strong open-weight models have become widely available, and the economics of access have moved lower much faster. That makes it harder to build the entire investment case around proprietary intelligence alone - the model layer is, if anything, the most fiercely competitive.
This does not mean the model layer disappears, nor that model quality ceases to matter. It means the center of gravity shifts. More of the competition, more of the product differentiation, and more of the monetization pressure have moved downstream.
My point is that China is not just another market. It may be an early preview of what happens when model access becomes cheaper, more abundant, and less differentiating. The logic of competition becomes easier to see there because the layers have already compressed.
To borrow from my earlier writing on LLM business models, I once argued that the dish is only as good as the quality of the fish. That may still be true for sashimi, maybe ceviche too. But here I have to counter my own analogy a bit: sometimes the quality of the cooking matters more than the fish’s rarity.
Because this in turn forces companies to differentiate elsewhere: distribution, product design, speed of iteration, vertical depth, ecosystem leverage, and the ability to build a product that feels native to how users actually behave. It forces them, in other words, to refine better. And we’ve written much about this - see AI Proem’s previous coverage.
And one thing that strikes me, each time I speak to investors in the Bay Area, is how poorly understood this still is. Not because people are unintelligent. Far from it. But because too much of the U.S. discourse still frames Chinese AI through a geopolitical or ideological lens rather than considering market structure and technological development.
So you get questions about whether open-source adoption in China is mainly state-led, or whether local edge deployment is mostly driven by paranoia about cloud privacy. Those questions themselves show how far the framing can be from the actual reality on the ground.
China started embracing open-weight AI for the same reasons markets adopt cheaper, workable inputs everywhere: it is effective, it reduces costs, and fierce downstream competition forces product companies to move fast. As we have written repeatedly here at AI Proem, the choice started as an economic one.”
Why it matters for global AI
First, commoditized crude does not kill the refinery business. If anything, it can expand it. Lower input costs and lower barriers to entry create more experimentation, more verticalization, and more attempts to package intelligence into different end uses.
Second, the winning refineries increasingly differentiate on distribution, product sense, and contextual fit rather than on raw model intelligence. That is not some uniquely Chinese curiosity. It is a likely preview of what happens in any market once the question “who has the smartest model?” becomes less decisive than the question “who has the most economically useful product?”
Third, spreads become thinner. Intense competition compresses the gap between token input cost and final selling price. That means Chinese refineries often have to run leaner, move faster, and iterate with more urgency. If open-source continues closing the gap globally, more of the AI world may converge toward that structure. The standalone profit pool at the model layer may not disappear, but it does get pressured. And once that happens, the central investment question shifts quite dramatically. One stops asking primarily who owns the best model and starts asking who owns the workflow in which the model becomes economically indispensable.
Some (especially public) investors I spoke to in San Francisco were not unaware of this risk. But many of them felt oddly complacent about it, as if the direction of travel was obvious but somehow still dismissible. The defenses all sounded familiar: the frontier labs will maintain a lead; the market is still so early that commoditization does not matter yet; if the major labs eventually monetize at a huge scale, who cares what the end state looks like; the U.S. government will ultimately step in and slow distillation. Perhaps some of those things will prove partly true?
In commodity markets, analysts obsess over the marginal producer because that is what sets the price. Something similar is happening here.
China deserves close study, not as an exotic side case, but because it may be showing what happens when the marginal cost of intelligence falls faster than the narrative can comfortably absorb.”
“Perhaps the strongest labs evolve into more like full-stack platforms, fostering ecosystems atop their intelligence while also capturing direct end-user demand. That would likely be the most durable version of the story. But API access alone will likely be insufficient, and first-party chat products may be insufficient as well if the rest of the market becomes more efficient at refinement than current enthusiasm assumes.
One broader thought underlies all of this: the pattern of upstream capability commoditizing while downstream refinement captures value is not unique to AI. One sees versions of it across many waves of technology. In the cloud, raw infrastructure became foundational, but much of the most durable value was captured by the software companies that turned that infrastructure into category-defining systems of work. In mobile, the platforms and devices mattered enormously, but so did the companies that transformed mobile distribution into new end markets and behaviors.
In AI, tokens are the crude. But the more enduring question is the same as it has always been in these technology shifts: where does the markup actually live once the raw input becomes cheaper, and what keeps that markup from being competed away?
So, let me ask you again, ‘who’s winning?’ or ‘who will win?’”
3 Consequential Thinking about Consequential Matters
This EMBER Report looks at “The energy security fallout: from fossil fuel fragility to electric independence” - Plenty of Consequential Thinking about Consequential Matters…Go ponder it here in full:
https://ember-energy.org/app/uploads/2026/03/Report_The_energy_security_fall-out_from_fossil_fuel_fragility_to_electric_independence.pdf
Some Takeaways
The fragility of global fossil fuel supply underscores why scaling renewables and electrification is essential for lasting energy security.
Today’s fossil insecurities: Three-quarters of the world’s population live in fossil-importing countries. Net importers spent $1.7 trillion in 2024. If fuel prices rise, this number rises. For every $10 per barrel increase in oil prices, global net import costs rise by around $160 billion per year.
Clean energy offers a permanent solution: Scaling electrotech - EVs, renewables, and heat pumps - to replace imported fossil fuels in road transport, heating, and power would enable importers to cut their fossil fuel imports by 70%.
Electrotech is already at the scale to cushion shocks: The global fleet of electric vehicles avoided oil consumption equivalent to 70% of Iran’s exports in 2025. Global solar growth in 2025 alone could displace gas-fired electricity equivalent to all LNG exports through the Strait of Hormuz that year.
The lasting consequences: This crisis will accelerate what was already underway. Asia, which imports 40% of its oil through the Strait of Hormuz, now faces the same reckoning Europe did in 2022 — but with increasingly cost-competitive electrotech alternatives available. The bull case for LNG as Asia’s transition fuel is now much weaker. And peak oil has been brought sharply forward: the International Energy Agency has already cut its 2026 demand growth forecast, and the peak it previously put at 2029 may already be here.”
“Oil is the Achilles’ heel of the global economy. In particular, Asia’s oil vulnerability has been exposed by the current crisis. This is Asia’s Ukraine moment.”
“Unlike the oil crises of the 1970s, there is now a better alternative. Electric vehicles are increasingly cost-competitive with gasoline cars. Oil volatility means EVs are a common-sense choice for countries wishing to insulate themselves from future shocks.”
The world’s most vulnerable chokepoint
The Strait of Hormuz is narrow, shallow, and carries a fifth of the world’s oil and LNG. The wider Gulf region, within range of cheap drones, makes up 29% of global oil output and 17% of gas. It is also a major route for trade in fertilisers, aluminium, sulphur, and ammonia. There is no other bottleneck in the global commodity system where so much passes through so little.
Where the 2022 crisis was about Europe and gas, this one is about Asia and oil. 80% of the oil and 90% of the LNG that transits Hormuz is bound for Asian markets. That is roughly 40% of Asian oil demand and over a quarter of Asian LNG imports. Japan, South Korea, India and Thailand all depend on it as their main source of supply.
Yet fossil fuel dependency stretches beyond the countries most dependent on the Strait of Hormuz. The exposure is global, and the import bill is large.
“At the global level, the Gulf’s LNG matters less than its oil. Gulf LNG makes up less than 1% of world primary energy; whereas Gulf oil provides 9%. Analysis of IEA data by Ember shows that 79% of the global population live in oil-importing countries. In 2023, 62 countries imported 99% or more of their oil and 89 countries imported more than 80% of their oil, including Spain (99%), Japan (99%), Germany (96%), Türkiye (92%) and India (87%).
The dependency comes at a high price. Net importers spent $1.7 trillion on fossil fuel imports in 2024. Two-fifths of the global population (92 countries) leak over 3% of GDP abroad in net fossil fuel imports. And as prices rise, so does the bill. For every $10 per barrel increase in the oil price, global net import costs rise by about $160 billion a year. For every $1 per million British thermal unit (MMBtu) increase in the LNG gas price, global net import costs rise by about $20 billion per year.”
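Those sensitivities imply rough global net-import volumes, which is a useful sanity check. A quick sketch (the derived volumes are my own back-of-envelope inference from the quoted sensitivities, not figures from the report):

```python
# Implied global net-import volumes from the price sensitivities quoted above.

oil_cost_rise = 160e9    # USD/year of extra import cost per $10/bbl price rise
oil_price_step = 10.0    # USD per barrel
implied_oil_barrels = oil_cost_rise / oil_price_step   # barrels per year
print(f"Implied net oil imports: ~{implied_oil_barrels / 1e9:.0f}bn bbl/yr "
      f"(~{implied_oil_barrels / 365 / 1e6:.0f} Mb/d)")

lng_cost_rise = 20e9     # USD/year of extra import cost per $1/MMBtu price rise
lng_price_step = 1.0     # USD per MMBtu
implied_lng_mmbtu = lng_cost_rise / lng_price_step     # MMBtu per year
print(f"Implied net LNG imports: ~{implied_lng_mmbtu / 1e9:.0f}bn MMBtu/yr")
```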
Volatility is structural, not episodic
This is the second major fossil fuel crisis in four years. The question is whether this is a run of bad luck, or whether we are in a new world where such crises are structurally more likely.
The underlying shifts point to the latter. Most notably the change in US incentives. In the early 2010s, the United States was the world’s largest petroleum importer. Now it is a net exporter as we noted in Energy security in an insecure world. That changes its incentives.
The Pax Americana – the US-led security framework that underwrote a global economy built on the constant, just-in-time arrival of fossil fuels – appears to be waning. The architecture that kept fossil fuels flowing reliably for seven decades is starting to show cracks.
This alone would be cause for concern. But it coincides with a broader deterioration. Global armed conflicts are on the rise. Tariff levels and trade uncertainty are now at their highest in decades. Oil volatility indices have risen to levels that, outside 2022, have not been seen this millennium. As the world gets less stable, the risk of such dependency becomes less tolerable. The fossil fuel system, reliant on continuous trade through a handful of chokepoints, is becoming more fragile, not less.
The new electrotech alternative
In the past, there were few alternatives to fossil fuel dependency. Now there is electrotech – EVs, solar, wind, batteries, and heat pumps. Countries can affordably slash imported fuels across the whole economy. For many, this is already cushioning the blow.
Every country can be energy independent with electrotech
The technology to slash fossil fuel import dependency already exists. Proven technologies can electrify over three-quarters of the global economy. And every country in the world has enough wind and solar potential to power that demand with home-grown energy.
Three levers do the bulk of the work. Solar and wind replace imported fossil-fuelled power generation. Electric vehicles replace imported oil in road transport. Heat pumps replace imported gas and oil in heating. If all three were scaled to replace imported fossil fuels in their respective sectors, importers could reduce their bill by around 70%.
The superlever today is EVs. They are price-competitive with combustion cars and readily available. Replacing imported oil used in road transport with EVs would reduce importers’ bills by over a third - around $600 billion per year. The second-largest lever is renewables, able to reduce importers’ bills by a fifth.
Sceptics argue electrotech merely exchanges one dependency for another – Saudi oil out, Chinese solar panels in. But this is to confuse renting and owning. A solar panel, once installed, produces power for three decades with no fuel cost, no price risk, no supply risk. An EV, once bought, runs on domestic electricity, which can largely come from local wind and solar. Fossil fuels require continuous imports. Every barrel, every cargo, every pipeline flow must be repeated, indefinitely.
“Electrotech is already at the scale and price to cushion fossil shocks
In the world of electrotech, four years is a long time. Since the 2022 energy crisis, electrotech has become cheaper, better, and more readily available. It is already at a scale to cushion part of the Hormuz shock.
Solar panels have halved in price since 2022. Annual solar installations have nearly tripled in four years. Battery prices have fallen by 36%. Annual deployment of grid batteries is seven times higher. The total cost of dispatchable solar – panel plus battery – is now just $76/MWh for countries that import tariff-free. EVs are increasingly at sticker price parity with combustion cars, and EV sales have doubled since 2022.
Electrotech is now operating at a scale that can ameliorate a major crisis like Hormuz:
The growth in global solar generation in 2025 alone could displace gas-fired electricity equal to all LNG exported through the Strait of Hormuz that year. In 2025, 82 million tonnes of LNG went through the strait; used in gas power plants, that could generate about 600 TWh of electricity. The IEA calculated that global solar generation rose by more than 600 TWh in 2025.
Based on global EV sales, we calculate that electric vehicles displaced 1.7 million barrels per day of oil worldwide in 2025 – up from 1.3 million barrels in 2024 – not yet approaching the 20 million barrels per day of all oil demand that passes through the Strait of Hormuz, but still nearly as much as Iran’s 2.4 million barrels per day of exports.”
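The 82-Mt-of-LNG-to-roughly-600-TWh equivalence can be reconstructed with standard conversion factors. This is my own sketch, not Ember's methodology: the energy content per tonne of LNG and the gas-plant efficiency below are textbook approximations I am assuming.

```python
# Rough reconstruction of "82 Mt of LNG -> ~600 TWh of electricity".
# Conversion factors are standard approximations, assumed rather than
# taken from the report.
MMBTU_PER_TONNE_LNG = 52      # typical energy content of LNG
MWH_PER_MMBTU = 0.29307       # exact unit conversion
CCGT_EFFICIENCY = 0.48        # modern combined-cycle gas plant (assumed)

lng_tonnes = 82e6
thermal_twh = lng_tonnes * MMBTU_PER_TONNE_LNG * MWH_PER_MMBTU / 1e6
electric_twh = thermal_twh * CCGT_EFFICIENCY
print(f"~{electric_twh:.0f} TWh of gas-fired electricity")  # ~600 TWh
```

At roughly 48% plant efficiency the numbers line up almost exactly with the IEA's 600+ TWh of solar growth in 2025, which is what makes the comparison striking.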
Across many countries, rapid EV deployment is already slowing the growth of oil demand. Without this wave of electrification, petrol demand today would be significantly higher — particularly in fast-growing Asian economies where mobility demand continues to expand.”
“The current crisis adds further momentum. Higher and more volatile fuel prices strengthen the economic case for EV adoption, motivating faster uptake and reinforcing their role in curbing future oil demand growth.
The savings from electric vehicles are already being banked. With oil at $80 per barrel, China saves over $28 billion a year in avoided oil imports through its current fleet of EVs alone; Europe about $8 billion per year and India $0.6 billion per year.
Wind and solar go further still. Across net coal-and-gas importers in 2024, the import bill reductions run into the tens of billions: roughly $60 billion for China, around $9 billion each for Germany and Brazil, and $5-7 billion in Spain, the UK and Japan.”
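It's worth inverting the savings figures to see the physical oil displacement they imply. The calculation below is my own inference, not the report's: dividing annual savings by the stated $80/barrel price gives the barrels per day each EV fleet is avoiding.

```python
# Implied oil displacement behind the quoted EV savings (my inference,
# not a figure from the report): savings / (price * days) = barrels/day.
def implied_displacement_bpd(savings_usd, price_per_bbl=80):
    return savings_usd / (price_per_bbl * 365)

quoted_savings = [("China", 28e9), ("Europe", 8e9), ("India", 0.6e9)]
for region, savings in quoted_savings:
    bpd = implied_displacement_bpd(savings)
    print(f"{region}: ~{bpd / 1e6:.2f} mb/d of oil imports avoided")
```

China's $28 billion works out to just under 1 million barrels per day avoided - consistent with the separate claim that EVs worldwide displaced 1.7 mb/d in 2025.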
Conclusion
At some point the Strait of Hormuz will reopen. Prices will ease. The crisis will fade from the headlines. But the structural logic will not change, and the next disruption will not be long in coming. Every year of continued fossil fuel import dependency is another year of exposure to a system that has shown, repeatedly, that it cannot be relied upon. The technology to end that dependency exists. The only question is how many more crises it takes. The countries with the foresight to invest in electrotech now will be better able to weather the next storm.
4 Big Ideas
A lot of focus has been on how AI will sweep in and make much of the human workforce redundant - the history of technological progress tells a different, more nuanced story. Below are two essays that explore some of these themes in more detail. First, David Oks dives into why “There’s a lot more to replacing labor than just automating tasks”; then Julien Ben of Sequoia looks at software - currently at the eye of disruption - and argues that “The next $1T company will be a software company masquerading as a services firm.”…In short, things evolve, and so do our systems; the people and societies able to adapt and evolve quickly tend to harness these leaps…
Read the two essays in full here:
https://sequoiacap.com/article/services-the-new-software/
Some Takeaways
“…what happened to bank tellers?
Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone.
The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.
But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?
The answer, I think, is complementarity.
In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.
That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.
But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. This is the famous lesson of electricity and productivity growth, which I’ll return to in a future piece.
When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement.
It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized.
The ATM substituted tasks; but the iPhone made them irrelevant.”
“…in the 1950s and ‘60s, as Western economies were booming and enjoying their magnificent postwar economic expansions, labor was getting much more expensive. This was a good thing—it was simply the other side of rising wages—but it was also painful for enterprises that relied on lots of manual labor. And so we find that all the fashionable business concepts of the 1950s and ‘60s revolved around reducing labor costs to the maximum extent possible. It’s no coincidence that it was in the 1950s that the word “automation” entered the English language.
It used to be, for instance, that when you went shopping you’d have your stuff retrieved for you by a small army of clerks running around the shop; indeed that’s still how it’s done in places like India with an abundance of cheap labor. But humans were getting expensive in the 1950s and ‘60s, so everyone wanted to reduce the human component, and so in that period you saw the rise of supermarkets and discount stores, where the whole innovation is getting the stuff yourself. (Sam Walton’s Made in America is a good record of what that revolution was like from the inside; consumers tended to be quite happy with the whole thing, since corporate savings could be passed on in the form of cheaper goods.) And it’s the same reason why in the ‘50s and ‘60s you saw the rise of laundromats, vending machines, self-service gas stations, and “fast food” restaurants like McDonald’s.
So in the 1950s and ‘60s, the goal of every single business that employed humans was to find ways to replace humans with machines: in economic terms, to substitute capital for labor. And even though they were a relatively labor-light business to start with, this was true of banks as well. This was the case in the United States, but it was actually particularly true in Europe, where labor unrest among bank employees was an ongoing headache. (Financial sector employees were actually some of the most militant of all white-collar workers during this period: because of prolonged strikes by bank employees, Irish banks were closed 10 percent of the time between 1966 and 1976.)
Enter the computer. In the 1960s, to the great relief of bank management teams, it became possible to imagine that computers could be used to reduce the role of human labor in the banking process.
There were two key innovations that made this possible. The first was IBM’s invention of the magnetic stripe card in the 1960s: this was a thin strip of magnetized tape, bonded to a plastic card, that could encode and store data like account numbers, and which could be read by a machine when swiped through a card reader. And the second was Digital Equipment Corporation’s pioneering minicomputer, which dramatically reduced the price and size of general-purpose computing.
And so, bringing those two innovations together, you could finally imagine a machine that could do, programmatically, what a human teller might do: that could identify a customer automatically, via the magnetic stripe; that could communicate with the central servers of a bank to verify the customer’s account balance; and that could dispense cash or accept deposits accordingly.
And so in the 1960s, teams working concurrently in Sweden and the United Kingdom pioneered the earliest versions of what would eventually become known as the automated teller machine. These were primitive devices—they had the tendency to “eat” payment cards and to dispense incorrect amounts of money, and they didn’t see much uptake—but by the late 1960s it was clear where things were going. IBM, at that point the largest technology company in the world, soon took interest in the technology, and for the next few years groups of IBM engineers refined the technological and infrastructural layer to make the ATM functional.
And by the mid-1970s, after years of technical investment, the ATM was finally ready for prime time. By that point IBM, then enjoying its peak of influence, had decided the market wasn’t worth the investment, and so it ceded the nascent ATM industry to a company called Diebold.
And in 1977 the ATM finally got its big break. Citibank, then the second-largest deposit bank in the United States, decided to make ATMs the subject of a large push: they spent a large sum installing the machines across its deposit branches. The New York Times reported it as “a $50 million gamble that the consumer can be wooed and won with electronic services.” But the response was tepid. In the same New York Times article, we encounter a scene from a bank branch in Queens where one of Citibank’s ATMs was installed: “most of the customers,” the article reports, “preferred to wait in line a few moments and deal with the teller rather than test the new machines.”
But Citibank’s gamble paid off. Consumer wariness toward ATMs turned out to be temporary: the advantages of the ATM over the human teller were obvious. Running an ATM was cheaper than paying a human—each ATM transaction cost the bank just 27 cents, compared to $1.07 for a human teller—and this could either be passed to the consumer in the form of lower fees or simply kept as profit. And ATMs were also just more convenient. An ATM could do in 30 seconds what would take a human teller at least a few minutes; and while a human teller was only available during business hours, ATMs could be used at any time of day.
And the benefits for the bank were even greater. ATMs were expensive to install, but once they were installed they were wonderfully lucrative and had low maintenance costs. The fee opportunities were wonderful, since banks could charge fees on out-of-network transactions. And since ATMs were not legally considered to be branches, banks could deploy ATMs without running afoul of banking laws that restricted interstate bank branching.
All of this meant that banks had a really strong incentive to put ATMs everywhere. And so they did. In 1975 there were about 31 ATMs per one million Americans; by the year 2000, that number had grown to 1,135, a 37-fold increase in just 25 years.
And what did this do to the bank tellers?
The natural expectation is that ATMs would make human bank tellers obsolete, or at least strongly reduce demand for bank teller jobs. And indeed the number of bank tellers per branch declined significantly: from 21 tellers per branch to about 13 per branch once ATMs had hit saturation. But this decline in teller intensity corresponded with an increase in aggregate teller employment. The number of ATMs per capita grew dramatically after 1975; but the number of bank tellers increased along with it. Bank tellers did become a smaller share of total employment, since the increase in bank teller employment was smaller than the increase in other occupations; but at no point in the period between 1970 and 2010 did the number of bank tellers actually enter a prolonged decline.
Why is that? Why did ATMs, which automated the bulk of the teller’s job, not lead to a decrease in teller employment?
We find the most elegant explanation in a paper from David Autor:
First, by reducing the cost of operating a bank branch, ATMs indirectly increased the demand for tellers: the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent. Second, as the routine cash-handling tasks of bank tellers receded, information technology also enabled a broader range of bank personnel to become involved in “relationship banking.” Increasingly, banks recognized the value of tellers enabled by information technology, not primarily as checkout clerks, but as salespersons, forging relationships with customers and introducing them to additional bank services like credit cards, loans, and investment products.
We thus have a classic case of the Jevons effect. Teller labor was an input into an output that we can call “financial services.” ATMs allowed us to produce that output more efficiently and economize on the use of the labor input. But demand for the output was sufficiently elastic that more efficient production meant more demand: and demand increased to the point that there was actually greater demand for the labor input as well. And—this part is not quite the classic Jevons effect—the greater demand suggested to banks that there had been certain functions that were previously considered incidental to the teller job, like “relationship banking,” which were actually quite useful. And so ATMs were a truly complementary technology for the bank teller.
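The Jevons logic in that paragraph can be made concrete with a toy model. All the numbers below are invented for exposition: automation cuts the labour needed per unit of "financial services" (and hence the unit cost), but if demand for the output is elastic enough, total demand for the labour input still rises.

```python
# Toy Jevons-effect illustration (all numbers invented for exposition).
# Assumes, for simplicity, that labour is the only cost, so a 40% cut in
# labour per unit also cuts the unit cost by 40%.
def labour_demand(labour_per_unit, cost_per_unit, elasticity, base_output=100):
    # Constant-elasticity demand: output scales with cost^(-elasticity)
    output = base_output * (cost_per_unit ** -elasticity)
    return output * labour_per_unit

# Before ATMs: 1 unit of teller labour per unit of financial services
before = labour_demand(labour_per_unit=1.0, cost_per_unit=1.0, elasticity=1.5)
# After ATMs: 40% less labour per unit, hence 40% lower unit cost
after = labour_demand(labour_per_unit=0.6, cost_per_unit=0.6, elasticity=1.5)
print(before, after)  # total labour demand rises despite less labour per unit
```

With elasticity above 1, the demand response more than offsets the labour saving - exactly the mechanism Autor describes, before even counting the new "relationship banking" tasks.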
By the 2010s, people had begun to notice that there had been no mass unemployment of bank tellers. In 2015, James Bessen published a book called Learning by Doing, using the non-automation of bank tellers as a central example; soon it became a sort of load-bearing parable about what Matt Yglesias called “the myth of technological unemployment.” From Bessen the story diffused to Autor and Acemoglu; then to the economics bloggers; then to people like Eric Schmidt, who cited the ATM story in 2017 as one reason why he was a “denier” on the question of technological job loss. And they were right: ATMs really didn’t reduce bank teller employment.
But there was an ironic element to all of this: at the exact moment that people started talking about how technology had not displaced bank tellers, it stopped being true.”
“…but iPhones actually did
In the 2010s, bank teller employment entered a period of prolonged decline. This was not a product of the financial crisis that peaked in 2008: bank teller employment was roughly the same in 2010 as it had been in 2007. And the decline was not rapid but gradual. It continued even as banks returned to full health as the Great Recession abated. First there was a severe decline that started after 2010; then a slight recovery at the end of the decade; and then a collapse during the COVID years from which bank teller employment has never recovered. In 2010, there were 332,000 full-time bank tellers in the United States; by 2016, there were 235,000; by 2022, there were just 164,000.
This was not a long-delayed ATM shock: the ATM had reached full saturation long before. It was, rather, the effect of another technology, one that had nothing to do with banking. It was a product of the iPhone.
Apple first introduced the iPhone in 2007. By 2010, it was clear that the iPhone-style smartphone, with a touchscreen and an app store, was going to be the defining technological paradigm of the years to come: people were going to conduct huge portions of their life through the prism of the smartphone, which soon became simply “the phone.” And just as more forward-thinking institutions like Citibank knew in the 1970s that ATMs were the future, the smarter banks knew by the early 2010s that the future lay in what they called mobile banking.
The mobile banking vision was simple: the banking customers of the future would do all their banking via their banks’ mobile apps. They would buy things via payment cards or, later, via Apple Pay; they would check their balance or make deposits through the banking app; the customer’s relationship with the bank would be mediated entirely via the app.
In this new world, there was no reason for the physical bank location to exist. Indeed there were new entrants, like Revolut or Klarna, that existed entirely as mobile apps. The branch was a thing of the past.
Mobile banking succeeded much more rapidly than the ATM did—which is remarkable, considering that mobile banking was a much bigger change than the ATM. I remember, as a kid, opening my first bank account at the Chase branch in my hometown, and the excitement of occasionally visiting there to deposit any checks I might have. I’m still a Chase customer, and I interact frequently with my Chase account for all sorts of reasons. But it’s been many years since I visited a physical Chase location. My relationship with Chase has transcended any need for the branch. I don’t think I’m alone in this: the Chase branch in my hometown, where I would once deposit checks, closed in 2023. The building now houses a doctor’s office.
And so the rise of mobile banking removed any real reason to have bank branches.”
“The ATM had been an innovation within the existing world of physical banking, and thus its replacement of the bank teller could inevitably only be partial; as long as people were still visiting the bank branch, it was useful to repurpose tellers as “relationship bankers.” But when branch visits declined that stopped making sense. The iPhone represented a wholly different way of banking, and within it there was no real need for the bank teller: and so a large institution like Bank of America was able to reduce its headcount from 288,000 in 2010 to 204,000 in 2018.
Of course, the transition to mobile banking also created jobs: banks now needed software developers to build and maintain the digital interface, and they needed customer service representatives to handle any problems that might emerge. And so a “mid-skill” job was replaced by a thin stratum of “high-skill” jobs and a vast army of “low-skill” ones. The term for this in labor economics is “job polarization.”
So that’s the irony of the parable of the bank teller. Technology did kill the bank teller job. It wasn’t the ATM that did it, but the iPhone.”
“The lesson is worth stating plainly. The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That’s because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks.
This has, I think, serious implications for how we’re thinking about AI.”
“The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.”
“The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.
I don’t think we’ve really yet learned what those new structures will look like. But, at the limit, I don’t quite know why humans have to be involved in those: though I suspect that by the time we’re dealing with the fully-automated organizations of the future, our current set of concerns will have been largely outmoded by new and quite foreign ones, as has always been the case with human progress.
But, however optimistic I might be about the human future, I don’t think it’s worth leaning on the history of past technologies for comfort. The ATM parable is a comforting narrative; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story.”
“Services: The New Software”
Every founder building an AI tool is asking the same question: what happens when the next version of Claude makes my product a feature? They’re right to worry. If you sell the tool, you’re in a race against the model. But if you sell the work, every improvement in the model makes your service faster, cheaper, and harder to compete with. A company might spend $10K a year for QuickBooks and $120K on an accountant to close the books. The next legendary company will just close the books.
Intelligence vs Judgement
Writing code is mostly intelligence. Knowing what to build next is judgement.
Translating a spec into code, testing, debugging: the rules are complex but they are rules. Judgement is different. It requires experience and taste, instinct built on years of practice. Deciding which feature to build next, whether to take on tech debt, when to ship before it’s ready.
A year ago, most Cursor users treated AI as autocomplete. Today, more tasks are started by agents than by humans. Software engineering accounts for over half of all AI tool usage across professions. Every other category is still in single digits.
The reason is that software engineering is primarily intelligence work. AI has crossed the threshold where it can do most of the intelligence work autonomously and leave the judgement to humans. Software engineering got there first. It is coming to every single profession.”
“Copilots and Autopilots
A copilot sells the tool. An autopilot sells the work.
Until recently, AI models were still developing intelligence and judgement, so the right approach was to build a copilot first: put AI in the hands of a professional and let them decide what to do with it. Harvey sells to law firms. Rogo sells to investment banks. The professional is the customer, the tool makes them more productive, and they take responsibility for the output.
Today, the models are intelligent enough that in some categories the best place to start is as an autopilot. Crosby sells to the company that needs an NDA drafted, not to outside counsel. WithCoverage sells to the CFO who needs insurance, not to the broker. The customer is buying the outcome directly. The work budget in any profession dwarfs the tool budget, and autopilots capture the work budget from day one.
The higher the intelligence ratio in any field, the sooner autopilots will win.”
“The Convergence
Today’s judgement will become tomorrow’s intelligence.
As AI systems accumulate proprietary data about what good judgement looks like in their domain, the frontier will shift. Copilots and autopilots will converge. The copilot-to-autopilot transition has already begun in several categories. But the starting position matters because it determines where autopilots can win customers now and begin compounding the data that will eventually let them handle judgement too.”
“For every dollar spent on software, six are spent on services.
The total addressable market for autopilots is all labour spend in a category, insourced and outsourced combined. But the right place to start is where outsourcing already exists.
If a task is already outsourced, it tells you three things. One, the company has accepted that this work can be done externally. Two, there’s an existing budget line that can be substituted cleanly. Three, the buyer is already purchasing an outcome. Replacing an outsourcing contract with an AI-native services provider is a vendor swap. Replacing headcount is a reorg.
The playbook: companies should start with the outsourced, intelligence-heavy task. Nail distribution. Expand toward the insourced, judgement-heavy work as the AI compounds. The outsourced task is the wedge. The insourced work is the long-term TAM.”
Opportunity Map
Plotting every services vertical on an intelligence-to-judgement spectrum and outsourced-to-insourced ratio produces a priority map with labour TAM in brackets. The list is illustrative.
“In 2025, the fastest-growing AI companies were copilots. In 2026, many will try to become autopilots. They have the product and the customer knowledge. But they also face the innovator’s dilemma: selling the work means cutting their own customers out of doing it. That’s the opening for pure-play autopilots.”
5 Big Thinking
The Serial Acquirers podcast speaks with Will Thorndike, author of The Outsiders (a great book), at the Redeye Serial Acquirers Conference - it’s a conversation rich in lessons from outlier founders/CEOs - you can listen to it here:
https://www.redeye.se/video/event-presentation/1157651/fireside-chat-with-will-thorndike-in-conversation-with-chris-mayer-at-the-redeye-serial-acquirers-conference-2026-march-13
Or read the notes here:
Some Takeaways
“…a few points about outliers/outsiders that contribute to their success:
They systematically schedule white space: blocked-out time in their weekly calendars to be alone, think, and read.
They almost entirely cut out investor relations: typical public CEO spends ~20% of time on IR; outsiders “just cut that out, freeing up that time to be spent on other things.”
They were obsessive about talent and retention: “all eight had really excellent cultures and had exceptional retention of top talent, better in both cases than the peers.”
They were highly data-oriented and rational in all decision-making, including the willingness to kill sacred cows (Singleton closing Packard Bell, the Grahams selling The Washington Post).
All eight were first-time CEOs — “the most surprising finding in the book maybe.”
6 Living the Dream
Have a great weekend when you get to that stage,
Sune












