DOGE and GDP
Changes in liquidity at the government level significantly impact market returns. Those changes are difficult, often nearly impossible, to predict unless they are clearly announced and followed through on. In 2020, an emergency $10T in stimulus added to US liquidity set off a massive and rapid bull market, followed by an inflationary period. In 2022, rate hikes meant to tame inflation reduced liquidity, pushed the Nasdaq down 33%, and popped the SPAC/NFT/Crypto bubble. Meanwhile, government spending has continued to rise and has had a persistently positive impact on GDP growth regardless of the rate hikes.
That brings us to DOGE, the official-sounding but unofficial Department of Government Efficiency, which aims to reduce wasteful government spending. Heading into Trump’s second term, government spending is roughly 35% of GDP. GDP growth tends to go hand in hand with stock market growth, so this government efficiency goal, spearheaded by Elon Musk and Vivek Ramaswamy, is not an insignificant market variable.
It’s unclear where Trump, Ramaswamy, or Musk will start their DOGE activities or how they will implement them. Musk has stated he’s targeting $2T in cuts, about 7% of overall GDP. His track record is so strong that most leaders in business and tech simply resolve to “don’t bet against him.” There’s no question that current US government spending is a runaway problem, but no one likes austerity, and no one likes markets that don’t go higher each year. We have no realistic visibility today into where DOGE will focus, but it’s something to keep in mind from an investment perspective.
On a related but not necessarily investment-related topic, Boston also has a runaway spending issue. A key variable is the Boston Public Schools (BPS) system, which accounts for roughly 40% of Boston’s budget and is climbing despite declining enrollment. I don’t envy whoever attempts to reform BPS. For now, Mayor Wu has opted to raise taxes on corporate real estate rather than address spending. You can read about this in more detail via a post from our friends at Groma.
Polymarket
Polymarket is a fascinating tool. Free markets are decently predictive because prices are more likely to reflect reality when real financial risks and rewards are on the line, not just written opinions. It’s interesting to compare a market’s percent likelihood of an outcome against what the opinion columns are saying. Polymarket showed Trump with a 95% chance of winning well before the press announced the outcome. 95% is not the same as 100%, but it’s close enough that people could act on the likelihood without waiting for a definitive answer.
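The percentages above come straight from contract prices: a binary YES share that pays $1.00 and trades at $0.95 implies a 95% chance. A minimal sketch of that arithmetic, using illustrative numbers rather than live Polymarket quotes:

```python
def implied_probability(price: float, payout: float = 1.00) -> float:
    """Implied probability of a binary contract that pays `payout` on YES."""
    return price / payout

def expected_profit(price: float, your_probability: float, payout: float = 1.00) -> float:
    """Expected profit per contract if your subjective probability is correct."""
    return your_probability * payout - price

# A YES contract trading at $0.95 implies a 95% chance of the outcome.
print(implied_probability(0.95))  # 0.95
# If you believe the true odds are 99%, each $0.95 contract carries
# about $0.04 of expected profit.
print(round(expected_profit(0.95, 0.99), 2))  # 0.04
```

This ignores fees and slippage, which is why a 95-cent contract is “close enough” for most observers but still leaves room for traders who disagree with the market.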
I stated in the last article that Polymarket is now large (billions of dollars in volume) and operating in a questionable legal gray area. We have securities and derivatives regulations in the US for good reasons, and operating outside the US at Polymarket’s size is taunting regulators. Unsurprisingly, the CEO’s NYC apartment was raided last week and his cell phone confiscated. If you look at the trades over $100k in size, you’ll see many transactions with no potential profit, i.e., buying contracts for $1.00 to receive $1.00. That is almost certainly evidence of money laundering, and thanks to the blockchain, it’s there for everyone to see. Gambling has historically been useful to money launderers because “profits” help justify where the funds came from. Money laundering turns up everywhere, from real estate to TD Bank to countless other examples.
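The zero-edge pattern described above is straightforward to screen for on public trade data. A hypothetical sketch, assuming trade records with a fill price and notional size (the field names and thresholds are my own, not Polymarket’s actual schema):

```python
def flag_no_profit_trades(trades, payout=1.00, min_notional=100_000, tolerance=0.005):
    """Flag large fills priced so close to the payout that there is no upside.

    Paying ~$1.00 for a contract that pays $1.00 earns nothing, which is
    more consistent with laundering than with speculation.
    """
    return [
        t for t in trades
        if t["notional"] >= min_notional and t["price"] >= payout - tolerance
    ]

# Illustrative on-chain fills (made up for this example)
fills = [
    {"price": 0.999, "notional": 250_000},  # no economic upside: suspicious
    {"price": 0.62,  "notional": 500_000},  # ordinary speculative position
]
suspicious = flag_no_profit_trades(fills)
print(suspicious)  # only the $0.999 fill is flagged
```

A real surveillance screen would also cluster wallets and track fund flows, but even this crude price filter surfaces the behavior described above.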
I like Polymarket, but at its current pace, its days are numbered. The counter to that statement is the prospect of lighter regulation and enforcement under Trump. Maybe Polymarket can fly under the radar until the regulations catch up, because the tool is in high demand and it would be nice if it could operate legally. We already have entirely legal, lighter versions of these betting markets, like the fed funds futures market, which also presents rate outcomes as percentage likelihoods.
Conceptual AI limits
There are ongoing debates about whether progress in generative AI models is plateauing, either in LLMs like ChatGPT or more broadly. Tying together a few different concepts, I want to explain why I don’t see us anywhere close to the end of what AI can do.
Computational Irreducibility: Stephen Wolfram proposed this concept and has talked about it quite a bit during the ChatGPT LLM boom.
To bake a cake, you need to mix the ingredients, let them react, and put the batter in the oven. If you skip any step, you change the outcome and no longer end up with something that can be called cake. When you don’t follow the precise steps, the outcome becomes unpredictable and likely not useful. Baking a cake is an irreducible activity. For an AI to replicate this process, it must accurately complete each step AND stitch those steps together in the correct order.
Most work is like this. Competency and mastery depend on an understanding of the fundamentals and the ability to stitch them together. Especially in large corporations where each worker specializes, those jobs are often explicitly defined by a set of computationally irreducible activities.
Units of productive intelligence: Venture capitalist Elad Gil recently spoke about how AI is underhyped and introduced the idea of units of productive intelligence. The opportunity in AI is to produce increasingly valuable units of productive intelligence. For driverless cars, that increasing ability and value looks like the progression from cruise control to adaptive cruise control, to hands-free highway driving, to completing a drive from A to B.
I don’t think there is a lack of data whatsoever. All you have to do to get more data is observe the world. The real challenge is that data needs to be organized into identifiable irreducible activities that can be combined to produce increasingly valuable units of productive intelligence. There’s no limit to that work. Maybe ChatGPT can keep getting better as a highly capable generalist, or maybe it can’t. If it can’t, then it might be time for the GPTs to start specializing in their respective fields.
Identity Management
The airport security system, Clear, has ambitions to be an identity platform. It makes sense since Clear is tasked with national security-level identity verification. If Clear knows who I am, why can’t I use that data to apply for a mortgage, register a new vehicle or sign into Instagram? Apple knows who I am, and I can use that identity to sign into different applications. Google also knows everything about me. We’ve clearly crossed the privacy chasm where it’s impossible to prevent companies from learning everything about us. They can’t help it at this point. I’m not sure where this goes, but I’d imagine multiple players emerge. Google could easily build an identity platform but add that to the list of things they can but won’t do. Perhaps I will use Clear in a few years to apply for an updated passport, file my taxes and sign new investment agreements.
Weekly Articles by Osbon Capital Management: