r > g: Specialization in the age of AI

This entry of Machine Readable is a collaboration between Alexander Huras, Founding Engineer, Halcyon, and Jeff Fisher, Communications Lead, Halcyon. Jeff's Twitter addiction contributed to the inspiration for this post, and Alexander's subject-matter expertise and domain knowledge contributed to its substance.

Anyone who reads the news, doomscrolls Twitter, or subscribes to Machine Readable (sign up here) already knows: there’s been a lot of conversation about the value of AI lately. But this week, the conversation took an interesting turn.

Apropos of seemingly nothing more than escalating (and, admittedly, sometimes warranted) skepticism of AI startups, Harvey.AI, a legal AI platform, came under scrutiny from the Silicon Valley Twitterati. Three days later, Harvey responded by announcing an impressive Series C round - so whether the criticism was deserved is certainly up for debate.

While we appreciated Harvey’s show-don’t-tell response, the debate it sparked - about the value of smaller verticalized AI companies vs. the more generalist incumbents, and how advances in foundation models may or may not impact the broader ecosystem - was much more interesting.

As one of those smaller verticalized AI companies, one that uses techniques similar to Harvey’s, we fundamentally believe that there will always be a role for specialists, and that they are better suited to solving specific, high-value business problems.

Harvey uses Retrieval-Augmented Generation, or RAG, as a technique to leverage LLMs. We at Halcyon do the same. As we’ve said before, the biggest part of the RAG problem is the R. For customers with very specific information needs (like lawyers researching case law, or energy investors accessing policy documents), getting the right information is far more important than how that information is presented. If you ask those customers what they care about, there’s no debate: r > g - retrieval beats generation.
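To make that concrete, here’s a minimal RAG sketch (the function names and naive term-overlap scoring are purely illustrative, not our actual pipeline or Harvey’s): whatever the retrieval step returns is a hard ceiling on answer quality, while the generation step mostly controls presentation.

```python
# Minimal RAG skeleton, purely illustrative. The generator can only rephrase
# what retrieval hands it: if retrieve() misses the controlling case or policy
# document, no amount of model capability can recover it.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """The 'R': rank documents by naive term overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query: str, context: list[str], llm) -> str:
    """The 'G': any chat-completion callable works here; it is fungible."""
    prompt = "Context:\n" + "\n\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"
    return llm(prompt)

def answer(query: str, corpus: list[str], llm) -> str:
    return generate(query, retrieve(query, corpus), llm)
```

Swap in a better LLM and the answers get more fluent; swap in better retrieval and they get more correct.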

(Now, there is a good debate over the best way to be good at Retrieval. While some companies like Harvey have invested in fine-tuning their own variations of foundation models, we think improving the data going in is far more direct, efficient, and generally higher value than trying to refine that data at the final step. The evaluation systems and datasets required to meaningfully update the weights of a model the size of any current-generation LLM are sophisticated, expensive, and easy to get wrong. At Halcyon, we’ve instead invested in robust data collection and our own search architecture, which does not depend on LLMs.)
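Our actual search architecture is out of scope here, but as one illustration of LLM-free retrieval, classic lexical ranking like BM25 involves no model weights at all: quality improvements come from better documents and better indexing. A sketch using the open-source rank_bm25 package (the corpus contents are invented for illustration):

```python
# LLM-free retrieval sketch using BM25 lexical ranking (pip install rank-bm25).
# No fine-tuning and no eval datasets for weight updates are involved.
from rank_bm25 import BM25Okapi

corpus = [
    "FERC order accelerating interconnection queue reform",
    "Treasury guidance on the clean hydrogen production tax credit",
    "State commission ruling on net metering compensation rates",
]
index = BM25Okapi([doc.lower().split() for doc in corpus])

query = "hydrogen tax credit".split()
print(index.get_top_n(query, corpus, n=2))  # ranked purely lexically
```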

The theory that foundation model updates are all that matters is only true if you’re building a frontend to ChatGPT. If you’ve built a customer service chatbot with a relatively constrained set of responses to draw from, then yes, you will definitely be affected by big advances in foundation models. While some startups do this, you typically see this implementation inside larger corporations with a customer base big enough to need a chatbot or other basic text-generation service.
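To show how thin that layer is, here’s a sketch of such a frontend using the OpenAI Python SDK (the system prompt, company, and model name are invented for illustration). Nearly all of the product’s behavior lives in the foundation model, so every model update flows straight through to the product.

```python
# A thin "frontend to ChatGPT": a system prompt plus a pass-through API call.
# A foundation model update changes this product wholesale, for better or worse.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def support_bot(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a polite customer-support agent for AcmeCo."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```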

Conversely, products and features built by subject-matter experts with deep domain expertise and intimate knowledge of customer problems will be comparatively insulated from these updates. At Halcyon, we’ve spent a lot of time curating our catalog and encoding our institutional knowledge about the energy and broader decarbonization ecosystem into our software. Harvey didn’t get a first-mover advantage in the legal AI space by being first to submit an API request to OpenAI - they got it by being the first to encode their specialized knowledge about the legal space into their products. In both instances, that domain knowledge improves Retrieval by bringing more relevant information into a given response, and it makes up for one of LLMs’ biggest shortcomings: they are currently in a race to the average, and in many products the average is not valuable enough to justify a company.
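What does “encoding domain knowledge” look like in retrieval terms? A hypothetical illustration (not Halcyon’s or Harvey’s implementation; the synonym table, document schema, and scoring are all invented): expert-curated synonyms expand the query, and structured metadata narrows the candidate set before ranking.

```python
# Hypothetical sketch of domain knowledge encoded into retrieval.

DOMAIN_SYNONYMS = {  # curated by subject-matter experts, not learned by a model
    "45V": ["clean hydrogen production credit", "hydrogen PTC"],
}

def expand(query: str) -> list[str]:
    """Add expert-known aliases so jargon in documents still matches."""
    variants = [query]
    for term, aliases in DOMAIN_SYNONYMS.items():
        if term in query:
            variants += [query.replace(term, alias) for alias in aliases]
    return variants

def search(query: str, docs: list[dict], doc_type: str | None = None) -> list[dict]:
    """Filter by curated metadata, then rank against all query variants."""
    candidates = [d for d in docs if doc_type is None or d["type"] == doc_type]
    variants = [set(v.lower().split()) for v in expand(query)]
    return sorted(
        candidates,
        key=lambda d: max(len(v & set(d["text"].lower().split())) for v in variants),
        reverse=True,
    )
```

A generalist model update improves neither the synonym table nor the metadata; that value lives in the specialist’s product.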

That said, it isn’t all sunshine and rainbows in startup land. The LLM providers themselves aren’t just sitting around counting their money. They’ve been absorbing adjacent products: rendering outputs, RAG-based search, file management, and easy external sharing of results. They are well capitalized, highly competent, and will likely subsume many of the verticalized startups (and neither Harvey nor Halcyon is immune) that do not have specialized expertise, specialized data, specialized products, or some combination of the three.

(A potential exception here could be companies willing to get their hands dirty and take on the digital equivalent of cleaning septic tanks. By doing undesirable work that others won’t, they can probably fly under the radar longer - at least until they start making enough money to attract attention.)

And that brings us to the crux of the matter: whether or not a generalist model can compete with a specialist solution is less important — and less valuable — than figuring out exactly what problem to solve, exactly what product to build to help solve it, and exactly how best to monetize that product.

In those areas, history has shown that smaller companies will naturally be more agile, more responsive, and less wedded to existing business models (even if they are less resourced) than larger incumbents.

Comments or questions? We’d love to hear from you - sayhi@halcyon.eco, or find us on LinkedIn and Twitter.