Intelligence Has Never Been This Cheap

The Chinese AI lab DeepSeek recently released two open-source large language models, DeepSeek v3 and R1, that shocked the world.

Notably, the emphasis was less on raw capability than on efficiency: the models match the performance of OpenAI's 4o and o1 while being more than 10 times cheaper to train and deploy.

I expect dramatic declines in costs to trigger a Cambrian explosion of new GenAI applications that start to encroach on traditional software, particularly SaaS products and a wide range of tools for human knowledge work. 

Steep declines in costs will likely shift the focus of the GenAI industry in 2025 to applications, particularly AI agents. Like the early days of the internet, it will be open season for a new generation of GenAI applications to move beyond "chatbots".

DeepSeek spent an estimated $6 million to train DeepSeek v3, compared with an estimated $60-100 million for OpenAI's GPT-4.

The inference cost of DeepSeek v3 is roughly 10x lower than that of the comparable OpenAI 4o. The inference cost of the DeepSeek R1 reasoning model is roughly 27x lower than that of the comparable OpenAI o1, even though R1 adds novel features.

If anything, I expect the curve of improvement to get much steeper.

Further improvement in transformer architecture and training algorithms could lead to another 10-fold improvement. New hardware from Nvidia, Cerebras, and Groq could deliver a comparable boost. Combined, that would be a 100-fold decline in costs per unit of performance.
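A back-of-the-envelope sketch of that compounding (both 10x figures are the assumptions above, and treating the two gains as independent and multiplicative is itself an assumption):

```python
# Sketch: compounding two independent cost reductions.
# Both 10x figures are assumptions from the text, not measured values.
algorithmic_gain = 10  # assumed gain from architecture/training advances
hardware_gain = 10     # assumed comparable gain from new chips

combined_gain = algorithmic_gain * hardware_gain
print(f"Combined decline in cost per unit of performance: {combined_gain}x")
```

If either gain turns out smaller, the combined figure scales down proportionally; the point is that independent improvements multiply rather than add.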

Because the Chinese open-source LLMs can be hosted by any cloud provider, competition among the AI labs that train foundational models will be brutal.

With Meta and Chinese labs such as Alibaba (Qwen) and DeepSeek releasing state-of-the-art open LLMs, I expect labs like Mistral and Cohere will find it hard to remain independent.

The steep decline in LLM costs, combined with rising inference speed, gives SigTech the ability to 1) run multiple versions of a solution in parallel, 2) tackle increasingly complex jobs with large teams of agents at affordable cost, and 3) fine-tune open-source LLMs into domain-specific SigTech models.

We believe these developments will transform research in capital markets. 

The moat in research will come from access to private, proprietary data and analytics.

Any research team that cannot beat this new benchmark will struggle to survive.