
Zeki Data Unveils DeepSeek’s Hiring Trend to Challenge the US’ AI Dominance


DeepSeek is redefining what it means to operate in the world of AI. Unlike traditional AI start-ups chasing aggressive industry applications, DeepSeek functions like an academic research hub. The company prioritises raw academic expertise, handpicking emerging talent fresh from PhD or Master's programmes in computer science, often with little to no applied experience.

This revelation emerged with the release of DeepSeek's R1 model. DeepSeek first made headlines, shook the market and challenged US-led AI dominance on January 20th by proving that groundbreaking AI can be achieved at a fraction of the usual cost. By replicating the results of LLMs created by US tech giants without the need for extensive compute power or advanced AI chips, DeepSeek's R1 model marked a "Moneyball" moment for AI.

What made the difference?  

At the heart of this success lies DeepSeek’s rigorous talent selection and unique approach to work. Zeki Data tracks and evaluates the top 300,000 AI researchers across 10,000 companies globally, including 2,500 experts behind 100+ major Generative AI models. Our rich data tracks the performance, career positioning, reputation and skills of each individual to find undervalued talent and deep tech companies in the market. 

Within our data, we identified 50 researchers actively involved in building DeepSeek’s R1 model. The overall DeepSeek team is small, reportedly around 150 people, and not all are scientists and engineers. So, 50 is a good representative sample of their research talent.  

More research hub than start-up  

DeepSeek’s focus on cultivating young, specialist talent from the ground up ensures a team laser-focused on innovation. Zeki’s data shows that, compared with well-known players like OpenAI and Google DeepMind, DeepSeek employs a higher proportion of PhD graduates with little to no experience in substantive applied research roles in industry. The chart below shows that OpenAI, Google and Google DeepMind hire fewer PhDs with so little experience to work on their LLM models.

DeepSeek’s strategy involves hiring individuals who have demonstrated exceptional capabilities in their degrees at addressing niche optimisation problems for LLMs. Many have won awards as outstanding graduates or built early career visibility through participation in mathematics-based national competitions.   

The majority of DeepSeek’s research talent trained at Peking University. One example is Fuli Luo, DeepSeek’s Principal Researcher, who gained international recognition with eight publications at the prestigious ACL conference. Similarly, Damai Dai, a lead researcher at DeepSeek and another Peking University graduate, stands out as one of the most prolific research authors.

Microsoft’s missed opportunity   

Many of DeepSeek’s key researchers once held connections to Microsoft’s Asia Research programme. Of the 50 identified researchers at DeepSeek, nine had interned at, received funding from, or worked there.

Daya Guo, the lead author of DeepSeek’s paper publishing the R1 model, completed his PhD in 2023 as part of a joint programme between Sun Yat-sen University and Microsoft Asia. 

Similarly, Yu Wu, now leading DeepSeek’s LLM Alignment Team, was awarded a Microsoft Research Asia PhD Fellowship in 2017. Microsoft seemingly missed an opportunity to retain these influential researchers, allowing them to contribute to DeepSeek’s rise.   

Openness and collaboration fuel innovation   

A defining characteristic of DeepSeek’s team is their commitment to transparency and collaboration. Every one of the 50 identified researchers in our data has chosen to publish their findings on specialist academic publication platforms where their research is showcased internationally. This aligns with DeepSeek’s operational philosophy – to attract and retain the best emerging talent who do not want to work behind closed doors.   

Such transparency distinguishes DeepSeek’s talent pool from our broader dataset, where only 50% of researchers who work on LLMs follow this practice. 

The efficiency edge 

It’s crucial to note that while DeepSeek’s R1 model represents a significant leap in terms of cost and efficiency, the achievement is not a breakthrough in LLM capability itself.

Instead, Google and Google DeepMind retain the upper hand when it comes to raw talent horsepower and accelerating innovations in AI. Google has a long-standing reputation for building data infrastructure and compute at scale with great efficiency. Google and Google DeepMind hold the majority share of all talent outside China with direct experience in building new LLM models. We see a strong historical correlation between the pace of new model releases and the ability of LLM teams to acquire new talent.

DeepSeek’s innovation shows how existing processes can be refined and executed more efficiently, but future game-changing capabilities may still come from organisations with greater investment in infrastructure and resources.  

Nevertheless, DeepSeek’s success puts significant pressure on emerging players like Mistral AI, which finds itself squeezed between the resources of US tech giants and DeepSeek’s efficient, cost-effective processes. 

Geopolitical ripple effects   

DeepSeek’s innovation does more than redefine competition: it has geopolitical implications for the AI landscape. Which countries will be motivated to develop their own national LLMs, now that DeepSeek has demonstrated it costs little more than a rounding error in a national budget? The only thing standing in their way is finding the right talent.

China already has a deep pool of skilled talent to rely on. But today, the global AI talent market isn’t just driven by the US or China. Wherever Indian-trained innovators go, innovation tends to follow. 

India is especially well-poised to take advantage of this shift: it has grown its share of global AI talent from 4% to 8% in just over a decade, with an inflection point in 2018. It now has the talent to go its own way on LLMs.

Other countries could also hire Indian-trained talent to help meet their national LLM ambitions. Indian-trained PhDs in AI are the most mobile and in demand globally. 44% now work outside India, with 225 founding AI companies in their host countries. For example, the United Arab Emirates is increasingly attracting highly skilled Indian talent. 
