
How Brain Drain Shaped AI Innovations of the Current World

 

We find ourselves at an inflection point in the development of AI technology. The success of AI models such as ChatGPT has stirred up a plethora of emotions across the human condition: excitement, anxiety, fear, curiosity, and anticipation. These emotions are shared not only by the general public, but also by the researchers actively building these powerful machines. 

Along these lines, an open letter signed by prominent AI researchers called for a six-month halt on the development of models larger than GPT-4. Naturally, the proposal was met with vehement opposition. Andrew Ng and Yann LeCun recently came together to discuss why they believed that legislating a pause was the wrong idea.

LeCun, the Chief AI Scientist at Meta AI, made a clear distinction between AI research and the products that emerge from it. While he acknowledges the need to regulate the deployment of AI products, LeCun argues that halting research would only impede progress and serve no meaningful purpose.

“My first reaction to [the letter] is that calling for a delay in research and development smacks me of a new wave of obscurantism,” said LeCun. “Why slow down the progress of knowledge and science? Then, there is the question of products. . . I’m all for regulating products that get in the hands of people. I don’t see the point of regulating research and development. I don’t think that serves any purpose other than reducing the knowledge that we could use to actually make technology better, safer,” he said during the conversation. 

But the question remains: is there really such a sharp distinction between research and industry when it comes to AI? To understand this, it is important to know where the research is happening in the first place. 

 

Industry outpaces academia

The Stanford University AI Index Report 2023 indicates that for-profit AI companies are racing ahead of academia. “Until 2014, most significant machine learning models were released by academia. Since then, AI companies have taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” reads the report. 

Skilled computer scientists around the globe are being lured away from academia by attractive offers from the private sector. Add to this the enticing perks, such as access to diverse datasets, abundant computing resources, and the chance to make a significant impact on millions of people through commercial products, and it becomes clear why AI research is happening in corporations more than in universities.  

This trend, one blog argues, is resulting in a significant brain drain that has already had an impact on research and education. 

Since a small number of corporations have recruited the majority of the top AI researchers, their expertise and knowledge are not being shared with society at large. This is problematic because the concentration of innovation in a few companies forecloses any opportunity to mitigate the substantial disruption and negative consequences that AI could bring. 

 


Profit-making machines

Evidently, corporate research is no charity work. The end goal has always been either to launch new products or to integrate the research into existing ones. DeepMind, for instance, earned its first profits by drawing revenue from research and development carried out for other companies under the Alphabet umbrella, including Google, YouTube and X, the moonshot division. 

Notably, this came just a few months after Google denied DeepMind’s bid to adopt a legal structure akin to a non-profit. Similar was the case with OpenAI, which shifted from a non-profit to a ‘capped-profit’ structure a few years ago to attract capital, alongside granting Microsoft an exclusive licence to GPT-3 and GPT-4 for commercial applications and use cases. 

The remaining non-profits will likely follow suit. AI research is expensive, and only large corporations have the kind of capital needed to keep funding training and compute. 

In a four-year-old video, Sam Altman, the co-founder of OpenAI, emphasised the importance of product-oriented AI research. Stressing the need to get to market as quickly as possible, he says, “You want people to have a bias towards action. Startups, especially in their early days, win by moving very quickly. Initially, you never get as much data as you would like and you never have as much time to deliberate as you would like. And yet, you need people who will act with much less data than they’d like to have and much less certainty. If they act and it doesn’t work, they adapt really quickly”. 

Thus, the idea that we should keep pushing the throttle on research while letting the product people handle regulation seems half-baked. The current state of AI is such that research is heavily inclined towards finding product-market fit.  

 

A force of resistance

In a recent New York Times podcast, Google CEO Sundar Pichai said that OpenAI’s approach of releasing products sooner, in order to give society a chance to understand and adapt, is a “reasonable point”. It is clear that big tech wants to steer AI research in a particular direction, and those making a case for slowing the pace of progress are fighting this “voice of power”. 

In this light, LeCun’s own position as Meta’s Chief AI Scientist needs to be taken into consideration to understand what informs his stance in the current AI discourse.

Yoshua Bengio, one of the open letter’s signatories and widely regarded as a godfather of AI, explained that a year ago he perhaps wouldn’t have signed the letter, but with the arrival of ChatGPT he has witnessed a shift in the attitudes of companies, for whom the “challenge of commercial competition has increased tenfold”. 

LeackStat 2023