The Next Phase: OpenAI CEO Signals Shift from Giant AI Models

The astonishing capabilities of ChatGPT, the chatbot developed by the startup OpenAI, have set off a surge of interest and investment in artificial intelligence. But last weekend, the company's CEO, Sam Altman, issued a warning: the research strategy that produced the chatbot has run its course. Where future advances will come from remains unclear.

In recent years, OpenAI has achieved impressive breakthroughs in AI language models by taking existing machine learning algorithms and scaling them to previously unimagined size. Training its latest model, GPT-4, likely involved trillions of words of text and thousands of powerful computer chips, at a cost of over $100 million.

However, Altman says that further progress will not come from making models ever bigger. Speaking at an event at the Massachusetts Institute of Technology (MIT) last weekend, he told attendees, "I believe we are at the end of the era of these giant models. We will improve them in other ways."

Altman's assertion marks an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT last November, Microsoft has used the underlying technology to add chatbot functionality to its Bing search engine, and Google has introduced a rival chatbot called Bard. Many people have rushed to try these new-generation chatbots for work and personal tasks.

Meanwhile, numerous well-funded startups are pouring resources into building ever larger algorithms in an effort to catch up with OpenAI. The initial version of ChatGPT was based on a slightly upgraded GPT-3, and users can now access a version powered by the more capable GPT-4.

Altman's statement suggests that GPT-4 could be the last major advance to emerge from OpenAI's strategy of making models bigger and feeding them more data. He did not say what research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns from scaling up model size, and Altman noted that there are also physical limits to how many data centers the company can build and how quickly it can build them.
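
The "diminishing returns" claim can be made concrete with the power-law scaling curves reported in the research literature on neural language models. The Python sketch below uses invented constants purely to show the shape of such a curve; they are illustrative assumptions, not OpenAI's actual estimates:

```python
# Illustrative power-law scaling curve: loss(N) = a * N**-b + c, the
# functional form reported in neural scaling-law studies. The constants
# a, b, c below are made up for demonstration, not fitted to real data.
a, b, c = 10.0, 0.07, 1.7

def loss(num_parameters: float) -> float:
    return a * num_parameters ** -b + c

# Each 10x jump in parameter count buys a smaller absolute drop in loss.
for n in [1.5e9, 1.5e10, 1.5e11, 1.5e12]:
    print(f"{n:.1e} parameters -> loss {loss(n):.3f}")
```

Under a curve of this shape, every tenfold increase in parameters yields a smaller improvement than the last, which is one way to read Altman's point that simply building bigger models eventually runs into a wall.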

Nick Frosst, a co-founder of the Canadian startup Cohere who previously worked on AI at Google, says Altman's view that scaling up models will not work indefinitely rings true. He also believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling.

Frosst said, "There are many ways to make transformers better and more useful, and many of them are unrelated to scaling up models." New model designs or architectures, as well as further fine-tuning based on human feedback, are promising directions that many researchers are already exploring, he said.
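
As a concrete illustration of the human-feedback direction, here is a minimal sketch of the reward-modeling step used in preference-based fine-tuning (the approach popularized as RLHF). The RewardModel class, the embedding size, and the random inputs are all hypothetical placeholders; this is not code from OpenAI or Cohere:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Hypothetical reward head: maps a text embedding to a scalar score."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.score(embedding).squeeze(-1)

def preference_loss(chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry) loss: push the score of the response
    # humans preferred above the score of the one they rejected.
    return -F.logsigmoid(chosen - rejected).mean()

model = RewardModel()
# Random vectors stand in for embeddings of real model responses.
chosen_scores = model(torch.randn(4, 768))    # human-preferred answers
rejected_scores = model(torch.randn(4, 768))  # answers humans rejected
loss = preference_loss(chosen_scores, rejected_scores)
loss.backward()  # gradients would drive a real training loop
```

A reward model trained this way can score candidate outputs, and the language model is then fine-tuned to produce responses the reward model rates highly.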

Each version of OpenAI's influential family of language models, GPT-4 included, is built on artificial neural networks that, once trained, predict the words that should follow a given piece of text.

GPT-2, an earlier model in the series released in 2019, had up to 1.5 billion parameters. It was already significantly larger than earlier systems, in part because OpenAI's researchers had found that scaling up made the model more coherent.
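
Because OpenAI eventually released GPT-2's weights publicly, this next-word machinery is easy to inspect directly. The following sketch assumes the third-party Hugging Face transformers library (not mentioned in the article) and prints the model's top candidates for the next token:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the publicly released GPT-2 weights and matching tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The logits at the final position score every vocabulary token as a
# candidate continuation; show the five highest-scoring ones.
top = torch.topk(logits[0, -1], k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), f"{float(score):.2f}")
```

Sampling repeatedly from these predictions, one token at a time, is all that "generating text" amounts to in these systems.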

GPT-3, introduced in 2020, was far larger still, with a staggering 175 billion parameters. Its remarkable ability to generate poetry, emails, and other text persuaded other companies and research institutions to push their own AI models to similar and even greater sizes.

When ChatGPT was unveiled last November, many in the tech world speculated that GPT-4, whenever it arrived, would be a model of mind-boggling size and complexity. But when OpenAI finally announced the new model, the company did not disclose its exact size, perhaps because size no longer matters as much.
