OpenAI, a prominent player in the AI landscape, is exploring the possibility of creating its own AI chips, a strategic move prompted by the ongoing chip shortage, which has disrupted the AI model training process. Discussions on AI chip strategies have been ongoing within OpenAI for some time, with options including the acquisition of an AI chip manufacturer or the development of in-house chip design efforts.
CEO Sam Altman has identified the acquisition of additional AI chips as a top priority for OpenAI, recognizing the critical role hardware plays in advancing their AI capabilities.
Currently, like many AI organizations, OpenAI relies on GPU-based hardware for developing models like ChatGPT, GPT-4, and DALL-E 3. GPUs are renowned for their ability to perform parallel computations, making them invaluable for training state-of-the-art AI models. However, the recent boom in generative AI has placed immense strain on the GPU supply chain. Companies like Nvidia, a major GPU manufacturer, are grappling with a supply shortage, with their best-performing AI chips reportedly sold out until 2024.
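For a sense of what that parallelism buys in practice, here is a toy sketch, not anything OpenAI actually runs; PyTorch and the 4096x4096 matrix size are assumptions chosen purely for illustration. It times the kind of dense matrix multiply that dominates model training and falls back to the CPU if no CUDA device is available.

```python
# Illustrative only: a large matrix multiply, the core operation in neural
# network training, parallelizes across thousands of GPU cores.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
c = a @ b                      # one layer's worth of work, done in parallel
if device == "cuda":
    torch.cuda.synchronize()   # wait for the asynchronous GPU kernel to finish
elapsed = time.perf_counter() - start

print(f"4096x4096 matmul on {device}: {elapsed * 1000:.1f} ms")
```

On a data-center GPU this completes in a few milliseconds, while a CPU takes orders of magnitude longer, which is why training frontier models is effectively impossible without large GPU fleets.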
OpenAI also heavily depends on GPUs to run and serve its AI models, utilizing GPU clusters in the cloud to handle customer workloads. However, the cost associated with GPUs is considerable.
Analyst Stacy Rasgon estimates that if ChatGPT queries were to scale to a tenth of Google Search's volume, the service would require an initial investment of roughly $48.1 billion worth of GPUs, plus around $16 billion worth of chips per year to keep running.
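Estimates like this are extremely sensitive to their inputs. The sketch below shows only the shape of such a calculation (query volume, to GPUs needed, to capital outlay); every number in it is a placeholder assumption, not Rasgon's actual inputs, so it will not reproduce the $48.1 billion figure.

```python
# Hedged back-of-envelope sketch: all inputs below are illustrative placeholders.
SEARCHES_PER_DAY = 8.5e9          # assumed daily Google Search volume
CHATGPT_SHARE = 0.10              # scenario from the article: a tenth of that traffic
QUERIES_PER_GPU_SEC = 0.05        # assumed sustained LLM queries one GPU can serve
GPU_UNIT_COST = 30_000            # assumed price of a data-center-class GPU, in USD

queries_per_sec = SEARCHES_PER_DAY * CHATGPT_SHARE / 86_400
gpus_required = queries_per_sec / QUERIES_PER_GPU_SEC
initial_spend = gpus_required * GPU_UNIT_COST

print(f"Sustained load: {queries_per_sec:,.0f} queries/sec")
print(f"GPUs required: {gpus_required:,.0f}")
print(f"Initial GPU spend: ${initial_spend / 1e9:.1f}B")
```

Changing any single assumption (per-query compute, peak-load headroom, whether whole servers rather than bare GPUs are priced) shifts the result by an order of magnitude, which is why published estimates of ChatGPT's hardware bill vary so widely.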
OpenAI is contemplating joining the ranks of tech giants like Google, Amazon, and Microsoft in developing its own AI chips. Google has its Tensor Processing Unit (TPU) for training large generative AI systems, while Amazon offers proprietary chips (Trainium and Inferentia) to its AWS customers for both training and inferencing. Microsoft is reportedly collaborating with AMD to create an in-house AI chip called Athena, a chip that OpenAI is said to be testing.
OpenAI is well-positioned for substantial investment in research and development (R&D). The company, having raised over $11 billion in venture capital, is approaching an annual revenue of $1 billion. Additionally, it is considering a share sale that could potentially elevate its secondary-market valuation to $90 billion, as reported by The Wall Street Journal.
Venturing into the hardware business, especially in AI chips, is notoriously challenging. Several AI chip companies have faced difficulties, including Graphcore, which saw its valuation reduced by $1 billion after a deal with Microsoft fell through. The company subsequently planned job cuts due to a challenging macroeconomic environment. Similarly, Habana Labs, an Intel-owned AI chip company, laid off around 10% of its workforce, and Meta encountered issues with its custom AI chip efforts, leading to the abandonment of some experimental hardware projects.
Developing a custom AI chip, if OpenAI decides to pursue it, is a long-term endeavor that could take years and cost hundreds of millions of dollars annually. The success of such a venture will depend not only on OpenAI's commitment but also on the willingness of its investors, including Microsoft, to support this potentially risky bet.
In conclusion, OpenAI's exploration of AI chip development underscores the critical importance of hardware in advancing AI capabilities. While challenges loom large in this endeavor, OpenAI's significant resources and commitment to innovation make it a formidable player in the evolving AI landscape.