OpenAI News
https://openai.com/news/rss.xml
The OpenAI blog
Feed Info
[2025-12-18T08:23:03.138Z] Updated feed with 784 items
OpenAI Residency
As part of our effort to support and develop AI talent, we’re excited to announce the OpenAI Residency.
OpenAI’s API now available with no waitlist
Wider availability made possible by safety progress.
Solving math word problems
We’ve trained a system that solves grade school math problems with nearly twice the accuracy of a fine-tuned GPT-3 model. It solves about 90% as many problems as real kids: a small sample of 9- to 12-year-olds scored 60% on a test from our dataset, while our system scored 55% on those same problems.
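For reference, the “about 90%” figure follows directly from the two scores quoted above; a quick check:

```python
# Quick arithmetic check of the "about 90% as many problems" claim above.
kid_score = 0.60      # small sample of 9- to 12-year-olds on the test set
system_score = 0.55   # our system on the same problems
print(f"{system_score / kid_score:.0%}")  # ≈ 92%, i.e. roughly 90% as many problems solved
```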
Summarizing books with human feedback
Scaling human oversight of AI systems for tasks that are difficult to evaluate.
TruthfulQA: Measuring how models mimic human falsehoods
Helen Toner joins OpenAI’s board of directors
Today, we’re excited to announce the appointment of Helen Toner to our board of directors.
OpenAI Codex
We’ve created an improved version of OpenAI Codex, our AI system that translates natural language to code, and we are releasing it through our API in private beta starting today.
Introducing Triton: Open-source GPU programming for neural networks
We’re releasing Triton 1.0, an open-source, Python-like programming language that enables researchers with no CUDA experience to write highly efficient GPU code, most of the time on par with what an expert could produce.
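As a rough illustration of what Triton code looks like, here is a minimal vector-addition kernel in the style of Triton’s introductory tutorial; treat it as a sketch rather than a drop-in snippet, since exact function names and launch syntax can vary between Triton releases.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Launch one program per 1024-element block over CUDA tensors.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```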
Evaluating large language models trained on code
Improving language model behavior by training on a curated dataset
Our latest research finds we can improve language model behavior with respect to specific behavioral values by fine-tuning on a small, curated dataset.
OpenAI Scholars 2021: Final projects
We’re proud to announce that the 2021 class of OpenAI Scholars has completed our six-month mentorship program and, with stipends and support from OpenAI, produced open-source research projects.
Will Hurd joins OpenAI’s board of directors
OpenAI is committed to developing general-purpose artificial intelligence that benefits all humanity, and we believe that achieving our goal requires expertise in public policy as well as technology. So, we’re delighted to announce that Congressman Will Hurd has joined our board of directors.
GPT-3 powers the next generation of apps
Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.
Multimodal neurons in artificial neural networks
We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.
Understanding the capabilities, limitations, and societal impact of large language models
Scaling Kubernetes to 7,500 nodes
We’ve scaled Kubernetes clusters to 7,500 nodes, producing scalable infrastructure not only for large models like GPT-3, CLIP, and DALL·E, but also for rapid, small-scale iterative research such as Scaling Laws for Neural Language Models.
DALL·E: Creating images from text
We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
CLIP: Connecting text and images
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
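A minimal sketch of that zero-shot usage with the open-source CLIP package is shown below; the image path and category names are placeholders, and the loading API may differ slightly between releases.

```python
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# The "classifier" is nothing more than a list of category names.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a bird"]  # hypothetical categories
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)   # hypothetical image path
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # image-text similarity scores
    probs = logits_per_image.softmax(dim=-1)   # normalize into class probabilities

print(dict(zip(labels, probs[0].tolist())))
```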
Organizational update from OpenAI
It’s been a year of dramatic change and growth at OpenAI.
OpenAI licenses GPT-3 technology to Microsoft
OpenAI has agreed to license GPT-3 to Microsoft for their own products and services.
Generative language modeling for automated theorem proving
Learning to summarize with human feedback
We’ve applied reinforcement learning from human feedback to train language models that are better at summarization.
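At the core of this approach is a reward model trained on human comparisons between pairs of summaries; a minimal sketch of the standard pairwise loss for such a model (our own notation, not code from the paper) looks like this:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_preferred: torch.Tensor, r_other: torch.Tensor) -> torch.Tensor:
    """Pairwise comparison loss for a learned reward model.

    r_preferred and r_other are the scalar rewards the model assigns to the
    summary a human labeler preferred and to the rejected alternative, for the
    same source post. Minimizing this loss pushes preferred summaries above
    rejected ones.
    """
    return -F.logsigmoid(r_preferred - r_other).mean()
```

The summarization policy is then fine-tuned with reinforcement learning against this learned reward.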
OpenAI Scholars 2020: Final projects
Our third class of OpenAI Scholars presented their final projects at a virtual Demo Day, showcasing the research they produced over the past five months.
Procgen and MineRL Competitions
We’re excited to announce that OpenAI is co-organizing two NeurIPS 2020 competitions with AIcrowd, Carnegie Mellon University, and DeepMind, using Procgen Benchmark and MineRL.
Image GPT
We find that, just as a large transformer model trained on language can generate coherent text, the exact same model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting.
OpenAI API
We’re releasing an API for accessing new AI models developed by OpenAI.
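For illustration, a completion request against the original API looked roughly like the sketch below (early v0.x Python client; the key and prompt are placeholders, and the client interface has changed since):

```python
import openai   # early openai-python (v0.x) client, as released alongside the API

openai.api_key = "sk-..."   # placeholder API key

# Ask the Completion endpoint to continue a prompt with a base GPT-3 engine.
response = openai.Completion.create(
    engine="davinci",
    prompt="Once upon a time, an API for large language models",
    max_tokens=32,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```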
Language models are few-shot learners
AI and efficiency
We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44x less compute (about 1/44th) to train a neural network to AlexNet-level performance; by contrast, Moore’s Law alone would yield only an 11x cost improvement over this period. Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
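The headline numbers are mutually consistent; a quick back-of-the-envelope check, assuming the span runs from AlexNet (2012) to roughly 2019 (about 84 months):

```python
import math

months = 84                            # assumed ~7-year span, 2012 to ~2019
total_speedup = 44                     # reported algorithmic efficiency gain
doublings = math.log2(total_speedup)   # ≈ 5.5 doublings of efficiency
print(months / doublings)              # ≈ 15.4 months per doubling ("2x every ~16 months")

# Hardware-only baseline: Moore's-Law doubling every ~24 months over the same span.
print(2 ** (months / 24))              # ≈ 11x, matching the quoted figure
```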
Jukebox
We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples.
Improving verifiability in AI development
We’ve contributed to a multi-stakeholder report by 58 co-authors at 30 organizations, including the Centre for the Future of Intelligence, Mila, Schwartz Reisman Institute for Technology and Society, Center for Advanced Study in the Behavioral Sciences, and Center for Security and Emerging Technologies. This report describes 10 mechanisms to improve the verifiability of claims made about AI systems. Developers can use these tools to provide evidence that AI systems are safe, secure, fair, or privacy-preserving. Users, policymakers, and civil society can use these tools to evaluate AI development processes.