Feed


OpenAI News

https://openai.com/news/rss.xml

The OpenAI blog

Feed Info
[2025-12-18T19:03:06.158Z] Updated feed with 790 items
[https://openai.com/news/rss.xml]
OpenAI Five defeats Dota 2 world champions
OpenAI Five is the first AI to beat the world champions in an esports game, having won two back-to-back games versus the world champion Dota 2 team, OG, at Finals this weekend. Both OpenAI Five and DeepMind’s AlphaStar had previously beaten good pros privately but lost their live pro matches, making this also the first time an AI has beaten esports pros on livestream.
OpenAI News over 6 years ago
OpenAI Five Finals
We’ll be holding our final live event for OpenAI Five at 11:30am PT on April 13.
OpenAI News over 6 years ago
Implicit generation and generalization methods for energy-based models
We’ve made progress towards stable and scalable training of energy-based models (EBMs) resulting in better sample quality and generalization ability than existing models. Generation in EBMs spends more compute to continually refine its answers and doing so can generate samples competitive with GANs at low temperatures, while also having mode coverage guarantees of likelihood-based models. We hope these findings stimulate further research into this promising class of models.
OpenAI News over 6 years ago
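The refinement described in the entry above is typically realised with Langevin dynamics: repeated noisy gradient steps on the energy surface, so spending more steps means spending more compute polishing each sample. Below is a minimal sketch of that sampling loop; the toy energy function, step size, and noise scale are illustrative assumptions, not the paper's actual settings.

    import torch

    def langevin_sample(energy_fn, x, n_steps=60, step_size=10.0, noise_scale=0.005):
        """Refine samples by noisy gradient descent on an energy function.

        More steps means more compute spent refining the same sample, which is
        the refinement behaviour the entry above refers to. Every hyperparameter
        here is an illustrative assumption, not the paper's setting.
        """
        for _ in range(n_steps):
            x = x.detach().requires_grad_(True)
            energy = energy_fn(x).sum()
            grad, = torch.autograd.grad(energy, x)
            # Step downhill on the energy surface, plus Gaussian exploration noise.
            x = x - step_size * grad + noise_scale * torch.randn_like(x)
        return x.detach()

    # Hypothetical usage with a toy quadratic energy standing in for a trained EBM:
    toy_energy = lambda x: (x ** 2).sum(dim=1)
    samples = langevin_sample(toy_energy, torch.randn(16, 2))

Running the loop for more iterations is the "more compute, better samples" trade-off the summary mentions.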
OpenAI Scholars 2019: Meet our Scholars
Our class of eight scholars (out of 550 applicants) brings together collective expertise in literature, philosophy, cell biology, statistics, economics, quantum physics, and business innovation.
OpenAI News almost 7 years ago
OpenAI LP
We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.
OpenAI News almost 7 years ago
Introducing Activation Atlases
We’ve created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between neurons can represent. As AI systems are deployed in increasingly sensitive contexts, having a better understanding of their internal decision-making processes will let us identify weaknesses and investigate failures.
OpenAI News almost 7 years ago
Neural MMO: A massively multiagent game environment
We’re releasing a Neural MMO, a massively multiagent game environment for reinforcement learning agents. Our platform supports a large, variable number of agents within a persistent and open-ended task. The inclusion of many agents and species leads to better exploration, divergent niche formation, and greater overall competence.
OpenAI News almost 7 years ago
Spinning Up in Deep RL: Workshop review
On February 2, we held our first Spinning Up Workshop as part of our new education initiative at OpenAI.
OpenAI News almost 7 years ago
AI safety needs social scientists
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psychology of human rationality, emotion, and biases. The aim of this paper is to spark further collaboration between machine learning and social science researchers, and we plan to hire social scientists to work on this full time at OpenAI.
OpenAI News almost 7 years ago
Better language models and their implications
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.
OpenAI News almost 7 years ago
Computational limitations in robust classification and win-win results
OpenAI News almost 7 years ago
OpenAI Fellows Summer 2018: Final projects
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in the course of a 6-month apprenticeship.
OpenAI News almost 7 years ago
How AI training scales
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized.
OpenAI News about 7 years ago
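As a companion to the entry above, here is a rough sketch of the "simple" gradient noise scale, tr(Σ) / |G|², estimated from per-example gradients. The array shapes, synthetic data, and estimator details are illustrative assumptions rather than OpenAI's implementation.

    import numpy as np

    def simple_noise_scale(per_example_grads):
        """Rough estimate of the simple gradient noise scale tr(Sigma) / |G|^2.

        per_example_grads: shape (num_examples, num_params); each row is one
        example's flattened gradient. Illustrative estimator only.
        """
        mean_grad = per_example_grads.mean(axis=0)             # estimate of the true gradient G
        grad_sq_norm = float(np.sum(mean_grad ** 2))           # |G|^2
        trace_cov = float(per_example_grads.var(axis=0, ddof=1).sum())  # tr(Sigma)
        return trace_cov / grad_sq_norm

    # Hypothetical usage on synthetic gradients: a noisier task yields a larger
    # noise scale, suggesting larger batch sizes remain useful.
    rng = np.random.default_rng(0)
    grads = rng.normal(loc=0.1, scale=1.0, size=(512, 1000))
    print(simple_noise_scale(grads))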
Quantifying generalization in reinforcement learning
We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situations and has already helped clarify a longstanding puzzle in reinforcement learning. CoinRun strikes a desirable balance in complexity: the environment is simpler than traditional platformer games like Sonic the Hedgehog but still poses a worthy generalization challenge for state-of-the-art algorithms.
OpenAI News about 7 years ago
Spinning Up in Deep RL
We’re releasing Spinning Up in Deep RL, an educational resource designed to let anyone learn to become a skilled practitioner in deep reinforcement learning. Spinning Up consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.
OpenAI News about 7 years ago
Learning concepts with energy functions
We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between, closest, and furthest, expressed as sets of 2D points. Our model learns these concepts after only five demonstrations. We also show cross-domain transfer: we use concepts learned in a 2D particle environment to solve tasks on a 3D physics-based robot.
OpenAI News about 7 years ago
Plan online, learn offline: Efficient learning and exploration via model-based control
OpenAI News about 7 years ago
Reinforcement learning with prediction-based rewards
We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on Montezuma’s Revenge.
OpenAI News about 7 years ago
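The curiosity mechanism in RND is simple enough to sketch: a predictor network is trained to match a fixed, randomly initialized target network, and the prediction error on an observation serves as the intrinsic reward, so novel observations earn larger bonuses. The sketch below assumes flat observation vectors; the layer sizes, optimizer, and training schedule are illustrative, not the paper's configuration.

    import torch
    import torch.nn as nn

    obs_dim, feat_dim = 64, 32
    target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
    predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
    for p in target.parameters():
        p.requires_grad_(False)  # the target network stays fixed and random

    opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

    def intrinsic_reward(obs):
        """Per-observation prediction error against the fixed random target.

        Novel observations are predicted poorly and therefore earn a larger
        curiosity bonus; training the predictor makes familiar observations
        stop being rewarded over time.
        """
        with torch.no_grad():
            target_feat = target(obs)
        error = ((predictor(obs) - target_feat) ** 2).mean(dim=1)
        opt.zero_grad()
        error.mean().backward()
        opt.step()
        return error.detach()

    # Hypothetical usage on a batch of random observations:
    print(intrinsic_reward(torch.randn(8, obs_dim)))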
Learning complex goals with iterated amplification
We’re proposing an AI safety technique called iterated amplification that lets us specify complicated behaviors and goals that are beyond human scale, by demonstrating how to decompose a task into simpler sub-tasks, rather than by providing labeled data or a reward function. Although this idea is in its very early stages and we have only completed experiments on simple toy algorithmic domains, we’ve decided to present it in its preliminary state because we think it could prove to be a scalable approach to AI safety.
OpenAI News about 7 years ago
OpenAI Scholars 2019: Applications open
We are now accepting applications for our second cohort of OpenAI Scholars, a program where we provide 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.
OpenAI News about 7 years ago
OpenAI Fellows Winter 2019 & Interns Summer 2019
We are now accepting applications for OpenAI Fellows and Interns for 2019.
OpenAI News about 7 years ago
FFJORD: Free-form continuous dynamics for scalable reversible generative models
OpenAI News about 7 years ago
OpenAI Scholars 2018: Final projects
Our first cohort of OpenAI Scholars has now completed the program.
OpenAI News over 7 years ago
The International 2018: Results
OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20–35 minutes of both games.
OpenAI News over 7 years ago
Large-scale study of curiosity-driven learning
OpenAI News over 7 years ago
OpenAI Five Benchmark: Results
Yesterday, OpenAI Five won a best-of-three against a team of 99.95th percentile Dota players: Blitz, Cap, Fogged, Merlini, and MoonMeander—four of whom have played Dota professionally—in front of a live audience and 100,000 concurrent livestream viewers.
OpenAI News over 7 years ago
Learning dexterity
We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.
OpenAI News over 7 years ago
Variational option discovery algorithms
OpenAI News over 7 years ago
OpenAI Scholars 2018: Meet our Scholars
Our first class of OpenAI Scholars is underway, and you can now follow along as this group of experienced software developers become machine learning practitioners.
OpenAI News over 7 years ago
OpenAI Five Benchmark
The OpenAI Five Benchmark match is now over!
OpenAI News over 7 years ago