1X, the robotics company behind the Neo humanoid robot, has unveiled a new AI model that it says understands the dynamics of the real world and can help its robots learn new tasks on their own.
This physics-based model, called 1X World Model, uses a combination of video and prompts to give Neo robots new abilities. According to 1X, the video data lets the robots learn tasks they weren't previously trained on.
The release comes as 1X gears up to put its Neo humanoids into homes. The company opened preorders in October, with plans to ship the bots this year. A 1X spokesperson declined to share a shipping timeline or say how many units have been ordered, beyond noting that preorders exceeded expectations.
“After years of developing our world model and making Neo’s design as close to human as possible, Neo can now learn from internet-scale video and apply that knowledge directly to the physical world,” Bernt Børnich, founder and CEO of 1X, said in a statement. “With the ability to transform any prompt into new actions — even without prior examples — this marks the starting point of Neo’s ability to teach itself to master nearly anything you could think to ask.”
Saying that the bot can transform any prompt into a new action is a lofty claim and not entirely accurate; telling a Neo to drive a car won’t make it suddenly know how to parallel park, for instance. But there is some learning going on.
1X isn’t claiming the world model lets today’s Neo bots perform a new task immediately after capturing video and receiving a prompt, a company spokesperson clarified. Instead, each bot collects video data linked to specific prompts and sends it back into the world model. The updated model is then pushed back out to the network of bots, giving them a better understanding of the physical world and more know-how.
The world model also gives users insight into how Neo plans to behave or react to a given prompt. That kind of behavioral information could help 1X train these models to the point where the robots can respond to prompts for tasks they’ve never performed before.
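To make that loop easier to picture, here is a minimal, hypothetical sketch of a fleet-learning pipeline of the kind the article describes. None of the names below (Episode, WorldModel, Robot, fleet_learning_round) come from 1X, and the real system would involve large-scale model training rather than the placeholder logic shown; the sketch only illustrates the flow: robots log prompt-linked video, the world model is updated on that data, and the updated model is distributed back to the fleet.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data structures; 1X has not published its actual pipeline.

@dataclass
class Episode:
    """A video clip recorded by a robot, linked to the prompt that produced it."""
    prompt: str
    video_frames: List[bytes]

@dataclass
class WorldModel:
    """Stand-in for the learned world model."""
    version: int = 0
    training_episodes: List[Episode] = field(default_factory=list)

    def update(self, new_episodes: List[Episode]) -> "WorldModel":
        # In reality this would be a large-scale training run; here we just
        # accumulate the new data and bump the model version.
        return WorldModel(
            version=self.version + 1,
            training_episodes=self.training_episodes + new_episodes,
        )

@dataclass
class Robot:
    """A single Neo-like robot in the fleet."""
    robot_id: str
    model: WorldModel
    logged_episodes: List[Episode] = field(default_factory=list)

    def attempt_task(self, prompt: str) -> Episode:
        # The robot acts on a prompt and records video of the attempt.
        episode = Episode(prompt=prompt, video_frames=[b"<frame>"])
        self.logged_episodes.append(episode)
        return episode

def fleet_learning_round(robots: List[Robot], model: WorldModel) -> WorldModel:
    """One round of the described loop: collect prompt-linked video from the
    robots, retrain the world model, then push the update to every robot."""
    collected: List[Episode] = []
    for robot in robots:
        collected.extend(robot.logged_episodes)
        robot.logged_episodes = []
    updated = model.update(collected)
    for robot in robots:
        robot.model = updated
    return updated

if __name__ == "__main__":
    model = WorldModel()
    fleet = [Robot(robot_id=f"neo-{i}", model=model) for i in range(3)]
    for robot in fleet:
        robot.attempt_task("load the dishwasher")
    model = fleet_learning_round(fleet, model)
    print(f"model v{model.version}, trained on {len(model.training_episodes)} episodes")
```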
Original Source: https://techcrunch.com/2026/01/13/neo-humanoid-maker-1x-releases-world-model-to-help-bots-learn-what-they-see/
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
