# 5 ways to prepare for physical AI, today

## Something shifted at CES in January 2026

You may have noticed something different at this year’s Consumer Electronics Show: humanoid robots were on the factory floor, not on the concept stage. Boston Dynamics showed Atlas performing autonomous tasks at a Hyundai facility. Jensen Huang stood on stage and said the words out loud: “The ChatGPT moment for robotics is here.”

He was not being hyperbolic. He was describing something already happening.

💡 Physical AI, systems that perceive, reason, and act in the real world, is moving from pilot programs to production. The question for organizations is no longer whether to pay attention. It is how to get ahead of it.

Whether you’re ready or not, physical AI is here. The question is: are you ready for the era of the robots?

## Firstly, how is physical AI different from generative AI?

As most of you will already know, generative AI lives in the digital world. It learns from existing text, images, and code. Physical AI has to contend with the messiness of reality, and that often means creating physical training data from scratch through simulation, sensor capture, and real-world interaction. There’s no internet-scale dataset of “how to pick up a fragile object without breaking it.”

And it’s broader than just robots.

💡 Autonomous systems of all kinds fall under this umbrella, from self-driving vehicles to AI-managed logistics networks to intelligent building infrastructure. What they share is the need to make decisions in real time, in environments that don’t stay still.

Thanks to advances in world models, vision-language models, and simulation-to-reality training, modern physical AI systems can reason, adapt, and generalize across environments. They can figure things out, and yes, they can pick up fragile objects without breaking them. That changes things quite a lot…

## Secondly, how does physical AI work?

Physical AI follows a continuous four-step loop:

1. Perceives
2. Reasons
3. Acts
4. Adapts

Sensors and cameras feed the system a real-time picture of its environment. A foundation model, typically a vision-language-action (VLA) model, interprets that input and decides what to do next. The robot or system then acts on that decision, and the outcome feeds back into the loop to improve future behavior.

💡 What makes modern physical AI different from traditional robotics is that last step, adaptation. Older systems followed fixed instructions. Current systems learn from what happens, generalize to new situations, and get better over time. That shift from hard-coded to adaptive is what makes physical AI genuinely new, and genuinely worth paying attention to.

## 5 steps to help prepare for physical AI, today

Yes, the change is big, but the good news? None of this requires you to have a robot on the payroll (just yet).

The better news: starting now puts you ahead of the majority of your competitors. Here are 5 steps you can take to help you prepare for physical AI, starting today.

### 1. Audit your existing stack for physical AI readiness, not just inventory

Most organizations already have robotic systems, but the question is not what you have, it’s what it can reason about. Run a capability audit across three dimensions:

- Perception: what sensors feed the system, and at what fidelity?
- Actuation: hard-coded or adaptive?
- Integration: open APIs or a closed proprietary stack?

The output is a gap map. Sensor-rich but logic-poor systems are prime candidates for a VLA (vision-language-action) model layer on top. Anything that cannot ingest or output structured data is a physical AI blocker.
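As a rough sketch of what that audit output could look like in practice, the gap map might be tracked as structured data. The scoring scheme, field names, and system names below are illustrative assumptions, not a standard:

```python
# A minimal sketch of the three-dimension capability audit.
# Scores and system names are illustrative, not any industry standard.
from dataclasses import dataclass

@dataclass
class SystemAudit:
    name: str
    perception: int   # 0 = no useful sensors, 1 = basic, 2 = rich multi-modal
    actuation: int    # 0 = hard-coded, 1 = parameterized, 2 = adaptive
    integration: int  # 0 = closed proprietary, 1 = partial, 2 = open APIs

    def gaps(self) -> list[str]:
        """Dimensions scored zero, i.e. potential physical AI blockers."""
        scores = {"perception": self.perception,
                  "actuation": self.actuation,
                  "integration": self.integration}
        return [dim for dim, score in scores.items() if score == 0]

    @property
    def vla_candidate(self) -> bool:
        # Sensor-rich but logic-poor: prime candidate for a VLA layer on top.
        return self.perception == 2 and self.actuation == 0

fleet = [
    SystemAudit("palletizer-01", perception=2, actuation=0, integration=1),
    SystemAudit("agv-fleet",     perception=1, actuation=1, integration=0),
]
gap_map = {s.name: s.gaps() for s in fleet}
```

The point is not the scoring scale, it is that the audit produces machine-readable output you can sort, track, and revisit as systems change.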
Flag it now, before it becomes someone else’s emergency later down the line.

### 2. Actually run sim-to-real experiments, don’t just understand the concept

NVIDIA’s Isaac Sim and Isaac Lab are the current standard. Domain randomization (varying lighting, friction coefficients, and object masses) is what forces generalization rather than memorization.

💡 The Cosmos world foundation models go further, generating physically plausible video from action-conditioned prompts and enabling policy evaluation before hardware is involved. The practical workflow: define a policy, randomize aggressively, evaluate with Cosmos rollouts, then deploy to hardware only when the sim success rate plateaus above roughly 85 to 90 percent.

If you don’t have a robotics platform yet, AWS RoboMaker and the Genesis physics engine are accessible entry points.

### 3. Redesign your data architecture around spatial and temporal requirements

Text-centric infrastructure fails physical AI not because of volume, but because of data type and latency profile. The core requirements look different from a standard enterprise stack:

- Sensor fusion from LiDAR, RGB-D cameras, IMUs, and force-torque sensors requires sub-millisecond time-series indexing: InfluxDB or TimescaleDB, not Postgres.
- 3D scene representations need queryable spatial databases, not blob storage.
- Edge inference is non-negotiable. Physical AI cannot tolerate 200 ms cloud round-trips for closed-loop control; you need on-device inference (NVIDIA Jetson Orin or equivalent).

Start by establishing a telemetry pipeline capturing structured logs from the systems you already operate.
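A minimal sketch of that first telemetry step, using only the standard library. The record fields and the JSON-lines sink are illustrative assumptions; a production pipeline would target a time-series store like those named above:

```python
# A sketch of structured telemetry capture, the kind of log that later
# becomes a fine-tuning corpus. Field names and the JSONL sink are
# illustrative; a real pipeline would write to a time-series database.
import io
import json
import time

def make_record(source: str, payload: dict) -> dict:
    """One time-stamped sensor reading as a structured record."""
    return {
        "ts_ns": time.monotonic_ns(),  # nanoseconds: sub-ms ordering matters
        "source": source,              # e.g. "lidar", "rgbd", "imu", "ft"
        "payload": payload,
    }

def write_jsonl(records, sink) -> int:
    """Append records as JSON lines; returns characters written."""
    written = 0
    for rec in records:
        written += sink.write(json.dumps(rec, separators=(",", ":")) + "\n")
    return written

buf = io.StringIO()  # stands in for a file or network sink
records = [
    make_record("imu", {"accel": [0.0, 0.0, 9.81]}),
    make_record("ft",  {"force_z": 4.2}),
]
write_jsonl(records, buf)
```

Even a sketch this simple enforces the two properties that matter: every reading is structured, and every reading is ordered in time.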
That becomes your fine-tuning corpus later. If your architecture is cloud-first for everything, it needs rethinking before deployment.

### 4. Design for human-robot teaming at the architecture level, not the HR level

Let’s be honest: robots are coming for the jobs that involve doing the same thing over and over, 4,000 times, in a cold warehouse.

Effective collaboration requires shared situational awareness, clear authority-handoff protocols, and observable AI reasoning. A robot that fails silently is not a productivity tool; it is a very expensive source of confusion.

Models like OpenVLA and RT-2 can generate a natural-language rationale alongside action outputs. Think of it as giving the robot the ability to say “I stopped because I wasn’t sure about that” rather than just stopping.

Define your graceful-degradation protocol before deployment, not during an incident.

### 5. Build your compliance infrastructure now, while there’s still optionality

The regulatory wave is closer than most teams realize. Key deadlines to have on your radar, depending on your area:

- EU AI Act (Annex III): high-risk classification for most commercial humanoid systems, triggering conformity assessments and mandatory human oversight. Deadline: Q3 2027.
- EU Machinery Regulation: CE marking obligations for collaborative robots.
- US OSHA guidance: autonomous co-worker standards expected in H1 2027.
- ISO 10218 and ISO/TS 15066: the baseline standards regulators are building on.

💡 Appoint a technical safety lead and start your operational risk register. It is required for conformity assessments and nearly impossible to reconstruct retroactively.

Get legal and engineering in the same room before your next deployment decision. Treat compliance as a parallel workstream, not a last-minute audit.

## Conclusion: get ready, but don’t panic

None of this has to happen overnight. But the teams that deploy physical AI confidently from 2027 onward are the ones doing the unglamorous infrastructure work today.
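One last concrete sketch before the recap: the graceful-degradation protocol from step 4 might reduce, at its core, to an authority-handoff rule with an observable rationale. The threshold value and state names here are illustrative assumptions, not taken from any standard:

```python
# A sketch of a graceful-degradation handoff: below a confidence threshold,
# the robot stops, says why, and yields authority to a human. The threshold
# and states are illustrative, not from ISO 10218 or any other standard.
from enum import Enum

class Authority(Enum):
    ROBOT = "robot"
    HUMAN = "human"

def handoff(confidence: float, threshold: float = 0.8):
    """Decide who acts next, and always produce a human-readable rationale."""
    if confidence >= threshold:
        return Authority.ROBOT, "proceeding: confidence above threshold"
    # Fail loudly, not silently: stop and explain, then hand over control.
    return Authority.HUMAN, (
        f"stopped: confidence {confidence:.2f} below {threshold:.2f}, "
        "requesting human takeover"
    )

who, why = handoff(0.55)
```

The design choice worth copying is that the rationale string is produced on every path, so a stopped robot is never a silent one.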
Audit your stack, run your simulations, fix your data architecture, design for humans and robots working together, and get ahead of the regulators. Future you will be grateful.

## Discover more at the Innodata GenAI Summit, May 21st 2026

The Innodata GenAI Summit: The Future of Trustworthy AI: World Models, Physical AI, Agentic Systems takes place on 21 May in London.

- 300+ builders and tech leaders in one room for a full day of practitioner-led sessions
- Four frontier tracks: world models and grounded intelligence, autonomous systems and trust, physical AI and the intelligent edge, and data, evaluation and intelligence infrastructure
- Track 3 is dedicated entirely to physical AI and the intelligent edge
- Zero vendor pitches. Just the people doing the work, talking honestly about what is shipping and what still has a long way to go

Don’t miss your chance to network with the foundational model creators, proprietary builders, and enterprise leaders shaping the future of the AI industry. Discover more