Dystopic - The Machine Age Part 1 – Introduction to Physical AI


November 24, 2025

Dystopic Newsletter

The Machine Age Part 1 – Introduction to Physical AI

Agility Robotics warehouse logistics testing at Amazon

You can watch Agility Robotics at work on an Amazon Warehouse floor in the following video clip HERE

The Robots Are Coming

The media is saturated with coverage of the workplace impact of Large Language Model (LLM) AI from industry titans like OpenAI, Anthropic, and Google, to name a few. The pace of development of LLM AI's written/spoken language capabilities and symbolic knowledge is breathtaking. According to the International AI Safety Report, AI has surpassed humans in areas ranging from image and language interpretation/understanding to computer coding and general problem solving.

AI’s progress is not slowing down. Gordon Moore, co-founder of Intel, observed in a 1965 article for Electronics magazine that the number of transistors on an integrated circuit was doubling at a steady rate – a cadence later pegged at roughly every two years and known as “Moore's Law.” For 60+ years, Moore’s prediction has largely held true, and, by extension, the computing power available to AI keeps growing in parallel – this is not sensationalism; it is a sobering fact. Our human intellect and capabilities evolved over hundreds of thousands of years. The capabilities of our creation, AI, double roughly every two years.
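To put that doubling claim in concrete terms, here is a minimal back-of-the-envelope sketch in Python; the two-year doubling period and the time horizons are illustrative assumptions, not figures from the report:

```python
# Back-of-the-envelope growth under a fixed doubling period.
# The 2-year period and the horizons below are illustrative assumptions.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative growth after `years`, given the doubling period."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for horizon in (2, 10, 20):
        print(f"{horizon:>2} years -> ~{growth_factor(horizon):,.0f}x the starting capability")
    # 2 years -> ~2x, 10 years -> ~32x, 20 years -> ~1,024x
```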

As breathtaking as the pace of LLM AI continues to be, that is not the entire story. There is another entire branch of AI, Physical AI, that is quietly undergoing an equivalent explosive growth curve.

Physical AI is the driving force behind solutions spanning self-driving taxis, military drone swarms, missile defense, industrial and humanoid robotics, augmented reality, and many more. If you think LLM AI is going to have an impact on your life – “you ain’t seen nothing yet!”

The opening image of this newsletter shows Agility Robotics’ warehouse logistics robots being tested at an Amazon warehouse in early 202

The Agility Robotics example is a simple logistics humanoid robot executing limited tasks in a factory/warehouse environment, based on 2023 technology levels. The latest generation of advanced humanoid robots, like Tesla’s Optimus Gen 3, which is slated for release next year, is a quantum leap beyond these basic Agility Robotics units. Optimus has human-like hands covered in an artificial sensing “skin” that can feel pressure, temperature, and surface roughness, just like your own hand. That is just one of the many innovations in the latest generation of robots. The following Tesla video illustrates how quickly progress is being made in real-time robotic sensing and action (view video HERE).

This is the first newsletter in a series focusing on Physical AI and humanoid robotics. Along the way, we will touch on a breadth of non-humanoid robot applications and examples, as aspects of Physical AI apply to many alternative use cases. This first newsletter is an introduction to Physical AI and a look at the Vision-Language-Action (VLA) model at the heart of the robotics revolution already underway. The second newsletter in the series takes a deep dive into the design, components, and costs of a humanoid robot. The final newsletter in the series will look at the deployment of Physical AI and humanoid robotics, business models, and their social impact.

We are truly entering “The Machine Age,” … given that it is Dystopic … let's take a look at the Technology Behind Today’s AI News.

Physical AI - Different yet Interdependent with LLM AI

"We’re never going to get to human-level intelligence by just training on text.”
- Yann LeCun, Chief AI Scientist, Meta, and one of the founding figures of modern deep learning

LeCun was very serious when he made this statement at the MIT Generative AI Impact Symposium in October 2025. At the same conference, he also showed a degree of pessimism about humanoid robots when he stated:

“First of all, we’re missing something big that we need AI systems to learn from natural, high-bandwidth sensory data like video. We’re never going to get to human-level intelligence by just training on text. … A four-year-old has seen as much data through vision as the biggest LLMs trained on all the publicly available text.”

Yann LeCun is resigning from Meta and is rumored to be founding a Physical AI startup. He may be a bit late, and his pessimism may prove wrong.

As it turns out, Tesla’s self-driving car and robotaxi technology relies solely on video cameras to build AI World Models (aka Physical AI models) of its surroundings and to interpret every element in view: roads, buildings, pedestrians, and vehicles of every kind. It would appear that Yann LeCun has failed to appreciate the decades of development and the millions of miles of driving training data that this field work has already produced.

So what is Physical AI?

Here is the textbook definition of Physical AI:

“Physical/World Model AI refers to systems that learn internal models of how the physical world works to reason, predict, and interact with it. These models go beyond simple data processing by understanding physics, cause-and-effect, and 3D geometry. Physical AI specifically refers to the engineering and implementation of these models to perform tasks in the real world, such as self-driving cars, robotics, and optimizing manufacturing.”

Physical AI is defined by the following features:

World Models: Much like a blind person who holds a mental picture of their home’s layout and furniture placement so they can move about, Physical AI creates an internal model of its surroundings (i.e., its world) to simulate and predict outcomes.

Physical-world understanding and physical limits: Unlike LLMs that rely solely on text, world models are trained on diverse data, such as video, images, and sensor data, to capture the complexities of physical reality. They can learn the principles of physics, cause-and-effect, and 3D geometry.

Real-time observation and reasoning: Unconsciously, humans make hundreds of visual, auditory, temperature, pressure, and balance measurements every 20 to 30 milliseconds as we go about our day. Physical AI must make some or all of these measurements just as fast, or faster, to function in the real world. It took Moore's Law (raw computing capability) and over a decade of training for self-driving cars to operate safely in real time.

Physical AI applications: Practical applications of world models for interacting with the physical environment. Examples include:

  • Robotics: Robots performing complex tasks like laundry, folding clothes, or assembling products.
  • Autonomous systems: Self-driving cars that can navigate and react to real-world conditions. Drone and missile defense systems are also evolving toward autonomy because reaction times are too short for “human in the loop” decision making.
  • Intelligent automation: AI that monitors and optimizes manufacturing processes or supply chains.

Reasoning and planning: By building an internal model of the world, AI can plan and make informed decisions. A robot on a factory floor must not only navigate to a destination, but also avoid humans, equipment, or other robots it may encounter in real time along the way. It must plan, adapt, and replan over and over to meet its objectives.
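To make the real-time requirement concrete, here is a minimal sense-plan-act loop sketched in Python. The 20 ms tick, the sensor fields, and the action names are all illustrative assumptions, not any robot vendor's API:

```python
import time

TICK_SECONDS = 0.020  # ~20 ms per cycle, matching the human-scale cadence above (assumed)

def read_sensors() -> dict:
    """Placeholder: a real robot would sample cameras, IMU, force sensors, etc."""
    return {"obstacle_ahead": False, "balance_error": 0.0}

def update_world_model(world: dict, sensors: dict) -> dict:
    """Fold the latest observations into the robot's internal world model."""
    world.update(sensors)
    return world

def plan_action(world: dict, goal: str) -> str:
    """Re-plan every tick: avoid obstacles first, otherwise keep moving toward the goal."""
    if world.get("obstacle_ahead"):
        return "stop_and_replan"
    return f"step_toward:{goal}"

def actuate(action: str) -> None:
    """Placeholder for motor/actuator commands."""
    pass

def control_loop(goal: str, ticks: int = 50) -> None:
    world: dict = {}
    for _ in range(ticks):
        start = time.monotonic()
        world = update_world_model(world, read_sensors())
        actuate(plan_action(world, goal))
        # Sleep only for whatever is left of the tick, keeping the loop real-time.
        time.sleep(max(0.0, TICK_SECONDS - (time.monotonic() - start)))

if __name__ == "__main__":
    control_loop("loading_dock")
```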

We are all familiar with ChatGPT, Anthropic's Claude, Grok, or other AI platforms built on LLMs. A look at the differences between Large Language Model (LLM) AI and Physical AI may give a better perspective on this emerging technology.

Watch the fascinating video HERE

Read Physical Intelligence (π)’s blog and watch additional video HERE

How does it differ from LLM AI?

Conceptually, LLM and Physical AI are quite different. LLMs learn from text, static images, or speech converted to text, building statistical relationships among language’s symbolic patterns. Physical AI learns from the real world around it and the limitations imposed by physics to build causal and predictive simulations of how the physical world works.

The following table breaks down the differences a bit further:

Applications of Physical AI

Elements of Physical AI are the driving force behind a number of applications, including robotics, augmented reality, autonomous transport, and military applications such as drone swarming and missile defense. Let’s take a closer look at each application area.

Augmented Reality: AR glasses and goggles like Apple's Vision Pro (https://www.apple.com/apple-vision-pro/) make extensive use of forward-facing stereo video cameras to recognize and track objects in their field of view. These intelligent vision capabilities allow AR glasses to accept user input via virtual typing and to support navigation via hand gestures and eye movements, using outward- and inward-facing cameras. AR uses a subset of Physical AI focused on vision, spatial awareness, and object recognition.

Military Applications: the Ukraine-Russia war has driven the evolution of sea, land, and air drones for combat. Early in the war, human “operators” controlled every aspect of a drone mission. As Russia began using Radio and GPS jamming to counter Ukraine's moves, the Ukrainians began adding greater and greater autonomy to their drones. To ensure missions could survive losses and overwhelm Russian defenses, Ukraine rolled out intelligent drone swarms to handle complex missions autonomously.

Today, Physical AI systems like defense start-up Anduril’s Lattice software platform provide an AI-powered command-and-control system, camera, and RF detection sensors to perform missions with both human oversight and high levels of autonomy. Drone operators can initiate a mission with a single button press, while the drone handles the complex tasks of flight and engagement. Physical AI like Lattice autonomously navigates to a target, identifies it, and then provides the visual data for the operator to confirm the kill/intercept. The drone can also be set to execute the kill chain automatically – Understand, Decide, Act.

Military drones and the US Air Force's “Loyal Wingman” Collaborative Combat Aircraft (CCA) represent a full embodiment of military Physical AI. Reconnaissance, early warning, and missile defense systems already use AI extensively to assist or even replace human-in-the-loop decision-making in time-critical situations. The newly announced Golden Dome space-based missile defense system will have to use Physical AI extensively to counter an attack in the limited time available.

Autonomous Vehicles: The technology has arrived, is mature, and is safe. Waymo, Xpeng, Tesla, and a host of other start-ups and major car companies have developed self-driving Physical AI platforms. Just as important, these platforms are actually safer than human drivers, as extensive Waymo data shows. Living in Austin, I’ve used Waymo several times – it's truly amazing.

Robotics: This is the fundamental implementation of Physical AI. Today’s industrial robots, pre-programmed for narrow tasks like robotic arms in auto manufacturing or package sorting in warehouses, are being replaced by more general-purpose robotics. Given that the world’s infrastructure has been designed for humans who walk on two legs and have two arms ending in highly dexterous, five-fingered hands, it is no wonder that the focus is on general-purpose humanoid robots – a single mass-produced robot design able to do any work a human can do, and more, is the obvious and economical solution.

Now that we understand the applications, how exactly does Physical AI work?

Physical AI Model: VLA (Vision-Language-Action)

Without thinking about it, many of us have been using purpose-built, single-function robotic vacuums from companies like iRobot (Roomba) and Shark. These primitive robots are armed with a pressure sensor to detect collisions with objects like walls or furniture, and IR LED sensors to detect the edges of steps (and avoid a fatal tumble down those steps). Through random motion, and by using its Wi-Fi connection to gauge distance from its home base, your simple robot sweeper builds a world view of the room(s) it cleans, as shown in the accompanying diagram.

The robot vacuum example is instructive to gain insight into how Physical AI works.

  • VISION: The IR and pressure sensors are sampled in real time to navigate the room being cleaned.
  • LANGUAGE: The latest app for my robotic vacuum lets me issue voice commands telling the unit to “Clean,” “Pause,” or “Stop.”
  • ACTION: Commands are turned into actions, such as cleaning the room or returning to the dock. The internal software and sensors map out the room, determine which spaces cannot be cleaned, and keep the robot vacuuming until the room is both mapped and every sweepable area has been swept (a toy code sketch of this loop follows below).
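Here is that toy vision-language-action loop for a robot vacuum sketched in Python. Every sensor field, command name, and behavior is an illustrative assumption; this is not the app or firmware API of any real vacuum:

```python
# Toy vision-language-action loop for a robot vacuum.
# Sensor fields, commands, and behavior are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class VacuumState:
    cleaning: bool = False
    visited: set = field(default_factory=set)   # crude "world model" of swept cells
    position: tuple = (0, 0)

def vision(bump: bool, cliff: bool) -> str:
    """VISION: interpret the pressure (bump) and IR cliff sensors."""
    if cliff:
        return "edge_detected"
    if bump:
        return "obstacle_detected"
    return "clear"

def language(spoken: str) -> str:
    """LANGUAGE: map a voice command onto one of the unit's supported actions."""
    return {"clean": "CLEAN", "pause": "PAUSE", "stop": "STOP"}.get(spoken.lower(), "IGNORE")

def action(state: VacuumState, command: str, perception: str) -> VacuumState:
    """ACTION: update behavior from the command and the current perception."""
    if command == "STOP":
        state.cleaning = False
    elif command == "CLEAN":
        state.cleaning = True
    if state.cleaning and perception == "clear":
        x, y = state.position
        state.position = (x + 1, y)           # toy forward motion
        state.visited.add(state.position)     # grow the map of swept cells
    return state

if __name__ == "__main__":
    s = VacuumState()
    s = action(s, language("Clean"), vision(bump=False, cliff=False))
    print(s.position, len(s.visited))  # (1, 0) 1
```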

Vision-Language-Action (VLA) models are at the heart of Physical AI. By unifying vision, language, and action data at scale, VLA models aim to learn policies that generalise across diverse tasks, objects, embodiments, and environments. This generalisation capability is expected to enable robots to solve novel downstream tasks with minimal or no additional task-specific data, facilitating more flexible and scalable real-world deployment.

The following diagram, from the IEEE survey A Survey on Vision-Language-Action Models for Embodied AI, illustrates the architectural elements of a VLA model (a schematic code sketch follows the list):

  • Language Encoder: recognizes voice command instructions, making use of trained LLM AI already in wide use today (ChatGPT, etc.).
  • Vision Encoder: on a moment-to-moment, real-time basis, interprets where the robot is in its physical environment and the balance and position of its arms, legs, hands, and fingers relative to the objects around it. This is the robot's current State. Visual training allows the robot to recognize objects and build a World Model.
  • Action Decoder: converts commands into a series of tasks that execute the Instruction provided by the Language Encoder, set in the context the Vision Encoder has created in the World View of the robot's surroundings. The action decoder is trained using Reinforcement Learning, including direct training (human-supervised learning), experience shared from other robots, and synthetic simulation in a virtual World Model. Training also produces Dynamic Learning: the cause-and-effect of an action in the environment. Examples include gravity's effects, the distance between a table and the wall, the weight of objects, and the force required to pick up a glass versus a brick, all of which factor into the Reasoning behind each Action a robot takes on the way to carrying out the command Instruction.
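Here is the schematic sketch promised above, showing how the three blocks fit together. Plain Python functions stand in for the real neural networks; every function name, embedding, and action here is an assumption made for illustration:

```python
# Schematic VLA pipeline: language encoder + vision encoder -> action decoder.
# All names, embeddings, and actions are illustrative stand-ins, not a real model.

from typing import List

def language_encoder(instruction: str) -> List[float]:
    """Stand-in for the LLM block: turn the Instruction into an embedding."""
    return [float(len(instruction)), float(instruction.count(" "))]

def vision_encoder(camera_frame: List[List[int]]) -> List[float]:
    """Stand-in for the vision block: summarize the current scene/State as an embedding."""
    flat = [pixel for row in camera_frame for pixel in row]
    return [sum(flat) / max(len(flat), 1)]

def action_decoder(text_emb: List[float], scene_emb: List[float]) -> List[str]:
    """Stand-in for the policy: fuse both embeddings and emit a short task sequence."""
    fused_context = text_emb + scene_emb  # in a real model this fusion is learned
    assert fused_context, "decoder needs both an instruction and a scene"
    return ["locate_target", "navigate_to_target", "grasp", "place"]

if __name__ == "__main__":
    frame = [[0, 1], [1, 0]]  # toy 2x2 "camera image"
    plan = action_decoder(language_encoder("pick up the toy car"), vision_encoder(frame))
    print(plan)
```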

As a VLA example, the following diagram provides the breakdown for the instruction: “Clean up the room.” The Action Decoder operates on two levels: high and low. The high-level task planner understands what a clean living room is supposed to look like (the World View), that a toy car has been left on the floor, and that it normally sits near a camera on the coffee table. At the high level, the robot must pick up the car, move to the coffee table, and place the toy car near the camera. At the low level, each high-level task is broken down into a “control policy,” the set of commands driving the motors/actuators that move the legs, arms, and hands to execute the main task.
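A minimal sketch of that two-level decomposition follows; the scene contents, task names, and motor commands are all assumed for illustration, not taken from any real planner:

```python
# Toy two-level action decoder for the instruction "Clean up the room".
# Task names, motor commands, and the scene contents are illustrative assumptions.

def high_level_plan(instruction: str, world_view: dict) -> list:
    """Slow, deliberate planner: compare the scene to the desired 'clean room' state."""
    plan = []
    for item, (current, home) in world_view["misplaced"].items():
        plan += [f"pick_up:{item}@{current}", f"walk_to:{home}", f"place:{item}@{home}"]
    return plan

def low_level_policy(task: str) -> list:
    """Fast control policy: expand one task into coarse motor/actuator commands."""
    verb = task.split(":")[0]
    return {
        "pick_up": ["extend_arm", "open_hand", "close_hand", "lift"],
        "walk_to": ["step", "step", "balance", "step"],
        "place":   ["extend_arm", "open_hand", "retract_arm"],
    }.get(verb, [])

if __name__ == "__main__":
    scene = {"misplaced": {"toy_car": ("floor", "coffee_table_near_camera")}}
    for task in high_level_plan("Clean up the room", scene):
        print(task, "->", low_level_policy(task))
```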

As a simple analogy, humans and robots both have brains, nervous systems, and bodies. Like the human mind, VLA models operate on both conscious and subconscious levels. At the conscious level, the VLA performs slow, deliberate “scene” reasoning, a World View model for perception and planning. At the subconscious level, fast, intuitive policy keeps the limbs in balance, handles movement within the scene (walking, etc.), and keeps the hand on the target (our toy car).

Now that we have a basic understanding of the VLA model that drives Physical AI in robots, we turn to two examples of the real-world robotic computing platforms, the brains, that will bring the vision of humanoid robots to life …

Nvidia and Tesla – Two Titans with Separate AI Chip and Platform Roadmaps

Just reading the news, you would think that Nvidia and, to a lesser extent, AMD have the AI chip market cornered. Tesla, whose business model has always centered on vertical integration, controlling the complete supply chain from components all the way to the finished product, has its own custom AI chipsets, the AI5 and AI6.

These AI chipsets power Tesla’s FSD (Full Self-Driving) and Autopilot systems. The AI5 and AI6 chips are expected to roll out across the company’s consumer products, from Optimus to the Cybercab to the next-generation Roadster. The AI6 is designed for a significant improvement in performance per watt and per dollar on specific AI workloads.

In addition, as Musk reported on X: “In [Tesla’s] supercomputer cluster, it would make sense to put many AI5/AI6 chips on a board, whether for inference or training, simply to reduce network cabling complexity & cost by a few orders of magnitude.” Today, Tesla uses Nvidia's flagship H100/H200 AI chips for all of its training. A primary objective of the AI6 is to eliminate the "two-language problem," where models trained on Nvidia hardware must be translated and re-validated for Tesla's in-house AI3/AI5 platforms, which slows the feedback loop from training to deployment.

While Tesla has its own end products, humanoid robots and FSD cars, Nvidia hasn’t been standing idly by. In March 2025, Nvidia announced Isaac GR00T N1, the world’s first open, fully customizable foundation model for generalized humanoid reasoning and skills.

According to Nvidia:

The GR00T N1 foundation model features a dual-system architecture, inspired by principles of human cognition. “System 1” is a fast-thinking action model, mirroring human reflexes or intuition. “System 2” is a slow-thinking model for deliberate, methodical decision-making.
Powered by a vision language model, System 2 reasons about its environment and the instructions it has received to plan actions. System 1 then translates these plans into precise, continuous robot movements. System 1 is trained on human demonstration data and a massive amount of synthetic data generated by the NVIDIA Omniverse™ platform.
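The dual-system idea can be sketched as two loops running at different rates: a slow planner that periodically re-plans and a fast policy that issues a motion command every tick. The loop rates and command names below are illustrative assumptions, not NVIDIA's published figures:

```python
# Toy dual-system loop in the spirit of the System 1 / System 2 split described above.
# The rates and command names are illustrative assumptions, not NVIDIA figures.

PLANNER_PERIOD_TICKS = 10   # "System 2": re-plan every 10 control ticks (assumed)

def system2_plan(observation: str, instruction: str) -> list:
    """Slow, deliberate reasoning: turn the instruction + scene into a short plan."""
    return [f"reach:{instruction}", "grasp", "retract"]

def system1_act(plan: list, tick_in_plan: int) -> str:
    """Fast, reflex-like policy: emit the next low-level motion for the current tick."""
    return plan[min(tick_in_plan // 3, len(plan) - 1)] if plan else "hold_position"

def run(instruction: str, ticks: int = 30) -> None:
    plan: list = []
    for tick in range(ticks):
        if tick % PLANNER_PERIOD_TICKS == 0:                      # slow loop
            plan = system2_plan("camera_frame", instruction)
        command = system1_act(plan, tick % PLANNER_PERIOD_TICKS)  # fast loop
        if tick % PLANNER_PERIOD_TICKS in (0, 5, 9):
            print(f"tick {tick:02d}: {command}")

if __name__ == "__main__":
    run("mug_on_shelf")
```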

GR00T N1 provides robotics training in live, human-assisted, and synthetic environments (i.e., Nvidia Omniverse). The results of training are loaded onto NVIDIA® Jetson Thor™ series modules, which power the actual robot. Jetson Thor and its predecessor, Jetson AGX Orin, are used by the vast majority of second- and third-generation robotics platforms.

Why Humanoid Robots? Why Now?

Major innovations become possible when new technologies that didn’t previously exist become available, and products that implement them reach price points that enable profitable businesses. In short, a TECHNOLOGY and ECONOMIC INFLECTION POINT.

Advancements in AI models, computing, memory, communications, sensors (vision, pressure, temperature, etc.), and mechanical gears and actuators have reached a maturity and cost that enable affordable humanoid robots to become a reality.

In 2024, second-generation humanoid robots like Xpeng's Iron or Tesla's Optimus cost well over $100,000 a unit. That same year, hardware costs for humanoid robots declined 40% year-on-year. An India Today article reported, “When mass production eventually kicks in, he [Musk] expects Optimus [Gen 3] to sell for $20,000 to $30,000, the price of a modest family car, but one that could apparently change the world.”
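As a rough illustration of how quickly a sustained 40% annual hardware-cost decline would close the gap to that $20,000 to $30,000 target (the ~$120,000 starting point and the assumption that the decline continues are mine, not a forecast):

```python
# Rough projection: years for a ~$120,000 unit cost to fall into the $20k-$30k range
# at a sustained 40% year-on-year decline. The starting cost and the assumption that
# the decline continues are illustrative, not forecasts.

def years_to_target(start_cost: float, target_cost: float, annual_decline: float = 0.40) -> int:
    years, cost = 0, start_cost
    while cost > target_cost:
        cost *= (1.0 - annual_decline)
        years += 1
    return years

if __name__ == "__main__":
    print(years_to_target(120_000, 30_000))  # ~3 years to reach $30k
    print(years_to_target(120_000, 20_000))  # ~4 years to reach $20k
```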

“It’s only a matter of time before we see brownfield retail and light-industrial sites desperate for robotic workers under $25/hour. And it won’t be too long after that implementation that the tipping point for consumer crossover is reached (around when costs dip below a used Honda Civic, or a Robotics-as-a-Service company provides you a subscription to handle your chores for $500 a month).”
Citrini Research - Thematic Primer: Humanoid Robots

Market analysts vary in their predictions, from optimistic (~2030, a 5-year horizon) to pessimistic (~2035, a 10-year horizon), for when commercial products will be widely available. In all cases, analysts share the view outlined by Morgan Stanley:

  • The humanoid robot market could surpass $5 trillion by 2050, including sales from supply chains and networks for repair, maintenance, and support.
  • There could be more than 1 billion humanoid robots in use by 2050, with 90% used for industrial and commercial purposes.

The US, Canada, Western Europe, China, and Japan are all experiencing population aging and declining birth rates. Couple that demographic decline with the politically charged issues limiting or even prohibiting immigration in advanced countries, and within a decade you have a market for tens of millions, or even hundreds of millions, of humanoid robots. They will work in factories and stores, and in our homes as maids, gardeners, and caregivers for senior citizens.

While there may be Luddites who try to regulate and protect industries and segments of the population from the use of humanoid robotics, the demographics of an aging population and ever-lowering birth rates around the world will make humanoid robotics the ONLY viable answer to the coming labor crisis.

We previously noted that the world’s infrastructure has been designed for humans who walk on two legs, have two arms, and have highly dexterous, five-fingered hands. It is no wonder that the focus is on general-purpose humanoid robots – a single mass-produced design able to do any form of work a human can do, and even perform surgery (especially since robots today already perform surgeries that human hands simply cannot).

Beyond being a single mass-produced product capable of navigating our man-made world, humanoid robots have another advantage: training. There is far more workplace data available to train humanoid robots than for other kinds of specialty robots. If there is anything that AI needs, it is massive amounts of data for training.

The self-driving car industry has been around for over a decade and has logged hundreds of millions of rider-only miles, with a safety record far better than that of human drivers. This data can be repurposed to train humanoid robots to navigate urban and suburban streets and interpret their surroundings. AR vision capabilities that identify and track objects can also be repurposed.

Repurposing AI research and development gives companies like Apple, Tesla, and Xpeng an important “leg up” on their competitors. In addition to real-world data, Nvidia and others have created “virtual world” platform models of spaces and equipment to train humanoid robots without physical-world training.

How close are we to the new Machine Age? … Check out the following video of robots driven by Figure AI’s Helix platform putting away groceries HERE.

In The Next Dystopic newsletter …

Next week, we will continue our series on Humanoid Robots: The Machine Age Part 2 – A Look Under the Hood. A deep dive into the components and costs of materials that make up a humanoid robot.

In other News …

U.S.-China Economic and Security Review 2025

The U.S.-China Economic and Security Review Commission released its 2025 annual report. The report notes two significant issues.

China Shock 2.0: a repeat of the conditions that produced "China Shock 1.0,” the period of significant economic disruption caused by China's rapid entry into the global market, primarily from 2000 to 2015. Following China's accession to the World Trade Organization (WTO) in 2001, Chinese exports flooded foreign markets, particularly in manufacturing. The result was job losses and wage declines in manufacturing-heavy regions of the United States and other countries.

Today, China’s economy and export sectors are so large and dominant in so many critical supply chains that China Shock 2.0 is set to have a major impact on a wide range of developed and developing economies alike.

China Shock 2.0 Is a Culmination of China’s Massive Market Distortions, Failure to Rebalance Its Economy, and Effort to Have Global Markets Absorb Its Excesses

While chaotic, the Trump Administration's trade and tariff regime is aimed at limiting the impact on the US of China's current economic distortions and blunting “China Shock 2.0.”

China’s Military Has Rapidly Developed Space Capabilities

From 1957 (the launch of Sputnik) through the end of the first Cold War (the 1990s), the US and the Soviet Union carried out a “Space Race.” As we enter a new, Second Cold War, China is the “pacing threat,” and for all practical purposes the US and China have entered a new space race, both in commercial applications (SpaceX's Starlink, etc.) and in the weaponization of space.

As Chance Saltzman, Chief of Space Operations of the United States Space Force, noted in testimony to the U.S.-China Economic and Security Review Commission:

“[China’s rapid military buildup of its space capabilities over recent years is] mind-boggling … China’s potent and expanding arsenal of space-based capabilities multiplies its combat potential many times over” and threatens the U.S. military’s access to, and effective use of, space in a conflict.

The formation of the Space Force, the Proliferated Warfighter Space Architecture (reconnaissance, communications, and early warning), Golden Dome (missile defense), and US counterspace capabilities are critical to holding China in check.

The report is a great read, and I highly recommend it.

Speaking of space developments…

Jeff Bezos’ Blue Origin becomes the second space launch company capable of landing and reusing a launch vehicle’s primary stage.

On Thursday, November 13, 2025, Jeff Bezos’ Blue Origin New Glenn launch vehicle successfully delivered its payload and recovered its first stage for reuse. Until then, only SpaceX, which routinely returns and reuses the first stage of its Falcon series launch vehicles, was capable of this technical feat.

According to Blue Origin:

The New Glenn orbital launch vehicle successfully completed its second mission, deploying NASA’s Escape and Plasma Acceleration and Dynamics Explorers (ESCAPADE) twin-spacecraft into the designated loiter orbit, and landing the fully reusable first stage on Jacklyn in the Atlantic Ocean.
ESCAPADE will use two identical spacecraft to investigate how the solar wind interacts with Mars’ magnetic environment and how this interaction drives the planet’s atmospheric escape.

Operation Southern Spear

The US Navy’s most advanced carrier, the USS Gerald R. Ford (CVN-78), arrived in the Caribbean, completing the largest naval buildup in the region since the Cuban Missile Crisis. The naval armada amassed off the coast of Venezuela, centered around the island nation of Trinidad and Tobago, includes the following assets:

  • Gerald Ford Carrier Strike Group – including a guided missile cruiser, 3 destroyers, and a fast attack submarine. Learn more about carrier strike groups in a previous Dystopic newsletter HERE
  • USS Iwo Jima Amphibious Ready Group - the amphibious assault ship USS Iwo Jima (LHD 7), the amphibious transport dock ship USS San Antonio (LPD 17), and the amphibious transport dock ship USS Fort Lauderdale (LPD 28), carrying the 2,200-strong 22nd Marine Expeditionary Unit (MEU) and its supporting assault aircraft.
  • A Navy Surface Group attached to Southern Command - an additional guided missile cruiser, three destroyers, and several littoral combat ships.
  • US Air Force and Marine Fighter and Bomber assets

In short, enough firepower to subdue any Central or South American country, including Venezuela. US assets are positioned from the US border with Mexico to Tierra del Fuego at the southern tip of South America.

“Mobilization means war”

- German Chancellor Theobald von Bethmann Hollweg (prior to the onset of WW1)

The US does not reposition a naval armada without a reason, which raises the question:

Are US forces lying off the coast of Venezuela there to threaten and force regime change, or to conquer and install regime change?

Oddly, the situation has the feel of our attack on Iran during the 12-Day Iran-Israel War. It appears to be on a short fuse.

How this plays out, only time will tell.

Trump’s New Russia-Ukraine Peace Plan: another 180-degree flip.

Trump has proposed a new peace plan that is nearly identical to an earlier plan that Ukraine and our European allies previously rejected. The plan would hand Russia the main fortress cities of Ukraine's Donbas region: Sloviansk, Kramatorsk, Kostyantynivka, and Druzhkivka. These cities form a critical defensive line for Ukraine known as the “fortress belt.”

To make matters worse, a European peacekeeping force would not be permitted.

This feels like complete capitulation rather than a fair peace. As I’ve said in many of my Dystopic newsletters, Russia only understands force. This 180-degree turnabout is poorly timed and a strategic mistake. Ukraine's campaign against Russian refining and oil infrastructure was having an effect, and this peace deal disrupts that successful campaign.

As the BBC notes:

“The US president appears oblivious to the very real concern expressed by Ukrainians … the 'peace terms' effectively dictated by Moscow will leave Ukraine unable to sufficiently defend itself for the day when President Putin comes back for more.”

My bet is that, in the end, Putin will reject the deal, having used the time to replenish his troops and supplies on the Ukrainian front.

That’s a wrap for this week …

Dystopic – The Technology Behind Today's News

Thank you for your readership and support. Please recommend Dystopic to friends and family who are interested, or just share this email.

Not on the Dystopic mail list? New Readers can sign up for Dystopic HERE

If you have missed a Dystopic newsletter, you can find select back copies HERE

Finally, pick up a copy of my book or listen to the new audiobook:

How The Hell Did We Get Here? A Citizen's Guide to The New Cold War and Rebuilding of Deterrence

Available on Amazon USA HERE, Amazon Internationally (on your local Amazon page), or through Barnes & Noble and other major retailers online

See you next week!


Follow Me on Social
