Insights
February 10, 2026

Why Physical AI

Exploring the new economics of edge computing and the promise of physical AI

Last week’s sell-off in software stocks reflected a growing unease that AI could compress the terminal value of many SaaS companies. When agents can perform entire workflows on their own, long-standing assumptions about durability begin to crack. What the market was really asking, in real time, was a more fundamental question: What’s defensible in the age of AI?

Intelligence is getting easier to buy. Models are making it cheaper to decide. Agents are making it cheaper to execute. And as more businesses tap into the same pools of commoditized intelligence, proprietary context is becoming the scarce differentiator.

We believe that software still matters. The moat is just moving.

As the cost of intelligence goes down, the premium is shifting to what informs decisions: context. Context can come in many forms, including customer behavior, operational trails, network graphs, and human feedback loops. In this era, some of the hardest signals to replicate won’t be scraped from the internet. They’ll be found in the physical world.

Physical AI creates proprietary context through devices operating in real environments that are unpredictable and constantly changing. Every interaction becomes a learning opportunity, turning the physical world into a living dataset that continues to generate new edge cases long after the product is shipped.

This is why the once “contrarian” bet of venture dollars flowing into the physical world is becoming more mainstream. Agents may compress many digital workflows that sit on shared data, but they do not create new ground truth. Deployment does.

We have hit an inflection in edge economics that makes this possible. For years, turning messy sensor streams into reliable real-time decisions was too costly. Now model costs have collapsed, sensors are cheaper, and edge compute is good enough to make always-on systems practical. It is suddenly economical to instrument the physical world.

At NewView, we have leaned into physical AI with our investments in Halter, Verkada, Netradyne, and Stord. Although these companies look different on the surface, they share a common model: deploy durable nodes into the field, learn from real-world usage, and convert operational complexity into compounding advantage.

Four shifts unlocking physical AI

For years, hardware was a tax on software margins and velocity. Physical AI flips that logic because the hardware is the data engine, and the software improves with every deployment.

1. Sensing is cheaper and more ubiquitous.

Cameras, accelerometers, thermals, acoustics, radar, lidar, and industrial telemetry are all less expensive and more accessible. We can build, measure, log, and improve more than ever. 

You can see this clearly in fleets. Always-on video across thousands of vehicles used to be possible in theory and brutal in practice, requiring installs, storage, bandwidth, footage management, and driver buy-in. Today the sensing stack is cheap and reliable enough that continuous instrumentation is becoming normal.

Netradyne shows the second-order effect of this. Its in-vehicle vision system observes driving context continuously and turns that stream into real-time coaching, instead of a postmortem report days later. The loop compounds: more vehicles produce more edge cases across weather, lighting, routes, and behavior. More edge cases produce better models. Better models earn trust and expand deployment. Trust expands the workflows you can own beyond safety into operations and compliance.

2. On-device AI has crossed an important threshold.

More compute is moving to the edge. Decisions happen locally instead of waiting on a network round trip, which improves latency and reliability. This reduces how much raw data needs to move across the network and keeps sensitive signals on device by default, which matters in regulated and privacy-sensitive environments. Advances in model compression and distillation have made it possible to shrink capable models onto constrained edge devices, closing the gap between what runs in the cloud and what runs on a camera or gateway.
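To make the compression point concrete, here is a toy sketch (illustrative only, not any vendor's actual pipeline) of symmetric int8 post-training quantization in NumPy. The function names are our own; the takeaway is the roughly 4x memory reduction that helps capable models fit on constrained edge hardware, at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization: map float32 weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original weights."""
    return q.astype(np.float32) * scale

# A stand-in for one layer's weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"float32 size: {w.nbytes} bytes")   # 262144
print(f"int8 size:    {q.nbytes} bytes")   # 65536 (4x smaller)
print(f"max abs error: {np.max(np.abs(w - w_hat)):.4f}")
```

Real edge deployments layer further techniques on top (per-channel scales, quantization-aware training, distillation into smaller architectures), but the storage and bandwidth arithmetic above is the core of why on-device inference has become practical.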

Verkada illustrates this well. Physical security is a first-60-seconds problem. Whether it is a forced entry, a propped door after hours, or an unfamiliar person at a restricted entrance, the window to respond is short. To be effective, the security system must surface the right signal immediately and keep working even when connectivity is degraded.

The leverage compounds at the platform layer. When video, access control, intercom, alarms, and sensors are centralized in one system, it delivers a coherent narrative and workflow: who badged in, what the camera saw, what was said at the door, and what triggered the alert. As the footprint grows across buildings and environments, the system learns from more real-world variance, detection improves, false positives drop, and customers adopt it for more of their security operations.

3. Connectivity is becoming the default, even in unusual places.

Connectivity is turning physical deployments into software deployments. Devices can be monitored continuously, updated remotely, and improved after installation instead of waiting for the next hardware cycle.

We see this in practice with Halter, a virtual fencing and pasture management platform that enables farmers and ranchers to monitor cattle, shift grazing patterns, and increase yield on their land. Halter’s solar-powered collars maintain continuous connectivity with localized towers and benefit from constant software upgrades. That is made possible in part by gains in power efficiency that let always-on compute run on solar and battery in places that were previously uneconomical to instrument. Every incremental collar becomes a live node that improves Halter’s behavior models, tightening the feedback loop between animals, land, and grazing decisions.

4. Messy real-world data is now legible.

Real-world systems produce noisy, incomplete, and sometimes contradictory signals. Logs are inconsistent, devices emit data that was never designed for training, and workflows still rely on “tribal knowledge” that lives beyond a database. For a long time, this was the ceiling. The world was instrumentable, but not interpretable at scale.

Today's multimodal models can fuse video, text logs, and sensor telemetry in a single reasoning pass, instead of processing each stream in isolation. That makes it possible to reconcile partial truths and interpret the full operational picture in time to act.

Take, for example, Stord, a commerce operations platform. Logistics is a maze of mismatched warehouse scans, delayed carrier events, inventory that is right in one system and wrong in another, and exceptions that require human judgment. Stord turns that noise into coordination to deliver Prime-like fulfillment and delivery for e-commerce brands. Its integrated physical and digital platform flags risk early, reroutes orders faster, and tightens the delivery promise for consumers.

The context advantage

While intelligence is increasingly commoditized, proprietary context is not. Physical AI is built on a different scarce resource than most pure software: ground truth earned through deployment in the field. The physical world is unconstrained and ever-changing. Every season, site, route, and edge case produces new data you cannot simulate or backfill. Anywhere deployment meets variance, context compounds.

That does not mean physical AI is easy. You have to deal with reliability, service, sometimes regulation, and the fact that atoms do not compile. But the old knock on hardware-dependent businesses — longer cycles, harder to iterate — is losing its force. When devices update over the air, learn from deployment, and improve without a truck roll, the iteration loop starts to resemble software more than traditional hardware. And once hardware is embedded in a customer's operations, it becomes the foundation for recurring software revenue that is harder to displace than a SaaS login. If you thread that needle, you can earn durability that is hard to unwind.

This post is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation to invest in any securities. NewView may have an ownership interest in the companies discussed, which may present conflicts of interest. This post is intended for financially sophisticated investors; NewView does not solicit or make its services generally available to the public. See Terms of Use for more information. Past performance is not indicative of future results. Any forward-looking statements are based on current expectations and involve risks and uncertainties; actual results may differ materially.