In road safety, perception quality sets the ceiling for performance. Despite progress, the human toll remains high, with an estimated 39,345 roadway deaths in the United States in 2024 and a fatality rate of 1.20 per 100 million vehicle miles traveled. Regulators and test programs are raising baselines so automated systems can reliably perceive, predict and prevent crashes in busy, low-light and adverse-weather conditions.
Nishant Bhanot, a Senior Sensing Systems Engineer at Waymo and an IEEE Senior Member, operates at this intersection of sensing, perception and safety. His operating principle is direct: define perception requirements and validation thresholds as Service Level Agreements (SLAs) that tie to program Key Performance Indicators (KPIs), then track their impact on reliability and Return on Investment as platforms move from pilot to scale.
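To make the principle concrete, consider a minimal sketch of a perception requirement expressed as a testable service level, written in Python. The class name, KPI label and threshold values below are illustrative assumptions, not figures from any Waymo program.

```python
from dataclasses import dataclass

@dataclass
class PerceptionSLA:
    """A perception requirement expressed as a testable service level.

    All names and numbers here are illustrative, not Waymo's.
    """
    name: str        # e.g. "pedestrian_detection_night"
    kpi: str         # program KPI this SLA rolls up into
    threshold: float # minimum acceptable value (e.g. recall)
    measured: float  # value observed in validation

    def passes(self) -> bool:
        return self.measured >= self.threshold

# Example: a hypothetical nighttime pedestrian-recall requirement.
sla = PerceptionSLA(
    name="pedestrian_detection_night",
    kpi="vulnerable_road_user_safety",
    threshold=0.99,
    measured=0.994,
)
print(f"{sla.name}: {'PASS' if sla.passes() else 'FAIL'} "
      f"({sla.measured:.3f} vs {sla.threshold:.3f})")
```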
Seeing What Matters, When It Matters
Elevating trust begins with measurable outcomes the public and regulators can verify. AAA’s latest survey shows only 13% of U.S. drivers would trust riding in a self-driving vehicle. The U.S. Automatic Emergency Braking (AEB) standard is designed to save at least 360 lives per year while pushing performance in higher-risk scenarios. Independent tests reflect the shift, with 22 of 30 newly evaluated models earning good or acceptable ratings in a tougher front-crash-prevention evaluation. Closing the gap between regulatory expectations and real on-road behavior calls for grounded, disciplined systems engineering.
Bhanot’s work turns these expectations into architecture. At Waymo, he leads sensing and perception architecture for current and next-generation platforms by defining perception requirements, running trade studies and executing verification and validation (V&V) strategies tied to safety and performance, so detection and classification remain stable across modalities, environments and operating domains. “Safety is specific. It is the right field of view, the right dynamic range and the right validation thresholds for the environments we actually drive. When those are explicit and enforced, trust follows,” notes Bhanot.
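One way to read “the right field of view” in practice: a trade study can automate geometric checks such as whether a set of sensor azimuth intervals covers a required arc without gaps. The planar model and the sensor placements below are invented for illustration; real studies also weigh range, elevation, occlusion and degraded modes.

```python
def covered(required_arc, sensors):
    """Check that sensor azimuth intervals jointly cover a required arc.

    Angles in degrees; intervals are (start, end) with start < end.
    A simplified planar model for illustration only.
    """
    lo, hi = required_arc
    # Clip each sensor interval to the required arc, then sweep for gaps.
    spans = sorted((max(s, lo), min(e, hi))
                   for s, e in sensors if e > lo and s < hi)
    cursor = lo
    for s, e in spans:
        if s > cursor:  # gap found before this span starts
            return False
        cursor = max(cursor, e)
    return cursor >= hi

# Hypothetical forward-facing suite: two cameras and a radar.
sensors = [(-60, 10), (-10, 60), (-15, 15)]
print(covered((-60, 60), sensors))  # True: arcs overlap with no gaps
```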
Inside the Sensor Stack, From Pixels and Points to Decisions
The driving perception problem is defined by the diversity of scenarios an automated system must read correctly. The Euro NCAP (European New Car Assessment Programme) 2025 low-speed protocol specifies 10 pedestrian and cyclist scenarios covering intersections, reversing and dooring cases. The U.S. rule on automatic emergency braking requires systems to avoid collisions at speeds up to 62 mph and to detect pedestrians in both daylight and darkness. At the same time, the Federal Highway Administration notes that the nighttime roadway fatality rate remains roughly three times higher than the daytime rate, reinforcing the role of sensor fusion across lighting extremes. To detect a pedestrian in darkness, where a camera loses reliability, the system needs fused inputs from sensors that read the world through different physics.
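A toy example of the “different physics” point, assuming a simple late-fusion scheme: per-modality detection confidences are weighted by how much each sensor can be trusted under current conditions, so a camera’s weak nighttime score no longer dominates. The trust weights and the fusion rule are hypothetical; production stacks fuse at the feature or track level with far richer models.

```python
# Illustrative late fusion of per-modality pedestrian confidences.
# Trust weights are hypothetical; real systems learn or calibrate them.
TRUST = {
    "day":   {"camera": 0.9, "lidar": 0.8, "radar": 0.6},
    "night": {"camera": 0.3, "lidar": 0.8, "radar": 0.6},
}

def fused_confidence(scores: dict, condition: str) -> float:
    """Weighted average of modality scores under the given condition."""
    w = TRUST[condition]
    total = sum(w[m] for m in scores)
    return sum(scores[m] * w[m] for m in scores) / total

# At night the camera's weak score no longer dominates the decision.
scores = {"camera": 0.2, "lidar": 0.95, "radar": 0.9}
print(round(fused_confidence(scores, "night"), 2))  # lidar/radar carry it
```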
Bhanot has built and integrated such multi-modal sensor suites to meet these requirements. In a prior role, he developed sensor suites and system-level requirements for autonomous systems and built MATLAB and Python simulation pipelines to analyze coverage and latency budgets, focusing on cross-functional integration to balance safety, performance and cost. “Modalities are teammates. Cameras, lidar and radar each contribute when conditions change. The architecture question is not ‘either/or’, it is how to align coverage and compute so the system sees the right thing at the right time,” observes Bhanot.
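As a hedged sketch of the latency-budget side of such a pipeline, the snippet below sums per-stage latencies against an end-to-end budget and flags overruns. The stage names, millisecond figures and 100 ms budget are made-up placeholders, not values from any production system.

```python
# Hypothetical perception pipeline stages with per-stage latencies (ms).
STAGES = {
    "sensor_capture":     20.0,
    "preprocessing":      10.0,
    "detection":          35.0,
    "tracking":           15.0,
    "publish_to_planner":  5.0,
}
BUDGET_MS = 100.0  # illustrative end-to-end perception budget

total = sum(STAGES.values())
print(f"end-to-end: {total:.1f} ms of {BUDGET_MS:.1f} ms budget")
for stage, ms in STAGES.items():
    print(f"  {stage:<18} {ms:5.1f} ms ({ms / BUDGET_MS:5.1%} of budget)")
if total > BUDGET_MS:
    print("OVER BUDGET: re-run the trade study or rebalance stages")
```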
Proving Performance Before Public Roads
Validation must close the gap between intent and behavior. States reported 3,304 pedestrian deaths in the first half of 2024, underscoring why large-scale validation frameworks are indispensable. Peer-reviewed analysis covering 25.3 million autonomous miles compares safety performance between autonomous and human-driven operations. Globally, the World Health Organization reports 1.19 million annual road deaths, a reminder that every incremental improvement in sensing and perception contributes to measurable public benefit.
Bhanot’s record is validation-first. At Ford, he led a software-in-the-loop (SIL) pipeline and scenario library for BlueCruise and adjacent ADAS features, catching integration issues early and accelerating launch timelines. The governance thread extends beyond programs: as an Associate Editor for two SARC journals, he helps sustain the rigorous peer review that keeps safety research credible and comparable across the field. “Good validation is not a phase; it is an operating model. The right scenarios, the right KPIs and the discipline to pause when tests reveal something new: that is how you earn public trust,” says Bhanot.
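Schematically, a software-in-the-loop scenario sweep can be as simple as iterating a library of scripted situations through the stack and triaging failures. Everything below (the Scenario fields, the run_sil stub and its pass criterion) is a placeholder for illustration, not Ford’s pipeline.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One entry in a scenario library (fields are illustrative)."""
    name: str
    lighting: str       # "day" | "night"
    target: str         # e.g. "pedestrian", "cyclist"
    closing_speed_mph: float

def run_sil(scenario: Scenario) -> bool:
    """Stand-in for a software-in-the-loop run; returns pass/fail.

    A real harness would replay or simulate sensor data, execute the
    perception stack and score the outcome against KPIs.
    """
    return scenario.closing_speed_mph <= 62.0  # placeholder criterion

library = [
    Scenario("ped_crossing_night", "night", "pedestrian", 35.0),
    Scenario("cyclist_dooring_day", "day", "cyclist", 15.0),
    Scenario("highway_stopped_car", "day", "vehicle", 70.0),
]

failures = [s.name for s in library if not run_sil(s)]
print(f"{len(library) - len(failures)}/{len(library)} scenarios passed")
for name in failures:
    print(f"  FAIL: {name} -> triage before release")
```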
Scaling From Pilots to Programs, What Operations Demand
Execution is where sensing choices, perception reliability and cadence meet operational scale. The semiconductor sector, which underpins on-board compute for perception, saw global chip sales reach $64.9 billion in August 2025. The World Semiconductor Trade Statistics organization forecasts 2025 revenues of $700.9 billion, while J.D. Power reports an average of 190 problems per 100 vehicles in its 2024 Dependability Study, evidence that reliability remains a defining metric even as systems grow more complex. As compute becomes easier to acquire, the hard problem shifts from securing chips to integrating them into a dependable system; what matters is not the chip itself but the discipline with which the system is built and operated.
Bhanot has repeatedly moved programs from pilot to production. At Applied Intuition, he led the perception systems engineering function for the autonomous trucking division, scaled the team from 2 to more than 20 engineers, delivered multiple generations of sensor configurations and enabled public-road deployments in the U.S. and Japan that generated over 90,000 real-world test miles. Those operational rhythms, from hiring to scenario coverage and triage, established a repeatable pipeline for domain-compliant scaling. “Programs win on rhythm. Clear priorities, disciplined triage, credible data volumes and clean release paths let you add capability without destabilizing operations,” states Bhanot.
Looking Ahead, The AI Sensor Decade
The next wave of autonomy will be defined by sensing depth and computational transparency. Analysts project the global semiconductor market will exceed $1 trillion by 2030, propelled by edge-AI hardware for perception and decision-making. That surge is setting a new benchmark for multidisciplinary engineers who can bridge physical sensing, digital simulation and data ethics.
Bhanot’s trajectory, from simulation frameworks at Ford to perception architectures at Waymo, illustrates how disciplined engineering converts technical progress into societal value. His leadership as a Globee Awards judge underscores the same commitment to measurable integrity and public trust that will define the next decade of sensing innovation. “Trust grows when systems behave the same way on a quiet road at noon and at a complex intersection at night. That is the bar for sensing and perception in autonomous systems today,” notes Bhanot.