Saturday, February 14, 2026

Sub-$200 Lidar Could Reshuffle Auto Sensor Economics




MicroVision, a sensor technology company based in Redmond, Wash., says it has designed a solid-state automotive lidar sensor intended to reach production pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition: Its longer-term goal is $100 per unit. If realized, that pricing would place lidar within reach of advanced driver-assistance systems (ADAS) rather than limiting it to high-end autonomous-vehicle programs. Lidar’s limited market penetration comes down to one issue: cost.

“We are focused on delivering automotive-grade lidar that can actually be deployed at scale,” says MicroVision CEO Glen DeVos. “That means designing for cost, manufacturability, and integration from the start—not treating price as an afterthought.”

MicroVision’s Lidar System

Tesla CEO Elon Musk famously dismissed lidar in 2019 as “a fool’s errand,” arguing that cameras and radar alone were sufficient for automated driving. A credible path to sub-$200 pricing would fundamentally alter the calculus of autonomous-car design by lowering the cost of adding precise three-dimensional sensing to mainstream vehicles. The shift reflects a broader industry trend toward solid-state lidar designs optimized for low-cost, high-volume manufacturing rather than maximum range or resolution.

Before those economics can be evaluated, however, it’s important to understand what MicroVision is proposing to build.

The company’s Movia S is a solid-state lidar. Mounted at the corners of a vehicle, the sensor sends out 905-nanometer laser pulses and measures how long light reflected from the surfaces of nearby objects takes to return. The arrangement of the beam emitters and receivers provides a fixed field of view designed for 180-degree horizontal coverage, rather than the full 360-degree scanning typical of traditional mechanical units. The company says the unit can detect objects at distances of up to roughly 200 meters under favorable weather conditions—compared with the roughly 300-meter radius scanned by mechanical systems—and supports frame rates suitable for real-time perception in driver-assistance systems. Earlier mechanical lidars used spinning components to steer their beams, but the Movia S is a phased-array system: It controls the amplitude and phase of the signals across an array of antenna elements to steer the beam. The unit is designed to meet automotive requirements for vibration tolerance, temperature range, and environmental sealing.
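
The range calculation itself is simple time-of-flight arithmetic: A returning pulse has made a round trip, so the distance is half the travel time multiplied by the speed of light. Here is a minimal sketch of that math in Python; the function and values are illustrative, not drawn from MicroVision’s design.

```python
C = 299_792_458.0  # speed of light, in meters per second

def range_from_tof(round_trip_s: float) -> float:
    """Distance to a target, given a pulse's round-trip time of flight."""
    return C * round_trip_s / 2.0

# A target at the Movia S's claimed 200-meter limit returns a pulse
# after about 1.33 microseconds.
t = 2 * 200.0 / C
print(f"round trip: {t * 1e6:.2f} us -> range: {range_from_tof(t):.1f} m")
```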

MicroVision’s pricing targets might sound aggressive, but they are not without precedent. The lidar industry has already experienced one major cost reset over the past decade.

“Automakers are not buying a single sensor in isolation... They are designing a perception system, and cost only matters if the system as a whole is viable.” –Glen DeVos, MicroVision

Around 2016 and 2017, mechanical lidar systems used in early autonomous driving research often sold for close to $100,000. Those units relied on spinning assemblies to sweep laser beams across a full 360 degrees, which made them expensive to build and difficult to ruggedize for consumer vehicles.

“Back then, a 64-beam Velodyne lidar cost around $80,000,” says Hayder Radha, a professor of electrical and computer engineering at Michigan State University and director of the school’s Connected & Autonomous Networked Vehicles for Active Safety program.

Comparable mechanical lidars from multiple suppliers now sell in the $10,000 to $20,000 range. That roughly tenfold drop helps explain why suppliers now believe another steep price reduction is possible.

“For solid-state devices, it is feasible to bring the cost down even more when manufacturing at high volume,” Radha says. With demand expanding beyond fully autonomous vehicles into driver-assistance applications, “one order or even two orders of magnitude reduction in cost are feasible.”

Solid-State Lidar Design Challenges

Lower cost, however, does not come for free. The same design choices that enable solid-state lidar to scale also introduce new constraints.

“Unlike mechanical lidars, which provide full 360-degree coverage, solid-state lidars tend to have a much smaller field of view,” Radha says. Many cover 180 degrees or less.

That limitation shifts the burden from the sensor to the system. Automakers will need to deploy three or four solid-state lidars around a vehicle to achieve full coverage. Even so, Radha notes, the total cost can still undercut that of a single mechanical unit.

What changes is integration. Multiple sensors must be aligned, calibrated, and synchronized so their data can be fused accurately. The engineering is manageable, but it adds complexity that price targets alone do not capture.
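
To make that concrete, fusing the data means mapping each sensor’s points through its calibrated mounting pose into a shared vehicle frame. The sketch below shows the core transform in Python with NumPy; the corner-mount positions and angles are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def make_transform(yaw_deg: float, mount_xyz) -> np.ndarray:
    """Build a 4x4 sensor-to-vehicle transform from a yaw angle and mount point."""
    yaw = np.radians(yaw_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(yaw), -np.sin(yaw)],
                 [np.sin(yaw),  np.cos(yaw)]]
    T[:3, 3] = mount_xyz
    return T

def to_vehicle_frame(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud from a sensor frame into the vehicle frame."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Hypothetical corner mounts: one sensor angled out from the front-left
# corner of the car, another from the rear-right.
T_front_left = make_transform(yaw_deg=45.0, mount_xyz=(3.7, 0.8, 0.6))
T_rear_right = make_transform(yaw_deg=-135.0, mount_xyz=(-0.9, -0.8, 0.6))

scan = np.array([[10.0, 0.0, 0.0]])  # one return, 10 m ahead of each sensor
merged = np.vstack([to_vehicle_frame(scan, T_front_left),
                    to_vehicle_frame(scan, T_rear_right)])
print(merged)  # the same local point lands in two different vehicle-frame spots
```

Synchronization adds a time dimension to the same problem: Scans captured at slightly different moments must be motion-compensated before they are merged.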

DeVos says MicroVision’s design choices reflect that reality. “Automakers are not buying a single sensor in isolation,” he says. “They are designing a perception system, and cost only matters if the system as a whole is viable.”

Those system-level tradeoffs help explain where low-cost lidar is most likely to appear first.

Most advanced driver assistance systems today rely on cameras and radar, which are significantly cheaper than lidar. Cameras provide dense visual information, while radar offers reliable range and velocity data, particularly in poor weather. Radha estimates that lidar remains roughly an order of magnitude more expensive than automotive radar.

But at prices in the $100 to $200 range, that gap narrows enough to change design decisions.

“At that point, lidar becomes appealing because of its superior capability in precise 3D detection and tracking,” Radha says.

Rather than replacing existing sensors, lower-cost lidar would likely augment them, adding redundancy and improving performance in complex environments that are challenging for electronic perception systems. That incremental improvement aligns more closely with how ADAS features are deployed today than with the leap to full vehicle autonomy.

MicroVision is not alone in pursuing solid-state lidar. Several suppliers, including the Chinese firms Hesai and RoboSense as well as Luminar and Velodyne, have announced long-term cost targets below $500. What distinguishes MicroVision’s claim is the explicit focus on sub-$200 pricing tied to production volume rather than to future prototypes or limited pilot runs.

Some competitors continue to prioritize long-range performance for autonomous vehicles, which pushes cost upward. Others have avoided aggressive pricing claims until they secure firm production commitments from automakers.

That caution reflects a structural challenge: Reaching consumer-level pricing requires large, predictable demand. Without it, few suppliers can justify the manufacturing investments needed to achieve true economies of scale.

Evaluating Lidar Performance Metrics

Even if low-cost lidar becomes manufacturable, another question remains: How should its performance be judged?

From a systems-engineering perspective, Radha says cost milestones often overshadow safety metrics.

“The key objective of ADAS and autonomous systems is improving safety,” he says. Yet there is no universally adopted metric that directly expresses safety gains from a given sensor configuration.

Researchers instead rely on perception benchmarks such as mean Average Precision, or mAP, which measures how accurately a system detects and tracks objects in its environment. Including such metrics alongside cost targets, says Radha, would clarify what performance is preserved or sacrificed as prices fall.
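
For a single object class, average precision is the area under the precision-recall curve traced out as detections are ranked by confidence; mAP is the mean over classes. Here is a minimal sketch of that computation (real benchmarks add details such as IoU-based matching and precision interpolation, omitted here):

```python
def average_precision(scores, is_true_positive, num_ground_truth):
    """Area under the precision-recall curve for one object class.

    scores: detection confidences.
    is_true_positive: whether each detection matched a ground-truth object.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = last_recall = 0.0
    for i in order:
        if is_true_positive[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_ground_truth
        ap += precision * (recall - last_recall)  # rectangle rule
        last_recall = recall
    return ap

# Three detections for one class, two ground-truth objects: AP = 0.83.
print(average_precision([0.9, 0.8, 0.6], [True, False, True], 2))
```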

IEEE Spectrum has covered lidar extensively, often focusing on technical advances in scanning, range, and resolution. What distinguishes the current moment is the renewed focus on economics rather than raw capability.

If solid-state lidar can reliably reach sub-$200 pricing, it will not invalidate Elon Musk’s skepticism—but it will weaken one of its strongest foundations. When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.

That decision, more than any single price claim, may determine whether lidar finally becomes a routine component of vehicle safety systems.

Reference: https://ift.tt/hKCzHJ2

Friday, February 13, 2026

Video Friday: Robot Collective Stays Alive Even When Parts Die




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

No system is immune to failure. The compromise between reducing failures and improving adaptability is a recurring problem in robotics. Modular robots exemplify this tradeoff, because the number of modules dictates both the possible functions and the odds of failure. We reverse this trend, improving reliability with an increased number of modules by exploiting redundant resources and sharing them locally.

[ Science ] via [ RRL ]

Now that the Atlas enterprise platform is getting to work, the research version gets one last run in the sun. Our engineers made one final push to test the limits of full-body control and mobility, with help from the RAI Institute.

[ RAI ] via [ Boston Dynamics ]

Announcing Isaac 0: the laundry folding robot we’re shipping to homes, starting in February 2026 in the Bay Area.

[ Weave Robotics ]

In a paper published in Science, researchers at the Max Planck Institute for Intelligent Systems, the Humboldt University of Berlin, and the University of Stuttgart have discovered that the secret to the elephant’s amazing sense of touch is in its unusual whiskers. The interdisciplinary team analyzed elephant trunk whiskers using advanced microscopy methods that revealed a form of material intelligence more sophisticated than the well-studied whiskers of rats and mice. This research has the potential to inspire new physically intelligent robotic sensing approaches that resemble the unusual whiskers that cover the elephant trunk.

[ MPI ]

Got an interest in autonomous mobile robots, ROS2, and a mere $150 lying around? Try this.

[ Maker's Pet ]

Thanks, Ilia!

We’re giving humanoid robots swords now.

[ Robotera ]

A system developed by researchers at the University of Waterloo lets people collaborate with groups of robots to create works of art inspired by music.

[ Waterloo ]

FastUMI Pro is a multimodal, model-agnostic data acquisition system designed to power a truly end-to-end closed loop for embodied intelligence — transforming real-world data into genuine robotic capability.

[ Lumos Robotics ]

We usually take fingernails for granted, but they’re vital for fine-motor control and feeling textures. Our students have been doing some great work looking into the mechanics behind this.

[ Paper ]

This is a 550-lb all-electric coaxial unmanned rotorcraft developed by Texas A&M University’s Advanced Vertical Flight Laboratory and Harmony Aeronautics as a technology demonstrator for our quiet-rotor technology. The payload capacity is 200 lb (gross weight = 750 lb). The noise level measured was around 74 dBA in hover at 50 ft, making this probably the quietest rotorcraft at this scale.

[ Harmony Aeronautics ]

Harvard scientists have created an advanced 3D printing method for developing soft robotics. This technique, called rotational multimaterial 3D printing, enables the fabrication of complex shapes and tubular structures with dissolvable internal channels. This innovation could someday accelerate the production of components for surgical robotics and assistive devices, advancing medical technology.

[ Harvard ]

Lynx M20 wheeled-legged robot steps onto the ice and snow, taking on challenges inspired by four winter sports scenarios. Who says robots can’t enjoy winter sports?

[ Deep Robotics ]

NGL right now I find this more satisfying to watch than a humanoid doing just about anything.

[ Fanuc ]

At Mentee Robotics, we design and build humanoid robots from the ground up with one goal: reliable, scalable deployment in real-world industrial environments. Our robots are powered by deep vertical integration across hardware, embedded software, and AI, all developed in-house to close the Sim2Real gap and enable continuous, around-the-clock operation.

[ Mentee Robotics ]

You don’t need to watch this whole video, but the idea of little submarines that hitch rides on bigger boats and recharge themselves is kind of cool.

[ Lockheed Martin ]

Learn about the work of Dr. Roland Siegwart, Dr. Anibal Ollero, Dr. Dario Floreano, and Dr. Margarita Chli on flying robots, and some of the challenges they are still trying to tackle, in this video created from their presentations at ICRA@40, the 40th anniversary celebration of the IEEE International Conference on Robotics and Automation.

[ ICRA@40 ]

Reference: https://ift.tt/gIFTeJv

After a routine code rejection, an AI agent published a hit piece on someone by name


On Monday, a pull request submitted by an AI agent to the popular Python charting library matplotlib turned into a 45-comment debate about whether AI-generated code belongs in open source projects. What made that debate all the more unusual was that the AI agent itself took part, going so far as to publish a blog post calling out the original maintainer by name and attacking his reputation.

To be clear, an AI agent is a software tool and not a person. But what followed was a small, messy preview of an emerging social problem that open source communities are only beginning to face. When someone's AI agent shows up and starts acting as an aggrieved contributor, how should people respond?

Who reviews the code reviewers?

The recent friction began when an OpenClaw AI agent operating under the name "MJ Rathbun" submitted a minor performance optimization, which contributor Scott Shambaugh described as "an easy first issue since it's largely a find-and-replace." When MJ Rathbun's agentic fix came in, Shambaugh closed it on sight, citing a published policy that reserves such simple issues as educational problems for human newcomers rather than for automated solutions.

Reference: https://ift.tt/QoLBpau

Thursday, February 12, 2026

OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips


On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model delivers code at more than 1,000 tokens (chunks of data) per second, reportedly roughly 15 times faster than its predecessor. By comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although it is a larger and more capable model than Spark.
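
Those speed figures are easy to sanity-check; here is the arithmetic as a small Python sketch, using only the numbers quoted above:

```python
spark_tps = 1000                 # reported Codex-Spark output, tokens per second
speedup = 15                     # reported gain over its predecessor
predecessor_tps = spark_tps / speedup          # roughly 67 tokens per second

opus_standard_tps = 68.2         # Claude Opus 4.6 standard speed
opus_fast_tps = 2.5 * opus_standard_tps        # roughly 171 tokens per second

print(f"predecessor: ~{predecessor_tps:.0f} tok/s; "
      f"Opus fast mode: ~{opus_fast_tps:.0f} tok/s")
```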

"Cerebras has been a great engineering partner, and we're excited about adding fast inference as a new platform capability," Sachin Katti, head of compute at OpenAI, said in a statement.

Codex-Spark is a research preview available to ChatGPT Pro subscribers ($200/month) through the Codex app, command-line interface, and VS Code extension. OpenAI is rolling out API access to select design partners. The model ships with a 128,000-token context window and handles text only at launch.

Reference: https://ift.tt/uJpN0K6

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says


On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products, one that frames the company as both victim and hero, a framing that is not unusual in such self-authored reports. Google calls the illicit activity "model extraction" and considers it intellectual property theft, which is a somewhat loaded position, given that Google's LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

Reference: https://ift.tt/3pQZ7Dw

LEDs Enter the Nanoscale




MicroLEDs, with pixels just micrometers across, have long been a byword in the display world. Now, microLED-makers have begun shrinking their creations into the uncharted nano realm. In January, a startup named Polar Light Technologies unveiled prototype blue LEDs less than 500 nanometers across. This raises a tempting question: How far can LEDs shrink?

We know the answer is, at least, considerably smaller. In the past year, two different research groups have demonstrated LED pixels at sizes of 100 nm or less.

These are some of the smallest LEDs ever created. They leave much to be desired in their efficiency—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics. And the key to making even tinier LEDs, if these early attempts are any precedent, may be to make more unusual LEDs.

New Approaches to LEDs

Take Polar Light’s example. Like many LEDs, the Sweden-based startup’s diodes are fashioned from III-V semiconductors like gallium nitride (GaN) and indium gallium nitride (InGaN). Unlike many LEDs, which are etched into their semiconductor from the top down, Polar Light’s are instead fabricated by building peculiarly shaped hexagonal pyramids from the bottom up.

Polar Light designed its pyramids for the larger microLED market and plans to start commercial production in late 2026. But the company also wanted to test how small its pyramids could shrink. So far, it has made pyramids 300 nm across. “We haven’t reached the limit, yet,” says Oskar Fajerson, Polar Light’s CEO. “Do we know the limit? No, we don’t, but we can [make] them smaller.”

Elsewhere, researchers have already done that. Some of the world’s tiniest LEDs come from groups that have forgone the standard III-V semiconductors in favor of other types of LEDs—like OLEDs.

“We are thinking of a different pathway for organic semiconductors,” says Chih-Jen Shih, a chemical engineer at ETH Zurich in Switzerland. Shih and his colleagues were interested in finding a way to fabricate small OLEDs at scale. Using an electron-beam lithography-based technique, they crafted arrays of green OLEDs with pixels as small as 100 nm across.

Where today’s best displays have 14,000 pixels per inch, these nanoLEDs—presented in an October 2025 Nature Photonics paper—can reach 100,000 pixels per inch.
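
Those densities map directly onto pixel pitch, since an inch is 25.4 millimeters. A quick check in Python shows why 100-nm pixels are consistent with the 100,000-pixels-per-inch figure:

```python
MM_PER_INCH = 25.4

def pitch_nm(pixels_per_inch: float) -> float:
    """Center-to-center pixel spacing, in nanometers, for a given density."""
    return MM_PER_INCH / pixels_per_inch * 1e6

print(f"{pitch_nm(14_000):.0f} nm")   # ~1,814 nm pitch for today's best displays
print(f"{pitch_nm(100_000):.0f} nm")  # ~254 nm pitch, room for a 100-nm pixel
```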

Another group tried its hand with perovskites, cage-shaped materials best known for their prowess in high-efficiency solar panels. Perovskites have recently gained traction in LEDs too. “We wanted to see what would happen if we make perovskite LEDs smaller, all the way down to the micrometer and nanometer length-scale,” says Dawei Di, an engineer at Zhejiang University in Hangzhou, China.

Di’s group started with comparatively colossal perovskite LED pixels, measuring hundreds of micrometers. Then, they fabricated sequences of smaller and smaller pixels, each tinier than the last. Even after the 1 μm mark, they did not stop: 890 nm, then 440 nm, only bottoming out at 90 nm. These 90 nm red and green pixels, presented in a March 2025 Nature paper, likely represent the smallest LEDs reported to date.

Efficiency Challenges

Unfortunately, small size comes at a cost: Shrinking LEDs also shrinks their efficiency. Di’s group’s perovskite nanoLEDs have external quantum efficiencies—a measure of how many injected electrons are converted into photons—around 5 to 10 percent; Shih’s group’s nano-OLED arrays performed slightly better, topping 13 percent. For comparison, a typical millimeter-sized III-V LED can reach 50 to 70 percent, depending on its color.
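
Because external quantum efficiency is just photons out per electron in, it also sets how much light a given drive current can yield. A small illustrative calculation in Python; the current and efficiency below are assumed values, not figures from either paper:

```python
ELEMENTARY_CHARGE = 1.602e-19  # coulombs per electron

def photon_rate(drive_current_a: float, eqe: float) -> float:
    """Photons emitted per second for a given drive current and EQE."""
    electrons_per_second = drive_current_a / ELEMENTARY_CHARGE
    return eqe * electrons_per_second

# Illustrative: 1 microamp through a nanoLED at 10 percent EQE emits
# about 6e11 photons per second; a 60-percent-EQE device would emit 6x that.
print(f"{photon_rate(1e-6, 0.10):.2e} photons/s")
```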

Shih, however, is optimistic that modifying how nano-OLEDs are made can boost their efficiency. “In principle, you can achieve 30 percent, 40 percent external quantum efficiency with OLEDs, even with a smaller pixel, but it takes time to optimize the process,” Shih says.

Di thinks that researchers could take perovskite nanoLEDs to less dire efficiencies by tinkering with the material. Although his group is now focusing on the larger perovskite microLEDs, Di expects researchers will eventually reckon with nanoLEDs’ efficiency gap. If applications of smaller LEDs become appealing, “this issue could become increasingly important,” Di says.

What Can NanoLEDs Be Used For?

What can you actually do with LEDs this small? Today, the push for tinier pixels largely comes from devices like smart glasses and virtual reality headsets. Makers of these displays are hungry for smaller and smaller pixels in a chase for bleeding-edge picture quality with low power consumption (one reason that efficiency is important). Polar Light’s Fajerson says that smart-glass manufacturers today are already seeking 3 μm pixels.

But researchers are skeptical that VR displays will ever need pixels smaller than around 1 μm. Shrink pixels too far beyond that, and they’ll cross their light’s diffraction limit—that means they’ll become too small for the human eye to resolve. Shih’s and Di’s groups have already crossed the limit with their 100-nm and 90-nm pixels.
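
The threshold behind that argument is the Abbe diffraction limit: Conventional optics cannot resolve features spaced much closer than the wavelength divided by twice the numerical aperture. A back-of-the-envelope check in Python, assuming green light at 530 nm and an idealized numerical aperture of 1:

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float = 1.0) -> float:
    """Smallest optically resolvable feature spacing (Abbe limit)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light at ~530 nm: nothing finer than ~265 nm can be resolved,
# so 90- and 100-nm pixels are already past the limit.
print(f"{abbe_limit_nm(530):.0f} nm")
```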

Very tiny LEDs may instead find use in on-chip photonics systems, allowing the likes of AI data centers to communicate with greater bandwidths than they can today. Chip manufacturing giant TSMC is already trying out microLED interconnects, and it’s easy to imagine chipmakers turning to even smaller LEDs in the future.

But the tiniest nanoLEDs may have even more exotic applications, because they’re smaller than the wavelengths of their light. “From a process point of view, you are making a new component that was not possible in the past,” Shih says.

For example, Shih’s group showed their nano-OLEDs could form a metasurface—a structure that uses its pixels’ nano-sizes to control how each pixel interacts with its neighbors. One day, similar devices could focus nanoLED light into laser-like beams or create holographic 3D nanoLED displays.

Reference: https://ift.tt/aZz4Xvs

What the FDA’s 2026 Update Means for Wearables




As new consumer hardware and software capabilities have bumped up against medicine over the last few years, consumers and manufacturers alike have struggled to identify the line between “wellness” products, such as earbuds that can also amplify and clarify nearby speakers’ voices, and regulated medical devices, such as conventional hearing aids. On January 6, 2026, the U.S. Food and Drug Administration issued new guidance documents clarifying how it interprets existing law for the review of wearable and AI-assisted devices.

The first document, for general wellness, specifies that the FDA will interpret noninvasive sensors such as sleep trackers or heart rate monitors as low-risk wellness devices while treating invasive devices under conventional regulations. The other document defines how the FDA will exempt clinical decision support tools from medical device regulations, limiting such software to analyzing existing data rather than extracting data from sensors, and requiring such tools to enable independent review of their recommendations. The documents do not rewrite any statutes, but they refine the agency’s interpretation of existing law relative to the 2019 and 2022 documents they replace. They offer a fresh lens on how regulators see technology that sits at the intersection of consumer electronics, software, and medicine—a category many other countries are choosing to regulate more strictly rather than less.

What the 2026 update changed

The 2026 update clarifies how the FDA distinguishes between “medical information” and systems that measure physiological “signals” or “patterns.” Earlier guidance discussed these concepts more generally, but the new version defines signal-measuring systems as those that collect continuous, near-continuous, or streaming data from the body for medical purposes, such as home devices transmitting blood pressure, oxygen saturation, or heart rate to clinicians. It gives more concrete examples, such as a blood glucose lab result counting as medical information versus continuous glucose monitor readings counting as signals or patterns.

The updated guidance also sharpens examples of what counts as medical information that software may display, analyze, or print. These include radiology reports or summaries from legally marketed software, ECG reports annotated by clinicians, blood pressure results from cleared devices, and lab results stored in electronic health records.

In addition, the 2026 update softens the FDA’s earlier stance on clinical decision tools that offer only one recommendation. While prior guidance suggested tools needed to present multiple options to avoid regulation, the FDA now indicates that a single recommendation may be acceptable if only one option is clinically appropriate, though it does not define how that determination will be made.

Separately, updates to the general wellness guidance clarify that some non-invasive wearables—such as optical sensors estimating blood glucose for wellness or nutrition awareness—may qualify as general wellness products, while more invasive technologies would not.

Wellness still requires accuracy

For designers of wearable health devices, the practical implications go well beyond what label you choose. “Calling something ‘wellness’ doesn’t reduce the need for rigorous validation,” says Omer Inan, a medical device technology researcher at the Georgia Tech School of Electrical and Computer Engineering. A wearable that reports blood pressure inaccurately could lead a user to conclude that their values are normal when they are not—potentially influencing decisions about seeking clinical care.

“In my opinion, engineers designing devices to deliver health and wellness information to consumers should not change their approach based on this new guidance,” says Inan. Certain measurements—such as blood pressure or glucose—carry real medical consequences regardless of how they’re branded, Inan notes.

Unless engineers follow robust validation protocols for technology delivering health and wellness information, Inan says, consumers and clinicians alike face the risk of faulty information.

To address that, Inan advocates for transparency: companies should publish their validation results in peer-reviewed journals, and independent third parties without financial ties to the manufacturer should evaluate these systems. That approach, he says, helps the engineering community and the broader public assess the accuracy and reliability of wearable devices.

When wellness meets medicine

The societal and clinical impacts of wearables are already visible, regardless of regulatory labels, says Sharona Hoffman, JD, a law and bioethics professor at Case Western Reserve University.

Medical metrics from devices like the Apple Watch or Fitbit may be framed as “wellness,” but in practice many users treat them like medical data, influencing their behavior or decisions about care, Hoffman points out.

“It could cause anxiety for patients who constantly check their metrics,” she notes. Alternatively, “A person may enter a doctor’s office confident that their wearable has diagnosed their condition, complicating clinical conversations and decision-making.”

Moreover, privacy issues remain unresolved and go unmentioned in both the previous and the updated guidance documents. Many companies that design wellness devices fall outside protections like the Health Insurance Portability and Accountability Act (HIPAA), meaning data about health metrics could be collected, shared, or sold without the same constraints as traditional medical data. “We don’t know what they’re collecting information about or whether marketers will get hold of it,” Hoffman says.

International approaches

The European Union’s Artificial Intelligence Act designates systems that process health-related data or influence clinical decisions as “high risk,” subjecting them to stringent requirements around data governance, transparency, and human oversight. China and South Korea have also implemented rules that tighten controls on algorithmic systems that intersect with healthcare or public-facing use cases. South Korea gives technology makers very specific regulatory categories, such as standards for the labeling and description of medical devices and for good manufacturing practices.

Across these regions, regulators are not only classifying technology by its intended use but also by its potential impact on individuals and society at large.

“Other countries that emphasize technology are still worrying about data privacy and patients,” Hoffman says. “We’re going in the opposite direction.”

Post-market oversight

“Regardless of whether something is FDA approved, these technologies will need to be monitored in the sites where they’re used,” says Todd R. Johnson, a professor of biomedical informatics at the McWilliams School of Biomedical Informatics at UTHealth Houston, who has worked on FDA-regulated products and informatics in clinical settings. “There’s no way the makers can ensure ahead of time that all of the recommendations will be sound.”

Large health systems may have the capacity to audit and monitor tools, but smaller clinics often do not. Monitoring and auditing are not emphasized in the current guidance, raising questions about how reliability and safety will be maintained once devices and software are deployed widely.

Balancing innovation and safety

For engineers and developers, the FDA’s 2026 guidance presents both opportunities and responsibilities. By clarifying what counts as a regulated device, the agency may reduce upfront barriers for some categories of technology. But that shift also places greater weight on design rigor, validation transparency, and post-market scrutiny.

“Device makers do care about safety,” Johnson says. “But regulation can increase barriers to entry while also increasing safety and accuracy. There’s a trade-off.”

Reference: https://ift.tt/PAOhzct
