Wednesday, February 18, 2026

Tomorrow’s Smart Pills Will Deliver Drugs and Take Biopsies




One day soon, a doctor might prescribe a pill that doesn’t just deliver medicine but also reports back on what it finds inside you—and then takes actions based on its findings.

Instead of scheduling an endoscopy or CT scan, you’d swallow an electronic capsule smaller than a multivitamin. As it travels through your digestive system, it could check tissue health, look for cancerous changes, and send data to your doctor. It could even release drugs exactly where they’re needed or snip a tiny biopsy sample before passing harmlessly out of your body.

This dream of a do-it-all pill is driving a surge of research into ingestible electronics: smart capsules designed to monitor and even treat disease from inside the gastrointestinal (GI) tract. The stakes are high. GI diseases affect tens of millions of people worldwide, including such ailments as inflammatory bowel disease, celiac disease, and small intestinal bacterial overgrowth. Diagnosis often involves a frustrating maze of blood tests, imaging, and invasive endoscopy. Treatments, meanwhile, can bring serious side effects because drugs affect the whole body, not just the troubled gut.

If capsules could handle much of that work—streamlining diagnosis, delivering targeted therapies, and sparing patients repeated invasive procedures—they could transform care. Over the past 20 years, researchers have built a growing tool kit of ingestible devices, some already in clinical use. These capsule-shaped devices typically contain sensors, circuitry, a power source, and sometimes a communication module, all enclosed in a biocompatible shell. But the next leap forward is still in development: autonomous capsules that can both sense and act, releasing a drug or taking a tissue sample.

That’s the challenge that our lab—the MEMS Sensors and Actuators Laboratory (MSAL) at the University of Maryland, College Park—is tackling. Drawing on decades of advances in microelectromechanical systems (MEMS), we’re building swallowable devices that integrate sensors, actuators, and wireless links in packages that are small and safe enough for patients. The hurdles are considerable: power, miniaturization, biocompatibility, and reliability, to name a few. But the potential payoff will be a new era of personalized and minimally invasive medicine, delivered by something as simple as a pill you can swallow at home.

The Origin of Ingestible Devices

The idea of a smart capsule has been around since the late 1950s, when researchers first experimented with swallowable devices to record temperature, gastric pH, or pressure inside the digestive tract. At the time, it seemed closer to science fiction than clinical reality, bolstered by pop-culture visions like the 1966 film Fantastic Voyage, where miniaturized doctors travel inside the human body to treat a blood clot.

One of the authors (Ghodssi) holds a miniaturized drug-delivery capsule that’s designed to release medication at specific sites in the gastrointestinal tract. Maximilian Franz/Engineering at Maryland Magazine

For decades, though, the mainstay of GI diagnostics was endoscopy: a camera on a flexible tube, threaded down the throat or up through the colon. These procedures are quite invasive and require patients to be sedated, which increases both the risk of complications and procedural costs. What’s more, it’s difficult for endoscopes to safely traverse the circuitous pathway of the small intestine. The situation changed in the early 2000s, when video-capsule endoscopy arrived. The best-known product, PillCam, looks like a large vitamin but contains a camera, LEDs, and a transmitter. As it passes through the gut, it beams images and videos to a wearable device.

Today, capsule endoscopy is a routine tool in gastroenterology; ingestible devices can measure acidity, temperature, or gas concentrations. And researchers are pushing further, with experimental prototypes that deliver drugs or analyze the microbiome. For example, teams from Tufts University, in Massachusetts, and Purdue University, in Indiana, are working on devices with dissolvable coatings and mechanisms to collect samples of liquid for studies of the intestinal microbiome.

Still, all those devices are passive. They activate on a timer or by exposure to the neutral pH of the intestines, but they don’t adapt to conditions in real time. The next step requires capsules that can sense biomarkers, make decisions, and trigger specific actions—moving from clever hardware to truly autonomous “smart pills.” That’s where our work comes in.

Building on MEMS Technology

Since 2017, MSAL has been pushing ingestible devices forward with the goal of making an immediate impact in health care. The group built on the MEMS community’s legacy in microfabrication, sensors, and system integration, while taking advantage of new tools like 3D printing and materials like biocompatible polymers. Those advances have made it possible to prototype faster and shrink devices smaller, sparking a wave of innovation in wearables, implants, and now ingestibles. Today, MSAL is collaborating with engineers, physicians, and data scientists to move these capsules from lab benches to pharmaceutical trials.

As a first step, back in 2017, we set out to design sensor-carrying capsules that could reliably reach the small intestine and indicate when they reached it. Another challenge was that sensors that work well on the benchtop can falter inside the gut, where shifting pH, moisture, digestive enzymes, and low-oxygen conditions can degrade typical sensing components.

Our earliest prototype adapted MEMS sensing technology to detect abnormal enzyme levels in the duodenum that are linked to pancreatic function. The sensor and its associated electronics were enclosed in a biocompatible, 3D-printed shell coated with polymers that dissolved only at certain pH levels. This strategy could one day be used to screen secretions from the pancreas for biomarkers of early-stage cancer.

A high-speed video shows how a capsule deploys microneedles to deliver drugs into intestinal tissue. University of Maryland/Elsevier

That first effort with a passive device taught us the fundamentals of capsule design and opened the door to new applications. Since then, we’ve developed sensors that can track biomarkers such as the gas hydrogen sulfide, neurotransmitters such as serotonin and dopamine, and bioimpedance—a measure of how easily ions pass through intestinal tissue—to shed light on the gut microbiome, inflammation, and disease progression. In parallel, we’ve worked on more-active devices: capsule-based tools for controlled drug release and tissue biopsy, using low-power actuators to trigger precise mechanical movements inside the gut.

Like all new medical devices and treatments, ingestible electronics face many hurdles before they reach patients—from earning physician trust and insurance approval to demonstrating clear benefits, safety, and reliability. Packaging is a particular focus, as the capsules must be easy to swallow yet durable enough to survive stomach acid. The field is steadily proving safety and reliability, progressing from proof of concept in tissue, through the different stages of animal studies, and eventually to human trials. Every stage provides evidence that reassures doctors and patients—for example, showing that ingesting a properly packaged tiny battery is safe, and that a capsule’s wireless signals, far weaker than those of a cellphone, pose no health risk as they pass through the gut.

Engineering a Pill-Size Diagnostic Lab

The gastrointestinal tract is packed with clues about health and disease, but much of it remains out of reach of standard diagnostic tools. Ingestible capsules offer a way in, providing direct access to the small intestine and colon. Yet in many cases, the concentrations of chemical biomarkers can be too low to detect reliably in early stages of a disease, which makes the engineering challenge formidable. What’s more, the gut’s corrosive, enzyme-rich environment can foul sensors in multiple ways, interfering with measurements and adding noise to the data.

Microneedle designs for drug-delivery capsules have evolved over the years. An early prototype [top] used microneedle anchors to hold a capsule in place. Later designs adopted molded microneedle arrays [center] for more uniform fabrication. The most recent version [bottom] integrates hollow microinjector needles, allowing more precise and controllable drug delivery. From top: University of Maryland/Wiley; University of Maryland/Elsevier; University of Maryland/ACS

Take, for example, inflammatory bowel disease, for which there is no standard clinical test. Rather than searching for a scarce biomarker molecule, our team focused on a physical change: the permeability of the gut lining, which is a key factor in the disease. We designed capsules that measure the intestinal tissue’s bioimpedance by sending tiny currents across electrodes and recording how the tissue resists or conducts those currents at different frequencies (a technique called impedance spectroscopy). To make the electrodes suitable for in vivo use, we coated them with a thin, conductive, biocompatible polymer that reduces electrical noise and keeps stable contact with the gut wall. The capsule finishes its job by transmitting its data wirelessly to our computers.
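
As a rough illustration of the measurement principle (a sketch of ours, not the capsule’s firmware), tissue is often modeled as an extracellular resistance in parallel with an intracellular branch of resistance plus membrane capacitance; sweeping frequency then traces out a characteristic impedance spectrum. All component values below are assumptions:

```python
# Minimal impedance-spectroscopy sketch for a tissue-equivalent circuit:
# extracellular resistance Re in parallel with (intracellular resistance Ri
# in series with membrane capacitance Cm). Component values are assumed.
import numpy as np

def tissue_impedance(freq_hz, re_ohm=1500.0, ri_ohm=600.0, cm_farad=1e-7):
    """Complex impedance of the Re // (Ri + 1/(jwCm)) model."""
    w = 2 * np.pi * freq_hz
    z_branch = ri_ohm + 1.0 / (1j * w * cm_farad)     # Ri in series with Cm
    return (re_ohm * z_branch) / (re_ohm + z_branch)  # parallel combination

# Sweep frequencies the way an impedance-spectroscopy capsule might:
for f in [100, 1_000, 10_000, 100_000]:
    z = tissue_impedance(f)
    print(f"{f:>7} Hz  |Z| = {abs(z):7.1f} ohm  phase = {np.degrees(np.angle(z)):6.1f} deg")
```

At low frequencies the membrane capacitance blocks current, so the reading approaches the extracellular resistance; at high frequencies current also flows through cells and the impedance drops. Changes in tissue permeability shift that spectrum.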

In our lab tests, the capsule performed impressively, delivering clean impedance readouts from excised pig tissue even when the sample was in motion. In our animal studies, it detected shifts in permeability triggered by calcium chelators, compounds that pry open the tight junctions between intestinal cells. These results suggest that ingestible bioimpedance capsules could one day give clinicians a direct, minimally invasive window into gut-barrier function and inflammation. We believe that ingestible diagnostics can serve as powerful tools—catching disease earlier, confirming whether treatments are working, and establishing a baseline for gut health.

Drug Delivery at the Right Place, Right Time

Targeted drug delivery is one of the most compelling applications for ingestible capsules. Many drugs for GI conditions—such as biologics for inflammatory bowel disease—can cause serious side effects that limit both dosage and duration of treatment. A promising alternative is delivering a drug directly to the diseased tissue. This localized approach boosts the drug’s concentration at the target site while reducing its spread throughout the body, which improves effectiveness and minimizes side effects. The challenge is engineering a device that can both recognize diseased tissue and deliver medication quickly and precisely.

With other labs making great progress on the sensing side, we’ve devoted our energy to designing devices that can deliver the medicine. We’ve developed miniature actuators—tiny moving parts—that meet strict criteria for use inside the body: low power, small size, biocompatibility, and long shelf life.

Some of our designs use soft and flexible polymer “cantilevers” with attached microneedle systems that pop out from the capsule with enough force to release a drug, but without harming the intestinal tissue. While hollow microneedles can directly inject drugs into the intestinal lining, we’ve also demonstrated prototypes that use the microneedles for anchoring drug payloads, allowing the capsule to release a larger dose of medication that dissolves at an exact location over time.

In other experimental designs, we had the microneedles themselves dissolve after injecting a drug. In still others, we used microscale 3D printing to tailor the structure of the microneedles and control how quickly a drug is released—providing either a slow and sustained dose or a fast delivery. With this 3D printing, we created rigid microneedles that penetrate the mucosal lining and gradually diffuse the drug into the tissue, and soft microneedles that compress when the cantilever pushes them against the tissue, forcing the drug out all at once.

Tissue Biopsy via Capsule

What Smart Capsules Can Do

Ingestible electronic capsules use miniaturized sensors and actuators to monitor the gut, deliver medication, and collect biological samples.

Sensing

Embedded sensors can probe the gut—for example, measuring the bioimpedance of the intestinal lining to detect disease—and transmit the data wirelessly. All illustrations: Chris Philpot

Drug delivery

Miniature actuators can trigger drug release at specific sites in the gut, boosting effectiveness while limiting side effects.

Biopsy

A spring-loaded mechanism can collect a tiny biopsy sample from the gut wall and store it during the capsule’s passage through the digestive system.

Tissue sampling remains the gold standard diagnostic tool in gastroenterology, offering insights far beyond what doctors can glean from visual inspection or blood tests. Capsules hold unique promise here: They can travel the full length of the GI tract, potentially enabling more frequent and affordable biopsies than traditional procedures. But the engineering hurdles are substantial. To collect a sample, a device must generate significant mechanical force to cut through the tough, elastic muscle of the intestines—while staying small enough to swallow.

Different strategies have been explored to solve this problem. Torsion springs can store large amounts of energy but are difficult to fit inside a tiny capsule. Electrically driven mechanisms may demand more power than current capsule batteries can provide. Magnetic actuation is another option, but it requires bulky external equipment and precise tracking of the capsule inside the body.

Our group has developed a low-power biopsy system that builds on the torsion-spring approach. We compress a spring and use adhesive to “latch” it closed within the capsule, then attach a microheater to the latch. When we wirelessly send current to the device, the microheater melts the adhesive on the latch, triggering the spring. We’ve experimented with tissue-collection tools, integrating a bladed scraper or a biopsy punch (a cylindrical cutting tool) with our spring-activated mechanisms; either of those tools can cut and collect tissue from the intestinal lining. With advanced 3D printing methods like direct laser writing, we can put fine, microscale edges on these miniature cutting tools that make it easier for them to penetrate the intestinal lining.
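
A back-of-envelope budget (all numbers assumed, not device specifications) shows why the latch approach suits a battery-limited capsule: the battery pays only a small one-time release cost, while the cutting force comes from energy stored in the spring before the capsule is ever swallowed:

```python
# Spring-latch energy budget for a capsule biopsy mechanism. All values
# are illustrative assumptions, not measured device parameters.
import math

k_spring = 2e-3       # torsion spring constant, N*m/rad (assumed)
theta = math.pi       # preloaded deflection, rad (assumed half turn)
spring_energy_mj = 0.5 * k_spring * theta**2 * 1e3   # E = (1/2) k theta^2

heater_power_w = 0.1  # microheater drive while melting the adhesive (assumed)
melt_time_s = 2.0     # time to soften the latch (assumed)
release_energy_j = heater_power_w * melt_time_s

print(f"energy stored in spring: {spring_energy_mj:.1f} mJ")
print(f"electrical energy to trigger release: {release_energy_j:.1f} J")
```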

Storing and protecting the sample until the capsule naturally passes through the body is a major challenge, requiring both preservation of the sample and resealing the capsule to prevent contamination. In one of our designs, residual tension in the spring keeps the bladed scraper rotating, pulling the sample into the capsule and effectively closing a hatch that seals it inside.

The Road to Clinical Use for Ingestibles

Looking ahead, we expect to see the first clinical applications emerge in early-stage screening. Capsules that can detect electrochemical, bioimpedance, or visual signals could help doctors make sense of symptoms like vague abdominal pain by revealing inflammation, gut permeability, tumors, or bacterial overgrowth. They could also be adapted to screen for GI cancers. This need is pressing: The American Cancer Society reports that as of 2021, 41 percent of eligible U.S. adults were not up to date on colorectal cancer screening. What’s more, effective screening tools don’t yet exist for some diseases, such as small bowel adenocarcinoma. Capsule technology could make screening less invasive and more accessible.

Of course, ingestible capsules carry risks. The standard hazards of endoscopy still apply, such as the possibility of bleeding and perforation, and capsules introduce new complications. For example, if a capsule gets stuck in its passage through the GI tract, it could cause bowel obstruction and require endoscopic retrieval or even surgery. And concerns that are specific to ingestibles, including the biocompatibility of materials, reliable encapsulation of electronics, and safe battery operation, all demand rigorous testing before clinical use.

A microbe-powered, paper-based biobattery designed for ingestible devices dissolves in water within an hour. Seokheun Choi/Binghamton University

Powering these capsules is a key challenge that must be solved on the path to the clinic. Most capsule endoscopes today rely on coin-cell batteries, typically silver oxide, which offer a safe and energy-dense source but often occupy 30 to 50 percent of the capsule’s volume. So researchers have investigated alternatives, from wireless power transfer to energy-harvesting systems. At the State University of New York at Binghamton, one team is exploring microbial fuel cells that generate electricity from probiotic bacteria interacting with nutrients in the gut. At MIT, researchers used the gastric fluids of a pig’s stomach to power a simple battery. In our own lab, we are exploring piezoelectric and electrochemical approaches to harvesting energy throughout the GI tract.
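
A rough power budget shows what those volume numbers mean in practice. This sketch assumes a silver-oxide cell of about 28 milliampere-hours and generic duty-cycled loads; none of the figures describe a specific product:

```python
# Capsule power-budget sketch on a silver-oxide coin cell. All values
# are generic, datasheet-style assumptions.
battery_mah = 28.0     # small silver-oxide cell, SR626-class (assumed)
battery_v = 1.55       # nominal silver-oxide cell voltage
energy_j = battery_mah * 3.6 * battery_v   # 1 mAh = 3.6 coulombs

avg_sense_mw = 0.3     # duty-cycled sensing and logic (assumed)
avg_radio_mw = 0.7     # duty-cycled wireless link (assumed)
total_w = (avg_sense_mw + avg_radio_mw) / 1000.0

runtime_h = energy_j / total_w / 3600.0
print(f"stored energy ~ {energy_j:.0f} J -> runtime ~ {runtime_h:.0f} h")
# Whole-gut transit typically takes 24 to 72 hours, which sets the bar
# the power budget has to clear.
```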

The next steps for our team are pragmatic ones: working with gastroenterologists and animal-science experts to put capsule prototypes through rigorous in vivo studies, then refining them for real-world use. That means shrinking the electronics, cutting power consumption, and integrating multiple functions into a single multimodal device that can sense, sample, and deliver treatments in one pass. Ultimately, any candidate capsule will require regulatory approval for clinical use, which in turn demands rigorous proof of safety and clinical effectiveness for a specific medical application.

The broader vision is transformative. Swallowable capsules could bring diagnostics and treatment out of the hospital and into patients’ homes. Whereas procedures with endoscopes require anesthesia, patients could take ingestible electronics easily and routinely. Consider, for example, patients with inflammatory bowel disease who live with an elevated risk of cancer; a smart capsule could perform yearly cancer checks, while also delivering medication directly wherever necessary.

Over time, we expect these systems to evolve into semiautonomous tools: identifying lesions, performing targeted biopsies, and perhaps even analyzing samples and applying treatment in place. Achieving that vision will require advances at the very edge of microelectronics, materials science, and biomedical engineering, bringing together capabilities that once seemed impossible to combine in something the size of a pill. These devices hint at a future in which the boundary between biology and technology dissolves, and where miniature machines travel inside the body to heal us from within.

Reference: https://ift.tt/1BZDSIO

Tuesday, February 17, 2026

Most VMware users still "actively reducing their VMware footprint," survey finds


More than two years after Broadcom took over VMware, the virtualization company’s customers are still grappling with higher prices, uncertainty, and the challenges of reducing vendor lock-in.

Today, CloudBolt Software released a report, "The Mass Exodus That Never Was: The Squeeze Is Just Beginning," that provides insight into those struggles. CloudBolt is a hybrid cloud management platform provider that aims to identify VMware customers’ pain points so it can sell them relevant solutions. In the report, CloudBolt said it surveyed 302 IT decision-makers (director-level or higher) at North American companies with at least 1,000 employees in January. The survey is far from comprehensive, but it offers a look at the obstacles these users face.

Broadcom closed its VMware acquisition in November 2023, and last month, 88 percent of survey respondents still described the change as “disruptive.” Per the survey, the most cited drivers of disruption were price increases (named by 89 percent of respondents), followed by uncertainty about Broadcom’s plans (85 percent), support quality concerns (78 percent), Broadcom shifting VMware from perpetual licenses to subscriptions (72 percent), changes to VMware’s partner program (68 percent), and the forced bundling of products (65 percent).


Reference: https://ift.tt/Mpi8gqs

Estimating Surface Heating of an Atmospheric Reentry Vehicle with Simulation




Join Hannah Alpert of NASA Ames to explore thermal data from the record-breaking 6-meter LOFTID inflatable aeroshell. Learn how COMSOL Multiphysics® was used to perform inverse analysis on flight thermocouple data, validating heat flux gauges and preflight CFD predictions. Attendees will gain technical insights into improving thermal models for future HIAD missions, making this session essential for engineers seeking to advance atmospheric reentry design. The session concludes with a live Q&A.

Register now for this free webinar!

Reference: https://ift.tt/jeYzimO

We’re Measuring Data Center Sustainability Wrong




In 2024, Google claimed that its data centers are 1.5 times as energy efficient as the industry average. In 2025, Microsoft committed billions to nuclear power for AI workloads. The data center industry tracks power usage effectiveness to three decimal places and optimizes water usage intensity with machine precision. We report direct emissions and purchased-energy emissions with religious fervor.

These are laudable advances, but these metrics account for only 30 percent of total emissions from the IT sector. The majority of the emissions are not directly from data centers or the energy they use, but from the end-user devices that actually access the data centers, emissions due to manufacturing the hardware, and software inefficiencies. We are frantically optimizing less than a third of the IT sector’s environmental impact, while the bulk of the problem goes unmeasured.

Incomplete regulatory frameworks are part of the problem. In Europe, the Corporate Sustainability Reporting Directive (CSRD) now requires 11,700 companies to report emissions using these incomplete frameworks. The next phase of the directive, covering 40,000+ additional companies, was originally scheduled for 2026 (but is likely delayed to 2028). In the United States, the standards body responsible for IT sustainability metrics (ISO/IEC JTC 1/SC 39) is conducting active revision of its standards through 2026, with a key plenary meeting in May 2026.

The time to act is now. If we don’t fix the measurement frameworks, we risk locking in incomplete data collection and optimizing a fraction of what matters for the next 5 to 10 years, before the next major standards revision.

The limited metrics

Walk into any modern data center and you’ll see sustainability instrumentation everywhere. Power usage effectiveness (PUE) monitors track every watt. Water usage effectiveness (WUE) systems measure water consumption down to the gallon. Sophisticated monitoring captures everything from server utilization to cooling efficiency to renewable energy percentages.

But here’s what those measurements miss: End-user devices globally emit 1.5 to 2 times more carbon than all data centers combined, according to McKinsey’s 2022 report. The smartphones, laptops, and tablets we use to access those ultra-efficient data centers are the bigger problem.


On the conservative end of the range from McKinsey’s report, devices emit 1.5 times as much as data centers. That means that data centers make up 40 percent of total IT emissions, while devices make up 60 percent.

On top of that, approximately 75 percent of device emissions occur not during use, but during manufacturing—this is so-called embodied carbon. For data centers, only 40 percent is embodied carbon, and 60 percent comes from operations (as measured by PUE).

Putting this together, data center operations, as measured by PUE, account for only 24 percent of the total emissions. Data center embodied carbon is 16 percent, device embodied carbon is 45 percent, and device operation is 15 percent.
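
The percentages follow from simple arithmetic on the assumptions above, as this short calculation confirms:

```python
# Reproducing the article's breakdown: devices emit 1.5x what data centers
# do; 75% of device emissions and 40% of data-center emissions are embodied.
dc_share = 1.0 / (1.0 + 1.5)    # data centers: 40% of total IT emissions
dev_share = 1.5 / (1.0 + 1.5)   # devices: 60%

shares = {
    "data center operations (PUE-measured)": dc_share * 0.60,  # 24%
    "data center embodied carbon": dc_share * 0.40,            # 16%
    "device embodied carbon": dev_share * 0.75,                # 45%
    "device operations": dev_share * 0.25,                     # 15%
}
for name, value in shares.items():
    print(f"{name:40s} {value:.0%}")
```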

Under the EU’s current CSRD framework, companies must report their emissions in three categories: direct emissions from owned sources, indirect emissions from purchased energy, and a third category for everything else.

This “everything else” category does include device emissions and embodied carbon. However, those emissions are reported as aggregate totals broken down by accounting category—Capital Goods, Purchased Goods and Services, Use of Sold Products—but not by product type. How much comes from end-user devices versus data center infrastructure, or employee laptops versus network equipment, remains murky and therefore unoptimized.

Embodied carbon and hardware reuse

Manufacturing a single smartphone generates approximately 50 kg CO2 equivalent (CO2e). For a laptop, it’s 200 kg CO2e. With 1 billion smartphones replaced annually, that’s 50 million tonnes of CO2e per year just from smartphone manufacturing, before anyone even turns them on. On average, smartphones are replaced every 2 years, laptops every 3 to 4 years, and printers every 5 years. Data center servers are replaced approximately every 5 years.


There are programs geared towards reusing old components that are still functional and integrating them into new servers. GreenSKUs and similar initiatives show that 8 percent reductions in embodied carbon are achievable. But these remain pilot programs, not systematic approaches. And critically, they’re measured only in the data center context, not across the entire IT stack.

Imagine applying the same circular economy principles to devices. With over 2 billion laptops in existence globally and 2-3-year replacement cycles, even modest lifespan extensions create massive emission reductions. Extending smartphone lifecycles to 3 years instead of 2 would reduce annual manufacturing emissions by 33 percent. At scale, this dwarfs data center optimization gains.
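
The 33 percent figure is steady-state replacement arithmetic, sketched below with the fleet size and per-unit embodied carbon quoted earlier:

```python
# Steady-state replacement math: fleet / lifetime units are built each year.
# Fleet size and per-unit embodied carbon follow the article's figures.
fleet = 2e9          # smartphones in active use (1e9 replaced/yr * 2-yr life)
embodied_kg = 50.0   # kg CO2e to manufacture one smartphone

def annual_mfg_emissions(lifetime_years):
    units_per_year = fleet / lifetime_years
    return units_per_year * embodied_kg / 1e9   # kg -> megatonnes CO2e

base = annual_mfg_emissions(2)       # ~50 Mt CO2e/yr, matching the article
extended = annual_mfg_emissions(3)   # ~33 Mt CO2e/yr
print(f"2-year cycle: {base:.0f} Mt/yr; 3-year cycle: {extended:.0f} Mt/yr")
print(f"reduction: {1 - extended / base:.0%}")   # 33 percent
```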

Yet data center reuse gets measured, reported, and optimized. Device reuse doesn’t, because the frameworks don’t require it.

The invisible role of software

Leading load balancer infrastructure across IBM Cloud, I see how software architecture decisions ripple through energy consumption. Inefficient code doesn’t just slow things down—it drives up both data center power consumption and device battery drain.

For example, University of Waterloo researchers showed that they could cut data center energy use by 30 percent by changing just 30 lines of code. From my perspective, this result is not an anomaly—it’s typical. Bad software architecture forces unnecessary data transfers, redundant computations, and excessive resource use. But unlike data center efficiency, there’s no commonly accepted metric for software efficiency.

This matters more now than ever. With AI workloads driving massive data center expansion—projected to consume 6.7-12 percent of total U.S. electricity by 2028, according to Lawrence Berkeley National Laboratory—software efficiency becomes critical.

What needs to change

The solution isn’t to stop measuring data center efficiency. It’s to measure device sustainability with the same rigor. Specifically, standards bodies (particularly ISO/IEC JTC 1/SC 39 WG4: Holistic Sustainability Metrics) should extend frameworks to include device lifecycle tracking, software efficiency metrics, and hardware reuse standards.

To track device lifecycles, we need standardized reporting of device embodied carbon, broken out separately by device. One aggregate number in an “everything else” category is insufficient. We need specific device categories with manufacturing emissions and replacement cycles visible.

To include software efficiency, I advocate developing a PUE-equivalent for software, such as energy per transaction, per API call, or per user session. This needs to be a reportable metric under sustainability frameworks so companies can demonstrate software optimization gains.
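
As a sketch of what such a metric could look like (our illustration, not a standardized definition), energy per request falls out of average power draw and throughput; the numbers below are hypothetical measurements:

```python
# A PUE-style software efficiency metric: joules per served request,
# derived from average power draw and request throughput (hypothetical).
def energy_per_request(avg_power_watts, requests_per_second):
    """Average energy cost of one request, in joules."""
    return avg_power_watts / requests_per_second

# Before and after a software optimization that raises throughput at the
# same power draw:
before = energy_per_request(avg_power_watts=400.0, requests_per_second=1000.0)
after = energy_per_request(avg_power_watts=400.0, requests_per_second=1500.0)
print(f"before: {before:.2f} J/request, after: {after:.2f} J/request")
print(f"efficiency gain: {1 - after / before:.0%}")   # 33% less energy/request
```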

To encourage hardware reuse, we need to systematize reuse metrics across the full IT stack—servers and devices. This includes tracking repair rates, developing large-scale refurbishment programs, and tracking component reuse with the same detail currently applied to data center hardware.

To put it all together, we need a unified IT emission-tracking dashboard. CSRD reporting should show device embodied carbon alongside data center operational emissions, making the full IT sustainability picture visible at a glance.

These aren’t radical changes—they’re extensions of measurement principles already proven in data center context. The first step is acknowledging what we’re not measuring. The second is building the frameworks to measure it. And the third is demanding that companies report the complete picture—data centers and devices, servers and smartphones, infrastructure and software.

Because you can’t fix what you can’t see. And right now, we’re not seeing 70 percent of the problem.

Reference: https://ift.tt/s0w5lJz

Monday, February 16, 2026

This Former Physicist Helps Keep the Internet Secure




When Alan DeKok began a side project in network security, he didn’t expect to start a 27-year career. In fact, he didn’t initially set out to work in computing at all.

DeKok studied nuclear physics before making the switch to a part of network computing that is foundational but—like nuclear physics—largely invisible to those not directly involved in the field. Eventually, a project he started as a hobby became a full-time job: maintaining one of the primary systems that helps keep the internet secure.

Alan DeKok


Employer

InkBridge Networks

Occupation

CEO

Education

Bachelor’s degree in physics, Carleton University; master’s degree in physics, Carleton University

Today, he leads the FreeRADIUS Project, which he cofounded in the late 1990s to develop what is now the most widely used Remote Authentication Dial-In User Service (RADIUS) software. FreeRADIUS is an open-source server that provides back-end authentication for most major internet service providers. It’s used by global financial institutions, Wi-Fi services like Eduroam, and Fortune 50 companies. DeKok is also CEO of InkBridge Networks, which maintains the server and provides support for the companies that use it.

Reflecting on nearly three decades of experience leading FreeRADIUS, DeKok says he became an expert in remote authentication “almost by accident,” and the key to his career has largely been luck. “I really believe that it’s preparing yourself for luck, being open to it, and having the skills to capitalize on it.”

From Farming to Physics

DeKok grew up on a farm outside of Ottawa growing strawberries and raspberries. “Sitting on a tractor in the heat is not particularly interesting,” says DeKok, who was more interested in working with 8-bit computers than crops. As a student at Carleton University, in Ottawa, he found his way to physics because he was interested in math but preferred the practicality of science.

While pursuing a master’s degree in physics, also at Carleton, he worked on a water-purification system for the Sudbury Neutrino Observatory, an underground observatory then being built at the bottom of a nickel mine. He would wake up at 4:30 in the morning to drive up to the site, descend 2 kilometers, then enter one of the world’s deepest clean-room facilities to work on the project. The system managed to achieve one atom of impurity per cubic meter of water, “which is pretty insane,” DeKok says.

But after his master’s degree, DeKok decided to take a different route. Although he found nuclear physics interesting, he says he didn’t see it as his life’s work. Meanwhile, the Ph.D. students he knew were “fanatical about physics.” He had kept up his computing skills through his education, which involved plenty of programming, and decided to look for jobs at computing companies. “I was out of physics. That was it.”

Still, physics taught him valuable lessons. For one, “You have to understand the big picture,” DeKok says. “The ability to tell the big-picture story in standards, for example, is extremely important.” This skill helps DeKok explain to standards bodies how a protocol acts as one link in the entire chain of events that needs to occur when a user wants to access the internet.

He also learned that “methods are more important than knowledge.” It’s easy to look up information, but physics taught DeKok how to break down a problem into manageable pieces to come up with a solution. “When I was eventually working in the industry, the techniques that came naturally to me, coming out of physics, didn’t seem to be taught as well to the people I knew in engineering,” he says. “I could catch up very quickly.”

Founding FreeRADIUS

In 1996, DeKok was hired as a software developer at a company called Gandalf, which made equipment for ISDN, a precursor to broadband that enabled digital transmission of data over telephone lines. Gandalf went under about a year later, and he joined CryptoCard, a company providing hardware devices for two-factor authentication.

While at CryptoCard, DeKok began spending more time working with a RADIUS server. When users want to connect to a network, RADIUS acts as a gatekeeper and verifies their identity and password, determines what they can access, and tracks sessions. DeKok moved on to a new company in 1999, but he didn’t want to lose the networking skills he had developed. No other open-source RADIUS servers were being actively developed at the time, and he saw a gap in the market.
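
For readers who have never touched RADIUS, a minimal FreeRADIUS setup gives the flavor. An entry in the server’s users file pairs a credential check with reply attributes, and the bundled radtest utility exercises the exchange; the user name, password, and shared secret below are placeholders:

```
# Entry in the FreeRADIUS "users" file: authorize "alice" with a password
# check and attach a reply attribute to the Access-Accept.
alice   Cleartext-Password := "s3cret"
        Reply-Message = "Hello, %{User-Name}"

# From a shell, send an Access-Request to a server running locally
# (usage: radtest <user> <password> <server> <nas-port> <shared-secret>):
$ radtest alice s3cret 127.0.0.1 0 testing123
```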

The same year, he started FreeRADIUS in his free time and it “gradually took over my life,” DeKok says. He continued to work on the open-source software as a hobby for several years while bouncing around companies in California and France. “Almost by accident, I became one of the more senior people in the space. Then I doubled down on that and started the business.” He founded NetworkRADIUS (now called InkBridge Networks) in 2008.

By that point, FreeRADIUS was already being used by 100 million people daily. The company now employs experts in Canada, France, and the United Kingdom who work together to support FreeRADIUS. “I’d say at least half of the people in the world get on the internet by being authenticated through my software,” DeKok estimates. He attributes that growth largely to the software being open source. Initially a way to enter the market with little funding, going open source has allowed FreeRADIUS to compete with bigger companies as an industry-leading product.

Although the software is critical for maintaining secure networks, most people aren’t aware of it because it works behind the scenes. DeKok is often met with surprise that it’s still in use. He compares RADIUS to a building foundation: “You need it, but you never think about it until there’s a crack in it.”

27 Years of Fixes

Over the years, DeKok has maintained FreeRADIUS by continually making small fixes. Like using a ratcheting tool to make a change inch by inch, “you shouldn’t underestimate that ratchet effect of tiny little fixes that add up over time,” he says.

He’s seen the project through minor patches and more significant fixes, like when researchers exposed a widespread vulnerability DeKok had been trying to fix since 1998. He also watched a would-be successor to the network protocol, Diameter, rise and fall in popularity in the 2000s and 2010s. (Diameter gained traction in mobile applications but has gradually been phased out in the shift to 5G.) Though Diameter offers improvements, RADIUS is far simpler and already widely implemented, giving it an edge, DeKok explains.

And he remains confident about its future. “People ask me, ‘What’s next for RADIUS?’ I don’t see it dying.” Estimating that billions of dollars of equipment run RADIUS, he says, “It’s never going to go away.”

About his own career, DeKok says he plans to keep working on FreeRADIUS, exploring new markets and products. “I never expected to have a company and a lot of people working for me, my name on all kinds of standards, and customers all over the world. But it worked out that way.”

This article appears in the March 2026 print issue as “Alan DeKok.”

Reference: https://ift.tt/wVCcBGU

Sunday, February 15, 2026

NASA Let AI Drive the Perseverance Rover




In December, NASA took another small, incremental step towards autonomous surface rovers. In a demonstration, the Perseverance team used AI to generate the rover’s waypoints. Perseverance used the AI waypoints on two separate days, traveling a total of 456 meters without human control.

“This demonstration shows how far our capabilities have advanced and broadens how we will explore other worlds,” said NASA Administrator Jared Isaacman. “Autonomous technologies like this can help missions to operate more efficiently, respond to challenging terrain, and increase science return as distance from Earth grows. It’s a strong example of teams applying new technology carefully and responsibly in real operations.”


This post originally appeared on Universe Today.

Mars is a long way away, and there’s about a 25-minute delay for a round trip signal between Earth and Mars. That means that one way or another, rovers are on their own for short periods of time.

The delay shapes the route-planning process. Rover drivers here on Earth examine images and elevation data and program a series of waypoints, usually spaced no more than 100 meters apart. The driving plan is sent to NASA’s Deep Space Network (DSN), which transmits it to one of several orbiters, which then relay it to Perseverance. (Perseverance can receive direct comms from the DSN as a backup, but the data rate is slower.)
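
The quoted delay checks out with simple geometry; the separation below is an assumed mid-range value, since the Earth-Mars distance swings between roughly 0.38 and 2.7 astronomical units:

```python
# One-way light time between Earth and Mars at an assumed separation.
AU_M = 1.496e11   # meters per astronomical unit
C = 2.998e8       # speed of light, m/s

distance_au = 1.5                                # assumed separation
one_way_min = distance_au * AU_M / C / 60
print(f"one-way: {one_way_min:.1f} min, round trip: {2 * one_way_min:.1f} min")
# ~12.5 minutes one way, ~25 minutes round trip at this separation.
```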

AI Enhances Mars Rover Navigation

In this demonstration, the AI model analyzed orbital images from the Mars Reconnaissance Orbiter’s HiRISE camera, as well as digital elevation models. The AI, which is based on Anthropic’s Claude AI, identified hazards like sand traps, boulder fields, bedrock, and rocky outcrops. Then it generated a path defined by a series of waypoints that avoids the hazards. From there, Perseverance’s auto-navigation system took over. It has more autonomy than its predecessors and can process images and driving plans while in motion.
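
To make the planning step concrete, here is a toy sketch (ours for illustration, not NASA’s planner): hazard avoidance framed as A* search over a small occupancy grid of the kind that could be derived from orbital imagery, where 1 marks a hazard cell to route around.

```python
# Toy hazard-avoiding waypoint planner: A* over an occupancy grid.
# The grid, start, and goal are invented for illustration.
import heapq

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:                  # already expanded more cheaply
            continue
        came_from[cur] = parent
        if cur == goal:                       # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cur))
    return None   # no safe route exists

hazards = [[0, 0, 0, 1, 0],
           [1, 1, 0, 1, 0],
           [0, 0, 0, 1, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0]]
print(plan(hazards, (0, 0), (4, 4)))   # waypoints skirting the hazard cells
```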

There was another important step before these waypoints were transmitted to Perseverance. NASA’s Jet Propulsion Laboratory has a “twin” for Perseverance called the “Vehicle System Test Bed” (VSTB) in JPL’s Mars Yard. It’s an engineering model that the team can work with here on Earth to solve problems, or for situations like this. These engineering versions are common on Mars missions, and JPL has one for Curiosity, too.

“The fundamental elements of generative AI are showing a lot of promise in streamlining the pillars of autonomous navigation for off-planet driving: perception (seeing the rocks and ripples), localization (knowing where we are), and planning and control (deciding and executing the safest path),” said Vandi Verma, a space roboticist at JPL and a member of the Perseverance engineering team. “We are moving towards a day where generative AI and other smart tools will help our surface rovers handle kilometer-scale drives while minimizing operator workload, and flag interesting surface features for our science team by scouring huge volumes of rover images.”

AI’s Expanding Role in Space Exploration

AI is rapidly becoming ubiquitous in our lives, showing up in places that don’t necessarily have a strong use case for it. But this isn’t NASA hopping on the AI bandwagon. They’ve been developing automatic navigation systems for a while, out of necessity. In fact, Perseverance’s primary means of driving is its self-driving autonomous navigation system.

One thing that prevents fully autonomous driving is the way uncertainty grows as the rover operates without human assistance. The longer the rover travels, the more uncertain it becomes about its position on the surface. The solution is to re-localize the rover on its map. Currently, humans do this. But this takes time, including a complete communication cycle between Earth and Mars. Overall, it limits how far Perseverance can go without a helping hand.
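
A toy dead-reckoning model illustrates the problem; the per-meter slip and heading noise below are invented for illustration, not rover telemetry:

```python
# Dead-reckoning drift: small per-step heading and slip errors compound,
# so position uncertainty grows with distance driven. Parameters assumed.
import math, random

def mean_drift(meters, slip_sigma=0.02, heading_sigma_deg=0.5, trials=1000):
    errs = []
    for _ in range(trials):
        x = y = heading = 0.0
        for _ in range(int(meters)):          # one noisy 1-meter step at a time
            heading += random.gauss(0, math.radians(heading_sigma_deg))
            step = 1.0 + random.gauss(0, slip_sigma)
            x += step * math.cos(heading)
            y += step * math.sin(heading)
        errs.append(math.hypot(x - meters, y))   # error vs. the intended line
    return sum(errs) / len(errs)

for d in (50, 100, 200):
    print(f"{d} m drive -> mean position error ~ {mean_drift(d):.2f} m")
```

Re-localizing against the orbital map resets that accumulated error, which is why it currently bounds how far the rover can drive unassisted.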

NASA/JPL is also working on a way that Perseverance can use AI to re-localize. The main roadblock is matching orbital images with the rover’s ground-level images. It seems highly likely that AI will be trained to excel at this.

It’s obvious that AI is set to play a much larger role in planetary exploration. The next Mars rover may be much different than current ones, with more advanced autonomous navigation and other AI features. There are already concepts for a swarm of flying drones released by a rover to expand its explorative reach on Mars. These swarms would be controlled by AI to work together and autonomously.

And it’s not just Mars exploration that will benefit from AI. NASA’s Dragonfly mission to Saturn’s moon Titan will make extensive use of AI. Not only for autonomous navigation as the rotorcraft flies around, but also for autonomous data curation.

“Imagine intelligent systems not only on the ground at Earth, but also in edge applications in our rovers, helicopters, drones, and other surface elements trained with the collective wisdom of our NASA engineers, scientists, and astronauts,” said Matt Wallace, manager of JPL’s Exploration Systems Office. “That is the game-changing technology we need to establish the infrastructure and systems required for a permanent human presence on the Moon and take the U.S. to Mars and beyond.”

Reference: https://ift.tt/edrpxSF

Saturday, February 14, 2026

Sub-$200 Lidar Could Reshuffle Auto Sensor Economics




MicroVision, a solid-state sensor technology company located in Redmond, Wash., says it has designed a solid-state automotive lidar sensor intended to reach production pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition: Its longer-term goal is $100 per unit. The claim, if realized, would place lidar within reach of advanced driver-assistance systems (ADAS) rather than limiting it to high-end autonomous vehicle programs. Lidar’s limited market penetration comes down to one issue: cost.


“We are focused on delivering automotive-grade lidar that can actually be deployed at scale,” says MicroVision CEO Glen DeVos. “That means designing for cost, manufacturability, and integration from the start—not treating price as an afterthought.”

MicroVision’s Lidar System

Tesla CEO Elon Musk famously dismissed lidar in 2019 as “a fool’s errand,” arguing that cameras and radar alone were sufficient for automated driving. A credible path to sub-$200 pricing would fundamentally alter the calculus of autonomous-car design by lowering the cost of adding precise three-dimensional sensing to mainstream vehicles. The shift reflects a broader industry trend toward solid-state lidar designs optimized for low-cost, high-volume manufacturing rather than maximum range or resolution.

Before those economics can be evaluated, however, it’s important to understand what MicroVision is proposing to build.

The company’s Movia S is a solid-state lidar. Mounted at the corners of a vehicle, the sensor sends out 905-nanometer-wavelength laser pulses and measures how long it takes for light reflected from the surfaces of nearby objects to return. The arrangement of the beam emitters and receivers provides a fixed field of view designed for 180-degree horizontal coverage rather than the full 360-degree scanning typical of traditional mechanical units. The company says the unit can detect objects at distances of up to roughly 200 meters under favorable weather conditions—compared with the roughly 300-meter radius scanned by mechanical systems—and supports frame rates suitable for real-time perception in driver-assistance systems. Earlier mechanical lidars used spinning components to steer their beams, but the Movia S is a phased-array system: It controls the amplitude and phase of the signals across an array of antenna elements to steer the beam. The unit is designed to meet automotive requirements for vibration tolerance, temperature range, and environmental sealing.
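
The time-of-flight principle behind those range figures is simple arithmetic: range equals the speed of light times the echo delay, divided by two for the round trip. A quick calculation (using the article’s 200-meter spec) shows the timing involved:

```python
# Pulsed-lidar time of flight: range = c * t / 2 (out-and-back trip).
C = 2.998e8   # speed of light, m/s

def echo_delay_ns(range_m):
    return 2 * range_m / C * 1e9

for r in (10, 100, 200):
    print(f"{r:>3} m target -> echo after {echo_delay_ns(r):,.0f} ns")
# A 1-cm range error corresponds to ~67 ps of timing error, which is why
# receiver timing precision drives much of a lidar's cost.
```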

MicroVision’s pricing targets might sound aggressive, but they are not without precedent. The lidar industry has already experienced one major cost reset over the past decade.


Around 2016 and 2017, mechanical lidar systems used in early autonomous driving research often sold for close to $100,000. Those units relied on spinning assemblies to sweep laser beams across a full 360 degrees, which made them expensive to build and difficult to ruggedize for consumer vehicles.

“Back then, a 64-beam Velodyne lidar cost around $80,000,” says Hayder Radha, a professor of electrical and computer engineering at Michigan State University and director of the school’s Connected & Autonomous Networked Vehicles for Active Safety program.

Comparable mechanical lidars from multiple suppliers now sell in the $10,000 to $20,000 range. That roughly tenfold drop helps explain why suppliers now believe another steep price reduction is possible.

“For solid-state devices, it is feasible to bring the cost down even more when manufacturing at high volume,” Radha says. With demand expanding beyond fully autonomous vehicles into driver-assistance applications, “one order or even two orders of magnitude reduction in cost are feasible.”

Solid-State Lidar Design Challenges

Lower cost, however, does not come for free. The same design choices that enable solid-state lidar to scale also introduce new constraints.

“Unlike mechanical lidars, which provide full 360-degree coverage, solid-state lidars tend to have a much smaller field of view,” Radha says. Many cover 180 degrees or less.

That limitation shifts the burden from the sensor to the system. Automakers will need to deploy three or four solid-state lidars around a vehicle to achieve full coverage. Even so, Radha notes, the total cost can still undercut that of a single mechanical unit.

What changes is integration. Multiple sensors must be aligned, calibrated, and synchronized so their data can be fused accurately. The engineering is manageable, but it adds complexity that price targets alone do not capture.

DeVos says MicroVision’s design choices reflect that reality. “Automakers are not buying a single sensor in isolation,” he says. “They are designing a perception system, and cost only matters if the system as a whole is viable.”

Those system-level tradeoffs help explain where low-cost lidar is most likely to appear first.

Most advanced driver assistance systems today rely on cameras and radar, which are significantly cheaper than lidar. Cameras provide dense visual information, while radar offers reliable range and velocity data, particularly in poor weather. Radha estimates that lidar remains roughly an order of magnitude more expensive than automotive radar.

But at prices in the $100 to $200 range, that gap narrows enough to change design decisions.

“At that point, lidar becomes appealing because of its superior capability in precise 3D detection and tracking,” Radha says.

Rather than replacing existing sensors, lower-cost lidar would likely augment them, adding redundancy and improving performance in complex environments that are challenging for electronic perception systems. That incremental improvement aligns more closely with how ADAS features are deployed today than with the leap to full vehicle autonomy.

MicroVision is not alone in pursuing solid-state lidar: Several suppliers, including the Chinese firms Hesai and RoboSense as well as Luminar and Velodyne, have announced long-term cost targets below $500. What distinguishes current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.

Some competitors continue to prioritize long-range performance for autonomous vehicles, which pushes cost upward. Others have avoided aggressive pricing claims until they secure firm production commitments from automakers.

That caution reflects a structural challenge: Reaching consumer-level pricing requires large, predictable demand. Without it, few suppliers can justify the manufacturing investments needed to achieve true economies of scale.

Evaluating Lidar Performance Metrics

Even if low-cost lidar becomes manufacturable, another question remains: How should its performance be judged?

From a systems-engineering perspective, Radha says cost milestones often overshadow safety metrics.

“The key objective of ADAS and autonomous systems is improving safety,” he says. Yet there is no universally adopted metric that directly expresses safety gains from a given sensor configuration.

Researchers instead rely on perception benchmarks such as mean Average Precision, or mAP, which measures how accurately a system detects and tracks objects in its environment. Including such metrics alongside cost targets, says Radha, would clarify what performance is preserved or sacrificed as prices fall.
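
For readers unfamiliar with the benchmark, here is a minimal sketch of average precision for one object class (mAP is the mean over classes); the detection list, already matched to ground truth, is invented for illustration:

```python
# Average precision (AP) for one object class, from a list of detections
# given as (confidence, is_true_positive) pairs. Values are illustrative.
def average_precision(detections, num_ground_truth):
    detections = sorted(detections, key=lambda d: -d[0])   # by confidence
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in detections:
        tp, fp = tp + is_tp, fp + (not is_tp)
        precision = tp / (tp + fp)
        recall = tp / num_ground_truth
        ap += precision * (recall - prev_recall)   # area under the P-R curve
        prev_recall = recall
    return ap

dets = [(0.95, True), (0.9, True), (0.7, False), (0.6, True), (0.3, False)]
print(f"AP = {average_precision(dets, num_ground_truth=4):.3f}")   # 0.688
```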

IEEE Spectrum has covered lidar extensively, often focusing on technical advances in scanning, range, and resolution. What distinguishes the current moment is the renewed focus on economics rather than raw capability.

If solid-state lidar can reliably reach sub-$200 pricing, it will not invalidate Elon Musk’s skepticism—but it will weaken one of its strongest foundations. When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.

That decision, more than any single price claim, may determine whether lidar finally becomes a routine component of vehicle safety systems.

Reference: https://ift.tt/hKCzHJ2
