Thursday, April 9, 2026

ZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale




ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions.

ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission-critical environments. Core components, including Saluki secure flight controllers, have reached TRL 8 and are deployed in customer systems. While initially developed for high-consequence mission environments, the same assurance challenges are increasingly present across domains such as healthcare, transportation, and critical infrastructure.

Download this free whitepaper now!

Reference: https://ift.tt/BQqbGFO

Chip the Size of a Grain of Sand Can Project Video




By many estimates, quantum computers will need millions of qubits to realize their potential in applications in cybersecurity, drug development, and other industries. The problem is that anyone who wants to simultaneously control millions of a certain kind of qubit runs into the challenge of controlling millions of laser beams.

That’s exactly the challenge scientists from MIT, the University of Colorado at Boulder, Sandia National Laboratories, and the MITRE Corporation were trying to overcome when they developed an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. It comes in the form of a less-than-0.1-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg cells.

“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the diamond-based quantum computer effort, called Quantum Moonshot, and a professor of quantum engineering at the University of Colorado at Boulder. Their chip is capable of projecting 68.6 million individual spots of light—called scannable pixels to differentiate them from physical pixels—per second per square millimeter, more than fifty times the capability of previous technology, such as micro-electromechanical systems (MEMS) micromirror arrays.

“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says Henry Wen, a visiting researcher at MIT and a photonics engineer at QuEra Computing.

The chip’s distinguishing feature is an array of tiny metallic cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski-jumps” for light. Light is channeled along the length of each cantilever via a waveguide, and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric which expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.

Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of four thin layers of material and is curled approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers when cooled. On top of its four layers of material, each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width.

A micro-cantilever wiggles and waggles to project light in the right place. Matt Saha, Y. Henry Wen, et al.

What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ light beams to generate the right colors at the right time was a substantial effort, according to Andy Greenspon, a researcher at MITRE who also worked on the project. Now, the team has successfully projected the movie A Charlie Brown Christmas through the chip.

A warped projection of the Mona Lisa. The chip projected a roughly 125-micrometer image of the Mona Lisa. Matt Saha, Y. Henry Wen, et al.

Because the chip can project so many more spots in any given time interval than any previous beam scanner, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits, so it clearly needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers.

Another process that Wen thinks the chip could improve is scanning objects for 3D printing. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen.

Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. Wen says that such unusual shapes could be useful in making a lab-on-a-chip for cell biology or drug development. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”

Reference: https://ift.tt/GQ5xS6f

Wednesday, April 8, 2026

Iran-linked hackers disrupt operations at US critical infrastructure sites


Hackers working on behalf of the Iranian government are disrupting operations at multiple US critical infrastructure sites, likely in response to Iran's ongoing war with the US, a half-dozen government agencies are warning.

In an advisory published Tuesday, the FBI, Cybersecurity and Infrastructure Security Agency, National Security Agency, Environmental Protection Agency, Department of Energy, and US Cyber Command “urgently" warned that the APT, or advanced persistent threat group, is targeting PLCs, short for programmable logic controllers. These devices, typically the size of a toaster, sit in factories, water treatment centers, oil refineries, and other industrial settings, often in remote locations. They provide an interface between computers used for automation and physical machinery.

Operational disruption and financial loss

“Since at least March 2026, the authoring agencies identified (through engagements with victim organizations) an Iranian-affiliated APT-group that disrupted the function of PLCs,” the advisory stated. “These PLCs were deployed across multiple US critical infrastructure sectors (including Government Services and Facilities, Waste Water Systems (WWS), and Energy sectors) within a wide variety of industrial automation processes. Some of the victims experienced operational disruption and financial loss.”


Reference: https://ift.tt/DP0RaF1

Thousands of consumer routers hacked by Russia's military


The Russian military is once again hacking home and small office routers in widespread operations that send unwitting users to sites that harvest passwords and credential tokens for use in espionage campaigns, researchers said Tuesday.

An estimated 18,000 to 40,000 consumer routers, mostly those made by MikroTik and TP-Link, located in 120 countries, were wrangled into infrastructure belonging to APT28, an advanced threat group that’s part of Russia’s military intelligence agency known as the GRU, researchers from Lumen Technologies' Black Lotus Labs said. The threat group has operated for at least two decades and is behind dozens of high-profile hacks targeting governments worldwide. APT28 is also tracked under names including Pawn Storm, Sofacy Group, Sednit, Tsar Team, Forest Blizzard, and STRONTIUM.

Technical sophistication, tried-and-true techniques

A small number of routers were used as proxies to connect to a much larger number of other routers belonging to foreign ministries, law enforcement, and government agencies that APT28 wanted to spy on. The group then used its control of routers to change DNS lookups for select websites, including, Microsoft said, domains for the company’s 365 service.


Reference: https://ift.tt/rLFXaf3

Tuesday, April 7, 2026

Temple University Student Highlights IEEE Membership Perks




Kyle McGinley graduated from high school in 2018 and, like many teenagers, he was unsure what career he wanted to pursue. Recuperating from a sports injury led him to consider becoming a physical therapist for athletes. But he was skilled at repairing cars and fixing things around the house, so he thought about becoming an engineer, like his father.

McGinley, who lives in Sellersville, Pa., took some classes at Montgomery County Community College in Blue Bell while also working. During his years at the college, he took a variety of courses and was drawn to electrical engineering and computing, he says. He left to pursue a bachelor’s degree in electrical and computer engineering at Temple University in Philadelphia, where he is currently a junior.

Kyle McGinley


MEMBER GRADE

Student member

UNIVERSITY

Temple, in Philadelphia

MAJOR

Electrical and computer engineering

The 26-year-old is also a teaching assistant and a research assistant at Temple. His research focuses on applying artificial intelligence to electrical hardware and robotics. He helped build an AI-integrated android companion to assist in-home caregivers.

Temple recognized McGinley’s efforts last year with its Butz scholarship, which is awarded annually to an electrical and computer engineering undergraduate with an interest in software development, AI development systems, health education software, or a similar field.

An IEEE student member, he is active within the university’s student branch.

“My career ambition after I graduate is to gain real-world experience in the engineering industry to learn skills outside of academia,” he says. “Long term, I want to do project management or work in a technical lead role, with the primary goal of creating impactful projects that I can be proud of.”

Building a robot aide

McGinley is a teaching assistant for his digital circuit design course. In a class of 35 students, it can be a struggle for some to digest the professor’s words, he says.

“My job is to answer students’ questions if they are having problems following the professor’s lecture or are confused about any of the topics,” he says. “In the lab, I help students debug code or with hardware issues they have on the FPGA [field-programmable gate array] boards.”

He also conducts research for the university’s Computer Fusion Lab under the supervision of IEEE Senior Member Li Bai, a professor of electrical and computer engineering. McGinley writes software programs at the lab.


One such assignment was working with the Temple School of Social Work at the Barnett College of Public Health to build a robot companion integrated with AI to assist individuals with Parkinson’s disease and their caregivers.

“I realized the need for this with my grandmother, when she was taking care of my grandfather,” he says. “It was a lot for her, trying to remember everything.”

Using the latest software and hardware, he and three classmates rebuilt an older lab robot. They installed an operating system and used Python and C++ for its control, perception, and behavior, he says. The students also incorporated Google’s Gemini AI to help with routine tasks such as scheduling medication reminders and setting alarms for upcoming doctor visits.

A small humanoid robot standing on a kitchen counter. Kyle McGinley helped build an AI-integrated android to assist individuals with Parkinson’s disease and their caregivers. Temple University of Public Health

The AI-integrated android was intended to assist, not replace, the caregivers by handling the mental load of remembering tasks, he says.

“This was one of the cool things that drew me to working in the robotics field,” he says. “Something where AI could be used to help caregivers do simple tasks.”

The benefits of a student branch

McGinley joined Temple’s IEEE student branch last year after one of his professors offered extra credit to students who did so. After attending meetings and participating in a few workshops, he found he really liked the club, he says, adding that he made new friends and enjoyed the camaraderie with other engineering students.

After the student branch’s board members got to know McGinley better, they asked him to become the club’s historian and manage its social media account. He also helps with event planning, creating and posting fliers, taking pictures, and shooting videos of the gatherings.

The branch has benefited from McGinley’s involvement, but he says it’s a two-way street.

“The biggest things I’ve learned are being held accountable and being reliable,” he says. “I am responsible for other people knowing what’s going on.”

Being an active volunteer has improved his communication skills, he says.

“Learning to clearly communicate with other people to make sure everyone is on the same page is important,” he says. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.”

He encourages students to join their university’s IEEE branch.

“I know it can be scary because you might not know anyone, but it honestly can’t hurt you; it could actually benefit you,” he says. “Being active is going to help you with a lot of skills that you need.

“You’ll definitely get opportunities that you would have never known about, like a scholarship or working in the research lab. I would have never gotten these opportunities if I hadn’t shown up. Joining IEEE and being active is the best thing you can do for your career.”

Reference: https://ift.tt/XRU5Vp6

Decentralized Training Can Help Solve AI’s Energy Woes




Artificial intelligence harbors an enormous energy appetite. Such constant cravings are evident in the hefty carbon footprint of the data centers behind the AI boom and the steady increase over time of carbon emissions from training frontier AI models.

No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.

Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a solar-powered home. Instead of constructing more data centers that require electric grids to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.

Hardware in harmony

Training AI models is a huge data center sport, synchronized across clusters of closely connected GPUs. But as hardware improvements struggle to keep up with the swift rise in size of large language models, even massive single data centers are no longer cutting it.

Tech firms are turning to the pooled power of multiple data centers—no matter their location. Nvidia, for instance, launched the Spectrum-XGS Ethernet for scale-across networking, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, Cisco introduced its 8223 router designed to “connect geographically dispersed AI clusters.”

Other companies are harvesting idle compute in servers, sparking the emergence of a GPU-as-a-Service business model. Take Akash Network, a peer-to-peer cloud computing marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power are tenants who can choose among providers and rent their GPUs.

“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO Greg Osuri. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”

Software in sync

In addition to orchestrating the hardware, decentralized AI training also requires algorithmic changes on the software side. This is where federated learning, a form of distributed machine learning, comes in.

It starts with an initial version of a global AI model housed in a trusted entity such as a central server. The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains Lalana Kagal, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the Decentralized Information Group. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.
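The cycle described above can be sketched in a few lines of plain Python. This is a toy illustration, not any particular federated-learning framework: the data, learning rate, and the stand-in "local training" update are all made up, but the shape of the loop (distribute, train locally, share only weights, average) is the one just described.

```python
# Toy federated-averaging sketch (hypothetical data; not a real framework).
# The "trusted entity" holds global weights; each participant trains locally
# and returns only its updated weights, which the server averages.

def local_train(weights, data):
    # Stand-in for local training: nudge each weight toward the
    # participant's data mean (a toy update, not a real optimizer).
    lr = 0.1
    mean = sum(data) / len(data)
    return [w + lr * (mean - w) for w in weights]

def federated_round(global_weights, participants):
    # 1. Server distributes the global model; 2. each participant trains
    # locally; 3. server averages the returned weights into a new model.
    updates = [local_train(list(global_weights), data) for data in participants]
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(global_weights))]

weights = [0.0, 0.0]
participants = [[1.0, 3.0], [5.0, 7.0]]   # each node's private data stays local
for _ in range(3):
    weights = federated_round(weights, participants)
```

Only the weight lists cross the boundary between participants and the server; the raw data never leaves each node.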

But there are drawbacks to distributing both data and computation. The constant back-and-forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.

“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”

To overcome these hurdles, researchers at Google DeepMind developed DiLoCo, a distributed low-communication optimization algorithm. DiLoCo forms what Google DeepMind research scientist Arthur Douillard calls “islands of compute,” where each island consists of a group of chips. Every island holds a different chip type, but chips within an island must be of the same type. Islands are decoupled from each other, and synchronizing knowledge between them happens once in a while. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without having to interrupt the remaining healthy chips. However, the team’s experiments found diminishing performance after eight islands.
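The island pattern can be sketched with a toy optimization problem. This illustrates the communication structure only, not the actual DiLoCo algorithm (which uses distinct inner and outer optimizers); the step counts, learning rate, and objective are arbitrary assumptions.

```python
# Toy sketch of the "islands of compute" pattern: each island takes many
# communication-free local steps, and islands synchronize only occasionally
# by averaging parameters. Not the real DiLoCo algorithm.

INNER_STEPS = 100   # local steps between synchronizations

def run_inner(param, grad):
    # An island advances independently; no cross-island traffic happens here,
    # so a failed island would not stall the healthy ones.
    for _ in range(INNER_STEPS):
        param -= 0.01 * grad(param)   # stand-in for a local optimizer step
    return param

def train(islands, grad, outer_rounds):
    for _ in range(outer_rounds):
        islands = [run_inner(p, grad) for p in islands]
        # Infrequent synchronization: average parameters across islands.
        avg = sum(islands) / len(islands)
        islands = [avg] * len(islands)
    return islands

# Toy objective: minimize (x - 3)^2, so the gradient is 2 * (x - 3).
grad = lambda x: 2.0 * (x - 3.0)
final = train([0.0, 10.0], grad, outer_rounds=3)
```

Communication happens only at the averaging step, once per outer round, instead of after every one of the hundreds of inner steps.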

An improved version dubbed Streaming DiLoCo further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.

AI development platform Prime Intellect implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter INTELLECT-1 model, trained across five countries spanning three continents. Upping the ante, 0G Labs, makers of a decentralized AI operating system, adapted DiLoCo to train a 107-billion-parameter foundation model across a network of segregated clusters with limited bandwidth. Meanwhile, the popular open-source deep learning framework PyTorch included DiLoCo in its repository of fault-tolerance techniques.

“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”

A more energy-efficient way to train AI

With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.

And while Douillard admits that training methods like DiLoCo are “arguably more complex,” he says “they provide an interesting tradeoff of system efficiency.” For instance, you can now use data centers in far-apart locations without needing to build ultrafast bandwidth in between. Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”

Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its Starcluster program. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.

Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in batteries for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.

Backend work is already underway to enable homes to participate as providers in the Akash Network, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.

Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”

Reference: https://ift.tt/oWNJl9Y

Why AI Systems Fail Quietly




In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.

Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.

This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.

When Systems Fail Without Breaking

Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.

Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.

But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.
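A guard against exactly this failure mode is cheap to build. The sketch below is hypothetical (the document schema, field names, and 30-day threshold are assumptions, not taken from any real pipeline): it flags the case where retrieval "succeeds" but every retrieved document is stale.

```python
# Sketch of a freshness check for a retrieval pipeline (hypothetical document
# schema): flag when the newest retrieved document is older than a threshold,
# even though retrieval itself returned valid results.

from datetime import datetime, timedelta

MAX_AGE = timedelta(days=30)   # assumption: regulatory updates should be recent

def check_freshness(retrieved_docs, now):
    """Return True if at least one retrieved document is recent enough."""
    if not retrieved_docs:
        return False
    newest = max(doc["published"] for doc in retrieved_docs)
    return (now - newest) <= MAX_AGE

now = datetime(2026, 4, 9)
docs = [
    {"id": "reg-101", "published": datetime(2025, 11, 2)},
    {"id": "reg-107", "published": datetime(2025, 12, 15)},
]
stale = not check_freshness(docs, now)   # every document is months old
```

A check like this would fire in the scenario above long before users notice the summaries drifting.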

From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.

The Limits of Traditional Observability

One reason quiet failures are difficult to detect is that traditional systems measure the wrong signals. Operational dashboards track uptime, latency, and error rates, the core elements of modern observability. These metrics are well-suited for transactional applications where requests are processed independently, and correctness can often be verified immediately.

Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return technically valid but contextually inappropriate information. A planning agent may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.

None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.

Why Autonomy Changes Failure

The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic and externally initiated by a user, scheduler, or external trigger.

Autonomous systems change that structure. Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.

In these systems, correctness depends less on whether any single component works, and more on coordination across time.

Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.

A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small mistakes can compound. A step that is locally reasonable can still push the system further off course.

Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.

The Missing Layer: Behavioral Control

When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.

Quiet failures require something different: the ability to shape system behavior while it is still unfolding. In other words, autonomous systems increasingly need control architectures, not just monitoring.

Engineers in industrial domains have long relied on supervisory control systems. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.

Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. Instead of relying only on metrics such as latency or error rates, engineers look for signs of behavior drift: shifts in outputs, inconsistent handling of similar inputs, or changes in how multi-step tasks are carried out. An AI assistant that begins citing outdated sources, or an automated system that takes corrective actions more often than expected, may signal that the system is no longer using the right information to make decisions. In practice, this means tracking outcomes and patterns of behavior over time.
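One minimal form of such tracking is a rolling comparison of a behavioral signal against a baseline window. The signal below (the fraction of an assistant's outputs that cite current sources) and the window and threshold values are illustrative assumptions, not a production design.

```python
# Sketch of behavioral drift detection (toy statistics, hypothetical metric):
# compare a rolling window of a behavior signal against a baseline window,
# and raise an alarm when the mean shifts by more than a threshold.

from collections import deque

class DriftMonitor:
    def __init__(self, window=50, threshold=0.15):
        self.baseline = deque(maxlen=window)   # first `window` observations
        self.recent = deque(maxlen=window)     # rolling recent window
        self.threshold = threshold             # allowed shift in the mean

    def observe(self, value):
        """Record one observation; return True if drift is detected."""
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(value)
            return False
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False
        base_mean = sum(self.baseline) / len(self.baseline)
        recent_mean = sum(self.recent) / len(self.recent)
        return abs(recent_mean - base_mean) > self.threshold

monitor = DriftMonitor(window=20, threshold=0.15)
# Healthy period: ~90% of outputs cite current sources.
alarms = [monitor.observe(0.9) for _ in range(40)]
# Quiet degradation: the rate slips, with no errors or crashes anywhere.
alarms += [monitor.observe(0.6) for _ in range(20)]
```

No individual observation is an "error"; only the aggregate shift trips the alarm, which is exactly the property quiet failures demand.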

Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.
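A minimal version of such a layer can be sketched as a gate that classifies each proposed action before it executes. The action schema and the numeric limits below are hypothetical stand-ins for whatever impact measure and policy a real system would define.

```python
# Sketch of a supervisory control gate (hypothetical action schema): every
# proposed action is classified before execution as allowed, routed for
# human review, or blocked outright.

def supervise(action, limits):
    """Classify a proposed action as 'allow', 'review', or 'block'."""
    impact = action["impact"]          # e.g. dollars moved, rows deleted
    if impact > limits["block_above"]:
        return "block"                 # outside acceptable bounds: stop it
    if impact > limits["review_above"]:
        return "review"                # high-impact: require confirmation
    return "allow"                     # within the normal operating envelope

limits = {"review_above": 1_000, "block_above": 10_000}
decisions = [supervise(a, limits) for a in [
    {"name": "send_summary", "impact": 0},
    {"name": "bulk_update", "impact": 5_000},
    {"name": "mass_delete", "impact": 50_000},
]]
```

The gate sits in the action path rather than beside it, which is what distinguishes supervisory control from monitoring: a drifting system is slowed or stopped, not just observed.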

Together, these approaches turn reliability into an active process. Systems don’t just run, they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.

A Shift in Engineering Thinking

Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.

As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.

Reference: https://ift.tt/v4pPMaI
