Tuesday, March 10, 2026

Intel Demos Chip to Compute With Encrypted Data




Summary

Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer?

There is a way to do computing on encrypted data without ever having it decrypted. It’s called fully homomorphic encryption, or FHE. But there’s a rather large catch. It can take thousands—even tens of thousands—of times longer to compute on today’s CPUs and GPUs than simply working with the decrypted data.

So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. “Heracles is the first hardware that works at scale,” he says.

The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel’s most advanced, 3-nanometer FinFET technology. And it’s flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends them to the government database. There, without decrypting them, the system determines whether there is a match and returns an encrypted answer, which she then decrypts on her side.

On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn’t something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
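Those totals follow directly from the per-query times, assuming one sequential query per ballot:

```python
# Sanity check of the article's totals: 15 ms per query on a Xeon versus
# 14 microseconds on Heracles, scaled to 100 million ballot verifications.
def total_days(per_query_s: float, n_queries: int) -> float:
    return per_query_s * n_queries / 86_400   # 86,400 seconds per day

cpu_days = total_days(15e-3, 100_000_000)     # ~17.4 days of CPU work
heracles_min = 14e-6 * 100_000_000 / 60       # ~23.3 minutes
print(f"CPU: {cpu_days:.1f} days; Heracles: {heracles_min:.1f} minutes")
```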

Looking back on the five-year journey to bring the Heracles chip to life, Ro Cammarota, who led the project at Intel until last December and is now at the University of California, Irvine, says, “We have proven and delivered everything that we promised.”

FHE Data Expansion

FHE is fundamentally a mathematical transformation, sort of like the Fourier transform. It encrypts data using a quantum-computer-proof algorithm, but, crucially, uses corollaries to the mathematical operations usually used on unencrypted data. These corollaries achieve the same ends on the encrypted data.
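A toy way to see the core idea, operating on ciphertexts so the result decrypts to the right answer, is textbook RSA, which happens to be homomorphic for multiplication only. (This is purely illustrative: real FHE schemes are lattice-based, support both addition and multiplication, and are believed quantum-resistant; textbook RSA is none of those.)

```python
# Toy demonstration of homomorphic computation using textbook RSA, which is
# homomorphic for multiplication: E(a) * E(b) mod n decrypts to a * b.
# Illustration only: real FHE schemes are lattice-based, support addition
# as well, and are believed quantum-resistant. Textbook RSA is none of these.
p, q = 61, 53
n, e = p * q, 17                     # toy public key: n = 3233, e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n            # multiply the *ciphertexts* only
assert dec(c) == a * b               # 42: computed without ever decrypting
```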

One of the main things holding such secure computing back is the explosion in the size of the data once it’s encrypted for FHE, Anupam Golder, a research scientist at Intel’s circuits research lab, told engineers at ISSCC. “Usually, the size of cipher text is the same as the size of plain text, but for FHE it’s orders of magnitude larger,” he said.

While the sheer volume is a big problem, the kinds of computing you need to do with that data are also an issue. FHE is all about very large numbers that must be computed with precision. While a CPU can do that, it’s very slow going—integer addition and multiplication take roughly 10,000 times as many clock cycles in FHE. Worse still, CPUs aren’t built to do such computing in parallel. Although GPUs excel at parallel operations, precision is not their strong suit. (In fact, from generation to generation, GPU designers have devoted more and more of the chip’s resources to computing less and less precise numbers.)

FHE also requires some oddball operations with names like “twiddling” and “automorphism,” and it relies on a compute-intensive noise-cancelling process called bootstrapping. None of these things are efficient on a general-purpose processor. So, while clever algorithms and libraries of software cheats have been developed over the years, the need for a hardware accelerator remains if FHE is going to tackle large-scale problems, says Cammarota.

The Labors of Heracles

Heracles was initiated under a DARPA program five years ago to accelerate FHE using purpose-built hardware. It was developed as “a whole system-level effort that went all the way from theory and algorithms down to the circuit design,” says Cammarota.

Among the first problems was how to compute with numbers larger than even the 64-bit words that are the most precise a CPU natively handles. There are ways to break up these gigantic numbers into chunks of bits that can be calculated independently of each other, providing a degree of parallelism. Early on, the Intel team made a big bet that it could make this work with smaller, 32-bit chunks while still maintaining the needed precision. That decision gave the Heracles architecture some of its speed and parallelism, because 32-bit arithmetic circuits are considerably smaller than 64-bit ones, explains Cammarota.
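The chunking approach Cammarota describes is, in spirit, residue-number-system (RNS) arithmetic: a huge integer becomes a list of small remainders that can be added and multiplied independently. A minimal sketch with illustrative moduli (Intel has not published Heracles' actual parameters):

```python
from math import prod

# Residue-number-system (RNS) sketch of the chunking idea: a huge integer is
# stored as its remainders modulo pairwise-coprime moduli that each fit in
# 32 bits. Multiplication then proceeds independently (and in parallel) on
# the small residues; the full-width result is recovered at the end with the
# Chinese Remainder Theorem (CRT). Moduli below are illustrative.
MODULI = [2**32 - 5, 2**32 - 17, 2**32 - 1]  # pairwise coprime, each < 2**32

def to_rns(x):
    return [x % m for m in MODULI]

def rns_mul(a, b):          # multiply chunk by chunk; no big-number math
    return [(ai * bi) % m for ai, bi, m in zip(a, b, MODULI)]

def from_rns(residues):     # CRT reconstruction of the full-width value
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

x, y = 12_345_678_901_234, 98_765_432_109_876
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == x * y  # product stays below M
```

This only works while results stay below the product of the moduli, which is why real accelerators carry many more than three residues.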

At Heracles’ heart are 64 compute cores—called tile-pairs—arranged in an eight-by-eight grid. These are what are called single instruction multiple data (SIMD) compute engines designed to do the polynomial math, twiddling, and other things that make up computing in FHE and to do them in parallel. An on-chip 2D mesh network connects the tiles to each other with wide, 512-byte buses.

Important to making encrypted computing efficient is feeding those huge numbers to the compute cores quickly. The sheer amount of data involved meant linking 48 gigabytes of expensive high-bandwidth memory to the processor over connections that move 819 gigabytes per second. Once on the chip, data musters in 64 megabytes of cache memory—somewhat more than an Nvidia Hopper-generation GPU carries. From there it can flow through the array at 9.6 terabytes per second by hopping from tile-pair to tile-pair.

To ensure that computing and moving data don’t get in each other’s way, Heracles runs three synchronized streams of instructions simultaneously, one for moving data onto and off of the processor, one for moving data within it, and a third for doing the math, Golder explained.

It all adds up to some massive speedups, according to Intel. Heracles—operating at 1.2 gigahertz—takes just 39 microseconds to do FHE’s critical math transformation, a 2,355-fold improvement over an Intel Xeon CPU running at 3.5 GHz. Across seven key operations, Heracles was 1,074 to 5,547 times as fast.
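That critical transformation is generally the number theoretic transform (NTT), an exact integer analog of the FFT; the "twiddle" factors mentioned earlier are the powers of a root of unity it multiplies by. A toy-sized sketch, in naive O(n²) form with a tiny illustrative modulus (real hardware uses O(n log n) butterfly networks and much larger parameters):

```python
# Minimal number-theoretic transform (NTT): a DFT over integers modulo a
# prime, so every result is exact (no floating-point rounding). Naive
# O(n^2) version for clarity with toy parameters: modulus 17, size 4, and
# 4 as a primitive 4th root of unity mod 17 (4, 16, 13, 1, ...).
P, N, W = 17, 4, 4

def ntt(a, root=W):
    return [sum(a[j] * pow(root, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(a):
    inv_n = pow(N, -1, P)        # 1/N mod P (Python 3.8+)
    inv_w = pow(W, P - 2, P)     # 1/W mod P, via Fermat's little theorem
    return [(x * inv_n) % P for x in ntt(a, root=inv_w)]

vec = [3, 1, 4, 1]
assert intt(ntt(vec)) == vec     # exact round trip, unlike a float FFT
```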

The differing ranges have to do with how much data movement is involved in the operations, explains Mathew. “It’s all about balancing the movement of data with the crunching of numbers,” he says.

FHE Competition

“It’s very good work,” Kurt Rohloff, chief technology officer at FHE software firm Duality Technologies, says of the Heracles results. Duality was part of a team that developed a competing accelerator design under the same DARPA program that spawned Heracles. “When Intel starts talking about scale, that usually carries quite a bit of weight.”

Duality’s focus is less on new hardware than on software products that do the kind of encrypted queries that Intel demonstrated at ISSCC. At the scale in use today “there’s less of a need for [specialized] hardware,” says Rohloff. “Where you start to need hardware is emerging applications around deeper machine-learning oriented operations like neural net, LLMs, or semantic search.”

Last year, Duality demonstrated an FHE-encrypted language model called BERT. Like more famous LLMs such as ChatGPT, BERT is a transformer model. However, it’s only one-tenth the size of even the most compact LLMs.

John Barrus, vice president of product at Dayton, Ohio-based Niobium Microsystems, an FHE chip startup spun out of another DARPA competitor, agrees that encrypted AI is a key target of FHE chips. “There are a lot of smaller models that, even with FHE’s data expansion, will run just fine on accelerated hardware,” he says.

With no stated commercial plans from Intel, Niobium expects its chip to be “the world’s first commercially viable FHE accelerator, designed to enable encrypted computations at speeds practical for real-world cloud and AI infrastructure.” Although it hasn’t announced when a commercial chip will be available, last month the startup revealed that it had inked a deal worth 10 billion South Korean won (US $6.9 million) with Seoul-based chip design firm Semifive to develop the FHE accelerator for fabrication using Samsung Foundry’s 8-nanometer process technology.

Other startups including Fabric Cryptography, Cornami, and Optalysys have been working on chips to accelerate FHE. Optalysys CEO Nick New says Heracles hits about the level of speedup you could hope for using an all-digital system. “We’re looking at pushing way past that digital limit,” he says. His company’s approach is to use the physics of a photonic chip to do FHE’s compute-intensive transform steps. That photonic chip is in its seventh generation, he says, and among the next steps is to 3D integrate it with custom silicon to do the non-transform steps and coordinate the whole process. A full 3D-stacked commercial chip could be ready in two or three years, says New.

While competitors develop their chips, so will Intel, says Mathew. It will be improving on how much the chip can accelerate computations by fine tuning the software. It will also be trying out more massive FHE problems, and exploring hardware improvements for a potential next generation. “This is like the first microprocessor… the start of a whole journey,” says Mathew.

Reference: https://ift.tt/bOjGZ9R

Finite-Element Approaches to Transformer Harmonic and Transient Analysis




Explore structured finite-element methodologies for analyzing transformer behavior under harmonic and transient conditions — covering modelling, solver configuration, and result validation techniques.

What Attendees Will Learn

  1. How FEM enables pre-fabrication performance evaluation — Assess magnetic field distribution, current behavior, and turns-ratio accuracy through simulation rather than physical testing.
  2. How harmonic analysis uncovers saturation and imbalance — Identify high-flux regions and current asymmetries that analytical methods may not capture.
  3. How transient simulations characterize dynamic response — Examine time-domain current waveforms, inrush behavior, and multi-cycle stabilization.
  4. How modelling choices affect simulation fidelity — Understand the impact of coil definitions, winding configurations, solver type, and material models on accuracy.

Download this free whitepaper now!

Reference: https://ift.tt/zdmrew0

Monday, March 9, 2026

How Cross-Cultural Engineering Drives Tech Advancement




Innovation rarely happens in isolation. Usually, the systems that engineers design are shaped by global teams whose members’ knowledge and ideas move across borders as easily as data.

That is especially true in my field of robotics and automation—where hardware, software, and human workflows function together. Progress depends not only on technical skill but also on how engineers frame problems and evaluate trade-offs. My career has shown me how cross-cultural experiences can shape the framing.

Working across different cultures has influenced how I approach collaboration, design decisions, and risk. I am an IEEE member and a mechanical engineer at Re:Build Fikst, in Wilmington, Mass., but I grew up in India and began my engineering education there.

Experiencing both work environments has reinforced the idea that diversity in science, technology, engineering, and mathematics fields is not only about representation; it is a technical advantage that affects how systems are designed and deployed.

Gaining experience across cultures

I began my training as an undergraduate student in electrical and electronics engineering at Amity University, in Noida. While studying, I developed a strong foundation in problem-framing and disciplined adaptability.

Working on a project requires identifying what the system needs to demonstrate and determining how best to validate that behavior within defined parameters. Rather than starting from idealized assumptions, Amity students were encouraged to focus on essential system behavior and prioritize the variables that most influenced the technology’s performance.

The approach reinforced first-principles thinking—starting from fundamental physical or system-level behavior rather than defaulting to established solutions—and encouraged the efficient use of available resources.

At the same time, I learned that efficiency has limits. In complex or safety-critical systems, insufficient validation can introduce hidden risks and reduce reliability. Understanding when simplicity accelerates progress and when additional rigor is necessary became an important part of my development as an engineer.

After getting my undergraduate degree, I moved to the United States in 2021 to pursue a master’s degree in robotics and autonomous systems at Arizona State University in Tempe. I encountered a new engineering culture in the United States.

In the U.S. research and development sector, especially in robotics and automation, rigor is nonnegotiable. Systems are designed to perform reliably across many cycles, users, and conditions. Documentation, validation, safety reviews, and reproducibility are integral to the process.

Those expectations do not constrain creativity; they allow systems to scale, endure, and be trusted.

Moving between the two different engineering cultures required me to adjust. I had to balance my instinct for efficiency with a more formal structure. In the United States, design decisions demand more justification. Collaboration means aligning with scientists, software engineers, and technicians. Each discipline brings different priorities and definitions of success to the team.

Over time, I realized that the value of both experiences was not in choosing one over the other but in learning when to apply each.

The balance is particularly critical in robotics and automation. Resourcefulness without rigor can fail at scale. A prototype that works in a controlled lab setting, for example, might break down when exposed to different users, operating conditions, or extended duty cycles.

At the same time, rigor without adaptability can slow innovation, such as when excessive documentation or overengineering delays early-stage testing and iteration.

Engineers who navigate multiple educational and professional systems often develop an intuition for managing the tension between the different experiences, building solutions that are robust and practical and that fit real-world workflows rather than idealized ones.

Much of my work today involves integrating automated systems into environments where technical performance must align with how people will use them. For example, a robotic work cell (a system that performs a specific task) might function flawlessly in isolation but require redesign once operators need clearer access for loading materials, troubleshooting faults, or performing routine maintenance. Similarly, an automated testing system must account not only for ideal operating conditions but also for how users respond to error messages, interruptions, and unexpected outputs.

In practice, that means thinking beyond individual components to consider how systems will be operated, maintained, and restored to service after faults or interruptions.

My cross-cultural background shapes how I evaluate design trade-offs and collaboration across disciplines.

How diverse teams can help improve tech design

Engineers trained in different cultures can bring distinct approaches to the same problem. Some might emphasize rapid iteration while others prioritize verification and robustness. When perspectives collide, teams ask better questions earlier. They challenge defaults, find edge cases, and design technologies that are more resilient to real-world variability.

Diversity of thought is certainly important in robotics and automation, where systems sit at the intersection of machines and people. Designing effective automation requires understanding how users interact with technology, how errors propagate, and how different environments influence the technology. Engineers with cross-cultural experience often bring heightened awareness of the variability, leading to better design decisions and more collaborative teams.

Engineers from outside of the United States play a critical role in the country’s research and development ecosystem, especially in interdisciplinary fields. Many of us act as bridges, connecting problem-solving approaches, expectations, and design philosophies shaped in different parts of the world. We translate not just language but also engineering intent, helping teams move from theories to practical deployment.

As robotics and automation continue to evolve, the challenges ahead—including scaling experimentation, improving reproducibility, and integrating intelligent systems into real-world environments—will require engineers who are comfortable working across boundaries. Navigating boundaries, which could be geographic, disciplinary, or cultural, is increasingly part of the job.

The engineering ecosystems in India and the United States are complex, mature, and evolving. My journey in both has taught me that being a strong engineer is not about adopting a single mindset. It’s about knowing how to adapt.

In an interconnected, multinational world, innovation belongs to engineers who can navigate the differences and turn them into strengths.

Reference: https://ift.tt/yMWXKjd

Do Offshore Wind Farms Pose National Security Risks?




When the Trump administration last year sought to freeze construction of offshore wind farms by citing concerns about interference with military radar and sonar, the implication was that these were new issues. But for more than a decade, the United States, Taiwan, and many European countries have successfully mitigated wind turbines’ security impacts. Some European countries are even integrating wind farms with national defense schemes.

“It’s not a choice of whether we go for wind farms or security. We need both,” says Ben Bekkering, a retired vice admiral in the Netherlands and current partner of the International Military Council on Climate and Security.

It’s a fact that offshore wind farms can degrade radar surveillance systems and subsea sensors designed to detect military incursions. But it’s a problem with real-world solutions, say Bekkering and other defense experts contacted by IEEE Spectrum. Those solutions include next-generation radar technology, radar-absorbing coatings for wind turbine blades and multi-mode sensor suites that turn offshore wind farm security equipment into forward eyes and ears for defense agencies.

How Do Wind Farms Interfere With Radar?

Wind turbines interfere with radar because they’re large objects that reflect radar signals. Their spinning blades can introduce false positives on radar screens by inducing a wavelength-shifting Doppler effect that gets flagged as a flying object. Turbines can also obscure aircraft, missiles and drones by scattering radar signals or by blinding older line-of-sight radars to objects behind them, according to a 2024 U.S. Department of Energy (DOE) report.
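The blade-tip problem is easy to quantify: a monostatic radar sees a Doppler shift of 2v/c times its carrier frequency from a reflector moving at radial speed v. With illustrative numbers (not from the DOE report), a blade tip moving at 90 m/s shifts a 3-GHz S-band signal about as much as a slow aircraft would:

```python
# Why spinning blades register as moving targets: a monostatic radar sees a
# Doppler shift of f_d = (2 * v / c) * f0 from a reflector with radial
# speed v. Numbers below are illustrative: an S-band radar at 3 GHz and a
# blade-tip speed of 90 m/s, comparable to a slow aircraft.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(radial_speed_ms: float, radar_freq_hz: float) -> float:
    return 2.0 * radial_speed_ms / C * radar_freq_hz

tip_shift = doppler_shift_hz(90.0, 3e9)   # ~1,800 Hz: well within the range
print(f"{tip_shift:.0f} Hz")              # a radar flags as a moving object
```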

“Real-world examples from NATO and EU Member States show measurable degradation in radar performance, communication clarity, and situational awareness,” states a 2025 presentation from the €2-million (US$2.3-million) offshore wind Symbiosis Project, led by the Brussels-based European Defence Agency.

However, “measurable” doesn’t always mean major. U.S. agencies that monitor radar have continued to operate “without significant impacts” from wind turbines thanks to field tests, technology development, and mitigation measures taken by U.S. agencies since 2012, according to the DOE. “It is true that they have an impact, but it’s not that big,” says Tue Lippert, a former Danish special forces commander and CEO of Copenhagen-based security consultancy Heimdal Critical Infrastructure.

To date, impacts have been managed through upgrades to radar systems, such as software algorithms that identify a turbine’s radar signature and thus reduce false positives. Careful wind farm siting helps too. During the most recent designation of Atlantic wind zones in the U.S., for example, the Biden administration reduced the geographic area for a proposed zone off the Maryland coast by 79 percent to minimize defense impacts.

Radar impacts can be managed even better by upgrading hardware, say experts. Newer solid-state, phased-array radars are better at distinguishing turbines from other objects than conventional mechanical radars. Phased arrays shift the timing of hundreds or thousands of individual radio waves, creating interference patterns to steer the radar beams. The result is a higher-resolution signal that offers better tracking of multiple objects and better visibility behind objects in its path. “Most modern radars can actually see through wind farms,” says Lippert.
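The timing-shift idea can be made concrete: for a uniform line of elements with spacing d, steering the beam to angle θ means driving element k with a phase of 2πkd·sin(θ)/λ, so the wavefronts add up in the chosen direction. A sketch with illustrative values:

```python
import math

# Phased-array beam steering sketch: each antenna element is driven with a
# phase offset so emitted wavefronts interfere constructively in the chosen
# direction. For a uniform line array with element spacing d, element k gets
# a phase of 2*pi * k * d * sin(theta) / wavelength. Values are illustrative.
def steering_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    theta = math.radians(steer_deg)
    return [2 * math.pi * k * spacing_m * math.sin(theta) / wavelength_m
            for k in range(n_elements)]

# 8 elements at half-wavelength spacing, beam steered 30 degrees off boresight:
phases = steering_phases(8, 0.05, 0.1, 30.0)
# adjacent elements differ by a constant pi/2 step (d*sin(30)/lambda = 1/4 cycle)
```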

One of the Trump administration’s first moves in its overhaul of civilian air traffic was a $438-million order for phased-array radar systems and other equipment from Collins Aerospace, which touts wind farm mitigation as one of its products’ key features.

Saab’s compact Giraffe 1X combined surface-and-air-defense radar was installed in 2021 on an offshore wind farm near England. [Photo: Saab]

Can Wind Farms Aid Military Surveillance?

Another radar mitigation option is “infill” radar, which fills in coverage gaps. This involves installing additional radar hardware on land to provide new angles of view through a wind farm or putting radar systems on the offshore turbines to extend the radar field of view.

In fact, wind farms are increasingly being tapped to extend military surveillance capabilities. “You’re changing the battlefield, but it’s a change to your advantage if you use it as a tactical lever,” says Lippert.

In 2021 Linköping, Sweden-based defense contractor Saab and Danish wind developer Ørsted demonstrated that air defense radar can be placed on a wind farm. Saab conducted a two-month test of its compact Giraffe 1X combined surface-and-air-defense radar on Ørsted’s Hornsea 1 wind farm, located 120 kilometers east of England’s Yorkshire coast. The installation extended situational awareness “beyond the radar horizon of the ground-based long-range radars,” claims Saab. The U.K. Ministry of Defence ordered 11 of Saab’s systems.

Putting surface radar on turbines is something many offshore wind operators do already to track their crew vessels and to detect unauthorized ships within their arrays. Sharing those signals, or even sharing the equipment, can give national defense forces an expanded view of ships moving within and around the turbines. It can also improve detection of low-altitude cruise missiles, which can evade air defense radars, says Bekkering.

Sharing signals and equipment is part of a growing trend in Europe towards “dual use” of offshore infrastructure. Expanded dual-use sensing is already being implemented in Belgium, the Netherlands and Poland, and was among the recommendations from Europe’s Symbiosis Project.

In fact, Poland mandates inclusion of defense-relevant equipment on all offshore wind farms. The country’s first project carries radar and other sensors specified by Poland’s Ministry of Defense. The wind farm will start operating in the Baltic later this year, roughly 200 kilometers south of Kaliningrad, a Russian exclave.

The U.K. is experimenting too. Last year West Sussex-based LiveLink Aerospace demonstrated purpose-built, dual-use sensors atop wind turbines offshore from Aberdeen. The compact equipment combines a suite of sensors including electro-optical sensors, thermal and visible light cameras, and detectors for radio frequency and acoustic signals.

In the past, wind farm operators tended to resist cooperating with defense projects, fearing that would turn their installations into military targets. And militaries were also reluctant to share, because they are used to having full control over equipment.

But Russia’s increasingly aggressive posture has shifted thinking, say security experts. Russia’s attacks on Ukraine’s power grid show that “everything is a target,” says Tobhias Wikström, CEO for Luleå, Sweden-based Parachute Consulting and a former lieutenant colonel in Sweden’s air force. Recent sabotage of offshore gas pipelines and power cables is also reinforcing the sense that offshore wind operators and defense agencies need to collaborate.

Why Is Sweden Restricting Offshore Wind?

In contrast to Poland and the U.K., Sweden is the one European country that, like the U.S. under Trump’s second administration, has used national security to justify a broad restriction on offshore wind development. In 2024 Sweden rejected 13 projects along its Baltic coast, which faces Kaliningrad, citing anticipated degradation in its ability to detect incoming missiles.

Saab’s CEO rejected the government’s argument, telling a Swedish newspaper that the firm’s radar “can handle” wind farms. Wikström at Parachute Consulting also questions the government’s claim, noting that Sweden’s entry into NATO in 2024 gives its military access to Finnish, German and Polish air defense radars, among others, that together provide an unobstructed view of the Baltic. “You will always have radars in other locations that will cross-monitor and see what’s behind those wind turbines,” says Wikström.

Politics are likely at play, says Wikström, noting that some of the coalition government’s parties are staunchly pro-nuclear. But he says a deeper problem is that the military experts who evaluate proposed wind projects, as he did before retiring in 2021, lack time and guidance.

By banning offshore wind projects instead of embracing them, Sweden and the U.S. may be missing out on opportunities for training in that environment, says Lippert, who regularly serves with U.S. forces as a reserves liaison officer with Denmark’s Greenland-based Joint Arctic Command. As he puts it: “The Chinese and Taiwanese coasts are plastered with offshore wind. If the U.S. Navy and Air Force are not used to fighting in littoral environments filled with wind farms, then they’re at a huge disadvantage when war comes.”

Reference: https://ift.tt/lWOBYH7

Sunday, March 8, 2026

Military AI Policy Needs Democratic Oversight




A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies, or Congress and the broader democratic process?

The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff.

Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. Hegseth has objected to what he has described as “ideological constraints” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a speech at Elon Musk’s SpaceX last month, “We will not employ AI models that won’t allow you to fight wars.”

Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement.

Procurement policies

In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can decline to provide them. For example, a coalition of companies have signed an open letter pledging not to weaponize general-purpose robots. That basic symmetry is a feature of the free market.

Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “supply chain risk.” That tool exists to address genuine national security vulnerabilities, such as foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms.

Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. Hegseth has declared that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract.

AI governance

It is also important to distinguish between the two substantive issues Anthropic has reportedly raised.

The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails.

To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use. In other words, the Department of Defense argues that compliance with the law is the government’s responsibility — not something that needs to be embedded in a vendor’s code.

Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of harmful or high-risk tasks, including assistance with surveillance. The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design.

The second issue, opposition to fully autonomous military targeting, is more complex.

The DOD already maintains policies requiring human judgment in the use of force, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness.

Reasonable people can disagree about where those lines should be drawn.

But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage.

If the U.S. government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public.

The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors.

There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering all deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens U.S. technological leadership.

The DOD is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains — from aerospace to cybersecurity — contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice.

Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation.

Congress is AWOL

The DOD should retain ultimate authority over lawful use. But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity.

At the same time, a company’s unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions.

This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy — and too consequential to be governed solely by executive discretion.

The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them.

Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect those publicly established standards.

If AI guardrails can be removed through contract pressure, they will be treated as negotiable. However, if they are grounded in law, they can become stable expectations.

Democratic constraints on military AI belong in statute and doctrine — not in private contract negotiations.

This article is adapted by the author with permission from Tech Policy Press. Read the original article.

Reference: https://ift.tt/TyWbA4B

Saturday, March 7, 2026

Laser-Based 3D Printing Could Build Future Bases on the Moon




Through the Artemis Program, NASA hopes to establish a permanent human presence on the Moon in its southern polar region. China, Russia, and the European Space Agency (ESA) have similar plans, all of which involve building bases near the permanently shadowed regions (PSRs), craters that contain water ice, that dot the South Pole-Aitken Basin. For these and other agencies, it is vital that these bases be as self-sufficient as possible, since resupply missions cannot be launched regularly and take several days to arrive.

This post originally appeared on Universe Today.

Therefore, any plan for a lunar base must come down to harvesting local resources to meet the needs of its crews as much as possible, a process known as In-Situ Resource Utilization (ISRU). In a recent study, researchers at The Ohio State University (OSU) proposed using a specialized laser-based 3D printing method to turn lunar regolith into hardened building material. According to their findings, this method can produce durable structures that withstand radiation and other harsh conditions on the lunar surface.

The research team was led by Sizhe Xu, a graduate research associate at OSU. He was joined by colleagues from OSU’s Department of Integrated Systems Engineering, Mechanical and Aerospace Engineering, and Materials Science & Engineering. Their paper, “Laser directed energy deposition additive manufacturing of lunar highland regolith simulant,” appeared in the journal Acta Astronautica.

Challenges of Lunar 3D Printing

The importance of ISRU for human exploration has prompted the rapid development of additive manufacturing systems, or 3D printing. These systems have proven effective at fabricating tools, structures, and habitats, effectively reducing dependence on supplies delivered from Earth. Developing such systems for long-duration missions is one of the most challenging aspects of the process, as they must be engineered to operate in the extreme environment on the Moon. This includes the lack of an atmosphere, massive temperature variations, and the ever-present problem of Moon dust.

Scientists use two types of lunar regolith for their experiments and research: Lunar Highlands Simulant (LHS-1) and Lunar Mare Simulant (LMS-1). As part of their research, the team used LHS-1, which is rich in basaltic minerals, similar to rock samples obtained by the Apollo missions. They melted this regolith with a laser to produce layers of material and fused them onto a base surface of stainless steel or glass. To assess how well these objects would fare in the lunar environment, the team tested their fabrication process under a range of different environmental conditions.

One thing they noticed was that the fused regolith adhered well to aluminosilicate ceramic, possibly because the two compounds form crystals that enhance heat resistance and mechanical strength. This revealed that the overall quality of the printed material is largely dependent on the surface onto which the regolith is printed. Other environmental factors, such as atmospheric oxygen levels, laser power, and printing speed, also affected the stability of the printed material.

Where 3D-Printed Material Could Help

Deployed to the Moon’s surface, this process could help build habitats and tools that are strong, resilient, and capable of handling the lunar environment. This has the added benefit of increasing independence from Earth, which is key to realizing long-duration missions on the Moon. In addition to assisting astronauts exploring the Moon in the near future (as part of NASA’s Artemis Program), this technology could also lead to resilient habitats that will enable a long-term human presence on the Moon, Mars, and beyond.

However, there are several unknown environmental factors that could limit the effectiveness of these systems on other worlds, and more data is needed before they can be addressed. In their study, the team suggests that instead of being powered by electricity, future scaled-up versions of their method could rely on solar or hybrid power systems. Nevertheless, the potential for space exploration is clear, and the technology also has applications for life here on Earth. Sarah Wolff, an assistant professor in mechanical and aerospace engineering and a lead author on the study, explained:

There are conditions that happen in space that are really hard to emulate in a simulant. It may work in the lab, but in a resource-scarce environment, you have to try everything to maximize the flexibility of a machine for different scenarios. If we can successfully manufacture things in space using very few resources, that means we can also achieve better sustainability on Earth. To that end, improving the machine’s flexibility for different scenarios is a goal we’re working really hard toward.

As the saying goes, “solving for space solves for Earth.” In environments where materials and resources are limited, laser-based 3D printing is one of several technologies that could support sustainable living. This applies equally to extraterrestrial environments and to regions on Earth experiencing the effects of climate change.

Reference: https://ift.tt/J4YI3yP

Friday, March 6, 2026

Amazon appears to be down, with over 20,000 reported problems


Based on over 20,000 reports, Amazon appears to be experiencing an outage.

According to Downdetector, reports of problems started increasing at 1:41 pm ET today. By 2:26 pm ET, Downdetector had received 18,320 reports of problems with Amazon’s website. The number of complaints peaked at 20,804 at 3:32 pm ET. There have also been a smaller number of complaints about Amazon Prime Video and Amazon Web Services.

As of this writing, Amazon hasn’t confirmed any specific problems. However, an Amazon support account on X said at 3:02 pm ET today that “some customers may be experiencing issues” and that Amazon is working “to resolve the issue.”


Reference: https://ift.tt/A9GDuYR
