Monday, March 9, 2026

How Cross-Cultural Engineering Drives Tech Advancement




Innovation rarely happens in isolation. Usually, the systems that engineers design are shaped by global teams whose members’ knowledge and ideas move across borders as easily as data.

That is especially true in my field of robotics and automation—where hardware, software, and human workflows function together. Progress depends not only on technical skill but also on how engineers frame problems and evaluate trade-offs. My career has shown me how cross-cultural experiences can shape that framing.

Working across different cultures has influenced how I approach collaboration, design decisions, and risk. I am an IEEE member and a mechanical engineer at Re:Build Fikst, in Wilmington, Mass., but I grew up in India and began my engineering education there.

Experiencing both work environments has reinforced the idea that diversity in science, technology, engineering, and mathematics fields is not only about representation; it is a technical advantage that affects how systems are designed and deployed.

Gaining experience across cultures

I began my training as an undergraduate student in electrical and electronics engineering at Amity University, in Noida. While studying, I developed a strong foundation in problem-framing and disciplined adaptability.

Working on a project requires identifying what the system needs to demonstrate and determining how best to validate that behavior within defined parameters. Rather than starting from idealized assumptions, Amity students were encouraged to focus on essential system behavior and prioritize the variables that most influenced the technology’s performance.

The approach reinforced first-principles thinking—starting from fundamental physical or system-level behavior rather than defaulting to established solutions—and encouraged the efficient use of available resources.

At the same time, I learned that efficiency has limits. In complex or safety-critical systems, insufficient validation can introduce hidden risks and reduce reliability. Understanding when simplicity accelerates progress and when additional rigor is necessary became an important part of my development as an engineer.

After getting my undergraduate degree, I moved to the United States in 2021 to pursue a master’s degree in robotics and autonomous systems at Arizona State University in Tempe. I encountered a new engineering culture in the United States.

In the U.S. research and development sector, especially in robotics and automation, rigor is nonnegotiable. Systems are designed to perform reliably across many cycles, users, and conditions. Documentation, validation, safety reviews, and reproducibility are integral to the process.

Those expectations do not constrain creativity; they allow systems to scale, endure, and be trusted.

Moving between the two different engineering cultures required me to adjust. I had to balance my instinct for efficiency with a more formal structure. In the United States, design decisions demand more justification. Collaboration means aligning with scientists, software engineers, and technicians. Each discipline brings different priorities and definitions of success to the team.

Over time, I realized that the value of both experiences was not in choosing one over the other but in learning when to apply each.

The balance is particularly critical in robotics and automation. Resourcefulness without rigor can fail at scale. A prototype that works in a controlled lab setting, for example, might break down when exposed to different users, operating conditions, or extended duty cycles.

At the same time, rigor without adaptability can slow innovation, such as when excessive documentation or overengineering delays early-stage testing and iteration.

Engineers who navigate multiple educational and professional systems often develop an intuition for managing the tension between the different experiences, building solutions that are robust and practical and that fit real-world workflows rather than idealized ones.

Much of my work today involves integrating automated systems into environments where technical performance must align with how people will use them. For example, a robotic work cell (a system that performs a specific task) might function flawlessly in isolation but require redesign once operators need clearer access for loading materials, troubleshooting faults, or performing routine maintenance. Similarly, an automated testing system must account not only for ideal operating conditions but also for how users respond to error messages, interruptions, and unexpected outputs.

In practice, that means thinking beyond individual components to consider how systems will be operated, maintained, and restored to service after faults or interruptions.

My cross-cultural background shapes how I evaluate design trade-offs and collaboration across disciplines.

How diverse teams can help improve tech design

Engineers trained in different cultures can bring distinct approaches to the same problem. Some might emphasize rapid iteration while others prioritize verification and robustness. When perspectives collide, teams ask better questions earlier. They challenge defaults, find edge cases, and design technologies that are more resilient to real-world variability.

Diversity of thought is certainly important in robotics and automation, where systems sit at the intersection of machines and people. Designing effective automation requires understanding how users interact with technology, how errors propagate, and how different environments influence the technology. Engineers with cross-cultural experience often bring heightened awareness of the variability, leading to better design decisions and more collaborative teams.

Engineers from outside of the United States play a critical role in the country’s research and development ecosystem, especially in interdisciplinary fields. Many of us act as bridges, connecting problem-solving approaches, expectations, and design philosophies shaped in different parts of the world. We translate not just language but also engineering intent, helping teams move from theories to practical deployment.

As robotics and automation continue to evolve, the challenges ahead—including scaling experimentation, improving reproducibility, and integrating intelligent systems into real-world environments—will require engineers who are comfortable working across boundaries. Navigating boundaries, which could be geographic, disciplinary, or cultural, is increasingly part of the job.

The engineering ecosystems in India and the United States are complex, mature, and evolving. My journey in both has taught me that being a strong engineer is not about adopting a single mindset. It’s about knowing how to adapt.

In an interconnected, multinational world, innovation belongs to engineers who can navigate the differences and turn them into strengths.

Reference: https://ift.tt/yMWXKjd

Do Offshore Wind Farms Pose National Security Risks?




When the Trump administration last year sought to freeze construction of offshore wind farms by citing concerns about interference with military radar and sonar, the implication was that these were new issues. But for more than a decade, the United States, Taiwan, and many European countries have successfully mitigated wind turbines’ security impacts. Some European countries are even integrating wind farms with national defense schemes.

“It’s not a choice of whether we go for wind farms or security. We need both,” says Ben Bekkering, a retired vice admiral in the Netherlands and current partner of the International Military Council on Climate and Security.

It’s a fact that offshore wind farms can degrade radar surveillance systems and subsea sensors designed to detect military incursions. But it’s a problem with real-world solutions, say Bekkering and other defense experts contacted by IEEE Spectrum. Those solutions include next-generation radar technology, radar-absorbing coatings for wind turbine blades, and multi-mode sensor suites that turn offshore wind farm security equipment into forward eyes and ears for defense agencies.

How Do Wind Farms Interfere With Radar?

Wind turbines interfere with radar because they’re large objects that reflect radar signals. Their spinning blades can introduce false positives on radar screens by inducing a wavelength-shifting Doppler effect that gets flagged as a flying object. Turbines can also obscure aircraft, missiles and drones by scattering radar signals or by blinding older line-of-sight radars to objects behind them, according to a 2024 U.S. Department of Energy (DOE) report.
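The size of that Doppler effect is easy to estimate. As a rough back-of-the-envelope sketch (the radar frequency and blade tip speed below are illustrative assumptions, not figures from the DOE report), a blade tip moving at typical speeds returns the same Doppler shift as an aircraft closing at the same radial speed, which is why older processing can flag it as a flying object:

```python
# Sketch: why spinning blades can mimic moving aircraft on radar.
# For a monostatic radar, the two-way Doppler shift of a reflector
# closing at radial speed v is f_d = 2 * v / wavelength.

C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_speed_m_s: float, radar_freq_hz: float) -> float:
    """Two-way Doppler shift for a reflector closing at the given radial speed."""
    wavelength = C / radar_freq_hz
    return 2.0 * radial_speed_m_s / wavelength

s_band = 3.0e9          # assumed 3-GHz S-band surveillance radar
blade_tip = 90.0        # typical large-turbine blade tip speed, m/s
small_aircraft = 90.0   # a light aircraft at ~175 knots, m/s

print(doppler_shift_hz(blade_tip, s_band))       # 1800.0 Hz
print(doppler_shift_hz(small_aircraft, s_band))  # 1800.0 Hz -- identical signature
```

Because the two returns are numerically indistinguishable, filtering turbines out requires knowing where they are, which is what the signature-identification algorithms mentioned below do.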

“Real-world examples from NATO and EU Member States show measurable degradation in radar performance, communication clarity, and situational awareness,” states a 2025 presentation from the €2-million (US$2.3-million) offshore wind Symbiosis Project, led by the Brussels-based European Defence Agency.

However, “measurable” doesn’t always mean major. U.S. agencies that monitor radar have continued to operate “without significant impacts” from wind turbines thanks to field tests, technology development, and mitigation measures taken by U.S. agencies since 2012, according to the DOE. “It is true that they have an impact, but it’s not that big,” says Tue Lippert, a former Danish special forces commander and CEO of Copenhagen-based security consultancy Heimdal Critical Infrastructure.

To date, impacts have been managed through upgrades to radar systems, such as software algorithms that identify a turbine’s radar signature and thus reduce false positives. Careful wind farm siting helps too. During the most recent designation of Atlantic wind zones in the U.S., for example, the Biden administration reduced the geographic area for a proposed zone off the Maryland coast by 79 percent to minimize defense impacts.

Radar impacts can be managed even better by upgrading hardware, say experts. Newer solid-state, phased-array radars are better at distinguishing turbines from other objects than conventional mechanical radars. Phased arrays shift the timing of hundreds or thousands of individual radio waves, creating interference patterns to steer the radar beams. The result is a higher-resolution signal that offers better tracking of multiple objects and better visibility behind objects in its path. “Most modern radars can actually see through wind farms,” says Lippert.
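The timing trick behind phased-array steering can be sketched in a few lines. For a uniform linear array, each element's signal is offset just enough that the wavefronts add up in the desired direction; the per-element phase step is 2πd·sin(θ)/λ. The element count, spacing, and frequency here are illustrative assumptions, not parameters of any particular radar:

```python
import math

C = 3.0e8  # speed of light, m/s

def element_phases_deg(n_elements: int, spacing_m: float,
                       freq_hz: float, steer_deg: float) -> list[float]:
    """Per-element phase offsets (degrees) for a uniform linear array,
    chosen so the emitted wavefronts reinforce in the steer_deg direction."""
    wavelength = C / freq_hz
    # Extra path length between neighboring elements toward the target
    # translates into a fixed phase step per element:
    step = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength
    return [math.degrees(i * step) % 360 for i in range(n_elements)]

# Illustrative geometry: 8 elements at half-wavelength spacing (3 GHz),
# steering the beam 30 degrees off boresight.
phases = element_phases_deg(8, 0.05, 3.0e9, 30.0)
# Each element leads its neighbor by ~90 degrees of phase at this geometry.
```

Shifting those offsets electronically re-points the beam in microseconds, with no moving parts, which is what lets modern arrays track many objects and revisit the space behind a turbine quickly.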

One of the Trump administration’s first moves in its overhaul of civilian air traffic was a $438-million order for phased-array radar systems and other equipment from Collins Aerospace, which touts wind farm mitigation as one of its products’ key features.

Saab’s compact Giraffe 1X combined surface-and-air-defense radar was installed in 2021 on an offshore wind farm near England. [Photo: Saab]

Can Wind Farms Aid Military Surveillance?

Another radar mitigation option is “infill” radar, which fills in coverage gaps. This involves installing additional radar hardware on land to provide new angles of view through a wind farm or putting radar systems on the offshore turbines to extend the radar field of view.

In fact, wind farms are increasingly being tapped to extend military surveillance capabilities. “You’re changing the battlefield, but it’s a change to your advantage if you use it as a tactical lever,” says Lippert.

In 2021 Linköping, Sweden-based defense contractor Saab and Danish wind developer Ørsted demonstrated that air defense radar can be placed on a wind farm. Saab conducted a two-month test of its compact Giraffe 1X combined surface-and-air-defense radar on Ørsted’s Hornsea 1 wind farm, located 120 kilometers east of England’s Yorkshire coast. The installation extended situational awareness “beyond the radar horizon of the ground-based long-range radars,” claims Saab. The U.K. Ministry of Defence ordered 11 of Saab’s systems.

Putting surface radar on turbines is something many offshore wind operators do already to track their crew vessels and to detect unauthorized ships within their arrays. Sharing those signals, or even sharing the equipment, can give national defense forces an expanded view of ships moving within and around the turbines. It can also improve detection of low-altitude cruise missiles, which can evade air defense radars, says Bekkering.

Sharing signals and equipment is part of a growing trend in Europe towards “dual use” of offshore infrastructure. Expanded dual-use sensing is already being implemented in Belgium, the Netherlands and Poland, and was among the recommendations from Europe’s Symbiosis Project.

In fact, Poland mandates inclusion of defense-relevant equipment on all offshore wind farms. The country’s first project carries radar and other sensors specified by Poland’s Ministry of Defense. The wind farm will start operating in the Baltic later this year, roughly 200 kilometers south of Kaliningrad, a Russian exclave.

The U.K. is experimenting too. Last year West Sussex-based LiveLink Aerospace demonstrated purpose-built, dual-use sensors atop wind turbines offshore from Aberdeen. The compact equipment combines a suite of sensors including electro-optical sensors, thermal and visible light cameras, and detectors for radio frequency and acoustic signals.

In the past, wind farm operators tended to resist cooperating with defense projects, fearing that would turn their installations into military targets. And militaries were also reluctant to share, because they are used to having full control over equipment.

But Russia’s increasingly aggressive posture has shifted thinking, say security experts. Russia’s attacks on Ukraine’s power grid show that “everything is a target,” says Tobhias Wikström, CEO for Luleå, Sweden-based Parachute Consulting and a former lieutenant colonel in Sweden’s air force. Recent sabotage of offshore gas pipelines and power cables is also reinforcing the sense that offshore wind operators and defense agencies need to collaborate.

Why Is Sweden Restricting Offshore Wind?

In contrast to Poland and the U.K., Sweden is the only European country that, like the U.S. under Trump’s second administration, has used national security to justify a broad restriction on offshore wind development. In 2024 Sweden rejected 13 projects along its Baltic coast, which faces Kaliningrad, citing anticipated degradation in its ability to detect incoming missiles.

Saab’s CEO rejected the government’s argument, telling a Swedish newspaper that the firm’s radar “can handle” wind farms. Wikström at Parachute Consulting also questions the government’s claim, noting that Sweden’s entry into NATO in 2024 gives its military access to Finnish, German and Polish air defense radars, among others, that together provide an unobstructed view of the Baltic. “You will always have radars in other locations that will cross-monitor and see what’s behind those wind turbines,” says Wikström.

Politics are likely at play, says Wikström, noting that some of the coalition government’s parties are staunchly pro-nuclear. But he says a deeper problem is that the military experts who evaluate proposed wind projects, as he did before retiring in 2021, lack time and guidance.

By banning offshore wind projects instead of embracing them, Sweden and the U.S. may be missing out on opportunities for training in that environment, says Lippert, who regularly serves with U.S. forces as a reserves liaison officer with Denmark’s Greenland-based Joint Arctic Command. As he puts it: “The Chinese and Taiwanese coasts are plastered with offshore wind. If the U.S. Navy and Air Force are not used to fighting in littoral environments filled with wind farms, then they’re at a huge disadvantage when war comes.”

Reference: https://ift.tt/lWOBYH7

Sunday, March 8, 2026

Military AI Policy Needs Democratic Oversight




A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies, or Congress and the broader democratic process?

The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff.

Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. Hegseth has objected to what he has described as “ideological constraints” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a speech at Elon Musk’s SpaceX last month, “We will not employ AI models that won’t allow you to fight wars.”

Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement.

Procurement policies

In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature, or inconsistent with its values or risk tolerance, it can decline to provide them. For example, a coalition of companies has signed an open letter pledging not to weaponize general-purpose robots. That basic symmetry is a feature of the free market.

Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “supply chain risk.” That tool exists to address genuine national security vulnerabilities, such as foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms.

Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. Hegseth has declared that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract.

AI governance

It is also important to distinguish between the two substantive issues Anthropic has reportedly raised.

The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails.

To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use. In other words, the Department of Defense argues that compliance with the law is the government’s responsibility — not something that needs to be embedded in a vendor’s code.

Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of harmful or high-risk tasks, including assistance with surveillance. The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design.

The second issue, opposition to fully autonomous military targeting, is more complex.

The DOD already maintains policies requiring human judgment in the use of force, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness.

Reasonable people can disagree about where those lines should be drawn.

But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage.

If the U.S. government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public.

The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors.

There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering any say over deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens U.S. technological leadership.

The DOD is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains — from aerospace to cybersecurity — contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice.

Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation.

Congress is AWOL

The DOD should retain ultimate authority over lawful use. But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity.

At the same time, a company’s unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions.

This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy — and too consequential to be governed solely by executive discretion.

The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them.

Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing, and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect those publicly established standards.

If AI guardrails can be removed through contract pressure, they will be treated as negotiable. However, if they are grounded in law, they can become stable expectations.

Democratic constraints on military AI belong in statute and doctrine — not in private contract negotiations.

This article is adapted by the author with permission from Tech Policy Press. Read the original article.

Reference: https://ift.tt/TyWbA4B

Saturday, March 7, 2026

Laser-Based 3D Printing Could Build Future Bases on the Moon




Through the Artemis Program, NASA hopes to establish a permanent human presence on the Moon in its southern polar region. China, Russia, and the European Space Agency (ESA) have similar plans, all of which involve building bases near the permanently shadowed regions (PSRs), the water-ice-bearing craters that dot the South Pole-Aitken Basin. For these and other agencies, it is vital that these bases be as self-sufficient as possible, since resupply missions cannot be launched regularly and take several days to arrive.

This post originally appeared on Universe Today.

Therefore, any plan for a lunar base must come down to harvesting local resources to meet the needs of its crews as much as possible, a process known as In-Situ Resource Utilization (ISRU). In a recent study, researchers at The Ohio State University (OSU) proposed using a specialized laser-based 3D printing method to turn lunar regolith into hardened building material. According to their findings, this method can produce durable structures that withstand radiation and other harsh conditions on the lunar surface.

The research team was led by Sizhe Xu, a graduate research associate at OSU. He was joined by colleagues from OSU’s Department of Integrated Systems Engineering, Mechanical and Aerospace Engineering, and Materials Science & Engineering. Their paper, “Laser directed energy deposition additive manufacturing of lunar highland regolith simulant,” appeared in the journal Acta Astronautica.

Challenges of Lunar 3D Printing

The importance of ISRU for human exploration has prompted the rapid development of additive manufacturing systems, or 3D printing. These systems have proven effective at fabricating tools, structures, and habitats, effectively reducing dependence on supplies delivered from Earth. Developing such systems for long-duration missions is one of the most challenging aspects of the process, as they must be engineered to operate in the extreme environment on the Moon. This includes the lack of an atmosphere, massive temperature variations, and the ever-present problem of Moon dust.

Scientists use two types of lunar regolith for their experiments and research: Lunar Highlands Simulant (LHS-1) and Lunar Mare Simulant (LMS-1). As part of their research, the team used LHS-1, which is rich in basaltic minerals, similar to rock samples obtained by the Apollo missions. They melted this regolith with a laser to produce layers of material and fused them onto a base surface of stainless steel or glass. To assess how well these objects would fare in the lunar environment, the team tested their fabrication process under a range of different environmental conditions.

One thing they noticed was that the fused regolith adhered well to alumina-silicate ceramic, possibly because the two compounds form crystals that enhance heat resistance and mechanical strength. This revealed that the overall quality of the printed material is largely dependent on the surface onto which the regolith is printed. Other environmental factors, such as atmospheric oxygen levels, laser power, and printing speed, also affected the stability of the printed material.

Where 3D-Printed Material Could Help

Deployed to the Moon’s surface, this process could help build habitats and tools that are strong, resilient, and capable of handling the lunar environment. This has the added benefit of increasing independence from Earth, which is key to realizing long-duration missions on the Moon. In addition to assisting astronauts exploring the Moon in the near future (as part of NASA’s Artemis Program), this technology could also lead to resilient habitats that will enable a long-term human presence on the Moon, Mars, and beyond.

However, there are several unknown environmental factors that could limit the effectiveness of these systems on other worlds, and more data is needed before they can be addressed. In their study, the team suggests that instead of being powered by electricity, future scaled-up versions of their method could rely on solar or hybrid power systems. Nevertheless, the potential for space exploration is clear, and the technology also has applications for life here on Earth. Sarah Wolff, an assistant professor in mechanical and aerospace engineering and a lead author on the study, explained:

There are conditions that happen in space that are really hard to emulate in a simulant. It may work in the lab, but in a resource-scarce environment, you have to try everything to maximize the flexibility of a machine for different scenarios. If we can successfully manufacture things in space using very few resources, that means we can also achieve better sustainability on Earth. To that end, improving the machine’s flexibility for different scenarios is a goal we’re working really hard toward.

As the saying goes, “solving for space solves for Earth.” In environments where materials and resources are limited, laser-based 3D printing is one of several technologies that could support sustainable living. This applies equally to extraterrestrial environments and to regions on Earth experiencing the effects of climate change.

Reference: https://ift.tt/J4YI3yP

Friday, March 6, 2026

Amazon appears to be down, with over 20,000 reported problems


Based on over 20,000 reports, Amazon appears to be experiencing an outage.

According to Downdetector, reports of problems started increasing at 1:41 pm ET today. By 2:26 pm ET, Downdetector had received 18,320 reports of problems with Amazon’s website. The number of complaints peaked at 3:32 pm ET at 20,804. There have also been a smaller number of complaints about Amazon Prime Video and Amazon Web Services.

As of this writing, Amazon hasn’t confirmed any specific problems. However, an Amazon support account on X said at 3:02 pm ET today that “some customers may be experiencing issues” and that Amazon is working “to resolve the issue.”


Reference: https://ift.tt/A9GDuYR

The Millisecond That Could Change Cancer Treatment




Inside a cavernous hall at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist Walter Wuensch surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators.

Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.

CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.” [Photo: CERN]

Radiation therapy has been a cornerstone of cancer treatment since shortly after Wilhelm Conrad Röntgen discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver.

FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect.

At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. If ongoing experiments here and around the world continue to bear out results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care.

“It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.”

The Unlikely Birth of FLASH Therapy

The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at Institut Curie in Orsay, near Paris. Researcher Vincent Favaudon was using a low-energy electron accelerator to study radiation chemistry. Targeting the accelerator at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at doses a thousand times as high as what’s used in conventional radiation therapy, the expected fibrosis never appeared.

Puzzled, Favaudon turned to Marie-Catherine Vozenin, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at Geneva University Hospitals, in Switzerland.

How to Measure Radiation Doses


Radiation therapy uses a variety of units to refer to the amount of energy received by the patient. Here are the main ones under the International System of Units, or SI.

Gray (Gy): A measure of the absorbed dose—that is, how much radiation energy is absorbed by the body. One gray equals 1 joule of radiation energy per kilogram of matter. FLASH delivers a single dose of 40 Gy or more in a fraction of a second. Conventional radiation therapy, by contrast, may deliver a total dose of 40 to 80 Gy but over the course of several weeks.

Sievert (Sv): A measure of the effective dose—that is, the health effects of the radiation, with different types of ionizing radiation (gamma rays, X-rays, alpha particles, and so on) having different effects. One sievert equals 1 joule per kilogram weighted for the biological effectiveness of the radiation and the tissues exposed.


The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says.

They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in Science Translational Medicine. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. (The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.)


Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls Billy Loo, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.”

But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebra fish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone.

Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.

Adapting Accelerators for FLASH

At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion.

“The tumors that we treat are always moving targets,” he says. “That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.”

To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies.


They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. (Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”)

Loo turned to the SLAC National Accelerator Laboratory in Menlo Park, Calif., where physicist Sami Gamal-Eldin Tantawi was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body.

Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass.

CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine.

What’s the Best Particle for FLASH Therapy?


Even as research on FLASH radiotherapy advances, a central question remains: What kind of particle will deliver it best? The main contenders are electrons, protons, and carbon ions. Each has distinct advantages, limitations, and implications for cost, complexity, and clinical reach.

Electrons—long used to treat surface tumors and to generate X-rays—are light, nimble particles, far easier to control than protons or carbon ions. At low energies, they stop quickly in tissue, but new high-energy systems can drive electrons deeper. Now researchers are working on machines that combine multiple high-energy beams at different angles to let doctors sculpt radiation doses that match the tumor’s shape.

That principle underpins Billy Loo’s PHASER (Pluridirectional High-energy Agile Scanning Electron Radiotherapy) system, developed at Stanford and SLAC and licensed to a startup called TibaRay. An array of high-efficiency linacs generates X-ray beams from many directions at once. Their high output overcomes the inefficiency of electron-to-photon conversion to deliver the dose at FLASH speed. Beam convergence at the tumor and electronic shaping conform the dose in three dimensions, producing uniform coverage with relatively simple infrastructure.

Protons have led the way in early clinical trials, largely because existing proton therapy centers can be adapted to deliver FLASH doses. In 2020, the University of Cincinnati Health launched the first human FLASH trial to use proton beams, to treat cancer that had metastasized to bones. “If I want to be pragmatic, the proton beam is ready to go, so let’s move with what we have,” says Geneva University Hospitals’ Marie-Catherine Vozenin.

Protons can penetrate up to 30 centimeters, reaching deep-seated tumors. But the delivery of protons in a continuous beam limits the dose rates. Also, proton systems are far larger and more expensive than, say, X-ray machines, which will likely constrain their availability to specialized centers.

Carbon ions, used in a handful of elite facilities, offer even higher precision and biological effectiveness compared to electrons and protons. Their Bragg peak—a sudden deposition of energy at a specific depth—makes them appealing for deep or complex tumors. But that unmatched precision comes at a steep price, with each facility costing upward of US $300 million. —T.C.


Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH.

At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments.

The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision.

Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field flips polarity 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV.
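The timing tolerance implied by a 12-gigahertz field can be sketched in a few lines; the phase-acceptance window used here is an illustrative assumption, not a CLEAR specification.

```python
# Back-of-the-envelope timing for X-band acceleration, using the
# 12-billion-cycles-per-second figure quoted above.

F_RF = 12e9  # RF field oscillation frequency, Hz

period_s = 1.0 / F_RF    # duration of one full RF cycle
period_ps = period_s * 1e12

# Suppose a bunch must arrive within ~10 degrees of the wave crest to gain
# close to the full energy (an assumed tolerance, for illustration only).
phase_window_deg = 10.0
window_ps = period_ps * (phase_window_deg / 360.0)

print(f"RF period: {period_ps:.1f} ps")
print(f"Arrival window for {phase_window_deg:.0f} deg of phase: ~{window_ps:.2f} ps")
```

One RF cycle lasts only about 83 picoseconds, so "timing is everything" means synchronizing bunch arrival to a few picoseconds.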


Much of this architecture draws directly from the Compact Linear Collider study, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration—without the multiple passes around a ring like the LHC—CERN engineers have had to push for extremely high acceleration gradients to boost the electrons to high energies over relatively short distances—up to 100 megavolts per meter.
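The gradient argument reduces to simple arithmetic: energy gain equals gradient times active accelerating length. The sketch below uses the 100-megavolt-per-meter figure quoted above; real machines also need space for focusing, diagnostics, and RF hardware, so the total footprint is longer than the active length.

```python
# Energy gain (eV) = gradient (V/m) x active accelerating length (m).
# Figures here are illustrative, derived from the numbers quoted above.

def active_length_m(energy_ev: float, gradient_v_per_m: float) -> float:
    """Active accelerating length needed to reach a given beam energy."""
    return energy_ev / gradient_v_per_m

# CLIC's 380 GeV is the collision energy of two opposing beams,
# so each linac must supply ~190 GeV. At 100 MV/m:
length = active_length_m(190e9, 100e6)
print(f"Active accelerating length per beam: {length:.0f} m")
```

The same relation explains why a hospital-scale machine is plausible: at 100 megavolts per meter, the 140-MeV beam needed for deep tumors requires only a couple of meters of active structure.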

Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles.

The power behind the high gradients comes from CERN’s Xboxes, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; sustained continuously, that would be enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape.
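The arithmetic behind those figures is easy to check; the 5-kilowatt average household draw used below is an assumed round number, not from the source.

```python
# Sanity check on the peak-power comparison above. A microsecond pulse at
# 200 MW carries modest total energy; the "40,000 homes" comparison refers
# to the instantaneous power if it were sustained.

peak_power_w = 200e6    # 200 MW peak from the pulse compressor
pulse_length_s = 1e-6   # ~1 microsecond burst

energy_per_pulse_j = peak_power_w * pulse_length_s  # joules per pulse
home_draw_w = 5e3       # assumed ~5 kW average household draw

print(f"Energy per pulse: {energy_per_pulse_j:.0f} J")
print(f"Homes at that instantaneous power: {peak_power_w / home_draw_w:,.0f}")
```

Each burst delivers only about 200 joules of energy, which is why the system's challenge is shaping and timing the pulse rather than supplying raw energy.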

According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation.

Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact.

“A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.”

To that end, CERN researchers have teamed up with the Lausanne University Hospital (known by its French acronym, CHUV) and the French medical technology company Theryq to design a hospital facility capable of treating large and deep-seated tumors with the very short time scales needed for FLASH and scaled down to fit in a clinical setting.

Theryq’s Approach to FLASH

Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form.

“The solution that we are trying to develop here is something which is extremely versatile,” says Ludovic Le Meunier, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of the cancer these days.”

Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation. Photo: Theryq

Theryq’s push to bring FLASH radiotherapy from the lab to clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, FLASHKNiFE, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer.

More recently, Theryq launched FLASHLAB, a compact, 7-MeV platform for radiobiology research.

The company’s most ambitious system, FLASHDEEP, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by Leo Cancer Care, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager Philippe Liger.

FLASH Therapy Moves to Animal Tests

While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin.

PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the European X-ray Free-Electron Laser. Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies.

At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy. Photos: Frieder Mueller; MWFK

“The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says Anna Grebinyk, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.”

The experiments begin with zebra-fish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryos, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility.

What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptional brightness and emittance—a metric of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.”

Timing matters. At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies.

“The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.”

Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery.

FLASH as a Research Tool

Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs.

Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.”

Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people.

High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients.

The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative.

“Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. That’s why people work together.”

This article appears in the March 2026 print issue.

Reference: https://ift.tt/3KoqStd

Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)




Non-terrestrial networks (NTNs) using low Earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance.

Highlights include:

  • Modeling large satellite constellations
  • Analyzing and visualizing time-varying visibility and link closure
  • Using graphical apps for antenna analysis and RF component design
  • Modeling PAs and digital predistortion
  • Simulating interference effects in communication links
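As a small taste of the link-closure analysis listed above, here is a free-space path loss calculation for a hypothetical LEO downlink; the 550-kilometer altitude and 20-gigahertz carrier are assumed example values, not figures from the webinar.

```python
# Free-space path loss (FSPL) for an assumed LEO downlink, the first term
# in any satellite link budget. Altitude and frequency are example values.

import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# LEO satellite at 550 km altitude, Ka-band downlink at 20 GHz, satellite
# directly overhead (so the slant range equals the altitude).
loss = fspl_db(550e3, 20e9)
print(f"FSPL at zenith: {loss:.1f} dB")
```

At lower elevation angles the slant range grows well beyond the altitude, so the loss increases further; closing the link then depends on the antenna gains, PA output, and receiver sensitivity that the webinar's workflow models end to end.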
Reference: https://ift.tt/k1MK6Fh
