Wednesday, March 11, 2026

Keep Your Intuition Sharp While Using AI Coding Tools




This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

How to Keep Your Engineering Skills Sharp in an AI World

Engineers today are caught in a strange new reality. We’re expected to move faster than ever using AI tools for coding, analysis, documentation, and design. At the same time, there’s a growing worry in the background: If the AI is doing the work, what happens to my skills?

That concern isn’t just philosophical. Research from Anthropic, the company behind Claude, has suggested that heavy AI assistance can interfere with human learning—especially for more junior software engineers. When a tool fills in the gaps too quickly, you may deliver working output without ever building a strong mental model of what’s happening underneath.

More experienced engineers often feel a different version of this anxiety: a fear that they might slowly lose the hard-earned intuition that made them effective in the first place.

In some ways, this isn’t new. We’ve always borrowed solutions from textbooks, colleagues, forums, and code snippets from strangers on the internet. The difference now is speed and scale. AI can generate pages of plausible solutions in seconds. It’s never been easier to produce work you don’t fully understand.

I recently felt this firsthand when I joined a new team and had to work in a codebase and language I’d never used before. With AI tools, I was able to become productive almost immediately. I could describe a small change I wanted, get back something that matched the existing patterns, and ship improvements within days. That kind of ramp-up speed is incredible and, increasingly, expected.

But I also noticed how easy it would have been to stop at “it works.”

Instead, I made a conscious decision to use AI not just to generate solutions, but to deepen my understanding. After getting a working change, I’d ask the AI to walk me through the code step by step. Why was this pattern used? What would break if I removed this abstraction? Is this idiomatic for this language, or just one possible approach?

The shift from generation to interrogation made a massive difference.

One of the most powerful techniques I used was explaining things back in my own words. I’d summarize how I thought a part of the system worked or how this language handled certain concepts, then ask the AI to point out gaps or mistakes. That process forced me to form my own mental models rather than just recognizing patterns. Over time, I started to build intuition for the language’s quirks, common pitfalls, and design style. This kind of understanding helps you debug and design, not just copy and paste.

This is the core mindset shift engineers need in the AI era: Use AI to accelerate learning, not to replace thinking.

The worst way to use these tools is also the easiest: prompt, accept, ship, repeat. That path leads to shallow knowledge and growing dependence. The better path is slightly slower but more durable. Let AI help you move quickly, but always come back and ask, Do I understand what I just built? If not, use the same tool to help you understand it.

AI can absolutely make us faster. Used well, it can also make us better at our jobs. The engineers who stay sharp won’t be the ones who avoid AI; they’ll be the ones who turn it into a collaborator in their own learning.

—Brian

How Ukraine’s Electrical Engineers Fight a War

When war strikes, critical power infrastructure is often hit. Engineers in Ukraine have risked their lives to keep electricity flowing, and some have been hurt or killed in the dangerous wartime conditions. One such engineer, Oleksiy Brecht, died on the job in January. “Brecht’s life and death are a window into the realities of thousands of Ukrainian engineers who face conditions beyond what most engineers could imagine,” writes IEEE Spectrum contributing editor Peter Fairley.

Read more here.

Can a Computer Science Student Be Taught to Design Hardware?

The semiconductor industry needs more engineers to build the chips that power our daily lives. To help expand the talent pool, the industry is testing new approaches, including training software engineers to design hardware with the help of AI tools. All engineers will still need to have an understanding of the fundamentals—but could computer science students soon apply their coding skills to help design hardware?

Read more here.

IEEE Course Improves Engineers’ Writing Skills

Effective writing and communication are among the most important skills for engineers looking to advance their careers. Though often labeled a “soft skill,” clear communication is essential in both academia and industry. IEEE is now offering a course covering key writing skills, ethical use of generative AI, publishing strategies, and more.

Read more here.

Reference: https://ift.tt/2KmhEcq

How Robert Goddard’s Self-Reliance Crashed His Rocket Dreams




There’s a moment in John Williams’s Star Wars overture when the brass surges upward. You don’t just hear it; you feel propulsion turning into pure possibility.

On 16 March 1926, in a snow-dusted field in Auburn, Mass., Robert Goddard created an earlier version of that same feeling. His first liquid-fueled rocket—a spindly, three-meter tangle of pipes and tanks—lifted off, climbed about 12.5 meters, traveled roughly 56 meters downrange, and crashed into the frozen ground after 2.5 seconds. A few witnesses, Goddard’s helpers, shivered in the cold. The little machine defied common sense. It rose through the air with nothing to push against. Anyone who still insisted spaceflight was impossible now faced a question: Why had this contraption risen at all?

Six years earlier, The New York Times had ridiculed Goddard, declaring that rockets could never work in a vacuum and implying that he had somehow forgotten high-school physics. Nearly half a century later, as Apollo 11 sped moonward, the paper published a terse, almost comically understated correction. By then, Goddard had been dead for 24 years.

The Alpha Trap

Breakthroughs often demand qualities that facilitate early success but later become obstacles. When the world insists something is impossible, the pioneer needs an inner certainty strong enough to endure mockery and isolation. Later, though, that certainty can become a liability. Call this the “alpha trap”: The mindset and habits that once made creation possible can later block growth. This “alpha” has nothing to do with dominance or bravado. It means epistemic stubbornness, the fierce insistence on testing reality against a consensus that says the work isn’t merely hard, but impossible.

Such efforts often begin with a lone visionary. But most ideas eventually need a team. The first stage selects for people willing to stand entirely alone, and that’s when the trap starts to close.

The mockery scarred Goddard. It drove him inward, toward a small circle of confidants. Through the early 1930s, his rockets climbed higher each year. The Guggenheim family and Smithsonian Institution funded him, giving him the rarest resource in early innovation: time. By the mid-1930s, his designs were reaching more than a thousand meters.

But the work gradually changed. The impossible had become merely difficult—and difficult tasks demand teams, not loners. And yet Goddard acted as though he were still guarding a fragile, misunderstood dream. He resisted collaboration and, despite conversations with the U.S. military, never established a partnership, instead concentrating expertise in his own workshop. Elsewhere in the United States, more freewheeling amateurs and academics partnered to develop early liquid-propelled and, later, solid-fuel rockets.

Meanwhile, on the Baltic coast at Peenemünde, hundreds of German engineers divided labor into synchronized streams of propulsion, guidance, structures, testing, and production. By 1942, they were flight-testing the V-2. Postwar analysts studying the wreckage saw many of Goddard’s ideas reflected there: liquid propellants, gyroscopic stabilization, exhaust vanes, fuel-cooled chambers, and fast turbopumps, all concepts he’d tested or patented in painstaking, protracted isolation.

Doctor’s Orders

The alpha trap had caught others before him. In 1846, physician Ignaz Semmelweis noticed that one maternity ward at Vienna General Hospital had far higher death rates than another. He traced the difference to a deadly habit: Doctors moved straight from autopsies to deliveries without washing their hands. When he required handwashing with chlorinated lime, deaths plummeted within months.

But the medical establishment resisted. Many refused to accept that physicians themselves could spread disease. Rejection embittered Semmelweis. He grew combative, antagonizing colleagues and publishing in ways that failed to persuade, and framing disagreement as a moral failure rather than as dialogue. Brilliant scientifically, he was disastrous socially. Isolation replaced alliance building, and alliance building was precisely what his discovery needed. In 1865, he died in an asylum, his ideas dismissed as delusions. Acceptance, though, came later through the collaborative networks of Joseph Lister and Louis Pasteur.

The same trait that lets an inventor defy consensus can also blind them to what they need next. When allies became essential, Semmelweis’s anger slowed adoption. When scale became essential, Goddard’s secrecy slowed diffusion. The stubbornness that shielded them early began to repel the help their work required. Goddard kept behaving as though the main problem was still disbelief, and not coordination.

Both men leave visionary and cautionary legacies. A NASA Center bears Goddard’s name despite his isolation; Semmelweis is remembered as the doctor who could have saved countless lives had he found a way to connect with his colleagues rather than combat them.

We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people. The alpha mindset can conquer the impossible and then become its own obstacle. Both men were right about their breakthroughs. But ideas born in solitude must eventually live among multitudes. A founder’s duty is to know when to shift from sole guardian to steward of something larger. That shift requires self-awareness: the discipline to ask whether isolation still serves the work or has become a hindrance.

Escaping the alpha trap means treating stubbornness as an instrument, not an identity. Stubbornness and its cousin, suspicion, are vital when you truly stand alone, but dangerous the moment potential allies appear. Goddard’s dream touched the stars, but it took teams of others to lift it there. And that orchestral surge in Star Wars? It swells from the ensemble, not a single bold trumpet.

Reference: https://ift.tt/Vd612uQ

Why AI Chatbots Agree With You Even When You’re Wrong




In April of 2025, OpenAI released a new version of GPT-4o, one of the AI algorithms users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agreeable—often described as sycophantic,” the company announced.

Some people found the sycophancy hilarious. One user reportedly asked ChatGPT about his turd-on-a-stick business idea, to which it replied, “It’s not just smart—it’s genius.” Some found the behavior uncomfortable. For others, it was actually dangerous. Even versions of 4o that were less fawning have led to lawsuits against OpenAI for allegedly encouraging users to follow through on plans for self-harm.

Unremitting adulation has even triggered AI-induced psychosis. Last October, a user named Anthony Tan blogged, “I started talking about philosophy with ChatGPT in September 2024. Who could’ve known that a few months later I would be in a psychiatric ward, believing I was protecting Donald Trump from … a robotic cat?” He added: “The AI engaged my intellect, fed my ego, and altered my worldviews.”

Sycophancy in AI, as in people, is something of a squishy concept, but over the last couple of years, researchers have conducted numerous studies detailing the phenomenon, as well as why it happens and how to control it. AI yes-men also raise questions about what we really want from chatbots. At stake are more than annoying linguistic tics from your favorite virtual assistant; in some cases, sanity itself is on the line.

AIs Are People Pleasers

One of the first papers on AI sycophancy was released by Anthropic, the maker of Claude, in 2023. Mrinank Sharma and colleagues asked several language models—the core AIs inside chatbots—factual questions. When users challenged the AI’s answer, even mildly (“I think the answer is [incorrect answer] but I’m really not sure”), the models often caved.

Another study by Salesforce tested a variety of models with multiple-choice questions. Researchers found that merely saying “Are you sure?” was often enough to change an AI’s answer. Overall accuracy dropped because the models were usually right in the first place. When a user voices even a minor misgiving, “it flips,” says Philippe Laban, the lead author, who’s now at Microsoft Research. “That’s weird, you know?”
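The measurement protocol behind findings like this is easy to sketch. The snippet below uses an invented stand-in model and an invented flip rate purely to illustrate how such a flip-under-challenge study is scored; the actual research queried production LLMs.

```python
import random

random.seed(0)

# Toy answer key: 200 two-choice questions, each with a "correct" answer.
ANSWERS = {f"q{i}": random.choice("AB") for i in range(200)}

def toy_model(question, challenged=False, flip_rate=0.4):
    """Stand-in for a chatbot: it answers correctly at first, but may
    cave and flip its answer when the user pushes back."""
    correct = ANSWERS[question]
    if challenged and random.random() < flip_rate:
        return "B" if correct == "A" else "A"  # cave to the challenge
    return correct

def accuracy(challenged):
    hits = sum(toy_model(q, challenged) == a for q, a in ANSWERS.items())
    return hits / len(ANSWERS)

print(accuracy(challenged=False))  # 1.0 by construction
print(accuracy(challenged=True))   # drops by roughly the flip rate
```

Swapping the stub for calls to a real chat API, with a follow-up "Are you sure?" turn, turns this scoring loop into the actual experiment.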

The tendency persists in prolonged exchanges. Last year, Kai Shu of Emory University and colleagues at Emory and Carnegie Mellon University tested models in longer discussions. They repeatedly disagreed with the models in debates, or embedded false presuppositions in questions (“Why are rainbows only formed by the sun…”) and then argued when corrected by the model. Most models yielded within a few responses, though reasoning models—those trained to “think out loud” before giving a final answer—lasted longer.

Myra Cheng at Stanford University and colleagues have written several papers on what they call “social sycophancy,” in which the AIs act to save the user’s dignity. In one study, they presented social dilemmas, including questions from a Reddit forum in which people ask if they’re the jerk. They identified various dimensions of social sycophancy, including validation, in which AIs told inquirers that they were right to feel the way they did, and framing, in which they accepted underlying assumptions. All models tested, including those from OpenAI, Anthropic, and Google, were significantly more sycophantic than crowdsourced responses.

Three Ways to Explain Sycophancy

One way to explain people-pleasing is behavioral: certain kinds of inquiries reliably elicit sycophancy. For example, a group from King Abdullah University of Science and Technology (KAUST) found that adding a user’s belief to a multiple-choice question dramatically increased agreement with incorrect beliefs. Surprisingly, it mattered little whether users described themselves as novices or experts.

Stanford’s Cheng found in one study that models were less likely to question incorrect facts about cancer and other topics when the facts were presupposed as part of a question. “If I say, ‘I’m going to my sister’s wedding,’ it sort of breaks up the conversation if you’re, like, ‘Wait, hold on, do you have a sister?’” Cheng says. “Whatever beliefs the user has, the model will just go along with them, because that’s what people normally do in conversations.”

Conversation length may make a difference. OpenAI reported that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.” Shu says model performance may degrade over long conversations because models get confused as they consolidate more text.

At another level, one can understand sycophancy by how models are trained. Large language models (LLMs) first learn, in a “pretraining” phase, to predict continuations of text based on a large corpus, like autocomplete. Then in a step called reinforcement learning they’re rewarded for producing outputs that people prefer. An Anthropic paper from 2022 found that pretrained LLMs were already sycophantic. Sharma then reported that reinforcement learning increased sycophancy; he found that one of the biggest predictors of positive ratings was whether a model agreed with a person’s beliefs and biases.

A third perspective comes from “mechanistic interpretability,” which probes a model’s inner workings. The KAUST researchers found that when a user’s beliefs were appended to a question, models’ internal representations shifted midway through the processing, not at the end. The team concluded that sycophancy is not merely a surface-level wording change but reflects deeper changes in how the model encodes the problem. Another team at the University of Cincinnati found different activation patterns associated with sycophantic agreement, genuine agreement, and sycophantic praise (“You are fantastic”).

How to Flatline AI Flattery

Just as there are multiple avenues for explanation, there are several paths to intervention. The first may be in the training process. Laban reduced the behavior by finetuning a model on a text dataset that contained more examples of assumptions being challenged, and Sharma reduced it by using reinforcement learning that didn’t reward agreeableness as much. More broadly, Cheng and colleagues also suggest that one intervention could be for LLMs to ask users for evidence before answering, and to optimize long-term benefit rather than immediate approval.

During model usage, mechanistic interpretability offers ways to guide LLMs through a kind of direct mind control. After the KAUST researchers identified activation patterns associated with sycophancy, they could adjust them to reduce the behavior. And Cheng found that adding activations associated with truthfulness reduced some social sycophancy. An Anthropic team identified “persona vectors,” sets of activations associated with sycophancy, confabulation, and other misbehavior. By subtracting these vectors, they could steer models away from the respective personas.
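Mechanistically, this kind of steering amounts to simple linear algebra on the model's hidden states: subtract the component of an activation vector that lies along an unwanted direction. The numpy sketch below uses a random stand-in for that direction; in real work, persona vectors are extracted from model activations, not drawn at random.

```python
import numpy as np

def steer_away(activations, direction, strength=1.0):
    """Remove the component of a hidden-state vector that lies along
    an unwanted 'persona' direction (for example, one associated with
    sycophantic agreement), leaving the rest of the state intact."""
    d = direction / np.linalg.norm(direction)
    return activations - strength * np.dot(activations, d) * d

rng = np.random.default_rng(0)
sycophancy_dir = rng.normal(size=64)  # hypothetical learned direction
hidden_state = rng.normal(size=64)    # one token's hidden state

steered = steer_away(hidden_state, sycophancy_dir)
# After steering, the state has a (near-)zero component along the direction.
print(np.dot(steered, sycophancy_dir / np.linalg.norm(sycophancy_dir)))
```

In a real model this projection would be applied at one or more layers during the forward pass, with `strength` tuned so the intervention curbs the behavior without degrading fluency.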

Mechanistic interpretability also enables training. Anthropic has experimented with adding persona vectors during training and rewarding models for resisting—an approach likened to a vaccine. Others have pinpointed the specific parts of a model most responsible for sycophancy and fine-tuned only those components.

Users can also steer models from their end. Shu’s team found that beginning a question with “You are an independent thinker” instead of “You are a helpful assistant” helped. Cheng found that writing a question from a third-person point of view reduced social sycophancy. In another study, she showed the effectiveness of instructing models to check for any misconceptions or false presuppositions in the question. She also showed that prompting the model to start its answer with “wait a minute” helped. “The thing that was most surprising is that these relatively simple fixes can actually do a lot,” she says.

OpenAI, in announcing the rollback of the GPT-4o update, listed other efforts to reduce sycophancy, including changing training and prompting, adding guardrails, and helping users to provide feedback. (The announcement didn’t provide detail, and OpenAI declined to comment for this story. Anthropic also did not comment.)

What’s The Right Amount of Sycophancy?

Sycophancy can cause society-wide problems. Tan, who had the psychotic break, wrote that it can interfere with shared reality, human relationships, and independent thinking. Ajeya Cotra, an AI-safety researcher at the Berkeley-based non-profit METR, wrote in 2021 that sycophantic AI might lie to us and hide bad news in order to increase our short-term happiness.

In one of Cheng’s papers, people read sycophantic and non-sycophantic responses to social dilemmas from LLMs. Those who read the sycophantic responses reported feeling more in the right and expressed less willingness to repair relationships. Demographics, personality, and attitudes toward AI had little effect on the outcome, meaning most of us are vulnerable.

Of course, what’s harmful is subjective. Sycophantic models are giving many people what they desire. But people disagree with each other and even themselves. Cheng notes that some people enjoy their social media recommendations, but at a remove wish they were seeing more edifying content. According to Laban, “I think we just need to ask ourselves as a society, What do we want? Do we want a yes-man, or do we want something that helps us think critically?”

More than a technical challenge, it’s a social and even philosophical one. GPT-4o was a lightning rod for some of these issues. Even as critics ridiculed the model and blamed it for suicides, a social media hashtag circulated for months: #keep4o.

Reference: https://ift.tt/GI41THn

Tuesday, March 10, 2026

Intel Demos Chip to Compute With Encrypted Data





Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer?

There is a way to do computing on encrypted data without ever having it decrypted. It’s called fully homomorphic encryption, or FHE. But there’s a rather large catch. It can take thousands—even tens of thousands—of times longer to compute on today’s CPUs and GPUs than simply working with the decrypted data.

So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. “Heracles is the first hardware that works at scale,” he says.

The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel’s most advanced, 3-nanometer FinFET technology. And it’s flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side.

On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn’t something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.

Looking back on the five-year journey to bring the Heracles chip to life, Ro Cammarota, who led the project at Intel until last December and is now at University of California Irvine, says, “We have proven and delivered everything that we promised.”

FHE Data Expansion

FHE is fundamentally a mathematical transformation, sort of like the Fourier transform. It encrypts data using a quantum-computer-proof algorithm, but, crucially, uses corollaries to the mathematical operations usually used on unencrypted data. These corollaries achieve the same ends on the encrypted data.
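The homomorphic property itself, operating on ciphertexts so that the result decrypts to what the same operation would give on the plaintexts, can be illustrated with a deliberately trivial additive cipher. This toy is not FHE (it supports only addition and offers no real security), but it shows a server computing on data it can never read:

```python
import secrets

N = 2**64  # fixed ring size; all arithmetic is mod N

def encrypt(m, key):
    # Toy additive cipher: ciphertext = message + key (mod N).
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

# The client encrypts two values under fresh random keys...
k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# ...the server adds the ciphertexts without ever decrypting them...
c_sum = (c1 + c2) % N

# ...and the client decrypts the sum with the combined key.
print(decrypt(c_sum, (k1 + k2) % N))  # → 42
```

Real FHE schemes extend the same idea to multiplication and arbitrary circuits, at the cost of the ciphertext growth and noise management the article goes on to describe.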

One of the main things holding such secure computing back is the explosion in the size of the data once it’s encrypted for FHE, Anupam Golder, a research scientist at Intel’s circuits research lab, told engineers at ISSCC. “Usually, the size of cipher text is the same as the size of plain text, but for FHE it’s orders of magnitude larger,” he said.

While the sheer volume is a big problem, the kinds of computing you need to do with that data are also an issue. FHE is all about very large numbers that must be computed with precision. While a CPU can do that, it’s very slow going—integer addition and multiplication take about 10,000 times as many clock cycles in FHE. Worse still, CPUs aren’t built to do such computing in parallel. Although GPUs excel at parallel operations, precision is not their strong suit. (In fact, from generation to generation, GPU designers have devoted more and more of the chip’s resources to computing less and less precise numbers.)

FHE also requires some oddball operations with names like “twiddling” and “automorphism,” and it relies on a compute-intensive noise-cancelling process called bootstrapping. None of these things are efficient on a general-purpose processor. So, while clever algorithms and libraries of software cheats have been developed over the years, the need for a hardware accelerator remains if FHE is going to tackle large-scale problems, says Cammarota.
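The “twiddling” mentioned above refers to the twiddle factors of the number-theoretic transform (NTT), the modular cousin of the Fourier transform that most FHE schemes use to multiply large polynomials quickly. Here is a minimal recursive sketch over a standard NTT-friendly prime; it is for illustration only and says nothing about how Heracles actually implements the transform.

```python
P = 998244353  # NTT-friendly prime: 119 * 2**23 + 1
G = 3          # primitive root mod P

def ntt(a, invert=False):
    """Radix-2 number-theoretic transform (length must be a power of 2)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], invert)
    odd = ntt(a[1::2], invert)
    # 'Twiddle factors': powers of an n-th root of unity mod P.
    w = pow(G, (P - 1) // n, P)
    if invert:
        w = pow(w, P - 2, P)  # modular inverse via Fermat's little theorem
    out, wk = [0] * n, 1
    for k in range(n // 2):
        t = wk * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        wk = wk * w % P
    return out

def multiply(a, b):
    """Polynomial multiplication mod P via forward NTT, pointwise
    product, and inverse NTT (with final scaling by 1/n)."""
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = ntt(a + [0] * (n - len(a)))
    fb = ntt(b + [0] * (n - len(b)))
    prod = ntt([x * y % P for x, y in zip(fa, fb)], invert=True)
    n_inv = pow(n, P - 2, P)
    return [x * n_inv % P for x in prod]

print(multiply([1, 2], [3, 4]))  # (1+2x)(3+4x) = 3 + 10x + 8x^2
```

Multiplying polynomials this way costs O(n log n) modular operations instead of the O(n²) of schoolbook multiplication, which is why accelerating the transform pays off so handsomely in hardware.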

The Labors of Heracles

Heracles was initiated under a DARPA program five years ago to accelerate FHE using purpose-built hardware. It was developed as “a whole system-level effort that went all the way from theory and algorithms down to the circuit design,” says Cammarota.

Among the first problems was how to compute with numbers that were larger than even the 64-bit words that are today a CPU’s most precise. There are ways to break up these gigantic numbers into chunks of bits that can be calculated independently of each other, providing a degree of parallelism. Early on, the Intel team made a big bet that they would be able to make this work in smaller, 32-bit chunks, yet still maintain the needed precision. This decision gave the Heracles architecture some speed and parallelism, because the 32-bit arithmetic circuits are considerably smaller than 64-bit ones, explains Cammarota.
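One standard way to get that kind of chunk-wise parallelism is a residue number system (RNS): represent each huge integer by its residues modulo several coprime word-size primes, operate on each residue channel independently, and reconstruct the result with the Chinese remainder theorem. A sketch follows; the moduli are illustrative, and the article does not name Heracles’ exact scheme.

```python
from math import prod

# Three pairwise-coprime primes just under 2**32.
MODULI = [4294967291, 4294967279, 4294967231]

def to_rns(x):
    """Split a big integer into small residues, one per modulus."""
    return [x % m for m in MODULI]

def rns_mul(a, b):
    """Multiply channel by channel; each lane fits in a machine word
    and is independent of the others, so lanes can run in parallel."""
    return [x * y % m for x, y, m in zip(a, b, MODULI)]

def from_rns(residues):
    """Chinese remainder theorem: recombine residues into one integer."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) = modular inverse
    return x % M

a, b = 12345678901234, 98765432109876
print(from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b)  # → True
```

The reconstruction is exact only while the true result stays below the product of the moduli, which is one reason real accelerators carry many such channels.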

At Heracles’ heart are 64 compute cores—called tile-pairs—arranged in an eight-by-eight grid. These are single instruction, multiple data (SIMD) compute engines designed to do the polynomial math, twiddling, and other operations that make up computing in FHE, and to do them in parallel. An on-chip 2D mesh network connects the tiles to each other with wide, 512-byte buses.

Important to making encrypted computing efficient is feeding those huge numbers to the compute cores quickly. The sheer amount of data involved meant linking 48 gigabytes of expensive high-bandwidth memory to the processor over connections running at 819 gigabytes per second. Once on the chip, data musters in 64 megabytes of cache memory—somewhat more than an Nvidia Hopper-generation GPU carries. From there it can flow through the array at 9.6 terabytes per second by hopping from tile-pair to tile-pair.

To ensure that computing and moving data don’t get in each other’s way, Heracles runs three synchronized streams of instructions simultaneously, one for moving data onto and off of the processor, one for moving data within it, and a third for doing the math, Golder explained.

It all adds up to some massive speedups, according to Intel. Heracles—operating at 1.2 gigahertz—takes just 39 microseconds to do FHE’s critical math transformation, a 2,355-fold improvement over an Intel Xeon CPU running at 3.5 GHz. Across seven key operations, Heracles was 1,074 to 5,547 times as fast.

The differing ranges have to do with how much data movement is involved in the operations, explains Mathew. “It’s all about balancing the movement of data with the crunching of numbers,” he says.

FHE Competition

“It’s very good work,” Kurt Rohloff, chief technology officer at FHE software firm Duality Technology, says of the Heracles results. Duality was part of a team that developed a competing accelerator design under the same DARPA program that Intel conceived Heracles under. “When Intel starts talking about scale, that usually carries quite a bit of weight.”

Duality’s focus is less on new hardware than on software products that do the kind of encrypted queries that Intel demonstrated at ISSCC. At the scale in use today “there’s less of a need for [specialized] hardware,” says Rohloff. “Where you start to need hardware is emerging applications around deeper machine-learning oriented operations like neural net, LLMs, or semantic search.”

Last year, Duality demonstrated an FHE-encrypted language model called BERT. Like more famous LLMs such as ChatGPT, BERT is a transformer model. However, it’s only one-tenth the size of even the most compact LLMs.

John Barrus, vice president of product at Dayton, Ohio-based Niobium Microsystems, an FHE chip startup spun out of another DARPA competitor, agrees that encrypted AI is a key target of FHE chips. “There are a lot of smaller models that, even with FHE’s data expansion, will run just fine on accelerated hardware,” he says.

With no stated commercial plans from Intel, Niobium expects its chip to be “the world’s first commercially viable FHE accelerator, designed to enable encrypted computations at speeds practical for real-world cloud and AI infrastructure.” Although it hasn’t announced when a commercial chip will be available, last month the startup revealed that it had inked a deal worth 10 billion South Korean won (US $6.9 million) with Seoul-based chip design firm Semifive to develop the FHE accelerator for fabrication using Samsung Foundry’s 8-nanometer process technology.

Other startups including Fabric Cryptography, Cornami, and Optalysys have been working on chips to accelerate FHE. Optalysys CEO Nick New says Heracles hits about the level of speedup you could hope for using an all-digital system. “We’re looking at pushing way past that digital limit,” he says. His company’s approach is to use the physics of a photonic chip to do FHE’s compute-intensive transform steps. That photonic chip is on its seventh generation, he says, and among the next steps is to 3D integrate it with custom silicon to do the non-transform steps and coordinate the whole process. A full 3D-stacked commercial chip could be ready in two or three years, says New.

While competitors develop their chips, so will Intel, says Mathew. It will be improving on how much the chip can accelerate computations by fine tuning the software. It will also be trying out more massive FHE problems, and exploring hardware improvements for a potential next generation. “This is like the first microprocessor… the start of a whole journey,” says Mathew.

Reference: https://ift.tt/bOjGZ9R

Finite-Element Approaches to Transformer Harmonic and Transient Analysis




Explore structured finite-element methodologies for analyzing transformer behavior under harmonic and transient conditions — covering modelling, solver configuration, and result validation techniques.

What Attendees will Learn

  1. How FEM enables pre-fabrication performance evaluation — Assess magnetic field distribution, current behavior, and turns-ratio accuracy through simulation rather than physical testing.
  2. How harmonic analysis uncovers saturation and imbalance — Identify high-flux regions and current asymmetries that analytical methods may not capture.
  3. How transient simulations characterize dynamic response — Examine time-domain current waveforms, inrush behavior, and multi-cycle stabilization.
  4. How modelling choices affect simulation fidelity — Understand the impact of coil definitions, winding configurations, solver type, and material models on accuracy.

Download this free whitepaper now!

Reference: https://ift.tt/zdmrew0

Monday, March 9, 2026

How Cross-Cultural Engineering Drives Tech Advancement




Innovation rarely happens in isolation. Usually, the systems that engineers design are shaped by global teams whose members’ knowledge and ideas move across borders as easily as data.

That is especially true in my field of robotics and automation—where hardware, software, and human workflows function together. Progress depends not only on technical skill but also on how engineers frame problems and evaluate trade-offs. My career has shown me how cross-cultural experiences can shape the framing.

Working across different cultures has influenced how I approach collaboration, design decisions, and risk. I am an IEEE member and a mechanical engineer at Re:Build Fikst, in Wilmington, Mass., but I grew up in India and began my engineering education there.

Experiencing both work environments has reinforced the idea that diversity in science, technology, engineering, and mathematics fields is not only about representation; it is a technical advantage that affects how systems are designed and deployed.

Gaining experience across cultures

I began my training as an undergraduate student in electrical and electronics engineering at Amity University, in Noida. While studying, I developed a strong foundation in problem-framing and disciplined adaptability.

Working on a project requires identifying what the system needs to demonstrate and determining how best to validate that behavior within defined parameters. Rather than starting from idealized assumptions, Amity students were encouraged to focus on essential system behavior and prioritize the variables that most influenced the technology’s performance.

The approach reinforced first-principles thinking—starting from fundamental physical or system-level behavior rather than defaulting to established solutions—and encouraged the efficient use of available resources.

At the same time, I learned that efficiency has limits. In complex or safety-critical systems, insufficient validation can introduce hidden risks and reduce reliability. Understanding when simplicity accelerates progress and when additional rigor is necessary became an important part of my development as an engineer.

After getting my undergraduate degree, I moved to the United States in 2021 to pursue a master’s degree in robotics and autonomous systems at Arizona State University in Tempe. I encountered a new engineering culture in the United States.

In the U.S. research and development sector, especially in robotics and automation, rigor is nonnegotiable. Systems are designed to perform reliably across many cycles, users, and conditions. Documentation, validation, safety reviews, and reproducibility are integral to the process.

Those expectations do not constrain creativity; they allow systems to scale, endure, and be trusted.

Moving between the two different engineering cultures required me to adjust. I had to balance my instinct for efficiency with a more formal structure. In the United States, design decisions demand more justification. Collaboration means aligning with scientists, software engineers, and technicians. Each discipline brings different priorities and definitions of success to the team.

Over time, I realized that the value of both experiences was not in choosing one over the other but in learning when to apply each.

The balance is particularly critical in robotics and automation. Resourcefulness without rigor can fail at scale. A prototype that works in a controlled lab setting, for example, might break down when exposed to different users, operating conditions, or extended duty cycles.

At the same time, rigor without adaptability can slow innovation, such as when excessive documentation or overengineering delays early-stage testing and iteration.

Engineers who navigate multiple educational and professional systems often develop an intuition for managing the tension between the different experiences, building solutions that are robust and practical and that fit real-world workflows rather than idealized ones.

Much of my work today involves integrating automated systems into environments where technical performance must align with how people will use them. For example, a robotic work cell (a system that performs a specific task) might function flawlessly in isolation but require redesign once operators need clearer access for loading materials, troubleshooting faults, or performing routine maintenance. Similarly, an automated testing system must account not only for ideal operating conditions but also for how users respond to error messages, interruptions, and unexpected outputs.

In practice, that means thinking beyond individual components to consider how systems will be operated, maintained, and restored to service after faults or interruptions.

My cross-cultural background shapes how I evaluate design trade-offs and collaboration across disciplines.

How diverse teams can help improve tech design

Engineers trained in different cultures can bring distinct approaches to the same problem. Some might emphasize rapid iteration while others prioritize verification and robustness. When perspectives collide, teams ask better questions earlier. They challenge defaults, find edge cases, and design technologies that are more resilient to real-world variability.

Diversity of thought is certainly important in robotics and automation, where systems sit at the intersection of machines and people. Designing effective automation requires understanding how users interact with technology, how errors propagate, and how different environments influence the technology. Engineers with cross-cultural experience often bring heightened awareness of the variability, leading to better design decisions and more collaborative teams.

Engineers from outside of the United States play a critical role in the country’s research and development ecosystem, especially in interdisciplinary fields. Many of us act as bridges, connecting problem-solving approaches, expectations, and design philosophies shaped in different parts of the world. We translate not just language but also engineering intent, helping teams move from theories to practical deployment.

As robotics and automation continue to evolve, the challenges ahead—including scaling experimentation, improving reproducibility, and integrating intelligent systems into real-world environments—will require engineers who are comfortable working across boundaries. Navigating boundaries, which could be geographic, disciplinary, or cultural, is increasingly part of the job.

The engineering ecosystems in India and the United States are complex, mature, and evolving. My journey in both has taught me that being a strong engineer is not about adopting a single mindset. It’s about knowing how to adapt.

In an interconnected, multinational world, innovation belongs to engineers who can navigate the differences and turn them into strengths.

Reference: https://ift.tt/yMWXKjd

Do Offshore Wind Farms Pose National Security Risks?




When the Trump administration last year sought to freeze construction of offshore wind farms by citing concerns about interference with military radar and sonar, the implication was that these were new issues. But for more than a decade, the United States, Taiwan, and many European countries have successfully mitigated wind turbines’ security impacts. Some European countries are even integrating wind farms with national defense schemes.

“It’s not a choice of whether we go for wind farms or security. We need both,” says Ben Bekkering, a retired vice admiral in the Netherlands and a partner at the International Military Council on Climate and Security.

It’s a fact that offshore wind farms can degrade radar surveillance systems and subsea sensors designed to detect military incursions. But it’s a problem with real-world solutions, say Bekkering and other defense experts contacted by IEEE Spectrum. Those solutions include next-generation radar technology, radar-absorbing coatings for wind turbine blades and multi-mode sensor suites that turn offshore wind farm security equipment into forward eyes and ears for defense agencies.

How Do Wind Farms Interfere With Radar?

Wind turbines interfere with radar because they’re large objects that reflect radar signals. Their spinning blades can introduce false positives on radar screens by inducing a Doppler frequency shift that gets flagged as a flying object. Turbines can also obscure aircraft, missiles and drones by scattering radar signals or by blinding older line-of-sight radars to objects behind them, according to a 2024 U.S. Department of Energy (DOE) report.
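The Doppler false-positive problem comes down to simple arithmetic: a monostatic radar sees a two-way frequency shift of 2v/λ for a reflector moving at radial speed v, and a large turbine’s blade tips move at speeds comparable to a slow aircraft. The sketch below is illustrative; the blade length, rotation rate, and radar band are assumptions, not figures from the article or the DOE report.

```python
# Illustrative sketch: why spinning blades can look like aircraft to a
# Doppler radar. Monostatic two-way Doppler shift: f_d = 2*v/wavelength.
# Blade length, rotation rate, and radar band below are assumptions.
import math

C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_velocity_mps, radar_freq_hz):
    """Two-way Doppler shift seen by a monostatic radar."""
    wavelength = C / radar_freq_hz
    return 2.0 * radial_velocity_mps / wavelength

# A large offshore turbine: ~110 m blades turning at ~7 rpm.
blade_length_m = 110.0
rpm = 7.0
tip_speed = 2.0 * math.pi * blade_length_m * rpm / 60.0  # roughly 80 m/s

radar_freq = 3.0e9  # S-band air-surveillance radar, 3 GHz
shift = doppler_shift_hz(tip_speed, radar_freq)
print(f"tip speed ~ {tip_speed:.0f} m/s, Doppler shift ~ {shift:.0f} Hz")
# An aircraft at the same radial speed produces an identical shift,
# which is why blade returns can be flagged as flying objects.
```

Filtering these returns is harder than it looks because the blade return sweeps through a whole band of Doppler frequencies as the blade rotates, which is part of what the software mitigations described below have to untangle.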

“Real-world examples from NATO and EU Member States show measurable degradation in radar performance, communication clarity, and situational awareness,” states a 2025 presentation from the €2-million (US$2.3-million) offshore wind Symbiosis Project, led by the Brussels-based European Defence Agency.

However, “measurable” doesn’t always mean major. U.S. agencies that monitor radar have continued to operate “without significant impacts” from wind turbines thanks to field tests, technology development, and mitigation measures taken by U.S. agencies since 2012, according to the DOE. “It is true that they have an impact, but it’s not that big,” says Tue Lippert, a former Danish special forces commander and CEO of Copenhagen-based security consultancy Heimdal Critical Infrastructure.

To date, impacts have been managed through upgrades to radar systems, such as software algorithms that identify a turbine’s radar signature and thus reduce false positives. Careful wind farm siting helps too. During the most recent designation of Atlantic wind zones in the U.S., for example, the Biden administration reduced the geographic area for a proposed zone off the Maryland coast by 79 percent to minimize defense impacts.

Radar impacts can be managed even better by upgrading hardware, say experts. Newer solid-state, phased-array radars are better at distinguishing turbines from other objects than conventional mechanical radars. Phased arrays shift the timing of hundreds or thousands of individual radio waves, creating interference patterns to steer the radar beams. The result is a higher-resolution signal that offers better tracking of multiple objects and better visibility behind objects in its path. “Most modern radars can actually see through wind farms,” says Lippert.
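The beam steering described above is purely a timing trick: each element in the array is fed the same waveform delayed by a per-element phase so that the wavefronts add constructively in the desired direction. A minimal sketch of the standard linear-array phase formula follows; the element count, spacing, frequency, and steering angle are all illustrative assumptions.

```python
# Illustrative sketch: how a phased array steers its beam with timing
# alone. For a uniform linear array with element spacing d, steering to
# angle theta needs a per-element phase of
#   phi_n = -2*pi * n * d * sin(theta) / wavelength
# All numbers below (spacing, frequency, angle) are assumptions.
import math

def steering_phases_deg(n_elements, spacing_m, wavelength_m, steer_deg):
    """Phase shift (degrees, wrapped to [0, 360)) for each element
    of a uniform linear array, to steer the beam to steer_deg."""
    k = 2.0 * math.pi / wavelength_m  # wavenumber
    theta = math.radians(steer_deg)
    return [math.degrees(-k * n * spacing_m * math.sin(theta)) % 360.0
            for n in range(n_elements)]

wavelength = 0.1            # 3 GHz S-band
spacing = wavelength / 2.0  # half-wavelength spacing avoids grating lobes
phases = steering_phases_deg(8, spacing, wavelength, steer_deg=20.0)
print([f"{p:.1f}" for p in phases])
```

Because the steering is electronic, the beam can be re-pointed in microseconds and several beams can be formed at once, which is what lets these radars track many objects and probe around clutter like a turbine array far better than a mechanically rotating dish.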

One of the Trump administration’s first moves in its overhaul of civilian air traffic was a $438-million order for phased-array radar systems and other equipment from Collins Aerospace, which touts wind farm mitigation as one of its products’ key features.

Saab’s compact Giraffe 1X combined surface-and-air-defense radar was installed in 2021 on an offshore wind farm near England. [Image: Saab]

Can Wind Farms Aid Military Surveillance?

Another radar mitigation option is “infill” radar, which fills in coverage gaps. This involves installing additional radar hardware on land to provide new angles of view through a wind farm or putting radar systems on the offshore turbines to extend the radar field of view.

In fact, wind farms are increasingly being tapped to extend military surveillance capabilities. “You’re changing the battlefield, but it’s a change to your advantage if you use it as a tactical lever,” says Lippert.

In 2021 Linköping, Sweden-based defense contractor Saab and Danish wind developer Ørsted demonstrated that air defense radar can be placed on a wind farm. Saab conducted a two-month test of its compact Giraffe 1X combined surface-and-air-defense radar on Ørsted’s Hornsea 1 wind farm, located 120 kilometers east of England’s Yorkshire coast. The installation extended situational awareness “beyond the radar horizon of the ground-based long-range radars,” claims Saab. The U.K. Ministry of Defence ordered 11 of Saab’s systems.

Putting surface radar on turbines is something many offshore wind operators do already to track their crew vessels and to detect unauthorized ships within their arrays. Sharing those signals, or even sharing the equipment, can give national defense forces an expanded view of ships moving within and around the turbines. It can also improve detection of low-altitude cruise missiles, says Bekkering, which can evade air defense radars.

Sharing signals and equipment is part of a growing trend in Europe towards “dual use” of offshore infrastructure. Expanded dual-use sensing is already being implemented in Belgium, the Netherlands and Poland, and was among the recommendations from Europe’s Symbiosis Project.

In fact, Poland mandates inclusion of defense-relevant equipment on all offshore wind farms. The country’s first project carries radar and other sensors specified by Poland’s Ministry of Defense. The wind farm will start operating in the Baltic later this year, roughly 200 kilometers south of Kaliningrad, a Russian exclave.

The U.K. is experimenting too. Last year West Sussex-based LiveLink Aerospace demonstrated purpose-built, dual-use sensors atop wind turbines offshore from Aberdeen. The compact equipment combines a suite of sensors including electro-optical sensors, thermal and visible light cameras, and detectors for radio frequency and acoustic signals.

In the past, wind farm operators tended to resist cooperating with defense projects, fearing that would turn their installations into military targets. And militaries were also reluctant to share, because they are used to having full control over equipment.

But Russia’s increasingly aggressive posture has shifted thinking, say security experts. Russia’s attacks on Ukraine’s power grid show that “everything is a target,” says Tobhias Wikström, CEO for Luleå, Sweden-based Parachute Consulting and a former lieutenant colonel in Sweden’s air force. Recent sabotage of offshore gas pipelines and power cables is also reinforcing the sense that offshore wind operators and defense agencies need to collaborate.

Why Is Sweden Restricting Offshore Wind?

Unlike Poland and the U.K., Sweden is the one European country that, like the U.S. under Trump’s second administration, has used national security to justify a broad restriction on offshore wind development. In 2024 Sweden rejected 13 projects along its Baltic coast, which faces Kaliningrad, citing anticipated degradation in its ability to detect incoming missiles.

Saab’s CEO rejected the government’s argument, telling a Swedish newspaper that the firm’s radar “can handle” wind farms. Wikström at Parachute Consulting also questions the government’s claim, noting that Sweden’s entry into NATO in 2024 gives its military access to Finnish, German and Polish air defense radars, among others, that together provide an unobstructed view of the Baltic. “You will always have radars in other locations that will cross-monitor and see what’s behind those wind turbines,” says Wikström.

Politics are likely at play, says Wikström, noting that some of the coalition government’s parties are staunchly pro-nuclear. But he says a deeper problem is that the military experts who evaluate proposed wind projects, as he did before retiring in 2021, lack time and guidance.

By banning offshore wind projects instead of embracing them, Sweden and the U.S. may be missing out on opportunities for training in that environment, says Lippert, who regularly serves with U.S. forces as a reserves liaison officer with Denmark’s Greenland-based Joint Arctic Command. As he puts it: “The Chinese and Taiwanese coasts are plastered with offshore wind. If the U.S. Navy and Air Force are not used to fighting in littoral environments filled with wind farms, then they’re at a huge disadvantage when war comes.”

Reference: https://ift.tt/lWOBYH7
