Tuesday, March 3, 2026

Optimizing a Battery Electric Vehicle Thermal Management System




This webinar looks at a Battery Electric Virtual Vehicle model of a mid-size BEV and uses Simulink and Simscape to facilitate design exploration, component refinement, and system-level optimization. The virtual vehicle comprises five subsystems: electric powertrain, driveline, refrigerant cycle, coolant cycle, and passenger cabin. The model will be tested under different drive cycles and cooling and heating scenarios, and the results will be analyzed to determine the impact of the different design parameters on vehicle energy consumption.

The resulting virtual vehicle will be used to:

  • Test different drive cycles and environmental conditions
  • Perform sensitivity analysis
  • Optimize model to improve thermal performance and consumption
Reference: https://ift.tt/baW7j2T

LLMs can unmask pseudonymous users at scale with surprising accuracy


Burner accounts on social media sites can increasingly be analyzed using AI to identify the pseudonymous users who post to them, according to research that has far-reaching consequences for privacy on the Internet.

The finding, from a recently published research paper, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than that of existing classical deanonymization work, which relied on humans assembling structured data sets suitable for algorithmic matching or on manual work by skilled investigators. Recall—that is, how many users were successfully deanonymized—was as high as 68 percent. Precision—meaning the rate of guesses that correctly identify the user—was up to 90 percent.
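Precision and recall as used here are the standard information-retrieval metrics. A minimal sketch of how they are computed (the account and user names below are invented for illustration):

```python
# Precision and recall for a toy deanonymization experiment.
# Ground-truth pairs and model guesses are made up for this sketch.
ground_truth = {"acct_a": "user_1", "acct_b": "user_2", "acct_c": "user_3"}
guesses = {"acct_a": "user_1", "acct_b": "user_9"}  # model declined on acct_c

# A guess is correct when it names the true user behind the account.
correct = sum(1 for acct, who in guesses.items() if ground_truth.get(acct) == who)

precision = correct / len(guesses)        # share of guesses that are right
recall = correct / len(ground_truth)      # share of users actually unmasked

print(f"precision={precision:.2f}, recall={recall:.2f}")
```

The paper's headline numbers (90 percent precision, 68 percent recall) mean the models guessed selectively but, when they did guess, were usually right.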

I know what you posted last year

The findings have the potential to upend pseudonymity, an imperfect but often sufficient privacy measure used by many people to post queries and participate in sometimes sensitive public discussions while making it hard for others to positively identify the speakers. The ability to cheaply and quickly identify the people behind such obscured accounts opens them up to doxxing, stalking, and the assembly of detailed marketing profiles that track where speakers live, what they do for a living, and other personal information. Under these techniques, that pseudonymity may no longer hold.


Reference: https://ift.tt/cdSCaKA

Monday, March 2, 2026

How Quantum Data Can Teach AI to Do Better Chemistry




Sometimes a visually compelling metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named John P. Perdew came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “Jacob’s Ladder.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.”

Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder, using increasingly more intensive mathematics and compute power, descriptions of atomic reality became more precise. And at the very top, nature was perfectly described via impossibly intensive computation—something like what God might see.

With this metaphor in mind, we propose to extend Jacob’s Ladder beyond Perdew’s version, to encompass all computational approaches to simulating the behavior of electrons. And instead of climbing rung by rung toward an unreachable summit, we have an idea to bend the ladder so that even the very top lies within our grasp. Specifically, we at Microsoft envision a hybrid approach. It starts with using quantum computers to generate exquisitely accurate data about the behavior of electrons—data that would be prohibitively expensive to compute classically. This quantum-generated data will then train AI models running on classical machines, which can predict the properties of materials with remarkable speed. By combining quantum accuracy with AI-driven speed, we can ascend Jacob’s Ladder faster, designing new materials with novel properties and at a fraction of the cost.

Graph comparing the computational cost of simulation methods, from classical mechanics to quantum FCI. At the base of Jacob’s Ladder are classical models that treat atoms as simple balls connected by springs—fast enough to handle millions of atoms over long times but with the lowest precision. Moving up along the black line, semiempirical methods add some quantum mechanical calculations. Next are approximations based on Hartree-Fock (HF) and density functional theory (DFT), which include full quantum behavior of individual electrons but model their interactions in an averaged way. The greater accuracy requires significant computing power, which limits them to simulating molecules with no more than a few hundred atoms. At the top are coupled-cluster and full configuration interaction (FCI) methods—exquisitely accurate but, at the moment, restricted to tiny molecules or subsets of electrons due to the large computational costs involved. Quantum computing can bend the accuracy-versus-cost curve at the top of Jacob’s Ladder [orange line], making highly accurate calculations feasible for large systems. AI, trained on this quantum-accurate data, can flatten this curve [purple line], enabling rapid predictions for similar systems at a fraction of the cost of classical computing. Source: Microsoft Quantum

In our approach, the base of Jacob’s Ladder still starts with classical models that treat atoms as simple balls connected by springs—models that are fast enough to handle millions of atoms over long times, but with the lowest precision. As we ascend the ladder, some quantum mechanical calculations are added to semiempirical methods. Eventually, we’ll get to the full quantum behavior of individual electrons but with their interactions modeled in an averaged way; this greater accuracy requires significant compute power, which means you can only simulate molecules of no more than a few hundred atoms. At the top will be the most computationally intensive methods—prohibitively expensive on classical computers but tractable on quantum computers.

In the coming years, quantum computing and AI will become critical tools in the pursuit of new materials science and chemistry. When combined, their forces will multiply. We believe that by using quantum computers to train AI on quantum data, the result will be hyperaccurate AI models that can reach ever higher rungs of computational complexity without the prohibitive computational costs.

This powerful combination of quantum computing and AI could unlock unprecedented advances in chemical discovery, materials design, and our understanding of complex reaction mechanisms. Chemical and materials innovations already play a vital—if often invisible—role in our daily lives. These discoveries shape the modern world: new drugs to help treat disease more effectively, improving health and extending life expectancy; everyday products like toothpaste, sunscreen, and cleaning supplies that are safe and effective; cleaner fuels and longer-lasting batteries; improved fertilizers and pesticides to boost global food production; and biodegradable plastics and recyclable materials to shrink our environmental footprint. In short, chemical discovery is a behind-the-scenes force that greatly enhances our everyday lives.


The potential is vast. Anywhere AI is already in use, this new quantum-enhanced AI could drastically improve results. These models could, for instance, scan for previously unknown catalysts that could fix atmospheric carbon and so mitigate climate change. They could discover novel chemical reactions to turn waste plastics into useful raw materials and remove toxic “forever chemicals” from the environment. They could uncover new battery chemistries for safer, more compact energy storage. They could supercharge drug discovery for personalized medicine.

And that would just be the beginning. We believe quantum-enhanced AI will open up new frontiers in materials science and reshape our ability to understand and manipulate matter at its most fundamental level. Here’s how.

How Quantum Computing Will Revolutionize Chemistry

To understand how quantum computing and AI could help bend Jacob’s Ladder, it’s useful to look at the classical approximation techniques that are currently used in chemistry. In atoms and molecules, electrons interact with one another in complex ways called electron correlations. These correlations are crucial for accurately describing chemical systems. Many computational methods, such as density functional theory (DFT) or the Hartree-Fock method, simplify these interactions by replacing the intricate correlations with averaged ones, assuming that each electron moves within an average field created by all other electrons. Such approximations work in many cases, but they can’t provide a full description of the system.

A joint project between Microsoft and Pacific Northwest National Laboratory used AI and high-performance computing to identify potential materials for battery electrolytes. The most promising were synthesized [top and middle] and tested [bottom] at PNNL. Dan DeLong/Microsoft

Electron correlation is particularly important in systems where the electrons are strongly interacting—as in materials with unusual electronic properties, like high-temperature superconductors—or when there are many possible arrangements of electrons with similar energies—such as compounds containing certain metal atoms that are crucial for catalytic processes.

In these cases, the simplified approach of DFT or Hartree-Fock breaks down, and more sophisticated methods are needed. As the number of possible electron configurations increases, we quickly reach an “exponential wall” in computational complexity, beyond which classical methods become infeasible.
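The "exponential wall" can be made concrete: the number of possible electron configurations (Slater determinants) when placing N electrons into M spin-orbitals grows combinatorially, and an exact calculation must span that whole space. A quick back-of-the-envelope sketch (the system sizes here are illustrative, not from the article):

```python
from math import comb

# Count the configurations an exact (full configuration interaction)
# calculation must consider: n_electrons placed into n_orbitals spin-orbitals.
# Doubling the system size blows the count up by many orders of magnitude.
for n_orbitals, n_electrons in [(10, 5), (20, 10), (40, 20), (80, 40)]:
    print(f"{n_electrons} electrons in {n_orbitals} orbitals: "
          f"{comb(n_orbitals, n_electrons):,} configurations")
```

Even at 40 electrons in 80 orbitals, the count exceeds 10²³ configurations, which is why classical methods must approximate.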

Enter the quantum computer. Unlike classical bits, which are either on or off, qubits can exist in superpositions—effectively coexisting in multiple states simultaneously. This should allow them to represent many electron configurations at once, mirroring the complex quantum behavior of correlated electrons. Because quantum computers operate on the same principles as the electron systems they will simulate, they will be able to accurately simulate even strongly correlated systems—where electrons are so interdependent that their behavior must be calculated collectively.

AI’s Role in Advancing Computational Chemistry

At present, even the computationally cheap methods at the bottom of Jacob’s Ladder are slow, and the ones higher up the ladder are slower still. AI models have emerged as powerful accelerators to such calculations because they can serve as emulators that predict simulation outcomes without running the full calculations. The models can speed up the time it takes to solve problems up and down the ladder by orders of magnitude.

This acceleration opens up entirely new scales of scientific exploration. In 2023 and 2024, we collaborated with researchers at Pacific Northwest National Laboratory (PNNL) on using advanced AI models to evaluate over 32 million potential battery materials, looking for safer, cheaper, and more environmentally friendly options. This enormous pool of candidates would have taken about 20 years to explore using traditional methods. And yet, within less than a week, that list was narrowed to 500,000 stable materials and then to 800 highly promising candidates. Throughout the evaluation, the AI models replaced expensive and time-consuming quantum chemistry calculations, in some cases delivering insights half a million times as fast as would otherwise have been the case.

We then used high-performance computing (HPC) to validate the most promising materials with DFT and AI-accelerated molecular dynamics simulations. The PNNL team then spent about nine months synthesizing and testing one of the candidates—a solid-state electrolyte that uses sodium, which is cheap and abundant, and some other materials, with 70 percent less lithium than conventional lithium-ion designs. The team then built a prototype solid-state battery that they tested over a range of temperatures.

This potential battery breakthrough isn’t unique. AI models have also dramatically accelerated research in climate science, fluid dynamics, astrophysics, protein design, and chemical and biological discovery. By replacing traditional simulations that can take days or weeks to run, AI is reshaping the pace and scope of scientific research across disciplines.

However, these AI models are only as good as the quality and diversity of their training data. Whether sourced from high-fidelity simulations or carefully curated experimental results, these data must accurately represent the underlying physical phenomena to ensure reliable predictions. Poor or biased data can lead to misleading outcomes. By contrast, high-quality, diverse datasets—such as those from full-accuracy quantum simulations—enable models to generalize across systems and uncover new scientific insights. This is the promise of using quantum computing for training AI models.

How to Accelerate Chemical Discovery

The real breakthrough will come from strategically combining quantum computing’s and AI’s unique strengths. AI already excels at learning patterns and making rapid predictions. Quantum computers, which are still being scaled up to be practically useful, will excel at capturing electron correlations that classical computers can only approximate. So if you train classical models on quantum-generated data, you’ll get the best of both worlds: the accuracy of quantum delivered at the speed of AI.

As we learned from the Microsoft-PNNL collaboration on electrolytes, AI models alone can greatly speed up chemical discovery. In the future, quantum-accurate AI models will tackle even bigger challenges. Consider the basic discovery process, which we can think of as a funnel. Scientists begin with a vast pool of candidate molecules or materials at the wide-mouthed top, narrowing them down using filters based on desired properties—such as boiling point, conductivity, viscosity, or reactivity. Crucially, the effectiveness of this screening process depends heavily on the accuracy of the models used to predict these properties. Inaccurate predictions can create a “leaky” funnel, where promising candidates are mistakenly discarded or poor ones are mistakenly advanced.
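The funnel described above can be sketched as a chain of property filters. In this toy version, the candidate materials, their properties, and the cutoffs are all invented for illustration; the point is only that prediction accuracy at each filter determines how "leaky" the funnel is:

```python
# A toy screening funnel: start with many candidates, keep only those
# that pass successive predicted-property filters. All values are made up.
candidates = [
    {"name": f"mat_{i}", "stability": i % 7, "conductivity": i % 5}
    for i in range(1000)
]

# First filter: keep only candidates predicted to be stable enough.
stable = [c for c in candidates if c["stability"] >= 5]

# Second filter: of those, keep the ones with high predicted conductivity.
promising = [c for c in stable if c["conductivity"] >= 3]

print(len(candidates), "->", len(stable), "->", len(promising))
```

If the property predictions feeding these filters are inaccurate, good candidates are discarded and poor ones advance, which is exactly the failure mode quantum-accurate models are meant to reduce.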

Quantum-accurate AI models will dramatically improve the precision of chemical-property predictions. They’ll be able to help identify “first-time right” candidates, sending only the most promising molecules to the lab for synthesis and testing—which will save both time and cost.

Another key aspect of the discovery process is understanding the chemical reactions that govern how new substances are formed and behave. Think of these reactions as a network of roads winding through a mountainous landscape, where each road represents a possible reaction step, from starting materials to final products. The outcome of a reaction depends on how quickly it travels down each path, which in turn is determined by the energy barriers along the way—like mountain passes that must be crossed. To find the most efficient route, we need accurate calculations of these barrier heights, so that we can identify the lowest passes and chart the fastest path through the reaction landscape.

Even small errors in estimating these barriers can lead to incorrect predictions about which products will form. Case in point: A slight miscalculation in the energy barrier of an environmental reaction could mean the difference between labeling a compound a “forever chemical” or one that safely degrades over time.
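The sensitivity to barrier errors follows from the Arrhenius relation, in which the reaction rate scales as exp(−Ea/RT). At room temperature, an error of just a few kilojoules per mole changes the predicted rate severalfold. A quick sketch (the error magnitudes are chosen for illustration):

```python
from math import exp

R = 8.314      # molar gas constant, J/(mol*K)
T = 298.15     # room temperature, K

def rate_factor(barrier_error_kj: float) -> float:
    """Multiplicative change in an Arrhenius rate when the energy-barrier
    estimate is off by barrier_error_kj kJ/mol at temperature T."""
    return exp(barrier_error_kj * 1000 / (R * T))

for err in (1, 5, 10):  # kJ/mol
    print(f"{err} kJ/mol barrier error -> rate off by {rate_factor(err):.1f}x")
```

A 5 kJ/mol error, well within the uncertainty of many approximate methods, already skews the predicted rate by roughly a factor of seven, enough to misjudge which reaction pathway dominates.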

Accurate modeling of reaction rates is also essential for designing catalysts—substances that speed up and steer reactions in desired directions. Catalysts are crucial in industrial chemical production, carbon capture, and biological processes, among many other things. Here, too, quantum-accurate AI models can play a transformative role by providing the high-fidelity data needed to predict reaction outcomes and design better catalysts.

Once trained on quantum-accurate data, these AI models will revolutionize computational chemistry by delivering quantum-level precision. And because the models themselves run on classical computers, researchers will be able to run high-accuracy simulations on laptops or desktop computers, rather than relying on massive supercomputers or future quantum hardware. By making advanced chemical modeling more accessible, these tools will democratize discovery and empower a broader community of scientists to tackle some of the most pressing challenges in health, energy, and sustainability.

Remaining Challenges for AI and Quantum Computing

By now, you’re probably wondering: When will this transformative future arrive? It’s true that quantum computers still struggle with error rates and the limited lifetimes of usable qubits. And they still need to scale to the size required for meaningful chemistry simulations. Chemistry simulations beyond the reach of classical computation will require hundreds to thousands of high-quality qubits with error rates of around 10⁻¹⁵, or one error in a quadrillion operations. Achieving this level of reliability will require fault tolerance through redundant encoding of quantum information in logical qubits, each consisting of hundreds of physical qubits, thus requiring a total of about a million physical qubits. Current AI models for chemical-property predictions may not have to be fully redesigned. We expect that it will be sufficient to start with models pretrained on classical data and then fine-tune them with a few results from quantum computers.
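The scaling requirement above is simple arithmetic. The specific counts in this sketch just restate the article's rough estimate (thousands of logical qubits, each encoded in hundreds of physical qubits) and are not precise engineering figures:

```python
# Back-of-the-envelope fault-tolerance overhead, restating the article's
# estimate. The exact numbers per logical qubit depend on the error-
# correcting code and physical error rate; these are illustrative.
logical_qubits = 2000          # "hundreds to thousands" of logical qubits
physical_per_logical = 500     # "hundreds" of physical qubits per logical

total_physical = logical_qubits * physical_per_logical
print(f"~{total_physical:,} physical qubits")  # on the order of a million
```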

Despite some open questions, the potential rewards in terms of scientific understanding and technological breakthroughs make our proposal a compelling direction for the field. The quantum computing industry has begun to move beyond the early noisy prototypes, and high-fidelity quantum computers with low error rates could be possible within a decade.

Realizing the full potential of quantum-enhanced AI for chemical discovery will require focused collaboration between chemists and materials scientists who understand the target problems, experts in quantum computing who are building the hardware, and AI researchers who are developing the algorithms. Done right, quantum-enhanced AI could start to tackle the world’s toughest challenges—from climate change to disease—years ahead of anyone’s expectations.

Reference: https://ift.tt/QsYSFbZ

What Military Drones Can Teach Self-Driving Cars




Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events that significantly disrupt local traffic and can block first responders from doing their jobs. Because self-driving cars cannot reliably handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.

This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.

As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.

While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder to shoulder in tiny trailers with operators flying UAVs in local exercises or from 4,000 miles away, my job was to learn about the pain points for the remote operators, and to identify possible improvements, as they executed supervisory control over UAVs that might be flying halfway around the world.

Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.

The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.

Five Lessons From Military Drone Operations

Over 35 years of UAV operations, the military consistently encountered five major challenges that provide valuable lessons for self-driving cars.

Latency

Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.

In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.

Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After Waymo moved its remote operations center from the United States to the Philippines, latency increased even further. It is imperative that control not be so remote, both to reduce latency and to improve oversight of security vulnerabilities.

Workstation Design

Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.

UAV crashes (1986–2004) caused by human-factors problems, including poor interface and procedure design. The two design categories do not sum to 100 percent because both factors could be present in an accident.

Platform                 Human Factors   Interface Design   Procedure Design
Army Hunter              47%             20%                20%
Army Shadow              21%             80%                40%
Air Force Predator       67%             38%                75%
Air Force Global Hawk    33%             100%               0%

Many UAV crashes have been caused by poorly designed human control systems. In one case, buttons were placed on the controllers such that it was relatively easy to accidentally shut off the engine instead of firing a missile, and in several accidents remote operators did exactly that.

The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.

Operator Workload

Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.

When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.

Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can become quickly overwhelmed.

The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) cause workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult.

Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and be able to reliably predict what staffing should be and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human to pay close attention, such operations would no longer be cost-effective.

Training

Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed to conduct safe remote operations, and changed their training program.

Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications for remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role is very similar to that of self-driving remote operators, are required to complete formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.

Contingency Planning

Aviation has strong protocols for emergencies, including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.

Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.

The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.

Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.

Reference: https://ift.tt/lAPIaOW

Sunday, March 1, 2026

IEEE President’s Note: Engineering a Modern Renaissance




Consider a powerful parallel between the advancements made during the Renaissance and the developments made by today’s engineers.

The Renaissance was a uniquely fertile era. Its ethos of curiosity and creativity fostered unprecedented collaboration across disciplines. Artists, scientists, philosophers, and patrons engaged in a shared pursuit of human potential, beauty, and advancements in art, science, and literature.

But the Renaissance wasn’t just a cultural awakening. It was a systems-level transformation: a convergence of disciplines, minds, and methods that redefined what humanity could achieve. And in many ways, it mirrors the collaborative spirit we strive for within our IEEE communities.

Collaboration Is a Catalyst

During the Renaissance, breakthroughs didn’t happen in isolation. They emerged from intersections of different disciplines. Collaboration was the norm: Artists worked with mathematicians to perfect their creations’ accuracy, and architects consulted astronomers to design buildings that reflected celestial order. It was interdisciplinary design thinking centuries before the concept was given a name.

Who’s up for a Challenge?


The IEEE Impact Challenge, which launched in January, aims not only to address real-world problems through purpose-driven engineering but also to attract new members, foster cross-disciplinary collaboration, and design a better world for all.

The IEEE Future Tech Explorers program invites IEEE members to partner with others to inspire tomorrow’s engineers and technologists by creating interactive educational experiences that spark curiosity and open doors for young minds.

The IEEE Response Quest seeks solutions that enable near-real-time situational awareness for those providing emergency response and relief assistance.

We welcome educators, designers, engineers, and innovators from every technical discipline to come together, collaborate across communities, and demonstrate the power of IEEE when we unite around a shared purpose.

Learn more at the IEEE Impact Challenge website.

It is at the intersections where disciplines and communities meet that the sparks of transformation ignite. The intersection of engineering and medicine gives us lifesaving devices. The intersection of computing and art produces immersive experiences from virtual, augmented, and mixed reality technology that expands human imagination. The intersection of policy and technology ensures ethical innovation. The outcomes of these crossroads remind us that progress is rarely linear. It is woven from the threads of various expertise, perspectives, and values.

When we collaborate across specialties, from electrical and biomedical to aerospace and software, we unlock new possibilities. And when we engage with industry, educators, policymakers, standard developers, and the public, we elevate those possibilities into solutions. We do it together, because no single engineer, technologist, or discipline can solve all the challenges we face.

The Renaissance teaches us that collaboration is a catalyst for advancing society. And so, I ask: What if we are living in a new, modern renaissance?

What if our members are today’s da Vincis, designing systems that serve humanity? What if our volunteers are modern-day patrons, investing time, talent, and heart into building a better world? What if our students and young professionals are the architects of tomorrow’s breakthroughs, fluent in computer code, ethics, and global impact, ready to collaborate across borders, sectors, and disciplines?

What if our conferences, technical standards, and humanitarian technologies are the printing presses of our time, disseminating knowledge, sparking dialogue, and scaling solutions? What if our collective imagination is the canvas upon which the next century of innovation will be painted?

And what if, like the Renaissance, our era is defined not only by invention but also by intersection, where many voices and perspectives converge to shape technologies that reflect humanity’s full spectrum?

Imagine engineers working together with ethicists to ensure responsible AI; with environmental scientists to safeguard our planet; and with local communities to design solutions that solve their challenges. Also imagine engineers partnering with disaster relief agencies to design real-time systems, restore communication networks, and deliver lifesaving technologies when survivors need them most.

So let us think like Renaissance creators. Let us design with empathy and collaborate across boundaries. Let us honor that legacy by not just preserving the past but also by building systems that empower the future for everyone.

When we unite technical excellence with human purpose, we don’t just innovate; we elevate. And in doing so, we carry forward the timeless truth of the Renaissance: Humanity’s greatest achievements are born not from isolation but from intersection and connection.

—Mary Ellen Randall

IEEE president and CEO

Please share your thoughts with me: president@ieee.org.

Reference: https://ift.tt/GWySbJ6

Letting Machines Decide What Matters




In the time it takes you to read this sentence, the Large Hadron Collider (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the Standard Model of particle physics.

For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As Matthew Hutson reports in “AI Hunts for the Next Big Thing in Physics,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there have been no big breakthroughs,” Hutson says. “There are key components of reality we’re completely missing.”

That’s why researchers are turning artificial intelligence loose on particle physics. They aren’t simply asking AI to comb through accelerator data to confirm existing theories, Hutson explains. They’re asking AI to point the way toward theories that they’ve never imagined. “Instead of looking to support theories that humans have generated,” he says, “unsupervised AI can highlight anything out of the ordinary, expanding our reach into unknown unknowns.” By asking AI to flag anomalies in the data, researchers hope to find their way to “new physics” that extends the Standard Model.

On the surface, this article might sound like another “AI for X” story. As IEEE Spectrum’s AI editor, I get a steady stream of pitches for such stories: AI for drug discovery, AI for farming, AI for wildlife tracking. Often what that really means is faster data processing or automation around the edges. Useful, sure, but incremental.

What struck me in Hutson’s reporting is that this effort feels different. Instead of analyzing experimental data after the fact, the AI essentially becomes part of the instrument, scanning for subtle patterns and deciding in real time what’s interesting. At the LHC, detectors record 40 million collisions per second. There’s simply no way to preserve all that data, so engineers have always had to build filters to decide which events get saved for analysis and which are discarded; nearly everything is thrown away.

Now those split-second decisions are increasingly handed to machine learning systems running on field-programmable gate arrays (FPGAs) connected to the detectors. The code must run on the chip’s limited logic and memory, and compressing a neural network into that hardware isn’t easy. Hutson describes one theorist pleading with an engineer, “Which of my algorithms fits on your bloody FPGA?”
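The keep-or-discard decision described above can be sketched in a few lines. This is a toy illustration, not CERN code: real LHC triggers run trained neural networks compressed onto FPGAs, whereas here the "anomaly score" is just the squared distance of an event's feature vector from a made-up baseline of typical events, and the event values and threshold are invented for the example.

```python
def anomaly_score(event, baseline):
    """Squared Euclidean distance between an event and the baseline signature."""
    return sum((e - b) ** 2 for e, b in zip(event, baseline))

def trigger_filter(events, baseline, threshold):
    """Keep only events whose anomaly score exceeds the threshold.

    Everything below the threshold is discarded, mirroring how the vast
    majority of collision data is never written to storage.
    """
    return [ev for ev in events if anomaly_score(ev, baseline) > threshold]

if __name__ == "__main__":
    baseline = [1.0, 0.5, 2.0]          # hypothetical "typical" event signature
    events = [
        [1.1, 0.4, 2.1],                # ordinary: discarded
        [5.0, 3.0, 0.1],                # unusual: kept for offline analysis
        [0.9, 0.6, 1.9],                # ordinary: discarded
    ]
    kept = trigger_filter(events, baseline, threshold=1.0)
    print(len(kept))                    # 1
```

The point of the sketch is the asymmetry: the filter never stores the ordinary events at all, so whatever it fails to flag is gone for good, which is why squeezing a trustworthy model onto the trigger hardware matters so much.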

This moment is part of a much older pattern. As Hutson writes in the article, new instruments have opened doors to the unexpected throughout the history of science. Galileo’s telescope revealed moons circling Jupiter. Early microscopes exposed entire worlds of “animalcules” swimming around. These better tools didn’t just answer existing questions; they made it possible to ask new ones.

If there’s a crisis in particle physics, in other words, it may not just be about missing particles. It’s about how to look beyond the limits of the human imagination. Hutson’s story suggests that AI might not solve the mysteries of the universe outright, but it could change how we search for answers.

Reference: https://ift.tt/2o0PRQy

Saturday, February 28, 2026

Xiangyi Cheng Is Bringing AR to Classrooms and Hospitals




When Xiangyi Cheng published her first journal paper as a principal investigator in IEEE Access in 2024, it marked more than a professional milestone. For Cheng, an IEEE member and an assistant professor of mechanical engineering at Loyola Marymount University, in Los Angeles, it was the latest waypoint in a career shaped by curiosity, persistence, and a belief that technology should serve people—not the other way around.

The paper’s title was “Mobile Devices or Head-Mounted Displays: A Comparative Review and Analysis of Augmented Reality in Healthcare.”

XIANGYI CHENG

Employer: Loyola Marymount University, in Los Angeles

Title: Assistant professor of mechanical engineering

Member grade: Member

Alma maters: China University of Mining and Technology; Texas A&M University

Cheng’s work spans robotics, intelligent systems, human-machine interaction, and artificial intelligence. It has applications in patient-specific surgical planning, an approach whereby treatment is customized to the anatomy and clinical needs of each individual.

Her research also covers wearables for rehabilitation and augmented-reality-enhanced engineering education.

The throughline of her career is sound judgment based on critical thinking. She urges her students to avoid the temptation to accept the answers they’re given by AI without cross-checking them against their own foundational understanding of the subject matter.

“AI can give you ideas,” Cheng says, “but it should never lead your thinking.”

That principle—honed through uncertainty, disciplinary shifts, and hard-earned confidence—has made Cheng an emerging voice in applied intelligent systems and a thoughtful educator preparing students for an AI-saturated world.

From Xi’an to Beijing: A mind drawn to mathematics

Cheng, born in Xi’an, China, grew up in a household shaped by her parents’ disparate careers. Her father was a mining engineer, and her mother taught Chinese and literature at a high school.

“That contrast between logical and literary thinking helped me understand myself early,” Cheng says. “I liked math, and STEM felt natural to me.”

Several teachers reinforced her inclination, she says, particularly a math teacher whose calm, fair approach emphasized reasoning over punishments such as detention for misbehavior or failure to complete assignments.

“It wasn’t about being right,” Cheng says. “It was about thinking clearly.”

She moved to Beijing in 2011 to attend the China University of Mining and Technology, where she studied mechanical engineering. After graduating with a bachelor’s degree in 2015, she was unsure where the field would take her.

An IEEE paper changed her trajectory

Later in 2015, she traveled to the United States to study at Case Western Reserve University, in Cleveland.

She initially viewed the move as exploratory rather than a long-term commitment.

“I wasn’t thinking about a Ph.D.,” she says. “I wasn’t even sure research was for me.”

That uncertainty shifted in 2017, when Cheng submitted her paper “IntuBot: Design and Prototyping of a Robotic Intubation Device” to the IEEE International Conference on Robotics and Automation (ICRA), and it was accepted.

“AI can give you more possibilities, but thinking is still our responsibility.”

Intubation is a procedure in which an endotracheal tube is inserted into a patient’s airway—usually through the mouth—to help them breathe. Because placing the tube correctly is not simple and usually must be done quickly, it requires training. That’s why research into robotic or assisted intubation systems focuses on improving speed, accuracy, and safety.

She presented her findings at ICRA in 2018, giving her early exposure to a global research community.

“That acceptance gave me confidence,” she recalls. “It showed me I could contribute to the field.”

Her advisor at Case Western encouraged her to switch from the mechanical engineering master’s program to the Ph.D. track. When the advisor moved to Texas A&M University, in College Station, in 2019, Cheng decided to transfer. She completed her Ph.D. in mechanical engineering at Texas A&M in 2022.

Although she didn’t earn a degree from Case Western, she credits her experience there with clarifying her professional direction.

Shortly after graduating with her Ph.D., Cheng was hired as an assistant professor of mechanical engineering at Ohio Northern University, in Ada. She left in 2024 to become an assistant professor at Loyola Marymount.

Engineering for the body—and the classroom

Cheng’s research focuses on human-centered engineering, particularly in health care. One of her major projects addresses syndactyly, a congenital condition in which a newborn’s fingers are fused. Surgeons rely on their experience to estimate the size and shape of skin grafts to be taken from another part of the body for the corrective surgery.

She is developing technology to scan the patient’s hand, extract anatomical landmarks, and use finite element analysis—a computer-based method for predicting how a physical object will behave under real-world conditions—to determine the optimal graft size and shape.

Xiangyi Cheng designs human-centered intelligent systems with applications in health care and education.

“Everyone’s hand is different,” Cheng says. “So the surgery should be personalized.”

Another project centers on developing smart gloves to assist with hand rehabilitation, pairing the unaffected hand with the injured one so the person’s natural motion can help guide therapy.

She also is exploring augmented reality in engineering education, using immersive visualization and AI tools to help students grasp three-dimensional concepts that are difficult to convey through traditional learning tools. Such visualization lets students see and interact with a digital world as if they’re inside it instead of viewing it on a flat screen.

Teaching balance in an AI-driven world

Despite working at the forefront of AI-enabled systems, Cheng cautions her students to be judicious in their use of the technology so that they don’t rely on it too heavily.

“AI is not always right and perfect,” she says. “You still need to be able to judge whether the answers it provides are correct.”

As AI continues to reshape engineering, Cheng remains grounded in a simple principle, she says: “We should use these tools. But we should never let them replace our judgment. AI can give you more possibilities, but thinking is still our responsibility.”

In her lab and classroom, Cheng prioritizes independent thinking, critical evaluation, and persistence. Many of her research students are undergraduates, and she encourages them to take ownership of their work—planning ahead, testing ideas, and learning from failure.

“The students who succeed don’t give up easily,” she says.

What she finds most rewarding, she says, is watching students mature. Reserved first-year students often become confident seniors who can present complex work and manage demanding projects.

“Getting to witness that transformation is why I teach,” she says.

For students considering engineering, Cheng offers straightforward advice: “Focus on mathematics. Engineering looks hands-on, but math is the foundation behind everything.”

With practice and persistence, she says, students can succeed and find meaning in the field.

Why IEEE continues to matter

Cheng joined IEEE in 2017, the year she submitted her first paper to ICRA. The organization has remained central to her professional development, she says.

She has served as a reviewer for IEEE journals and conferences including Robotics and Automation Letters, Transactions on Medical Robotics and Bionics, Transactions on Robotics, the International Conference on Intelligent Robots and Systems, and ICRA.

IEEE’s interdisciplinary scope aligns naturally with her work, she says, adding that the organization is “one of the few places that truly welcomes research across boundaries.”

More personally, IEEE helped her see a future she had not initially imagined.

“That first conference was a turning point,” she says. “It helped me realize I belonged.”

Reference: https://ift.tt/Empdz9V
