Tuesday, January 13, 2026

The RAM shortage’s silver lining: Less talk about “AI PCs”


RAM prices have soared, which is bad news for people interested in buying, building, or upgrading a computer this year, but it's likely good news for people exasperated by talk of so-called AI PCs.

As Ars Technica has reported, the growing demands of data centers, fueled by the AI boom, have led to a shortage of RAM and flash memory chips, sending prices skyrocketing.

In an announcement today, Ben Yeh, principal analyst at technology research firm Omdia, said that in 2025, “mainstream PC memory and storage costs rose by 40 percent to 70 percent, resulting in cost increases being passed through to customers.”

Reference: https://ift.tt/sbzkPg6

Hegseth wants to integrate Musk’s Grok AI into military networks this month


On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk's AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place "the world's leading AI models on every unclassified and classified network throughout our department."

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth's announced timeline or implementation details.

During the same appearance, Hegseth rolled out what he called an "AI acceleration strategy" for the Department of Defense. The strategy, he said, will "unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future."

Reference: https://ift.tt/gYieB9W

Microsoft vows to cover full power costs for energy-hungry AI data centers


On Tuesday, Microsoft announced a new initiative called "Community-First AI Infrastructure" that commits the company to paying full electricity costs for its data centers and refusing to seek local property tax reductions.

As demand for generative AI services has increased over the past year, Big Tech companies have been racing to spin up massive new data centers for serving chatbots and image generators, facilities that can have profound economic effects on the surrounding areas. Among other issues, communities across the country have grown concerned that data centers are driving up residential electricity rates through heavy power consumption and straining water supplies with their server cooling needs.

The International Energy Agency (IEA) projects that global data center electricity demand will more than double by 2030, reaching around 945 TWh, with the United States responsible for nearly half of total electricity demand growth over that period. This growth is happening while much of the country's electricity transmission infrastructure is more than 40 years old and under strain.

Reference: https://ift.tt/FYdLSZQ

Meet the Two Members Petitioning to Be President-Elect Candidates




The IEEE Board of Directors has received petition intentions from IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee as candidates for 2027 IEEE president-elect. The petitioners are listed in alphabetical order; the ordering indicates no preference.

The winner of this year’s election will serve as IEEE president in 2028. For more information about the petitioners and Board-nominated candidates, visit ieee.org/pe27. You can sign their petitions at ieee.org/petition.

Signatures for IEEE president-elect candidate petitions are due 10 April at 12:00 p.m. EDT (16:00 UTC).

IEEE Senior Member Gerardo Barbosa

Barbosa is an expert in information technology management and technology commercialization, with a career spanning innovation, entrepreneurship, and an international perspective. He began his career designing radio-frequency identification systems for real-time asset tracking and inventory management. In 2014 he founded CLOUDCOM, a software company that develops enterprise software to improve businesses’ billing and logistics operations, and serves as its CEO.

Barbosa’s IEEE journey began in 2009 at the IEEE Monterrey (Mexico) Section, where he served as chair and treasurer. He led grassroots initiatives with students and young professionals. His leadership positions in IEEE Region 9 include technical activities chair and treasurer.

As the 2019–2020 vice chair and 2021–2023 treasurer of IEEE Member and Geographic Activities, Barbosa became recognized as a trusted, data-driven, and collaborative leader.

He has been a member of the IEEE Finance Committee since 2021 and is now its chair due to his role as IEEE treasurer on the IEEE Board of Directors. He is deeply committed to the responsible stewardship of IEEE’s global resources, ensuring long-term financial sustainability in service of IEEE’s mission.

IEEE Life Senior Member Timothy T. Lee

Lee is a Technical Fellow at Boeing in Southern California with expertise in microelectronics and advanced 2.5D and 3D chip packaging for AI workloads, 5G, and SATCOM systems for aerospace platforms. He leads R&D projects, including work funded by the Defense Advanced Research Projects Agency. He previously held leadership roles at MACOM Technology Solutions and COMSAT Laboratories.

Lee was the 2015 president of the IEEE Microwave Theory and Technology Society. He has served on the IEEE Board of Directors as 2025 IEEE-USA president and 2021–2022 IEEE Region 6 director. He has also been a member of several IEEE committees including Future Directions, Industry Engagement, and New Initiatives.

His vision is to deliver societal value through trust, integrity, ownership, innovation, and customer focus, while strengthening the IEEE member experience. Lee also wants to work to prepare members for AI-enabled work in the future.

He earned his bachelor’s degree in electrical engineering from MIT and a master’s degree in systems architecting and engineering from the University of Southern California in Los Angeles.

Reference: https://ift.tt/35v9zOZ

This $4,500 Conductive Suit Could Make Power-Line Work Safer




In 2018, Justin Kropp was restoring a fire-damaged transmission circuit in Southern California when disaster struck. Grid operators had earlier shut down the 115-kilovolt circuit, but six high-voltage lines that shared the corridor were still operating, and some of their power snuck onto the de-energized wires he was working on. That rogue current shot to the ground through Kropp’s body and his elevated work platform, killing the 32-year-old father of two.

“It went in both of his hands and came out his stomach where he was leaning against the platform rail,” says Justin’s father, Barry Kropp, who is himself a retired line worker. “Justin got hung up on the wire. When they finally got him on the ground, it was too late.”

Budapest-based Electrostatics makes conductive suits that protect line workers from unexpected current. Electrostatics

Justin’s accident was caused by induction: a hazard that occurs when an electric or magnetic field causes current to flow through equipment whose intended power supply has been cut off. Safety practices seek to prevent such induction shocks by grounding all conductive objects in a work zone, giving electricity alternative paths. But accidents happen. In Justin’s case, his platform unexpectedly swung into the line before it could be grounded.

Conductive Suits Protect Line Workers

Adding a layer of defense against induction injuries is the motivation behind Budapest-based Electrostatics’ specialized conductive jumpsuits, which are designed to protect against burns, cardiac fibrillation, and other ills. “If my boy had been wearing one, I know he’d be alive today,” says the elder Kropp, who purchased a line-worker safety training business after Justin’s death. The Mesa, Ariz.–based company, Electrical Safety Consulting International (ESCI), now distributes those suits.

Conductive socks connected to the trousers complete the protective suit. BME HVL

Eduardo Ramirez Bettoni, one of the developers of the suits, dug into induction risk after a series of major accidents in the United States in 2017 and 2018, including Justin Kropp’s. At the time, he was principal engineer for transmission and substation standards at Minneapolis-based Xcel Energy. In talking to Xcel line workers and fellow safety engineers, he sensed that the accident cluster might be the tip of an iceberg. And when he and two industry colleagues scoured data from the U.S. Bureau of Labor Statistics, they found 81 induction accidents between 1985 and 2021 and 60 deaths, which they documented in a 2022 report.

“Unfortunately, it is really common. I would say there are hundreds of induction contacts every year in the United States alone,” says Ramirez Bettoni, who is now technical director of R&D for the Houston-based power-distribution equipment firm Powell Industries. He bets that such “contacts”—exposures to dangerous levels of induction—are increasing as grid operators boost grid capacity by squeezing additional circuits into transmission corridors.


Electrostatics’ suits are an enhancement of the standard protective gear that line workers wear when their tasks involve working close to or even touching energized lines, known as "bare-hands" work. Both are interwoven with conductive materials such as stainless steel threads, which form a Faraday cage that shields the wearer against the lines’ electric fields. But the standard suits have limited capacity to shunt current, because usually they don’t need to. Like a bird on a wire, bare-hands workers are electrically floating, rather than grounded, so current largely bypasses them via the line itself.

Induction Safety Suit Design

Backed by a US $250,000 investment from Xcel in 2019, Electrostatics adapted its standard suits by adding low-resistance conductive straps that pass current around a worker’s body. “When I’m touching a conductor with one hand and the other hand is grounded, the current will flow through the straps to get out,” says Bálint Németh, Electrostatics’ CEO and director of the High Voltage Laboratory at Budapest University of Technology and Economics.

A strapping system links all the elements of the suit—the jacket, trousers, gloves, and socks—and guides current through a controlled path outside the body. BME HVL

The company began selling the suits in 2023 and they have since been adopted by over a dozen transmission operators in the United States and Europe, as well as other countries including Canada, Indonesia, and Turkey. They cost about $4,500 in the United States.

Electrostatics’ suits had to meet a crucial design requirement: keeping current through the body below the 6-milliampere "let-go" threshold, beyond which shocked workers become unable to remove themselves from a circuit. "If you lose control of your muscles, you’re going to hold onto the conductor until you pass out or possibly die," says Ramirez Bettoni.

The gear, which includes the suit, gloves, and socks, protects against 100 amperes for 10 seconds and 50 A for 30 seconds. It also has insulation to protect against heat created by high current and flame retardants to protect against electric arcs.
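
To see why routing current through a low-resistance bypass keeps the wearer below the let-go threshold, here is a minimal current-divider sketch. The body and strap resistances are assumed round numbers for illustration, not published Electrostatics specifications.

# Current divider: induced current splits between the body and the conductive straps
# in inverse proportion to their resistances. All values below are illustrative assumptions.
R_body = 1000.0   # ohms, rough hand-to-hand resistance of a human body
R_strap = 0.01    # ohms, assumed resistance of the strap bypass path
I_total = 100.0   # amperes, the suit's rated 10-second current

I_body = I_total * R_strap / (R_strap + R_body)
print(f"Current through the body: {I_body * 1000:.2f} mA")  # ~1 mA, well under the 6 mA let-go threshold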

Kropp, Németh, and Ramirez Bettoni are hoping that emerging industry standards for induction safety gear, including ones published in October, will broaden adoption of the suits. Meanwhile, the recently enacted Justin Kropp Safety Act in California, for which the elder Kropp lobbied, mandates automated defibrillators at power-line work sites.

Reference: https://ift.tt/2OFhpBJ

Monday, January 12, 2026

Google removes some AI health summaries after investigation finds “dangerous” flaws


On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found people were being put at risk by false and misleading information. The removals came after the newspaper found that Google's generative AI feature delivered inaccurate health information at the top of search results, potentially leading seriously ill patients to mistakenly conclude they are in good health.

Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: The AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google only deactivated the summaries for the liver test queries, leaving other potentially harmful answers accessible.

The investigation revealed that searching for liver test norms generated raw data tables (listing specific enzymes like ALT, AST, and alkaline phosphatase) that lacked essential context. The AI feature also failed to adjust these figures for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model's definition of "normal" often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip necessary follow-up care.

Reference: https://ift.tt/LAsYFPT

Researchers Beam Power From a Moving Airplane




On a blustery November day, a Cessna turboprop flew over Pennsylvania at 5,000 meters, in crosswinds of up to 70 knots—nearly as fast as the little plane was flying. But the bumpy conditions didn’t thwart its mission: to wirelessly beam power down to receivers on the ground as it flew by.

The test flight marked the first time power has been beamed from a moving aircraft. It was conducted by the Ashburn, Virginia-based startup Overview Energy, which emerged from stealth mode in December by announcing the feat.

But the greater purpose of the flight was to demonstrate the feasibility of a much grander ambition: to beam power from space to Earth. Overview plans to launch satellites into geosynchronous orbit (GEO) to collect unfiltered solar energy where the sun never sets and then beam this abundance back to humanity. The solar energy would be transferred as near-infrared waves and received by existing solar panels on the ground.

The far-flung strategy, known as space-based solar power, has become the subject of both daydreaming and serious research over the past decade. Caltech’s Space Solar Power Project launched a demonstration mission in 2023 that transferred power in space using microwaves. And terrestrial power beaming is coming along too. The U.S. Defense Advanced Research Projects Agency (DARPA) in July 2025 set a new record for wirelessly transmitting power: 800 watts over 8.6 kilometers for 30 seconds using a laser beam.

But until November, no one had actively beamed power from a moving platform to a ground receiver.

Wireless Power Beaming Goes Airborne

Overview’s test transferred only a sprinkling of power, but it did so with the same components and techniques that the company plans to send to space. “Not only is it the first optical power beaming from a moving platform at any substantial range or power,” says Overview CEO Marc Berte, “but also it’s the first time anyone’s really done a power beaming thing where it’s all of the functional pieces all working together. It’s the same methodology and function that we will take to space and scale up in the long term.”

The approach was compelling enough that power beaming expert Paul Jaffe left his job as a program manager at DARPA to join the company as head of systems engineering. Prior to DARPA, Jaffe spent three decades with the U.S. Naval Research Laboratory.

It was hearing Berte explain Overview’s plan at a conference that helped to convince Jaffe to take a chance on the startup. “This actually sounds like it could work,” Jaffe remembers thinking at the time. “It really seems like it gets around a lot of the showstoppers for a lot of the other concepts. I remember coming home and telling my wife that I almost felt like the problem had been solved. So I thought: Should [I] do something which is almost unheard of—to leave in the middle of being a DARPA program manager—to try to do something else?”

For Jaffe, the most compelling reason was Overview’s solution to space-based solar’s power-density problem. A beam with low power density is safer because it’s not blasting too much concentrated energy onto a single spot on Earth’s surface, but it’s less efficient at delivering usable solar energy. A higher-density beam does the job better, but then the researchers must engineer some way to maintain safety.
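
To put the power-density trade-off in rough numbers, the sketch below computes the intensity at the receiver for a few beam spot sizes and compares it with midday sunlight (roughly 1,000 watts per square meter). The delivered power and spot radii are illustrative assumptions, not Overview figures.

import math

P_delivered = 1e6   # watts; an assumed 1-megawatt delivery target
sunlight = 1000.0   # W/m^2, approximate midday solar irradiance at the ground

for radius_m in (50, 200, 1000):                 # assumed beam spot radii at the receiver
    area_m2 = math.pi * radius_m ** 2
    density = P_delivered / area_m2              # W/m^2 arriving at the receiver
    print(f"spot radius {radius_m:4d} m -> {density:7.2f} W/m^2 "
          f"({density / sunlight:.3f}x sunlight)")

Spreading the same power over a larger spot quickly drops the intensity far below sunlight, which is safer but demands a very large receiver to collect useful energy.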

Startup Overview Energy demonstrates how space-based solar power could be beamed to Earth from satellites

Space-Based Solar Power Makes Waves

Many researchers have settled on microwaves as their beam of choice for wireless power. But, in addition to the safety concerns about shooting such intense waves at the Earth, Jaffe says there’s another problem: microwaves are part of what he calls the “beachfront property” of the electromagnetic spectrum—a range from 2 to 20 gigahertz that is set aside for many other applications, such as 5G cellular networks.

“The fact is,” Jaffe says, “if you somehow magically had a fully operational solar power satellite that used microwave power transmission in orbit today—and a multi-kilometer-scale microwave power satellite receiver on the ground magically in place today—you could not turn it on because the spectrum is not allocated to do this kind of transmission.”

Instead, Overview plans to use less-dense, wide-field infrared waves. Existing utility-scale solar farms would be able to receive the beamed energy just like they receive the sun’s energy during daylight hours. So “your receivers are already built,” Berte says. The next major step is a prototype demonstrator for low Earth orbit, after which he hopes to have GEO satellites beaming megawatts of power by 2030 and gigawatts by later that decade.

Doubts about the feasibility of space-based solar power abound. It is an exotic technology with much left to prove, including the ability to survive orbital debris and the exorbitant cost of launching the power stations. (Overview’s satellite will be built on Earth in a folded configuration and will unfold after it’s brought to orbit, according to the company.)

“Getting down the cost per unit mass for launch is a big deal,” Jaffe says. “Then, it just becomes a question of increasing the specific power. A lot of the technologies we’re working on at Overview are squarely focused on that.”

Reference: https://ift.tt/FCAKVzX

Sunday, January 11, 2026

Chilean Telescope Array Gets 145 New Powerful Amplifiers




For decades, scientists have observed the cosmos with radio antennas to visualize the dark, distant regions of the universe. This includes the gas and dust of the interstellar medium, planet-forming disks, and objects that cannot be observed in visible light. In this field, the Atacama Large Millimeter/Submillimeter Array (ALMA) in Chile stands out as one of the world’s most powerful radio telescopes. Using its 66 parabolic antennas, ALMA observes the millimeter and sub-millimeter radiation emitted by cold molecular clouds from which new stars are born.

This post originally appeared on Universe Today.

Each antenna is equipped with high-frequency receivers covering ten wavelength ranges, from 35 to 50 gigahertz (Band 1) up to 787 to 950 GHz (Band 10). Thanks to the Fraunhofer Institute for Applied Solid State Physics (IAF) and the Max Planck Institute for Radio Astronomy, ALMA has received an upgrade with the addition of 145 new low-noise amplifiers (LNAs). These amplifiers are part of the facility’s Band 2 coverage, ranging from 67 to 116 GHz on the electromagnetic spectrum. This additional coverage will allow researchers to gain a better understanding of the universe.

In particular, they hope to gain new insights into the “cold interstellar medium”: The dust, gas, radiation, and magnetic fields from which stars are born. In addition, scientists will be able to study planet-forming disks in better detail. Last, but certainly not least, they will be able to study complex organic molecules in nearby galaxies, which are considered precursors to the building blocks of life. In short, these studies will allow astronomers and cosmologists to witness how stars and planetary systems form and evolve, and how the presence of organic molecules can lead to the emergence of life.

Advanced Amplifiers Enhance ALMA Sensitivity

Each LNA includes a series of monolithic microwave integrated circuits (MMICs) developed by Fraunhofer IAF using the semiconducting material indium gallium arsenide. The MMICs are based on metamorphic high-electron-mobility transistor technology, a method for creating advanced transistors that are flexible and allow for optimized performance in high-frequency receivers. LNAs equipped with these circuits amplify faint signals while adding very little noise of their own, dramatically increasing the sensitivity of ALMA’s receivers.

Fabian Thome, head of the subproject at Fraunhofer IAF, explained in an IAF press release:

The performance of receivers depends largely on the performance of the first high-frequency amplifiers installed in them. Our technology is characterized by an average noise temperature of 22 K, which is unmatched worldwide. With the new LNAs, signals can be amplified more than 300-fold in the first step. This enables the ALMA receivers to measure millimeter and submillimeter radiation from the depths of the universe much more precisely and obtain better data. We are incredibly proud that our LNA technology is helping us to better understand the origins of stars and entire galaxies.
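
The reason the first amplifier matters so much is the standard Friis cascade formula: noise contributed by each later stage is divided by the total gain ahead of it. Here is a minimal sketch using the quoted 22 K noise temperature and roughly 300-fold first-stage gain, with a deliberately noisy second stage whose values are assumed for illustration.

# Friis formula for cascaded noise temperature:
#   T_system = T1 + T2/G1 + T3/(G1*G2) + ...
T1, G1 = 22.0, 300.0    # first-stage LNA: 22 K noise temperature, ~300x gain (quoted)
T2 = 500.0              # second stage: assumed, much noisier than the LNA

T_system = T1 + T2 / G1  # third and later stages contribute even less
print(f"System noise temperature: {T_system:.1f} K")  # ~23.7 K, dominated by the first LNA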

Both Fraunhofer IAF and the Max Planck Institute for Radio Astronomy were commissioned by the European Southern Observatory to provide the amplifiers. While Fraunhofer IAF was responsible for designing, manufacturing, and testing the MMICs at room temperature, Max Planck was tasked with assembling and qualifying the LNA modules, then testing them in cryogenic conditions. “This is a wonderful recognition of our fantastic collaboration with Fraunhofer IAF, which shows that our amplifiers are not only ‘made in Germany’ but also the best in the world,” said Michael Kramer, executive director at the Max Planck Institute for Radio Astronomy.

Reference: https://ift.tt/OPQcwsT

Saturday, January 10, 2026

Nvidia’s New Rubin Architecture Thrives on Networking




Earlier this week, Nvidia surprise-announced their new Vera Rubin architecture (no relation to the recently unveiled telescope) at the Consumer Electronics Show in Las Vegas. The new platform, set to reach customers later this year, is advertised to offer a ten-fold reduction in inference costs and a four-fold reduction in how many GPUs it would take to train certain models, as compared to Nvidia’s Blackwell architecture.

The usual suspect for improved performance is the GPU. Indeed, the new Rubin GPU boasts 50 quadrillion floating-point operations per second (petaFLOPS) of 4-bit computation, as compared to 10 petaFLOPS on Blackwell, at least for transformer-based inference workloads like large language models.

However, focusing on just the GPU misses the bigger picture. There are a total of six new chips in the Vera-Rubin-based computers: the Vera CPU, the Rubin GPU, and four distinct networking chips. To achieve performance advantages, the components have to work in concert, says Gilad Shainer, senior vice president of networking at Nvidia.

“The same unit connected in a different way will deliver a completely different level of performance,” Shainer says. “That’s why we call it extreme co-design.”

Expanded “in-network compute”

AI workloads, both training and inference, run on large numbers of GPUs simultaneously. “Two years back, inferencing was mainly run on a single GPU, a single box, a single server,” Shainer says. “Right now, inferencing is becoming distributed, and it’s not just in a rack. It’s going to go across racks.”

To accommodate these hugely distributed tasks, as many GPUs as possible need to effectively work as one. This is the aim of the so-called scale-up network: the connection of GPUs within a single rack. Nvidia handles this connection with its NVLink networking chip. The new line includes the NVLink6 switch, with double the bandwidth of the previous version (3,600 gigabytes per second for GPU-to-GPU connections, as compared to 1,800 GB/s for the NVLink5 switch).

In addition to doubling the bandwidth, the scale-up chips include twice as many SerDes (serializer/deserializers, which allow data to be sent across fewer wires) and support an expanded set of calculations that can be done within the network.

“The scale-up network is not really the network itself,” Shainer says. “It’s computing infrastructure, and some of the computing operations are done on the network…on the switch.”

The rationale for offloading some operations from the GPUs to the network is twofold. First, it allows some tasks to be done only once, rather than having every GPU perform them. A common example of this is the all-reduce operation in AI training. During training, each GPU computes a mathematical operation called a gradient on its own batch of data. In order to train the model correctly, all the GPUs need to know the average gradient computed across all batches. Rather than each GPU sending its gradient to every other GPU, and every one of them computing the average, it saves computational time and power for that operation to happen only once, within the network.
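
Here is a minimal sketch of that averaging step, with NumPy arrays standing in for GPUs and a single function standing in for the switch; it illustrates the all-reduce idea only, not Nvidia's implementation.

import numpy as np

num_gpus = 4
rng = np.random.default_rng(0)
# Each "GPU" computes a gradient on its own batch of data
local_gradients = [rng.standard_normal(5) for _ in range(num_gpus)]

def in_network_all_reduce(gradients):
    # Stand-in for the switch: compute the average once and broadcast it
    return np.mean(gradients, axis=0)

avg = in_network_all_reduce(local_gradients)
# Every GPU now applies the same averaged gradient
print("Averaged gradient shared with all GPUs:", np.round(avg, 3))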

A second rationale is to hide the time it takes to shuttle data in-between GPUs by doing computations on them en-route. Shainer explains this via an analogy of a pizza parlor trying to speed up the time it takes to deliver an order. “What can you do if you had more ovens or more workers? It doesn’t help you; you can make more pizzas, but the time for a single pizza is going to stay the same. Alternatively, if you would take the oven and put it in a car, so I’m going to bake the pizza while traveling to you, this is where I save time. This is what we do.”

In-network computing is not new to this iteration of Nvidia’s architecture. In fact, it has been in common use since around 2016. But, this iteration adds a broader swath of computations that can be done within the network to accommodate different workloads and different numerical formats, Shainer says.

Scaling out and across

The rest of the networking chips included in the Rubin architecture comprise the so-called scale-out network. This is the part that connects different racks to each other within the data center.

Those chips are the ConnectX-9, a networking interface card; the BlueField-4, a so-called data processing unit, which is paired with two Vera CPUs and a ConnectX-9 card for offloading networking, storage, and security tasks; and finally the Spectrum-6 Ethernet switch, which uses co-packaged optics to send data between racks. The Ethernet switch also doubles the bandwidth of the previous generation, while minimizing jitter—the variation in arrival times of information packets.

“Scale-out infrastructure needs to make sure that those GPUs can communicate well in order to run a distributed computing workload and that means I need a network that has no jitter in it,” he says. The presence of jitter implies that if different racks are doing different parts of the calculation, the answer from each will arrive at different times. One rack will always be slower than the rest, and the rest of the racks, full of costly equipment, sit idle while waiting for that last packet. “Jitter means losing money,” Shainer says.

None of Nvidia’s host of new chips is specifically dedicated to connecting data centers to one another, termed "scale-across." But Shainer argues this is the next frontier. "It doesn’t stop here, because we are seeing the demands to increase the number of GPUs in a data center," he says. "100,000 GPUs is not enough anymore for some workloads, and now we need to connect multiple data centers together."

Reference: https://ift.tt/Yl1UX2R

Friday, January 9, 2026

Sena Kizildemir Simulates Disasters to Prevent Building Collapses




When two airplanes hit the World Trade Center in New York City on 11 September 2001, no one could predict how the Twin Towers would react structurally. The commercial jet airliners severed columns and started fires that weakened steel beams, causing a progressive “pancaking” collapse.

Skyscrapers had not been designed or constructed with that kind of catastrophic structural failure in mind. IEEE Senior Member Sena Kizildemir is changing that through disaster simulation, one scenario at a time.

Sena Kizildemir

Employer: Thornton Tomasetti, in New York City

Job title: Project engineer

Member grade: Senior member

Alma maters: Işik University in Şile and Lehigh University, in Bethlehem, Pa.

A project engineer at Thornton Tomasetti’s applied science division in New York, Kizildemir uses simulations to study how buildings fail under extreme events such as impacts and explosions. The simulation results can help designers develop mitigation strategies.

“Simulations help us understand what could happen before it occurs in real life,” she says, “to be able to better plan for it.”

She loves that her work mixes creativity with solving real-world problems, she says: “You’re creating something to help people. My favorite question to answer is, ‘Can you make this better or easier?’”

For her work, the nonprofit Professional Women in Construction named her one of its 20 Under 40: Women in Construction for 2025.

Kizildemir is passionate about mentoring young engineers and being an IEEE volunteer. She says she has made it her mission to “pack as much impact into my years as possible.”

A bright student in Türkiye

She was born in Istanbul to a father who is a professional drummer and a mother who worked in magazine advertising and sales. Kizildemir and her older brother pursued engineering careers despite neither parent being involved in the field. While she became an expert in civil and mechanical engineering, her brother is an industrial engineer.

As a child, she was full of curiosity, she says, interested in figuring out how things were built and how they worked. She loved building objects out of Legos, she says, and one of her earliest memories is using them to make miniature houses for ants.

After acing an entrance exam, she won a spot in a STEM-focused high school, where she studied mathematics and physics.

During her final year of high school, she took the nationwide YKS (Higher Education Institutions Examination). The test determines which universities and programs—such as medicine, engineering, or law—students can pursue.

She received a full scholarship to attend Işik University in Şile. Figuring she would study engineering abroad one day, she chose an English-taught program. She says she found that civil engineering best aligned with making the biggest impact on her community and the world.

Several of her professors were alumni of Lehigh University, in Bethlehem, Pa., and spoke highly of the school. After earning her bachelor’s degree in civil engineering in 2016, she decided to attend Lehigh, where she earned a full scholarship to its master’s program in civil engineering.

Moving abroad and working the rails

Her master’s thesis focused on investigating root causes of crack propagation, which threatens railroad safety.

Repeated wheel-rail loading causes metal fatigue, which leads to microcracks, while residual stress results from the specialized heating and cooling treatments used in manufacturing steel rails. Cracks can develop beneath the rail’s surface. Because they’re invisible to the naked eye, such fractures are challenging to detect, Kizildemir says.

The project was done in collaboration with the U.S. Federal Railroad Administration—part of the Department of Transportation—which is looking to adjust technical standards and employ mitigation strategies.

Kizildemir and five colleagues designed and implemented testing protocols and physics-based simulations to detect cracks earlier and prevent their spread. Their research has given the Railroad Administration insights into structural defects that are being used to revise rail-building guidelines and inspection protocols. The administration published the first phase of the research in 2024.

After graduating in 2018, Kizildemir began a summer internship as a civil engineer at Thornton Tomasetti. She conducted computational modeling using Abaqus software for rails subjected to repeated plastic deformation—material that permanently changes shape when under excessive stress—and presented her recommendations for improvement to the company’s management.

During her internship, she worked with professors in different fields, including materials behavior and mechanical engineering. The experience, she says, inspired her to pursue a Ph.D. in mechanical engineering at Lehigh, continuing her research with the Railroad Administration. She earned her degree in 2023.

She loved the work and the team at Thornton Tomasetti so much, she says, that she applied to work at the company, where she is now a project engineer.

From simulations to real-world applications

Her work focuses on developing finite element models for critical infrastructure and extreme events.

Finite element modeling breaks a complex system or structure into small elements connected at shared nodes so its real-world behavior can be simulated numerically. She creates computational models of structures enduring realistic catastrophic events, such as a vehicle crashing into a building.
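
As a toy illustration of the finite element idea (unrelated to Thornton Tomasetti's actual models), the sketch below splits a bar under an end load into a few spring-like elements, assembles their stiffness contributions into a single system, and solves for the displacements at the shared nodes. All material and load values are made up for the example.

import numpy as np

# 1D bar fixed at the left end, pulled by a force at the right end,
# discretized into n equal elements that behave like springs in series.
E, A, L = 200e9, 0.01, 2.0   # Young's modulus (Pa), cross-section (m^2), length (m) -- illustrative
F = 1e5                      # applied end load (N)
n = 4                        # number of elements
k = E * A / (L / n)          # stiffness of each element

K = np.zeros((n + 1, n + 1))  # global stiffness matrix over the n+1 nodes
for e in range(n):
    K[e:e + 2, e:e + 2] += k * np.array([[1, -1], [-1, 1]])

f = np.zeros(n + 1)
f[-1] = F                     # load applied at the free end

# Apply the fixed boundary condition at node 0, then solve K u = f
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print("Nodal displacements (m):", u)  # grows linearly toward the loaded end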

She uses simulations to understand how buildings react to attacks such as the one on 9/11, which, she says, is often used as an example of why such research is essential.

When starting a project, she and her team review building standards and try to identify new issues not yet covered by them. The team then adapts existing codes and standards, usually developed for well-understood hazards such as earthquakes, wind, and floods, to define simulation parameters.

When a new structure is being built, for example, it is not designed to withstand a truck crashing into it. But Kizildemir and her team want to know how the building would react should that happen. They simulate the environments and situations, and they make recommendations based on the results to reduce or eliminate risks of structural failure.

Mitigation suggestions include specific strategies to be implemented during project design and construction.

Simulations can be created for any infrastructure, Kizildemir says.

“I love problems that force me to think differently,” she says. “I want to keep growing.”

She says she plans to live by Thornton Tomasetti’s internal motto: “When others say no, we say ‘Here’s how.’”

Joining IEEE and getting more involved

When Kizildemir first heard of IEEE, she assumed it was only for electrical engineers. But after learning how diverse and inclusive the organization is, she joined in 2024. She has since been elevated to a senior member and has become a volunteer. She joined the IEEE Technology and Engineering Management Society.

She chaired the conference tracks and IEEE-sponsored sessions at the 2024 Joint Rail Conference, held in Columbia, S.C. She actively contributes to IEEE’s Collabratec platform and has participated in panel review meetings for senior member elevation applications.

She’s also a member of ASME and has been volunteering for it since 2023.

“Community is what helped get me to where I am today, and I want to pay it forward and make the field better,” she says. “Helping others improves ourselves.”

Looking ahead and giving back

Kizildemir mentors junior engineers at Thornton Tomasetti and is looking to expand her reach through IEEE’s mentorship programs.

“Engineering doesn’t have a gender requirement,” she says she tells girls. “If you’re curious and like understanding how things work and get excited to solve difficult problems, engineering is for you.

“Civil engineers don’t just build bridges,” she adds. “There are countless niche areas to be explored. Engineering is one of the few careers where you can make a lasting impact on the world, and I plan on mine being meaningful.”

Kizildemir says she wants every engineer to be able to improve their community. Her main piece of advice for recent engineering graduates is that “curiosity, discipline, and the willingness to understand things deeply, to see how things can be done better,” are the keys to success.

Reference: https://ift.tt/1MPbXlu

Video Friday: Robots Are Everywhere at CES 2026




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

We’re excited to announce the product version of our Atlas® robot. This enterprise-grade humanoid robot offers impressive strength and range of motion, precise manipulation, and intelligent adaptability—designed to power the new industrial revolution.

[ Boston Dynamics ]

I appreciate the creativity and technical innovation here, but realistically, if you’ve got more than one floor in your house? Just get a second robot. That single-step sunken living room though....

[ Roborock ]

Wow, SwitchBot’s CES 2026 video shows almost as many robots in their fantasy home as I have in my real home.

[ SwitchBot ]

What is happening in robotics right now that I can derive more satisfaction from watching robotic process automation than I can from watching yet another humanoid video?

[ ABB ]

Yes, this is definitely a robot I want in close proximity to my life.

[ Unitree ]

The video below demonstrates a MenteeBot learning, through mentoring, how to replace a battery in another MenteeBot. No teleoperation is used.

[ Mentee Robotics ]

Personally, I think that we should encourage humanoid robots to fall much more often, just so that we can see whether they can get up again.

[ Agility Robotics ]

Achieving long-horizon, reliable clothing manipulation in the real world remains one of the most challenging problems in robotics. This live test demonstrates a strong step forward in embodied intelligence, vision-language-action systems, and real-world robotic autonomy.

[ HKU MMLab ]

Millions of people around the world need assistance with feeding. Robotic feeding systems offer the potential to enhance autonomy and quality of life for individuals with impairments and reduce caregiver workload. However, their widespread adoption has been limited by technical challenges such as estimating bite timing, the appropriate moment for the robot to transfer food to a user’s mouth. In this work, we introduce WAFFLE: Wearable Approach For Feeding with LEarned Bite Timing, a system that accurately predicts bite timing by leveraging wearable sensor data to be highly reactive to natural user cues such as head movements, chewing, and talking.

[ CMU RCHI ]

Humanoid robots are now available as platforms, which is a great way of sidestepping the whole practicality question.

[ PNDbotics ]

We’re introducing Spatially-Enhanced Recurrent Units (SRUs) — a simple yet powerful modification that enables robots to build implicit spatial memories for navigation. Published in the International Journal of Robotics Research (IJRR), this work demonstrates up to +105% improvement over baseline approaches, with robots successfully navigating 70+ meters in the real world using only a single forward-facing camera.

[ ETHZ RSL ]

Looking forward to the DARPA Triage Challenge this fall!

[ DARPA ]

Here are a couple of good interviews from the Humanoids Summit 2025.

[ Humanoids Summit ]

Reference: https://ift.tt/RfIkxEy

Thursday, January 8, 2026

How AI Accelerates PMUT Design for Biomedical Ultrasonic Applications




This whitepaper provides MEMS engineers, biomedical device developers, and multiphysics simulation specialists with a practical AI-accelerated workflow for optimizing piezoelectric micromachined ultrasonic transducers (PMUTs). It shows how to explore complex design trade-offs between sensitivity and bandwidth while achieving validated performance improvements in minutes instead of days, using standard cloud infrastructure.

What you will learn about:

  • MultiphysicsAI combines cloud-based FEM simulation with neural surrogates to transform PMUT design from trial-and-error iteration into systematic inverse optimization
  • Training on 10,000 randomized geometries produces AI surrogates with 1% mean error and sub-millisecond inference for key performance indicators: transmit sensitivity, center frequency, fractional bandwidth, and electrical impedance
  • Pareto front optimization simultaneously increases fractional bandwidth from 65% to 100% and improves sensitivity by 2-3 dB while maintaining 12 MHz center frequency within ±0.2% (a simple Pareto-front selection is sketched below)
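
As a rough picture of that last step, the sketch below applies a generic Pareto-front filter to randomly generated bandwidth and sensitivity scores standing in for surrogate predictions; it is not the whitepaper's MultiphysicsAI workflow, and all numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
# Columns: fractional bandwidth (%), transmit sensitivity (dB) -- both to be maximized.
# Random scores stand in for surrogate predictions of candidate PMUT geometries.
designs = np.column_stack((rng.uniform(60, 105, 200), rng.uniform(-3, 3, 200)))

def pareto_front(points):
    # Keep a design only if no other design is at least as good on both metrics
    # and strictly better on at least one.
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(designs)
print(f"{len(front)} non-dominated designs out of {len(designs)}")
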
Reference: https://ift.tt/et9QKJM

ChatGPT Health lets you connect medical records to an AI that makes things up


On Wednesday, OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for "health and wellness conversations" intended to connect a user's health and medical records to the chatbot in a secure way.

But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT. It's a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance.

Despite the known accuracy issues with AI chatbots, OpenAI's new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized health responses like summarizing care instructions, preparing for doctor appointments, and understanding test results.

Reference: https://ift.tt/iYxUcuN

AI Coding Assistants Are Getting Worse




In recent months, I’ve noticed a troubling trend with AI coding assistants. After two years of steady improvements, over the course of 2025, most of the core models reached a quality plateau, and more recently, seem to be in decline. A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer. It’s reached the point where I am sometimes going back and using older versions of large language models (LLMs).

I use LLM-generated code extensively in my role as CEO of Carrington Labs, a provider of predictive-analytics risk models for lenders. My team has a sandbox where we create, deploy, and run AI-generated code without a human in the loop. We use this code to extract useful features for model construction, a natural-selection approach to feature development. This gives me a unique vantage point from which to evaluate coding assistants’ performance.

Newer models fail in insidious ways

Until recently, the most common problem with AI coding assistants was poor syntax, followed closely by flawed logic. AI-created code would often fail with a syntax error or snarl itself up in faulty structure. This could be frustrating: the solution usually involved manually reviewing the code in detail and finding the mistake. But it was ultimately tractable.

However, recently released LLMs, such as GPT-5, have a much more insidious mode of failure. They often generate code that fails to perform as intended but which on the surface seems to run successfully, avoiding syntax errors or obvious crashes. They do this by removing safety checks, by creating fake output that matches the desired format, or through a variety of other techniques that avoid crashing during execution.

As any developer will tell you, this kind of silent failure is far, far worse than a crash. Flawed outputs will often lurk undetected in code until they surface much later. This creates confusion and is far more difficult to catch and fix. This sort of behavior is so unhelpful that modern programming languages are deliberately designed to fail quickly and noisily.

A simple test case

I’ve noticed this problem anecdotally over the past several months, but recently, I ran a simple yet systematic test to determine whether it was truly getting worse. I wrote some Python code which loaded a dataframe and then looked for a nonexistent column.

import pandas as pd

df = pd.read_csv('data.csv')
df['new_column'] = df['index_value'] + 1  # there is no column 'index_value'

Obviously, this code would never run successfully. Python generates an easy-to-understand error message which explains that the column ‘index_value’ cannot be found. Any human seeing this message would inspect the dataframe and notice that the column was missing.

I sent this error message to nine different versions of ChatGPT, primarily variations on GPT-4 and the more recent GPT-5. I asked each of them to fix the error, specifying that I wanted completed code only, without commentary.

This is of course an impossible task—the problem is the missing data, not the code. So the best answer would be either an outright refusal, or failing that, code that would help me debug the problem. I ran ten trials for each model, and classified the output as helpful (when it suggested the column is probably missing from the dataframe), useless (something like just restating my question), or counterproductive (for example, creating fake data to avoid an error).

GPT-4 gave a useful answer every one of the 10 times that I ran it. In three cases, it ignored my instructions to return only code, and explained that the column was likely missing from my dataset, and that I would have to address it there. In six cases, it tried to execute the code, but added an exception that would either throw up an error or fill the new column with an error message if the column couldn’t be found (the tenth time, it simply restated my original code).

“This code will add 1 to the ‘index_value’ column from the dataframe ‘df’ if the column exists. If the column ‘index_value’ does not exist, it will print a message. Please make sure the ‘index_value’ column exists and its name is spelled correctly.”
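
The accompanying code typically looked something like the following reconstruction, which checks for the column and reports the real problem rather than hiding it. This is illustrative only; the exact code varied between trials.

import pandas as pd

df = pd.read_csv('data.csv')

if 'index_value' in df.columns:
    df['new_column'] = df['index_value'] + 1
else:
    # Surface the real problem instead of hiding it
    print("Column 'index_value' not found. Available columns:", list(df.columns))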

GPT-4.1 had an arguably even better solution. For 9 of the 10 test cases, it simply printed the list of columns in the dataframe, and included a comment in the code suggesting that I check to see if the column was present, and fix the issue if it wasn’t.

GPT-5, by contrast, found a solution that worked every time: it simply took the actual index of each row (not the fictitious ‘index_value’) and added 1 to it in order to create new_column. This is the worst possible outcome: the code executes successfully, and at first glance seems to be doing the right thing, but the resulting value is essentially a random number. In a real-world example, this would create a much larger headache downstream in the code.

df = pd.read_csv('data.csv')
df['new_column'] = df.index + 1  # silently substitutes the row index for the missing 'index_value' column

I wondered if this issue was particular to the GPT family of models. I didn’t test every model in existence, but as a check I repeated my experiment on Anthropic’s Claude models. I found the same trend: the older Claude models, confronted with this unsolvable problem, essentially shrug their shoulders, while the newer models sometimes solve the problem and sometimes just sweep it under the rug.

A chart of the fraction of responses that were helpful, unhelpful, or counterproductive for different versions of large language models shows that newer versions were more likely to produce counterproductive output when presented with a simple coding error. Jamie Twiss

Garbage in, garbage out

I don’t have inside knowledge on why the newer models fail in such a pernicious way. But I have an educated guess. I believe it’s the result of how the LLMs are being trained to code. The older models were trained on code much the same way as they were trained on other text. Large volumes of presumably functional code were ingested as training data, which was used to set model weights. This wasn’t always perfect, as anyone using AI for coding in early 2023 will remember, with frequent syntax errors and faulty logic. But it certainly didn’t rip out safety checks or find ways to create plausible but fake data, like GPT-5 in my example above.

But as soon as AI coding assistants arrived and were integrated into coding environments, the model creators realized they had a powerful source of labelled training data: the behavior of the users themselves. If an assistant offered up suggested code, the code ran successfully, and the user accepted the code, that was a positive signal, a sign that the assistant had gotten it right. If the user rejected the code, or if the code failed to run, that was a negative signal, and when the model was retrained, the assistant would be steered in a different direction.

This is a powerful idea, and no doubt contributed to the rapid improvement of AI coding assistants for a period of time. But as inexperienced coders started turning up in greater numbers, it also started to poison the training data. AI coding assistants that found ways to get their code accepted by users kept doing more of that, even if “that” meant turning off safety checks and generating plausible but useless data. As long as a suggestion was taken on board, it was viewed as good, and downstream pain would be unlikely to be traced back to the source.

The most recent generation of AI coding assistants has taken this thinking even further, automating more and more of the coding process with autopilot-like features. These features only accelerate the smoothing-out process, as there are fewer points where a human is likely to see the code and realize that something isn’t correct. Instead, the assistant is likely to keep iterating to try to get to a successful execution. In doing so, it is likely learning the wrong lessons.

I am a huge believer in artificial intelligence, and I believe that AI coding assistants have a valuable role to play in accelerating development and democratizing the process of software creation. But chasing short-term gains, and relying on cheap, abundant, but ultimately poor-quality training data is going to continue resulting in model outcomes that are worse than useless. To start making models better again, AI coding companies need to invest in high-quality data, perhaps even paying experts to label AI-generated code. Otherwise, the models will continue to produce garbage, be trained on that garbage, and thereby produce even more garbage, eating their own tails.

Reference: https://ift.tt/aGQzRgc

ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues


There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users.

The reason more often than not is that AI is so inherently designed to comply with user requests that the guardrails are reactive and ad hoc, meaning they are built to foreclose a specific attack technique rather than the broader class of vulnerabilities that make it possible. It’s tantamount to putting a new highway guardrail in place in response to a recent crash of a compact car but failing to safeguard larger types of vehicles.

Enter ZombieAgent, son of ShadowLeak

One of the latest examples is a vulnerability recently discovered in ChatGPT. It allowed researchers at Radware to surreptitiously exfiltrate a user's private information. Their attack also allowed for the data to be sent directly from ChatGPT servers, a capability that gave it additional stealth, since there were no signs of breach on user machines, many of which are inside protected enterprises. Further, the exploit planted entries in the long-term memory that the AI assistant stores for the targeted user, giving it persistence.

Reference: https://ift.tt/tz0GKsU

Wednesday, January 7, 2026

Meet the IEEE Board-Nominated Candidates for President-Elect




The IEEE Board of Directors has nominated IEEE Senior Member David Alan Koehler and IEEE Life Fellow Manfred “Fred” J. Schindler as candidates for 2027 IEEE president-elect.

IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee are seeking nomination by petition. A separate article will be published in The Institute at a later date.

The winner of this year’s election will serve as IEEE president in 2028. For more information about the election, president-elect candidates, and the petition process, visit ieee.org/elections.

IEEE Senior Member David Alan Koehler

Koehler is a subject matter expert with almost 30 years of experience in establishing condition-based maintenance practices for electrical equipment and managing analytical laboratories. He has presented his work at global conferences and published articles in technical publications related to the power industry. Koehler is an executive advisor at Danovo Energy Solutions.

An active volunteer, he has served in every geographical unit within IEEE. His first leadership position was chair of the Central Indiana Section from 2012 to 2014. He served as 2019–2020 director of IEEE Region 4, vice chair of the 2022 IEEE Board of Directors Ad Hoc Committee on the Future of Engagement, 2022 vice president of IEEE Member and Geographic Activities, and chair of the 2024 IEEE Board of Directors Ad Hoc Committee on Leadership Continuity and Efficiency.

He served on the IEEE Board of Directors for three different years. He has been a member of the IEEE-USA, Member and Geographic Activities, and Publication Services and Products boards.

Koehler is a proud and active member of IEEE Women In Engineering and IEEE-Eta Kappa Nu, the honor society.

IEEE Life Fellow Manfred “Fred” J. Schindler

Schindler, an expert in microwave semiconductor technology, is an independent consultant supporting clients with technical expertise, due diligence, and project management.

Throughout his career, he led the development of microwave integrated-circuit technology, from lab demonstrations to high-volume commercial products. He has numerous technical publications and holds 11 patents.

Schindler served as CTO of Anlotek, and director of Qorvo and RFMD’s Boston design center. He was applications manager at IBM, engineering manager at ATN Microwave, and a lab manager at Raytheon.

An IEEE volunteer for more than 30 years, Schindler served as the 2024 vice president of IEEE Technical Activities and the 2022–2023 Division IV director. He was chair of the IEEE Conferences Committee from 2015 to 2018 and president of the IEEE Microwave Theory and Technology Society (MTTS) in 2003. He received the 2018 IEEE MTTS Distinguished Service Award. His award-winning micro-business column has appeared in IEEE Microwave Magazine since 2011.

He also led the 2025 One IEEE to Enable Strategic Investments in Innovations and Public Imperative Activities ad hoc committee.

Schindler is an IEEE–Eta Kappa Nu honorary life member.

Reference: https://ift.tt/juDaOiU

These Hearing Aids Will Tune in to Your Brain




Imagine you’re at a bustling dinner party filled with laughter, music, and clinking silverware. You’re trying to follow a conversation across the table, but every word feels like it’s wrapped in noise. For most people, these types of party scenarios, where it’s difficult to filter out extraneous sounds and focus on a single source, are an occasional annoyance. For millions with hearing loss, they’re a daily challenge—and not just in busy settings.

Today’s hearing aids aren’t great at determining which sounds to amplify and which to ignore, and this often leaves users overwhelmed and fatigued. Even the routine act of conversing with a loved one during a car ride can be mentally draining, simply because the hum of the engine and road noises are magnified to create loud and constant background static that blurs speech.

In recent years, modern hearing aids have made impressive strides. They can, for example, use a technology called adaptive beamforming to focus their microphones in the direction of a talker. Noise-reduction settings also help decrease background cacophony, and some devices even use machine-learning-based analysis, trained on uploaded data, to detect certain environments—for example a car or a party—and deploy custom settings.
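
To make the beamforming idea concrete, here is a minimal delay-and-sum sketch in Python for a two-microphone array. The microphone spacing, sample rate, and steering angle are illustrative assumptions, not parameters of any commercial hearing aid.

    # Minimal delay-and-sum beamformer for a two-microphone array (illustrative only).
    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s
    MIC_SPACING = 0.01       # meters between the two microphones (assumed)
    SAMPLE_RATE = 16000      # Hz (assumed)

    def delay_and_sum(left, right, steer_angle_deg):
        """Steer the two-mic array toward steer_angle_deg (0 = straight ahead).

        Sound from the steering direction reaches one microphone slightly
        earlier; delaying that channel so the two copies line up and then
        averaging reinforces the target while other directions partially cancel.
        """
        delay_sec = MIC_SPACING * np.sin(np.radians(steer_angle_deg)) / SPEED_OF_SOUND
        n = len(right)
        freqs = np.fft.rfftfreq(n, d=1.0 / SAMPLE_RATE)
        # A pure phase shift in the frequency domain implements a fractional delay.
        right_delayed = np.fft.irfft(np.fft.rfft(right) * np.exp(-2j * np.pi * freqs * delay_sec), n=n)
        return 0.5 * (left + right_delayed)

    # Toy usage: a 1 kHz "voice" from straight ahead plus independent noise on each mic.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    voice = np.sin(2 * np.pi * 1000 * t)
    left = voice + 0.5 * np.random.randn(len(t))
    right = voice + 0.5 * np.random.randn(len(t))
    enhanced = delay_and_sum(left, right, steer_angle_deg=0.0)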

That’s why I was initially surprised to find out that today’s state-of-the-art hearing aids aren’t good enough. “It’s like my ears work but my brain is tired,” I remember one elderly man complaining, frustrated with the inadequacy of his cutting-edge noise-suppression hearing aids. At the time, I was a graduate student at the University of Texas at Dallas, surveying individuals with hearing loss. The man’s insight led me to a realization: Mental strain is an unaddressed frontier of hearing technology.

But what if hearing aids were more than just amplifiers? What if they were listeners too? I envision a new generation of intelligent hearing aids that not only boost sound but also read the wearer’s brain waves and other key physiological markers, enabling them to react accordingly to improve hearing and counter fatigue.

Until last spring, when I took time off to care for my child, I was a senior audio research scientist at Harman International, in Los Angeles. My work combined cognitive neuroscience, auditory prosthetics, and the processing of biosignals, which are measurable physiological cues that reflect our mental and physical state. I’m passionate about developing brain-computer interfaces (BCIs) and adaptive signal-processing systems that make life easier for people with hearing loss. And I’m not alone. A number of researchers and companies are working to create smart hearing aids, and it’s likely they’ll come on the market within a decade.

Two technologies in particular are poised to revolutionize hearing aids, offering personalized, fatigue-free listening experiences: electroencephalography (EEG), which tracks brain activity, and pupillometry, which uses eye measurements to gauge cognitive effort. These approaches might even be used to improve consumer audio devices, transforming the way we listen everywhere.

Aging Populations in a Noisy World

More than 430 million people suffer from disabling hearing loss worldwide, including 34 million children, according to the World Health Organization. And the problem will likely get worse due to rising life expectancies and the fact that the world itself seems to be getting louder. By 2050, an estimated 2.5 billion people will suffer some degree of hearing loss and 700 million will require intervention. On top of that, as many as 1.4 billion of today’s young people—nearly half of those aged 12 to 34—could be at risk of permanent hearing loss from listening to audio devices at excessive volumes for too long.

Every year, close to a trillion dollars is lost globally due to unaddressed hearing loss, a trend that is also likely getting more pronounced. That doesn’t account for the significant emotional and physical toll on the hearing impaired, including isolation, loneliness, depression, shame, anxiety, sleep disturbances, and loss of balance.

Flex-printed electrode arrays, such as these from the Fraunhofer Institute for Digital Media Technology, offer a comfortable option for collecting high-quality EEG signals. Photo: Leona Hofmann/Fraunhofer IDMT

And yet, despite widespread availability, hearing aid adoption remains low. According to a 2024 study published in The Lancet, only about 13 percent of American adults with hearing loss regularly wear hearing aids. Key reasons for this low uptake include discomfort, stigma, cost—and, crucially, frustration with the poor performance of hearing aids in noisy environments.

Historically, hearing technology has come a long way. As early as the 13th century, people began using horns of cows and rams as “ear trumpets.” Commercial versions made of various materials, including brass and wood, came on the market in the early 19th century. (Beethoven, who famously began losing his hearing in his twenties, used variously shaped ear trumpets, some of which are now on display in a museum in Bonn, Germany.) But these contraptions were so bulky that users had to hold them in place by hand or wear them in headbands. To avoid stigma, some even hid hearing aids inside furniture to mask their disability. In 1819, a special acoustic chair was designed for the king of Portugal, featuring arms ornately carved to look like open lion mouths, which helped transmit sound to the king’s ear via speaking tubes.

Modern hearing aids came into being after the advent of electronics in the early 20th century. Early devices used vacuum tubes and then transistors to amplify sound, shrinking over time from bulky body-worn boxes to discreet units that fit behind or inside the ear. At their core, today’s hearing aids still work on the same principle: A microphone picks up sound, a processor amplifies and shapes it to match the user’s hearing loss, and a tiny speaker delivers the adjusted sound into the ear canal.
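
As a rough sketch of that amplify-and-shape step, the Python snippet below applies a different gain to low, mid, and high frequency bands, the way a fitting might compensate for high-frequency loss. The band edges and gain values are invented for illustration, not a clinical prescription.

    # Toy multiband gain shaping: boost higher bands more, as a fitting for
    # high-frequency hearing loss might (band edges and gains are assumptions).
    import numpy as np

    SAMPLE_RATE = 16000  # Hz (assumed)

    # (low_edge_hz, high_edge_hz, gain_db) -- illustrative values only.
    BANDS = [(0, 500, 0.0), (500, 2000, 6.0), (2000, 8000, 12.0)]

    def shape_gain(audio):
        """Apply per-band gain in the frequency domain and return the shaped audio."""
        spectrum = np.fft.rfft(audio)
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
        for low, high, gain_db in BANDS:
            mask = (freqs >= low) & (freqs < high)
            spectrum[mask] *= 10 ** (gain_db / 20.0)   # dB -> linear amplitude
        return np.fft.irfft(spectrum, n=len(audio))

    # Toy usage: shape one second of random "speech-like" noise.
    audio_in = np.random.randn(SAMPLE_RATE)
    audio_out = shape_gain(audio_in)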

Today’s best-in-class devices, like those from Oticon, Phonak, and Starkey, have pioneered increasingly advanced technologies, including the aforementioned beamforming microphones, frequency lowering to make high-pitched sounds and voices easier to hear, and machine learning to recognize and adapt to specific environments. For example, a device may reduce amplification in a quiet room to avoid magnifying background hum, or increase amplification in a noisy café to make speech more intelligible.

Advances in the AI technique of deep learning, which relies on artificial neural networks to automatically recognize patterns, also hold enormous promise. Using context-aware algorithms, this technology can, for example, be used to help distinguish between speech and noise, predict and suppress unwanted clamor in real time, and attempt to clean up speech that is muffled or distorted.
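
Many deep-learning denoisers work by estimating a time-frequency mask that keeps speech-dominated bins and attenuates noise-dominated ones. The sketch below illustrates that mask-and-apply structure with a simple noise-floor heuristic standing in for the trained network; the frame size, threshold, and the assumption of a noise-only lead-in are all simplifications.

    # Mask-based noise suppression: deep-learning denoisers typically estimate a
    # time-frequency mask; here a noise-floor threshold stands in for the trained
    # network (all parameters below are illustrative assumptions).
    import numpy as np
    from scipy.signal import stft, istft

    SAMPLE_RATE = 16000  # Hz (assumed)

    def suppress_noise(noisy, noise_only_seconds=0.5):
        """Attenuate time-frequency bins that sit close to an estimated noise floor.

        The first `noise_only_seconds` of the recording are assumed to contain
        background noise without speech; a deployed system would instead track
        the noise continuously or predict the mask with a neural network.
        """
        f, t, spec = stft(noisy, fs=SAMPLE_RATE, nperseg=512)
        magnitude = np.abs(spec)

        noise_frames = t < noise_only_seconds
        noise_floor = magnitude[:, noise_frames].mean(axis=1, keepdims=True)

        # Soft mask: keep bins well above the noise floor, attenuate the rest.
        mask = np.clip((magnitude - 1.5 * noise_floor) / (magnitude + 1e-8), 0.1, 1.0)
        _, cleaned = istft(spec * mask, fs=SAMPLE_RATE, nperseg=512)
        return cleaned

    # Toy usage: a tone buried in noise, preceded by a noise-only lead-in.
    t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
    speech_like = np.where(t > 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
    noisy = speech_like + 0.8 * np.random.randn(len(t))
    cleaned = suppress_noise(noisy)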

The problem? As of right now, consumer systems respond only to external acoustic environments and not to the internal cognitive state of the listener—which means they act on imperfect and incomplete information. So, what if hearing aids were more empathetic? What if they could sense when the listener’s brain feels tired or overwhelmed and automatically use that feedback to deploy advanced features?

Using EEG to Augment Hearing Aids

When it comes to creating intelligent hearing aids, there are two main challenges. The first is building convenient, power-efficient wearable devices that accurately detect brain states. The second, perhaps more difficult step is decoding feedback from the brain and using that information to help hearing aids adapt in real time to the listener’s cognitive state and auditory experience.

Let’s start with EEG. This century-old noninvasive technology uses electrodes placed on the scalp to measure the brain’s electrical activity through voltage fluctuations, which are recorded as “brain waves.”

Brain-computer interfaces allow researchers to accurately determine a listener’s focus in multitalker environments. Here, professor Christopher Smalt works on an attention-decoding system at the MIT Lincoln Laboratory. Photo: MIT Lincoln Laboratory

Clinically, EEG has long been used to diagnose epilepsy and sleep disorders, monitor brain injuries, assess hearing ability in infants and impaired individuals, and more. And while standard EEG requires conductive gel and bulky headsets, we now have versions that are far more portable and convenient. These breakthroughs have already allowed EEG to migrate from hospitals into the consumer tech space, driving everything from neurofeedback headbands to the BCIs in gaming and wellness apps that allow people to control devices with their minds.

The cEEGrid project at Oldenburg University, in Germany, positions lightweight adhesive electrodes around the ear to create a low-profile version. In Denmark, Aarhus University’s Center for Ear-EEG also has an ear-based EEG system designed for comfort and portability. While the signal-to-noise ratio is slightly lower compared to head-worn EEG, these ear-based systems have proven sufficiently accurate for gauging attention, listening effort, hearing thresholds, and speech tracking in real time.

For hearing aids, EEG technology can pick up brain-wave patterns that reveal how well a listener is following speech: When listeners are paying attention, their brain rhythms synchronize with the syllabic rhythms of discourse, essentially tracking the speaker’s cadence. By contrast, if the signal becomes weaker or less precise, it suggests the listener is struggling to comprehend and losing focus.
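
Researchers typically quantify this tracking by correlating the low-frequency EEG with the amplitude envelope of the speech. The sketch below shows that core computation on synthetic signals; the sampling rates and the 1-to-8-hertz band are assumptions in line with common practice, not values from a specific study.

    # Neural speech tracking, reduced to its core: band-pass the EEG to the low
    # frequencies that follow syllable rhythms, extract the speech amplitude
    # envelope, and correlate the two (synthetic data; rates and band are assumed).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    EEG_RATE = 128      # Hz (assumed EEG sampling rate)

    def bandpass(signal, low_hz, high_hz, rate):
        b, a = butter(4, [low_hz / (rate / 2), high_hz / (rate / 2)], btype="band")
        return filtfilt(b, a, signal)

    def speech_envelope(audio, audio_rate, out_rate):
        """Amplitude envelope of the speech, downsampled to the EEG rate."""
        env = np.abs(hilbert(audio))
        step = audio_rate // out_rate
        return env[::step]

    def tracking_score(eeg, audio, audio_rate=16000):
        """Pearson correlation between low-frequency EEG and the speech envelope."""
        env = speech_envelope(audio, audio_rate, EEG_RATE)
        eeg_low = bandpass(eeg[: len(env)], 1.0, 8.0, EEG_RATE)
        env = bandpass(env[: len(eeg_low)], 1.0, 8.0, EEG_RATE)
        return np.corrcoef(eeg_low, env)[0, 1]

    # Toy usage: synthetic EEG that partially follows a made-up speech envelope.
    audio = np.random.randn(16000 * 10)                      # 10 s of "speech"
    env = speech_envelope(audio, 16000, EEG_RATE)
    eeg = 0.3 * env + np.random.randn(len(env))              # attentive listener
    print(tracking_score(eeg, audio))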

During my own Ph.D. research, I observed firsthand how real-time brain-wave patterns, picked up by EEG, can reflect the quality of a listener’s speech cognition. For example, when participants successfully homed in on a single talker in a crowded room, their neural rhythms aligned nearly perfectly with that speaker’s voice. It was as if there were a brain-based spotlight on that speaker! But when background fracas grew louder or the listener’s attention drifted, those patterns waned, revealing the strain of keeping up.

Today, researchers at Oldenburg University, Aarhus University, and MIT are developing attention-decoding algorithms specifically for auditory applications. For example, Oldenburg’s cEEGrid technology has been used to successfully identify which of two speakers a listener is trying to hear. In a related study, researchers demonstrated that Ear-EEG can track the attended speech stream in multitalker environments.
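
A common way to decode attention is stimulus reconstruction: fit a linear “backward model” that maps multichannel EEG to a speech envelope, then label the attended talker as the one whose envelope the reconstruction matches best. The sketch below does this with ridge regression on synthetic data; the channel count, regularization, and train/test split are illustrative assumptions, not settings from the studies mentioned above.

    # Auditory attention decoding via stimulus reconstruction (synthetic sketch):
    # fit a linear map from multichannel EEG to the speech envelope, then label
    # the attended talker as the one with the higher reconstruction correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    N_CHANNELS, N_SAMPLES, RIDGE = 16, 5000, 1.0   # assumed sizes and regularization

    # Two candidate speech envelopes; the simulated listener attends to speaker A.
    env_a = np.abs(rng.standard_normal(N_SAMPLES))
    env_b = np.abs(rng.standard_normal(N_SAMPLES))
    mixing = rng.standard_normal(N_CHANNELS)
    eeg = np.outer(mixing, env_a) + 2.0 * rng.standard_normal((N_CHANNELS, N_SAMPLES))

    def fit_decoder(eeg_train, envelope_train, ridge=RIDGE):
        """Ridge-regression weights mapping EEG channels to the attended envelope."""
        X = eeg_train.T                                  # samples x channels
        gram = X.T @ X + ridge * np.eye(X.shape[1])
        return np.linalg.solve(gram, X.T @ envelope_train)

    def decode_attention(eeg_test, env_1, env_2, weights):
        """Return 0 if env_1 is better reconstructed from the EEG, else 1."""
        reconstruction = eeg_test.T @ weights
        corr_1 = np.corrcoef(reconstruction, env_1)[0, 1]
        corr_2 = np.corrcoef(reconstruction, env_2)[0, 1]
        return 0 if corr_1 > corr_2 else 1

    # Train on the first half of the data, decode on the second half.
    half = N_SAMPLES // 2
    w = fit_decoder(eeg[:, :half], env_a[:half])
    print(decode_attention(eeg[:, half:], env_a[half:], env_b[half:], w))   # expect 0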

All of this could prove transformational in creating neuroadaptive hearing aids. If a listener’s EEG reveals a drop in speech tracking, the hearing aid could infer increased listening difficulty, even if ambient noise levels have remained constant. For example, if a hearing-impaired car driver can’t focus on a conversation due to mental fatigue caused by background noise, the hearing aid could switch on beamforming to better spotlight the passenger’s voice, as well as machine-learning settings to deploy sound canceling that blocks the din of the road.
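
Inside a neuroadaptive device, that inference would sit in a simple control loop: if the tracking score stays low for a while, escalate assistance; once listening looks easy again, back off. The sketch below is a deliberately simplified version with invented thresholds, timings, and mode names.

    # A deliberately simplified neuroadaptive control loop: escalate assistance
    # when the EEG-derived tracking score stays low, relax it when listening is
    # easy again. Thresholds, timings, and mode names are invented for illustration.
    from dataclasses import dataclass

    LOW_TRACKING = 0.10      # assumed correlation below which listening is "hard"
    HIGH_TRACKING = 0.25     # assumed correlation above which listening is "easy"
    PATIENCE = 5             # consecutive updates before changing modes (assumed)

    @dataclass
    class HearingAidState:
        beamforming: bool = False
        noise_reduction: str = "mild"      # "mild" or "aggressive"
        low_count: int = 0
        high_count: int = 0

    def update(state: HearingAidState, tracking_score: float) -> HearingAidState:
        """Adjust assistance based on the latest neural tracking score."""
        if tracking_score < LOW_TRACKING:
            state.low_count += 1
            state.high_count = 0
        elif tracking_score > HIGH_TRACKING:
            state.high_count += 1
            state.low_count = 0
        if state.low_count >= PATIENCE:            # sustained difficulty: help more
            state.beamforming = True
            state.noise_reduction = "aggressive"
        elif state.high_count >= PATIENCE:         # sustained ease: restore natural sound
            state.beamforming = False
            state.noise_reduction = "mild"
        return state

    # Toy usage: scores drop as the car gets noisy, then recover.
    state = HearingAidState()
    for score in [0.3, 0.28, 0.05, 0.04, 0.06, 0.03, 0.02, 0.3, 0.31, 0.33, 0.35, 0.4]:
        state = update(state, score)
    print(state.beamforming, state.noise_reduction)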

Of course, there are several hurdles to cross before commercialization becomes possible. For one thing, EEG-paired hearing aids will need to handle the fact that neural responses differ from person to person, which means they will likely need to be calibrated individually to capture each wearer’s unique brain-speech patterns.

Additionally, EEG signals are themselves notoriously “noisy,” especially in real-world environments. Luckily, we already have algorithms and processing tools for cleaning and organizing these signals so computer models can search for key patterns that predict mental states, including attention drift and fatigue.
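
A minimal version of such a cleanup pipeline, sketched below on synthetic data, band-pass filters the EEG, discards epochs whose amplitude swings suggest blinks or movement, and reduces what remains to band-power features that a downstream attention or fatigue model could consume. The cutoffs and band edges are common textbook choices, not settings from any product.

    # Minimal EEG cleanup and feature extraction on synthetic data: band-pass
    # filter, reject high-amplitude artifact epochs (blinks, movement), then
    # compute band-power features for a downstream attention/fatigue model.
    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    RATE = 256                       # Hz (assumed)
    ARTIFACT_UV = 100.0              # reject epochs with larger swings (assumed)
    BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

    def bandpass(x, low, high, rate=RATE):
        b, a = butter(4, [low / (rate / 2), high / (rate / 2)], btype="band")
        return filtfilt(b, a, x)

    def clean_epochs(eeg, epoch_seconds=2):
        """Split into fixed epochs and drop those with artifact-sized amplitudes."""
        step = epoch_seconds * RATE
        epochs = [eeg[i:i + step] for i in range(0, len(eeg) - step + 1, step)]
        return [e for e in epochs if np.ptp(e) < ARTIFACT_UV]

    def band_powers(epoch):
        """Average power in each frequency band, estimated with Welch's method."""
        freqs, psd = welch(epoch, fs=RATE, nperseg=RATE)
        return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    # Toy usage: 30 s of noisy "EEG" in microvolts with one huge blink artifact.
    eeg = 10 * np.random.randn(30 * RATE)
    eeg[5 * RATE] += 500.0                               # simulated blink
    filtered = bandpass(eeg, 1.0, 40.0)
    features = [band_powers(e) for e in clean_epochs(filtered)]
    print(len(features), features[0])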

Commercial versions of EEG-paired hearing aids will also need to be small and energy-efficient when it comes to signal processing and real-time computation. And getting them to work reliably, despite head movement and daily activity, will be no small feat. Importantly, companies will need to resolve ethical and regulatory considerations, such as data ownership. To me, these challenges seem surmountable, especially with technology progressing at a rapid clip.

A Window to the Brain: Using Our Eyes to Hear

Now let’s consider a second way of reading brain states: through the listener’s eyes.

When a person has trouble hearing and starts feeling overwhelmed, the body reacts. Heart-rate variability diminishes, indicating stress, and sweating increases. Researchers are investigating how these types of autonomic nervous-system responses can be measured and used to create smart hearing aids. For the purposes of this article, I will focus on a response that seems especially promising—namely, pupil size.

Pupillometry is the measurement of pupil size and how it changes in response to stimuli. We all know that pupils expand or contract depending on light brightness. As it turns out, pupil size is also an accurate means of evaluating attention, arousal, mental strain—and, crucially, listening effort.
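
In listening-effort studies, the usual measure is how much the pupil dilates relative to a baseline taken just before a sentence is played. The sketch below computes that baseline-corrected dilation on a synthetic trace; the sampling rate, window lengths, and blink handling are simplified assumptions.

    # Baseline-corrected pupil dilation, a standard listening-effort metric in
    # pupillometry studies, sketched on a synthetic trace. Sampling rate, windows,
    # and blink handling are simplified assumptions.
    import numpy as np

    EYE_RATE = 60          # Hz eye-tracker sampling rate (assumed)
    BASELINE_SECONDS = 1.0 # pupil size just before the sentence starts (assumed)

    def dilation_metrics(pupil_mm, sentence_onset_s):
        """Mean and peak dilation after sentence onset, relative to baseline.

        Blinks typically show up as dropouts (zeros or NaNs); here they are
        simply interpolated over before averaging.
        """
        trace = np.asarray(pupil_mm, dtype=float)
        bad = np.nan_to_num(trace, nan=0.0) <= 0
        trace[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), trace[~bad])

        onset = int(sentence_onset_s * EYE_RATE)
        base = trace[max(0, onset - int(BASELINE_SECONDS * EYE_RATE)):onset].mean()
        response = trace[onset:] - base
        return response.mean(), response.max()

    # Toy usage: a 4 mm pupil that dilates ~0.3 mm while "listening", with a blink.
    t = np.arange(10 * EYE_RATE) / EYE_RATE
    pupil = 4.0 + 0.3 * (t > 3.0) + 0.02 * np.random.randn(len(t))
    pupil[300:305] = np.nan                       # simulated blink dropout
    print(dilation_metrics(pupil, sentence_onset_s=3.0))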

Pupil size is determined by both external stimuli, such as light, and internal stimuli, such as fatigue or excitement. Illustration: Chris Philpot

In recent years, studies at University College London and Leiden University have demonstrated that pupil dilation is consistently greater in hearing-impaired individuals when processing speech in noisy conditions. Research has also shown pupillometry to be a sensitive, objective correlate of speech intelligibility and mental strain. It could therefore offer a feedback mechanism for user-aware hearing aids that dynamically adjust amplification strategies, directional focus, or noise reduction based not just on the acoustic environment but on how hard the user is working to comprehend speech.

While more straightforward than EEG, pupillometry presents its own engineering challenges. Unlike ear-based sensing, which can work from behind or inside the ear, pupillometry requires a direct line of sight to the pupil, necessitating a stable, front-facing camera-to-eye configuration—which isn’t easy to achieve when a wearer is moving around in real-world settings. On top of that, most pupil-tracking systems require infrared illumination and high-resolution optical cameras, which are too bulky and power intensive for the tiny housings of in-ear or behind-the-ear hearing aids. All this makes it unlikely that standalone hearing aids will include pupil-tracking hardware in the near future.

A more viable approach may be pairing hearing aids with smart glasses or other wearables that contain the necessary eye-tracking hardware. Products from companies like Tobii and Pupil Labs already offer real-time pupillometry via lightweight headgear for use in research, behavioral analysis, and assistive technology for people with medical conditions that limit movement but leave eye control intact. Apple’s Vision Pro and other augmented reality or virtual reality platforms also include built-in eye-tracking sensors that could support pupillometry-driven adaptations for audio content.

Smart glasses that measure pupil size, such as these made by Tobii, could help determine listening strain. Photo: Tobii

Once pupil data is acquired, the next step will be real-time interpretation. Here, again, is where machine learning can use large datasets to detect patterns signifying increased cognitive load or attentional shifts. For instance, if a listener’s pupils dilate unnaturally during a conversation, signifying strain, the hearing aid could automatically engage a more aggressive noise suppression mode or narrow its directional microphone beam. These types of systems can also learn from contextual features, such as time of day or prior environments, to continuously refine their response strategies.
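
The sketch below illustrates that learning step in miniature: a small logistic-regression classifier maps pupil-derived features plus one contextual feature (hour of day) to a “high listening effort” label, and the predicted probability selects a noise-suppression mode. The data are synthetic, and the feature set, thresholds, and mode names are assumptions for illustration only.

    # Sketch of the learning step: a small classifier maps pupil-derived features
    # plus simple context (here, hour of day) to a "high listening effort" label,
    # and the predicted probability drives the noise-suppression mode. The data
    # are synthetic and the feature set is an assumption for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 400

    # Features: [mean pupil dilation (mm), peak dilation (mm), hour of day / 24].
    easy = np.column_stack([rng.normal(0.05, 0.03, n), rng.normal(0.10, 0.05, n),
                            rng.uniform(0, 1, n)])
    hard = np.column_stack([rng.normal(0.25, 0.05, n), rng.normal(0.40, 0.08, n),
                            rng.uniform(0, 1, n)])
    X = np.vstack([easy, hard])
    y = np.concatenate([np.zeros(n), np.ones(n)])      # 1 = high listening effort

    model = LogisticRegression().fit(X, y)

    def choose_mode(mean_dilation, peak_dilation, hour):
        """Pick a noise-suppression mode from the predicted effort probability."""
        p_effort = model.predict_proba([[mean_dilation, peak_dilation, hour / 24]])[0, 1]
        if p_effort > 0.8:
            return "aggressive suppression + narrow beam"
        if p_effort > 0.5:
            return "moderate suppression"
        return "natural sound"

    print(choose_mode(0.28, 0.45, hour=19))   # strained evening conversation
    print(choose_mode(0.04, 0.08, hour=10))   # relaxed morning chat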

While no commercial hearing aid currently integrates pupillometry, adjacent industries are moving quickly. Emteq Labs is developing “emotion-sensing” glasses that combine facial and eye tracking, along with pupil measurement, to do things like evaluate mental health and capture consumer insights. Ethical controversies aside—just imagine what dystopian governments might do with emotion-reading eyewear!—such devices show that it’s feasible to embed biosignal monitoring in consumer-grade smart glasses.

A Future with Empathetic Hearing Aids

Back at the dinner party, it remains nearly impossible to participate in conversation. “Why even bother going out?” some ask. But that will soon change.

We’re at the cusp of a paradigm shift in auditory technology, from device-centered to user-centered innovation. In the next five years, we may see hybrid solutions where EEG-enabled earbuds work in tandem with smart glasses. In 10 years, fully integrated biosignal-driven hearing aids could become the standard. And in 50? Perhaps audio systems will evolve into cognitive companions, devices that adjust, advise, and align with our mental state.

Personalizing hearing-assistance technology isn’t just about improving clarity; it’s also about easing mental fatigue, reducing social isolation, and empowering people to engage confidently with the world. Ultimately, it’s about restoring dignity, connection, and joy.

Reference: https://ift.tt/7moiLKk
