Friday, December 5, 2025

Video Friday: Biorobotics Turns Lobster Tails Into Gripper




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

EPFL scientists have integrated discarded crustacean shells into robotic devices, leveraging the strength and flexibility of natural materials for robotic applications.

[ EPFL ]

Finally, a good humanoid robot demo!

Although having said that, I never trust video demos where it works really well once, and then just pretty well every other time.

[ LimX Dynamics ]

Thanks, Jinyan!

I understand how these structures work, I really do. But watching something rigid extrude itself from a flexible reel will always seem a little magical.

[ AAAS ]

Thanks, Kyujin!

I’m not sure what “industrial grade” actually means, but I want robots to be “automotive grade,” where they’ll easily operate for six months or a year without any maintenance at all.

[ Pudu Robotics ]

Thanks, Mandy!

When you start to suspect that your robotic EV charging solution costs more than your car.

[ Flexiv ]

Yeah uh if the application for this humanoid is actually making robot parts with a hammer and anvil, then I’d be impressed.

[ EngineAI ]

Researchers at Columbia Engineering have designed a robot that can learn a human-like sense of neatness. The researchers taught the system by showing it millions of examples rather than giving it specific instructions. The result is a model that can look at a cluttered tabletop and rearrange scattered objects in an orderly fashion.

[ Paper ]

Why haven’t we seen this sort of thing in humanoid robotics videos yet?

[ HUCEBOT ]

While I definitely appreciate in-the-field testing, it’s also worth asking to what extent your robot is actually being challenged by the in-the-field field that you’ve chosen.

[ DEEP Robotics ]

Introducing HMND 01 Alpha Bipedal — autonomous, adaptive, designed for real-world impact. Built in 5 months, walking stably after 48 hours of training.

[ Humanoid ]

Unitree says that “this is to validate the overall reliability of the robot” but I really have to wonder how useful this kind of reliability validation actually is.

[ Unitree ]

This University of Pennsylvania GRASP on Robotics Seminar is by Jie Tan from Google DeepMind, on “Gemini Robotics: Bringing AI into the Physical World.”

Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. In this talk, I will present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Furthermore, I will discuss the challenges, learnings and future research directions on robot foundation models.

[ University of Pennsylvania GRASP Laboratory ]

Reference: https://ift.tt/PtDINhf

Thursday, December 4, 2025

Are We Testing AI’s Intelligence the Wrong Way?




When people want a clear-eyed take on the state of artificial intelligence and what it all means, they tend to turn to Melanie Mitchell, a computer scientist and a professor at the Santa Fe Institute. Her 2019 book, Artificial Intelligence: A Guide for Thinking Humans, helped define the modern conversation about what today’s AI systems can and can’t do.

Melanie Mitchell

Today at NeurIPS, the year’s biggest gathering of AI professionals, she gave a keynote titled “On the Science of ‘Alien Intelligences’: Evaluating Cognitive Capabilities in Babies, Animals, and AI.” Ahead of the talk, she spoke with IEEE Spectrum about its themes: Why today’s AI systems should be studied more like nonverbal minds, what developmental and comparative psychology can teach AI researchers, and how better experimental methods could reshape the way we measure machine cognition.

You use the phrase “alien intelligences” for both AI and biological minds like babies and animals. What do you mean by that?

Melanie Mitchell: Hopefully you noticed the quotation marks around “alien intelligences.” I’m quoting from a paper by [the neural network pioneer] Terrence Sejnowski where he talks about ChatGPT as being like a space alien that can communicate with us and seems intelligent. And then there’s another paper by the developmental psychologist Michael Frank who plays on that theme and says, we in developmental psychology study alien intelligences, namely babies. And we have some methods that we think may be helpful in analyzing AI intelligence. So that’s what I’m playing on.

When people talk about evaluating intelligence in AI, what kind of intelligence are they trying to measure? Reasoning or abstraction or world modeling or something else?

Mitchell: All of the above. People mean different things when they use the word intelligence, and intelligence itself has all these different dimensions, as you say. So, I used the term cognitive capabilities, which is a little bit more specific. I’m looking at how different cognitive capabilities are evaluated in developmental and comparative psychology and trying to apply some principles from those fields to AI.

Current Challenges in Evaluating AI Cognition

You say that the field of AI lacks good experimental protocols for evaluating cognition. What does AI evaluation look like today?

Mitchell: The typical way to evaluate an AI system is to have some set of benchmarks, and to run your system on those benchmark tasks and report the accuracy. But it often turns out that even though the AI systems we have now are just killing it on benchmarks, even surpassing humans, that performance doesn’t translate to performance in the real world. If an AI system aces the bar exam, that doesn’t mean it’s going to be a good lawyer in the real world. Often the machines are doing well on those particular questions but can’t generalize very well. Also, tests that are designed to assess humans make assumptions that aren’t necessarily relevant or correct for AI systems, about things like how well a system is able to memorize.

As a computer scientist, I didn’t get any training in experimental methodology. Doing experiments on AI systems has become a core part of evaluating systems, and most people who came up through computer science haven’t had that training.

What do developmental and comparative psychologists know about probing cognition that AI researchers should know too?

Mitchell: There’s all kinds of experimental methodology that you learn as a student of psychology, especially in fields like developmental and comparative psychology because those are nonverbal agents. You have to really think creatively to figure out ways to probe them. So they have all kinds of methodologies that involve very careful control experiments, and making lots of variations on stimuli to check for robustness. They look carefully at failure modes, why the system [being tested] might fail, since those failures can give more insight into what’s going on than success.

Can you give me a concrete example of what these experimental methods look like in developmental or comparative psychology?

Mitchell: One classic example is Clever Hans. There was this horse, Clever Hans, who seemed to be able to do all kinds of arithmetic and counting and other numerical tasks. And the horse would tap out its answer with its hoof. For years, people studied it and said, “I think it’s real. It’s not a hoax.” But then a psychologist came around and said, “I’m going to think really hard about what’s going on and do some control experiments.” And his control experiments were: first, put a blindfold on the horse, and second, put a screen between the horse and the question asker. Turns out if the horse couldn’t see the question asker, it couldn’t do the task. What he found was that the horse was actually perceiving very subtle facial expression cues in the asker to know when to stop tapping. So it’s important to come up with alternative explanations for what’s going on. To be skeptical not only of other people’s research, but maybe even of your own research, your own favorite hypothesis. I don’t think that happens enough in AI.

Do you have any case studies from research on babies?

Mitchell: I have one case study where babies were claimed to have an innate moral sense. The experiment showed them videos where there was a cartoon character trying to climb up a hill. In one case there was another character that helped them go up the hill, and in the other case there was a character that pushed them down the hill. So there was the helper and the hinderer. And the babies were assessed as to which character they liked better—and they had a couple of ways of doing that—and overwhelmingly they liked the helper character better. [Editor's note: The babies were 6 to 10 months old, and assessment techniques included seeing whether the babies reached for the helper or the hinderer.]

But another research group looked very carefully at these videos and found that in all of the helper videos, the climber who was being helped was excited to get to the top of the hill and bounced up and down. And so they said, “Well, what if in the hinderer case we have the climber bounce up and down at the bottom of the hill?” And that completely turned around the results. The babies always chose the one that bounced.

Again, coming up with alternatives, even if you have your favorite hypothesis, is the way that we do science. One thing that I’m always a little shocked by in AI is that people use the word skeptic as a negative: “You’re an LLM skeptic.” But our job is to be skeptics, and that should be a compliment.

Importance of Replication in AI Studies

Both those examples illustrate the theme of looking for counter explanations. Are there other big lessons that you think AI researchers should draw from psychology?

Mitchell: Well, in science in general the idea of replicating experiments is really important, and also building on other people’s work. But that’s sadly a little bit frowned on in the AI world. If you submit a paper to NeurIPS, for example, where you replicated someone’s work and then you do some incremental thing to understand it, the reviewers will say, “This lacks novelty and it’s incremental.” That’s the kiss of death for your paper. I feel like that should be appreciated more because that’s the way that good science gets done.

Going back to measuring cognitive capabilities of AI, there’s lots of talk about how we can measure progress towards AGI. Is that a whole other batch of questions?

Mitchell: Well, the term AGI is a little bit nebulous. People define it in different ways. I think it’s hard to measure progress for something that’s not that well defined. And our conception of it keeps changing, partially in response to things that happen in AI. In the old days of AI, people would talk about human-level intelligence and robots being able to do all the physical things that humans do. But people have looked at robotics and said, “Well, okay, it’s not going to get there soon. Let’s just talk about what people call the cognitive side of intelligence,” which I don’t think is really so separable. So I am a bit of an AGI skeptic, if you will, in the best way.

Reference: https://ift.tt/WIGNku1

BYD’s Engine Flexes Between Ethanol, Gasoline, and Electricity




The world’s first mass-produced ethanol car, the Fiat 147, motored onto Brazilian roads in 1979. The vehicle crowned decades of experimentation in the country with sugar-cane (and later, corn-based and second-generation sugar-cane waste) ethanol as a homegrown fuel. When Chinese automaker BYD introduced a plug-in hybrid designed for Brazil in October, equipped with a flex-fuel engine that lets drivers choose to run on any ratio of gasoline and ethanol or access plug-in electric power, the move felt like the latest chapter in a long national story.

The new engine, designed for the company’s best-selling compact SUV, the Song Pro, is the first plug-in hybrid engine dedicated to biofuel, according to Wang Chuanfu, BYD’s founder and CEO.

Margaret Wooldridge, a professor of mechanical engineering at the University of Michigan in Ann Arbor, says the engine’s promise is not in inventing entirely new technology, but in making it accessible.

“The technology existed before,” says Wooldridge, who specializes in hybrid systems, “but fuel switching is expensive, and I’d expect the combinations in this engine to come at a fairly high price tag. BYD’s real innovation is pulling it into a price range where everyday drivers in Brazil can actually choose ratios of ethanol and gasoline, as well as electric.”

BYD’s Affordable Hybrid Innovation

BYD Song Pro vehicles with this new engine were initially priced in a promotion at around US $25,048, with a list price around $35,000. For comparison, another plug-in hybrid vehicle, Toyota’s 2026 Prius Prime, starts at $33,775. The engine is the product of an $18.5 million investment by BYD and a collaboration between Brazilian and Chinese scientists. It adds to Brazil’s history of ethanol use, which began in the 1930s and progressed from ethanol-only to flex-fuel vehicles, giving consumers a toolkit to respond to changing fuel prices, droughts like the one Brazil experienced in the 1980s, or emissions goals.

An engine switching between gasoline and ethanol needs a sensor that can reconcile two distinct fuel-air mixtures. “Integrating that control system, especially in a hybrid architecture, is not trivial,” says Wooldridge. “But BYD appears to have engineered it in a way that’s cost-effective.”

By leveraging a smaller, downsized hybrid engine, the company is likely able to design the engine to be optimal over a smaller speedmap—a narrower, specific range of speeds and power output—avoiding some efficiency compromises that have long plagued flex-fuel powertrain engines, says Wooldridge.

In general, standard flex-fuel vehicles (FFVs) have an internal combustion engine and can operate on gasoline and any blend of gasoline and ethanol up to 83 percent, according to the U.S. Department of Energy. FFV engines only have one fuel system, and mostly use components that are the same as those found in gasoline-only cars. To compensate for ethanol’s different chemical properties and power output compared to gasoline, special components modify the fuel pump and fuel injection system. In addition, FFV engines have engine control modules calibrated to accommodate ethanol’s higher oxygen content.

“Flex-fuel gives consumers flexibility,” Wooldridge says. “If you’re using ethanol, you can run at a higher compression ratio, allowing molecules to be squeezed into a smaller space to allow for faster, more powerful and more efficient combustion. Increasing that ratio boosts efficiency and lowers knock—but if you’re also tying in electric drive, the system can stay optimally efficient across different modes,” she adds.

Jennifer Eaglin, a historian of Brazilian energy at Ohio State University in Columbus, says that BYD is tapping into something deeply rooted in the culture of Brazil, the world’s seventh-most populous country (with a population around 220 million).

“Brazil has built an ethanol-fuel system that’s durable and widespread,” Eaglin says. “It’s no surprise that a company like BYD, recognizing that infrastructure, would innovate to give consumers more options. This isn’t futuristic—it’s a continuation of a long national experiment.”

Reference: https://ift.tt/jdTQEcF

In comedy of errors, men accused of wiping gov databases turned to an AI tool


Two sibling contractors convicted a decade ago of hacking into the US State Department have once again been charged, this time for a comically ham-fisted attempt to steal and destroy government records just minutes after being fired from their contractor jobs.

The Department of Justice on Thursday said that Muneeb Akhter and Sohaib Akhter, both 34, of Alexandria, Virginia, deleted databases and documents belonging to three government agencies. The brothers were federal contractors working for an undisclosed company in Washington, DC, that provides software and services to 45 US agencies. Prosecutors said the men coordinated the crimes and began carrying them out just minutes after being fired.

Using AI to cover up an alleged crime—what could go wrong?

On February 18 at roughly 4:55 pm, the men were fired from the company, according to an indictment unsealed on Thursday. Five minutes later, they allegedly began trying to access their employer’s systems and the federal government databases stored there. By then, access to one of the brothers’ accounts had already been terminated. The other brother, however, allegedly accessed a government agency’s database stored on the employer’s server and issued commands to prevent other users from connecting or making changes to the database. Then, prosecutors said, he issued a command to delete 96 databases, many of which contained sensitive investigative files and records related to Freedom of Information Act matters.


Reference: https://ift.tt/iOyW3Rs

Wednesday, December 3, 2025

Maximum-severity vulnerability threatens 6% of all websites


Security defenders are girding themselves in response to the disclosure Wednesday of a maximum-severity vulnerability in React Server, an open source package that’s widely used by websites and in cloud environments. The vulnerability is easy to exploit and allows hackers to execute malicious code on servers that run it.

React is embedded in web apps running on servers so that remote devices render JavaScript and content more quickly and with fewer resources. React is used by an estimated 6 percent of all websites and 39 percent of cloud environments. When end users reload a page, React allows servers to re-render only parts that have changed, a feature that drastically speeds up performance and lowers the computing resources required by the server.

A perfect 10

Security firm Wiz said exploitation requires only a single HTTP request and had a “near-100% reliability” in its testing. Multiple software frameworks and libraries embed React implementations by default. As a result, even when apps don’t explicitly make use of React functionality, they can still be vulnerable, since the integration layer invokes the buggy code.


Reference: https://ift.tt/mPMR8Sb

MIT’s AI Robotics Lab Director Is Building People-Centered Robots




Daniela Rus has spent her career breaking barriers—scientific, social, and material—in her quest to build machines that amplify rather than replace human capability. She made robotics her life’s work, she says, because she understood it was a way to expand the possibilities of computing while enhancing human capabilities.

“I like to think of robotics as a way to give people superpowers,” Rus says. “Machines can help us reach farther, think faster, and live fuller lives.”

Daniela Rus

Employer: MIT

Job title: Professor of electrical engineering and computer science; director of the MIT Computer Science and Artificial Intelligence Laboratory

Member grade: Fellow

Alma maters: University of Iowa, in Iowa City; Cornell

Her dual missions, she says, are to make technology humane and to make the most of the opportunities afforded by life in the United States. The two goals have fueled her journey from a childhood living under a dictatorship in Romania to the forefront of global robotics research.

Rus, who is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is the recipient of this year’s IEEE Edison Medal, which recognizes her for “sustained leadership and pioneering contributions in modern robotics.”

An IEEE Fellow, she describes the recognition as a responsibility to further her work and mentor the next generation of roboticists entering the field.

The Edison Medal is the latest in a string of honors she has received. In 2017 she won an Engelberger Robotics Award from the Robotic Industries Association. The following year, she was honored with the Pioneer in Robotics and Automation Award by the IEEE Robotics and Automation Society. The society recognized her again in 2023 with its IEEE Robotics and Automation Technical Field Award.

From Romania to Iowa

Rus was born in Cluj-Napoca, Romania, during the rule of dictator Nicolae Ceausescu. Her early life unfolded in a world defined by scarcity—rationed food, intermittent electricity, and a limited ability to move up or out. But she recalls that, amid the stifling insufficiencies, she was surrounded by an irrepressible warmth and intellectual curiosity—even when she was making locomotive screws in a state-run factory as part of her school’s curriculum.

“Life was hard,” she says, “but we had great teachers and strong communities. As a child, you adapt to whatever is around you.”

Her father, Teodor, was a computer scientist and professor, and her mother, Elena, was a physicist.

In 1982, when Rus was 19, her father emigrated to the United States to join the faculty at the University of Iowa, in Iowa City. It was an act of courage and conviction. Within a year, Daniela and her mother joined him there.

“He wanted the freedom to think, to publish, to explore ideas,” Rus says. “And I reaped the benefits of being free from the limitations of our homeland.”

America’s open horizons were intoxicating, she says.

A lecture that changed everything

Rus decided to pursue a degree at her father’s university, where her life changed direction, she says. One afternoon, John Hopcroft—a Turing Award–winning Cornell computer scientist renowned for his work on algorithms and data structures—gave a talk on campus. His message was simple but electrifying, Rus says: Classical computer science had been solved. The next frontier, Hopcroft declared, was computations that interact with the messy physical world.

For Rus, the idea was a revelation.

“It was as if a door had opened,” she says. “I realized the future of computing wasn’t just about logic and code; it was about how machines can perceive, move, and help us in the real world.”

After the lecture, she introduced herself to Hopcroft and told him she wanted to learn from him. Not long after earning her bachelor’s degree in computer science and mathematics in 1985, she applied to get a master’s degree at Cornell, where Hopcroft became her graduate advisor. Rus developed algorithms there for dexterous robotic manipulation—teaching machines to grasp and move objects with precision. She earned her master’s in computer science in 1990, then stayed on at Cornell to work toward a doctorate.

“I like to think of robotics as a way to give people superpowers. Machines can help us reach farther, think faster, and live fuller lives.”

In 1993 she earned her Ph.D. in computer science, then took a position as an assistant professor of computer science at Dartmouth College, in Hanover, N.H. She founded the college’s robotics laboratory and expanded her work into distributed robotics. She developed teams of small robots that cooperated to perform tasks such as ensuring that products in warehouses are correctly gathered to fulfill orders, packaged safely, and routed efficiently to their destinations.

Despite a lack of traditional machine shop facilities for fabrication on the Hanover campus, Rus found a way. She pioneered the use of 3D printing to rapidly prototype and build robots.

In 2003 she left Dartmouth to become a professor in the electrical engineering and computer science department at MIT.

The robotics lab she created at Dartmouth moved with her to MIT and became known as the Distributed Robotics Laboratory (DRL). In 2012 she was named director of MIT’s Computer Science and Artificial Intelligence Laboratory, the school’s largest interdisciplinary lab, with 60 research groups including the DRL. She also continues to serve as the DRL’s principal investigator.

The science of physical intelligence

Rus now leads pioneering research at the intersection of AI and robotics, a field she calls physical intelligence. It’s “a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time,” she says.

Her lab builds soft-body robots inspired by nature that can sense, adapt, and learn. They are AI-driven systems that passively handle tasks—such as self-balancing and complex articulation similar to that done by the human hand—because their shape and materials minimize the need for heavy processing.

Such machines, she says, someday will be able to navigate different environments, perform useful functions without external control, and even recover from disturbances to their route planning. Researchers also are exploring ways to make them more energy-efficient.

One prototype developed by Rus’s team is designed to retrieve foreign objects from the body, including batteries swallowed by children. The ingestible robots are artfully folded, similar to origami, so they are small enough to be swallowed. Embedded magnetic materials allow doctors to steer the soft robots and control their shape. Upon arriving in the stomach, a soft bot can be programmed to wrap around a foreign object and guide it safely out of the patient’s body.

CSAIL researchers also are working on small robots that can carry a medication and release it at a specific area within the digestive tract, bypassing the stomach acid known to diminish some drugs’ efficacy. Ingestible robots also could patch up internal injuries or ulcers. And because they’re made from digestible materials such as sausage casings and biocompatible polymers, the robots can perform their assigned tasks and then get safely absorbed by the body, she says.

Health care isn’t the only application on the horizon for such AI-driven technologies. Robots with physical intelligence might someday help firefighters locate people trapped in burning buildings, find miners after a cave-in, and provide valuable situational awareness information to emergency response teams in the aftermath of natural disasters, Rus says.

“What excites me is the possibility of giving people new powers,” she says. “Machines that can think and move safely in the physical world will let us extend human reach—at work, at home, in medicine … everywhere.”

To make such a vision a reality, she has expanded her technical interests to include several complementary lines of research.

She’s working on self-reconfiguring and modular robots such as MIT’s M-Blocks and NASA’s SuperBots, which can attach, detach, and rearrange themselves to form shapes suited for different actions such as slithering, climbing, and crawling.

With networked robots—including those Amazon uses in its warehouses—thousands of machines can operate as a large adaptive system. The machines communicate continuously to divide tasks, avoid collisions, and optimize package routing.

Rus’s team also is making advances in human-robot interaction, such as reading brainwave activity and interpreting sign language through a smart glove.

To further her plan of putting all the computerized smarts the robots need within their physical bodies instead of in the cloud, she helped found Liquid AI in 2023. The company, based in Cambridge, Mass., develops liquid neural networks, inspired by the simple brains of worms, that can learn and adapt continuously. The word liquid in this case refers to the adaptability, flexibility, and dynamic nature of the team’s model architecture. It can change shape and adapt to new data inputs, and it fits within constraints imposed by the hardware in which it’s contained, she says.

Finding community in IEEE

Rus joined IEEE at one of its robotics conferences when she was a graduate student.

“I think I signed up just to get the student discount,” she says with a laugh. “But IEEE turned out to be the place where my community lived.”

She credits the organization’s conferences, journals, and collaborative spirit with shaping her professional growth.

“The exchange of ideas, the chance to test your thinking against others—it’s invaluable,” she says. “It’s how our field moves forward.”

Rus continues to serve on IEEE panels and committees, mentoring the next generation of roboticists.

“IEEE gave me a platform,” Rus says. “It taught me how to communicate, how to lead, and how to dream bigger.”

Living the American dream

Looking back, Rus sees her story as a testament to unforeseen possibilities.

“When I was growing up in Romania, I couldn’t even imagine living in America,” she says. “Now I’m here, working with brilliant students, building robots that help people, and trying to make a difference. I feel like I’m living the American dream.”

In a nod to a memorable song from the Broadway musical Hamilton, Rus echoes Alexander Hamilton’s determination to make the most of his opportunities, saying, “I don’t ever want to throw away my shot.”

Reference: https://ift.tt/UIqaBs3

When to Leave a Toxic Team




This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!

A word that frequently comes up in career conversations is, unfortunately, “toxic.” The engineers I speak with will tell me that they’re dealing with a toxic manager, a toxic teammate, or a toxic work culture. When you find yourself in a toxic work environment, what should you do?

Is it worth trying to improve things over time, or should you just leave?

The difficult truth is that, in nearly every case, the answer is to leave a toxic team as soon as you can. Here’s why:

  • If you’re earlier in your career, you frankly don’t have much political power in the organization. Any arguments to change team culture or address systemic problems will likely fall on deaf ears. You’ll end up frustrated, and your efforts will be wasted.
  • If you’re more senior, you have some ability to improve processes and relationships on the team. However, if you’re an individual contributor (IC), your capabilities are still limited. There is likely some “low-hanging fruit” of quick improvements to suggest. A few thoughtful pieces of feedback could address many of the problems. If you’ve done that and things are still not getting better, it’s probably time to leave.
  • If you’re part of upper management, you may have inherited the problem, or maybe you were even brought in to solve it. This is the rare case where you could consider the change scenario and address the broken culture: You have both the context and power to make a difference.

The world of technology is large, and constantly getting larger. Don’t waste your time on a bad team or with a bad manager. Find another team, company, or start something on your own.

Engineers often hesitate to leave a poor work environment because they’re afraid or unsure about the process of finding something new. That’s a valid concern. However, inertia should not be the reason you stick around in a job. The best careers stem from the excitement of actively choosing your work, not tolerating toxicity.

Finally, it’s worth noting that even in a toxic team, you’ll still come across smart and kind people. If you are stuck on a bad team, seek out the people who match your wavelength. These relationships will enable you to find new opportunities when you inevitably decide to leave!

—Rahul

IEEE Podcast Focuses on Women in Tech

Are you looking for a new podcast to add to your queue? IEEE Women in Engineering recently launched a podcast featuring experts from around the world to discuss workplace challenges and amplify the diverse experience of women from various STEM fields. New episodes are released on the third Wednesday of each month.

Read more here.

How to Think Like an Entrepreneur

Entrepreneurship is a skill that can benefit all engineers. The editor in chief of IEEE Engineering Management Review shares his tips for acting more like an entrepreneur, from changing your mode of thinking to executing a plan. “The shift from ‘someone should’ to ‘I will’ is the start of entrepreneurial thinking,” the author writes.

Read more here.

Cultivating Innovation in a Research Lab

In a piece for Communications of the ACM, a former employee of Xerox PARC reflects on the lessons he learned about managing a research lab. The philosophies that underpin innovative labs, the author says, require a different approach than those focused on delivering products or services. See how these unwritten rules can help cultivate breakthroughs.

Read more here.

Reference: https://ift.tt/JnaGDB5

De-Risk the Energy Transition with Hardware-in-the-Loop Testing




Learn how hardware-in-the-loop testing validates protection schemes, renewable integration, and HVDC systems before deployment. Download this introduction to real-time power system simulation.

In this white paper, you’ll learn:

  • Why phasor-domain simulation can’t capture transient phenomena in inverter-dominated grids
  • How real-time EMT simulation enables closed-loop testing with actual hardware
  • Key components of a hardware-in-the-loop testbed
  • Applications across renewable energy, HVDC systems, microgrids, and protection schemes
  • Real-world examples from multi-terminal HVDC links to traveling wave protection
  • How HIL testing reduces risk, accelerates commissioning, and validates multi-vendor interoperability

Reference: https://ift.tt/8MsZFx5

Tuesday, December 2, 2025

OpenAI CEO declares “code red” as Gemini gains 200 million users in 3 months


The shoe is most certainly on the other foot. On Monday, OpenAI CEO Sam Altman reportedly declared a “code red” at the company to improve ChatGPT, delaying advertising plans and other products in the process, The Information reported, citing a leaked internal memo. The move follows Google’s release of its Gemini 3 model last month, which has outperformed ChatGPT on some industry benchmark tests and sparked high-profile praise on social media.

In the memo, Altman wrote, “We are at a critical time for ChatGPT.” The company will push back work on advertising integration, AI agents for health and shopping, and a personal assistant feature called Pulse. Altman encouraged temporary team transfers and established daily calls for employees responsible for enhancing the chatbot.

The directive creates an odd symmetry with events from December 2022, when Google management declared its own “code red” internal emergency after ChatGPT launched and rapidly gained in popularity. At the time, Google CEO Sundar Pichai reassigned teams across the company to develop AI prototypes and products to compete with OpenAI’s chatbot. Now, three years later, the AI industry is in a very different place.


Reference: https://ift.tt/okUQqw9

Capacity Limits in 5G Prompt a 6G Focus on Infrastructure




When the head of Nokia Bell Labs core research talks about “lessons learned” from 5G, he’s doing something rare in telecom: admitting a flagship technology didn’t quite work out as planned.

That candor matters now, too, because Bell Labs core research president Peter Vetter says 6G’s success depends on getting infrastructure right the first time—something 5G didn’t fully do.

By 2030, he says, 5G will have exhausted its capacity. Not because some 5G killer app will appear tomorrow, suddenly making everyone’s phones demand 10 or 100 times as much data capacity as they require today. Rather, by the turn of the decade, wireless telecom won’t be centered around just cellphones anymore.

AI agents, autonomous cars, drones, IoT nodes, and sensors, sensors, sensors: Everything in a 6G world will potentially need a way on to the network. That means more than anything else in the remaining years before 6G’s anticipated rollout, high-capacity connections behind cell towers are a key game to win. Which brings industry scrutiny, then, to what telecom folks call backhaul—the high-capacity fiber or wireless links that pass data from cell towers toward the internet backbone. It’s the difference between the “local” connection from your phone to a nearby tower and the “trunk” connection that carries millions of signals simultaneously.

But the backhaul crisis ahead isn’t just about capacity. It’s also about architecture. 5G was designed around a world where phones dominated, downloading video at higher and higher resolutions. 6G is now shaping up to be something else entirely. This inversion—from 5G’s anticipated downlink deluge to 6G’s uplink resurgence—requires rethinking everything at the core level, practically from scratch.

Vetter’s career spans the entire arc of the wireless telecom era—from optical interconnections in the 1990s at Alcatel (a research center pioneering fiber-to-home connections) to his roles at Bell Labs and later Nokia Bell Labs, culminating in 2021 in his current position at the industry’s bellwether institution.

In this conversation, held in November at the Brooklyn 6G Summit in New York, Vetter explains what 5G got wrong, what 6G must do differently, and whether these innovations can arrive before telecom’s networks start running out of room.

5G’s Expensive Miscalculation

IEEE Spectrum: Where is telecom today, halfway between 5G’s rollout and 6G’s anticipated rollout?

Peter Vetter: Today, we have enough spectrum and capacity. But going forward, there will not be enough. The 5G network by the end of the decade will run out of steam. We have traffic simulations. And it is something that has been consistent generation to generation, from 2G to 3G to 4G. Every decade, capacity goes up by about a factor of 10. So you need to prepare for that.

And the challenge for us as researchers is how do you do that in an energy-efficient way? Because the power consumption cannot go up by a factor of 10. The cost cannot go up by a factor of 10. And then, lesson learned from 5G: The idea was, “Oh, we do that in higher spectrum. There is more bandwidth. Let’s go to millimeter wave.” The lesson learned is, okay, millimeter waves have short reach. You need a small cell [tower] every 300 meters or so. And that doesn’t cut it. It was too expensive to install all these small cells.

Is this related to the backhaul question?

Vetter: So backhaul is the connection between the base station and what we call the core of the network—the data centers, and the servers. Ideally, you use fiber to your base station. If you have that fiber as a service provider, use it. It gives you the highest capacity. But very often new cell sites don’t have that fiber backhaul, then there are alternatives: wireless backhaul.

A close-up of a Nokia radio on a glass circuit board. Nokia Bell Labs has pioneered a glass-based chip architecture for telecom’s backhaul signals, communicating between towers and telecom infrastructure. Photo: Nokia

Radios Built on Glass Push Frequencies Higher

What are the challenges ahead for wireless backhaul?

Vetter: To get up to the 100 gigabit per second, fiber-like speeds, you need to go to higher frequency bands.

Higher frequency bands for the signals the backhaul antennas use?

Vetter: Yes. The challenge is the design of the radio front ends and the radio-frequency integrated circuits (RFICs) at those frequencies. You cannot really integrate [present-day] antennas with RFICs at those high speeds.

And what happens as those signal frequencies get higher?

Vetter: So in a millimeter wave, say 28 gigahertz, you could still do [the electronics and waveguides] for this with a classical printed circuit board. But as the frequencies go up, the attenuation gets too high.

What happens when you get to, say, 100 GHz?

Vetter: [Conventional materials] are no good anymore. So we need to look at other still low-cost materials. We have done pioneering work at Bell Labs on radio on glass. And we use glass not for its optical transparency, but for its transparency in the sub-terahertz radio range.

Is Nokia Bell Labs making these radio-on-glass backhaul systems for 100 GHz communications?

Vetter: I used an order of magnitude. Above 100 GHz, you need to look into a different material. But [the frequency range] is actually 140 to 170 GHz, what is called the D-Band.

We collaborate with our internal customers to get these kind of concepts on the long-term roadmap. As an example, that D-Band radio system, we actually integrated it in a prototype with our mobile business group. And we tested it last year at the Olympics in Paris.

But this is, as I said, a prototype. We need to mature the technology between a research prototype and qualifying it to go into production. The researcher on that is Shahriar Shahramian. He’s well-known in the field for this.

Why 6G’s Bandwidth Crisis Isn’t About Phones

What will be the applications that’ll drive the big 6G demands for bandwidth?

Vetter: We’re installing more and more cameras and other types of sensors. I mean, we’re going into a world where we want to create large world models that are synchronous copies of the physical world. So what we will see going forward in 6G is a massive-scale deployment of sensors which will feed the AI models. So a lot of uplink capacity. That’s where a lot of that increase will come from.

Any others?

Vetter: Autonomous cars could be an example. It can also be in industry—like a digital twin of a harbor, and how you manage that? It can be a digital twin of a warehouse, and you query the digital twin, “Where is my product X?” Then a robot will automatically know thanks to the updated digital twin where it is in the warehouse and which route to take. Because it knows where the obstacles are in real time, thanks to that massive-scale sensing of the physical world and then the interpretation with the AI models.

You will have your agents that act on behalf of you to do your groceries, or order a driverless car. They will actively record where you are, make sure that there are also the proper privacy measures in place. So that your agent has an understanding of the state you’re in and can serve you in the most optimal way.

How 6G Networks Will Help Detect Drones, Earthquakes, and Tsunamis

You’ve described before how 6G signals can not only transmit data but also provide sensing. How will that work?

Vetter: The augmentation now is that the network can also be turned into a sensing modality. If you turn around the corner, a camera doesn’t see you anymore. But the radio can still detect people that are coming, for instance, at a traffic crossing. And you can anticipate that. Yeah, warn a car that, “There’s a pedestrian coming. Slow down.” We also have fiber sensing: using fibers at the bottom of the ocean, for instance, to detect the movement of waves, detect tsunamis, and provide early tsunami warning.

What are your teams’ findings?

Vetter: Present-day tsunami warning buoys are a few hundred kilometers offshore. These tsunami waves travel at 300 meters per second or more, and so you only have 15 minutes to warn the people and evacuate. If you have a fiber sensing network across the ocean, you can detect the wave much deeper in the ocean and do meaningful early tsunami warning.
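As a rough sanity check of that 15-minute figure (assuming a buoy roughly 270 kilometers offshore, a distance the interview does not specify):

$$ t = \frac{d}{v} \approx \frac{2.7 \times 10^{5}\,\mathrm{m}}{300\,\mathrm{m/s}} = 900\,\mathrm{s} = 15\,\mathrm{min} $$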

We recently detected there was a major earthquake in East Russia. That was last July. And we had a fiber sensing system between Hawaii and California. And we were able to see that earthquake on the fiber. And we also saw the development of the tsunami wave.

6G’s Thousands of Antennas and Smarter Waveforms

Bell Labs was an early pioneer in multiple-input, multiple-output (MIMO) antennas starting in the 1990s, in which multiple transmit and receive antennas carry many data streams at once. What is Bell Labs doing with MIMO now to help solve these bandwidth problems you’ve described?

Vetter: So, as I said earlier, you want to provide capacity from existing cell sites. And the way MIMO can do that is through a technique called beamforming: If you want better coverage at a higher frequency, you need to focus your electromagnetic energy, your radio energy, even more. So in order to do that, you need a larger number of antennas.

So if you double the frequency, we go from 3.5 gigahertz, which is the C-band in 5G, now to 6G, 7 gigahertz. So it’s about double. That means the wavelength is half. So you can fit four times more antenna elements in the same form factor. So physics helps us in that sense.
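As a quick check of that scaling, assume antenna elements spaced half a wavelength apart across a fixed two-dimensional aperture of area $A$ (a standard rule of thumb, not something stated in the interview):

$$ \lambda = \frac{c}{f}, \qquad N \approx \frac{A}{(\lambda/2)^{2}} \propto f^{2} $$

Going from 3.5 GHz to 7 GHz halves the wavelength from roughly 86 millimeters to 43 millimeters, so about four times as many elements fit in the same form factor.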

What’s the catch?

Vetter: Where physics doesn’t help us is more antenna elements means more signal processing, and the power consumption goes up. So here is where the research then comes in. Can we creatively get to these larger antenna arrays without the power consumption going up?

The use of AI is important in this. How can we leverage AI to do channel estimation, to do such things as equalization, to do smart beamforming, to learn the waveform, for instance?

We’ve shown that with these kind of AI techniques, we can get actually up to 30 percent more capacity on the same spectrum.

And that allows many gigabits per second to go out to each phone or device?

Vetter: So gigabits per second is already possible in 5G. We’ve demonstrated that. You can imagine that this could go up, but that’s not really the need. The need is really how many more can you support from a base station?

Reference: https://ift.tt/OhydfgE

Syntax hacking: Researchers discover sentence structure can bypass AI safety rules


Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with “Quickly sit Paris clouded?” (mimicking the structure of “Where is Paris located?”), models still answered “France.”

This suggests models absorb both meaning and syntactic patterns but can over-rely on structural shortcuts when those patterns strongly correlate with specific domains in the training data, sometimes allowing structure to override semantic understanding in edge cases. The team plans to present these findings at NeurIPS later this month.
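A minimal sketch of how such a probe might look, assuming any locally runnable causal language model; the model name, prompts, and decoding settings below are placeholders, not the paper’s actual setup:

```python
# Sketch of a syntax-vs-semantics probe in the spirit of the experiment
# described above. The model, prompts, and decoding settings are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

probes = {
    "semantic": "Where is Paris located?",
    # Same grammatical skeleton, nonsensical content:
    "syntax-only": "Quickly sit Paris clouded?",
}

for label, prompt in probes.items():
    out = generator(prompt, max_new_tokens=10, do_sample=False)[0]["generated_text"]
    print(f"{label}: {out!r}")

# If the continuation of the nonsense prompt still drifts toward "France",
# that hints the model is leaning on the sentence's structural template
# rather than its meaning.
```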


Reference: https://ift.tt/LnwvK8P

Monday, December 1, 2025

Why We Keep Making the Same Software Mistakes




Talking to Robert N. Charette can be pretty depressing. Charette, who has been writing about software failures for this magazine for the past 20 years, is a renowned risk analyst and systems expert who over the course of a 50-year career has seen more than his share of delusional thinking among IT professionals, government officials, and corporate executives, before, during, and after massive software failures.

In 2005’s “Why Software Fails,” in IEEE Spectrum, a seminal article documenting the causes behind large-scale software failures, Charette noted, “The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don’t see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.”

Two decades and several trillion wasted dollars later, he finds that people are making the same mistakes. They claim their project is unique, so past lessons don’t apply. They underestimate complexity. Managers come out of the gate with unrealistic budgets and timelines. Testing is inadequate or skipped entirely. Vendor promises that are too good to be true are taken at face value. Newer development approaches like DevOps or AI copilots are implemented without proper training or the organizational change necessary to make the most of them.

What’s worse, the huge impacts of these missteps on end users aren’t fully accounted for. When the Canadian government’s Phoenix paycheck system initially failed, for instance, the developers glossed over the protracted financial and emotional distress inflicted on tens of thousands of employees receiving erroneous paychecks; problems persist today, nine years later. Perhaps that’s because, as Charette told me recently, IT project managers don’t have professional licensing requirements and are rarely, if ever, held legally liable for software debacles.

While medical devices may seem a far cry from giant IT projects, they have a few things in common. As Special Projects Editor Stephen Cass uncovered in this month’s The Data, the U.S. Food and Drug Administration recalls on average 20 medical devices per month due to software issues.

“Software is as significant as electricity. We would never put up with electricity going out every other day, but we sure as hell have no problem having AWS go down.” —Robert N. Charette

Like IT projects, medical devices face fundamental challenges posed by software complexity. Which means that testing, though rigorous and regulated in the medical domain, can’t possibly cover every scenario or every line of code. The major difference between failed medical devices and failed IT projects is that a huge amount of liability attaches to the former.

“When you’re building software for medical devices, there are a lot more standards that have to be met and a lot more concern about the consequences of failure,” Charette observes. “Because when those things don’t work, there’s tort law available, which means manufacturers are on the hook. It’s much harder to bring a case and win when you’re talking about an electronic payroll system.”

Whether a software failure is hyperlocal, as when a medical device fails inside your body, or spread across an entire region, like when an airline’s ticketing system crashes, organizations need to dig into the root causes and apply those lessons to the next device or IT project if they hope to stop history from repeating itself.

“Software is as significant as electricity,” Charette says. “We would never put up with electricity going out every other day, but we sure as hell have no problem accepting AWS going down or telcos or banks going out.” He lets out a heavy sigh worthy of A.A. Milne’s Eeyore. “People just kind of shrug their shoulders.”

Reference: https://ift.tt/3Mx1KW8

IEEE President’s Note: Engineering With Purpose




Innovation, expertise, and efficiency often take center stage in the engineering world. Yet engineering’s impact lies not only in technical advancement but also in its ability to serve the greater good. This foundational principle is behind IEEE’s public imperative initiatives, which apply our efforts and expertise to support our mission to advance technology for humanity with a direct benefit to society.

Serving society

Public imperative activities and initiatives serve society by promoting understanding, impact for humans and our environment, and responsible use of science and technology. These initiatives encompass a wide range of efforts, including STEM outreach, humanitarian technology deployments, public education on emerging technologies, and sustainability. Unlike many efforts advancing technology, these initiatives are not designed with financial opportunity in mind. Instead, they fulfill IEEE’s designation as a 501(c)(3) public charity engaged in scientific and educational activities for the benefit of the engineering community and the public.

Building a Better World


Across the globe, IEEE members and volunteers dedicate their time and use their talents, experiences, and expertise to lead, organize, and drive activities to advance technology for humanity. The IEEE Social Impact report showcases a selection of recent projects and initiatives that support that mission.

In my March column, I described my vision for One IEEE, which is aimed at empowering IEEE’s diverse units to work together in ways that magnify their individual and collective impact. Within the framework of One IEEE, public imperative activities are not peripheral; they are central to unifying the organization and amplifying our global relevance. Across IEEE’s varied regions, societies, and technical communities, these activities align efforts around a shared mission. They provide our members from different disciplines and geographies the opportunity to collaborate on projects that transcend boundaries, fostering interdisciplinary innovation and global stewardship.

Such activities also offer members opportunities to apply their technical expertise in service of societal needs. Whether finding innovative solutions to connect the unconnected or developing open-source educational tools for students, we are solving real-world problems. The initiatives transform abstract technical knowledge into actionable solutions, reinforcing the idea that technology is not just about building systems—it’s about building futures.

For our young professionals and students, these activities offer hands-on experiences that connect technical skills with real-world applications, inspiring the next generation to pursue careers in engineering with purpose and passion. These activities also create mentorship opportunities, leadership pathways, and a sense of belonging within the wider IEEE community.

Principled tech leader

In an age when technology influences practically every aspect of life—from health care and energy to communication and transportation—IEEE must, as a leading technical authority, also serve as a socially responsible leader. Public imperative activities include IEEE’s commitment to ethical development, university and pre-university education, and accessible innovation. They help bridge the gap between technical communities and the public, working to ensure that engineering solutions are accessible, equitable, and aligned with societal values.

From a strategic standpoint, public imperatives also support IEEE’s long-term sustainability. The organization is redesigning its budget process to emphasize aligning financial resources with mission-driven goals. One of the guiding principles is to publicize IEEE’s public charity status and invest accordingly.

That means promoting our public imperatives in funding decisions, integrating them into operational planning, and measuring their outcomes with engineering rigor. By treating these activities as core infrastructure, IEEE ensures that its resources are deployed in ways that maximize public benefit and organizational impact.

Public imperatives are vital to the success of One IEEE. They embody the organization’s mission, unify its global membership, and demonstrate the societal relevance of engineering and technology. They offer our members the opportunity to apply their skills in meaningful ways, contribute to public good, and shape the future of technology with integrity.

Through our public imperative activities, IEEE is a force for innovation and a driver of meaningful impact.

This article appears in the December 2025 print issue as “Engineering With Purpose.”

Reference: https://ift.tt/IQBfqLj

The Next Frontier in AI Isn’t More Data




For the past decade, progress in artificial intelligence has been measured by scale: bigger models, larger datasets, and more compute. That approach delivered astonishing breakthroughs in large language models (LLMs); in just five years, AI has leapt from models like GPT-2, which could hardly mimic coherence, to systems like GPT-5 that can reason and engage in substantive dialogue. And now early prototypes of AI agents that can navigate codebases or browse the web point towards an entirely new frontier.

But size alone can only take AI so far. The next leap won’t come from bigger models alone. It will come from combining ever-better data with worlds we build for models to learn in. And the most important question becomes: What do classrooms for AI look like?

In the past few months Silicon Valley has placed its bets, with labs investing billions in constructing such classrooms, which are called reinforcement learning (RL) environments. These environments let machines experiment, fail, and improve in realistic digital spaces.

AI Training: From Data to Experience

The history of modern AI has unfolded in eras, each defined by the kind of data that the models consumed. First came the age of pretraining on internet-scale datasets. This commodity data allowed machines to mimic human language by recognizing statistical patterns. Then came data combined with reinforcement learning from human feedback—a technique that uses crowd workers to grade responses from LLMs—which made AI more useful, responsive, and aligned with human preferences.

We have experienced both eras firsthand. Working in the trenches of model data at Scale AI exposed us to what many consider the fundamental problem in AI: ensuring that the training data fueling these models is diverse, accurate, and effective in driving performance gains. Systems trained on clean, structured, expert-labeled data made leaps. Cracking the data problem allowed us to pioneer some of the most critical advancements in LLMs over the past few years.

Today, data is still a foundation. It is the raw material from which intelligence is built. But we are entering a new phase where data alone is no longer enough. To unlock the next frontier, we must pair high-quality data with environments that allow limitless interaction, continuous feedback, and learning through action. RL environments don’t replace data; they amplify what data can do by enabling models to apply knowledge, test hypotheses, and refine behaviors in realistic settings.

How an RL Environment Works

In an RL environment, the model learns through a simple loop: it observes the state of the world, takes an action, and receives a reward that indicates whether that action helped accomplish a goal. Over many iterations, the model gradually discovers strategies that lead to better outcomes. The crucial shift is that training becomes interactive—models aren’t just predicting the next token but improving through trial, error, and feedback.
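To make that loop concrete, here is a minimal sketch using the Gymnasium API with a toy environment as a stand-in for the richer coding and browsing environments discussed below; the environment choice and random policy are illustrative assumptions, not anything from the article.

```python
# Minimal observe-act-reward loop, sketched with the Gymnasium API.
# CartPole and the random policy are placeholders for the far richer
# environments and learned policies the article describes.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

episode_return = 0.0
for _ in range(500):
    action = env.action_space.sample()        # placeholder policy: act at random
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward                  # the feedback a learner would optimize
    if terminated or truncated:               # episode over: start again
        print("episode return:", episode_return)
        episode_return = 0.0
        obs, info = env.reset()

env.close()
```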

For example, language models can already generate code in a simple chat setting. Place them in a live coding environment where they can ingest context, run their code, debug errors, and refine their solution, and something changes. They shift from advising to autonomously problem-solving.

This distinction matters. In a software-driven world, the ability for AI to generate and test production-level code in vast repositories will mark a major change in capability. That leap won’t come solely from larger datasets; it will come from immersive environments where agents can experiment, stumble, and learn through iteration—much like human programmers do. The real world of development is messy: Coders have to deal with underspecified bugs, tangled codebases, vague requirements. Teaching AI to handle that mess is the only way it will ever graduate from producing error-prone attempts to generating consistent and reliable solutions.
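As a hedged illustration of that write-run-debug loop (not how any particular lab implements it), the skeleton below runs generated code, captures the error output, and feeds it back into the next prompt; ask_model is a stand-in for a real LLM call.

```python
# Sketch of an execution-feedback loop for a coding agent. ask_model is a
# placeholder for a real LLM call; the point is that runtime errors flow
# back into the next prompt.
import subprocess
import sys
import tempfile

def ask_model(prompt: str) -> str:
    # Placeholder "model": always returns the same candidate script.
    return "def add(a, b):\n    return a + b\n\nprint(add(2, 3))\n"

prompt = "Write a Python script that prints the sum of 2 and 3."
for attempt in range(3):
    code = ask_model(prompt)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    if result.returncode == 0:
        print(f"attempt {attempt} succeeded: {result.stdout.strip()}")
        break
    # Failure: append the error so the next attempt can try to correct it.
    prompt += "\nThe previous attempt failed with:\n" + result.stderr
```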

Can AI Handle the Messy Real World?

Navigating the internet is also messy. Pop-ups, login walls, broken links, and outdated information are woven throughout day-to-day browsing workflows. Humans handle these disruptions almost instinctively, but AI can only develop that capability by training in environments that simulate the web’s unpredictability. Agents must learn how to recover from errors, recognize and persist through user-interface obstacles, and complete multi-step workflows across widely used applications.

Some of the most important environments aren’t public at all. Governments and enterprises are actively building secure simulations where AI can practice high-stakes decision-making without real-world consequences. Consider disaster relief: It would be unthinkable to deploy an untested agent in a live hurricane response. But in a simulated world of ports, roads, and supply chains, an agent can fail a thousand times and gradually get better at crafting the optimal plan.

Every major leap in AI has relied on unseen infrastructure, such as annotators labeling datasets, researchers training reward models, and engineers building scaffoldings for LLMs to use tools and take action. Finding large-volume and high-quality datasets was once the bottleneck in AI, and solving that problem sparked the previous wave of progress. Today, the bottleneck is not data—it’s building RL environments that are rich, realistic, and truly useful.

The next phase of AI progress won’t be an accident of scale. It will be the result of combining strong data foundations with interactive environments that teach machines how to act, adapt, and reason across messy real-world scenarios. Coding sandboxes, OS and browser playgrounds, and secure simulations will turn prediction into competence.

Reference: https://ift.tt/UDmxBjR
