Friday, December 5, 2025

Video Friday: Biorobotics Turns Lobster Tails Into Grippers

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

EPFL scientists have integrated discarded crustacean shells into robotic devices, leveraging the strength and flexibility of natural materials for robotic applications.

[ EPFL ]

Finally, a good humanoid robot demo!

Having said that, I never trust video demos where everything works really well once, and then just pretty well every other time.

[ LimX Dynamics ]

Thanks, Jinyan!

I understand how these structures work, I really do. But watching something rigid extrude itself from a flexible reel will always seem a little magical.

[ AAAS ]

Thanks, Kyujin!

I’m not sure what “industrial grade” actually means, but I want robots to be “automotive grade,” where they’ll easily operate for six months or a year without any maintenance at all.

[ Pudu Robotics ]

Thanks, Mandy!

When you start to suspect that your robotic EV charging solution costs more than your car.

[ Flexiv ]

Yeah, uh, if the application for this humanoid is actually making robot parts with a hammer and anvil, then I’d be impressed.

[ EngineAI ]

Researchers at Columbia Engineering have designed a robot that can learn a human-like sense of neatness. The researchers taught the system by showing it millions of examples rather than giving it specific instructions. The result is a model that can look at a cluttered tabletop and rearrange scattered objects in an orderly fashion.

[ Paper ]

Why haven’t we seen this sort of thing in humanoid robotics videos yet?

[ HUCEBOT ]

While I definitely appreciate in-the-field testing, it’s also worth asking to what extent your robot is actually being challenged by the in-the-field field that you’ve chosen.

[ DEEP Robotics ]

Introducing HMND 01 Alpha Bipedal — autonomous, adaptive, designed for real-world impact. Built in 5 months, walking stably after 48 hours of training.

[ Humanoid ]

Unitree says that “this is to validate the overall reliability of the robot” but I really have to wonder how useful this kind of reliability validation actually is.

[ Unitree ]

This University of Pennsylvania GRASP on Robotics Seminar is by Jie Tan from Google DeepMind, on “Gemini Robotics: Bringing AI into the Physical World.”

Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. In this talk, I will present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Furthermore, I will discuss the challenges, learnings and future research directions on robot foundation models.

[ University of Pennsylvania GRASP Laboratory ]

Reference: https://ift.tt/PtDINhf

Thursday, December 4, 2025

Are We Testing AI’s Intelligence the Wrong Way?

When people want a clear-eyed take on the state of artificial intelligence and what it all means, they tend to turn to Melanie Mitchell, a computer scientist and a professor at the Santa Fe Institute. Her 2019 book, Artificial Intelligence: A Guide for Thinking Humans, helped define the modern conversation about what today’s AI systems can and can’t do.

Melanie Mitchell

Today at NeurIPS, the year’s biggest gathering of AI professionals, she gave a keynote titled “On the Science of ‘Alien Intelligences’: Evaluating Cognitive Capabilities in Babies, Animals, and AI.” Ahead of the talk, she spoke with IEEE Spectrum about its themes: Why today’s AI systems should be studied more like nonverbal minds, what developmental and comparative psychology can teach AI researchers, and how better experimental methods could reshape the way we measure machine cognition.

You use the phrase “alien intelligences” for both AI and biological minds like babies and animals. What do you mean by that?

Melanie Mitchell: Hopefully you noticed the quotation marks around “alien intelligences.” I’m quoting from a paper by [the neural network pioneer] Terrence Sejnowski where he talks about ChatGPT as being like a space alien that can communicate with us and seems intelligent. And then there’s another paper by the developmental psychologist Michael Frank who plays on that theme and says, we in developmental psychology study alien intelligences, namely babies. And we have some methods that we think may be helpful in analyzing AI intelligence. So that’s what I’m playing on.

When people talk about evaluating intelligence in AI, what kind of intelligence are they trying to measure? Reasoning or abstraction or world modeling or something else?

Mitchell: All of the above. People mean different things when they use the word intelligence, and intelligence itself has all these different dimensions, as you say. So, I used the term cognitive capabilities, which is a little bit more specific. I’m looking at how different cognitive capabilities are evaluated in developmental and comparative psychology and trying to apply some principles from those fields to AI.

Current Challenges in Evaluating AI Cognition

You say that the field of AI lacks good experimental protocols for evaluating cognition. What does AI evaluation look like today?

Mitchell: The typical way to evaluate an AI system is to have some set of benchmarks, run your system on those benchmark tasks, and report the accuracy. But it often turns out that even though the AI systems we have now are just killing it on benchmarks, surpassing humans on many of them, that performance doesn’t translate to performance in the real world. If an AI system aces the bar exam, that doesn’t mean it’s going to be a good lawyer in the real world. Often the machines are doing well on those particular questions but can’t generalize very well. Also, tests that are designed to assess humans make assumptions that aren’t necessarily relevant or correct for AI systems, about things like how well a system is able to memorize.

As a computer scientist, I didn’t get any training in experimental methodology. Doing experiments on AI systems has become a core part of evaluating systems, and most people who came up through computer science haven’t had that training.

What do developmental and comparative psychologists know about probing cognition that AI researchers should know too?

Mitchell: There’s all kinds of experimental methodology that you learn as a student of psychology, especially in fields like developmental and comparative psychology, because their subjects are nonverbal. You have to really think creatively to figure out ways to probe them. So they have all kinds of methodologies that involve very careful control experiments, and making lots of variations on stimuli to check for robustness. They look carefully at failure modes, why the system [being tested] might fail, since those failures can give more insight into what’s going on than successes do.
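
To make that concrete in AI-evaluation terms, here is a minimal Python sketch of a variation-based check. It is hypothetical code, not from Mitchell’s talk: model, items, and perturb stand in for a real system under test, a real benchmark, and a real stimulus-perturbation function.

    # Hypothetical sketch of a control-style robustness check: score a
    # model on each benchmark item and on controlled variations of it.
    def evaluate_with_variations(model, items, perturb, n_variants=10):
        results = []
        for prompt, answer in items:
            base_ok = model(prompt) == answer
            variant_acc = sum(
                model(perturb(prompt)) == answer for _ in range(n_variants)
            ) / n_variants
            # A large gap between base and variant accuracy suggests the
            # model is keying on surface cues (a Clever Hans effect)
            # rather than the capability the benchmark claims to measure.
            results.append((prompt, base_ok, variant_acc))
        return results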

Can you give me a concrete example of what these experimental methods look like in developmental or comparative psychology?

Mitchell: One classic example is Clever Hans. There was this horse, Clever Hans, who seemed to be able to do all kinds of arithmetic and counting and other numerical tasks. And the horse would tap out its answer with its hoof. For years, people studied it and said, “I think it’s real. It’s not a hoax.” But then a psychologist came around and said, “I’m going to think really hard about what’s going on and do some control experiments.” And his control experiments were: first, put a blindfold on the horse, and second, put a screen between the horse and the question asker. Turns out if the horse couldn’t see the question asker, it couldn’t do the task. What he found was that the horse was actually perceiving very subtle facial expression cues in the asker to know when to stop tapping. So it’s important to come up with alternative explanations for what’s going on. To be skeptical not only of other people’s research, but maybe even of your own research, your own favorite hypothesis. I don’t think that happens enough in AI.

Do you have any case studies from research on babies?

Mitchell: I have one case study where babies were claimed to have an innate moral sense. The experiment showed them videos where there was a cartoon character trying to climb up a hill. In one case there was another character that helped them go up the hill, and in the other case there was a character that pushed them down the hill. So there was the helper and the hinderer. And the babies were assessed as to which character they liked better—and they had a couple of ways of doing that—and overwhelmingly they liked the helper character better. [Editor's note: The babies were 6 to 10 months old, and assessment techniques included seeing whether the babies reached for the helper or the hinderer.]

But another research group looked very carefully at these videos and found that in all of the helper videos, the climber who was being helped was excited to get to the top of the hill and bounced up and down. And so they said, “Well, what if in the hinderer case we have the climber bounce up and down at the bottom of the hill?” And that completely turned around the results. The babies always chose the one that bounced.

Again, coming up with alternatives, even if you have your favorite hypothesis, is the way that we do science. One thing that I’m always a little shocked by in AI is that people use the word skeptic as a negative: “You’re an LLM skeptic.” But our job is to be skeptics, and that should be a compliment.

Importance of Replication in AI Studies

Both those examples illustrate the theme of looking for counter explanations. Are there other big lessons that you think AI researchers should draw from psychology?

Mitchell: Well, in science in general the idea of replicating experiments is really important, and also building on other people’s work. But that’s sadly a little bit frowned on in the AI world. If you submit a paper to NeurIPS, for example, where you replicated someone’s work and then you do some incremental thing to understand it, the reviewers will say, “This lacks novelty and it’s incremental.” That’s the kiss of death for your paper. I feel like that should be appreciated more because that’s the way that good science gets done.

Going back to measuring cognitive capabilities of AI, there’s lots of talk about how we can measure progress towards AGI. Is that a whole other batch of questions?

Mitchell: Well, the term AGI is a little bit nebulous. People define it in different ways. I think it’s hard to measure progress for something that’s not that well defined. And our conception of it keeps changing, partially in response to things that happen in AI. In the old days of AI, people would talk about human-level intelligence and robots being able to do all the physical things that humans do. But people have looked at robotics and said, “Well, okay, it’s not going to get there soon. Let’s just talk about what people call the cognitive side of intelligence,” which I don’t think is really so separable. So I am a bit of an AGI skeptic, if you will, in the best way.

Reference: https://ift.tt/WIGNku1

BYD’s Engine Flexes Between Ethanol, Gasoline, and Electricity

The world’s first mass-produced ethanol car, the Fiat 147, motored onto Brazilian roads in 1979. The vehicle crowned decades of experimentation in the country with sugar-cane (and later, corn-based and second-generation sugar-cane waste) ethanol as a homegrown fuel. When Chinese automaker BYD introduced a plug-in hybrid designed for Brazil in October, equipped with a flex-fuel engine that lets drivers choose to run on any ratio of gasoline and ethanol or access plug-in electric power, the move felt like the latest chapter in a long national story.

The new engine, designed for the company’s best-selling compact SUV, the Song Pro, is the first plug-in hybrid engine dedicated to biofuel, according to Wang Chuanfu, BYD’s founder and CEO.

Margaret Wooldridge, a professor of mechanical engineering at the University of Michigan in Ann Arbor, says the engine’s promise is not in inventing entirely new technology, but in making it accessible.

“The technology existed before,” says Wooldridge, who specializes in hybrid systems, “but fuel switching is expensive, and I’d expect the combinations in this engine to come at a fairly high price tag. BYD’s real innovation is pulling it into a price range where everyday drivers in Brazil can actually choose ratios of ethanol and gasoline, as well as electric.”

BYD’s Affordable Hybrid Innovation

BYD Song Pro vehicles with this new engine were initially priced in a promotion at around US $25,048, with a list price around $35,000. For comparison, another plug-in hybrid vehicle, Toyota’s 2026 Prius Prime, starts at $33,775. The engine is the product of an $18.5 million investment by BYD and a collaboration between Brazilian and Chinese scientists. It adds to a history of ethanol use in Brazil that began in the 1930s and progressed from ethanol-only to flex-fuel vehicles, giving consumers a toolkit to respond to changing fuel prices, droughts like the one Brazil experienced in the 1980s, or emissions goals.

An engine switching between gasoline and ethanol needs a sensor that can reconcile two distinct fuel-air mixtures. “Integrating that control system, especially in a hybrid architecture, is not trivial,” says Wooldridge. “But BYD appears to have engineered it in a way that’s cost-effective.”

By using a downsized hybrid engine, the company can likely optimize the engine over a smaller speedmap—a narrower, specific range of speeds and power output—avoiding some of the efficiency compromises that have long plagued flex-fuel powertrains, says Wooldridge.

In general, standard flex-fuel vehicles (FFVs) have an internal combustion engine and can operate on gasoline and any blend of gasoline and ethanol up to 83 percent, according to the U.S. Department of Energy. FFV engines only have one fuel system, and mostly use components that are the same as those found in gasoline-only cars. To compensate for ethanol’s different chemical properties and power output compared to gasoline, special components modify the fuel pump and fuel injection system. In addition, FFV engines have engine control modules calibrated to accommodate ethanol’s higher oxygen content.

“Flex-fuel gives consumers flexibility,” Wooldridge says. “If you’re using ethanol, you can run at a higher compression ratio, squeezing the fuel-air mixture into a smaller space for faster, more powerful, and more efficient combustion. Increasing that ratio boosts efficiency, and ethanol’s higher octane keeps knock in check. And if you’re also tying in electric drive, the system can stay optimally efficient across different modes,” she adds.
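
For a rough sense of what the higher compression ratio buys, the ideal Otto-cycle efficiency is 1 − r^(1−γ), where r is the compression ratio and γ is the heat-capacity ratio of the working gas. The Python sketch below is a back-of-envelope illustration under assumed, typical ratios (about 10 for gasoline, about 12 for ethanol); these are not BYD’s figures, and real engines fall well short of the ideal cycle.

    # Back-of-envelope ideal Otto-cycle efficiency: eta = 1 - r**(1 - gamma).
    # The compression ratios below are typical assumptions, not BYD figures;
    # ethanol's higher octane is what tolerates the extra compression.
    GAMMA = 1.4  # heat-capacity ratio of air (cold air-standard assumption)

    def otto_efficiency(r: float) -> float:
        return 1.0 - r ** (1.0 - GAMMA)

    for fuel, r in [("gasoline, r = 10", 10.0), ("ethanol, r = 12", 12.0)]:
        print(f"{fuel}: ideal efficiency = {otto_efficiency(r):.1%}")
    # gasoline, r = 10: ideal efficiency = 60.2%
    # ethanol, r = 12: ideal efficiency = 63.0%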

Jennifer Eaglin, a historian of Brazilian energy at Ohio State University in Columbus, says that BYD is tapping into something deeply rooted in the culture of Brazil, the world’s seventh-most populous country, home to more than 210 million people.

“Brazil has built an ethanol-fuel system that’s durable and widespread,” Eaglin says. “It’s no surprise that a company like BYD, recognizing that infrastructure, would innovate to give consumers more options. This isn’t futuristic—it’s a continuation of a long national experiment.”

Reference: https://ift.tt/jdTQEcF

In comedy of errors, men accused of wiping gov databases turned to an AI tool

Two sibling contractors convicted a decade ago of hacking into the US State Department have once again been charged, this time for a comically ham-fisted attempt to steal and destroy government records just minutes after being fired from their contractor jobs.

The Department of Justice on Thursday said that Muneeb Akhter and Sohaib Akhter, both 34, of Alexandria, Virginia, deleted databases and documents belonging to three government agencies. The brothers were federal contractors working for an undisclosed company in Washington, DC, that provides software and services to 45 US agencies. Prosecutors said the men coordinated the crimes and began carrying them out just minutes after being fired.

Using AI to cover up an alleged crime—what could go wrong?

On February 18 at roughly 4:55 pm, the men were fired from the company, according to an indictment unsealed on Thursday. Five minutes later, they allegedly began trying to get into their employer’s system and the federal government databases it hosted. By then, access to one of the brothers’ accounts had already been terminated. The other brother, however, allegedly accessed a government agency’s database stored on the employer’s server and issued commands to prevent other users from connecting or making changes to the database. Then, prosecutors said, he issued a command to delete 96 databases, many of which contained sensitive investigative files and records related to Freedom of Information Act matters.


Reference: https://ift.tt/iOyW3Rs

Wednesday, December 3, 2025

Maximum-severity vulnerability threatens 6% of all websites

Security defenders are girding themselves in response to a maximum-severity vulnerability disclosed Wednesday in React Server, an open source package that’s widely used by websites and in cloud environments. The vulnerability is easy to exploit and allows hackers to execute malicious code on servers that run the package.

React is embedded in web apps running on servers so that remote devices can render JavaScript and content more quickly and with fewer resources. React is used by an estimated 6 percent of all websites and 39 percent of cloud environments. When end users reload a page, React allows servers to re-render only the parts that have changed, a feature that drastically speeds up performance and lowers the computing resources required by the server.

A perfect 10

Security firm Wiz said exploitation requires only a single HTTP request and showed “near-100% reliability” in its testing. Multiple software frameworks and libraries embed React implementations by default. As a result, even when apps don’t explicitly make use of React functionality, they can still be vulnerable, because the integration layer invokes the buggy code.


Reference: https://ift.tt/mPMR8Sb

MIT’s AI Robotics Lab Director Is Building People-Centered Robots

Daniela Rus has spent her career breaking barriers—scientific, social, and material—in her quest to build machines that amplify rather than replace human capability. She made robotics her life’s work, she says, because she understood it was a way to expand the possibilities of computing while enhancing human capabilities.

“I like to think of robotics as a way to give people superpowers,” Rus says. “Machines can help us reach farther, think faster, and live fuller lives.”

Daniela Rus


Employer: MIT

Job title: Professor of electrical and computer engineering and computer science; director of the MIT Computer Science and Artificial Intelligence Laboratory

Member grade: Fellow

Alma maters: University of Iowa, in Iowa City; Cornell

Her dual missions, she says, are to make technology humane and to make the most of the opportunities afforded by life in the United States. The two goals have fueled her journey from a childhood living under a dictatorship in Romania to the forefront of global robotics research.

Rus, who is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is the recipient of this year’s IEEE Edison Medal, which recognizes her for “sustained leadership and pioneering contributions in modern robotics.”

An IEEE Fellow, she describes the recognition as a responsibility to further her work and mentor the next generation of roboticists entering the field.

The Edison Medal is the latest in a string of honors she has received. In 2017 she won an Engelberger Robotics Award from the Robotic Industries Association. The following year, she was honored with the Pioneer in Robotics and Automation Award by the IEEE Robotics and Automation Society. The society recognized her again in 2023 with its IEEE Robotics and Automation Technical Field Award.

From Romania to Iowa

Rus was born in Cluj-Napoca, Romania, during the rule of dictator Nicolae Ceausescu. Her early life unfolded in a world defined by scarcity—rationed food, intermittent electricity, and a limited ability to move up or out. But she recalls that, amid the stifling insufficiencies, she was surrounded by an irrepressible warmth and intellectual curiosity—even when she was making locomotive screws in a state-run factory as part of her school’s curriculum.

“Life was hard,” she says, “but we had great teachers and strong communities. As a child, you adapt to whatever is around you.”

Her father, Teodor, was a computer scientist and professor, and her mother, Elena, was a physicist.

In 1982, when Rus was 19, her father emigrated to the United States to join the faculty at the University of Iowa, in Iowa City. It was an act of courage and conviction. Within a year, Daniela and her mother joined him there.

“He wanted the freedom to think, to publish, to explore ideas,” Rus says. “And I reaped the benefits of being free from the limitations of our homeland.”

America’s open horizons were intoxicating, she says.

A lecture that changed everything

Rus decided to pursue a degree at her father’s university, where her life changed direction, she says. One afternoon, John Hopcroft—a Turing Award–winning Cornell computer scientist renowned for his work on algorithms and data structures—gave a talk on campus. His message was simple but electrifying, Rus says: Classical computer science had been solved. The next frontier, Hopcroft declared, was computations that interact with the messy physical world.

For Rus, the idea was a revelation.

“It was as if a door had opened,” she says. “I realized the future of computing wasn’t just about logic and code; it was about how machines can perceive, move, and help us in the real world.”

After the lecture, she introduced herself to Hopcroft and told him she wanted to learn from him. Not long after earning her bachelor’s degree in computer science and mathematics in 1985, she began graduate study at Cornell, where Hopcroft became her advisor. Rus developed algorithms there for dexterous robotic manipulation—teaching machines to grasp and move objects with precision. She earned her master’s in computer science in 1990, then stayed on at Cornell to work toward a doctorate.

“I like to think of robotics as a way to give people superpowers. Machines can help us reach farther, think faster, and live fuller lives.”

In 1993 she earned her Ph.D. in computer science, then took a position as an assistant professor of computer science at Dartmouth College, in Hanover, N.H. She founded the college’s robotics laboratory and expanded her work into distributed robotics, developing teams of small robots that cooperated on warehouse tasks such as gathering products to fulfill orders, packaging them safely, and routing them efficiently to their destinations.

Despite a lack of traditional machine shop facilities for fabrication on the Hanover campus, Rus found a way. She pioneered the use of 3D printing to rapidly prototype and build robots.

In 2003 she left Dartmouth to become a professor in the electrical engineering and computer science department at MIT.

The robotics lab she created at Dartmouth moved with her to MIT and became known as the Distributed Robotics Laboratory (DRL). In 2012 she was named director of MIT’s Computer Science and Artificial Intelligence Laboratory, the school’s largest interdisciplinary lab, with 60 research groups including the DRL. She also continues to serve as the DRL’s principal investigator.

The science of physical intelligence

Rus now leads pioneering research at the intersection of AI and robotics, a field she calls physical intelligence. It’s “a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time,” she says.

Her lab builds soft-body robots inspired by nature that can sense, adapt, and learn. They are AI-driven systems that passively handle tasks—such as self-balancing and complex articulation similar to that done by the human hand—because their shape and materials minimize the need for heavy processing.

Such machines, she says, someday will be able to navigate different environments, perform useful functions without external control, and even recover from disturbances to their route planning. Researchers also are exploring ways to make them more energy-efficient.

One prototype developed by Rus’s team is designed to retrieve foreign objects from the body, including batteries swallowed by children. The ingestible robots are artfully folded, similar to origami, so they are small enough to be swallowed. Embedded magnetic materials allow doctors to steer the soft robots and control their shape. Upon arriving in the stomach, a soft bot can be programmed to wrap around a foreign object and guide it safely out of the patient’s body.

CSAIL researchers also are working on small robots that can carry a medication and release it at a specific area within the digestive tract, bypassing the stomach acid known to diminish some drugs’ efficacy. Ingestible robots also could patch up internal injuries or ulcers. And because they’re made from digestible materials such as sausage casings and biocompatible polymers, the robots can perform their assigned tasks and then get safely absorbed by the body, she says.

Health care isn’t the only application on the horizon for such AI-driven technologies. Robots with physical intelligence might someday help firefighters locate people trapped in burning buildings, find miners after a cave-in, and provide valuable situational awareness information to emergency response teams in the aftermath of natural disasters, Rus says.

“What excites me is the possibility of giving people new powers,” she says. “Machines that can think and move safely in the physical world will let us extend human reach—at work, at home, in medicine … everywhere.”

To make such a vision a reality, she has expanded her technical interests to include several complementary lines of research.

She’s working on self-reconfiguring and modular robots such as MIT’s M-Blocks and NASA’s SuperBots, which can attach, detach, and rearrange themselves to form shapes suited for different actions such as slithering, climbing, and crawling.

With networked robots—including those Amazon uses in its warehouses—thousands of machines can operate as a large adaptive system. The machines communicate continuously to divide tasks, avoid collisions, and optimize package routing.

Rus’s team also is making advances in human-robot interaction, such as reading brainwave activity and interpreting sign language through a smart glove.

To further her plan of putting all the computerized smarts the robots need within their physical bodies instead of in the cloud, she helped found Liquid AI in 2023. The company, based in Cambridge, Mass., develops liquid neural networks, inspired by the simple brains of worms, that can learn and adapt continuously. The word liquid in this case refers to the adaptability, flexibility, and dynamic nature of the team’s model architecture. It can change shape and adapt to new data inputs, and it fits within constraints imposed by the hardware in which it’s contained, she says.
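
For a sense of the mechanism, here is a minimal Python sketch of the liquid time-constant (LTC) idea, following the published formulation of Hasani and colleagues rather than Liquid AI’s production models; every size, weight, and name below is an illustrative assumption. The key property is that each neuron’s effective time constant depends on its input, so the dynamics adapt as the data stream changes.

    import numpy as np

    # Minimal liquid time-constant (LTC) cell, after Hasani et al. (2021).
    # A sketch of the idea only, not Liquid AI's production architecture;
    # all sizes and randomly drawn weights are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_in, n_hidden = 4, 8
    W_in = rng.normal(size=(n_hidden, n_in))
    W_rec = rng.normal(size=(n_hidden, n_hidden))
    b = np.zeros(n_hidden)
    tau = np.ones(n_hidden)        # base time constants
    A = rng.normal(size=n_hidden)  # per-neuron target states

    def ltc_step(x, u, dt=0.05):
        # Input-dependent gate in (0, 1). Because it also scales the decay
        # term in the denominator, the effective time constant changes
        # with the input, which is what makes the dynamics "liquid."
        f = 1.0 / (1.0 + np.exp(-(W_in @ u + W_rec @ x + b)))
        return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

    x = np.zeros(n_hidden)
    for _ in range(100):
        u = rng.normal(size=n_in)  # stand-in for a streaming sensor input
        x = ltc_step(x, u)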

Finding community in IEEE

Rus joined IEEE at one of its robotics conferences when she was a graduate student.

“I think I signed up just to get the student discount,” she says with a laugh. “But IEEE turned out to be the place where my community lived.”

She credits the organization’s conferences, journals, and collaborative spirit with shaping her professional growth.

“The exchange of ideas, the chance to test your thinking against others—it’s invaluable,” she says. “It’s how our field moves forward.”

Rus continues to serve on IEEE panels and committees, mentoring the next generation of roboticists.

“IEEE gave me a platform,” Rus says. “It taught me how to communicate, how to lead, and how to dream bigger.”

Living the American dream

Looking back, Rus sees her story as a testament to unforeseen possibilities.

“When I was growing up in Romania, I couldn’t even imagine living in America,” she says. “Now I’m here, working with brilliant students, building robots that help people, and trying to make a difference. I feel like I’m living the American dream.”

In a nod to a memorable song from the Broadway musical Hamilton, Rus echoes Alexander Hamilton’s determination to make the most of his opportunities, saying, “I don’t ever want to throw away my shot.”

Reference: https://ift.tt/UIqaBs3

When to Leave a Toxic Team

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!

A word that frequently comes up in career conversations is, unfortunately, “toxic.” The engineers I speak with will tell me that they’re dealing with a toxic manager, a toxic teammate, or a toxic work culture. When you find yourself in a toxic work environment, what should you do?

Is it worth trying to improve things over time, or should you just leave?

The difficult truth is that, in nearly every case, the answer is to leave a toxic team as soon as you can. Here’s why:

  • If you’re earlier in your career, you frankly don’t have much political power in the organization. Any arguments to change team culture or address systemic problems will likely fall on deaf ears. You’ll end up frustrated, and your efforts will be wasted.
  • If you’re more senior, you have some ability to improve processes and relationships on the team. However, if you’re an individual contributor (IC), your capabilities are still limited. There is likely some “low-hanging fruit” of quick improvements to suggest. A few thoughtful pieces of feedback could address many of the problems. If you’ve done that and things are still not getting better, it’s probably time to leave.
  • If you’re part of upper management, you may have inherited the problem, or maybe you were even brought in to solve it. This is the rare case where you could consider the change scenario and address the broken culture: You have both the context and power to make a difference.

The world of technology is large, and constantly getting larger. Don’t waste your time on a bad team or with a bad manager. Find another team or another company, or start something on your own.

Engineers often hesitate to leave a poor work environment because they’re afraid or unsure about the process of finding something new. That’s a valid concern. However, inertia should not be the reason you stick around in a job. The best careers stem from the excitement of actively choosing your work, not tolerating toxicity.

Finally, it’s worth noting that even in a toxic team, you’ll still come across smart and kind people. If you are stuck on a bad team, seek out the people who match your wavelength. These relationships will enable you to find new opportunities when you inevitably decide to leave!

—Rahul

IEEE Podcast Focuses on Women in Tech

Are you looking for a new podcast to add to your queue? IEEE Women in Engineering recently launched a podcast featuring experts from around the world who discuss workplace challenges and amplify the diverse experiences of women in various STEM fields. New episodes are released on the third Wednesday of each month.

Read more here.

How to Think Like an Entrepreneur

Entrepreneurship is a skill that can benefit all engineers. The editor in chief of IEEE Engineering Management Review shares his tips for acting more like an entrepreneur, from changing your mode of thinking to executing a plan. “The shift from ‘someone should’ to ‘I will’ is the start of entrepreneurial thinking,” the author writes.

Read more here.

Cultivating Innovation in a Research Lab

In a piece for Communications of the ACM, a former employee of Xerox PARC reflects on the lessons he learned about managing a research lab. The philosophies that underpin innovative labs, the author says, require a different approach than those focused on delivering products or services. See how these unwritten rules can help cultivate breakthroughs.

Read more here.

Reference: https://ift.tt/JnaGDB5
