Monday, February 9, 2026

New Devices Might Scale the Memory Wall




The hunt is on for anything that can surmount AI’s perennial memory wall: even quick models are bogged down by the time and energy needed to move data between processor and memory. Resistive RAM (RRAM) could circumvent the wall by allowing computation to happen in the memory itself. Unfortunately, most types of this nonvolatile memory are too unstable and unwieldy for that purpose.

Fortunately, a potential solution may be at hand. At December’s IEEE International Electron Devices Meeting (IEDM), researchers from the University of California, San Diego showed they could run a learning algorithm on an entirely new type of RRAM.

“We actually redesigned RRAM, completely rethinking the way it switches,” says Duygu Kuzum, an electrical engineer at the University of California, San Diego, who led the work.

RRAM stores data as a level of resistance to the flow of current. The key digital operation in a neural network—multiplying arrays of numbers and then summing the results—can be done in analog simply by running current through an array of RRAM cells, connecting their outputs, and measuring the resulting current.
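In circuit terms, each cell’s conductance acts as one weight and the applied voltages as the input vector; by Ohm’s and Kirchhoff’s laws, the currents summed on each output line form a multiply-accumulate. Here is a minimal NumPy sketch of the idea, with made-up conductance values rather than anything from the San Diego design:

```python
import numpy as np

# Each RRAM cell's conductance G = 1/R encodes one weight.
# Illustrative values only: a 3 x 4 crossbar with megaohm-range resistances.
G = np.array([
    [1.0, 2.0, 0.5, 1.5],
    [0.8, 1.2, 2.2, 0.3],
    [1.7, 0.4, 1.1, 0.9],
]) * 1e-6  # conductances in siemens (1 microsiemens = 1 megaohm)

V = np.array([0.2, 0.1, 0.3])  # input voltages applied to the rows, in volts

# Ohm's law gives each cell's current (I = G * V); Kirchhoff's current law
# sums the currents on each shared output column. Together, that is a
# matrix-vector multiply performed in the memory itself.
I = V @ G  # output currents per column, in amperes
print(I)
```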

Traditionally, RRAM stores data by creating low-resistance filaments within the higher-resistance surroundings of a dielectric material. Forming these filaments often requires voltages too high for standard CMOS, hindering integration with processors. Worse, forming the filaments is a noisy, random process, not ideal for storing data. (Imagine a neural network’s weights randomly drifting. Answers to the same question would change from one day to the next.)
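To see why that matters, consider a toy model in which each stored weight drifts by a random amount between sessions; with enough drift, the same input can produce a different answer. This is a hypothetical illustration, not a simulation of any real device:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.45, -0.70, 0.20])  # the "stored" network weights
x = np.array([1.0, 1.0, 1.0])            # the same input, every day

for day in range(5):
    drift = rng.normal(scale=0.3, size=weights.shape)  # random cell drift
    score = (weights + drift) @ x
    # With enough drift, the predicted class can flip from day to day.
    print(f"day {day}: score {score:+.2f} -> class {'A' if score > 0 else 'B'}")
```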

Moreover, the noisy nature of filament-based RRAM cells means each must be isolated from its surrounding circuitry, usually with a selector transistor, which makes 3D stacking difficult.

Limitations like these mean that traditional RRAM isn’t great for computing. In particular, Kuzum says, it’s difficult to use filamentary RRAM for the sort of parallel matrix operations that are crucial for today’s neural networks.

So, the San Diego researchers decided to dispense with the filaments entirely. Instead they developed devices that switch an entire layer from high to low resistance and back again. This format, called “bulk RRAM,” eliminates both the troublesome high-voltage filament-forming step and the geometry-limiting selector transistor.

3D memory for machine learning

The San Diego group wasn’t the first to build bulk RRAM devices, but it made breakthroughs both in shrinking them and forming 3D circuits with them. Kuzum and her colleagues shrank RRAM into the nanoscale; their device was just 40 nm across. They also managed to stack bulk RRAM into as many as eight layers.

Using single pulses of identical voltage, the researchers could program each cell in an eight-layer stack to any of 64 resistance values, a number that’s very difficult to achieve with traditional filamentary RRAM. And whereas the resistance of most filament-based cells is limited to kiloohms, the San Diego stack operates in the megaohm range, which Kuzum says is better for parallel operations.

“We can actually tune it to anywhere we want, but we think that from an integration and system-level simulations perspective, megaohm is the desirable range,” Kuzum says.

These two benefits—a greater number of resistance levels and a higher resistance—could allow the bulk RRAM stack to perform more complex operations than traditional RRAM can manage.
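One common reason higher resistance helps: by Ohm’s law, a megaohm cell draws about a thousandth the current of a kiloohm cell at the same read voltage, so many more cells can sum their outputs onto one line without swamping it. A back-of-the-envelope comparison, with assumed, purely illustrative numbers:

```python
V_READ = 0.2  # read voltage in volts (an assumed, typical-scale value)

for label, resistance_ohms in [("filamentary cell, 10 kilohms", 10e3),
                               ("bulk cell, 1 megohm", 1e6)]:
    per_cell = V_READ / resistance_ohms   # Ohm's law: I = V / R
    per_line = 1000 * per_cell            # 1,000 cells summing onto one line
    print(f"{label}: {per_cell * 1e6:7.2f} uA per cell, "
          f"{per_line * 1e3:5.2f} mA per 1,000-cell line")
```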

Kuzum and colleagues assembled multiple eight-layer stacks into a 1-kilobyte array that required no selectors. Then they tested the array with a continual learning algorithm: making the chip classify data from wearable sensors—for example, reading data from a waist-mounted smartphone to determine whether its wearer was sitting, walking, climbing stairs, or taking another action—while constantly adding new data. Tests showed an accuracy of 90 percent, which the researchers say is comparable to the performance of a digitally implemented neural network.
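The task is a classic human-activity-recognition problem. For intuition, here is a rough software analogue using scikit-learn’s streaming partial_fit on synthetic data; the actual experiment ran its continual learning algorithm on the RRAM array itself, not in software like this:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
ACTIVITIES = ["sitting", "walking", "climbing stairs"]

def sensor_batch(n=64):
    """Synthetic stand-in for accelerometer features from a waist-mounted phone."""
    labels = rng.integers(len(ACTIVITIES), size=n)
    features = rng.normal(size=(n, 6)) + labels[:, None]  # crude class separation
    return features, labels

# partial_fit keeps updating the model as new data streams in, loosely
# analogous to the continual learning test on the RRAM chip.
clf = SGDClassifier(loss="log_loss")
for _ in range(50):
    X, y = sensor_batch()
    clf.partial_fit(X, y, classes=np.arange(len(ACTIVITIES)))

X_test, y_test = sensor_batch(256)
print(f"test accuracy: {clf.score(X_test, y_test):.0%}")
```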



This test exemplifies what Kuzum thinks can especially benefit from bulk RRAM: neural network models on edge devices, which may need to learn from their environment without accessing the cloud.

“We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications,” Kuzum says.

The ability to integrate RRAM into an array like this is a significant advance, says Albert Talin, a materials scientist at Sandia National Laboratories in Livermore, California, and a bulk RRAM researcher who wasn’t involved in the San Diego group’s work. “I think that any step in terms of integration is very useful,” he says.

But Talin highlights a potential obstacle: retaining data for extended periods. While the San Diego group showed their RRAM could retain data at room temperature for several years (on par with flash memory), Talin says its retention at the higher temperatures at which computers actually operate is less certain. “That’s one of the major challenges of this technology,” he says, especially when it comes to edge applications.

If engineers can prove out the technology, all types of models may benefit. The memory wall has only grown higher this decade, as traditional memory hasn’t kept up with the ballooning demands of large models. Anything that allows models to operate in the memory itself could be a welcome shortcut.

Reference: https://ift.tt/UM1NOkj

Saturday, February 7, 2026

Low-Vision Programmers Can Now Design 3D Models Independently




Most 3D design software requires visual dragging and rotating—posing a challenge for blind and low-vision users. As a result, a range of hardware design, robotics, coding, and engineering work is inaccessible to interested programmers. A visually impaired programmer might write great code. But because of the lack of accessible modeling software, the coder can’t model, design, and verify the physical and virtual components of their system.

However, new 3D modeling tools are beginning to change this equation. A new prototype program called A11yShape aims to close the gap. There are already code-based tools that let users describe 3D models in text, such as the popular OpenSCAD software. Other recent large-language-model tools generate 3D code from natural-language prompts. But even with these, blind and low-vision programmers still depend on sighted feedback to bridge the gap between their code and its visual output.

Blind and low-vision programmers previously had to rely on a sighted person to visually check each update of a model and describe what changed. With A11yShape, however, they can independently create, inspect, and refine 3D models without sighted assistance.

A11yShape does this by generating accessible model descriptions, organizing the model into a semantic hierarchy, and ensuring every step works with screen readers.
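The semantic hierarchy is, in effect, a tree of named model parts, each carrying a description a screen reader can speak. A hypothetical sketch of such a structure (my own simplification, not A11yShape’s actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class ModelNode:
    """One component of a 3D model, with a screen-reader-friendly description."""
    name: str
    description: str
    children: list["ModelNode"] = field(default_factory=list)

    def read_aloud(self, depth=0):
        # Indentation conveys nesting; a screen reader would speak each line.
        print("  " * depth + f"{self.name}: {self.description}")
        for child in self.children:
            child.read_aloud(depth + 1)

mug = ModelNode("mug", "a cylindrical mug with a handle", [
    ModelNode("body", "cylinder, 80 mm tall, 40 mm radius"),
    ModelNode("handle", "torus attached to the side of the body"),
])
mug.read_aloud()
```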

The project began when Liang He, assistant professor of computer science at the University of Texas at Dallas, spoke with his low-vision classmate who was studying 3D modeling. He saw an opportunity to turn his classmate’s coding strategies, learned in a 3D modeling for blind programmers course at the University of Washington, into a streamlined tool.

“I want to design something useful and practical for the group,” he says. “Not just something I created from my imagination and applied to the group.”

Re-imagining Assistive 3D Design With OpenSCAD

A11yShape assumes the user is running OpenSCAD, the script-based 3D modeling editor. The program builds on OpenSCAD, connecting each component of the modeling workflow across three panels in its interface.

OpenSCAD allows users to create models entirely through typing, eliminating the clicking and dragging that make other common, graphics-based user interfaces difficult for blind programmers to navigate.
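As a concrete illustration, an entire part can be specified in a few lines of OpenSCAD source. The sketch below builds such a script as text from Python; the snippet uses standard OpenSCAD primitives, and the example part is mine, not one from the paper:

```python
# A tiny OpenSCAD script, built as text: a plate with a centered peg.
scad_source = """
union() {
    cube([30, 30, 4], center = true);          // base plate
    translate([0, 0, 2])
        cylinder(h = 10, r = 3, $fn = 64);     // peg on top
}
"""

with open("peg_plate.scad", "w") as f:
    f.write(scad_source)

print("Wrote peg_plate.scad; open it in OpenSCAD to render the model.")
```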

A11yShape introduces an AI Assistance Panel, where users can submit real-time queries to ChatGPT-4o to validate design decisions and debug existing OpenSCAD scripts.
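In essence, such a panel sends the current script plus a question to a chat model and reads back the answer. Here is a rough sketch of that round trip with OpenAI’s Python client; it is my approximation of the workflow, not A11yShape’s actual code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

scad_code = "cylinder(h = 10, r = 3);"
question = "Is this cylinder 10 units tall, and which axis does it stand on?"

response = client.chat.completions.create(
    model="gpt-4o",  # the panel described in the article queries ChatGPT-4o
    messages=[
        {"role": "system",
         "content": "You describe OpenSCAD models for blind programmers, "
                    "concisely and without purely visual language."},
        {"role": "user", "content": f"{question}\n\nCode:\n{scad_code}"},
    ],
)
print(response.choices[0].message.content)
```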

A11yShape’s web interface pairs a code editor, an AI assistance panel, and a model panel showing the model’s hierarchical structure and rendering. The three panels synchronize code, AI descriptions, and model structure so blind programmers can independently discover how code changes affect designs. Credit: Anhong Guo, Liang He, et al.

If a user selects a piece of code or a model component, A11yShape highlights the matching part across all three panels and updates the description, so blind and low-vision users always know what they’re working on.

User Feedback Improved Accessible Interface

The research team recruited four participants with a range of visual impairments and programming backgrounds. The team asked the participants to design models using A11yShape and observed their workflows.

One participant, who had never modeled before, said the tool “provided [the blind and low-vision community] with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”

Participants also reported that long text descriptions still make it hard to grasp complex shapes, and several said that without eventually touching a physical model or using a tactile display, it was difficult to fully “see” the design in their mind.

To evaluate the accuracy of the AI-generated descriptions, the research team recruited 15 sighted participants. On a 1-to-5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity, and avoiding hallucinations, suggesting the AI is reliable enough for everyday use.


A failed all-at-once attempt to construct a 3D helicopter shows incorrect shapes and placement of elements. In contrast, when the workflow allows each individual element to be completed before moving forward, results improve significantly. A11yShape assists blind and low-vision programmers in verifying the design of their models. Source: Anhong Guo, Liang He, et al.

The feedback will help to inform future iterations—which He says could integrate tactile displays, real-time 3D printing, and more concise AI-generated audio descriptions.

Beyond its applications in the professional computer programming community, He noted that A11yShape also lowers the barrier to entry for blind and low-vision students learning to program.

“People like being able to express themselves in creative ways. . . using technology such as 3D printing to make things for utility or entertainment,” says Stephanie Ludi, director of DiscoverABILITY Lab and professor in the department of computer science and engineering at the University of North Texas. “Persons who are blind and visually impaired share that interest, with A11yShape serving as a model to support accessibility in the maker community.”

The team presented A11yShape in October at the ASSETS conference in Denver.

Reference: https://ift.tt/mQNz6Yv

Friday, February 6, 2026

Sixteen Claude AI agents working together created a new C compiler


Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.


Reference: https://ift.tt/LPz2D4K

Malicious packages for dYdX cryptocurrency exchange empty user wallets


Open source packages published on the npm and PyPI repositories were laced with code that stole wallet credentials from dYdX developers and backend systems and, in some cases, backdoored devices, researchers said.

“Every application using the compromised npm versions is at risk…,” the researchers, from security firm Socket, said Friday. “Direct impact includes complete wallet compromise and irreversible cryptocurrency theft. The attack scope includes all applications depending on the compromised versions and both developers testing with real credentials and production end-users.”

Socket’s report lists the infected packages.


Reference: https://ift.tt/s67awox

IEEE Online Mini-MBA Aims to Fill Leadership Skills Gaps in AI




Boardroom priorities are shifting from financial metrics toward technical oversight. Although market share and operational efficiency remain business bedrocks, executives also must now manage the complexities of machine learning, the integrity of their data systems, and the risks of algorithmic bias.

The change represents more than just a tech update; it marks a fundamental redefinition of the skills required for business leadership.

Research from the McKinsey Global Institute on the economic impact of artificial intelligence shows that companies integrating it effectively have boosted profit margins by up to 15 percent. Yet the same study revealed a sobering reality: 87 percent of organizations acknowledge significant AI skill gaps in their leadership ranks.

That disconnect between AI’s business potential and executive readiness has created a need for a new type of professional education.

The leadership skills gap in the AI era

Traditional business education, with its focus on finance, marketing, and operations, wasn’t designed for an AI-driven economy. Today’s leaders need to understand not just what AI can do but also how to evaluate investments in the technology, manage algorithmic risks, and lead teams through digital transformations.

The challenges extend beyond the executive suite. Middle managers, project leaders, and department heads across industries are discovering that AI fluency has become essential for career advancement. In 2020 the World Economic Forum predicted that 50 percent of all employees would need reskilling by 2025, with AI-related competencies topping the list of required skills.

IEEE | Rutgers Online Mini-MBA: Artificial Intelligence

Recognizing the skills gap, IEEE partnered with the Rutgers Business School to offer a comprehensive business education program designed for the new era of AI. The IEEE | Rutgers Online Mini-MBA: Artificial Intelligence program combines rigorous business strategy with deep AI literacy.

Rather than treating AI as a separate technical subject, the program incorporates it into each aspect of business strategy. Students learn to evaluate AI opportunities through financial modeling, assess algorithmic risks through governance frameworks, and use change-management principles to implement new technologies.

A curriculum built for real-world impact

The program’s modular structure lets professionals focus on areas relevant to their immediate needs while building toward comprehensive AI business literacy. Each of the 10 modules includes practical exercises and case study analyses that participants can immediately apply in their organization.

The Introduction to AI module provides a comprehensive overview of the technology’s capabilities, benefits, and challenges. It also covers related technologies and how they can be applied across diverse business contexts, laying the groundwork for informed decision-making and strategic adoption.


Building on that foundation, the Data Analytics module highlights how AI projects differ from traditional programming, how to assess data readiness, and how to optimize data to improve accuracy and outcomes. The module can equip leaders to evaluate whether their organization is prepared to launch successful AI initiatives.

The Process Optimization module focuses on reimagining core organizational workflows using AI. Students learn how machine learning and automation are already transforming industries such as manufacturing, distribution, transportation, and health care. They also learn how to identify critical processes, create AI road maps, establish pilot programs, and prepare their organization for change.

Industry-specific applications

The core modules are designed for all participants, and the program highlights how AI is applied across industries. By analyzing case studies in fraud detection, medical diagnostics, and predictive maintenance, participants see underlying principles in action.

Participants gain a broader perspective on how AI can be adapted to different contexts so they can draw connections to the opportunities and challenges in their organization. The approach ensures everyone comes away with a strong foundation and the ability to apply learned lessons to their environment.

Flexible learning for busy professionals

With the understanding that senior professionals have demanding schedules, the mini-MBA program offers flexibility. The online format lets participants engage with content in their own time frame, while live virtual office hours with faculty provide opportunities for real-time interaction.

The program, which offers discounts to IEEE members and flexible payment options, qualifies for many tuition reimbursement programs.

Graduates report that implementing AI strategies developed during the program has helped drive tangible business results. This success often translates into career advancement, including promotions and expanded leadership roles. Furthermore, the curriculum empowers graduates to confidently vet AI vendor proposals, lead AI project teams, and navigate high-stakes investment decisions.

Beyond curriculum content, the mini-MBA can create valuable professional networks among AI-forward business leaders. Participants collaborate on projects, share implementation experiences, and build relationships that extend beyond the program’s 12 weeks.

Specialized training from IEEE

To complement the mini-MBA program, IEEE offers targeted courses addressing specific AI applications in critical industries. The Artificial Intelligence and Machine Learning in Chip Design course explores how the technology is revolutionizing semiconductor development. Integrating Edge AI and Advanced Nanotechnology in Semiconductor Applications delves into cutting-edge hardware implementations. The Mastering AI Integration in Semiconductor Manufacturing course examines how AI enhances production efficiency and quality control in one of the world’s most complex manufacturing processes. AI in Semiconductor Packaging equips professionals to apply machine learning and neural networks to modernize semiconductor packaging reliability and performance.

The programs grant professional development credits including PDHs and CEUs, ensuring participants receive formal recognition for their educational investments. Digital badges provide shareable credentials that professionals can showcase across professional networks, demonstrating their AI competencies to current and prospective employers.

Learn more about IEEE Educational Activities’ corporate solutions and professional development programs at innovationatwork.ieee.org.

Reference: https://ift.tt/PbpO7wa

Video Friday: Autonomous Robots Learn By Doing in This Factory




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

To train the next generation of autonomous robots, scientists at Toyota Research Institute are working with Toyota Manufacturing to deploy them on the factory floor.

[ Toyota Research Institute ]

Thanks, Erin!

This is just one story (of many) about how we tried, failed, and learned how to improve our drone delivery system.

Okay but like you didn’t show the really cool bit...?

[ Zipline ]

We’re introducing KinetIQ, an AI framework developed by Humanoid, for end-to-end orchestration of humanoid robot fleets. KinetIQ coordinates wheeled and bipedal robots within a single system, managing both fleet-level operations and individual robot behaviour across multiple environments. The framework operates across four cognitive layers, from task allocation and workflow optimization to VLA-based task execution and reinforcement-learning-trained whole-body control, and is shown here running across our wheeled industrial robots and bipedal R&D platform.

[ Humanoid ]

What if a robot gets damaged during operation? Can it still perform its mission without immediate repair? Inspired by self-embodied resilience strategies of stick insects, we developed a decentralized adaptive resilient neural control system (DARCON). This system allows legged robots to autonomously adapt to limb loss, ensuring mission success despite mechanical failure. This innovative approach leads to a future of truly resilient, self-recovering robotics.

[ VISTEC ]

Thanks, Poramate!

This animation shows Perseverance’s point of view during a drive of 807 feet (246 meters) along the rim of Jezero Crater on Dec. 10, 2025, the 1,709th Martian day, or sol, of the mission. Captured over two hours and 35 minutes, 53 Navigation Camera (Navcam) image pairs were combined with rover data on orientation, wheel speed, and steering angle, as well as data from Perseverance’s Inertial Measurement Unit, and placed into a 3D virtual environment. The result is this reconstruction with virtual frames inserted about every 4 inches (0.1 meters) of drive progress.

[ NASA Jet Propulsion Lab ]

−47.4°C, 130,000 steps, 89.75°E, 47.21°N… On the extremely cold snowfields of Altay, the birthplace of human skiing, Unitree’s humanoid robot G1 left behind a unique set of marks.

[ Unitree ]

Representing and understanding 3D environments in a structured manner is crucial for autonomous agents to navigate and reason about their surroundings. In this work, we propose an enhanced hierarchical 3D scene graph that integrates open-vocabulary features across multiple abstraction levels and supports object-relational reasoning. Our approach leverages a Vision Language Model (VLM) to infer semantic relationships. Notably, we introduce a task reasoning module that combines Large Language Models (LLM) and a VLM to interpret the scene graph’s semantic and relational information, enabling agents to reason about tasks and interact with their environment more intelligently. We validate our method by deploying it on a quadruped robot in multiple environments and tasks, highlighting its ability to reason about them.

[ Norwegian University of Science & Technology, Autonomous Robots Lab ]

Thanks, Kostas!

We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a Reinforcement Learning (RL) control policy that enhances both recovery and hovering performance.

[ HO Lab via IEEE Robotics and Automation Letters ]

In this work, we present SkyDreamer, to the best of our knowledge, the first end-to-end vision-based autonomous drone racing policy that maps directly from pixel-level representations to motor commands.

[ MAVLab ]

This video showcases AI WORKER equipped with five-finger hands performing dexterous object manipulation across diverse environments. Through teleoperation, the robot demonstrates precise, human-like hand control in a variety of manipulation tasks.

[ Robotis ]

Autonomous following, 45° slope climbing, and reliable payload transport in extreme winter conditions — built to support operations where environments push the limits.

[ DEEP Robotics ]

Living architectures, from plants to beehives, adapt continuously to their environments through self-organization. In this work, we introduce the concept of architectural swarms: systems that integrate swarm robotics into modular architectural façades. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications.

[ SSR Lab via Science Robotics ]

Here are a couple of IROS 2025 keynotes, featuring Bram Vanderborght and Kyu Jin Cho.



[ IROS 2025 ]

Reference: https://ift.tt/njz9ZrL

Thursday, February 5, 2026

AI companies want you to stop chatting with bots and start managing them


On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.
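Conceptually, the pattern is fan-out/fan-in: split the task, run the pieces concurrently, then merge and review. A hypothetical sketch of that structure in Python, where run_agent is a stand-in rather than Anthropic’s or OpenAI’s real API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(subtask: str) -> str:
    """Stand-in for dispatching one AI agent; a real tool would call an API."""
    return f"[result for: {subtask}]"

subtasks = [
    "write the lexer",
    "write the parser",
    "write the code generator",
    "write the test suite",
]

# Fan out: each agent works on its piece in parallel.
with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
    results = list(pool.map(run_agent, subtasks))

# Fan in: a supervising step merges and reviews the outputs.
for sub, res in zip(subtasks, results):
    print(f"{sub} -> {res}")
```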


Reference: https://ift.tt/JQGa8Hj
