Saturday, February 7, 2026

Low-Vision Programmers Can Now Design 3D Models Independently




Most 3D design software requires visually dragging and rotating objects, posing a challenge for blind and low-vision users. As a result, a range of hardware design, robotics, coding, and engineering work is inaccessible to interested programmers. A visually impaired programmer might write great code, but without accessible modeling software, they cannot model, design, and verify the physical and virtual components of their systems.

However, new 3D modeling tools are beginning to change that. A new prototype program called A11yShape aims to close the gap. Code-based tools that let users describe 3D models in text already exist, such as the popular OpenSCAD software, and recent large-language-model tools can generate 3D code from natural-language prompts. But even with these, blind and low-vision programmers still depend on sighted feedback to connect their code to its visual output.

Previously, a blind or low-vision programmer had to ask a sighted person to check each update of a model and describe what changed. With A11yShape, blind and low-vision programmers can independently create, inspect, and refine 3D models.

A11yShape does this by generating accessible model descriptions, organizing the model into a semantic hierarchy, and ensuring every step works with screen readers.

The project began when Liang He, an assistant professor of computer science at the University of Texas at Dallas, spoke with a low-vision classmate who was studying 3D modeling. He saw an opportunity to turn the classmate’s coding strategies, learned in a University of Washington course on 3D modeling for blind programmers, into a streamlined tool.

“I want to design something useful and practical for the group,” he says. “Not just something I created from my imagination and applied to the group.”

Re-imagining Assistive 3D Design With OpenSCAD

A11yShape assumes the user is running OpenSCAD, the script-based 3D modeling editor. The program layers accessibility features on top of OpenSCAD, connecting each component of the modeling process across three application UI panels.

OpenSCAD lets users create models entirely through typing, eliminating the clicking and dragging that makes common graphics-based interfaces difficult for blind programmers to navigate.
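As a flavor of what that looks like, here is a minimal OpenSCAD sketch (a generic illustration, not an example taken from the A11yShape paper). Every shape is declared in text, and named modules give each part a label that a screen reader, or a tool built on top of the code, can surface as structure:

// A small part modeled entirely in text; no mouse required.
// Named modules give each component a human-readable label.
module body() {
    cylinder(h = 40, r = 15);     // main cylindrical body
}
module cap() {
    translate([0, 0, 40])         // seat the cap on top of the body
        sphere(r = 15);           // rounded cap
}
module bolt_hole() {
    translate([0, 0, -1])         // start just below the base for a clean cut
        cylinder(h = 12, r = 3);  // vertical hole through the base
}
difference() {                    // subtract the hole from the solid part
    union() { body(); cap(); }
    bolt_hole();
}

Because the entire design lives in code like this, changing a radius or moving a part is a text edit rather than a drag operation.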

A11yShape introduces an AI Assistance Panel, where users can submit real-time queries to OpenAI’s GPT-4o model to validate design decisions and debug existing OpenSCAD scripts.

A11yShape’s 3D modeling web interface features a code editor panel, an AI assistance panel providing contextual feedback, and a model panel displaying the hierarchical structure and rendering of the resulting model. The three panels synchronize code, AI descriptions, and model structure so blind programmers can independently discover how code changes affect designs. Source: Anhong Guo, Liang He, et al.

If a user selects a piece of code or a model component, A11yShape highlights the matching part across all three panels and updates the description, so blind and low-vision users always know what they’re working on.

User Feedback Improved Accessible Interface

The research team recruited four participants with a range of visual impairments and programming backgrounds, asked them to design models using A11yShape, and observed their workflows.

One participant, who had never modeled before, said the tool “provided [the blind and low-vision community] with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”

Participants also reported that long text descriptions still make it hard to grasp complex shapes, and several said that without eventually touching a physical model or using a tactile display, it was difficult to fully “see” the design in their mind.

To evaluate the accuracy of the AI-generated descriptions, the research team recruited 15 sighted participants. On a 1–5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity, and avoiding hallucinations, suggesting the AI is reliable enough for everyday use.


A failed all-at-once attempt to construct a 3D helicopter shows incorrect shapes and misplaced elements; when the user instead completes each individual element before moving forward, results improve significantly. A11yShape assists visually disabled programmers in verifying the designs of their models. Source: Anhong Guo, Liang He, et al.

The feedback will help inform future iterations, which He says could integrate tactile displays, real-time 3D printing, and more concise AI-generated audio descriptions.

Beyond its applications in the professional programming community, He notes that A11yShape also lowers the barrier to entry for blind and low-vision learners of programming.

“People like being able to express themselves in creative ways… using technology such as 3D printing to make things for utility or entertainment,” says Stephanie Ludi, director of the DiscoverABILITY Lab and a professor in the department of computer science and engineering at the University of North Texas. “Persons who are blind and visually impaired share that interest, with A11yShape serving as a model to support accessibility in the maker community.”

The team presented A11yShape in October at the ASSETS conference in Denver.

Reference: https://ift.tt/mQNz6Yv

Friday, February 6, 2026

Sixteen Claude AI agents working together created a new C compiler


Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the AI agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.

Read full article

Comments

Reference : https://ift.tt/LPz2D4K

Malicious packages for dYdX cryptocurrency exchange empty user wallets


Open source packages published on the npm and PyPI repositories were laced with code that stole wallet credentials from dYdX developers and backend systems and, in some cases, backdoored devices, researchers said.

“Every application using the compromised npm versions is at risk…,” researchers from security firm Socket said Friday. “Direct impact includes complete wallet compromise and irreversible cryptocurrency theft. The attack scope includes all applications depending on the compromised versions, and both developers testing with real credentials and production end-users.”

The infected packages were:

Read full article

Comments

Reference : https://ift.tt/s67awox

IEEE Online Mini-MBA Aims to Fill Leadership Skills Gaps in AI




Boardroom priorities are shifting from financial metrics toward technical oversight. Although market share and operational efficiency remain business bedrocks, executives also must now manage the complexities of machine learning, the integrity of their data systems, and the risks of algorithmic bias.

The change represents more than just a tech update; it marks a fundamental redefinition of the skills required for business leadership.

Research from the McKinsey Global Institute on the economic impact of artificial intelligence shows that companies integrating it effectively have boosted profit margins by up to 15 percent. Yet the same study revealed a sobering reality: 87 percent of organizations acknowledge significant AI skill gaps in their leadership ranks.

That disconnect between AI’s business potential and executive readiness has created a need for a new type of professional education.

The leadership skills gap in the AI era

Traditional business education, with its focus on finance, marketing, and operations, wasn’t designed for an AI-driven economy. Today’s leaders need to understand not just what AI can do but also how to evaluate investments in the technology, manage algorithmic risks, and lead teams through digital transformations.

The challenges extend beyond the executive suite. Middle managers, project leaders, and department heads across industries are discovering that AI fluency has become essential for career advancement. In 2020 the World Economic Forum predicted that 50 percent of all employees would need reskilling by 2025, with AI-related competencies topping the list of required skills.

IEEE | Rutgers Online Mini-MBA: Artificial Intelligence

Recognizing the skills gap, IEEE partnered with the Rutgers Business School to offer a comprehensive business education program designed for the new era of AI. The IEEE | Rutgers Online Mini-MBA: Artificial Intelligence program combines rigorous business strategy with deep AI literacy.

Rather than treating AI as a separate technical subject, the program incorporates it into each aspect of business strategy. Students learn to evaluate AI opportunities through financial modeling, assess algorithmic risks through governance frameworks, and use change-management principles to implement new technologies.

A curriculum built for real-world impact

The program’s modular structure lets professionals focus on areas relevant to their immediate needs while building toward comprehensive AI business literacy. Each of the 10 modules includes practical exercises and case study analyses that participants can immediately apply in their organization.

The Introduction to AI module provides a comprehensive overview of the technology’s capabilities, benefits, and challenges. It also covers related technologies and how they can be applied across diverse business contexts, laying the groundwork for informed decision-making and strategic adoption.


Building on that foundation, the Data Analytics module highlights how AI projects differ from traditional programming, how to assess data readiness, and how to optimize data to improve accuracy and outcomes. The module can equip leaders to evaluate whether their organization is prepared to launch successful AI initiatives.

The Process Optimization module focuses on reimagining core organizational workflows using AI. Students learn how machine learning and automation are already transforming industries such as manufacturing, distribution, transportation, and health care. They also learn how to identify critical processes, create AI road maps, establish pilot programs, and prepare their organization for change.

Industry-specific applications

The core modules are designed for all participants, and the program highlights how AI is applied across industries. By analyzing case studies in fraud detection, medical diagnostics, and predictive maintenance, participants see underlying principles in action.

Participants gain a broader perspective on how AI can be adapted to different contexts so they can draw connections to the opportunities and challenges in their organization. The approach ensures everyone comes away with a strong foundation and the ability to apply learned lessons to their environment.

Flexible learning for busy professionals

With the understanding that senior professionals have demanding schedules, the mini-MBA program offers flexibility. The online format lets participants engage with content in their own time frame, while live virtual office hours with faculty provide opportunities for real-time interaction.

The program, which offers discounts to IEEE members and flexible payment options, qualifies for many tuition reimbursement programs.

Graduates report that implementing AI strategies developed during the program has helped drive tangible business results. This success often translates into career advancement, including promotions and expanded leadership roles. Furthermore, the curriculum empowers graduates to confidently vet AI vendor proposals, lead AI project teams, and navigate high-stakes investment decisions.

Beyond curriculum content, the mini-MBA can create valuable professional networks among AI-forward business leaders. Participants collaborate on projects, share implementation experiences, and build relationships that extend beyond the program’s 12 weeks.

Specialized training from IEEE

To complement the mini-MBA program, IEEE offers targeted courses addressing specific AI applications in critical industries. The Artificial Intelligence and Machine Learning in Chip Design course explores how the technology is revolutionizing semiconductor development. Integrating Edge AI and Advanced Nanotechnology in Semiconductor Applications delves into cutting-edge hardware implementations. The Mastering AI Integration in Semiconductor Manufacturing course examines how AI enhances production efficiency and quality control in one of the world’s most complex manufacturing processes. AI in Semiconductor Packaging equips professionals to apply machine learning and neural networks to modernize semiconductor packaging reliability and performance.

The programs grant professional development credits including PDHs and CEUs, ensuring participants receive formal recognition for their educational investments. Digital badges provide shareable credentials that professionals can showcase across professional networks, demonstrating their AI competencies to current and prospective employers.

Learn more about IEEE Educational Activities’ corporate solutions and professional development programs at innovationatwork.ieee.org.

Reference: https://ift.tt/PbpO7wa

Video Friday: Autonomous Robots Learn By Doing in This Factory




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

To train the next generation of autonomous robots, scientists at Toyota Research Institute are working with Toyota Manufacturing to deploy them on the factory floor.

[ Toyota Research Institute ]

Thanks, Erin!

This is just one story (of many) about how we tried, failed, and learned how to improve our drone delivery system.

Okay but like you didn’t show the really cool bit...?

[ Zipline ]

We’re introducing KinetIQ, an AI framework developed by Humanoid, for end-to-end orchestration of humanoid robot fleets. KinetIQ coordinates wheeled and bipedal robots within a single system, managing both fleet-level operations and individual robot behaviour across multiple environments. The framework operates across four cognitive layers, from task allocation and workflow optimization to VLA-based task execution and reinforcement-learning-trained whole-body control, and is shown here running across our wheeled industrial robots and bipedal R&D platform.

[ Humanoid ]

What if a robot gets damaged during operation? Can it still perform its mission without immediate repair? Inspired by self-embodied resilience strategies of stick insects, we developed a decentralized adaptive resilient neural control system (DARCON). This system allows legged robots to autonomously adapt to limb loss, ensuring mission success despite mechanical failure. This innovative approach leads to a future of truly resilient, self-recovering robotics.

[ VISTEC ]

Thanks, Poramate!

This animation shows Perseverance’s point of view during a drive of 807 feet (246 meters) along the rim of Jezero Crater on Dec. 10, 2025, the 1,709th Martian day, or sol, of the mission. Captured over two hours and 35 minutes, 53 Navigation Camera (Navcam) image pairs were combined with rover data on orientation, wheel speed, and steering angle, as well as data from Perseverance’s Inertial Measurement Unit, and placed into a 3D virtual environment. The result is this reconstruction with virtual frames inserted about every 4 inches (0.1 meters) of drive progress.

[ NASA Jet Propulsion Lab ]

−47.4°C, 130,000 steps, 89.75°E, 47.21°N… On the extremely cold snowfields of Altay, the birthplace of human skiing, Unitree’s humanoid robot G1 left behind a unique set of marks.

[ Unitree ]

Representing and understanding 3D environments in a structured manner is crucial for autonomous agents to navigate and reason about their surroundings. In this work, we propose an enhanced hierarchical 3D scene graph that integrates open-vocabulary features across multiple abstraction levels and supports object-relational reasoning. Our approach leverages a Vision Language Model (VLM) to infer semantic relationships. Notably, we introduce a task reasoning module that combines Large Language Models (LLM) and a VLM to interpret the scene graph’s semantic and relational information, enabling agents to reason about tasks and interact with their environment more intelligently. We validate our method by deploying it on a quadruped robot in multiple environments and tasks, highlighting its ability to reason about them.

[ Norwegian University of Science & Technology, Autonomous Robots Lab ]

Thanks, Kostas!

We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a Reinforcement Learning (RL) control policy that enhances both recovery and hovering performance.

[ HO Lab via IEEE Robotics and Automation Letters ]

In this work, we present SkyDreamer, to the best of our knowledge, the first end-to-end vision-based autonomous drone racing policy that maps directly from pixel-level representations to motor commands.

[ MAVLab ]

This video showcases AI WORKER equipped with five-finger hands performing dexterous object manipulation across diverse environments. Through teleoperation, the robot demonstrates precise, human-like hand control in a variety of manipulation tasks.

[ Robotis ]

Autonomous following, 45° slope climbing, and reliable payload transport in extreme winter conditions — built to support operations where environments push the limits.

[ DEEP Robotics ]

Living architectures, from plants to beehives, adapt continuously to their environments through self-organization. In this work, we introduce the concept of architectural swarms: systems that integrate swarm robotics into modular architectural façades. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications.

[ SSR Lab via Science Robotics ]

Here are a couple of IROS 2025 keynotes, featuring Bram Vanderborght and Kyu Jin Cho.



[ IROS 2025 ]

Reference: https://ift.tt/njz9ZrL

Thursday, February 5, 2026

AI companies want you to stop chatting with bots and start managing them


On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.

Read full article

Comments

Reference : https://ift.tt/JQGa8Hj

“Quantum Twins” Simulate What Supercomputers Can’t




While quantum computers continue to slowly grind toward usefulness, some researchers are pursuing a different approach: analog quantum simulation. This path doesn’t offer complete control of single bits of quantum information, known as qubits, so it does not amount to a universal quantum computer. Instead, quantum simulators directly mimic complex, difficult-to-access systems, like individual molecules, chemical reactions, or novel materials. What analog quantum simulation lacks in flexibility, it makes up for in feasibility: quantum simulators are ready now.

“Instead of using qubits, as you would typically in a quantum computer, we just directly encode the problem into the geometry and structure of the array itself,” says Sam Gorman, quantum systems engineering lead at Sydney-based start-up Silicon Quantum Computing.

Yesterday, Silicon Quantum Computing unveiled its Quantum Twins product, a silicon quantum simulator that is now available to customers through direct contract. Simultaneously, the team demonstrated that its device, made up of 15,000 quantum dots, can simulate an often-studied transition of a material from an insulator to a metal, and all the states in between. The team published the work this week in the journal Nature.

“We can do things now that we think nobody else in the world can do,” Gorman says.

The powerful process

Though the product announcement came yesterday, Silicon Quantum Computing established its Precision Atom Qubit Manufacturing process soon after the startup’s founding in 2017, building on academic work that the company’s founder, Michelle Simmons, has led for more than 25 years. The underlying technology is a manufacturing process for placing single phosphorus atoms in silicon with sub-nanometer precision.

“We have a 38-stage process,” Simmons says, for patterning phosphorus atoms into silicon. The process starts with a silicon substrate, which gets coated with a layer of hydrogen. Then, using a scanning-tunneling microscope, individual hydrogen atoms are knocked off the surface, exposing the silicon underneath. The surface is then dosed with phosphine gas, which adsorbs only where the silicon is exposed. With the help of a low-temperature thermal anneal, the phosphorus atoms are then incorporated into the silicon crystal. Finally, layers of silicon are grown on top.

“It’s done in ultra-high vacuum. So it’s a very pure, very clean system,” Simmons says. “It’s a fully monolithic chip that we make with that sub-nanometer precision. In 2014, we figured out how to make markers in the chip so that we can then come back and find where we put the atoms within the device to make contacts. Those contacts are then made at the same length scale as the atoms and dots.”

Though the team is able to place single atoms of phosphorus, they use clusters of ten to fifty such atoms to make up a so-called register for these application-specific chips. These registers act like quantum dots, preserving quantum properties of the individual atoms. The registers are controlled by a gate voltage from contacts placed atop the chip, and interactions between registers can be tuned by precisely controlling the distances between them.
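This geometric control is what makes the arrays programmable. As a sketch of the underlying physics: arrays of tunnel-coupled dots like these are commonly described by Fermi-Hubbard-type models, which in a standard simplified form (an illustration, not the paper’s exact Hamiltonian) read

H = -\sum_{\langle i,j \rangle, \sigma} t_{ij} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow},

where the hopping amplitude t_{ij} between neighboring dots is set by the distance between them, and the on-site repulsion U by each dot’s size. Sweeping the ratio of t to U carries such a system between metallic behavior, where hopping dominates, and insulating behavior, where repulsion dominates.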

While the company is also pursuing more traditional quantum computing using this technology, they realized they already had the capacity to do useful simulations in the analog domain by putting thousands of registers on a single chip and measuring global properties, without controlling individual qubits.

“The thing that’s quite unique is we can do that very quickly,” Simmons says. “We put 250,000 of these registers [on a chip] in eight hours, and we can turn a chip design around in a week.”

What to simulate

Back in 2022, the team at Silicon Quantum Computing used a previous version of this same technology to simulate a molecule of polyacetylene. The chemical is made up of carbon atoms with alternating single and double bonds, and, crucially, its conductivity changes drastically depending on whether the chain is cut on a single or double bond. In order to accurately simulate single and double carbon bonds, the team had to control the distances of their registers to sub-nanometer precision. By tuning the gate voltages of each quantum dot, the researchers reproduced the jump in conductivity.
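In textbook terms, a chain with alternating strong and weak bonds is conventionally captured by the Su-Schrieffer-Heeger (SSH) model. As an illustration (a standard form, not necessarily the paper’s exact Hamiltonian), the alternating single and double bonds become two alternating hopping amplitudes:

H = -\sum_{i} \left( t_1 \, c_{2i}^{\dagger} c_{2i+1} + t_2 \, c_{2i+1}^{\dagger} c_{2i+2} + \mathrm{h.c.} \right)

Whether the chain terminates on a strong or a weak bond determines whether conducting states appear at its ends, which is why cutting the chain on a single versus a double bond produces the jump in conductivity that the ten-register device reproduced.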

Now, they’ve demonstrated the quantum twin technology on a much larger problem—the metal-insulator transition of a two-dimensional material. Where the polyacetylene molecule required ten registers, the new model used 15,000. The metal-insulator model is important because, in most cases, it cannot be simulated on a classical computer. At the extremes—in the fully metal or fully insulating phase—the physics can be simplified and made accessible to classical computing. But in the murky intermediate regime, the full quantum complexity of each electron plays a role, and the problem is classically intractable. “That is the part which is challenging for classical computing. But we can actually put our system into this regime quite easily,” Gorman says.

The metal-insulator model was a proof of concept. Now, Gorman says, the team can design a quantum twin for almost any two-dimensional problem.

“Now that we’ve demonstrated that the device is behaving as we predict, we’re looking at high-impact issues or outstanding problems,” says Gorman. The team plans to investigate things like unconventional superconductivity, the origins of magnetism, and materials interfaces such as those that occur in batteries.

Although the initial applications will most likely be in the scientific domain, Simmons is hopeful that Quantum Twins will eventually be useful for industrial applications such as drug discovery. “If you look at different drugs, they’re actually very similar to polyacetylene. They’re carbon chains, and they have functional groups. So, understanding how to map it [onto our simulator] is a unique challenge. But that’s definitely an area we’re going to focus on,” she says. “We’re excited at the potential possibilities.”

Reference: https://ift.tt/6AZCFn4
