Tuesday, February 10, 2026

How and When the Memory Chip Shortage Will End




If it feels these days as if everything in technology is about AI, that’s because it is. And nowhere is that more true than in the market for computer memory. Demand for, and the profitability of, the type of DRAM used to feed GPUs and other accelerators in AI data centers is so huge that it is diverting memory supply away from other uses and causing prices to skyrocket. According to Counterpoint Research, DRAM prices have risen 80 to 90 percent so far this quarter.

The largest AI hardware companies say they have secured their chips out as far as 2028, but that leaves everybody else—makers of PCs, consumer gizmos, and everything else that needs to temporarily store a billion bits—scrambling to deal with scarce supply and inflated prices.

How did the electronics industry get into this mess, and more importantly, how will it get out? IEEE Spectrum asked economists and memory experts to explain. They say today’s situation is the result of a collision between the DRAM industry’s historic boom and bust cycle and an AI hardware infrastructure build-out that’s without precedent in its scale. And, barring some major collapse in the AI sector, it will take years for new capacity and new technology to bring supply in line with demand. Prices might stay high even then.

To understand both ends of the tale, you need to know the main culprit in the supply and demand swing, high-bandwidth memory, or HBM.

What is HBM?

HBM is the DRAM industry’s attempt to short-circuit the slowing pace of Moore’s Law by using 3D chip packaging technology. Each HBM chip is made up of as many as 12 thinned-down DRAM chips called dies. Each die contains a number of vertical connections called through silicon vias (TSVs). The dies are piled atop each other and connected by arrays of microscopic solder balls aligned to the TSVs. This DRAM tower—well, at about 750 micrometers thick, it’s more of a brutalist office-block than a tower—is then stacked atop what’s called the base die, which shuttles bits between the memory dies and the processor.

This complex piece of technology is then set within a millimeter of a GPU or other AI accelerator, to which it is linked by as many as 2,048 micrometer-scale connections. HBMs are attached on two sides of the processor, and the GPU and memory are packaged together as a single unit.

The idea behind such a tight, highly connected squeeze with the GPU is to knock down what’s called the memory wall: the energy and time cost of moving the terabytes per second of data that large language models need into the GPU. Memory bandwidth is a key limit on how fast LLMs can run.
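A rough back-of-the-envelope calculation shows why. During text generation, essentially every model weight must be read from memory for each token produced, so bandwidth rather than arithmetic sets the ceiling. The model size and bandwidth below are illustrative assumptions, not figures for any particular product.

```python
# Toy estimate of memory-bandwidth-limited LLM token generation.
# Model size and bandwidth are illustrative assumptions, not product specs.

model_params = 70e9        # assume a 70-billion-parameter model
bytes_per_param = 2        # assume 16-bit (2-byte) weights
hbm_bandwidth = 8e12       # assume 8 TB/s of aggregate HBM bandwidth

weight_bytes = model_params * bytes_per_param      # ~140 GB of weights
seconds_per_token = weight_bytes / hbm_bandwidth   # all weights read once per generated token

print(f"{weight_bytes / 1e9:.0f} GB of weights -> at best {1 / seconds_per_token:.0f} tokens per second")
```

Even at several terabytes per second, the memory, not the math, is the bottleneck; that is the wall HBM is built to climb.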

As a technology, HBM has been around for more than 10 years, and DRAM makers have been busy boosting its capability.

As the size of AI models has grown, so has HBM’s importance to the GPU. But that’s come at a cost. SemiAnalysis estimates that HBM generally costs three times as much as other types of memory and constitutes 50 percent or more of the cost of the packaged GPU.

Origins of the memory chip shortage

Memory and storage industry watchers agree that DRAM is a highly cyclical industry with huge booms and devastating busts. With new fabs costing US $15 billion or more, firms are extremely reluctant to expand and may only have the cash to do so during boom times, explains Thomas Coughlin, a storage and memory expert and president of Coughlin Associates. But building such a fab and getting it up and running can take 18 months or more, practically ensuring that new capacity arrives well past the initial surge in demand, flooding the market and depressing prices.
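That lag between the decision to build and the arrival of new capacity is the engine of the cycle. The toy simulation below, a deliberately simplified sketch with made-up numbers rather than a model of the actual DRAM market, shows how a multi-quarter construction delay alone is enough to produce recurring booms and busts.

```python
# Toy "cobweb" model: firms add capacity when prices are high, but the capacity
# arrives only after a construction lag, producing boom-and-bust price swings.
# All numbers are made up for illustration; this is not a model of the real market.

LAG = 6                  # assumed quarters between a build decision and first output
demand = 100.0           # demand held flat to isolate the effect of the supply lag
capacity = 90.0          # start in a shortage
pipeline = [0.0] * LAG   # capacity additions still under construction

prices = []
for quarter in range(40):
    price = demand / capacity                          # scarcity raises price, glut depresses it
    prices.append(price)
    pipeline.append(max(0.0, 25.0 * (price - 1.0)))    # firms invest only in boom times
    capacity += pipeline.pop(0)                        # builds started LAG quarters ago come online
    capacity *= 0.99                                   # older fabs slowly fall out of use

print(" ".join(f"{p:.2f}" for p in prices))            # prices overshoot, crash, and recover in waves
```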

The origins of today’s cycle, says Coughlin, go all the way back to the chip supply panic surrounding the COVID-19 pandemic. To avoid supply-chain stumbles and support the rapid shift to remote work, hyperscalers—data center giants like Amazon, Google, and Microsoft—bought up huge inventories of memory and storage, boosting prices, he notes.

But then supply became more regular and data center expansion fell off in 2022, causing memory and storage prices to plummet. The downturn continued into 2023 and even led big memory and storage companies such as Samsung to cut production by 50 percent to try to keep prices from falling below the cost of manufacturing, says Coughlin. It was a rare and fairly desperate move, because companies typically have to run plants at full capacity just to earn back their value.

After a recovery began in late 2023, “all the memory and storage companies were very wary of increasing their production capacity again,” says Coughlin. “Thus there was little or no investment in new production capacity in 2024 and through most of 2025.”

The AI data center boom

That lack of new investment is colliding headlong with a huge boost in demand from new data centers. Globally, there are nearly 2,000 new data centers either planned or under construction right now, according to Data Center Map. If they’re all built, it would represent a 20 percent jump in the global supply, which stands at around 9,000 facilities now.

If the current build-out continues at pace, McKinsey predicts companies will spend $7 trillion by 2030, with the bulk of that—$5.2 trillion—going to AI-focused data centers. Of that chunk, $3.3 trillion will go toward servers, data storage, and network equipment, the firm predicts.

The biggest beneficiary so far of the AI data center boom is unquestionably GPU-maker Nvidia. Revenue for its data center business went from barely a billion in the final quarter of 2019 to $51 billion in the quarter that ended in October 2025. Over this period, its server GPUs have demanded not just more and more gigabytes of DRAM but an increasing number of DRAM chips. The recently released B300 uses eight HBM chips, each of which is a stack of 12 DRAM dies. Competitors’ use of HBM has largely mirrored Nvidia’s. AMD’s MI350 GPU, for example, also uses eight 12-die chips.
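The arithmetic behind that appetite is straightforward. Assuming 24-gigabit (3-gigabyte) DRAM dies—an assumption for illustration, not a figure from Nvidia or AMD—a single accelerator package ties up nearly a hundred DRAM dies:

```python
# Back-of-the-envelope HBM capacity and die count per accelerator package.
# The per-die density (24 Gbit = 3 GB) is an assumption, not a vendor figure.

gb_per_die = 3          # assumed 3-GB (24-gigabit) DRAM die
dies_per_stack = 12     # 12-high HBM stack, as described above
stacks_per_gpu = 8      # eight HBM chips packaged around the processor

gb_per_stack = gb_per_die * dies_per_stack    # 36 GB per HBM stack
gb_per_gpu = gb_per_stack * stacks_per_gpu    # 288 GB of HBM per package
dies_per_gpu = dies_per_stack * stacks_per_gpu

print(f"{gb_per_stack} GB per stack, {gb_per_gpu} GB per package, {dies_per_gpu} DRAM dies consumed")
```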

With so much demand, an increasing fraction of the revenue for DRAM makers comes from HBM. Micron—the number three producer behind SK Hynix and Samsung—reported that HBM and other cloud-related memory went from being 17 percent of its DRAM revenue in 2023 to nearly 50 percent in 2025.

Micron predicts the total market for HBM will grow from $35 billion in 2025 to $100 billion by 2028—a figure larger than the entire DRAM market in 2024, CEO Sanjay Mehrotra told analysts in December. It’s reaching that figure two years earlier than Micron had previously expected. Across the industry, demand will outstrip supply “substantially… for the foreseeable future,” he said.

Future DRAM supply and technology

“There are two ways to address supply issues with DRAM: with innovation or with building more fabs,” explains Mina Kim, an economist with Mkecon Insights. “As DRAM scaling has become more difficult, the industry has turned to advanced packaging… which is just using more DRAM.”

Micron, Samsung, and SK Hynix combined make up the vast majority of the memory and storage markets, and all three have new fabs and facilities in the works. However, these are unlikely to contribute meaningfully to bringing down prices.

Micron is in the process of building an HBM fab in Singapore that should be in production in 2027. And it is retooling a fab it purchased from PSMC in Taiwan that will begin production in the second half of 2027. Last month, Micron broke ground on what will be a DRAM fab complex in Onondaga County, N.Y. It will not be in full production until 2030.

Samsung plans to start producing at a new plant in Pyeongtaek, South Korea in 2028.

SK Hynix is building HBM and packaging facilities in West Lafayette, Indiana set to begin production by the end of 2028, and an HBM fab it’s building in Cheongju should be complete in 2027.

Speaking of his sense of the DRAM market, Intel CEO Lip-Bu Tan told attendees at the Cisco AI Summit last week: “There’s no relief until 2028.”

With these expansions unable to contribute for several years, other factors will be needed to increase supply. “Relief will come from a combination of incremental capacity expansions by existing DRAM leaders, yield improvements in advanced packaging, and a broader diversification of supply chains,” says Shawn DuBravac, chief economist for the Global Electronics Association (formerly the IPC). “New fabs will help at the margin, but the faster gains will come from process learning, better [DRAM] stacking efficiency, and tighter coordination between memory suppliers and AI chip designers.”

So, will prices come down once some of these new plants come on line? Don’t bet on it. “In general, economists find that prices come down much more slowly and reluctantly than they go up. DRAM today is unlikely to be an exception to this general observation, especially given the insatiable demand for compute,” says Kim.

In the meantime, technologies are in the works that could make HBM an even bigger consumer of silicon. The standard for HBM4 can accommodate 16 stacked DRAM dies, even though today’s chips use only 12. Getting to 16 depends largely on chip-stacking technology: conducting heat through the HBM “layer cake” of silicon, solder, and support material is a key limiter both to stacking higher and to repositioning HBM inside the package for even more bandwidth.

SK Hynix claims a heat conduction advantage through a manufacturing process called advanced MR-MUF (mass reflow molded underfill). Further out, an alternative chip stacking technology called hybrid bonding could help heat conduction by reducing the die-to-die vertical distance essentially to zero. In 2024, researchers at Samsung proved they could produce a 16-high stack with hybrid bonding, and they suggested that 20 dies was not out of reach.

Reference: https://ift.tt/lgI2hUj

Monday, February 9, 2026

IEEE Honors Global Dream Team of Innovators




Meet the recipients of the 2026 IEEE Medals—the organization’s highest-level honors. Presented on behalf of the IEEE Board of Directors, these medals recognize innovators whose work has shaped modern technology across disciplines including AI, education, and semiconductors.

The medals will be presented at the IEEE Honors Ceremony in April in New York City. View the full list of 2026 recipients on the IEEE Awards website, and follow IEEE Awards on LinkedIn for news and updates.

IEEE MEDAL OF HONOR

Sponsor: IEEE

Jensen Huang

Nvidia

Santa Clara, Calif.

“For leadership in the development of graphics processing units and their application to scientific computing and artificial intelligence.”

IEEE FRANCES E. ALLEN MEDAL

Sponsor: IBM

Luis von Ahn

Duolingo

Pittsburgh

“For contributions to the advancement of societal improvement and education through innovative technology.”

IEEE ALEXANDER GRAHAM BELL MEDAL

Sponsor: Nokia Bell Labs

Scott J. Shenker

University of California, Berkeley

“For contributions to Internet architecture, network resource allocation, and software-defined networking.”

IEEE JAGADISH CHANDRA BOSE MEDAL IN WIRELESS COMMUNICATIONS

Sponsor: Mani L. Bhaumik

Co-recipients:

Erik Dahlman

Stefan Parkvall

Johan Sköld

Ericsson

Stockholm

“For contributions to and leadership in the research, development, and standardization of cellular wireless communications.”

IEEE MILDRED DRESSELHAUS MEDAL

Sponsor: Google

Karen Ann Panetta

Tufts University

Medford, Mass.

“For contributions to computer vision and simulation algorithms, and for leadership in developing programs to promote STEM careers.”

IEEE EDISON MEDAL

Sponsor: The Edison Medal Fund

Eric Swanson

PIXCEL Inc.

MIT

“For pioneering contributions to biomedical imaging, terrestrial optical communications and networking, and inter-satellite optical links.”

IEEE MEDAL FOR ENVIRONMENTAL AND SAFETY TECHNOLOGIES

Sponsor: Toyota Motor Corp.

Wei-Jen Lee

University of Texas at Arlington

“For contributions to advancing electrical safety in the workplace, integrating renewable energy and grid modernization for climate change mitigation.”

IEEE FOUNDERS MEDAL

Sponsor: IEEE Foundation

Marian Rogers Croak

Google

Reston, Va.

“For leadership in communication networks, including acceleration of digital equity, responsible Artificial Intelligence, and the promotion of diversity and inclusion.”

IEEE RICHARD W. HAMMING MEDAL

Sponsor: Qualcomm, Inc.

Muriel Médard

MIT

“For contributions to coding for reliable communications and networking.”

IEEE NICK HOLONYAK, JR. MEDAL FOR SEMICONDUCTOR OPTOELECTRONIC TECHNOLOGIES

Sponsor: Friends of Nick Holonyak, Jr.

Steven P. DenBaars

University of California, Santa Barbara

“For seminal contributions to compound semiconductor optoelectronics, including high-efficiency visible light-emitting diodes, lasers, and LED displays.”

IEEE MEDAL FOR INNOVATIONS IN HEALTHCARE TECHNOLOGY

Sponsor: IEEE Engineering Medicine and Biology Society

Rosalind W. Picard

MIT

“For pioneering contributions to wearable affective computing for health and wellbeing.”

IEEE JACK S. KILBY SIGNAL PROCESSING MEDAL

Sponsor: Apple

Biing-Hwang “Fred” Juang

Georgia Tech

“For contributions to signal modeling, coding, and recognition for speech communication.”

IEEE/RSE JAMES CLERK MAXWELL MEDAL

Sponsor: ARM, Ltd.

Paul B. Corkum

University of Ottawa

“For the development of the recollision model for strong field light–matter interactions leading to the field of attosecond science.”

IEEE JAMES H. MULLIGAN, JR. EDUCATION MEDAL

Sponsor: IEEE Life Members Fund and MathWorks

James H. McClellan

Georgia Tech

“For fundamental contributions to electrical and computer engineering education through innovative digital signal processing curriculum development.”

IEEE JUN-ICHI NISHIZAWA MEDAL

Sponsor: IEEE Jun-ichi Nishizawa Medal Fund

Eric R. Fossum

Dartmouth College

Hanover, N.H.

“For the invention, development, and commercialization of the CMOS image sensor.”

IEEE ROBERT N. NOYCE MEDAL

Sponsor: Intel Corp.

Chris Malachowsky

Nvidia

Santa Clara, Calif.

“For pioneering parallel computing architectures and leadership in semiconductor design that transformed artificial intelligence, scientific research, and accelerated computing.”

IEEE DENNIS J. PICARD MEDAL FOR RADAR TECHNOLOGIES AND APPLICATIONS

Sponsor: RTX

Yoshio Yamaguchi

Niigata University

Japan

“For contributions to polarimetric synthetic aperture radar imaging and its utilization.”

IEEE MEDAL IN POWER ENGINEERING

Sponsors: IEEE Industry Applications, Industrial Electronics, Power Electronics, and Power & Energy societies

Fang Zheng Peng

University of Pittsburgh

“For contributions to Z-Source and modular multi-level converters for distribution and transmission networks.”

IEEE SIMON RAMO MEDAL

Sponsor: Northrop Grumman Corp.

Michael D. Griffin

LogiQ, Inc.

Arlington, Va.

“For leadership in national security, civil, and commercial systems engineering and development of elegant design principles.”

IEEE JOHN VON NEUMANN MEDAL

Sponsor: IBM

Donald D. Chamberlin

IBM

San Jose, Calif.

“For contributions to database query languages, particularly Structured Query Language, which powers most of the world’s data management and analysis systems.”

Reference: https://ift.tt/nvySKWA

New Devices Might Scale the Memory Wall




The hunt is on for anything that can surmount AI’s perennial memory wall: even fast models are bogged down by the time and energy needed to move data between processor and memory. Resistive RAM (RRAM) could circumvent the wall by allowing computation to happen in the memory itself. Unfortunately, most types of this nonvolatile memory are too unstable and unwieldy for that purpose.

Fortunately, a potential solution may be at hand. At December’s IEEE International Electron Device Meeting (IEDM), researchers from the University of California, San Diego showed they could run a learning algorithm on an entirely new type of RRAM.

“We actually redesigned RRAM, completely rethinking the way it switches,” says Duygu Kuzum, an electrical engineer at the University of California, San Diego, who led the work.

RRAM stores data as a level of resistance to the flow of current. The key digital operation in a neural network—multiplying arrays of numbers and then summing the results—can be done in analog simply by running current through an array of RRAM cells, connecting their outputs, and measuring the resulting current.
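In circuit terms, each cell’s conductance acts as a weight, the applied voltage as an input, and Ohm’s and Kirchhoff’s laws perform the multiply-and-accumulate. The sketch below is a minimal numerical illustration of that principle with made-up values; it is not the San Diego group’s design.

```python
import numpy as np

# Minimal sketch of analog in-memory matrix-vector multiplication with RRAM.
# Weights are stored as conductances G = 1/R; applying read voltages V and
# summing each column's current gives I = G^T V (Ohm's and Kirchhoff's laws).
# Values are made up for illustration.

rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=(4, 3))   # desired 4-input, 3-output weight matrix

r_scale = 1e6                                  # megaohm-range cells, as in the bulk RRAM work
conductances = weights / r_scale               # map weights onto conductances (siemens)

v_in = np.array([0.2, 0.0, 0.5, 0.3])          # inputs encoded as read voltages (volts)
i_out = conductances.T @ v_in                  # column currents = multiply-accumulate (amperes)

print("Output currents (A):", i_out)
print("Same result in plain linear algebra:", weights.T @ v_in / r_scale)
```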

Traditionally, RRAM stores data by creating low-resistance filaments in the higher-resistance surrounds of a dielectric material. Forming these filaments often needs voltages too high for standard CMOS, hindering its integration inside processors. Worse, forming the filaments is a noisy and random process, not ideal for storing data. (Imagine a neural network’s weights randomly drifting. Answers to the same question would change from one day to the next.)

Moreover, most filament-based RRAM cells’ noisy nature means they must be isolated from their surrounding circuits, usually with a selector transistor, which makes 3D stacking difficult.

Limitations like these mean that traditional RRAM isn’t great for computing. In particular, Kuzum says, it’s difficult to use filamentary RRAM for the sort of parallel matrix operations that are crucial for today’s neural networks.

So, the San Diego researchers decided to dispense with the filaments entirely. Instead they developed devices that switch an entire layer from high to low resistance and back again. This format, called “bulk RRAM”, can do away with both the annoying high-voltage filament-forming step and the geometry-limiting selector transistor.

3D memory for machine learning

The San Diego group wasn’t the first to build bulk RRAM devices, but it made breakthroughs both in shrinking them and in forming 3D circuits with them. Kuzum and her colleagues shrank RRAM to the nanoscale; their device was just 40 nm across. They also managed to stack bulk RRAM into as many as eight layers.

With a single pulse of identical voltage, the researchers could program cells throughout an eight-layer stack, each of which can take any of 64 resistance values, a number that’s very difficult to achieve with traditional filamentous RRAM. And whereas the resistance of most filament-based cells is limited to kiloohms, the San Diego stack operates in the megaohm range, which Kuzum says is better for parallel operations.

“We can actually tune it to anywhere we want, but we think that from an integration and system-level simulations perspective, megaohm is the desirable range,” Kuzum says.

These two benefits–a greater number of resistance levels and a higher resistance–could allow this bulk RRAM stack to perform more complex operations than traditional RRAM can manage.
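To get a feel for the first benefit, the short sketch below, an illustrative example rather than the group’s programming scheme, compares the error introduced when neural-network weights are rounded onto 4, 16, or 64 allowed levels; 64 levels corresponds to 6 bits per cell.

```python
import numpy as np

# Illustration of why more resistance levels per cell matter: the quantization
# error when weights are rounded to a cell's allowed levels.
# Purely illustrative; not the San Diego group's programming scheme.

rng = np.random.default_rng(1)
weights = rng.uniform(0.0, 1.0, size=10_000)   # toy weights, normalized to [0, 1]

def quantize(w, levels):
    """Round each weight to the nearest of `levels` evenly spaced values."""
    step = 1.0 / (levels - 1)
    return np.round(w / step) * step

for levels in (4, 16, 64):
    err = np.abs(weights - quantize(weights, levels)).mean()
    print(f"{levels:3d} levels ({int(np.log2(levels))} bits/cell): mean rounding error {err:.4f}")
```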

Kuzum and colleagues assembled multiple eight-layer stacks into a 1-kilobyte array that required no selectors. Then, they tested the array with a continual learning algorithm: making the chip classify data from wearable sensors—for example, reading data from a waist-mounted smartphone to determine if its wearer was sitting, walking, climbing stairs, or taking another action—while constantly adding new data. Tests showed an accuracy of 90 percent, which the researchers say is comparable to the performance of a digitally-implemented neural network.



This test exemplifies what Kuzum thinks can especially benefit from bulk RRAM: neural network models on edge devices, which may need to learn from their environment without accessing the cloud.

“We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications,” Kuzum says.

The ability to integrate RRAM into an array like this is a significant advance, says Albert Talin, a materials scientist at Sandia National Laboratories in Livermore, California, and a bulk RRAM researcher who wasn’t involved in the San Diego group’s work. “I think that any step in terms of integration is very useful,” he says.

But Talin highlights a potential obstacle: the ability to retain data for an extended period of time. While the San Diego group showed their RRAM could retain data at room temperature for several years (on par with flash memory), Talin says that its retention at the higher temperatures where computers actually operate is less certain. “That’s one of the major challenges of this technology,” he says, especially when it comes to edge applications.

If engineers can prove the technology, then all types of models may benefit. The memory wall has only grown higher this decade, as traditional memory hasn’t been able to keep up with the ballooning demands of large models. Anything that allows models to operate on the memory itself could be a welcome shortcut.

Reference: https://ift.tt/UM1NOkj

Saturday, February 7, 2026

Low-Vision Programmers Can Now Design 3D Models Independently




Most 3D design software requires visual dragging and rotating—posing a challenge for blind and low-vision users. As a result, a range of hardware design, robotics, coding, and engineering work is inaccessible to interested programmers. A visually-impaired programmer might write great code. But because of the lack of accessible modeling software, the coder can’t model, design, and verify physical and virtual components of their system.

However, new 3D modeling tools are beginning to change this equation. A new prototype program called A11yShape aims to close the gap. There are already code-based tools that let users describe 3D models in text, such as the popular OpenSCAD software. Other recent large-language-model tools generate 3D code from natural-language prompts. But even with these, blind and low-vision programmers still depend on sighted feedback to bridge the gap between their code and its visual output.

Blind and low-vision programmers previously had to rely on a sighted person to visually check every update of a model to describe what changed. But with A11yShape, blind and low-vision programmers can independently create, inspect, and refine 3D models without relying on sighted peers.

A11yShape does this by generating accessible model descriptions, organizing the model into a semantic hierarchy, and ensuring every step works with screen readers.

The project began when Liang He, assistant professor of computer science at the University of Texas at Dallas, spoke with his low-vision classmate who was studying 3D modeling. He saw an opportunity to turn his classmate’s coding strategies, learned in a 3D modeling for blind programmers course at the University of Washington, into a streamlined tool.

“I want to design something useful and practical for the group,” he says. “Not just something I created from my imagination and applied to the group.”

Re-imagining Assistive 3D Design With OpenSCAD

A11yShape assumes the user is running OpenSCAD, the script-based 3D modeling editor. The program builds on OpenSCAD, connecting each component of the modeling workflow across three panels in its user interface.

OpenSCAD allows users to create models entirely through typing, eliminating the need for clicking and dragging. Other common graphics-based user interfaces are difficult for blind programmers to navigate.
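As a concrete illustration of what code-based modeling looks like, the sketch below writes a simple OpenSCAD script from Python; the shape and file name are invented for this example and are not taken from A11yShape.

```python
# Illustration of text-only 3D modeling: a small OpenSCAD script written out
# from Python. The mug shape and file name are invented for this example.

scad_script = """
// A simple mug: a hollowed cylinder plus a torus handle
difference() {
    cylinder(h = 40, r = 15);                          // outer body
    translate([0, 0, 3]) cylinder(h = 40, r = 13);     // carve out the inside
}
translate([18, 0, 20])
    rotate([90, 0, 0])
        rotate_extrude() translate([8, 0, 0]) circle(r = 2);   // the handle
"""

with open("mug.scad", "w") as f:
    f.write(scad_script)

print("Wrote mug.scad -- it can be opened and rendered in OpenSCAD without any mouse work.")
```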

A11yShape introduces an AI Assistance Panel, where users can submit real-time queries to ChatGPT-4o to validate design decisions and debug existing OpenSCAD scripts.

A11yShape’s 3-D modeling web interface features a code editor panel with programming capabilities, an AI assistance panel providing contextual feedback, and a model panel displaying the hierarchical structure and rendering of the resulting model. The three panels synchronize code, AI descriptions, and model structure so blind programmers can independently discover how code changes affect designs. Anhong Guo, Liang He, et al.

If a user selects a piece of code or a model component, A11yShape highlights the matching part across all three panels and updates the description, so blind and low-vision users always know what they’re working on.

User Feedback Improved Accessible Interface

The research team recruited four participants with a range of visual impairments and programming backgrounds. The team asked the participants to design models using A11yShape and observed their workflows.

One participant, who had never modeled before, said the tool “provided [the blind and low-vision community] with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”

Participants also reported that long text descriptions still make it hard to grasp complex shapes, and several said that without eventually touching a physical model or using a tactile display, it was difficult to fully “see” the design in their mind.

To evaluate the accuracy of the AI-generated descriptions, the research team recruited 15 sighted participants. On a 1–5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity, and avoiding hallucinations, suggesting the AI is reliable enough for everyday use.


A failed all-at-once attempt to construct a 3-D helicopter shows incorrect shapes and placement of elements. In contrast, when the user completes each individual element before moving forward, results improve significantly. A11yShape helps blind and low-vision programmers verify the design of their models. Source: Anhong Guo, Liang He, et al.

The feedback will help to inform future iterations—which He says could integrate tactile displays, real-time 3D printing, and more concise AI-generated audio descriptions.

Beyond its applications in the professional computer programming community, He noted that A11yShape also lowers the barrier to entry for blind and low-vision computer programming learners.

“People like being able to express themselves in creative ways… using technology such as 3D printing to make things for utility or entertainment,” says Stephanie Ludi, director of the DiscoverABILITY Lab and a professor in the department of computer science and engineering at the University of North Texas. “Persons who are blind and visually impaired share that interest, with A11yShape serving as a model to support accessibility in the maker community.”

The team presented A11yShape in October at the ASSETS conference in Denver.

Reference: https://ift.tt/mQNz6Yv

Friday, February 6, 2026

Sixteen Claude AI agents working together created a new C compiler


Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the AI model agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.


Reference: https://ift.tt/LPz2D4K

Malicious packages for dYdX cryptocurrency exchange empty user wallets


Open source packages published on the npm and PyPI repositories were laced with code that stole wallet credentials from dYdX developers and backend systems and, in some cases, backdoored devices, researchers said.

“Every application using the compromised npm versions is at risk…,” the researchers, from security firm Socket, said Friday. “Direct impact includes complete wallet compromise and irreversible cryptocurrency theft. The attack scope includes all applications depending on the compromised versions and both developers testing with real credentials and production end-users.”

Packages that were infected were:


Reference: https://ift.tt/s67awox

IEEE Online Mini-MBA Aims to Fill Leadership Skills Gaps in AI




Boardroom priorities are shifting from financial metrics toward technical oversight. Although market share and operational efficiency remain business bedrocks, executives also must now manage the complexities of machine learning, the integrity of their data systems, and the risks of algorithmic bias.

The change represents more than just a tech update; it marks a fundamental redefinition of the skills required for business leadership.

Research from the McKinsey Global Institute on the economic impact of artificial intelligence shows that companies integrating it effectively have boosted profit margins by up to 15 percent. Yet the same study revealed a sobering reality: 87 percent of organizations acknowledge significant AI skill gaps in their leadership ranks.

That disconnect between AI’s business potential and executive readiness has created a need for a new type of professional education.

The leadership skills gap in the AI era

Traditional business education, with its focus on finance, marketing, and operations, wasn’t designed for an AI-driven economy. Today’s leaders need to understand not just what AI can do but also how to evaluate investments in the technology, manage algorithmic risks, and lead teams through digital transformations.

The challenges extend beyond the executive suite. Middle managers, project leaders, and department heads across industries are discovering that AI fluency has become essential for career advancement. In 2020 the World Economic Forum predicted that 50 percent of all employees would need reskilling by 2025, with AI-related competencies topping the list of required skills.

IEEE | Rutgers Online Mini-MBA: Artificial Intelligence

Recognizing the skills gap, IEEE partnered with the Rutgers Business School to offer a comprehensive business education program designed for the new era of AI. The IEEE | Rutgers Online Mini-MBA: Artificial Intelligence program combines rigorous business strategy with deep AI literacy.

Rather than treating AI as a separate technical subject, the program incorporates it into each aspect of business strategy. Students learn to evaluate AI opportunities through financial modeling, assess algorithmic risks through governance frameworks, and use change-management principles to implement new technologies.

A curriculum built for real-world impact

The program’s modular structure lets professionals focus on areas relevant to their immediate needs while building toward comprehensive AI business literacy. Each of the 10 modules includes practical exercises and case study analyses that participants can immediately apply in their organization.

The Introduction to AI module provides a comprehensive overview of the technology’s capabilities, benefits, and challenges. Other technologies are covered as well, including how they can be applied across diverse business contexts, laying the groundwork for informed decision‑making and strategic adoption.

Rather than treating AI as a separate technical subject, the online mini-MBA program incorporates the technology throughout each aspect of business strategy.

Building on that foundation, the Data Analytics module highlights how AI projects differ from traditional programming, how to assess data readiness, and how to optimize data to improve accuracy and outcomes. The module can equip leaders to evaluate whether their organization is prepared to launch successful AI initiatives.

The Process Optimization module focuses on reimagining core organizational workflows using AI. Students learn how machine learning and automation are already transforming industries such as manufacturing, distribution, transportation, and health care. They also learn how to identify critical processes, create AI road maps, establish pilot programs, and prepare their organization for change.

Industry-specific applications

The core modules are designed for all participants, and the program highlights how AI is applied across industries. By analyzing case studies in fraud detection, medical diagnostics, and predictive maintenance, participants see underlying principles in action.

Participants gain a broader perspective on how AI can be adapted to different contexts so they can draw connections to the opportunities and challenges in their organization. The approach ensures everyone comes away with a strong foundation and the ability to apply learned lessons to their environment.

Flexible learning for busy professionals

With the understanding that senior professionals have demanding schedules, the mini-MBA program offers flexibility. The online format lets participants engage with content in their own time frame, while live virtual office hours with faculty provide opportunities for real-time interaction.

The program, which offers discounts to IEEE members and flexible payment options, qualifies for many tuition reimbursement programs.

Graduates report that implementing AI strategies developed during the program has helped drive tangible business results. This success often translates into career advancement, including promotions and expanded leadership roles. Furthermore, the curriculum empowers graduates to confidently vet AI vendor proposals, lead AI project teams, and navigate high-stakes investment decisions.

Beyond curriculum content, the mini-MBA can create valuable professional networks among AI-forward business leaders. Participants collaborate on projects, share implementation experiences, and build relationships that extend beyond the program’s 12 weeks.

Specialized training from IEEE

To complement the mini-MBA program, IEEE offers targeted courses addressing specific AI applications in critical industries. The Artificial Intelligence and Machine Learning in Chip Design course explores how the technology is revolutionizing semiconductor development. Integrating Edge AI and Advanced Nanotechnology in Semiconductor Applications delves into cutting-edge hardware implementations. The Mastering AI Integration in Semiconductor Manufacturing course examines how AI enhances production efficiency and quality control in one of the world’s most complex manufacturing processes. AI in Semiconductor Packaging equips professionals to apply machine learning and neural networks to modernize semiconductor packaging reliability and performance.

The programs grant professional development credits including PDHs and CEUs, ensuring participants receive formal recognition for their educational investments. Digital badges provide shareable credentials that professionals can showcase across professional networks, demonstrating their AI competencies to current and prospective employers.

Learn more about IEEE Educational Activities’ corporate solutions and professional development programs at innovationatwork.ieee.org.

Reference: https://ift.tt/PbpO7wa
