Thursday, March 19, 2026

Nigerian Firms Embrace Kit-Based EV Assembly for Cost Savings




A growing number of Nigerian companies are turning to kit-based assembly to bring electric vehicles to market in Africa. Lagos-based Saglev Micromobility Nigeria recently partnered with Dongfeng Motor Corporation, in Wuhan, China, to assemble 18-seat electric passenger vans from imported kits.

Kit-based assembly allows Nigerian firms to reduce costs, create jobs, and develop local technical expertise—key steps toward expanding EV access. Fully assembled, imported EVs face high tariffs that put them out of reach for many African consumers, whereas kit-based approaches make electric mobility more affordable today. Saglev’s initiative reflects a broader trend: CIG Motors, NEV Electric, and regional players in Côte d’Ivoire, Ghana, and Kenya are also leveraging imported kits to build local EV ecosystems, signaling that parts of West Africa are intent on catching up with global electrification efforts.


Expanding the Local EV Ecosystem

CIG Motors operates a kit-assembly plant in Lagos producing vehicles from Chinese automakers GAC Motor and Wuling Motors. These vehicles include the Wuling Bingo, a compact five-door electric hatchback, and the Hongguang Mini EV Macaron, a microcar with roughly 200 kilometers of range, aimed at ride-share operators looking for ultralow-cost urban transport. NEV Electric focuses on electric buses and three-wheelers for urban transit and last-mile delivery.

Saglev’s CEO, Olu Faleye, emphasizes that Nigeria’s EV transition addresses practical economic needs as well as environmental goals. Beyond passenger transport, electric vehicles could help ease one of Nigeria’s persistent agricultural challenges: post-harvest spoilage. Nigeria loses an estimated 30–40 million tonnes of food annually because of weak logistics and limited refrigeration infrastructure, according to the Organization for Technology Advancement of Cold Chain in West Africa.

Electric vans, mini-trucks, and three-wheel cargo vehicles could help close this gap because their batteries can power refrigeration systems during transport without relying on costly diesel fuel. As EV adoption grows and charging infrastructure expands, temperature-controlled transport could become more affordable, reducing spoilage, improving farmer incomes, and helping stabilize food supplies, the organization says.

“I don’t believe that the promised land is making a fully built EV on the ground here.”
–Olu Faleye, Saglev CEO

Beyond Nigeria, Mombasa, Kenya–based Associated Vehicle Assemblers has begun assembling electric taxis and minibuses from imported kits, and Ghana’s government is spurring kit-car assembly there under its national Automotive Development Plan. In Ghana, assemblers benefit from import-duty exemptions on kits and equipment, corporate tax breaks, and access to industrial infrastructure. Saglev is already availing itself of those benefits at its kit-assembly plant in Accra. The company says it also plans to expand its assembly operations to Côte d’Ivoire.


Infrastructure Challenges and Workarounds

Despite these signs that West Africa’s EV ecosystem is gaining traction, limited grid reliability and sparse public charging infrastructure remain major barriers to widespread EV adoption. Urban households in Nigeria experience roughly six or seven blackouts per week, each lasting about 12 hours, according to Nigeria’s National Bureau of Statistics. That’s more downtime each day than the average U.S. household experiences in a year. More than 40 percent of households rely on generators, which supply about 44 percent of residential electricity, according to research by Stears and Sterling Bank.
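
To put those figures in perspective, here is a quick back-of-the-envelope comparison using the article’s Nigerian numbers. The U.S. baseline is an assumption on my part, roughly matching U.S. Energy Information Administration estimates of annual outage hours per household, and is not from this article:

```python
# Rough outage arithmetic; the U.S. annual figure is an assumption.
blackouts_per_week = 6.5        # "six or seven blackouts per week"
hours_per_blackout = 12

nigeria_hours_per_day = blackouts_per_week * hours_per_blackout / 7
us_hours_per_year = 5.5         # assumed EIA-style average, incl. major events

print(f"Nigeria: ~{nigeria_hours_per_day:.1f} outage hours per day")
print(f"U.S.:    ~{us_hours_per_year:.1f} outage hours per year")
```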

Many early EV adopters therefore charge vehicles using gasoline or diesel generators. Faleye notes that Nigerians have long relied on such workarounds and expects fossil fuels to remain part of the EV charging equation for the foreseeable future—at least until falling costs for solar panels and battery storage make cleaner charging viable.

He acknowledges that charging EVs using hydrocarbons is fraught from an environmental perspective, but he points out that the practice still delivers the other benefits of EVs, including lower maintenance costs and synergies with refrigeration and transport logistics. And he points to a 2020 peer-reviewed study in the journal Environmental and Climate Technologies that compared the overall efficiency of internal combustion vehicles and electric vehicles across the full well-to-wheel energy chain. The study’s conclusion: Even after accounting for conversion losses, generating electricity with a diesel or gasoline generator to power an electric vehicle can remain just as efficient overall as burning the same fuel directly in a vehicle’s internal combustion engine.
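
The study’s logic can be sketched with rough numbers. The efficiency figures below are illustrative assumptions for the example, not values taken from the paper:

```python
# Well-to-wheel comparison of two ways to use the same liter of fuel.
# All efficiencies below are illustrative assumptions.

ice_fuel_to_wheel = 0.25        # typical internal-combustion drivetrain

gen_fuel_to_electric = 0.35     # small diesel generator
charger = 0.90                  # charging losses
battery_round_trip = 0.95       # battery charge/discharge losses
ev_drivetrain = 0.85            # inverter + motor

ev_chain = gen_fuel_to_electric * charger * battery_round_trip * ev_drivetrain

print(f"ICE, fuel to wheel:  {ice_fuel_to_wheel:.1%}")
print(f"Generator-to-EV:     {ev_chain:.1%}")   # ~25%: roughly on par
```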


Scalable EV Adoption in Nigeria

The approach taken by Saglev and other Nigerian kit-car builders shows how local assembly can advance EV adoption even where infrastructure remains unreliable. By starting with kits, companies can deploy practical electric mobility solutions now while building the supply chains and technical expertise needed for more resource-intensive localized production.

Still, when asked whether Saglev plans to eventually move beyond kit assembly to independent design and manufacturing of EVs, Faleye calls such a move impractical.

“I don’t believe that the promised land is making a fully built EV on the ground here,” he says. “For me to do efficient vehicle manufacturing, I’d need a lot of robotics and 3D printing. That expense is unnecessary—it would just increase costs and make EVs more expensive.”

In a country where electricity can disappear for days, Nigeria’s kit-based EV strategy highlights a practical truth: incremental progress and ingenuity may matter more than perfect infrastructure. For Saglev, every kit-based vehicle rolling off the line is not just a van or bus—it’s a step toward an EV ecosystem that works for Nigeria’s realities today.

Reference: https://ift.tt/Y3cuEag

How Your Virtual Twin Could One Day Save Your Life




One morning in May 2019, a cardiac surgeon stepped into the operating room at Boston Children’s Hospital more prepared than ever before to perform a high-risk procedure to rebuild a child’s heart. The surgeon was experienced, but he had an additional advantage: He had already performed the procedure on this child dozens of times—virtually. He knew exactly what to do before the first cut was made. Even more important, he knew which strategies would provide the best possible outcome for the child whose life was in his hands.

How was this possible? Over the prior weeks, the hospital’s surgical and cardio-engineering teams had come together to build a fully functioning model of the child’s heart and surrounding vascular system from MRI and CT scans. They began by carefully converting the medical imaging into a 3D model, then used physics to bring the 3D heart to life, creating a dynamic digital replica of the patient’s physiology. The mock-up reproduced this particular heart’s unique behavior, including details of blood flow, pressure differentials, and muscle-tissue stresses.

This type of model, known as a virtual twin, can do more than identify medical problems—it can provide detailed diagnostic insights. In Boston, the team used the model to predict how the child’s heart would respond to any cut or stitch, allowing the surgeon to test many strategies to find the best one for this patient’s exact anatomy.

That day, the stakes were high. With the patient’s unique condition—a heart defect in which large holes between the atria and ventricles were causing blood to flow between all four chambers—there was no manual or textbook to fully guide the doctors. The condition strains the lungs, so the doctors planned an open-heart surgery to reroute deoxygenated blood from the lower body directly to the lungs, bypassing the heart. Typically with this kind of surgery, decisions would be made on the fly, under demanding conditions, and with high uncertainty. But in this case, the plan had been tested in advance, and the entire team had rehearsed it before the first incision. The surgery was a complete success.

Such procedures have become routine at the Boston hospital. Since that first patient, nearly 2,000 procedures have been guided by virtual-twin modeling. This is the power of the technology behind the Living Heart Project, which I launched in 2014, five years before that first procedure. The project started as an exploratory initiative to see if modeling the human heart was possible. Now with more than 150 member organizations across 28 countries, the project includes dozens of multidisciplinary teams that regularly use multiscale virtual twins of the heart and other vital organs.

This technology is reshaping how we understand and treat the human body. To reach this transformative moment, we had to solve a fundamental challenge: building a digital heart accurate enough—and trustworthy enough—to guide real clinical decisions.

A father’s concern

Now entering its second decade, the Living Heart Project was born in part from a personal conviction. For many years, I had watched helplessly as my daughter Jesse faced endless diagnostic uncertainty due to a rare congenital heart condition in which the position of the ventricles is reversed, threatening her life as she grew. As an engineer, I understood the heart as an array of pumping chambers, controlled by an electrical signal, with its blood flow carefully regulated by valves. Yet I struggled to grasp the unique structure and behavior of my daughter’s heart well enough to contribute meaningfully to her care. Her specialists knew the bleak forecast children like her faced if left untreated, but because every heart with her condition is anatomically unique, they had little more than their best guesses to guide their decisions about what to do and when to do it. With each specialist, a new guess.

Then my engineering curiosity sparked a question that has guided my career ever since: Why can’t we simulate the human body the way we simulate a car or a plane?

At a visualization center in Boston, VR imagery helps the mother of a young girl with a complex heart defect understand the inner workings of her child’s heart. Dassault Systèmes

I had spent my career developing powerful computational tools to help engineers build digital models of complex mechanical systems, using models that ranged from the interactions of individual atoms to the components of entire vehicles. What most of these models had in common was the use of physics to predict behavior and optimize performance. But in medicine today, those same physics-based approaches rarely inform decision-making. In most clinical settings, treatment decisions still hinge on judgments drawn from static 2D images, statistical guidelines, and retrospective studies.

This was not always the case. Historically, physics was central to medicine. The word “physician” itself traces back to the Latin physica, which translates to “natural science.” Early doctors were, in a sense, applied physicists. They understood the heart as a pump, the lungs as bellows, and the body as a dynamic system. To be a physician meant you were a master of physics as it applied to the human body.

As medicine matured, biology and chemistry grew to dominate the field, and the knowledge of physics got left behind. But for patients like my daughter, that child in Boston, and millions like them, outcomes are governed by mechanics. No pill or ointment—no chemistry-based solution—would help, only physics. While I did not realize it at the time, virtual twins can reunite modern physicians with their roots, using engineering principles, simulation science, and artificial intelligence.

A decade of progress

The LHP concept was simple: Could we combine what hundreds of experts across many specialties knew about the human heart to build a digital twin accurate enough to be trusted, flexible enough to personalize, and predictive enough to guide clinical care?

We invited researchers, clinicians, device and drug companies, and government regulators to share their data, tools, and knowledge toward a common goal that would lift the entire field of medicine. The Living Heart Project launched with a dozen or so institutions on board. Within a year, we had created the first fully functional virtual twin of the human heart.

The Living Heart was not an anatomical rendering, tuned to simply replicate what we observed. It was a first-principles model, coupling the network of fibers in the heart’s electrical system, the biological battery that keeps us alive, with the heart’s mechanical response, the muscle contractions that we know as the heartbeat.

The Living Heart virtual twin simulates how the heart beats, offering different views to help scientists and doctors better predict how it will respond to disease or treatment. The center view shows the fine engineering mesh, the detailed framework that allows computers to model the heart’s motion. The image on the right uses colors to show the electrical wave that drives the heartbeat as it conducts through the muscle, and the image on the left shows how much strain is on the tissue as it stretches and squeezes. Dassault Systèmes

Academic researchers had long explored computational models of the heart, but those projects were typically limited by the technology they had access to. Our version was built on industrial-grade simulation software from Dassault Systèmes, a company best known for modeling tools used in aerospace and automotive engineering, where I was working to develop the engineering simulation division. This platform gave teams the tools to personalize an individual heart model using the patient’s MRI and CT data, blood-pressure readings, and echocardiogram measurements, directly linking scans to simulations.

Surgeons then began using the Living Heart to model procedures. Device makers used it to design and test implants. Pharmaceutical companies used it to evaluate drug effects such as toxicity. Hundreds of publications have emerged from the project, and because they all share the same foundation, the findings can be reproduced, reused, and built upon. With each application, the research community’s understanding of the heart snowballed.

Early on, we also addressed an essential requirement for these innovations to make it to patients: regulatory acceptance. Within the project’s first year, the U.S. Food and Drug Administration agreed to join the project as an observer. Over the next several years, methods for using virtual-heart models as scientific evidence began to take shape within regulatory research programs. In 2019, we formalized a second five-year collaboration with the FDA’s Center for Devices and Radiological Health with a specific goal.

That goal was to use the heart model to create a virtual patient population and re-create a pivotal trial of a previously approved device for repairing the heart’s mitral valve. This helped our team learn how to create such a population, and let the FDA experiment with evaluating virtual evidence as a replacement for evidence from flesh-and-blood patients. In August 2024, we published the results, creating the first FDA-led guidelines for in silico clinical trials and establishing a new paradigm for streamlining and reducing risk in the entire clinical-trial process.

In 10 years, we went from a concept that many people doubted could be achieved to regulatory reality. But building the heart was only the beginning. Following the template set by the heart team, we’ve expanded the project to develop virtual twins of other organs, including the lungs, liver, brain, eyes, and gut. Each corresponds to a different medical domain, which has its own community, data types, and clinical use cases. Working independently, these teams are progressing toward a breakthrough in our understanding of the human body: a multiscale, modular twin platform where each organ twin could plug into a unified virtual human.

How a digital twin of the heart is constructed

A cardiac digital twin starts with medical imaging, typically MRI, CT, or both. The slices are reconstructed into the 3D geometry of the heart and connected vessels. The geometry of the whole organ must then be segmented into its constituent parts, so each substructure—atria, ventricles, valves, and so on—can be assigned its unique properties.

At this point, the object is converted to a functional, computational model that can represent how the various cardiac tissues deform under load—the mechanics. The complete digital twin model becomes “living” when we integrate the electrical fiber network that drives mechanical contractions in the muscle tissue.

Each part of the heart, such as the left ventricle [left], is superimposed with a detailed digital mesh to re-create its physiology. These pieces come together to form an anatomically accurate rendering of the whole organ [right]. Dassault Systèmes

To simulate circulation, the twin adds computational models of hemodynamics, the physics of blood flow and pressure. The model is constrained by boundary conditions of blood flow, valve behavior, and vascular resistance set to closely match human physiology. This lets the model predict blood flow patterns, pressure differentials, and tissue stresses.

Finally, the model is personalized and calibrated using available patient data, such as how much the volume of the heart chambers changes during the cardiac cycle, pressure measurements, and the timing of electrical pulses. This means the twin reflects not only the patient’s anatomy but how their specific heart functions.
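
In code, the construction steps above might be organized like the minimal sketch below. Every name here is a hypothetical stand-in for illustration only; it is not the Living Heart Project’s actual software, which runs on Dassault Systèmes’ industrial simulation platform:

```python
# Hypothetical outline of the cardiac-twin build pipeline described above.
from dataclasses import dataclass, field

@dataclass
class CardiacTwin:
    anatomy: dict                                     # segmented substructures
    tissue: dict = field(default_factory=dict)        # mechanics + electrics
    boundaries: dict = field(default_factory=dict)    # hemodynamics

def build_twin(mri_ct_slices):
    # 1. Reconstruct 3D geometry from the slices and segment substructures.
    #    (mri_ct_slices is a placeholder input in this sketch.)
    anatomy = {"atria": ..., "ventricles": ..., "valves": ...}
    twin = CardiacTwin(anatomy)
    # 2. Assign mechanical properties: how each tissue deforms under load.
    twin.tissue["mechanics"] = "per-substructure constitutive models"
    # 3. Make it 'living': couple the electrical fiber network that
    #    drives muscle contraction.
    twin.tissue["electrophysiology"] = "fiber-oriented conduction network"
    # 4. Add hemodynamic boundary conditions for blood flow and pressure.
    twin.boundaries.update(inflow="venous return",
                           outflow="vascular resistance",
                           valves="pressure-driven opening and closing")
    return twin

def personalize(twin, patient_data):
    # 5. Calibrate against measured chamber volumes, pressures, and
    #    electrical timing so the twin matches this patient's function.
    twin.tissue["calibration"] = dict(patient_data)
    return twin
```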

Building bigger cohorts with generative AI

When the FDA in silico clinical trial initiative launched in 2019, the project’s focus shifted from these handcrafted virtual twins of specific patients to cohorts large enough to stand in for entire trial populations. That scale is feasible today only because virtual twins have converged with generative AI. Modeling thousands of patients’ responses to a treatment or projecting years of disease progression is prohibitively slow with conventional digital-twin simulations. Generative AI removes that bottleneck.

AI boosts the capability of virtual twins in two complementary ways. First, machine learning algorithms are unrivaled at integrating the patchwork of imaging, sensor, and clinical records needed to build a high-fidelity twin. The algorithms rapidly search thousands of model permutations, benchmark each against patient data, and converge on the most accurate representation. Workflows that once required months of manual tuning can now be completed in days, making it realistic to spin up population-scale cohorts or to personalize a single twin on the fly in the clinic.
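
One simple way to picture that search: propose parameter permutations, score each against the patient’s measurements, and keep the best fit. The toy simulator and parameter names below are invented so the example runs end to end; real workflows use physics-based solvers rather than closed-form formulas:

```python
# Toy version of automated twin personalization by parameter search.
import random

measured = {"ejection_fraction": 0.48, "qrs_duration_ms": 110.0}

def simulate(params):
    # Stand-in for a physics-based heart simulation.
    return {"ejection_fraction": 0.30 + 0.40 * params["contractility"],
            "qrs_duration_ms": 80.0 + 60.0 * (1.0 - params["conduction"])}

def error(sim):
    # Total relative mismatch between simulated and measured values.
    return sum(abs(sim[k] - v) / v for k, v in measured.items())

best, best_err = None, float("inf")
for _ in range(10_000):                      # thousands of permutations
    candidate = {"contractility": random.random(),
                 "conduction": random.random()}
    err = error(simulate(candidate))
    if err < best_err:
        best, best_err = candidate, err

print(best, f"relative error: {best_err:.4f}")
```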

Second, enriching AI models’ training sets with data from validated virtual patients grounds the AI simulations in physics. By contrast, many conventional AI predictions for patient trajectories rely on statistical modeling trained on retrospective datasets. Such models can drift beyond physiological reality, but virtual twins anchor predictions in the laws of hemodynamics, electrophysiology, and tissue mechanics. This added rigor is indispensable for both research and clinical care—especially in areas where real-world data are scarce, whether because a disease is rare or because certain patient populations, such as children, are underrepresented in existing datasets.

Enabling in silico clinical trials

On the research side, the FDA-sponsored In Silico Clinical Trial Project that we completed in 2024 opened a new world for medical innovations. A conventional clinical trial may take a decade, and 90 percent of new drug treatments fail in the process. Virtual twins, combined with AI methods, allow researchers to design and test treatments quickly in a simulated human environment. With a small library of virtual twins, AI models can rapidly create expansive virtual patient cohorts to cover any subset of the general population. As clinical data becomes available, it can be added into the training set to increase reliability and enable better predictions.

The Living Heart Project has expanded beyond the heart, modeling organs throughout the body. The 3D brain reconstruction [top] shows major pathways in the brain’s white matter connecting color-coded regions of the brain. The lung virtual twin [middle] combines the organ’s geometry with a physics-based simulation of air flowing down the trachea and into the bronchi. And the cross section of a patient’s foot [bottom] shows points of strain in the soft tissue when bearing weight. Dassault Systèmes

Virtual twin cohorts can represent a realistic population by building individual “virtual patients” that vary by age, gender, race, weight, disease state, comorbidities, and lifestyle factors. These twins can be used as a rich training set for the AI model, which can expand the cohort from dozens to hundreds of thousands. Next the virtual cohort can be filtered to identify patients likely to respond to a treatment, increasing the chances of a successful trial for the target population.

The trial design can also include a sampling of patient types less likely to respond or with elevated risk factors, thus allowing regulators and clinicians to understand the risks to the broader population without jeopardizing overall trial success. This methodology enhances precision and efficiency in clinical research, providing population-level insights previously available only after many years of real-world evidence.
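
A minimal sketch of that cohort workflow follows. The attributes, ranges, and screening rule are invented for the example; the rule stands in for actually simulating treatment response on each patient’s twin:

```python
# Illustrative virtual-cohort generation and filtering.
import random

def virtual_patient():
    return {"age": random.randint(18, 90),
            "sex": random.choice(["F", "M"]),
            "weight_kg": random.uniform(45, 130),
            "severity": random.uniform(0, 1),       # disease state
            "comorbidity": random.random() < 0.3}

def likely_responder(p):
    # Invented rule standing in for a twin-based response simulation.
    return p["severity"] > 0.4 and not p["comorbidity"]

cohort = [virtual_patient() for _ in range(100_000)]   # AI-expanded cohort
responders = [p for p in cohort if likely_responder(p)]

# Also sample higher-risk non-responders so regulators can see risks
# across the broader population without jeopardizing the trial.
risk_sample = random.sample(
    [p for p in cohort if not likely_responder(p)], 500)

print(len(responders), "likely responders;", len(risk_sample), "risk sample")
```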

Of course, though today’s heart digital twins are powerful, they’re not perfect replicas. Their accuracy is bounded by three main factors: what we can measure (for example, image resolution or the uncertainty of how tissue behaves in real life), what we must assume about the physiology, and what we can validate against real outcomes. Many inputs, like scarring, microvascular function, or drug effects, are difficult to capture clinically, so models often rely on population data or indirect estimation. That means predictions can be highly reliable for certain questions but remain less certain for others. Additionally, today’s digital twins lack validation for predicting long-term outcomes years in the future, because the technology has been in use for only a few years.

Over time, each of these limitations will steadily shrink. Richer, more standardized data will tighten personalization of the models. AI tools will help automate labor-intensive steps. And the collection of longitudinal data will improve the model’s ability to reliably predict how the body will evolve over time.

How virtual twins will change health care

Throughout modern medicine, new technologies have sharpened our ability to diagnose, providing ever-clearer images, lab data, and analytics that tell physicians what is presently happening inside a patient’s body. Virtual twins shift that paradigm, giving clinicians a predictive tool.

This “Living Lung” virtual-twin simulation shows strain patterns during breathing. Mona Eskandari/UC Riverside

Early demonstrations are already appearing in many areas of medicine, including cardiology, orthopedics, and oncology. Soon, doctors will also be able to collaborate across specialties, using a patient-specific virtual twin as the common ground for discussing potential interactions or side effects they couldn’t predict independently.

Although these applications will take some time to become the standard in clinical care, more changes are on the horizon. Real-time data from wearables, for example, could continuously update a patient’s personalized virtual twin. This approach could empower patients to understand and engage more deeply in their care, as they could see the direct effects of medical and lifestyle changes. In parallel, their doctors could get comprehensive data feeds, using virtual twins to monitor progress.

Imagine a digital companion that shows how your particular heart will react to different amounts of salt intake, stress, or sleep deprivation. Or a visual explanation of how your upcoming surgery will affect your circulation or breathing. Virtual twins could demystify the body for patients, fostering trust and encouraging proactive health decisions.

How are virtual twins being used in medicine?


  • Virtual twins have guided cardiovascular surgeries, providing predictions and exposing hidden details that even expert clinicians might miss, such as subtle tissue responses and flow dynamics.
  • Oncologists are modeling tumor growth and the body’s response to different therapies, reducing the uncertainty in choosing the best treatment path for both medical and quality-of-life metrics.
  • Orthopedic specialists are personalizing implants to deliver custom-made solutions, considering not only the local environment but also the overall body kinematics that will govern long-term outcomes.

A new era of healing

With the Living Heart Project, we’re bringing physics back to physicians. Modern physicians won’t need to be physicists, any more than they need to be chemists to use pharmacology. However, to benefit from the new technology, they will need to adapt their approach to care.

This means no longer seeing the body as a collection of discrete organs and considering only symptoms, but instead viewing it as a dynamic system that can be understood, and in most cases, guided toward health. It means no longer guessing what might work but knowing—because the simulation has already shown the result. By better integrating engineering principles into medicine, we can redefine it as a field of precision, rooted in the unchanging laws of nature. The modern physician will be a true physicist of the body and an engineer of health.

Reference: https://ift.tt/odRpNLZ

Overcoming Core Engineering Barriers in Humanoid Robotics Development




A technical examination of the sensing, motion control, power, and thermal challenges facing humanoid robotics engineers — with component-level design strategies for real-world deployment.

What Attendees Will Learn

  1. Why motion control remains the hardest unsolved problem — Explore the modelling complexity, real-time feedback requirements, and sensor fusion demands of maintaining stable bipedal locomotion across dynamic environments.
  2. How sensing architectures enable perception and safety — Understand the role of inertial measurement units, force/torque feedback, and tactile sensing in achieving reliable human-robot interaction and collision avoidance. (A minimal sensor-fusion sketch appears after this list.)
  3. What power and thermal constraints mean for system design — Examine the trade-offs in battery chemistry selection (LFP vs. NCA), DC/DC converter topologies, and thermal protection strategies that determine operational endurance.
  4. How the industry is transitioning from prototype to mass production — Learn about the shift toward modular architectures, cost-driven component selection, and supply chain readiness projected for the late 2020s.
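
As a taste of the sensor-fusion topic in item 2, here is a minimal complementary-filter sketch. It is a common textbook baseline, not anything from the whitepaper itself, and real humanoid controllers typically use richer estimators such as Kalman filters:

```python
# Complementary filter: blend gyro integration (smooth but drifting)
# with accelerometer tilt (noisy but drift-free) to estimate pitch.
import math

def fuse_pitch(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # tilt from gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Toy run: stationary robot with a small gyro bias; the accelerometer
# term keeps the estimate from drifting off.
pitch, dt = 0.0, 0.01
for _ in range(1000):                            # 10 seconds at 100 Hz
    pitch = fuse_pitch(pitch, gyro_rate=0.002,   # bias-like drift
                       accel_x=0.0, accel_z=9.81, dt=dt)
print(f"estimated pitch after 10 s: {pitch:.4f} rad")   # stays near zero
```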

Download this free whitepaper now!

Reference: https://ift.tt/NUthPXa

Wednesday, March 18, 2026

ENIAC, the First General-Purpose Digital Computer, Turns 80




Happy 80th anniversary, ENIAC! The Electronic Numerical Integrator and Computer, the first large-scale, general-purpose, programmable electronic digital computer, helped shape our world.

On 15 February 1946, ENIAC—developed in the Moore School of Electrical Engineering at the University of Pennsylvania, in Philadelphia—was publicly demonstrated for the first time. Although the machine was primitive by today’s standards, its purely electronic design and programmability were breakthroughs in computing at the time. ENIAC made high-speed, general-purpose computing practicable and laid the foundation for today’s machines.

On the eve of its unveiling, the U.S. Department of War issued a news release hailing it as a new machine “expected to revolutionize the mathematics of engineering and change many of our industrial design methods.” Without a doubt, electronic computers have transformed engineering and mathematics, as well as practically every other domain, including politics and spirituality.

ENIAC’s success ushered in the modern computing industry and laid the foundation for today’s digital economy. During the past eight decades, computing has grown from a niche scientific endeavor into an engine of economic growth, the backbone of billion-dollar enterprises, and a catalyst for global innovation. Computing has led to a chain of innovations and developments such as stored programs, semiconductor electronics, integrated circuits, networking, software, the Internet, and distributed large-scale systems.

Inside the ENIAC

The motivation for developing ENIAC was the need for faster computation during World War II. The U.S. military wanted to produce extensive artillery firing tables for field gunners to quickly determine settings for a specific weapon, a target, and conditions. Calculating the tables by hand took “human computers” several days, and the available mechanical machines were far too slow to meet the demand.

80 Years of Electronic Computer Milestones


1946 – ENIAC operational: Birth of electronic computing
1951 – UNIVAC I: Start of commercial computing
1958 – Integrated circuit: Foundation for modern computer hardware
1964 – IBM System/360: Popular mainframe computer
1970 – Programmed Data Processor (PDP-11): Popular 16-bit minicomputer
1971 – Intel 4004: Beginning of the microprocessor and microcomputer era
1975 – Cray-1: First supercomputer
1977 – VAX: Popular 32-bit minicomputer
1981 – IBM PC: Personal and small-business computing
1989 – World Wide Web: Digital communication, interaction, and transaction (e-commerce)
2002 – Amazon Web Services: Beginning of the cloud computing revolution
2010 – Apple iPad: Handheld computer/tablet
2010 – Industry 4.0: Real-time decision-making, smart manufacturing, and logistics
2016 – First reprogrammable quantum computer demonstrated: Ignited interest in quantum computing
2023 – Generative AI boom: Widespread use of GenAI by individuals, businesses, and academia
2026 – ENIAC’s 80th anniversary: 80 years of computing evolution


In 1942 John Mauchly, an associate professor of electrical engineering at Penn’s Moore School, suggested using vacuum tubes to speed up calculations. Following up on his theory, the U.S. Army Ballistic Research Laboratory, which was responsible for providing artillery settings to soldiers in the field, commissioned Mauchly and his colleagues J. Presper Eckert and Adele Katz Goldstine to work on a new high-speed computer. Eckert was a lab instructor at Moore, and Goldstine became one of ENIAC’s programmers. It took them a year to design ENIAC and 18 months to build it.

The computer contained about 18,000 vacuum tubes, which were cooled by 80 air blowers. More than 30 meters long, it filled a 9-by-15-meter room and weighed about 30 tons. It consumed as much electricity as a small town.

Programming the machine was difficult. ENIAC did not have stored programs, so to reprogram the machine, operators manually reconfigured cables with switches and plugboards, a process that took several days.

By the 1950s, large universities had either acquired or built their own machines to rival ENIAC. The schools included Cambridge (EDSAC), MIT (Whirlwind), and Princeton (IAS). Researchers used the computers to model physical phenomena, solve mathematical problems, and perform simulations.

After almost nine years of operation, ENIAC was officially decommissioned on 2 October 1955.

ENIAC in Action: Making and Remaking the Modern Computer, a book by Thomas Haigh, Mark Priestley, and Crispin Rope, describes the machine’s design, construction, and testing and dives into its afterlife. The book also outlines the complex relationship between ENIAC and its designers, as well as its revolutionary approaches to computer architecture.

In the early 1970s, there was a controversy over who invented the electronic computer and who would be assigned the patent. In 1973 Judge Earl Richard Larson of U.S. District Court in Minnesota ruled in the Honeywell v. Sperry Rand case that Eckert and Mauchly did not invent the automatic electronic digital computer but instead had derived their subject matter from a computer prototyped in 1939 by John Vincent Atanasoff and Clifford Berry at Iowa State College (now Iowa State University). The ruling granted Atanasoff legal recognition as the inventor of the first electronic digital computer.

IEEE’s ENIAC Milestone

In 1987 IEEE designated ENIAC as an IEEE Milestone, citing it as “a major advance in the history of computing” and saying the machine “established the practicality of large-scale electronic digital computers and strongly influenced the development of the modern, stored-program, general-purpose computer.”

The commemorative Milestone plaque is displayed at the Moore School, by the entrance to the classroom where ENIAC was built.


“The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.”



A paper on the machine, published in 1996 in IEEE Annals of the History of Computing and available in the IEEE Xplore Digital Library, is a valuable source of technical information.

“The Second Life of ENIAC,” an article published in the Annals in 2006, covers a lesser-known chapter in the machine’s history: how it evolved from a static system—configured and reconfigured through laborious cable plugging—into a precursor of today’s stored-program computers.

A classic history paper on ENIAC was published in the December 1995 IEEE Technology and Society Magazine.

The IEEE Inspiring Technology: 34 Breakthroughs book, published in 2023, features an ENIAC chapter.

The women behind ENIAC

One of the most remarkable aspects of the ENIAC story is the pivotal role women played, according to the book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer, highlighted in an article in The Institute. There were no “programmers” at that time; only schematics existed for the computer. Six women, known as the ENIAC 6, became the machine’s first programmers.

The ENIAC 6 were Kathleen Antonelli, Jean Bartik, Betty Holberton, Marlyn Meltzer, Frances Spence, and Ruth Teitelbaum.

“These six women found out what it took to run this computer, and they really did incredible things,” Penn professor Mitch Marcus said in a 2016 PhillyVoice article. Marcus teaches in Penn’s computer and information science department.

In 1997 all six female programmers were inducted into the Women in Technology International Hall of Fame, in Los Angeles.

Two other women contributed to the programming. Goldstine wrote ENIAC’s five-volume manual, and Klára Dán von Neumann, wife of John von Neumann, helped train the programmers and debug and verify their code.

To honor the women of ENIAC, the IEEE Computer Society established the annual Computer Pioneer Award in 1981. Eckert and Mauchly were among the award’s first recipients. In 2008 Bartik was honored with the award. Nominations are open to all professionals, regardless of gender.

An ENIAC replica

Last year a group of 80 autistic students, ages 12 to 16, from PS Academy Arizona, in Gilbert, recreated the ENIAC using 22,000 custom parts. It took the students almost six months to assemble it.

A ceremony was held in January to display their creation. The full-scale replica features actual-size panels made from layered cardboard and wood; all electronic components are simulated rather than electrically active. The machine, illuminated by hundreds of LEDs, is accompanied by a soundtrack that simulates the deep hum of ENIAC’s transformers and the rhythmic clicking of relays.


A woman in the 1940s operates a bulky computer-adding machine, which resembles a typewriter and prints large stacks of paper with tabulated answers.

“Every major unit, accumulators, function tables, initiator, and master programmer is present and placed exactly where it was on the original machine,” Tom Burick, the teacher who mentored the project, said at the ceremony.

The replica, still on display at the school, is expected to be moved to a more permanent spot in the near future.

ENIAC’s legacy

ENIAC’s significance is both technical and symbolic. Technically, it marks the beginning of the chain of innovations that created today’s computational infrastructure. Symbolically, it made governments, militaries, universities, and industry view computation as a tool for improvement and for innovative applications that had previously been impossible. It marked a tectonic shift in the way humans approach problem-solving, modeling, and scientific reasoning.

The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.

As Eckert is reported to have said, “There are two epochs in computer history: Before ENIAC and After ENIAC.”

Coevolution of programming languages

The remarkable evolution of computer hardware during the past 80 years has been accompanied by advances in programming languages—the essential drivers of computing.

From the manual rewiring of ENIAC to the orchestration of intelligent, distributed systems, programming languages have steadily evolved to make computers more powerful, expressive, and accessible.

Lessons From Computing’s Remarkable Journey


Computing history teaches us that flexibility, accessibility, collaboration, sound governance, and forward thinking are essential for sustained technological progress. In a recent Communications of the ACM article, Richa Gupta identified four historic shifts that led to computing’s rapid, transformative progress:

  1. Programmable machines taught us that flexibility is key; technologies that adapt and are repurposed scale better.
  2. The Internet showed that connection and standard protocols drive explosive growth but also bring new risks such as data security issues, invasion of privacy, and misuse.
  3. Personal computers illustrated that accessibility and usability matter more than raw power. When nonexperts can use a tool easily, adoption rises.
  4. The open-source movement revealed that collaborative innovation accelerates growth and helps spot problems early.

Predictions for computing in the decades ahead

The evolution of computing will continue along multiple trajectories, with the emphasis moving from generalization to specialization (for AI, graphics, security, and networking), from monolithic system design to modular integration, and from performance-centric metrics alone to energy efficiency and sustainability as primary objectives.

Increasingly, security will be built into hardware by design. Computing paradigms will expand beyond traditional deterministic models to embrace probabilistic, approximate, and hybrid approaches for certain tasks.

Those developments will usher in a new era of computing and a new class of applications.

Reference: https://ift.tt/TWIuEQi

Federal cyber experts called Microsoft's cloud a "pile of shit," approved it anyway


In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”


Reference: https://ift.tt/bXcND7e

Tuesday, March 17, 2026

New Polymer Blend Could Help Store Energy for the Grid and EVs




As electronics demand higher energy density, one component has proved challenging to shrink: the capacitor. Making a smaller capacitor usually requires thinning the dielectric layer or shrinking the electrode surface area, which typically reduces how much energy the device can store. A new polymer material could help change that.

In a study published 18 February in Nature, a Pennsylvania State University-led team reported a capacitor crafted from a polymer blend that can operate at temperatures up to 250 °C while storing roughly four times as much energy as conventional polymer capacitors. Today’s advanced polymer capacitors typically function only up to about 100 °C, meaning engineers often rely on bulky cooling systems in high-power electronics. The research team has filed a patent for the polymer capacitors and plans to bring them to market.

Capacitors deliver rapid bursts of energy and stabilize voltage in circuits, making them essential in applications ranging from electric vehicles and aerospace electronics to power-grid infrastructure and AI data centers. Yet while transistors have steadily shrunk with advances in semiconductor manufacturing, passive components such as capacitors and inductors have not scaled at the same pace.

“Capacitors can account for 30 to 40 percent of the volume in some power electronics systems,” says Qiming Zhang, an electrical engineering researcher at Penn State and study author, explaining why it’s important to make smaller capacitors.

A plastics blend more powerful than its parts

The research team combined two commercially available engineered plastics: polyetherimide (PEI), originally developed by General Electric and widely used in industrial equipment, and PBPDA, known for strong heat resistance and electrical insulation. When processed together under controlled conditions, the polymers self-assemble into nanoscale structures that form thin dielectric films inside capacitors. Those structures help suppress electrical leakage while allowing the material to polarize strongly in an electric field, allowing greater energy storage.

The resulting material exhibits an unusually high dielectric constant—a measure of how much electrical energy a material can store. Most polymer dielectrics have values around four, but the blended polymer dielectric in the new work had a value of 13.5.
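
For a linear dielectric, stored energy density scales with the dielectric constant at a given field strength, which is why the jump from about 4 to 13.5 translates to roughly four times the energy. A quick illustration follows; the operating-field value is an assumption for the example, not a figure from the paper:

```python
# Energy density of a linear dielectric: U = 0.5 * eps0 * eps_r * E^2.
EPS0 = 8.854e-12                     # vacuum permittivity, F/m

def energy_density_j_per_cc(eps_r, e_field_v_per_m):
    # J/m^3 converted to J/cm^3
    return 0.5 * EPS0 * eps_r * e_field_v_per_m**2 / 1e6

field = 500e6                        # assumed 500 MV/m operating field
conventional = energy_density_j_per_cc(4.0, field)
blend = energy_density_j_per_cc(13.5, field)

print(f"eps_r = 4:    {conventional:.1f} J/cm^3")
print(f"eps_r = 13.5: {blend:.1f} J/cm^3")
print(f"ratio: {blend / conventional:.2f}x")   # ~3.4x, i.e. roughly 4x
```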

“If you look at the literature up to now, no one has reached this level of dielectric constant in this type of polymer system,” Zhang says. “Putting two commonly used polymers together and seeing this kind of performance was a surprise to many people.”

Because the material can remain operational even at elevated temperatures—such as those from extreme environmental heat or hot spots in densely built components—capacitors built from this polymer could potentially store the same amount of energy in a smaller package.

“With this material, you can make the same device using about [one-fourth as much] material,” Zhang says. “Because the polymers themselves are inexpensive, the cost does not increase. At the same time, the component can become smaller and lighter.”

How the polymer mix improves capacitors

The researchers’ finding is “a big advancement,” says Alamgir Karim, a polymer research director at the University of Houston who was not involved in the Penn State development. “Normally when you mix polymers, you don’t expect the dielectric constant to increase.”

Karim says the effect likely arises from nanoscale interfaces created when the polymers partially separate. “At about a 50–50 mixture, the polymers don’t fully mix and instead create a very large interfacial area,” he says. “Those interfaces may be where the unusual electrical behavior comes from.”

If the material can be produced at scale, it could help address a key bottleneck in high-power electronics. Higher-temperature capacitors could reduce cooling requirements and allow engineers to pack more power into smaller systems—an advantage for aerospace platforms, electric vehicles, the electric grid, and other high-temperature environments.

But translating the concept from laboratory methods to commercial manufacturing may present challenges, says Zongliang Xie, a postdoctoral researcher at the Lawrence Berkeley National Laboratory. The Penn State team is now producing small dielectric films, but industrial capacitor manufacturing typically requires continuous rolls of material that can extend for kilometers.

“Industry generally prefers extrusion-based processing because it’s easier and cheaper to control,” Xie says. “Scaling to produce great lengths of film while maintaining the same structure and performance could complicate matters. There’s potential, but it’s also challenging.”

Still, researchers say the discovery demonstrates that new performance limits may still be unlocked using familiar materials. “Developing the material is only the first step,” Zhang says. “But it shows people that this barrier can be broken.”

Reference: https://ift.tt/QmjeiRs

Wanted: Europe’s Missing Cloud Provider




Looming over the internet lasers and fire-starting phones that companies were touting at Mobile World Congress in Barcelona this month was a more nebulous but much larger announcement: a pan-European cloud called EURO-3C.

EURO-3C’s backers – Spanish telecoms giant Telefónica, dozens of other European companies, and the European Commission (EC) – aim to fill a gap. U.S.-based cloud giants dominate in the EU, and European policymakers want their growing portfolio of digital government services on a “sovereign cloud” under full EU control.

But the EU lacks a real equivalent to the likes of AWS or Microsoft Azure. Indeed, any effort to build one will inevitably run up against the same U.S. cloud giants.

Just four U.S.-based hyperscalers – AWS, Microsoft Azure, Google Cloud, and IBM Cloud – together account for some 70 percent of EU cloud services. This is despite the fact that the 2018 U.S. CLOUD Act allows U.S. federal law enforcement – at least in theory – to compel U.S.-based firms to hand over data that’s stored abroad.

But those hypothetical risks to digital services have become more real as transatlantic relations have soured under the second Trump administration. The U.S. has openly threatened to invade an EU member state and sanctioned a European Commissioner for passing legislation the White House dislikes.

After the White House sanctioned the Netherlands-based International Criminal Court in February 2025, Court staffers claimed Microsoft locked the Court’s chief prosecutor out of his email (Microsoft has denied this). Around the same time, the U.S. reportedly threatened to sever EU ally Ukraine’s access to crucial Starlink satellite internet as leverage during trade negotiations.

“The geopolitical risk isn’t just the most extreme form of a doomsday ‘kill switch’ where Washington turns off Europe’s internet,” says Stéfane Fermigier of EuroStack, an industry group that supports European digital independence. “It is the selective degradation of services and a total lack of retaliatory leverage.”

What, then, is the EU to do? France offers an example. Even before 2025, France implemented harsh restrictions on non-EU cloud providers in public services – providers must locate data in the EU, rely on EU-based staff, and may not have majority-non-EU shareholders. Now, EU policymakers are following France’s lead.

In October 2025, the EC issued a two-part framework for judging cloud providers bidding for public sector contracts. In the first part, the framework lays out a sort of sovereignty ladder. The more that a provider is subject to EU law, the higher its sovereignty level on this ladder. Any prospective bidder must first meet a certain level, depending on the tender.

Qualifying bidders then move to the second part, where their “sovereignty” is scored in more detail. Using too much proprietary software, over-relying on supply chains from outside the EU, employing non-EU support staff, and being subject to non-EU laws like the CLOUD Act all hurt a bidder’s score.
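
To make the two-part logic concrete, here is a purely illustrative scoring sketch. The criteria are the ones the framework names, as described above, but the weights, threshold, and numbers are invented for the example:

```python
# Illustrative two-part 'cloud sovereignty' scoring. Weights invented.
REQUIRED_LEVEL = 3                   # part 1: minimum ladder rung

PENALTY_WEIGHTS = {                  # part 2: detailed deductions
    "proprietary_software_share": 20,
    "non_eu_supply_chain_share": 25,
    "non_eu_support_staff_share": 15,
    "subject_to_non_eu_law": 40,     # e.g., exposure to the CLOUD Act
}

def sovereignty_score(bidder):
    if bidder["ladder_level"] < REQUIRED_LEVEL:
        return None                  # fails part 1; never scored in detail
    score = 100.0
    for criterion, weight in PENALTY_WEIGHTS.items():
        score -= weight * bidder[criterion]    # each share is in [0, 1]
    return max(score, 0.0)

bidder = {"ladder_level": 3,
          "proprietary_software_share": 0.5,
          "non_eu_supply_chain_share": 0.3,
          "non_eu_support_staff_share": 0.2,
          "subject_to_non_eu_law": 1.0}
print(sovereignty_score(bidder))     # 39.5 on this invented scale
```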

The framework was created for one tender, but observers say it sets a major precedent. Cloud providers bidding for state contracts across Europe may need to follow it, and it may influence legislation on both national and EU-wide levels.

Who, then, will receive high marks? At the moment, the answer is not simple. The EU cloud scene is quite fragmented. Numerous modest EU providers offer “sovereign cloud” services – such as Scaleway, OVHcloud, and Deutsche Telekom’s T-Systems – but none are on the scale of AWS or Google Cloud.


Inertia is on the side of the U.S. cloud giants, who can invest in their infrastructure and services on a far grander scale than their European counterparts. Some U.S. providers now offer cloud services they say comply with the Commission’s “cloud sovereignty” demands.

Some European observers, like EuroStack, say such promises are hollow so long as a provider’s parent company is subject to the likes of the CLOUD Act, and loopholes in the Commission’s process remain open. An AWS spokesperson told Spectrum it had not disclosed any non-US enterprise or government data to the U.S. government under the CLOUD Act; a Google spokesperson said that its most sensitive EU offerings “are subject to local laws, not US law”.

Even if a project like EURO-3C can offer a large-scale alternative, the U.S. cloud giants have another sort of inertia. Many developers – and many public purchasers of their services – will need convincing to leave behind a familiar environment.


“If you look at AWS, you look at Google, they’ve created some super technology. It’s very convenient, it’s easy to use,” says Arnold Juffer, CEO of the Netherlands-based cloud provider Nebul. “Once you’re in that platform, in that ecosystem, it’s very hard to get out.”

Martyna Chmura, an analyst at the Bloomsbury Intelligence and Security Institute, a London-based think tank, sees some EU developers taking a mixed approach. “Many organizations are already moving toward multi-cloud setups, using European or sovereign providers for sensitive workloads while still relying on hyperscalers for certain services,” she says.

In that case, the EU’s top-down demands may encourage developers to use EU providers for sensitive applications – like government services, transport, autonomous vehicles, and some industrial automation – even if it’s inconvenient in the short term, or if it causes even more fragmentation of the EU cloud scene. “Running systems across different platforms can increase integration costs and make security and data governance more complicated. In some cases, organisations could lose some of the efficiency and cost advantages that come from using large hyperscale platforms,” Chmura says.

“Overall, the EU appears willing to accept some of these trade-offs,” Chmura says.

Reference: https://ift.tt/jzZgEnp
