The world is collectively freaking out about the growth of artificial intelligence and its strain on power grids. But a look back at electricity load growth in the United States over the last 75 years shows that innovations in efficiency continually compensate for relentless technological progress.
In the 1950s, for example, rural America electrified, the industrial sector boomed, and homeowners rapidly accumulated nifty domestic appliances such as spinning clothes dryers and deep freezers. This caused electricity demand to grow at a breathtaking clip of nearly 9 percent per year on average. The growth continued into the 1960s as homes and businesses readily adopted air conditioners and the industrial sector automated. But over the next 30 years, industrial processes such as steelmaking became more efficient, and home appliances did more with less power.
Around 2000, the onslaught of computing brought widespread concerns about its electricity demand. But even with the explosion of Internet use and credit card transactions, improvements in computing and industrial efficiencies and the adoption of LED lighting compensated. Net result: Average electricity growth in the United States remained nearly flat from 2000 to 2020.
Now it’s back on the rise, driven by AI data centers and manufacturing of batteries and semiconductor chips. Electricity demand is expected to grow more than 3 percent every year for the next five years, according to Grid Strategies, an energy research firm in Washington, D.C. “Three percent per year today is more challenging than 3 percent in the 1960s because the baseline is so much larger,” says John Wilson, an energy regulation expert at Grid Strategies.
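Some rough arithmetic shows why the baseline matters. Using illustrative round numbers of my own (not Grid Strategies' figures), say roughly 1,000 terawatt-hours of annual U.S. consumption in the mid-1960s versus roughly 4,000 terawatt-hours today, the same 3 percent growth rate translates into about four times as much new generation each year:

# Rough, illustrative comparison of what a given growth rate means in
# absolute terms on a small versus a large baseline. Baseline figures
# are assumptions for the sake of the arithmetic, not sourced data.
BASELINE_1960S_TWH = 1_000   # assumed mid-1960s U.S. consumption, TWh/year
BASELINE_TODAY_TWH = 4_000   # assumed present-day U.S. consumption, TWh/year

def added_demand_twh(baseline_twh: float, growth_rate: float) -> float:
    """New demand (TWh) that one year of growth adds on a given baseline."""
    return baseline_twh * growth_rate

print(f"3% on the 1960s baseline: ~{added_demand_twh(BASELINE_1960S_TWH, 0.03):.0f} TWh of new demand")
print(f"3% on today's baseline:   ~{added_demand_twh(BASELINE_TODAY_TWH, 0.03):.0f} TWh of new demand")

Same percentage, several times the absolute build-out of generation and transmission.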
Can the United States counter the growth with innovation in data-center and industrial efficiency? History suggests it can.
Meet FREDERICK Mark 2, the Friendly Robot for Education, Discussion and Entertainment, the Retrieval of Information, and the Collation of Knowledge, better known as Freddy II. This remarkable robot could put together a simple model car from an assortment of parts dumped in its workspace. Its video-camera eyes and pincer hand identified and sorted the individual pieces before assembling the desired end product. But onlookers had to be patient. Assembly took about 16 hours, and that was after a day or two of “learning” and programming.
Freddy II was completed in 1973 as one of a series of research robots developed by Donald Michie and his team at the University of Edinburgh during the 1960s and ’70s. The robots became the focus of an intense debate over the future of AI in the United Kingdom. Michie eventually lost, his funding was gutted, and the ensuing AI winter set back U.K. research in the field for a decade.
Why were the Freddy I and II robots built?
In 1967, Donald Michie, along with Richard Gregory and Hugh Christopher Longuet-Higgins, founded the Department of Machine Intelligence and Perception at the University of Edinburgh with the near-term goal of developing a semiautomated robot and the longer-term vision of programming “integrated cognitive systems,” or what other people might call intelligent robots. At the time, the U.S. Defense Advanced Research Projects Agency and Japan’s Computer Usage Development Institute were both considering plans to create fully automated factories within a decade. The team at Edinburgh thought they should get in on the action too.
Two years later, Stephen Salter and Harry G. Barrow joined Michie and got to work on Freddy I. Salter devised the hardware while Barrow designed and wrote the software and computer interfacing. The resulting simple robot worked, but it was crude. The AI researcher Jean Hayes (who would marry Michie in 1971) referred to this iteration of Freddy as an “arthritic Lady of Shalott.”
Freddy I consisted of a robotic arm, a camera, a set of wheels, and some bumpers to detect obstacles. Instead of roaming freely, it remained stationary while a small platform moved beneath it. Barrow developed an adaptable program that enabled Freddy I to recognize irregular objects. In 1969, Salter and Barrow published their results in Machine Intelligence as “Design of Low-Cost Equipment for Cognitive Robot Research,” which included suggestions for the next iteration of the robot.
Freddy I, completed in 1969, could recognize objects placed in front of it—in this case, a teacup. University of Edinburgh
More people joined the team to build Freddy Mark 1.5, which they finished in May 1971. Freddy 1.5 was a true robotic hand-eye system. The hand consisted of two vertical, parallel plates that could grip an object and lift it off the platform. The eyes were two cameras: one looking directly down on the platform, and the other mounted obliquely on the truss that suspended the hand over the platform. Freddy 1.5’s world was a 2-meter by 2-meter square platform that moved in an x-y plane.
Freddy 1.5 quickly morphed into Freddy II as the team continued to grow. Improvements included force transducers added to the “wrist” that could deduce the strength of the grip, the weight of the object held, and whether it had collided with an object. But what really set Freddy II apart was its versatile assembly program: The robot could be taught to recognize the shapes of various parts, and then after a day or two of programming, it could assemble simple models. The various steps can be seen in an extended video narrated by Barrow.
The Lighthill Report Takes Down Freddy the Robot
And then what happened? So much. But before I get into all that, let me just say that rarely do I, as a historian, have the luxury of having my subjects clearly articulate the aims of their projects, imagine the future, and then, years later, reflect on their experiences. As a cherry on top of this historian’s delight, the topic at hand—artificial intelligence—also happens to be of current interest to pretty much everyone.
As with many fascinating histories of technology, events turn on a healthy dose of professional bickering. In this case, the disputants were Michie and the applied mathematician James Lighthill, who had drastically different ideas about the direction of robotics research. Lighthill favored applied research, while Michie was more interested in the theoretical and experimental possibilities. Their fight escalated quickly, became public with a televised debate on the BBC, and concluded with the demise of an entire research field in Britain.
A damning report in 1973 by applied mathematician James Lighthill [left] resulted in funding being pulled from the AI and robotics program led by Donald Michie [right]. Left: Chronicle/Alamy; Right: University of Edinburgh
It all started in September 1971, when the British Science Research Council, which distributed public funds for scientific research, commissioned Lighthill to survey the state of academic research in artificial intelligence. The SRC was finding it difficult to make informed funding decisions in AI, given the field’s complexity. It suspected that some AI researchers’ interests were too narrowly focused, while others might be outright charlatans. Lighthill was called in to give the SRC a road map.
No intellectual slouch, Lighthill was the Lucasian Professor of Mathematics at the University of Cambridge, a position also held by Isaac Newton, Charles Babbage, and Stephen Hawking. Lighthill solicited input from scholars in the field and completed his report in March 1972. Officially titled “Artificial Intelligence: A General Survey,” but informally called the Lighthill Report, it divided AI into three broad categories: A, for advanced automation; B, for building robots, but also bridge activities between categories A and C; and C, for computer-based central nervous system research. Lighthill acknowledged some progress in categories A and C, as well as a few disappointments.
Lighthill viewed Category B, though, as a complete failure. “Progress in category B has been even slower and more discouraging,” he wrote, “tending to sap confidence in whether the field of research called AI has any true coherence.” For good measure, he added, “AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.” So very British.
Lighthill concluded his report with his view of the next 25 years in AI. He predicted a “fission of the field of AI research,” with some tempered optimism for achievement in categories A and C but a valley of continued failures in category B. Success would come in fields with clear applications, he argued, but basic research was a lost cause.
The Science Research Council published Lighthill’s report the following year, with responses from N. Stuart Sutherland of the University of Sussex and Roger M. Needham of the University of Cambridge, as well as Michie and his colleague Longuet-Higgins.
Sutherland sought to relabel category B as “basic research in AI” and to have the SRC increase funding for it. Needham mostly supported Lighthill’s conclusions and called for the elimination of the term AI—“a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better.”
Longuet-Higgins focused on his own area of interest, cognitive science, and ended with an ominous warning that any spin-off of advanced automation would be “more likely to inflict multiple injuries on human society,” but he didn’t explain what those might be.
Michie, as the United Kingdom’s academic leader in robots and machine intelligence, understandably saw the Lighthill Report as a direct attack on his research agenda. With his funding at stake, he provided the most critical response, questioning the very foundation of the survey: Did Lighthill talk with any international experts? How did he overcome his own biases? Did he have any sources and references that others could check? He ended with a request for more funding—specifically the purchase of a DEC System 10 (also known as the PDP-10) mainframe computer. According to Michie, if his plan were followed, Britain would be internationally competitive in AI by the end of the decade.
After Michie’s funding was cut, the many researchers affiliated with his bustling lab lost their jobs. University of Edinburgh
This whole affair might have remained an academic dispute, but then the BBC decided to include a debate between Lighthill and a panel of experts as part of its “Controversy” TV series. “Controversy” was an experiment to engage the public in science. On 9 May 1973, an interested but nonspecialist audience filled the auditorium at the Royal Institution in London to hear the debate.
Lighthill started with a review of his report, explaining the differences he saw between automation and what he called “the mirage” of general-purpose robots. Michie responded with a short film of Freddy II assembling a model, explaining how the robot processes information. Michie argued that AI is a subject with its own purposes, its own criteria, and its own professional standards.
After a brief back and forth between Lighthill and Michie, the show’s host turned to the other panelists: John McCarthy, a professor of computer science at Stanford University, and Richard Gregory, a professor in the department of anatomy at the University of Bristol who had been Michie’s colleague at Edinburgh. McCarthy, who coined the term artificial intelligence in 1955, supported Michie’s position that AI should be its own area of research, not simply a bridge between automation and a robot that mimics a human brain. Gregory described how the work of Michie and McCarthy had influenced the field of psychology.
Despite international support from the AI community, though, the SRC sided with Lighthill and gutted funding for AI and robotics; Michie had lost. Michie’s bustling lab went from being an international center of research to just Michie, a technician, and an administrative assistant. The loss ushered in the first British AI winter, with the United Kingdom making little progress in the field for a decade.
In 1983, Michie founded the Turing Institute in Glasgow, an AI lab that worked with industry on both basic and applied research. The year before, he had written Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach). Michie intended it as intellectual musings that he hoped scientists would read, perhaps on the weekend, to help them get beyond the pursuits of the workweek. The book is wide-ranging, covering his three decades of work.
In the introduction to the chapters covering Freddy and the aftermath of the Lighthill report, Michie wrote, perhaps with an eye toward history:
“Work of excellence by talented young people was stigmatised as bad science and the experiment killed in mid-trajectory. This destruction of a co-operative human mechanism and of the careful craft of many hands is elsewhere described as a mishap. But to speak plainly, it was an outrage. In some later time when the values and methods of science have further expanded, and those adversary politics have contracted, it will be seen as such.”
History has indeed rendered judgment on the debate and the Lighthill Report. In 2019, for example, computer scientist Maarten van Emden, a colleague of Michie’s, reflected on the demise of the Freddy project with these choice words for Lighthill: “a pompous idiot who lent himself to produce a flaky report to serve as a blatantly inadequate cover for a hatchet job.”
And in a March 2024 post on GitHub, the blockchain entrepreneur Jeffrey Emanuel thoughtfully dissected Lighthill’s comments and the debate itself. Of Lighthill, he wrote, “I think we can all learn a very valuable lesson from this episode about the dangers of overconfidence and the importance of keeping an open mind. The fact that such a brilliant and learned person could be so confidently wrong about something so important should give us pause.”
Arguably, both Lighthill and Michie correctly predicted certain aspects of the AI future while failing to anticipate others. On the surface, the report and the debate could be described as simply about funding. But it was also more fundamentally about the role of academic research in shaping science and engineering and, by extension, society. Ideally, universities can support both applied research and more theoretical work. When funds are limited, though, choices are made. Lighthill chose applied automation as the future, leaving research in AI and machine intelligence in the cold.
It helps to take the long view. Over the decades, AI research has cycled through several periods of spring and winter, boom and bust. We’re currently in another AI boom. Is this time different? No one can be certain what lies just over the horizon, of course. That very uncertainty is, I think, the best argument for supporting people to experiment and conduct research into fundamental questions, so that they may help all of us to dream up the next big thing.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the May 2025 print issue as “This Robot Was the Fall Guy for British AI.”
References
Donald Michie’s lab regularly published articles on the group’s progress, especially in Machine Intelligence, a journal founded by Michie.
In 2009, a group of alumni from Michie’s Edinburgh lab, including Harry Barrow and Pat Fothergill (formerly Ambler), created a website to share their memories of working on Freddy. The site offers great firsthand accounts of the development of the robot. Unfortunately for the historian, they didn’t explore the lasting effects of the experience. A decade later, though, Maarten van Emden did, in his 2019 article “Reflecting Back on the Lighthill Affair,” in the IEEE Annals of the History of Computing.
Beyond his academic articles, Michie was a prolific author. Two collections of essays I found particularly useful are On Machine Intelligence (John Wiley & Sons, 1974) and Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach, 1982).
Rural connectivity is still a huge issue. As of 2022, approximately 28 percent of Americans living in rural areas did not have access to broadband Internet, which the Federal Communications Commission (FCC) then defined as download speeds of 25 megabits per second and upload speeds of 3 megabits per second. In 2024, the FCC adopted a new benchmark with higher speed requirements, increasing the number of people whose connections don't meet the definition.

One potential solution to the problem is small, rugged data centers with relatively old, redundant components, placed strategically in rural areas so that crucial data can be stored locally and network providers can route through them, providing redundancy.
“We are not the big AI users,” said Doug Recker, the CEO of Duos Edge AI, in a talk delivered at the Data Center World conference in Washington, D.C. earlier this month. “We’re still trying to resolve the problem from 20 years ago. These aren’t high-bandwidth or high-power data centers. We don’t need them out there. We just need better connectivity. We need robust networks.”
The Jacksonville, Florida-based startup provides small data centers (about the size of a shipping container) to rural areas, mostly in the Texas panhandle. They recently added such a data center in Amarillo, working with the local school district to provide more robust connectivity to students. The school district runs its learning platform on Amazon Web Services (AWS) and can now host that platform locally in the data center.
Previously, data had to travel to and from Dallas, over 500 kilometers away. Network outages were a common occurrence, impeding student learning. Recker’s company paid the upfront cost of US $1.2 million to $1.5 million to build the 15-cabinet data center, which it calls a pod. Duos is making the money back by charging the school district and other customers a monthly usage and maintenance fee of between $1,800 and $3,000 per shelf.
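Those figures imply a payback period of a few years at most. Here is a minimal back-of-the-envelope sketch, assuming all 15 cabinets are rented and ignoring operating costs (both are my assumptions, not details Duos has disclosed):

# Back-of-the-envelope payback estimate for a 15-cabinet pod.
# Assumes full occupancy and ignores operating costs; illustrative only.
UPFRONT_COST_USD = (1_200_000, 1_500_000)   # reported build cost range
MONTHLY_FEE_USD = (1_800, 3_000)            # reported per-shelf monthly fee
CABINETS = 15

best_case_months = UPFRONT_COST_USD[0] / (MONTHLY_FEE_USD[1] * CABINETS)
worst_case_months = UPFRONT_COST_USD[1] / (MONTHLY_FEE_USD[0] * CABINETS)

print(f"Payback: roughly {best_case_months:.0f} to {worst_case_months:.0f} months")
# Roughly 27 to 56 months, or about two and a half to four and a half years.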
The company follows a ‘build what’s needed and they will come’ approach. Once the data center is installed, Recker says, existing network providers co-locate there, providing redundancy and reliability to the customers. The pod provides a seed around which network providers can build a hub-and-spoke-type network.
3 Requirements for Edge Data Centers
The trick to making these edge data centers profitable is minimizing their energy use and maximizing their reliability. To minimize energy use, Duos uses relatively old, time-tested equipment. For reliability, every piece of equipment is duplicated, including uninterruptible power supply batteries, generators, and air conditioning units.
Duos also has to locate the pods where there are enough potential customers to justify building a 15-rack pod (the equipment is rented out per rack).
The pods are unmanned, but efficient and timely maintenance is key. “Say your AC unit goes down at two in the morning,” Recker says. “It’s redundant, but you don’t want it to be down, so you have to dispatch somebody who can get into a pod at two o’clock in the morning.” Duos has a system for dispatching maintenance workers, and an auditing standard that remotely keeps track of all the work that has been done or needs to be done on each piece of equipment. Each pod also has a clean room to prevent maintenance workers from tracking in dust or dirt from outside while they work on repairs.
The compact data center allows the Amarillo school district to have affordable and reliable connectivity for their digital learning platform. Students will soon have access to AI-powered tools, simulations, and real-time data for their classes. “The pod enables that to happen because they can compute on site and host that environment on site where they couldn’t do it before because of the latency issues,” says Recker.
Duos is also placing pods elsewhere in the Texas panhandle, as well as in Florida. And demand in Amarillo is strong enough that the company plans to install a second pod. Recker says that although Duos initially built the pod in collaboration with the school district, other local institutions quickly became interested as well, including hospitals, utility companies, and farmers.
One of the most influential—and by some counts, notorious—AI models yet released will soon fade into history. OpenAI announced on April 10 that GPT-4 will be "fully replaced" by GPT-4o in ChatGPT at the end of April, bringing a public-facing end to the model that accelerated a global AI race when it launched in March 2023.
"Effective April 30, 2025, GPT-4 will be retired from ChatGPT and fully replaced by GPT-4o," OpenAI wrote in its April 10 changelog for ChatGPT. While ChatGPT users will no longer be able to chat with the older AI model, the company added that "GPT-4 will still be available in the API," providing some reassurance to developers who might still be using the older model for various tasks.
The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models. Its release created a wave of immense hype—and existential panic—about AI's ability to imitate human communication and composition.
In the rocky terrain of China’s Sichuan province, a massive X-shaped building is quickly rising, its crisscrossed arms stretching outward in a bold, futuristic design. From a satellite’s view, it could be just another ambitious megaproject in a country known for building fast and thinking big. But to some observers of Chinese tech development, it’s yet more evidence that China may be on the verge of pulling ahead in one of the most consequential technological races of our time: the quest to achieve commercial nuclear fusion.
The X-shaped facility under construction in Mianyang, Sichuan, appears to be a massive laser-based fusion facility; its four long arms, likely laser bays, could focus intense energy on a central chamber. Analysts who’ve examined satellite imagery and procurement records say it resembles the U.S. National Ignition Facility (NIF), but is significantly larger. Others have speculated that it could be a massive Z-pinch machine—a fusion-capable device that uses an extremely powerful electrical current to compress plasma into a narrow, dense column.
“Even if China is not ahead right now,” says Decker Eveleth, an analyst at the research nonprofit CNA Corp., “when you look at how quickly they build things, and the financial willpower to build these facilities at scale, the trajectory is not favorable for the U.S.”
Other Chinese plasma physics programs have also been gathering momentum. In January, researchers at the Experimental Advanced Superconducting Tokamak (EAST)—nicknamed the “Artificial Sun”—reported maintaining plasma at over 100 million degrees Celsius for more than 17 minutes. (A tokamak is a donut-shaped device that uses magnetic fields to confine plasma for nuclear fusion.) Operational since 2006, EAST is based in Hefei, in Anhui province, and serves as a testbed for technologies that will feed into next-generation fusion reactors.
Meanwhile, on Yaohu Science Island in Nanchang, in central China, the national government is preparing to launch Xinghuo—the world’s first fusion-fission hybrid power plant. Slated for grid connection by 2030, the reactor will use high-energy neutrons from fusion reactions to trigger fission in surrounding materials, boosting overall energy output and potentially reducing long-lived radioactive waste. Xinghuo aims to generate 100 megawatts of continuous electricity, enough to power approximately 83,000 U.S.-size homes.
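The 83,000-home figure checks out with simple arithmetic, assuming an average U.S. household uses roughly 10,500 kilowatt-hours per year (the household figure is my assumption, not part of the Xinghuo plan):

# Rough check of the "83,000 U.S. homes" claim for a 100-MW plant
# running continuously. Household consumption is an assumed average.
PLANT_POWER_MW = 100
HOURS_PER_YEAR = 8_760
AVG_US_HOME_KWH_PER_YEAR = 10_500

annual_output_kwh = PLANT_POWER_MW * 1_000 * HOURS_PER_YEAR  # MW -> kW, then kWh over a year
homes_powered = annual_output_kwh / AVG_US_HOME_KWH_PER_YEAR
print(f"About {homes_powered:,.0f} homes")   # roughly 83,000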
Why China is doubling down on fusion
Why such an aggressive push? Fusion energy aligns neatly with three of China’s top priorities: securing domestic energy, reducing carbon emissions, and winning the future of high technology—a pillar of President Xi Jinping’s “great rejuvenation” agenda.
“Fusion is a next-generation energy technology,” says Jimmy Goodrich, a senior advisor for technology analysis at RAND. “Whoever masters it will gain enormous advantages—economically, strategically, and from a national security perspective.”
The lengthy development required to commercialize fusion also aligns with China’s political economy. Fusion requires patient capital. The Chinese government doesn’t need to answer to voters or shareholders, and so it’s uniquely suited to fund fusion R&D and wait for a payoff that may take decades.
In the United States, by contrast, fusion momentum has shifted away from government-funded projects to private companies like Helion, Commonwealth Fusion Systems, and TAE Technologies. These fusion startups have captured billions in venture capital, riding a wave of interest from tech billionaires hoping to power, among other things, the data centers of an AI-driven future. But that model has vulnerabilities. If demand for energy-hungry data centers slows or market sentiment turns, funding could dry up quickly.
“The future of fusion may come down to which investment model proves more resilient,” says Goodrich. “If there’s a slowdown in AI or data center demand, U.S. [fusion] startups could see funding evaporate. In contrast, Chinese fusion firms are unlikely to face the same risk, as sustained government support can shield them from market turbulence.”
The talent equation is shifting, too. In March, plasma physicist Chang Liu left the Princeton Plasma Physics Laboratory to join a fusion program at Peking University, where he’d earned his undergraduate degree. At the Princeton lab, Liu had pioneered a promising method to reduce the impact of damaging runaway electrons in tokamak plasmas.
Liu’s move exemplifies a broader trend, says Goodrich. “When the Chinese government prioritizes a sector for development, a surge of financing and incentives quickly follows,” he says. “For respected scientists and engineers in the U.S. or Europe, the chance to [move to China to] see their ideas industrialized and commercialized can be a powerful draw.”
Meanwhile, China is growing its own talent. Universities and labs in Hefei, Mianyang, and Nanchang are training a generation of physicists and engineers to lead in fusion science. Within a decade, China could have a vast, self-sustaining pipeline of experts.
There are military implications as well. Eveleth notes that while the Mianyang project could aid energy research, it also will boost China’s ability to simulate nuclear weapons tests. “Whether it’s a laser fusion facility or a Z-pinch machine, you’re looking at a pretty significant increase in Chinese capability to conduct miniaturized weapons experiments and boost their understanding of various materials used within weapons,” says Eveleth.
These new facilities are likely to surpass U.S. capabilities for certain kinds of weapons development, Eveleth warns. While Los Alamos and other U.S. national labs are aging, China is building from scratch and installing the latest technologies in shiny new buildings.
The United States still leads in scientific creativity and startup diversity, but the U.S. fusion effort remains comparatively fragmented. During the Biden administration, the U.S. government invested about $800 million annually in fusion research. China, according to the U.S. Department of Energy, is investing up to $1.5 billion per year—although some analysts say that the amount could be twice as high.
Fusion is a marathon, not a sprint—and China is pacing itself to win. Backed by a coordinated national strategy, generous funding, and a rapidly expanding talent base, Beijing isn’t just chasing fusion energy—it’s positioning itself to dominate the field.
“It’s a Renaissance moment for advanced energy in China,” says Goodrich, who contends that unless the United States ramps up public investment and support, it may soon find itself looking eastward at the future of fusion. The next few years will be decisive, he and others say. Reactors are rising. Scientists are relocating. Timelines are tightening. Whichever nation first harnesses practical fusion energy won’t just light up cities. It may also reshape the balance of global power.
AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.
The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.
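One straightforward defense is to check every generated dependency against the package registry before installing anything. Below is a minimal sketch for Python packages that queries PyPI's public JSON endpoint; the dependency names in the list are placeholders of my own, not examples from the study:

# Flag dependencies in an LLM-generated requirements list that do not
# exist on PyPI. The registry returns HTTP 404 for unknown packages.
import requests

def package_exists_on_pypi(name: str) -> bool:
    response = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return response.status_code == 200

generated_deps = ["requests", "numpy", "totally-made-up-helper-lib"]  # placeholder names
for dep in generated_deps:
    verdict = "found" if package_exists_on_pypi(dep) else "NOT FOUND (possible hallucination)"
    print(f"{dep}: {verdict}")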
Package hallucination flashbacks
These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to access the wrong component dependency, for instance by publishing a malicious package and giving it the same name as the legitimate one but with a later version stamp. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.
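The mechanics are as simple as a version comparison: a resolver that blindly prefers the highest version number will choose the attacker's public package over the legitimate internal one. The sketch below illustrates that preference logic using the packaging library; the version numbers and registries are hypothetical, not taken from a real attack:

# Why a naive "newest version wins" rule enables dependency confusion.
# The registries and version numbers here are hypothetical.
from packaging.version import Version

candidates = {
    "internal registry": Version("1.2.3"),   # the legitimate in-house component
    "public registry": Version("99.9.9"),    # attacker's lookalike with an inflated version
}

# A resolver that simply picks the highest version selects the malicious copy.
chosen = max(candidates, key=candidates.get)
print(f"Resolver picks the package from the {chosen}")  # "public registry"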
Backblaze is dismissing allegations from a short seller that it engaged in “sham accounting” that could put the cloud storage and backup solution provider and its customers' backups in jeopardy.
On April 24, Morpheus Research posted a lengthy report accusing the San Mateo, California-based firm of practicing “sham accounting and brazen insider dumping.” The claims largely stem from a pair of lawsuits filed against Backblaze by former employees Huey Hall and James Kisner in October. Per LinkedIn profiles, Hall was Backblaze’s head of finance from March 2020 to February 2024, and Kisner was Backblaze’s VP of investor relations and financial planning from May 2021 to November 2023.
As Morpheus wrote, the lawsuits accuse Backblaze’s founders of participating in “an aggressive trading plan to sell 10,000 shares a day, along with other potential sales from early employee holders, against ‘all external capital markets advice.’” The plan allegedly started in April 2022, after the IPO lockup period expired and despite advisor warnings, including one from a capital markets consultant that such a trading plan likely breached Backblaze’s fiduciary duties.