Wednesday, April 30, 2025

This Chart Might Keep You From Worrying About AI’s Energy Use





The world is collectively freaking out about the growth of artificial intelligence and its strain on power grids. But a look back at electricity load growth in the United States over the last 75 years shows that innovations in efficiency continually compensate for relentless technological progress.

In the 1950s, for example, rural America electrified, the industrial sector boomed, and homeowners rapidly accumulated nifty domestic appliances such as spinning clothes dryers and deep freezers. This caused electricity demand to grow at a breathtaking clip of nearly 9 percent per year on average. The growth continued into the 1960s as homes and businesses readily adopted air conditioners and the industrial sector automated. But over the next 30 years, industrial processes such as steelmaking became more efficient, and home appliances did more with less power.

Around 2000, the onslaught of computing brought widespread concerns about its electricity demand. But even with the explosion of Internet use and credit card transactions, improvements in computing and industrial efficiencies and the adoption of LED lighting compensated. Net result: Average electricity growth in the United States remained nearly flat from 2000 to 2020.

Now it’s back on the rise, driven by AI data centers and manufacturing of batteries and semiconductor chips. Electricity demand is expected to grow more than 3 percent every year for the next five years, according to Grid Strategies, an energy research firm in Washington, D.C. “Three percent per year today is more challenging than 3 percent in the 1960s because the baseline is so much larger,” says John Wilson, an energy regulation expert at Grid Strategies.
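To put rough numbers on Wilson's point, here is a back-of-the-envelope sketch in Python. The baseline figures (roughly 750 terawatt-hours of U.S. generation in 1960 versus roughly 4,000 TWh today) are round approximations for illustration only, not figures from Grid Strategies:

# Back-of-the-envelope: the same percentage growth demands far more new
# supply when the baseline is larger. Baselines are rough assumptions.
BASELINE_1960_TWH = 750     # approximate U.S. generation in 1960
BASELINE_TODAY_TWH = 4_000  # approximate U.S. generation today

for label, baseline in [("3%/yr in the 1960s", BASELINE_1960_TWH),
                        ("3%/yr today", BASELINE_TODAY_TWH)]:
    added = baseline * 0.03
    print(f"{label}: ~{added:.0f} TWh of new demand in a single year")

# Prints about 22 TWh versus 120 TWh: the same growth rate requires
# more than five times as much new generation today.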

Can the United States counter the growth with innovation in data-center and industrial efficiency? History suggests it can.

Reference: https://ift.tt/wjLruSz

Freddy the Robot Was the Fall Guy for British AI




Meet FREDERICK Mark 2, the Friendly Robot for Education, Discussion and Entertainment, the Retrieval of Information, and the Collation of Knowledge, better known as Freddy II. This remarkable robot could put together a simple model car from an assortment of parts dumped in its workspace. Its video-camera eyes and pincer hand identified and sorted the individual pieces before assembling the desired end product. But onlookers had to be patient. Assembly took about 16 hours, and that was after a day or two of “learning” and programming.

Freddy II was completed in 1973 as one of a series of research robots developed by Donald Michie and his team at the University of Edinburgh during the 1960s and ’70s. The robots became the focus of an intense debate over the future of AI in the United Kingdom. Michie eventually lost, his funding was gutted, and the ensuing AI winter set back U.K. research in the field for a decade.

Why were the Freddy I and II robots built?

In 1967, Donald Michie, along with Richard Gregory and Hugh Christopher Longuet-Higgins, founded the Department of Machine Intelligence and Perception at the University of Edinburgh with the near-term goal of developing a semiautomated robot and the longer-term vision of programming “integrated cognitive systems,” or what other people might call intelligent robots. At the time, the U.S. Defense Advanced Research Projects Agency and Japan’s Computer Usage Development Institute were both considering plans to create fully automated factories within a decade. The team at Edinburgh thought they should get in on the action too.

Two years later, Stephen Salter and Harry G. Barrow joined Michie and got to work on Freddy I. Salter devised the hardware while Barrow designed and wrote the software and computer interfacing. The resulting simple robot worked, but it was crude. The AI researcher Jean Hayes (who would marry Michie in 1971) referred to this iteration of Freddy as an “arthritic Lady of Shalott.”

Freddy I consisted of a robotic arm, a camera, a set of wheels, and some bumpers to detect obstacles. Instead of roaming freely, it remained stationary while a small platform moved beneath it. Barrow developed an adaptable program that enabled Freddy I to recognize irregular objects. In 1969, Salter and Barrow published their results in Machine Intelligence as “Design of Low-Cost Equipment for Cognitive Robot Research,” which included suggestions for the next iteration of the robot.

Freddy I, completed in 1969, could recognize objects placed in front of it—in this case, a teacup. [Photo: University of Edinburgh]

More people joined the team to build Freddy Mark 1.5, which they finished in May 1971. Freddy 1.5 was a true robotic hand-eye system. The hand consisted of two vertical, parallel plates that could grip an object and lift it off the platform. The eyes were two cameras: one looking directly down on the platform, and the other mounted obliquely on the truss that suspended the hand over the platform. Freddy 1.5’s world was a 2-meter by 2-meter square platform that moved in an x-y plane.

Freddy 1.5 quickly morphed into Freddy II as the team continued to grow. Improvements included force transducers added to the “wrist” that could deduce the strength of the grip, the weight of the object held, and whether it had collided with an object. But what really set Freddy II apart was its versatile assembly program: The robot could be taught to recognize the shapes of various parts, and then after a day or two of programming, it could assemble simple models. The various steps can be seen in this extended video, narrated by Barrow:

The Lighthill Report Takes Down Freddy the Robot

And then what happened? So much. But before I get into all that, let me just say that rarely do I, as a historian, have the luxury of having my subjects clearly articulate the aims of their projects, imagine the future, and then, years later, reflect on their experiences. As a cherry on top of this historian’s delight, the topic at hand—artificial intelligence—also happens to be of current interest to pretty much everyone.

As with many fascinating histories of technology, events turn on a healthy dose of professional bickering. In this case, the disputants were Michie and the applied mathematician James Lighthill, who had drastically different ideas about the direction of robotics research. Lighthill favored applied research, while Michie was more interested in the theoretical and experimental possibilities. Their fight escalated quickly, became public with a televised debate on the BBC, and concluded with the demise of an entire research field in Britain.

A damning report in 1973 by applied mathematician James Lighthill [left] resulted in funding being pulled from the AI and robotics program led by Donald Michie [right]. [Photos: Chronicle/Alamy (left); University of Edinburgh (right)]

It all started in September 1971, when the British Science Research Council, which distributed public funds for scientific research, commissioned Lighthill to survey the state of academic research in artificial intelligence. The SRC was finding it difficult to make informed funding decisions in AI, given the field’s complexity. It suspected that some AI researchers’ interests were too narrowly focused, while others might be outright charlatans. Lighthill was called in to give the SRC a road map.

No intellectual slouch, Lighthill was the Lucasian Professor of Mathematics at the University of Cambridge, a position also held by Isaac Newton, Charles Babbage, and Stephen Hawking. Lighthill solicited input from scholars in the field and completed his report in March 1972. Officially titled “Artificial Intelligence: A General Survey,” but informally called the Lighthill Report, it divided AI into three broad categories: A, for advanced automation; B, for building robots, which also served as a bridge between categories A and C; and C, for computer-based central nervous system research. Lighthill acknowledged some progress in categories A and C, as well as a few disappointments.

Lighthill viewed Category B, though, as a complete failure. “Progress in category B has been even slower and more discouraging,” he wrote, “tending to sap confidence in whether the field of research called AI has any true coherence.” For good measure, he added, “AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.” So very British.

Lighthill concluded his report with his view of the next 25 years in AI. He predicted a “fission of the field of AI research,” with some tempered optimism for achievement in categories A and C but a valley of continued failures in category B. Success would come in fields with clear applications, he argued, but basic research was a lost cause.

The Science Research Council published Lighthill’s report the following year, with responses from N. Stuart Sutherland of the University of Sussex and Roger M. Needham of the University of Cambridge, as well as Michie and his colleague Longuet-Higgins.

Sutherland sought to relabel category B as “basic research in AI” and to have the SRC increase funding for it. Needham mostly supported Lighthill’s conclusions and called for the elimination of the term AI—“a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better.”

Longuet-Higgins focused on his own area of interest, cognitive science, and ended with an ominous warning that any spin-off of advanced automation would be “more likely to inflict multiple injuries on human society,” but he didn’t explain what those might be.

Michie, as the United Kingdom’s academic leader in robots and machine intelligence, understandably saw the Lighthill Report as a direct attack on his research agenda. With his funding at stake, he provided the most critical response, questioning the very foundation of the survey: Did Lighthill talk with any international experts? How did he overcome his own biases? Did he have any sources and references that others could check? He ended with a request for more funding—specifically the purchase of a DEC System 10 (also known as the PDP-10) mainframe computer. According to Michie, if his plan were followed, Britain would be internationally competitive in AI by the end of the decade.

After Michie’s funding was cut, the many researchers affiliated with his bustling lab lost their jobs. [Photo: University of Edinburgh]

This whole affair might have remained an academic dispute, but then the BBC decided to include a debate between Lighthill and a panel of experts as part of its “Controversy” TV series. “Controversy” was an experiment to engage the public in science. On 9 May 1973, an interested but nonspecialist audience filled the auditorium at the Royal Institution in London to hear the debate.

Lighthill started with a review of his report, explaining the differences he saw between automation and what he called “the mirage” of general-purpose robots. Michie responded with a short film of Freddy II assembling a model, explaining how the robot processes information. Michie argued that AI is a subject with its own purposes, its own criteria, and its own professional standards.

After a brief back and forth between Lighthill and Michie, the show’s host turned to the other panelists: John McCarthy, a professor of computer science at Stanford University, and Richard Gregory, a professor in the department of anatomy at the University of Bristol who had been Michie’s colleague at Edinburgh. McCarthy, who coined the term artificial intelligence in 1955, supported Michie’s position that AI should be its own area of research, not simply a bridge between automation and a robot that mimics a human brain. Gregory described how the work of Michie and McCarthy had influenced the field of psychology.

You can watch the debate or read a transcript.

A Look Back at the Lighthill Report

Despite international support from the AI community, though, the SRC sided with Lighthill and gutted funding for AI and robotics; Michie had lost. Michie’s bustling lab went from being an international center of research to just Michie, a technician, and an administrative assistant. The loss ushered in the first British AI winter, with the United Kingdom making little progress in the field for a decade.

For his part, Michie pivoted and recovered. He decommissioned Freddy II in 1980, at which point it moved to the Royal Museum of Scotland (now the National Museum of Scotland), and he replaced it with a Unimation PUMA robot.

In 1983, Michie founded the Turing Institute in Glasgow, an AI lab that worked with industry on both basic and applied research. The year before, he had written Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach). Michie intended it as intellectual musings that he hoped scientists would read, perhaps on the weekend, to help them get beyond the pursuits of the workweek. The book is wide-ranging, covering his three decades of work.

In the introduction to the chapters covering Freddy and the aftermath of the Lighthill report, Michie wrote, perhaps with an eye toward history:

“Work of excellence by talented young people was stigmatised as bad science and the experiment killed in mid-trajectory. This destruction of a co-operative human mechanism and of the careful craft of many hands is elsewhere described as a mishap. But to speak plainly, it was an outrage. In some later time when the values and methods of science have further expanded, and those adversary politics have contracted, it will be seen as such.”

History has indeed rendered judgment on the debate and the Lighthill Report. In 2019, for example, computer scientist Maarten van Emden, a colleague of Michie’s, reflected on the demise of the Freddy project with these choice words for Lighthill: “a pompous idiot who lent himself to produce a flaky report to serve as a blatantly inadequate cover for a hatchet job.”

And in a March 2024 post on GitHub, the blockchain entrepreneur Jeffrey Emanuel thoughtfully dissected Lighthill’s comments and the debate itself. Of Lighthill, he wrote, “I think we can all learn a very valuable lesson from this episode about the dangers of overconfidence and the importance of keeping an open mind. The fact that such a brilliant and learned person could be so confidently wrong about something so important should give us pause.”

Arguably, both Lighthill and Michie correctly predicted certain aspects of the AI future while failing to anticipate others. On the surface, the report and the debate could be described as simply about funding. But it was also more fundamentally about the role of academic research in shaping science and engineering and, by extension, society. Ideally, universities can support both applied research and more theoretical work. When funds are limited, though, choices are made. Lighthill chose applied automation as the future, leaving research in AI and machine intelligence in the cold.

It helps to take the long view. Over the decades, AI research has cycled through several periods of spring and winter, boom and bust. We’re currently in another AI boom. Is this time different? No one can be certain what lies just over the horizon, of course. That very uncertainty is, I think, the best argument for supporting people to experiment and conduct research into fundamental questions, so that they may help all of us to dream up the next big thing.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the May 2025 print issue as “This Robot Was the Fall Guy for British AI.”

References


Donald Michie’s lab regularly published articles on the group’s progress, especially in Machine Intelligence, a journal founded by Michie.

The Lighthill Report and recordings of the debate are both available in their entirety online—primary sources that capture the intensity of the moment.

In 2009, a group of alumni from Michie’s Edinburgh lab, including Harry Barrow and Pat Fothergill (formerly Ambler), created a website to share their memories of working on Freddy. The site offers great firsthand accounts of the development of the robot. Unfortunately for the historian, they didn’t explore the lasting effects of the experience. A decade later, though, Maarten van Emden did, in his 2019 article “Reflecting Back on the Lighthill Affair,” in the IEEE Annals of the History of Computing.

Beyond his academic articles, Michie was a prolific author. Two collections of essays I found particularly useful are On Machine Intelligence (John Wiley & Sons, 1974) and Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach, 1982).

Jon Agar’s 2020 article “What Is Science for? The Lighthill Report on Artificial Intelligence Reinterpreted” and Jeffrey Emanuel’s GitHub post offer historical interpretations of this mostly forgotten blip in the history of robotics and artificial intelligence.

Reference: https://ift.tt/bOfku9C

Rugged, Micro Data Centers Bring Rural Reliability




Rural connectivity is still a huge issue. As of 2022, approximately 28 percent of Americans living in rural areas did not have access to broadband Internet, which the Federal Communications Commission (FCC) then defined as download speeds of at least 25 megabits per second and upload speeds of at least 3 megabits per second. In 2024, the FCC raised that benchmark to 100 Mb/s down and 20 Mb/s up, increasing the number of people whose connections don’t meet the definition.

One potential solution to the problem is small, rugged data centers with relatively old, redundant components, placed strategically in rural areas so that crucial data can be stored locally and network providers can route through them, providing redundancy.

“We are not the big AI users,” said Doug Recker, the CEO of Duos Edge AI, in a talk delivered at the Data Center World conference in Washington, D.C. earlier this month. “We’re still trying to resolve the problem from 20 years ago. These aren’t high-bandwidth or high-power data centers. We don’t need them out there. We just need better connectivity. We need robust networks.”

The Jacksonville, Florida-based startup provides small data centers (about the size of a shipping container) to rural areas, mostly in the Texas panhandle. They recently added such a data center in Amarillo, working with the local school district to provide more robust connectivity to students. The school district runs its learning platform on Amazon Web Services (AWS) and can now host that platform locally in the data center.

Previously, data had to travel to and from Dallas, over 500 kilometers away. Network outages were a common occurrence, impeding student learning. Recker’s company paid the upfront cost of US $1.2 million to $1.5 million to build the 15-cabinet data center, which it calls a pod. Duos is making the money back by charging a monthly usage and maintenance fee (between $1,800 and $3,000 per shelf) to the school district and other customers.

The company follows a “build what’s needed and they will come” approach. Once the data center is installed, Recker says, existing network providers co-locate there, providing redundancy and reliability to the customers. The pod provides a seed around which network providers can build a hub-and-spoke-type network.

3 Requirements for Edge Data Centers

The trick to making these edge data centers financially profitable is minimizing their energy usage and maximizing their reliability. To minimize energy use, Duos uses relatively old, time-tested equipment. For reliability, every piece of equipment is duplicated, including uninterruptible power supply batteries, generators, and air conditioning units.
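That duplication buys more reliability than it might first appear. Here is a quick sketch of the availability math; the 99 percent figure is an illustrative assumption, not a Duos specification:

# Availability math for duplicated equipment, assuming independent failures.
# The 99 percent availability figure is an assumption for illustration.
HOURS_PER_YEAR = 8_760
unit_availability = 0.99  # one AC unit, UPS, or generator on its own

pair_availability = 1 - (1 - unit_availability) ** 2  # both must fail at once

print(f"single unit: {unit_availability:.4f} "
      f"(~{(1 - unit_availability) * HOURS_PER_YEAR:.0f} hours down per year)")
print(f"redundant pair: {pair_availability:.4f} "
      f"(~{(1 - pair_availability) * HOURS_PER_YEAR * 60:.0f} minutes down per year)")

# Doubling each component turns roughly 88 hours of expected downtime a
# year into under an hour, which is why every pod component is duplicated.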

Duos also has to locate the pods in places with enough potential customers to justify building a 15-rack pod (the equipment is rented out per rack).

The pods are unmanned, but efficient and timely maintenance is key. “Say your AC unit goes down at two in the morning,” Recker says. “It’s redundant, but you don’t want it to be down, so you have to dispatch somebody who can get into a pod at two o’clock in the morning.” Duos has a system for dispatching maintenance workers, and an auditing standard that remotely keeps track of all the work that has been done or needs to be done on each piece of equipment. Each pod also has a clean room to prevent maintenance workers from tracking in dust or dirt from outside while they work on repairs.

The compact data center allows the Amarillo school district to have affordable and reliable connectivity for their digital learning platform. Students will soon have access to AI-powered tools, simulations, and real-time data for their classes. “The pod enables that to happen because they can compute on site and host that environment on site where they couldn’t do it before because of the latency issues,” says Recker.

Duos is also placing pods elsewhere in the Texas panhandle, as well as in Florida. And demand in Amarillo is so strong that the company plans to install a second pod. Recker says that although Duos initially built the pod in collaboration with the school district, other local institutions quickly became interested as well, including hospitals, utility companies, and farmers.

Reference: https://ift.tt/e9xBXAi

The end of an AI that shocked the world: OpenAI retires GPT-4


One of the most influential—and by some counts, notorious—AI models yet released will soon fade into history. OpenAI announced on April 10 that GPT-4 will be "fully replaced" by GPT-4o in ChatGPT at the end of April, bringing a public-facing end to the model that accelerated a global AI race when it launched in March 2023.

"Effective April 30, 2025, GPT-4 will be retired from ChatGPT and fully replaced by GPT-4o," OpenAI wrote in its April 10 changelog for ChatGPT. While ChatGPT users will no longer be able to chat with the older AI model, the company added that "GPT-4 will still be available in the API," providing some reassurance to developers who might still be using the older model for various tasks.

The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models. Its release created a wave of immense hype—and existential panic—about AI's ability to imitate human communication and composition.

Reference: https://ift.tt/gnJedcm

Tuesday, April 29, 2025

Is China Pulling Ahead in the Quest for Fusion Energy?




In the rocky terrain of China’s Sichuan province, a massive X-shaped building is quickly rising, its crisscrossed arms stretching outward in a bold, futuristic design. From a satellite’s view, it could be just another ambitious megaproject in a country known for building fast and thinking big. But to some observers of Chinese tech development, it’s yet more evidence that China may be on the verge of pulling ahead in one of the most consequential technological races of our time: the quest to achieve commercial nuclear fusion.

Fusion—the process that powers stars—promises nearly limitless clean energy, without the radioactive waste and meltdown risk of fission. But building a reactor that can sustain fusion requires an extraordinary level of scientific and engineering precision.

The X-shaped facility under construction in Mianyang, Sichuan, appears to be a massive laser-based fusion facility; its four long arms, likely laser bays, could focus intense energy on a central chamber. Analysts who’ve examined satellite imagery and procurement records say it resembles the U.S. National Ignition Facility (NIF), but is significantly larger. Others have speculated that it could be a massive Z-pinch machine—a fusion-capable device that uses an extremely powerful electrical current to compress plasma into a narrow, dense column.

“Even if China is not ahead right now,” says Decker Eveleth, an analyst at the research nonprofit CNA Corp., “when you look at how quickly they build things, and the financial willpower to build these facilities at scale, the trajectory is not favorable for the U.S.”


Other Chinese plasma physics programs have also been gathering momentum. In January, researchers at the Experimental Advanced Superconducting Tokamak (EAST)—nicknamed the “Artificial Sun”—reported maintaining plasma at over 100 million degrees Celsius for more than 17 minutes. (A tokamak is a donut-shaped device that uses magnetic fields to confine plasma for nuclear fusion.) Operational since 2006, EAST is based in Hefei, in Anhui province, and serves as a testbed for technologies that will feed into next-generation fusion reactors.

Not far from EAST, the Chinese government is building the Comprehensive Research Facility for Fusion Technology (CRAFT), a 40-hectare complex that will develop the underlying engineering for future fusion machines. Results from EAST and CRAFT will feed into the design of the China Fusion Engineering Test Reactor (CFETR), envisioned as a critical bridge between experimental and commercial fusion power. The engineering design of CFETR was completed in 2020 and calls for using high-temperature superconducting magnets to scale up what machines like EAST have begun.

Meanwhile, on Yaohu Science Island in Nanchang, in central China, the national government is preparing to launch Xinghuo—the world’s first fusion-fission hybrid power plant. Slated for grid connection by 2030, the reactor will use high-energy neutrons from fusion reactions to trigger fission in surrounding materials, boosting overall energy output and potentially reducing long-lived radioactive waste. Xinghuo aims to generate 100 megawatts of continuous electricity, enough to power approximately 83,000 U.S.-size homes.

Why China is doubling down on fusion

Why such an aggressive push? Fusion energy aligns neatly with three of China’s top priorities: securing domestic energy, reducing carbon emissions, and winning the future of high technology—a pillar of President Xi Jinping’s “great rejuvenation” agenda.

“Fusion is a next-generation energy technology,” says Jimmy Goodrich, a senior advisor for technology analysis at RAND. “Whoever masters it will gain enormous advantages—economically, strategically, and from a national security perspective.”

The lengthy development required to commercialize fusion also aligns with China’s political economy. Fusion requires patient capital. The Chinese government doesn’t need to answer to voters or shareholders, and so it’s uniquely suited to fund fusion R&D and wait for a payoff that may take decades.

In the United States, by contrast, fusion momentum has shifted away from government-funded projects to private companies like Helion, Commonwealth Fusion Systems, and TAE Technologies. These fusion startups have captured billions in venture capital, riding a wave of interest from tech billionaires hoping to power, among other things, the data centers of an AI-driven future. But that model has vulnerabilities. If demand for energy-hungry data centers slows or market sentiment turns, funding could dry up quickly.

“The future of fusion may come down to which investment model proves more resilient,” says Goodrich. “If there’s a slowdown in AI or data center demand, U.S. [fusion] startups could see funding evaporate. In contrast, Chinese fusion firms are unlikely to face the same risk, as sustained government support can shield them from market turbulence.”

The talent equation is shifting, too. In March, plasma physicist Chang Liu left the Princeton Plasma Physics Laboratory to join a fusion program at Peking University, where he’d earned his undergraduate degree. At the Princeton lab, Liu had pioneered a promising method to reduce the impact of damaging runaway electrons in tokamak plasmas.


Liu’s move exemplifies a broader trend, says Goodrich. “When the Chinese government prioritizes a sector for development, a surge of financing and incentives quickly follows,” he says. “For respected scientists and engineers in the U.S. or Europe, the chance to [move to China to] see their ideas industrialized and commercialized can be a powerful draw.”

Meanwhile, China is growing its own talent. Universities and labs in Hefei, Mianyang, and Nanchang are training a generation of physicists and engineers to lead in fusion science. Within a decade, China could have a vast, self-sustaining pipeline of experts.

The scale and ambition of China’s fusion effort are hard to miss. Analysts say the facility in Mianyang could be 50 percent larger than NIF, which in 2022 became the first fusion energy project to achieve scientific breakeven—producing 3.15 megajoules of energy from a 2.05-megajoule input, an energy gain of roughly 1.5.

There are military implications as well. Eveleth notes that while the Mianyang project could aid energy research, it also will boost China’s ability to simulate nuclear weapons tests. “Whether it’s a laser fusion facility or a Z-pinch machine, you’re looking at a pretty significant increase in Chinese capability to conduct miniaturized weapons experiments and boost their understanding of various materials used within weapons,” says Eveleth.

These new facilities are likely to surpass U.S. capabilities for certain kinds of weapons development, Eveleth warns. While Los Alamos and other U.S. national labs make do with aging infrastructure, China is building from scratch and installing the latest technologies in shiny new buildings.

The United States still leads in scientific creativity and startup diversity, but the U.S. fusion effort remains comparatively fragmented. During the Biden administration, the U.S. government invested about $800 million annually in fusion research. China, according to the U.S. Department of Energy, is investing up to $1.5 billion per year—although some analysts say that the amount could be twice as high.

Fusion is a marathon, not a sprint—and China is pacing itself to win. Backed by a coordinated national strategy, generous funding, and a rapidly expanding talent base, Beijing isn’t just chasing fusion energy—it’s positioning itself to dominate the field.

“It’s a Renaissance moment for advanced energy in China,” says Goodrich, who contends that unless the United States ramps up public investment and support, it may soon find itself looking eastward at the future of fusion. The next few years will be decisive, he and others say. Reactors are rising. Scientists are relocating. Timelines are tightening. Whichever nation first harnesses practical fusion energy won’t just light up cities. It may also reshape the balance of global power.

Reference: https://ift.tt/8MDTA31

AI-generated code could be a disaster for the software supply chain. Here’s why.


AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.

The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.

Package hallucination flashbacks

These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to access the wrong component dependency, for instance by publishing a malicious package and giving it the same name as the legitimate one but with a later version stamp. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.
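One simple defense is to verify that every dependency an LLM suggests actually exists in the official registry before installing it. Here is a minimal sketch of that idea for Python packages; it queries PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json), which returns HTTP 404 for names that were never published, and the requirements parsing is deliberately simplistic:

# Sketch: flag requirements entries that don't exist on PyPI, a cheap
# first check against installing a hallucinated package name.
import sys
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if `name` is a published package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False   # no such package -- possibly hallucinated
        raise              # any other HTTP error is unexpected

if __name__ == "__main__":
    # Usage: python check_deps.py requirements.txt
    for line in open(sys.argv[1]):
        name = line.split("==")[0].split(">=")[0].strip()
        if name and not name.startswith("#") and not package_exists(name):
            print(f"WARNING: '{name}' not found on PyPI")

Of course, once an attacker has registered a hallucinated name, that package will pass this check, so existence testing is only a first filter alongside version pinning and hash verification.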

Reference: https://ift.tt/WVtepw8

Monday, April 28, 2025

Backblaze responds to claims of “sham accounting,” customer backups at risk


Backblaze is dismissing allegations from a short seller that it engaged in “sham accounting” that could put the cloud storage and backup solution provider and its customers' backups in jeopardy.

On April 24, Morpheus Research posted a lengthy report accusing the San Mateo, California-based firm of practicing “sham accounting and brazen insider dumping.” The claims largely stem from a pair of lawsuits filed against Backblaze by former employees Huey Hall [PDF] and James Kisner [PDF] in October. Per LinkedIn profiles, Hall was Backblaze’s head of finance from March 2020 to February 2024, and Kisner was Backblaze’s VP of investor relations and financial planning from May 2021 to November 2023.

As Morpheus wrote, the lawsuits accuse Backblaze’s founders of participating in “an aggressive trading plan to sell 10,000 shares a day, along with other potential sales from early employee holders, against ‘all external capital markets advice.’” The plan allegedly started in April 2022, after the IPO lockup period expired and despite advisor warnings, including one from a capital markets consultant that such a trading plan likely breached Backblaze’s fiduciary duties.

Reference: https://ift.tt/jbD6dMI

Put an Old-School BBS on Meshtastic Radio




In the 1980s and 1990s, online communities formed around tiny digital oases called bulletin-board systems. Often run out of people’s homes and accessible by only one or two people at a time via dial-up modems, these BBSs let people exchange public and private messages, play games, and share files using simple menus and a text-based interface. Today, there is an uptick in interest in BBSs as a way to create idiosyncratic digital spaces away from the glare of large social-media platforms like Facebook, X, and Bluesky. Modern BBSs are typically accessed over the Internet rather than via dial-up connections. But their old standalone mojo can be recaptured thanks to one of the hottest new radio technologies: Meshtastic.

Indeed, this article is really the latest installment in what has become an accidental series that I’ll call “Climbing the LoRa Stack.” LoRa first appeared on Hands On’s radar in 2020, when enthusiasts realized that the long-range, low-bandwidth protocol had a lot of potential beyond just machine-to-machine Internet of Things connections, such as building person-to-person text messagers. Then last year we talked about the advent of Meshtastic, which adds mesh-networking capabilities to LoRa, allowing devices to autonomously create wireless networks and exchange data over a much larger area. In that article, I wondered what kind of interesting applications might be built on top of Meshtastic—and that brings us to today.

Created by the Comms Channel, the open source TC2-BBS software was first released last summer. It’s a set of Python scripts that relies on just two additional libraries: one for talking to Meshtastic radios over a USB connection and one that helps manage internal data traffic. TC2-BBS doesn’t require a lot of computing power because the low-bandwidth limits of LoRa mean it’s never handling much data at any given time. All of this means the BBS code is very portable and you can run it on something as low-powered as a Raspberry Pi Zero.

The BBS system uses a WisBlock Meshtastic radio with a status display [middle left and center], which can communicate wirelessly using LoRa and Bluetooth antennas [top]. A servo moves a physical flag under the control of an Arduino Nano [middle right and bottom], while a Raspberry Pi runs the BBS Python software. [Illustration: James Provost]

The current TC2-BBS feature set is minimal, albeit under active development. There’s no option for sharing files, the interface is basic even by BBS standards, and there are no “door games,” which let visitors play what were typically turn-based text adventures or strategy games. On the other hand TC2-BBS does have some features from the more advanced bulletin-board systems of yore, such as the ability to store-and-forward email among other BBSs, similar to the FidoNet network, which flourished in the early 1990s until it was supplanted by the Internet. And in a nod to the whimsy of door games, the TC2-BBS system does have an option that lets users ask for a fortune-cookie-style aphorism, à la the Unix fortune command. And of course, anyone can access it at any time without having to worry about a busy phone line.

I installed the software on a spare Raspberry Pi 3, following the simple instructions on GitHub. There is a Docker image, but because I was dedicating this Pi to the BBS, I just installed it directly. For the radio hardware, I hooked the Pi up to a RAKwireless WisBlock, which runs Meshtastic out of the box. In addition to a LoRa antenna, the WisBlock also has a Bluetooth antenna that allows for easy configuration of the radio via a smartphone app.


The biggest hiccup was power: Normally the WisBlock radio is powered via its USB connection, but my attached Pi couldn’t meet the radio’s needs without triggering low-voltage warnings. So I powered the WisBlock separately through a connector normally reserved for accepting juice from a solar panel.

Soon I had IEEE Spectrum’s TC2-BBS up and running and happily talking via Meshtastic with a HelTXT communicator I’d bought for my earlier Hands On experiments. Now anyone within three hops of Spectrum’s midtown Manhattan office on New York City’s emerging Meshtastic network can leave a message by sending “hello” to our node, advertised on the Meshtastic network as IEEE Spectrum BBS.

But of course, just like the BBSs of old, it was going to take a while for people to realize it was there and start leaving messages. I could monitor the BBS for visitors via a display connected to the Pi, but after a little poking around in the Python scripts, I realized I could do something more fun. By using the RPi.GPIO library and adding a few lines of code at the point where the BBS stores board messages in memory, I set the Pi to pulse one of its general-purpose input/output (GPIO) pins on and off for a moment every time a new message was posted.
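The hook itself amounts to just a few lines of RPi.GPIO code. Here is a sketch of the idea; the pin number is arbitrary, and store_message is a hypothetical stand-in for wherever TC2-BBS saves a new post, not the actual function name:

import time
import RPi.GPIO as GPIO

FLAG_PIN = 18  # BCM numbering; any free GPIO pin will do

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLAG_PIN, GPIO.OUT, initial=GPIO.LOW)

def signal_new_message():
    """Pulse the pin so the attached Arduino Nano raises its flag."""
    GPIO.output(FLAG_PIN, GPIO.HIGH)
    time.sleep(0.2)  # long enough for the Nano to catch the pulse
    GPIO.output(FLAG_PIN, GPIO.LOW)

def store_message(board, sender, text):  # hypothetical BBS hook point
    # ...existing TC2-BBS logic that saves the post goes here...
    signal_new_message()  # added: notify the hardware flag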

The Raspberry Pi sends and receives serial data from the WisBlock Meshtastic radio, and it sends pulses via its GPIO header to the Arduino Nano when a post is added to the bulletin-board database. When the Nano receives a signal, it raises a physical flag until the reset button is pushed. [Illustration: James Provost]

I fished an Arduino Nano out of my drawer and hooked it up to a servo, a push button, and the Pi’s GPIO pin. The Nano listens for an incoming pulse from the Pi. When the Nano hears one, it moves the arm of the servo through 90 degrees, raising a little red flag. Pressing the button to acknowledge the flag lowers the notification flag again and the Nano resumes listening for another pulse. This eliminates the need to keep the Pi plugged into a display, and I can check to see what the new message is via my HelTXT radio or smartphone.

So please, if you’re in New York City and have a Meshtastic radio, drop by our new/old digital watering hole and leave a message! As for me, I’m going to keep climbing up the LoRa stack and see if I can write one of those door games.

Reference: https://ift.tt/90aDxyM

iOS and Android juice jacking defenses have been trivial to bypass for years


About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.

“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone.

An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries. While the charger was ostensibly only providing electricity to the phone, it was also secretly downloading files or running malicious code on the device behind the scenes. Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.

Reference: https://ift.tt/eBcio0T

Sunday, April 27, 2025

How to Avoid Ethical Red Flags in Your AI Projects




As a computer scientist who has been immersed in AI ethics for about a decade, I’ve witnessed firsthand how the field has evolved. Today, a growing number of engineers find themselves developing AI solutions while navigating complex ethical considerations. Beyond technical expertise, responsible AI deployment requires a nuanced understanding of ethical implications.

In my role as IBM’s AI ethics global leader, I’ve observed a significant shift in how AI engineers must operate. They are no longer just talking to other AI engineers about how to build the technology. Now they need to engage with those who understand how their creations will affect the communities using these services. Several years ago at IBM, we recognized that AI engineers needed to incorporate additional steps into their development process, both technical and administrative. We created a playbook providing the right tools for testing issues like bias and privacy. But understanding how to use these tools properly is crucial. For instance, there are many different definitions of fairness in AI. Determining which definition applies requires consultation with the affected community, clients, and end users.

In her role at IBM, Francesca Rossi cochairs the company’s AI ethics board to help determine its core principles and internal processes. [Photo: Francesca Rossi]

Education plays a vital role in this process. When piloting our AI ethics playbook with AI engineering teams, one team believed their project was free from bias concerns because it didn’t include protected variables like race or gender. They didn’t realize that other features, such as zip code, could serve as proxies correlated to protected variables. Engineers sometimes believe that technological problems can be solved with technological solutions. While software tools are useful, they’re just the beginning. The greater challenge lies in learning to communicate and collaborate effectively with diverse stakeholders.
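A toy experiment makes the zip-code pitfall concrete. In the sketch below, all data is synthetic and every number is invented for illustration: the "model" never sees the protected attribute, yet its approval rates still split along group lines because a zip-code feature stands in for it:

# Synthetic illustration of proxy bias: the score uses only zip_code and
# income, never the protected attribute, yet outcomes diverge by group.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)          # protected attribute (held out)
zip_code = (group + rng.random(n) > 0.8)    # proxy feature correlated with group
income = rng.normal(50 + 10 * zip_code, 5)  # another innocent-looking feature

# A stand-in "model": approve applicants whose score clears the median
score = 0.5 * zip_code + 0.01 * income
approved = score > np.median(score)

for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.2f}")

# The gap between the two printed rates is a simple demographic-parity
# check -- one of the many fairness definitions mentioned above.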

The pressure to rapidly release new AI products and tools may create tension with thorough ethical evaluation. This is why we established centralized AI ethics governance through an AI ethics board at IBM. Often, individual project teams face deadlines and quarterly results, making it difficult for them to fully consider broader impacts on reputation or client trust. Principles and internal processes should be centralized. Our clients—other companies—increasingly demand solutions that respect certain values. Additionally, regulations in some regions now mandate ethical considerations. Even major AI conferences require papers to discuss ethical implications of the research, pushing AI researchers to consider the impact of their work.

At IBM, we began by developing tools focused on key issues like privacy, explainability, fairness, and transparency. For each concern, we created an open-source tool kit with code guidelines and tutorials to help engineers implement them effectively. But as technology evolves, so do the ethical challenges. With generative AI, for example, we face new concerns about potentially offensive or violent content creation, as well as hallucinations. As part of IBM’s family of Granite models, we’ve developed safeguarding models that evaluate both input prompts and outputs for issues like factuality and harmful content. These model capabilities serve both our internal needs and those of our clients.


Company governance structures must remain agile enough to adapt to technological evolution. We continually assess how new developments like generative AI and agentic AI might amplify or reduce certain risks. When releasing models as open source, we evaluate whether this introduces new risks and what safeguards are needed.

For AI solutions raising ethical red flags, we have an internal review process that may lead to modifications. Our assessment extends beyond the technology’s properties (fairness, explainability, privacy) to how it’s deployed. Deployment can either respect human dignity and agency or undermine it. We conduct risk assessments for each technology use case, recognizing that understanding risk requires knowledge of the context in which the technology will operate. This approach aligns with the European AI Act’s framework—it’s not that generative AI or machine learning is inherently risky, but certain scenarios may be high or low risk. High-risk use cases demand additional scrutiny.

In this rapidly evolving landscape, responsible AI engineering requires ongoing vigilance, adaptability, and a commitment to ethical principles that place human well-being at the center of technological innovation.

Reference: https://ift.tt/Bb3QTmE

Friday, April 25, 2025

New study shows why simulated reasoning AI models don’t yet live up to their billing


There's a curious contradiction at the heart of today's most capable AI models that purport to "reason": They can solve routine math problems with impressive accuracy, yet when faced with formulating deeper mathematical proofs found in competition-level challenges, they often fail.

That's the finding of eye-opening preprint research into simulated reasoning (SR) models, first posted in March and updated in April, that mostly flew under the news radar. The research serves as an instructive case study on the mathematical limitations of SR models, despite sometimes grandiose marketing claims from AI vendors.

What sets simulated reasoning models apart from traditional large language models (LLMs) is that they have been trained to output a step-by-step "thinking" process (often called "chain-of-thought") to solve problems. Note that "simulated" in this case doesn't mean that the models do not reason at all but rather that they do not necessarily reason using the same techniques as humans. That distinction is important because human reasoning itself is difficult to define.

Reference: https://ift.tt/7ZoXYjx

FBI offers $10 million for information about Salt Typhoon members


The FBI is offering $10 million for information about the China-state hacking group tracked as Salt Typhoon and its intrusion last year into sensitive networks belonging to multiple US telecommunications companies.

Salt Typhoon is one of a half-dozen or more hacking groups that work on behalf of the People’s Republic of China. Intelligence agencies and private security companies have concluded the group has been behind a string of espionage attacks designed to collect vital information, in part for use in any military conflicts that may arise in the future.

A broad and significant cyber campaign

The agency on Thursday published a statement offering up to $10 million, relocation assistance, and other compensation for information about Salt Typhoon. The announcement sought information about specific members of Salt Typhoon and the group's compromise of multiple US telecommunications companies last year.

Reference: https://ift.tt/c4nVKdW

Video Friday: High Mobility Logistics




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Throughout the course of the past year, LEVA has been designed from the ground up as a novel robot to transport payloads. Although the use of robotics is widespread in logistics, few solutions offer the capability to efficiently transport payloads both in controlled and unstructured environments. Four-legged robots are ideal for navigating any environment a human can, yet few have the features to autonomously move payloads. This is where LEVA shines. By combining both wheels (a means of locomotion ideally suited for fast and precise motion on flat surfaces) and legs (which are perfect for traversing any terrain that humans can), LEVA strikes a balance that makes it highly versatile.

[ LEVA ]

You probably heard about this humanoid robot half-marathon in China, because it got a lot of media attention, which I presume was the goal. And for those of us who remember when ASIMO running was a big deal, marathon running is still impressive in some sense; it’s just hard to connect that to these robots doing anything practical, you know?

[ NBC ]

A robot navigating an outdoor environment with no prior knowledge of the space must rely on its local sensing to perceive its surroundings and plan. This can come in the form of a local metric map or local policy with some fixed horizon. Beyond that, there is a fog of unknown space marked with some fixed cost. In this work, we make a key observation that long-range navigation only necessitates identifying good frontier directions for planning instead of full map knowledge. To this end, we propose the Long Range Navigator (LRN), that learns an intermediate affordance representation mapping high-dimensional camera images to affordable frontiers for planning, and then optimizing for maximum alignment with the desired goal. Through extensive off-road experiments on Spot and a Big Vehicle, we find that augmenting existing navigation stacks with LRN reduces human interventions at test-time and leads to faster decision making indicating the relevance of LRN.

[ LRN ]

Goby is a compact, capable, programmable, and low-cost robot that lets you uncover miniature worlds from its tiny perspective.

On Kickstarter now, for an absurdly cheap $80.

[ Kickstarter ]

Thanks, Rich!

HEBI robots demonstrated inchworm mobility during the Innovation Faire of the FIRST Robotics World Championships in Houston.

[ HEBI ]

Thanks, Andrew!

Happy Easter from Flexiv!

[ Flexiv ]

We are excited to present our proprietary reinforcement learning algorithm, refined through extensive simulations and vast training data, enabling our full-scale humanoid robot, Adam, to master human-like locomotion. Unlike model-based gait control, our RL-driven approach grants Adam exceptional adaptability. On challenging terrains like uneven surfaces, Adam seamlessly adjusts stride, pace, and balance in real time, ensuring stable, natural movement while boosting efficiency and safety. The algorithm also delivers fluid, graceful motion with smooth joint coordination, minimizing mechanical wear, extending operational life, and significantly reducing energy use for enhanced endurance.

[ PNDbotics ]

Inside the GRASP Lab - Dr. Michael Posa and DAIR Lab. Our research centers on control, learning, planning, and analysis of robots as they interact with the world. Whether a robot is assisting within the home, or operating in a manufacturing plant, the fundamental promise of robotics requires touching and affecting a complex environment in a safe and controlled fashion. We are focused on developing computationally tractable and data efficient algorithms which enable robots to operate both dynamically and safely as they quickly maneuver through and interact with their environments.

[ DAIR Lab ]

I will never understand why robotics companies feel the need to add the sounds of sick actuators when their robots move.

[ Kepler ]

Join Matt Trossen, founder of Trossen Robotics, on a time-traveling teardown through the evolution of our robotic arms! In this deep dive, Matt unboxes the ghosts of robots past—sharing behind-the-scenes stories, bold design decisions, lessons learned, and how the industry itself has shifted gears.

[ Trossen ]

This week’s CMU RI seminar is a retro edition (2008!) from Charlie Kemp, previously of the Healthcare Robotics Lab at Georgia Tech and now at Hello Robot.

[ CMU RI ]

This week’s actual CMU RI seminar is from a much more modern version of Charlie Kemp.

When I started in robotics, my goal was to help robots emulate humans. Yet as my lab worked with people with mobility impairments, my notions of success changed. For assistive applications, emulation of humans is less important than ease of use and usefulness. Helping with seemingly simple tasks, such as scratching an itch or picking up a dropped object, can make a meaningful difference in a person’s life. Even full autonomy can be undesirable, since actively directing a robot can provide a sense of independence and agency. Overall, many benefits of robotic assistance derive from nonhuman aspects of robots, such as being tireless, directly controllable, and free of social characteristics that can inhibit use.
While technical challenges abound for home robots that attempt to emulate humans, I will provide evidence that human-scale mobile manipulators could benefit people with mobility impairments at home in the near future. I will describe work from my lab and Hello Robot that illustrates opportunities for valued assistance at home, including supporting activities of daily living, leading exercise games, and strengthening social connections. I will also present recent progress by Hello Robot toward unsupervised, daily in-home use by a person with severe mobility impairments.

[ CMU RI ]

Reference: https://ift.tt/NEHnLY5

In the age of AI, we must protect human creativity as a natural resource


Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media, risking drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.


Reference: https://ift.tt/wdEuyOK

Thursday, April 24, 2025

New Android spyware is targeting Russian military personnel on the front lines


Russian military personnel are being targeted with recently discovered Android malware that steals their contacts and tracks their location.

The malware is hidden inside a modified app for Alpine Quest mapping software, which is used by, among others, hunters, athletes, and Russian personnel stationed in the war zone in Ukraine. The app displays various topographical maps for use online and offline. The trojanized Alpine Quest app is being pushed on a dedicated Telegram channel and in unofficial Android app repositories. The chief selling point of the trojanized app is that it provides a free version of Alpine Quest Pro, which is usually available only to paying users.

Looks like the real thing

The malicious module is named Android.Spy.1292.origin; researchers at Russia-based security firm Dr.Web analyzed it in a blog post.


Reference: https://ift.tt/VDe3Eqj

IEEE Standards Development Pioneer Koepfinger Dies at 99

Joseph Koepfinger

Developed standards for electric power systems

Life Fellow, 99; died 6 January

Koepfinger was an active volunteer with the American Institute of Electrical Engineers (AIEE), an IEEE predecessor society. He made significant contributions to the fields of surge protection and electric power engineering.

In the early 1950s he took part in a three-year task force studying distribution circuit reliability as a member of AIEE’s surge protective devices committee (SPDC), according to his ArresterWorks biography.

In 1955 he helped revise AIEE Standard 32, on neutral grounding devices, and was part of a team that developed guidelines for power transformer loadings.

In the 1960s he became chair of the SPDC and initiated efforts to develop standards for low-voltage surge protectors. Later, Koepfinger served on the IEEE Standards Association Board and contributed to the development of IEEE standards for lightning arresters and surge protectors.

He received several awards for his work in standards development, including the IEEE Standards Association’s first Lifetime Achievement Award in 2011 and the 1989 IEEE Charles Proteus Steinmetz Award. In 2008 he was inducted into the Surge Protection Hall of Fame, a tribute webpage honoring engineers who have contributed to the field.

Koepfinger had a 60-year career at Duquesne Light, in Pittsburgh, retiring in 2000 as director of its system studies and research department. After retirement, he continued to serve as a technical advisor for the International Electrotechnical Commission, a standards organization.

He received bachelor’s and master’s degrees in electrical engineering from the University of Pittsburgh in 1949 and 1953.

Bruce E. Arnold

Electrical engineer

Life member, 81; died 16 January

Arnold was an electrical engineer and computer support specialist. He began his career in 1967 at sewing machine manufacturer Singer in New York City. As supervisor of electrical design and electromechanical equipment, he developed new electronic and motor package subsystems for high-volume consumer sewing machines.

Arnold left Singer in 1983 to join Revlon, a New York City–based cosmetics company, as director of electrical engineering. There he designed electronic and pneumatic systems for automated manufacturing and robotic automation.

Ten years later he changed careers, becoming a computer support specialist at Degussa Corp., a chemical manufacturing company in Piscataway, N.J. Degussa is now part of Evonik.

Arnold retired in 2006 and became a consultant.

He received a bachelor’s degree in electrical engineering in 1969 from the Newark College of Engineering (now the New Jersey Institute of Technology). He earned a master’s degree in EE in 1975 from NJIT.


William Hayes Kersting

Electrical engineering professor

Life Fellow, 88; died 7 January

Kersting taught electrical engineering for 40 years at his alma mater, New Mexico State University, in Las Cruces.

During his tenure, he established the university’s electric utility management program. He published more than 70 academic research articles. He also wrote Distribution System Modeling and Analysis, a textbook that is widely used in graduate programs worldwide.

He was an active volunteer of the IEEE Power & Energy Society, serving on its education committee and distribution systems analysis subcommittee.

Kersting received the Edison Electric Institute’s 1979 Power Engineering Education Award.


Richard A. Olsen

Human factors engineer

Life member, 90; died 7 November

Olsen made significant contributions to aerospace defense technologies and transportation safety. He specialized in human factors engineering, a field that focuses on designing products, systems, and environments that are safe, efficient, and easy for people to use. While working as a human factors engineer at the Lockheed Missiles and Space Co. in Sunnyvale, Calif., he contributed to early guidelines for computer-human interaction.

He helped build the first-generation Image Data Exploitation (IDEX) system, used by intelligence agencies and the military to analyze digital imagery.

After receiving a bachelor’s degree in physics in 1955 from Union College, in Schenectady, N.Y., he enlisted in the U.S. Navy. Olsen attended the Navy’s Officer Candidate School, in Newport, R.I., before being assigned to a destroyer in December 1956. He left active duty three years later.

In 1960 he joined Hughes Aircraft Co., a defense contractor in Fullerton, Calif., as a field engineer. He helped develop radar systems and worked at the Navy Electronics Laboratory’s Fleet Anti-Air Warfare Training Center, in San Diego, on the USS Enterprise and USS Mahan. He later was promoted to lead field engineer and worked at the Mare Island Naval Shipyard, in Vallejo, Calif.

Olsen moved to Pennsylvania in 1964 to attend graduate school at Pennsylvania State University in State College. After earning master’s and doctoral degrees in experimental psychology in 1966 and 1970, he joined Penn State’s Larson Transportation Institute as director of its human factors research program. Four years later, he became an assistant professor at the university’s industrial and management systems engineering department.

He left Penn State in 1980 to join Lockheed. After retiring from the company in 1990, he served as an expert witness for 14 years, testifying in several hundred accident-investigation cases.

He was a member of the Association for the Advancement of Automotive Medicine, the National Academy of Engineering’s Transportation Research Board, and SAE (the Society of Automotive Engineers). He was a Fellow of the Human Factors and Ergonomics Society and edited one of its newsletters, The Forvm.


Jo Edward Davidson

Communications engineer

Life senior member, 87; died 24 April 2024

Davidson’s work as an electrical engineer shaped several key communications technologies, including early GPS development.

He was instrumental in installing cellular networks in Argentina, Nigeria, and the Philippines. He wrote about his career in his memoir, Far From the Flagpole: An Electrical Engineer Tells His Story.

He served in the U.S. Air Force from 1959 to 1965, attaining the rank of second lieutenant. After he was discharged, he worked at several companies including Eastman Kodak, Scientific Atlanta, and BellSouth International.

He contributed to several satellite communications and network projects while working at Alcatel and Globalstar, both in Memphis. He retired from Globalstar in 2000 as director of satellite network systems.

Davidson received a bachelor’s degree in engineering in 1963 from Arizona State University, in Tempe.

Reference: https://ift.tt/LvVtF3O

Amazon’s Vulcan Robots Are Mastering Picking Packages

As far as I can make out, Amazon’s warehouses are highly structured, extremely organized, very tidy, absolute raging messes. Everything i...