Wednesday, April 2, 2025

AI bots strain Wikimedia as bandwidth surges 50%


On Tuesday, the Wikimedia Foundation announced that relentless AI scraping is putting strain on Wikipedia's servers. Automated bots seeking training data for large language models have been vacuuming up terabytes of content, growing the foundation's bandwidth used for downloading multimedia by 50 percent since January 2024. It’s a scenario familiar across the free and open source software (FOSS) community, as we've previously detailed.

The Foundation hosts not only Wikipedia but also platforms like Wikimedia Commons, which offers 144 million media files under open licenses. For decades, this content has powered everything from search results to school projects. But since early 2024, AI companies have dramatically increased automated scraping through direct crawling, APIs, and bulk downloads to feed their hungry AI models. This exponential growth in non-human traffic has imposed steep technical and financial costs—often without the attribution that helps sustain Wikimedia’s volunteer ecosystem.

The impact isn’t theoretical. The foundation says that when former US President Jimmy Carter died in December 2024, his Wikipedia page predictably drew millions of views. But the real stress came when users simultaneously streamed a 1.5-hour video of a 1980 debate from Wikimedia Commons. The surge doubled Wikimedia’s normal network traffic, temporarily maxing out several of its Internet connections. Wikimedia engineers quickly rerouted traffic to reduce congestion, but the event revealed a deeper problem: The baseline bandwidth had already been consumed largely by bots scraping media at scale.


Reference: https://ift.tt/Kzx2sD8

Nvidia Blackwell Ahead in AI Inference, AMD Second




In the latest round of machine learning benchmark results from MLCommons, computers built around Nvidia’s new Blackwell GPU architecture outperformed all others. But AMD’s latest spin on its Instinct GPUs, the MI325X, proved a match for the Nvidia H200, the product it was meant to counter. The comparable results came mostly on tests of Llama2 70B (for 70 billion parameters), one of the smaller-scale large language models. However, in an effort to keep up with a rapidly changing AI landscape, MLPerf added three new benchmarks to better reflect where machine learning is headed.

MLPerf runs benchmarking for machine learning systems in an effort to provide an apples-to-apples comparison between computer systems. Submitters use their own software and hardware, but the underlying neural networks must be the same. There are a total of 11 benchmarks for servers now, with three added this year.

It has been “hard to keep up with the rapid development of the field,” says Miro Hodak, the co-chair of MLPerf Inference. ChatGPT appeared only in late 2022, OpenAI unveiled its first large language model (LLM) that can reason through tasks just last September, and LLMs have grown exponentially—GPT-3 had 175 billion parameters, while GPT-4 is thought to have nearly 2 trillion. As a result of the breakneck innovation, “we’ve increased the pace of getting new benchmarks into the field,” says Hodak.

The new benchmarks include two LLMs. The popular and relatively compact Llama2-70B is already an established MLPerf benchmark, but the consortium wanted something that mimicked the responsiveness people are expecting of chatbots today. So the new benchmark “Llama2-70B Interactive” tightens the requirements. Computers must produce at least 25 tokens per second under any circumstance and cannot take more than 450 milliseconds to begin an answer.
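The two interactive limits amount to a simple per-request check. The sketch below encodes the 450-millisecond and 25-tokens-per-second thresholds cited above; the sample measurements are invented for illustration.

```python
# Check whether a single request satisfies the Llama2-70B Interactive
# limits described above: time to first token <= 450 ms and a sustained
# rate of at least 25 tokens per second.

def meets_interactive_limits(ttft_ms: float, tokens: int, total_s: float,
                             max_ttft_ms: float = 450.0,
                             min_tok_per_s: float = 25.0) -> bool:
    """Return True if both latency constraints hold for one request."""
    return ttft_ms <= max_ttft_ms and tokens / total_s >= min_tok_per_s

# Hypothetical request: first token after 380 ms, 900 tokens in 30 s.
print(meets_interactive_limits(380, 900, 30.0))   # 30 tok/s, passes
print(meets_interactive_limits(500, 900, 30.0))   # first token too slow
```

A real benchmark run aggregates such checks across thousands of queries, but the pass/fail logic per query is this simple.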

Seeing the rise of “agentic AI”—networks that can reason through complex tasks—MLPerf sought to test an LLM that would have some of the characteristics needed for that. They chose Llama3.1 405B for the job. That LLM has what’s called a wide context window. That’s a measure of how much information—documents, samples of code, etc.—it can take in at once. For Llama3.1 405B that’s 128,000 tokens, more than 30 times as much as Llama2 70B.

The final new benchmark, called RGAT, is what’s called a graph attention network. It acts to classify information in a network. For example, the dataset used to test RGAT consists of 2 terabytes of scientific papers, all linked by relationships among authors, institutions, and fields of study. RGAT must classify the papers into just under 3,000 topics.
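The core operation of a graph attention network, attention-weighted aggregation over a node's neighbors, can be sketched in a few lines. The tiny graph, feature sizes, and random weights below are invented for illustration; the actual RGAT benchmark runs on the 2-terabyte citation dataset described above.

```python
import numpy as np

# Toy single-head graph attention: each node mixes its neighbors'
# projected features, weighted by learned attention scores.
rng = np.random.default_rng(0)
num_nodes, feat_in, feat_out = 4, 8, 6

X = rng.normal(size=(num_nodes, feat_in))    # node features (e.g. papers)
W = rng.normal(size=(feat_in, feat_out))     # shared projection matrix
a = rng.normal(size=(2 * feat_out,))         # attention parameter vector
adj = np.array([[1, 1, 0, 1],                # adjacency with self-loops
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 0, 1, 1]], dtype=bool)

H = X @ W                                    # project all node features
# Attention logit e_ij = LeakyReLU(a . [h_i || h_j]) for each edge.
logits = np.full((num_nodes, num_nodes), -np.inf)
for i in range(num_nodes):
    for j in range(num_nodes):
        if adj[i, j]:
            z = a @ np.concatenate([H[i], H[j]])
            logits[i, j] = np.where(z > 0, z, 0.2 * z)

alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)    # softmax over each neighborhood
H_out = alpha @ H                            # attention-weighted aggregation

print(H_out.shape)   # one new feature vector per node
```

A classifier head on top of such aggregated features is what would assign each paper to one of the topic labels.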

Blackwell, Instinct Results

Nvidia continued its domination of MLPerf benchmarks through its own submissions and those of some 15 partners such as Dell, Google, and Supermicro. Both its first and second generation Hopper architecture GPUs—the H100 and the memory-enhanced H200—made strong showings. “We were able to get another 60 percent performance over the last year” from Hopper, which went into production in 2022, says Dave Salvator, director of accelerated computing products at Nvidia. “It still has some headroom in terms of performance.”

But it was Nvidia’s Blackwell architecture GPU, the B200, that really dominated. “The only thing faster than Hopper is Blackwell,” says Salvator. The B200 packs in 36 percent more high-bandwidth memory than the H200, but more importantly it can perform key machine-learning math using numbers with a precision as low as 4 bits instead of the 8 bits Hopper pioneered. Lower precision compute units are smaller, so more fit on the GPU, which leads to faster AI computing.
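To see why fewer bits help, here is a minimal sketch of symmetric 4-bit integer quantization. This is plain int4 rounding for illustration only; Blackwell's actual low-precision format is a 4-bit floating-point type, but the payoff is the same: each value occupies far less silicon.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Map float weights onto the 16 levels of a signed 4-bit integer."""
    scale = np.abs(w).max() / 7.0          # int4 spans -8..7; use +/-7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = np.array([0.91, -0.42, 0.07, -1.30], dtype=np.float32)
q, s = quantize_int4(w)
w_hat = dequantize(q, s)
print(q)                          # 16-level integer codes
print(np.abs(w - w_hat).max())    # worst-case quantization error
```

The rounding error is the price paid; the win is that 4-bit multiply units are small enough to pack many more of them onto one die.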

In the Llama3.1 405B benchmark, an eight-B200 system from Supermicro delivered nearly four times the tokens per second of an eight-H200 system from Cisco. And the same Supermicro system was three times as fast as the quickest H200 computer at the interactive version of Llama2-70B.

Nvidia used its combination of Blackwell GPUs and Grace CPU, called GB200, to demonstrate how well its NVL72 data links can integrate multiple servers in a rack, so they perform as if they were one giant GPU. In an unverified result the company shared with reporters, a full rack of GB200-based computers delivers 869,200 tokens/s on Llama2 70B. The fastest system reported in this round of MLPerf was an Nvidia B200 server that delivered 98,443 tokens/s.

AMD is positioning its latest Instinct GPU, the MI325X, as providing competitive performance to Nvidia’s H200. MI325X has the same architecture as its predecessor MI300 but adds even more high-bandwidth memory and memory bandwidth—288 gigabytes and 6 terabytes per second (a 50 percent and 13 percent boost respectively).
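Those percentages are consistent with the predecessor's commonly cited figures, assuming the MI300X's 192 GB of HBM and 5.3 TB/s of memory bandwidth (specs not stated in the article):

```python
# Quick arithmetic check of the quoted boosts, assuming MI300X specs
# of 192 GB of HBM and 5.3 TB/s of memory bandwidth.
mem_boost = (288 - 192) / 192 * 100     # memory capacity increase, percent
bw_boost = (6.0 - 5.3) / 5.3 * 100      # bandwidth increase, percent
print(round(mem_boost), round(bw_boost))
```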

Adding more memory is a play to handle larger and larger LLMs. “Larger models are able to take advantage of these GPUs because the model can fit in a single GPU or a single server,” says Mahesh Balasubramanian, director of data center GPU marketing at AMD. “So you don’t have to have that communication overhead of going from one GPU to another GPU or one server to another server. When you take out those communications your latency improves quite a bit.” AMD was able to take advantage of the extra memory through software optimization to boost the inference speed of DeepSeek-R1 8-fold.

On the Llama2-70B test, an eight-GPU MI325X computer came within 3 to 7 percent of the speed of a similarly tricked-out H200-based system. And on image generation, the MI325X system was within 10 percent of the Nvidia H200 computer.

AMD’s other noteworthy mark this round came from its partner Mangoboost, which demonstrated nearly fourfold performance on the Llama2-70B test by distributing the computation across four computers.

Intel has historically put forth CPU-only systems in the inference competition to show that for some workloads you don’t really need a GPU. This round saw the first data from Intel’s Xeon 6 chips, which were formerly known as Granite Rapids and are made using Intel’s 3-nanometer process. At 40,285 samples per second, the best image recognition result for a dual-Xeon 6 computer was about one-third the performance of a Cisco computer with two Nvidia H100s.

Compared to Xeon 5 results from October 2024, the new CPU provides about an 80 percent boost on that benchmark and an even bigger boost on object detection and medical imaging. Since it first started submitting Xeon results in 2021 (the Xeon 3), the company has achieved an 11-fold boost in performance on ResNet.

For now, it seems Intel has quit the field in the AI accelerator chip battle. Its alternative to the Nvidia H100, Gaudi 3, did not make an appearance in the new MLPerf results nor in version 4.1, released last October. Gaudi 3 got a later than planned release because its software was not ready. In the opening remarks at Intel Vision 2025, the company’s invite-only customer conference, newly minted CEO Lip-Bu Tan seemed to apologize for Intel’s AI efforts. “I’m not happy with our current position,” he told attendees. “You’re not happy either. I hear you loud and clear. We are working toward a competitive system. It won’t happen overnight, but we will get there for you.”

Google’s TPU v6e chip also made a showing, though the results were restricted to the image generation task. At 5.48 queries per second, the 4-TPU system saw a 2.5x boost over a similar computer using its predecessor, TPU v5e, in the October 2024 results. Even so, 5.48 queries per second was roughly in line with a similarly sized Lenovo computer using Nvidia H100s.

Reference: https://ift.tt/EB6OTga

Four Ways Engineers Are Trying to Break Physics




In particle physics, the smallest problems often require the biggest solutions.

Along the border of France and Switzerland, around a hundred meters underneath the countryside, protons speed through a 27-kilometer ring—about seven times the length of the Indy 500 circuit—until they crash into protons going in the opposite direction. These particle pileups produce a petabyte of data every second, the most interesting of which is poured into data centers, accessible to thousands of physicists worldwide.

The Large Hadron Collider (LHC), arguably the largest experiment ever engineered, is needed to probe the universe’s smallest constituents. In 2012, two teams at the LHC discovered the elusive Higgs boson, the particle whose existence confirmed 50-year-old theories about the origins of mass. It was a scientific triumph that led to a Nobel Prize and worldwide plaudits.

Since then, experiments at the LHC have focused on better understanding how the newfound Higgs fits into the Standard Model, particle physicists’ best theoretical description of matter and forces—minus gravity. “The Standard Model is beautiful,” says Victoria Martin, an experimental physicist at the University of Edinburgh. “Because it’s so precise, all the little niggles stand out.”

The Large Hadron Collider lives in a 27-kilometer tunnel ring, about 100 meters underneath France and Switzerland. It was used to discover the Higgs boson, but further research may require something larger still. Maximilien Brice/CERN

The minor quibbles physicists have about the Standard Model could be explained by new particles: Dark matter, the invisible material whose gravity shapes the universe, is thought to be made of heretofore undiscovered particles. But such new particles may be out of reach for the LHC, even after it undergoes upgrades that are set to be completed later this decade. To address these lingering questions, particle physicists have been planning its successors. These next-generation colliders will improve on the LHC by smashing protons at higher energies or by making more precise collisions with muons, antimuons, electrons, and positrons. In doing so, they’ll allow researchers to peek into a whole new realm of physics.

Martin herself is particularly interested in learning more about the Higgs, and learning exactly how the particle responsible for mass behaves. One possible find: Properties of the Higgs suggest that the universe might not be stable in the long, long term. [Editor’s note: About 10⁷⁹⁰ years. Other problems may be more pressing.] “We don’t really know exactly what we’re going to find,” Martin says. “But that’s okay, because it’s science, it’s research.”

There are four main proposals for new colliders, and each one comes with its own slew of engineering challenges. To build them, engineers would need to navigate tricky regional geology, design accelerating cavities, handle the excess heat within the cavities, and develop powerful new magnets to whip the particles through these cavities. But perhaps more daunting are the geopolitical obstacles: coordinating multinational funding commitments and slogging through bureaucratic muck.

Collider projects take years to plan and billions of dollars to finance. The fastest that any of the four machines would come on line is the late 2030s. But now is when physicists and engineers are making key scientific and engineering decisions about what’s coming next.

Supercolliders at a glance


Large Hadron Collider

Size (circumference): 27 kilometers

Collision energy: 13,600 giga-electron volts

Colliding particles: protons and ions

Luminosity: 2 × 10³⁴ collisions per square centimeter per second (5 × 10³⁴ for high-luminosity upgrade)

Location: Switzerland–France border

Start date: 2008–

International Linear Collider

Size (length): 31 km

Collision energy: 500 GeV

Colliding particles: electrons and positrons

Luminosity (at peak energy): 3 × 10³⁴ collisions per cm² per second

Location: Iwate, Japan

Earliest start date: 2038


Muon collider

Size (circumference): 4.5 km (or 10 km)

Collision energy: 3,000 GeV (or 10,000 GeV)

Colliding particles: muons and antimuons

Luminosity: 2 × 10³⁵ collisions per cm² per second

Location: possibly Fermilab

Earliest start date: 2045 (or in the mid-2050s)


Future Circular Collider-ee | FCC-hh

Size (circumference): 91 km

Collision energy: 240 GeV | 85,000 GeV

Colliding particles: electrons and positrons | protons

Luminosity: 8.5 × 10³⁴ | 30 × 10³⁴ collisions per cm² per second

Location: Switzerland–France border

Earliest start date: 2046 | 2070


Circular Electron Positron Collider | Super proton–proton Collider (SPPC)

Size (circumference): 100 km

Collision energy: 240 GeV | 100,000 GeV

Colliding particles: electrons and positrons | protons

Luminosity: 8.3 × 10³⁴ | 13 × 10³⁴ collisions per cm² per second

Location: China

Earliest start date: 2035 | 2060s

Possible supercolliders of the future

The LHC collides protons and other hadrons. Hadrons are like beanbags, full of quarks and gluons, that spray around everywhere upon collision.

Next-generation colliders have two ways to improve on the LHC: They can go to higher energies or higher precision. Higher energies provide more data by producing more particles—potentially new, heavy ones. Higher-precision collisions give physicists cleaner data with a better signal-to-noise ratio because the particle crash produces less debris. Either approach could reveal new physics beyond the Standard Model.

Three of the new colliders would improve on the LHC’s precision by colliding electrons and their antimatter counterparts, positrons, instead of hadrons. These particles are more like individual marbles—much lighter, and not made up of any smaller constituents. Compared with the collisions between messy, beanbag-like hadrons, a collision between electrons and positrons is much cleaner. After taking data for years, some of those colliders could be converted to smash protons as well, though at energies about eight times as high as those of the LHC.

These new colliders range from technically mature to speculative. One such speculative option is to smash muons, electrons’ heavier cousins, which have never been collided before. In 2023, an influential panel of particle physicists recommended that the US pursue development of such a machine, in a so-called ‘muon shot’. If it is built, a muon collider would likely be based at Fermilab, the center of particle physics in the United States.

A muon collider “can bring us outside of the world that we know,” says Daniele Calzolari, a physicist working on muon collider design at CERN, the European Organization for Nuclear Research. “We don’t know exactly how everything will look like, but we believe we can make it work.”

While muon colliders have remained conceptual for more than 50 years, their potential has long excited and intrigued physicists. Muons are heavy compared with electrons, almost as heavy as protons, but they lack the mess of quarks and gluons, so collisions between muons could be both high energy and high precision.

Superconducting radio-frequency cavities are used in particle colliders to apply electric fields to charged particles, speeding them up toward each other until they smash together. Newer methods of making these cavities are seamless, providing more-precise steering and, presumably, better collisions. Reidar Hahn/Fermi

The trouble is that muons decay rapidly—in a mere 2.2 microseconds while at rest—so they have to be cooled, accelerated, and collided before they expire. Preliminary studies suggest a muon collider is possible, but key technologies, like powerful high-field solenoid magnets used for cooling, still need to be developed. In March 2025, Calzolari and his colleagues submitted an internal proposal for a preliminary demonstration of the cooling technology, which they hope will happen before the end of the decade.

The accelerator that could theoretically come online the soonest is the International Linear Collider (ILC) in Iwate, Japan. The ILC would send electrons and positrons down straight tunnels where the particles would collide to produce Higgs bosons that are easier to detect than at the LHC. The collider’s design is technically mature, so if the Japanese government officially approved the project, construction could begin almost immediately. But after multiple delays by the government, the ILC remains in a sort of planning purgatory, looking more and more unlikely.

The Standard Model of particle physics is the current best theory of all the understood matter and forces in our universe (except gravity). The model works extremely well, but scientists also know that it is incomplete. The next generation of supercolliders might give a glimpse at what’s beyond the Standard Model.

That leaves the two technically mature colliders with perhaps the clearest path to construction: China’s Circular Electron Positron Collider (CEPC) and CERN’s Future Circular Collider (FCC-ee).

CERN’s FCC-ee would be a 91-km ring, designed to initially collide electrons and positrons to study the parameters of particles like the Higgs in fine detail (the “ee” indicates collisions between electrons and positrons). Compared with the LHC’s collisions of protons or heavy ions, those between electrons and positrons “are much cleaner, so you can have a more precise measurement,” says Michael Benedikt, the head of the FCC-ee effort. After about a decade of operation—enough time to gather data and develop the needed magnets—it would be upgraded to collide protons and search for new physics at much higher energies (and then become known as the FCC-hh, for hadrons). The FCC-ee’s feasibility report just concluded, and CERN’s member states are now left deciding whether to pursue the project.

China’s CEPC would be a similar 100-km ring designed to collide electrons and positrons for the first 18 years or so. And much like the FCC, a proton or other hadron upgrade would follow. Later this year, Chinese researchers plan to submit the CEPC for official approval by the Chinese government as part of the next five-year plan. As the two colliders (and their proton upgrades) are considered for construction in the next few years, policymakers will be thinking about more than just their potential for discovery.

CEPC and FCC-ee are, in this sense, less abstract physics experiments and more engineering projects with concrete design challenges.

Laying the groundwork

When particles zip around the curve of a collider, they lose energy—much like a car braking on a racetrack. The effect is particularly pronounced for lightweight particles like electrons and positrons. To reduce this energy loss from sharp turns, CEPC and FCC-ee are both planned to have enormous tunnels, which, if built, would be among the longest in the world. The construction cost of such an enormous tunnel would be several billion U.S. dollars, roughly one-third of the total collider price.
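The scale of that energy loss can be estimated with the textbook approximation for electron synchrotron radiation: loss per turn grows with the fourth power of beam energy and shrinks with bending radius. The 120-GeV beam energy and the use of circumference divided by 2π as a stand-in for the true bending radius are simplifying assumptions here, so treat the results as order-of-magnitude illustrations.

```python
import math

def loss_per_turn_gev(energy_gev: float, radius_m: float) -> float:
    """Approximate energy an electron radiates per turn, in GeV.

    Uses the standard electron-synchrotron formula
    dE ~ 8.85e-5 * E^4 / rho  (E in GeV, rho in meters).
    """
    return 8.85e-5 * energy_gev**4 / radius_m

# Compare a 27-km ring (LHC-size) with a 91-km ring (FCC-ee-size)
# for a 120-GeV electron beam.
for name, circ_km in [("27-km ring", 27), ("91-km ring", 91)]:
    rho = circ_km * 1000 / (2 * math.pi)   # crude bending radius
    print(name, round(loss_per_turn_gev(120, rho), 2), "GeV per turn")
```

The fourth-power energy dependence is why gentler curves, and hence much longer tunnels, pay off so heavily for light particles.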

Finding a place to bury a 90-km ring is not easy, especially in Switzerland. The proposed path of the FCC-ee has an average depth of 200 meters, with a dip to 500 meters under Lake Geneva, fit snugly between the Jura Mountains to the northwest and the Prealps to the east. The land there was once covered by a sea, which left behind sedimentary rock—a mixture of sandstone and shale known as molasse. “We’ve done so much tunneling at CERN before. We were quite confident about the molasse rock,” says Liam Bromiley, a civil engineer at CERN.

But the FCC-ee’s path also takes it through deposits of limestone, which is permeable and can hold karsts, or cavities, full of water. “If you hit one of those, you could end up flooding the tunnel,” Bromiley says. During the next two years, if the project is green-lit, engineers will drill boreholes into the limestone to determine whether there are karsts that can be avoided.

FCC-ee would be a 91-km ring spanning underneath Switzerland and France, near the current Large Hadron Collider. One of the proposed locations for the CEPC is near the northern port city of Qinhuangdao, where the 100-km-circumference collider would be buried underground. Chris Philpot

CEPC, in contrast, has a much looser spatial constraint, and can choose from nearly anywhere in China. Three main sites are being considered: Qinhuangdao (a northern port city), Changsha (a metropolis in central China), and Huzhou (a coastal city near Shanghai). According to Jie Gao, a particle physicist at the Institute of High Energy Physics, in Beijing, the ideal location will have hard rock, like granite, and low seismic activity. Additionally, Gao says, they want a site with good infrastructure to create a “science city” ideal for an international community of physicists.

The colliders’ carbon footprints are also on the minds of physicists. One potential energy-saving measure: redirecting excess heat from operations. “In the past we used to throw it into the atmosphere,” Benedikt says. In recent years, heated water from one of the LHC’s cooling stations has kept part of the commune of Ferney-Voltaire warm during the winters, and Benedikt says the FCC-ee would expand these environmental efforts.

Getting up to speed

If the civil-engineering challenges are met, physicists will rely on a spate of technologies to accelerate, focus, and collide electrons and positrons at CEPC and FCC-ee more precisely and efficiently than they could at the LHC.

When both types of particles are first produced from their sources, they start off at a comparatively low energy, around 4 giga-electron volts. To get them up to speed, electrons and positrons are sent through superconducting radio-frequency (SRF) cavities—gleaming metal bubbles strung together like beads of a necklace, which apply an electric field that pushes the charged particles forward.

Both China’s Circular Electron Positron Collider (CEPC) [bottom] and CERN’s Future Circular Collider (FCC-ee) [top] have preliminary designs of the insides of their tunnels, including the collider itself, associated vacuum and control equipment, and detectors. Chris Philpot

In the past, SRF cavities were welded together, which inherently left imperfections that led to beam instabilities. “You can never obtain a perfect surface along this weld,” Benedikt says. FCC-ee researchers have explored several techniques to create cavities without seams, including hydroforming, which is widely used for the components of high-end sports cars. A metal tube is placed in a pressurized cell and compressed against a die by liquid. The resulting cavity has no seams and is smooth as blown glass.

To improve efficiency, engineers focus on the machines that power the SRF cavities, machines called klystrons. Klystrons have historically had efficiencies that peak around 65 percent, but design advances, such as the machines’ ability to bunch electrons together, are on track to reach efficiencies of 80 percent. “The efficiency of the klystron is becoming very important,” Gao says. Over 10 years of operation, these savings could amount to 1 terawatt hour—about enough electricity to power all of China for an hour.
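The comparison at the end of that claim checks out under one assumption not stated in the article: that China's annual electricity consumption is roughly 9,000 terawatt-hours, a ballpark recent figure.

```python
# Sanity check: is 1 TWh about an hour of China's electricity use,
# assuming annual consumption of roughly 9,000 TWh?
HOURS_PER_YEAR = 8760
china_twh_per_hour = 9000 / HOURS_PER_YEAR
print(round(china_twh_per_hour, 2))   # close to 1 TWh per hour
```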

Another efficiency boost comes from focusing on the tunnel design. As electrons and positrons follow the curve of the ring, they will lose a considerable amount of energy, so SRF cavities will be placed around the ring to boost particle energies. The lost energy will be emitted as potent synchrotron radiation—about 10,000 times as much radiation as is emitted by protons circling the LHC today. “You do not want to send the synchrotron radiation into the detectors,” Benedikt says. To avoid this fate, neither FCC-ee nor CEPC will be perfectly circular. Shaped a bit like a racetrack, both colliders will have about 1.5-km-long straight sections before an interaction point. Other options are also on the table—in the past, researchers have even used repurposed steel from scrapped World War II battleships to shield particle detectors from radiation.

Both CEPC and FCC-ee will be massive data-generating machines. Unlike the LHC, which is regularly stopped to insert new particles, the next-generation colliders will be fed with a continuous stream of particles, allowing them to stay in “collision mode” and take more data.

At a collider, data is a function of “luminosity”—the number of collisions a machine can pack into a square centimeter each second. The more particle collisions, the “brighter” the collider. Firing particles at each other is a little like trying to get two bullets to collide—they often miss each other, which limits the luminosity. But physicists have a variety of strategies to squeeze more electrons and positrons into smaller areas to achieve more of these unlikely collisions. Compared with the Large Electron-Positron (LEP) collider of the 1990s, the new machines will produce 100,000 times as many Z bosons—particles responsible for radioactive decay. More Z bosons means more data. “The FCC-ee can produce all the data that were accumulated in operation over 10 years of LEP within minutes,” Benedikt says.
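To see what luminosity buys, multiply it by a process's cross-section to get an expected event rate. The 50-picobarn figure below is a rough, illustrative Higgs-scale cross-section, not a number from the article.

```python
# Event rate from luminosity: rate = L * sigma.
PB_TO_CM2 = 1e-36                      # 1 picobarn = 1e-36 cm^2

def event_rate(lum_cm2_s: float, sigma_pb: float) -> float:
    """Expected events per second for luminosity L and cross-section sigma."""
    return lum_cm2_s * sigma_pb * PB_TO_CM2

# At the LHC's ~2e34 cm^-2 s^-1, a 50-pb process yields about one event
# per second; doubling the luminosity doubles the rate.
print(event_rate(2e34, 50))
```

This linear relationship is why a 100,000-fold jump in Z-boson luminosity translates directly into a 100,000-fold jump in data.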

Back to protons

While both the FCC-ee and CEPC would start with electrons and positrons, they are designed to eventually collide protons. These upgrades are called FCC-hh and Super proton-proton Collider (SPPC). Using protons, FCC-hh and SPPC would reach a collision energy of 100,000 GeV, roughly an order of magnitude higher than the LHC’s 13,600 GeV. Though the collisions would be messy, their high energy would allow physicists to “explore fully new territory,” Benedikt says. While there’s no guarantee, physicists hope that territory teems with discoveries-in-waiting, such as dark-matter particles, or strange new collisions where the Higgs recursively interacts with itself many times.

One pro of protons is that they are over 1,800 times as heavy as electrons, so they emit far less radiation as they follow the curve of the collider ring. But this extra heft comes with a substantial cost: Bending protons’ paths requires even stronger superconducting magnets.

Magnet development has been the downfall of colliders before. In the early 1980s, a planned collider named Isabelle was scrapped because magnet technology was not far enough along. The LHC’s magnets are made from a strong alloy of niobium-titanium, wound together into a coil that produces magnetic fields when subjected to a current. These coils can produce field strengths over 8 teslas. The strength of the magnet pushes its two halves apart with a force of nearly 600 tons per meter. “If you have an abrupt movement of the turns in the coil by as little as 10 micrometers,” the entire magnet can fail, says Bernhard Auchmann, an expert on magnets at CERN.


Future magnets for FCC-hh and SPPC will need to have at least twice the magnetic field strength, about 16 to 20 T, pushing the limits of materials and physics. Auchmann points to three possible paths forward. The most straightforward option might be “niobium three tin” (Nb3Sn). Substituting tin for titanium allows the metal to host magnetic fields up to 16 T but makes it quite brittle, so you can’t “clamp the hell out of it,” Auchmann says. One possible solution involves placing Nb3Sn into a protective steel endoskeleton that prevents it from crushing itself.

Then there are high-temperature superconductors. Some magnets made with rare earth metals can exceed 20 T, but they too are fragile and require similar steel supports. Currently, these materials are expensive, but demand from fusion startups, which also require these types of magnets, may push the price down, Auchmann says.

Finally, there is a class of iron-based high-temperature superconductors that is being championed by physicists in China, thanks to the low price of iron and manufacturing-process improvements. “It’s cheap,” Gao says. “This technology is very promising.” Over the next decade or so, physicists will work on each of these materials, and hope to settle on one direction for next-generation magnets.

Time and money

While FCC-ee and CEPC (as well as their proton upgrades) share many of the same technical specifications, they differ dramatically in two critical factors: timelines and politics.

Construction for CEPC could begin in two years; the FCC-ee would need to wait about another decade. The difference comes down largely to the fact that CERN has a planned upgrade to the LHC—enabling it to collect 10 times as much data—which will consume resources until nearly 2040. China, by contrast, is investing heavily in basic research and has the funds immediately at hand.

The abstruse physics that happens at colliders is never as far from political realities on Earth as it seems. Japan’s ILC is in limbo because of budget issues. The muon collider is subject to the whims of the highly divided 119th U.S. Congress. Last year, a representative for Germany criticized the FCC-ee for being unaffordable, and CERN continues to struggle with the politics of including Russian scientists. Tensions between China and the United States are similarly on the rise following the Trump administration’s tariffs.

How physicists plan to tackle these practical problems remains to be seen. But it is unlikely that any collider—whether based in China, at CERN, the United States, or Japan—will be able to go it alone. In addition to the tens of billions of dollars for construction and operation of the new facility, the physics expertise needed to run it and perform complex experiments at scale must be global. “By definition, it’s an international project,” Gao says. “The door is wide open.”

Reference: https://ift.tt/r4fmJqu

Complex Haptics Deliver a Pinch, a Stretch, or a Tap




Most haptic interfaces today are limited to simple vibrations. While visual displays and audio systems have continued to progress, those using our sense of touch have largely stagnated. Now, researchers have developed a haptics system that creates more complex tactile feedback. Beyond just buzzing, the device simulates sensations like pinching, stretching, and tapping for a more realistic experience.

“The sensation of touch is the most personal connection that you can have with another individual,” says John Rogers, a professor at Northwestern University in Evanston, Illinois, who led the project. “It’s really important, but it’s much more difficult than audio or video.”

Co-led by Rogers and Yonggang Huang, also a professor at Northwestern, the work is largely geared toward medical applications. But the technology could serve a wide range of other uses, from virtual or augmented reality to letting online shoppers feel the texture of clothing fabric and other items. The research was published in the journal Science on 27 March.

A Nuanced Sense of Touch

Today’s haptic interfaces mostly rely on vibrating actuators, which are fairly simple to construct. “It’s a great place to start,” says Rogers. But going beyond vibration could help add the vibrancy of real-world interactions to the technology, he adds.

These types of interactions require more sophisticated mechanical forces, which include a combination of both normal forces directed perpendicular to the skin’s surface and shear forces directed parallel. Whether through vibration or applying pressure, forces directed vertically into the skin have been the main focus of haptic designs, according to Rogers. But these don’t fully engage the many receptors embedded in our skin.

The researchers aimed to build an actuator that offers full freedom of motion, which they achieved with “very old physics,” Rogers says—namely, electromagnetism. The basic design of the device consists of three nested copper coils and a small magnet. Running current through the coils generates a magnetic field that then moves the magnet, which delivers force to the skin.
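The scale of that coil-and-magnet force can be sketched with textbook magnetostatics. The snippet below is illustrative only: it uses the on-axis field of a circular coil (Biot–Savart) and the small-magnet dipole approximation F = m · dB/dz, with invented coil dimensions and magnet moment, not the geometry of the Northwestern device.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def on_axis_field(current, radius, z, turns=1):
    """On-axis magnetic field of a circular coil (teslas), from the Biot-Savart law."""
    return MU0 * turns * current * radius**2 / (2 * (radius**2 + z**2) ** 1.5)

def axial_dipole_force(moment, current, radius, z, turns=1, dz=1e-7):
    """Axial force on a small magnet (dipole approximation): F = m * dB/dz."""
    dBdz = (on_axis_field(current, radius, z + dz, turns)
            - on_axis_field(current, radius, z - dz, turns)) / (2 * dz)
    return moment * dBdz

# Illustrative numbers: a 100-turn, 5-mm-radius coil at 0.2 A, and a magnet
# with a 0.01 A*m^2 moment sitting 2 mm above the coil plane.
force = axial_dipole_force(moment=0.01, current=0.2, radius=5e-3, z=2e-3, turns=100)
```

Reversing the coil current flips the sign of the force, letting an actuator built this way push as well as pull; driving coils with different axes, as in the nested design, adds in-plane (shear) components.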

“What we’ve put together is an engineering embodiment [of the physics] that provides a very compact force delivery system and offers full programmability in direction, amplitude, and temporal characteristics,” says Rogers. For a more elaborate setup, the researchers also developed a version that uses a collection of four magnets with different orientations of north and south poles. This creates even more complex sensations of pinching, stretching, and twisting.

Haptics at Your Fingertips—Or Anywhere

Because fingertips are highly sensitive, only small forces are needed for this application. John A. Rogers/Northwestern University

Although much of the previous work in haptics has focused on fingertips and the hands, these devices could be placed elsewhere on the body, including the back, chest, or arms. However, these applications may have different requirements. Compared to places like the back, the fingertips are highly sensitive—both in terms of the force needed and the spatial density of receptors.

“The fingertips are probably the most challenging in terms of density, but they’re easiest in terms of the forces that you need to deliver,” says Rogers. In other use cases, delivering enough power may be a challenge, he acknowledges.

The force possible may also be limited by the size of the coils, says Gregory Gerling, a systems engineering professor at the University of Virginia and former chair of the IEEE Technical Committee on Haptics. The coil size dictates how much force you can generate, and at a certain point, the device won’t be wearable. However, he believes it is sufficient for VR applications.

Gerling, an IEEE senior member, finds the use of magnetism in multiple directions interesting. Compared to other approaches that are based on hydraulics or air pressure, this system doesn’t require pumping fluids or gases. “You can be kind of untethered,” Gerling says. “Overall, it’s a very interesting, novel device, and maybe it takes the field in a slightly new direction.”

Applications in VR, Neuropathy, and More

The clearest application of the device is probably in virtual or augmented reality, says Rogers. These environments now have highly sophisticated audio and video inputs, “but the tactile component of that experience is still a work in progress,” he says.

Their lab, however, is primarily focused on medical applications, including sensory substitution for patients who have lost sensation in a part of the body. A complex haptics interface could reproduce the sensation in another part of the body.

For example, nerve damage in people with diabetic neuropathy makes it difficult for them to walk without looking at their feet. The lab is experimenting with placing an array of pressure sensors into the base of these patients’ shoes, then reproducing the pattern of pressure using a haptic array mounted on their upper thighs, where they still have sensation. The researchers are working with a rehabilitation facility in Chicago to test the approach, mainly with this population.

Continuing to develop these medical applications will be a focus moving forward, says Rogers. In terms of engineering, he would like to further miniaturize the actuators to make dense arrays possible in regions of the body like the fingertips.

Feeling the Music

Additionally, the researchers explored the possibility of using the device to increase engagement in musical performances. Apart from perhaps feeling vibrations of the bass line, performances usually rely on sight and sound. Adding a tactile element could make for a more immersive experience, or help people with hearing impairment engage with the music.

With the current tech, basic vibrating actuators can change the frequency of vibration to match the notes being played. While this can convey a simple melody, it lacks the richness of different instruments and musical components.

The researchers’ full-freedom-of-motion actuator can convey a more vibrant rendering of the sound. Voice, guitar, and drums, for instance, can each be converted into a particular force delivery mechanism. As with vibration alone, the frequency of each force can be modulated to match the music. The experiment was exploratory, Rogers says, but exploits the advanced capabilities of the system.
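As a rough sketch of how such a music-to-haptics mapping might work, the snippet below assigns each instrument to a force mode and converts notes to drive frequencies using the standard MIDI convention. The instrument-to-mode assignments and the `haptic_command` format are hypothetical, not taken from the paper.

```python
# Hypothetical mapping from musical parts to actuator force modes;
# the real system's assignments are not specified in the article.
FORCE_MODE = {"voice": "normal", "guitar": "shear", "drums": "twist"}

def note_to_hz(midi_note):
    """Convert a MIDI note number to frequency (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def haptic_command(instrument, midi_note, amplitude):
    """Build one actuator command: which force mode, at what frequency and strength."""
    return {
        "mode": FORCE_MODE[instrument],
        "freq_hz": note_to_hz(midi_note),
        "amplitude": amplitude,  # normalized 0..1 drive level
    }
```

For example, `haptic_command("guitar", 69, 0.5)` would yield a shear-mode command at 440 Hz, while the drum track could drive the twisting mode at its own pitch and level.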

Reference: https://ift.tt/i3smYkV

Tuesday, April 1, 2025

Protecting Robots in Harsh Environments with Advanced Sealing Systems




This is a sponsored article brought to you by Freudenberg Sealing Technologies.

The increasing deployment of collaborative robots (cobots) in outdoor environments presents significant engineering challenges, requiring highly advanced sealing solutions to ensure reliability and durability. Unlike industrial robots that operate in controlled indoor environments, outdoor cobots are exposed to extreme weather conditions that can compromise their mechanical integrity. Maintenance robots used in servicing wind turbines, for example, must endure intense temperature fluctuations, high humidity, prolonged UV radiation exposure, and powerful wind loads. Similarly, agricultural robots operate in harsh conditions where they are continuously exposed to abrasive dust, chemically aggressive fertilizers and pesticides, and mechanical stresses from rough terrains.

To ensure these robotic systems maintain long-term functionality, sealing solutions must offer effective protection against environmental ingress, mechanical wear, corrosion, and chemical degradation. Outdoor robots must perform flawlessly in temperature ranges spanning from scorching heat to freezing cold while withstanding constant exposure to moisture, lubricants, solvents, and other contaminants. In addition, sealing systems must be resilient to continuous vibrations and mechanical shocks, which are inherent to robotic motion and can accelerate material fatigue over time.

Comprehensive Technical Requirements for Robotic Sealing Solutions

The development of sealing solutions for outdoor robotics demands an intricate balance of durability, flexibility, and resistance to wear. Robotic joints, particularly those in high-mobility systems, experience multidirectional movements within confined installation spaces, making the selection of appropriate sealing materials and geometries crucial. Traditional elastomeric O-rings, widely used in industrial applications, often fail under such extreme conditions. Exposure to high temperatures can cause thermal degradation, while continuous mechanical stress accelerates fatigue, leading to early seal failure. Chemical incompatibility with lubricants, fuels, and cleaning agents further contributes to material degradation, shortening operational lifespans.

Friction-related wear is another critical concern, especially in robotic joints that operate at high speeds. Excessive friction not only generates heat but can also affect movement precision. In collaborative robotics, where robots work alongside humans, such inefficiencies pose safety risks by delaying response times and reducing motion accuracy. Additionally, prolonged exposure to UV radiation can cause conventional sealing materials to become brittle and crack, further compromising their performance.

Advanced IPSR Technology: Tailored for Cobots

To address these demanding conditions, Freudenberg Sealing Technologies has developed a specialized sealing solution: Ingress Protection Seals for Robots (IPSR). Unlike conventional seals that rely on metallic springs for mechanical support, the IPSR design features an innovative Z-shaped geometry that dynamically adapts to the axial and radial movements typical in robotic joints.

Diagram of a robotic arm showing PSS, Simmerring, and MSS1 seal locations. Numerous seals are required in cobots, and these are exposed to high speeds and forces. Freudenberg Sealing Technologies

This unique structural design distributes mechanical loads more efficiently, significantly reducing friction and wear over time. While traditional spring-supported seals tend to degrade due to mechanical fatigue, the IPSR configuration eliminates this limitation, ensuring long-lasting performance. Additionally, the optimized contact pressure reduces frictional forces in robotic joints, thereby minimizing heat generation and extending component lifespans. This results in lower maintenance requirements, a crucial factor in applications where downtime can lead to significant operational disruptions.

Optimized Through Advanced Simulation Techniques

The development of IPSR technology relied extensively on Finite Element Analysis (FEA) simulations to optimize seal geometries, material selection, and surface textures before physical prototyping. These advanced computational techniques allowed engineers to predict and enhance seal behavior under real-world operational conditions.

FEA simulations focused on key performance factors such as frictional forces, contact pressure distribution, deformation under load, and long-term fatigue resistance. By iteratively refining the design based on simulation data, Freudenberg engineers were able to develop a sealing solution that balances minimal friction with maximum durability.

Furthermore, these simulations provided insights into how IPSR seals would perform under extreme conditions, including exposure to humidity, rapid temperature changes, and prolonged mechanical stress. This predictive approach enabled early detection of potential failure points, allowing for targeted improvements before mass production. By reducing the need for extensive physical testing, Freudenberg was able to accelerate the development cycle while ensuring high-performance reliability.

Material Innovations: Superior Resistance and Longevity

The effectiveness of a sealing solution is largely determined by its material composition. Freudenberg utilizes advanced elastomeric compounds, including Fluoroprene XP and EPDM, both selected for their exceptional chemical resistance, mechanical strength, and thermal stability.

Fluoroprene XP, in particular, offers superior resistance to aggressive chemicals, including solvents, lubricants, fuels, and industrial cleaning agents. Additionally, its resilience against ozone and UV radiation makes it an ideal choice for outdoor applications where continuous exposure to sunlight could otherwise lead to material degradation. EPDM, on the other hand, provides outstanding flexibility at low temperatures and excellent aging resistance, making it suitable for applications that require long-term durability under fluctuating environmental conditions.

To further enhance performance, Freudenberg applies specialized solid-film lubricant coatings to IPSR seals. These coatings significantly reduce friction and eliminate stick-slip effects, ensuring smooth robotic motion and precise movement control. This friction management not only improves energy efficiency but also enhances the overall responsiveness of robotic systems, an essential factor in high-precision automation.

Extensive Validation Through Real-World Testing

While advanced simulations provide critical insights into seal behavior, empirical testing remains essential for validating real-world performance. Freudenberg subjected IPSR seals to rigorous durability tests, including prolonged exposure to moisture, dust, temperature cycling, chemical immersion, and mechanical vibration.

Throughout these tests, IPSR seals consistently achieved IP65 certification, demonstrating their ability to effectively prevent environmental contaminants from compromising robotic components. Real-world deployment in maintenance robotics for wind turbines and agricultural automation further confirmed their reliability, with extensive wear analysis showing significantly extended operational lifetimes compared to traditional sealing technologies.

Safety Through Advanced Friction Management

In collaborative robotics, sealing performance plays a direct role in operational safety. Excessive friction in robotic joints can delay emergency-stop responses and reduce motion precision, posing potential hazards in human-robot interaction. By incorporating low-friction coatings and optimized sealing geometries, Freudenberg ensures that robotic systems respond rapidly and accurately, enhancing workplace safety and efficiency.

Tailored Sealing Solutions for Various Robotic Systems

Freudenberg Sealing Technologies provides customized sealing solutions across a wide range of robotic applications, ensuring optimal performance in diverse environments.

Automated Guided Vehicles (AGVs) operate in industrial settings where they are exposed to abrasive contaminants, mechanical vibrations, and chemical exposure. Freudenberg employs reinforced PTFE composites to enhance durability and protect internal components.

Diagram showing different sealing technologies in a device: PSS, Simmerring, MSS1, and eCON. Delta robots can perform complex movements at high speed. This requires seals that meet the high dynamic and acceleration requirements. Freudenberg Sealing Technologies

Delta robots, commonly used in food processing, pharmaceuticals, and precision electronics, require FDA-compliant materials that withstand rigorous cleaning procedures such as Cleaning-In-Place (CIP) and Sterilization-In-Place (SIP). Freudenberg utilizes advanced fluoropolymers that maintain structural integrity under aggressive sanitation processes.

A mechanical device with colored dots indicating PSS, Simmerring®, MSS1, and eCON components. Seals for SCARA robots must have high chemical resistance, compressive strength, and thermal resistance to function reliably in a variety of industrial environments. Freudenberg Sealing Technologies

SCARA robots benefit from Freudenberg’s Modular Plastic Sealing Concept (MPSC), which integrates sealing, bearing support, and vibration damping within a compact, lightweight design. This innovation optimizes robot weight distribution and extends component service life.

Six-axis robots used in automotive, aerospace, and electronics manufacturing require sealing solutions capable of withstanding high-speed operations, mechanical stress, and chemical exposure. Freudenberg’s Premium Sine Seal (PSS), featuring reinforced PTFE liners and specialized elastomer compounds, ensures maximum durability and minimal friction losses.

Continuous Innovation for Future Robotic Applications

Freudenberg Sealing Technologies remains at the forefront of innovation, continuously developing new materials, sealing designs, and validation methods to address evolving challenges in robotics. Through strategic customer collaborations, cutting-edge material science, and state-of-the-art simulation technologies, Freudenberg ensures that its sealing solutions provide unparalleled reliability, efficiency, and safety across all robotic platforms.

Reference: https://ift.tt/sFMCIO5

“The Doctor Will See Your Electronic Health Record Now”




Cheryl Conrad no longer seethes with the frustration that threatened to overwhelm her in 2006. As described in IEEE Spectrum, Cheryl’s husband, Tom, has a rare genetic disease that causes ammonia to accumulate in his blood. At an emergency room visit two decades ago, Cheryl told the doctors Tom needed an immediate dose of lactulose to avoid going into a coma, but they refused to medicate him until his primary doctor confirmed his medical condition hours later.

Making the situation more vexing was that Tom had been treated at that facility for the same problem a few months earlier, and no one could locate his medical records. After Tom’s recovery, Cheryl vowed to always have immediate access to them.

Today, Cheryl says, “Happily, I’m not involved anymore in lugging Tom’s medical records everywhere.” Tom’s two primary medical facilities use the same electronic health record (EHR) system, allowing doctors at both facilities to access his medical information quickly.

In 2004, President George W. Bush set an ambitious goal for U.S. health care providers to transition to EHRs by 2014. Electronic health records, he declared, would transform health care by ensuring that a person’s complete medical information was available “at the time and place of care, no matter where it originates.”

President George W. Bush looks at an electronic medical record system during a visit to the Cleveland Clinic on 27 January 2005. Brooks Kraft/Corbis/Getty Images

Over the next four years, a bipartisan Congress approved more than US $150 million in funding aimed at setting up electronic health record demonstration projects and creating the administrative infrastructure needed.

Then, in 2009, during efforts to mitigate the financial crisis, newly elected President Barack Obama signed the $787 billion economic stimulus bill. Part of it contained the Health Information Technology for Economic and Clinical Health Act, also known as the HITECH Act, which budgeted $49 billion to promote health information technology and EHRs in the United States.

As a result, Tom, like most Americans, now has an electronic health record. However, many millions of Americans now have multiple electronic health records. On average, patients in the United States visit 19 different kinds of doctors over their lives. Further, many specialists use unique EHR systems that do not automatically exchange medical data with one another, so patients must update their medical information for each one. Nevertheless, Tom now has immediate access to all his medical treatment and test information, something not readily available 20 years ago.

Tom’s situation underlines the paradox of how far the United States has come since 2004 and how far it still must go to achieve President Bush’s vision of a complete, secure, easily accessible, and seamlessly interoperable lifetime EHR.


As of 2021, nearly 80 percent of physicians and almost all nonfederal acute-care hospitals had deployed an electronic health record system.


Many patients in the United States today have simply traded fragmented paper medical record silos for a plethora of fragmented electronic ones. And thousands of health care providers are burdened with costly, poorly designed, and insecure EHR systems that have exacerbated clinician burnout, led to hundreds of millions of medical records lost in data breaches, and created new sources of medical errors.

EHR’s baseline standardization does help centralize a very fragmented health care system, but in the rush to get EHR systems adopted, key technological and security challenges were overlooked and underappreciated. Subsequently, problems were introduced due to the sheer complexity of the systems being deployed. These still-unresolved issues are now potentially coupled with the unknown consequences of bolting on immature AI-driven technologies. Unless more thought and care are taken now in how to proceed as a fully integrated health care system, we could unintentionally put the entire U.S. health care system in a worse place than when President Bush first declared his EHR goal in 2004.

IT to Correct Health Care Inefficiencies Is a Global Project

Putting government pressure on the health care industry to adopt EHR systems through various financial incentives made sense by the early 2000s. Health care in the United States was in deep trouble. Spending increased from $74.1 billion in 1970 to more than $1.4 trillion by 2000, 2.3 times as fast as the U.S. gross domestic product. Health care costs grew at three times the rate of inflation from 1990 to 2000 alone, surpassing 13 percent of GDP.

Two major studies conducted by the Institute of Medicine in 2000 and 2001, titled To Err Is Human and Crossing the Quality Chasm, found that health care was deteriorating in terms of accessibility, quality, and safety. Inferior quality and needless medical treatments, including overuse or duplication of diagnostic tests, underuse of effective medical practices, misuse of drug therapies, and poor communication between health care providers emerged as particularly frustrating problems.

Administrative waste and unnecessary expenditures were substantial cost drivers, from billing to resolving insurance claims to managing patients’ cases. Health care’s administrative side was characterized as a “monstrosity,” showing huge transaction costs associated with an estimated 30 billion communications conducted by mail, fax, or telephone annually at that time.

Both health care experts and policymakers agreed that improving health care delivery and reining in its costs were possible only by deploying health information technology such as electronic prescribing and EHRs. Early adopters of EHR systems like the Mayo Clinic, Cleveland Clinic, and the U.S. Department of Veterans Affairs proved the case. Governments across the European Union and the United Kingdom reached the same conclusion.

There has been a consistent push, especially in more economically advanced countries, to adopt EHR systems over the past two decades. For example, the E.U. has set a goal of providing 100 percent of its citizens across 27 countries access to electronic health records by 2030. Several countries are well on their way to this achievement, including Belgium, Denmark, Estonia, Lithuania, and Poland. Outside the E.U., countries such as Israel and Singapore also have very advanced systems, and after a rocky start, Australia’s My Health Record system seems to have found its footing. The United Kingdom was hoping to be a global leader in adopting interoperable health information systems, but a disastrous implementation of its National Programme for IT ended in 2011 after nine years and more than £10 billion. Canada, China, India, and Japan also have EHR system initiatives in place at varying levels of maturity. However, it will likely be years before they achieve the same capabilities found in leading digital-health countries.

EHRs Need a Systems-Engineering Approach

When it comes to embracing automation, the health care industry has historically moved at a snail’s pace, and when it does move, money rarely goes to IT automation first. Market forces alone were unlikely to speed up EHR adoption.

Even in the early 2000s, health care experts and government officials were confident that digitalization could reduce total health spending by 10 percent while improving patient care. In a highly influential 2005 study, the RAND Corp. estimated that adopting EHR systems in hospitals and physician offices would cost $98 billion and $17 billion, respectively. The report also estimated that these entities would save at least $77 billion a year after moving to digital records. A highly cited paper in HealthAffairs from 2005 also claimed that small physician practices could recoup their EHR system investments in 2.5 years and profit handsomely thereafter.

Moreover, RAND claimed that a fully automated health care system could save the United States $346 billion per year. When Michael O. Leavitt, then the Secretary of Health and Human Services, looked at the projected savings, he saw them as “a key part of saving Medicare.” As baby boomers began retiring en masse in the early 2010s, cutting health care costs was also a political imperative since Medicare funding was projected to run out by 2020.

Some doubted the EHR revolution’s health care improvement and cost reduction claims or that it could be achieved within 20 years. The Congressional Budget Office argued that the RAND report overstated the potential costs and benefits of EHR systems and ignored peer-reviewed studies that contradicted it. The CBO also pointed out that RAND assumed EHR systems would be widely adopted and effectively used, which presumed that effective tools already existed when very few were commercially available. There was also skepticism about whether replicating the benefits for early adopters of EHR systems—who spent decades perfecting their systems—was possible once the five-year period of governmental EHR adoption incentives ended.

Even former House Speaker Newt Gingrich, a strong advocate for electronic health record systems, warned that health care was “30 times more difficult to fix than national defense.” The extent of the problem was one reason the 2005 National Academy of Sciences report, Building a Better Delivery System: A New Engineering / Health Care Partnership, forcefully and repeatedly called for innovative systems-engineering approaches to be developed and applied across the entire health care delivery process. The scale, complexity, and extremely short time frame for attempting to transform the totality of the health care environment demanded a robust “system of systems” engineering approach.

This was especially true because of the potential human impacts of automation on health care professionals and patients. Researchers warned that ignoring the interplay of computer-mediated work and existing sociotechnical conditions in health care practices would result in unexpected, unintentional, and undesirable consequences.

Additionally, without standard mechanisms for making EHR systems interoperable, many potential benefits would not materialize. As David Brailer, the first National Health Information Technology Coordinator, stated, “Unless interoperability is achieved…potential clinical and economic benefits won’t be realized, and we will not move closer to badly needed health care reform in the U.S.”

HITECH’s Broken Promises and Unforeseen Consequences

A few years later, policymakers in the Obama administration thought it was unrealistic to prioritize interoperability. They feared that defining interoperability standards too early would lock the health industry into outdated information-sharing approaches. Further, no existing health care business model supported interoperability, and a strong business model actively discouraged providers from sharing information. If patient information could easily shift to another provider, for example, what incentive does the provider have to readily share it?

Instead, policymakers decided to have EHR systems adopted as widely and quickly as possible during the five years of HITECH incentives. Tackling interoperability would come later. The government’s unofficial operational mantra was that EHR systems needed to become operational before they could become interoperable.

“Researchers have found that doctors spend between 3.5 and 6 hours a day (4.5 hours on average) filling out their digital health records.”

Existing EHR system vendors, making $2 billion annually at the time, viewed the HITECH incentive program as a once-in-a-lifetime opportunity to increase market share and revenue streams. Like fresh chum to hungry sharks, the subsidy money attracted a host of new EHR technology entrants eager for a piece of the action. The resulting feeding frenzy pitted an IT-naïve health care industry rushing to adopt EHR systems against a horde of vendors willing to promise (almost) anything to make a sale.

A few years into the HITECH program, a 2013 report by RAND wryly observed the market distortion caused by what amounted to an EHR adoption mandate: “We found that (EHR system) usability represents a relatively new, unique, and vexing challenge to physician professional satisfaction. Few other service industries are exposed to universal and substantial incentives to adopt such a specific, highly regulated form of technology, which has, as our findings suggest, not yet matured.”

In addition to forcing health care providers to choose quickly among a host of immature EHR solutions, the HITECH program completely undercut the warnings raised about the need for systems engineering or considering the impact of automation on very human-centered aspects of health care delivery by professionals. Sadly, the lack of attention to these concerns affects current EHR systems.

Today, studies like that conducted by Stanford Medicine indicate that nearly 70 percent of health care professionals express some level of satisfaction with their electronic health record system and that more than 60 percent think EHR systems have improved patient care. Electronic prescribing has also been seen as a general success, with the risk of medication errors and adverse drug events reduced.

However, professional satisfaction with EHRs runs shallow. The poor usability of EHR systems surfaced early in the HITECH program and continues as a main driver for physician dissatisfaction. The Stanford Medicine study, for example, also reported that 54 percent of physicians polled felt their EHR systems detracted from their professional satisfaction, and 59 percent felt it required a complete overhaul.


“What we’ve essentially done is created 24/7/365 access to clinicians with no economic model for that: The doctors don’t get paid.” —Robert Wachter, chair of the department of medicine at the University of California, San Francisco


Poor EHR system usability results in laborious and low-value data entry, obstacles to face-to-face patient communication, and information overload, where clinicians have to wade through an excess of irrelevant data when treating a patient. A 2019 study in Mayo Clinic Proceedings comparing EHR system usability to other IT products like Google Search, Microsoft Word, and Amazon placed EHR products in the bottom 10 percent.

Electronic health record systems were supposed to increase provider productivity, but for many clinicians, their EHRs are productivity vampires instead. Researchers have found that doctors spend between 3.5 and 6 hours a day (4.5 hours on average) filling out their patients’ digital health records, with an Annals of Internal Medicine study reporting that doctors in outpatient settings spend only 27 percent of their work time face-to-face with their patients.

In those visits, patients often complain that their doctors spend too much time staring at their computers. They are not likely wrong, as nearly 70 percent of doctors in 2018 felt that EHRs took valuable time away from their patients. To address this issue, health care providers employ more than 100,000 medical scribes today—or about one for every 10 U.S. physicians—to record documentation during office visits, but this only highlights the unacceptable usability problem.

Furthermore, physicians are spending more time dealing with their EHRs because the government, health care managers, and insurance companies are requesting more patient information regarding billing, quality measures, and compliance data. Patient notes are twice as long as they were 10 years ago. This is not surprising, as EHR systems so far have not complemented clinician work as much as directed it.

“A phenomenon of the productivity vampire is that the goalposts get moved,” explains University of Michigan professor emeritus John Leslie King, who coined the phrase “productivity vampire.” King, a student of system–human interactions, continues, “With the ability to better track health care activities, more government and insurance companies are going to ask for that information in order for providers to get paid.”



Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, believes that EHRs “became an enabler of corporate control and outside entity control.”

“It became a way that entities that cared about what the doctor was doing could now look to see in real time what the doctor was doing, and then influence what the doctor was doing and even constrain it,” Wachter says.

Federal law mandates that patients have access to their medical information contained in EHR systems—which is great, says Wachter, but this also adds to clinician workloads, as patients now feel free to pepper their physicians with emails and messages about the information.

“What we’ve essentially done is created 24/7/365 access to clinicians with no economic model for that: The doctors don’t get paid,” Wachter says. His doctors’ biggest complaints are that their EHR system has overloaded email inboxes with patient inquiries. Some doctors report that their inboxes have become the equivalent of a second set of patients.

It is not so much a problem with the electronic information system design per se, notes Wachter, but with EHR systems that “meet the payment system and the workflow system in ways that we really did not think about.” EHRs also promised to reduce stress among health care professionals. Numerous studies have found, however, that EHR systems worsen clinician burnout, with Stanford Medicine finding that 71 percent of physicians felt the systems contributed to burnout.


Half of U.S. physicians are experiencing burnout, with 63 percent reporting at least one manifestation in 2022. The average physician works 53 hours weekly (19 hours more than the general population) and spends over 4 hours daily on documentation.


Clinical burnout is lowest among clinicians with highly usable EHR systems or in specialties with the least interaction with their EHR systems, such as surgeons and radiologists. Physicians who make, on average, 4,000 EHR system clicks per shift, like emergency room doctors, report the highest levels of burnout.

Aggravating the situation, notes Wachter, was “that decision support is so rudimentary…which means that the doctors feel like they’re spending all this time entering data in the machine, (but) getting relatively little useful intelligence out of it.”

Poorly designed information systems can also compromise patient safety. Evidence suggests that EHR systems with unacceptable usability contribute to low-quality patient care and reduce the likelihood of catching medical errors. According to a study funded by the U.S. Agency for Healthcare Research and Quality, EHR system issues were involved in the majority of malpractice claims over a six-and-a-half-year period of study ending in 2021. Sadly, the situation has not changed today.

Interoperability, Cybersecurity Bite Back

EHR system interoperability closely follows poor EHR system usability as a driver of health care provider dissatisfaction. Recent data from the Assistant Secretary for Technology Policy / Office of the National Coordinator for Health Information Technology indicates that 70 percent of hospitals sometimes exchange patient data, though only 43 percent claim they regularly do. System-affiliated hospitals share the most information, while independent and small hospitals share the least.

Exchanging information using the same EHR system helps. Wachter observes that interoperability among similar EHR systems is straightforward, but across different EHR systems, he says, “it is still relatively weak.”

However, even if two hospitals use the same EHR vendor, communicating patient data can be difficult if each hospital’s system is customized. Studies indicate that patient mismatch rates can be as high as 50 percent, even in practices using the same EHR vendor. This often leads to duplicate patient records that lack vital patient information, which can result in avoidable patient injuries and deaths.
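The fragility of demographic matching is easy to illustrate. The sketch below is a simplified illustration (not any vendor's actual algorithm; the names, weights, and threshold are assumptions for demonstration): it scores two records on name, birth date, and ZIP code, and a nickname plus one transposed date falls below a typical auto-match threshold, so the same person ends up as a duplicate record.

```python
# Simplified illustration of demographic patient matching (not any
# vendor's actual algorithm): score two records field by field.
from difflib import SequenceMatcher

def field_score(a: str, b: str) -> float:
    """Fuzzy similarity between two normalized field values, 0.0-1.0."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def match_score(rec1: dict, rec2: dict) -> float:
    """Weighted average similarity across demographic fields."""
    weights = {"name": 0.4, "dob": 0.4, "zip": 0.2}  # illustrative weights
    return sum(w * field_score(rec1[f], rec2[f]) for f, w in weights.items())

# Hypothetical records for the same person: nickname plus a
# day/month transposition in the birth date.
a = {"name": "Katherine Smith", "dob": "1961-03-04", "zip": "48109"}
b = {"name": "Kathy Smith",     "dob": "1961-04-03", "zip": "48109"}

score = match_score(a, b)
print(f"{score:.2f}")  # falls below a typical 0.95 auto-match threshold,
                       # so the two records are treated as different patients
```

Small, everyday data-entry variations like these are why mismatch rates stay high without a shared identifier.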

Sharing information tied to a unique patient identifier (UPI), as countries with advanced EHRs such as Estonia, Israel, and Singapore do, makes health information interoperability much easier, says Christina Grimes, digital health strategist for the Healthcare Information and Management Systems Society (HIMSS).

But in the United States, “Congress has forbidden it since 1998” and steadfastly resists allowing for UPIs, she notes.

Using a single-payer health insurance system, like most other countries with advanced EHR systems, would also make sharing patient information easier, decrease time spent on EHRs, and reduce clinician burnout, but that is also a nonstarter in the United States for the foreseeable future.

Interoperability is even more challenging because an average hospital uses 10 different EHR vendors internally to support more than a dozen different health care functions, and an average health system has 16 different EHR vendors when affiliated providers are included. Grimes notes that only a small percentage of health care systems use fully integrated EHR systems that cover all functions.

EHR system adoption also promised to bend the national health care cost curve, but costs continue to rise. The United States spent an estimated $4.8 trillion on health care in 2023, or 17.6 percent of GDP. While there seems to be general agreement that EHRs can help with cost savings, no rigorous quantitative studies at the national level show the tens of billions of dollars in savings that RAND loudly proclaimed in 2005.

Indeed, studies have shown that health care providers, especially those in rural areas, have had difficulty saving money by using EHR systems. A recent study, for example, points out that rural hospitals do not benefit as much from EHR systems as urban hospitals in terms of reducing operating costs. With 700 rural hospitals at risk of closing under severe financial pressure, investing in EHR systems has not proved to be the financial panacea many hoped it would be.

Cybersecurity is a major cost not included in the 2005 RAND study. Even though there were warnings that cybersecurity was being given short shrift, vendors, providers, and policymakers paid scant attention to the cybersecurity implications of EHR systems, especially the multitude of new cyberthreat access points that would be created and potentially exploited. Tom Leary, senior vice president and head of government relations at HIMSS, points out the painfully obvious fact that “security was an afterthought. You have to make sure that security by design is involved from the beginning, so we’re still paying for the decision not to invest in security.”

From 2009 to 2023, a total of 5,887 health care breaches of 500 records or more were reported to the U.S. Department of Health and Human Services Office for Civil Rights, exposing some 520 million health care records. Health care breaches have also led to widespread disruption to medical care in various hospital systems, sometimes for over a month.


In 2024, the average cost of a health care data breach was $9.97 million. The cost of these breaches will soon surpass the $27 billion ($44.5 billion in 2024 dollars) provided under HITECH to adopt EHRs.

The year 2025 may see the first major revision since 2013 to the Health Insurance Portability and Accountability Act (HIPAA) Security Rule, which outlines how electronic protected health information must be secured. The proposed rule will likely force health care providers and their EHR vendors to make cybersecurity investment a much higher priority.

$100 Billion Spent on Health Care IT: Was the Juice Worth the (Mega) Squeeze?

The U.S. health care industry has spent more than $100 billion on information technology, but few providers are fully meeting President Bush’s vision of a nation of seamlessly interoperable and secure digital health records.

Many past government policymakers now admit they failed to understand the complex business dynamics, technical scale, complexity, or time needed to create a nationwide system of usable, interoperable EHR systems. The entire process lacked systems-engineering thinking. As Seema Verma, former administrator of the Centers for Medicare and Medicaid Services, told Fortune, “We didn’t think about how all these systems connect with one another. That was the real missing piece.”

Over the past eight years, successive administrations and congresses have taken actions to rectify these early oversights. The 21st Century Cures Act, passed in 2016, barred EHR system vendors and providers from blocking the sharing of patient data and spurred them to start working in earnest on a trusted health information exchange. The Cures Act mandated standardized application programming interfaces (APIs) to promote interoperability. In 2022, the Trusted Exchange Framework and Common Agreement (TEFCA) was published, which lays out technical principles for securely exchanging health information.
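In practice, the APIs mandated under the Cures Act are based on HL7's FHIR standard, which exposes clinical data as REST resources. A minimal sketch of what that standardization buys (the server URL is hypothetical, and real servers additionally require OAuth 2.0 authorization): any conforming system can be queried with the same Patient search and returns the same Bundle shape.

```python
# Minimal sketch of a FHIR-style Patient search, per the standardized
# APIs mandated by the Cures Act. Base URL is hypothetical; real
# endpoints require authorization, which is omitted here.
import json
from urllib.parse import urlencode

BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

def patient_search_url(family: str, birthdate: str) -> str:
    """Build a standard FHIR search: GET [base]/Patient?family=...&birthdate=..."""
    return f"{BASE}/Patient?" + urlencode({"family": family, "birthdate": birthdate})

def patient_names(bundle_json: str) -> list:
    """Extract family names from a FHIR searchset Bundle."""
    bundle = json.loads(bundle_json)
    return [entry["resource"]["name"][0]["family"]
            for entry in bundle.get("entry", [])]

# A FHIR server answers searches with a Bundle resource shaped like this:
sample = json.dumps({
    "resourceType": "Bundle", "type": "searchset",
    "entry": [{"resource": {"resourceType": "Patient",
                            "name": [{"family": "Smith", "given": ["Katherine"]}]}}],
})

print(patient_search_url("Smith", "1961-03-04"))
print(patient_names(sample))  # ['Smith']
```

Because the search parameters and the Bundle structure are fixed by the standard rather than by each vendor, the same client code can, in principle, talk to any certified EHR.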

“The EHR venture has proved troublesome thus far. The trouble is far from over.” —John Leslie King, University of Michigan professor emeritus

In late 2023, the first Qualified Health Information Networks (QHINs) were approved to begin supporting the exchange of data governed by TEFCA, and in 2024, updates were made to the APIs to make information interoperability easier. These seven QHINs allow thousands of health providers to more easily exchange information. Combined with the emerging consolidation among hospital systems around three EHR vendors (Epic Systems Corp., Oracle Health, and Meditech), this should improve interoperability over the next decade.

These changes, says HIMSS’s Tom Leary, will help give “all patients access to their data in whatever format they want with limited barriers. The health care environment is starting to become patient-centric now. So, as a patient, I should soon be able to go out to any of my healthcare providers to really get that information.”

HIMSS’s Christina Grimes adds that the patient-centric change is the continuing consolidation of EHR system portals. “Patients really want one portal to interact with instead of the number they have today,” she says.

In 2024, the Assistant Secretary for Technology Policy / Office of the National Coordinator for Health IT, the U.S. government department responsible for overseeing electronic health systems’ adoption and standards, was reorganized to focus more on cybersecurity and advanced technology like AI. In addition to the proposed HIPAA security requirements, Congress is also considering new laws to mandate better cybersecurity. There is hope that AI can help overcome EHR system usability issues, especially clinician burnout and interoperability issues like patient matching.

Wachter states that the new AI scribes are showing real promise. “The way it works is that I can now have a conversation with my patient and look the patient in the eye. I’m actually focusing on them and not my keyboard. And then a note, formatted correctly, just magically appears. Almost ironically, this new set of AI technologies may well solve some of the problems that the last technology created.”

Whether these technologies live up to the hype remains to be seen. More concerning is whether AI will exacerbate the rampant feeling among providers that they have become tools of their tools and not masters of them.

As EHR systems become more usable, interoperable, and patient-friendly, the underlying foundations of medical care can finally be addressed. High-quality evidence backs only about 10 percent of the care patients receive today. One of the great potentials of digitizing health records is to discover what treatments work best and why, and then distribute that information to the health care community. While this is an active research area, more research and funding are needed.

Twenty years ago, Tom Conrad, himself a senior computer scientist, told me he was skeptical that having more information necessarily meant that better medical decisions would automatically be made. He pointed out that when doctors’ earnings are tied to the number of patients they see, there is a trade-off between the better care that EHRs enable and the sheer amount of time required to review a more complete medical record. Today, the trade-off is not in the patients’ or doctors’ favor. Whether it can ever be balanced is one of the great unknowns.

Obviously, no one wants to go back to paper records. However, as John Leslie King says, “The way forward involves multiple moving targets due to advances in technology, care, and administration. Most EHR vendors are moving as fast as they can.”

However, it would be foolish to think it will be smooth sailing from here on, King says: “The EHR venture has proved troublesome thus far. The trouble is far from over.”

Reference: https://ift.tt/6p50cDW
