Thursday, February 5, 2026

“Quantum Twins” Simulate What Supercomputers Can’t




While quantum computers continue their slow grind toward usefulness, some researchers are pursuing a different approach—analog quantum simulation. This path doesn’t offer complete control of single bits of quantum information, known as qubits—a quantum simulator is not a universal quantum computer. Instead, quantum simulators directly mimic complex, difficult-to-access systems, such as individual molecules, chemical reactions, or novel materials. What analog quantum simulation lacks in flexibility, it makes up for in feasibility: quantum simulators are ready now.

“Instead of using qubits, as you would typically in a quantum computer, we just directly encode the problem into the geometry and structure of the array itself,” says Sam Gorman, quantum systems engineering lead at Sydney-based start-up Silicon Quantum Computing.

Yesterday, Silicon Quantum Computing unveiled its Quantum Twins product, a silicon quantum simulator that is now available to customers through direct contract. Simultaneously, the team demonstrated that their device, made up of 15,000 quantum dots, can simulate an often-studied transition of a material from an insulator to a metal, and all the states in between. They published their work this week in the journal Nature.

“We can do things now that we think nobody else in the world can do,” Gorman says.

The powerful process

Though the product announcement came yesterday, the team at Silicon Quantum Computing has been developing its Precision Atom Qubit Manufacturing process since the startup’s founding in 2017, building on academic work that the company’s founder, Michelle Simmons, has led for more than 25 years. The underlying technology is a manufacturing process for placing single phosphorus atoms in silicon with sub-nanometer precision.

“We have a 38-stage process,” Simmons says, for patterning phosphorus atoms into silicon. The process starts with a silicon substrate, which gets coated with a layer of hydrogen. Then, using a scanning-tunneling microscope, individual hydrogen atoms are knocked off the surface, exposing the silicon underneath. The surface is then dosed with phosphine gas, which adsorbs only in places where the silicon is exposed. With the help of a low-temperature thermal anneal, the phosphorus atoms are then incorporated into the silicon crystal. Finally, layers of silicon are grown on top.

“It’s done in ultra-high vacuum. So it’s a very pure, very clean system,” Simmons says. “It’s a fully monolithic chip that we make with that sub-nanometer precision. In 2014, we figured out how to make markers in the chip so that we can then come back and find where we put the atoms within the device to make contacts. Those contacts are then made at the same length scale as the atoms and dots.”

Though the team is able to place single atoms of phosphorus, they use clusters of ten to fifty such atoms to make up a so-called register for these application-specific chips. These registers act like quantum dots, preserving quantum properties of the individual atoms. The registers are controlled by a gate voltage from contacts placed atop the chip, and interactions between registers can be tuned by precisely controlling the distances between them.

While the company is also pursuing more traditional quantum computing using this technology, they realized they already had the capacity to do useful simulations in the analog domain by putting thousands of registers on a single chip and measuring global properties, without controlling individual qubits.

“The thing that’s quite unique is we can do that very quickly,” Simmons says. “We put 250,000 of these registers [on a chip] in eight hours, and we can turn a chip design around in a week.”

What to simulate

Back in 2022, the team at Silicon Quantum Computing used a previous version of this same technology to simulate a molecule of polyacetylene. The chemical is made up of carbon atoms with alternating single and double bonds, and, crucially, its conductivity changes drastically depending on whether the chain is cut on a single or a double bond. To accurately simulate single and double carbon bonds, the team had to control the spacing between their registers to sub-nanometer precision. By tuning the gate voltages of each quantum dot, the researchers reproduced the jump in conductivity.
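To see why the bond alternation matters, here is a minimal tight-binding sketch in the spirit of the Su-Schrieffer-Heeger picture often used for polyacetylene. The ten-site chain and the hopping strengths are illustrative assumptions, not Silicon Quantum Computing’s device parameters.

```python
import numpy as np

# Minimal tight-binding chain with alternating "single" and "double" bond
# strengths (the textbook SSH picture of polyacetylene). The hopping values
# are illustrative only.
def chain_hamiltonian(n_sites, t_single=1.0, t_double=1.5, start_with_double=True):
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        double = (i % 2 == 0) == start_with_double
        t = t_double if double else t_single
        H[i, i + 1] = H[i + 1, i] = -t
    return H

# Cutting the chain on a single bond versus a double bond changes which bond
# pattern terminates the chain, and with it the low-energy spectrum.
for label, start in [("ends on double bonds", True), ("ends on single bonds", False)]:
    energies = np.linalg.eigvalsh(chain_hamiltonian(10, start_with_double=start))
    print(label, np.round(energies[4:6], 3))  # the two states nearest zero energy
```

When the chain terminates on the weaker bonds, two states appear in the middle of the energy gap, the kind of qualitative change in electronic structure that shows up as the conductivity jump described above.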

Now, they’ve demonstrated the quantum twin technology on a much larger problem—the metal-insulator transition of a two-dimensional material. Where the polyacetylene molecule required ten registers, the new model used 15,000. The metal-insulator model is important because, in most cases, it cannot be simulated on a classical computer. At the extremes—in the fully metal or fully insulating phase—the physics can be simplified and made accessible to classical computing. But in the murky intermediate regime, the full quantum complexity of each electron plays a role, and the problem is classically intractable. “That is the part which is challenging for classical computing. But we can actually put our system into this regime quite easily,” Gorman says.
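A back-of-the-envelope sketch shows why the intermediate regime overwhelms exact classical simulation. The counting below assumes a generic lattice of sites that can each be empty, singly occupied with either spin, or doubly occupied (four local states per site); it is illustrative arithmetic, not the company’s specific model.

```python
import math

# Exactly representing a quantum state of N such sites requires storing on the
# order of 4**N complex amplitudes.
for n_sites in (10, 100, 15_000):
    digits = n_sites * math.log10(4)
    print(f"{n_sites:>6} sites -> roughly 10^{digits:,.0f} amplitudes")
```

Even at 100 sites the state vector already dwarfs any conceivable classical memory, which is why approximations work only near the extremes of the transition.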

The metal-insulator model was a proof of concept. Now, Gorman says, the team can design a quantum twin for almost any two-dimensional problem.

“Now that we’ve demonstrated that the device is behaving as we predict, we’re looking at high-impact issues or outstanding problems,” says Gorman. The team plans to investigate things like unconventional superconductivity, the origins of magnetism, and materials interfaces such as those that occur in batteries.

Although the initial applications will most likely be in the scientific domain, Simmons is hopeful that Quantum Twins will eventually be useful for industrial applications such as drug discovery. “If you look at different drugs, they’re actually very similar to polyacetylene. They’re carbon chains, and they have functional groups. So, understanding how to map it [onto our simulator] is a unique challenge. But that’s definitely an area we’re going to focus on,” she says. “We’re excited at the potential possibilities.”

Reference: https://ift.tt/6AZCFn4

Increase of AI bots on the Internet sparks arms race


The viral virtual assistant OpenClaw—formerly known as Moltbot, and before that Clawdbot—is a symbol of a broader revolution underway that could fundamentally alter how the Internet functions. Instead of a place primarily inhabited by humans, the web may very soon be dominated by autonomous AI bots.

A new report measuring bot activity on the web, as well as related data shared with WIRED by the Internet infrastructure company Akamai, shows that AI bots already account for a meaningful share of web traffic. The findings also shed light on an increasingly sophisticated arms race unfolding as bots deploy clever tactics to bypass website defenses meant to keep them out.

“The majority of the Internet is going to be bot traffic in the future,” says Toshit Panigrahi, cofounder and CEO of TollBit, a company that tracks web-scraping activity and published the new report. “It’s not just a copyright problem, there is a new visitor emerging on the Internet.”


Reference : https://ift.tt/54kFJRu

Wednesday, February 4, 2026

Microsoft releases urgent Office patch. Russian-state hackers pounce.


Russian-state hackers wasted no time exploiting a critical Microsoft Office vulnerability that allowed them to compromise devices inside diplomatic, maritime, and transport organizations in more than half a dozen countries, researchers said Wednesday.

The threat group, tracked under names including APT28, Fancy Bear, Sednit, Forest Blizzard, and Sofacy, pounced on the vulnerability, tracked as CVE-2026-21509, less than 48 hours after Microsoft released an urgent, unscheduled security update late last month, the researchers said. After reverse-engineering the patch, group members wrote an advanced exploit that installed one of two never-before-seen backdoor implants.

Stealth, speed, and precision

The entire campaign was designed to make the compromise undetectable to endpoint protection. Besides being novel, the exploits and payloads were encrypted and ran in memory, making their malice hard to spot. The initial infection vector was email sent from previously compromised government accounts in multiple countries, addresses likely familiar to the targeted recipients. Command-and-control channels were hosted on legitimate cloud services that are typically allow-listed inside sensitive networks.


Reference : https://ift.tt/8CJ4pxk

Should AI chatbots have ads? Anthropic says no.


On Wednesday, Anthropic announced that its AI chatbot, Claude, will remain free of advertisements, drawing a sharp line between itself and rival OpenAI, which began testing ads in a low-cost tier of ChatGPT last month. The announcement comes alongside a Super Bowl ad campaign that mocks AI assistants that interrupt personal conversations with product pitches.

"There are many good places for advertising. A conversation with Claude is not one of them," Anthropic wrote in a blog post. The company argued that including ads in AI conversations would be "incompatible" with what it wants Claude to be: "a genuinely helpful assistant for work and for deep thinking."

The stance contrasts with OpenAI's January announcement that it would begin testing banner ads for free users and ChatGPT Go subscribers in the US. OpenAI said those ads would appear at the bottom of responses and would not influence the chatbot's actual answers. Paid subscribers on Plus, Pro, Business, and Enterprise tiers will not see ads on ChatGPT.


Reference : https://ift.tt/24kN6gv

Milan-Cortina Winter Olympics Debut Next-Generation Sports Smarts




From 6 to 22 February, the 2026 Winter Olympics in Milan-Cortina d’Ampezzo, Italy, will feature not just the world’s top winter athletes but also some of today’s most advanced sports technologies. At the first Cortina Olympics, in 1956, the Swiss company Omega—based in Biel/Bienne—introduced electronic ski starting gates and launched the first automated timing tech of its kind.

At this year’s Olympics, Swiss Timing, sister company to Omega under the parent Swatch Group, unveils a new generation of motion analysis and computer vision technology. The new technologies on offer include photofinish cameras that capture up to 40,000 images per second.

“We work very closely with athletes,” says Swiss Timing CEO Alain Zobrist, who has overseen Olympic timekeeping since the 2006 winter games in Torino. “They are the primary customers of our technology and services, and they need to understand how our systems work in order to trust them.”

Live data capture of a figure skater’s performance shows a 3D rendering of the athlete, jump heights, and more. Using high-resolution cameras and AI algorithms tuned to skaters’ routines, Milan-Cortina Olympic officials expect new figure skating tech to be a key highlight of the games. Omega

Figure Skating Tech Completes the Rotation

Figure skating, the Winter Olympics’ biggest TV draw, is receiving a substantial upgrade at Milano Cortina 2026.

Fourteen 8K resolution cameras positioned around the rink will capture every skater’s movement. “We use proprietary software to interpret the images and visualize athlete movement in a 3D model,” says Zobrist. “AI processes the data so we can track trajectory, position, and movement across all three axes—X, Y, and Z”.
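As a rough illustration of the geometry behind multi-camera tracking, the sketch below triangulates a single 3D point from two calibrated views using the standard linear (DLT) method. The camera matrices and the joint position are made-up numbers; Swiss Timing’s actual calibration and proprietary pose-estimation models are not shown here, only the underlying principle.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its pixel coordinates in two calibrated views."""
    u1, v1 = pt1
    u2, v2 = pt2
    # Each view contributes two linear constraints on the homogeneous 3D point X;
    # stack them and solve A X = 0 by SVD (direct linear transform).
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Two toy cameras sharing the same intrinsics; the second is shifted 2 m along x.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])

point = np.array([0.3, -0.4, 5.0, 1.0])  # a skater's joint, 5 m from camera 1
project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
print(triangulate(P1, P2, project(P1, point), project(P2, point)))  # ~[0.3, -0.4, 5.0]
```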

The system measures jump heights, air times, and landing speeds in real time, producing heat maps and graphic overlays that break down each program—all instantaneously. “The time it takes for us to measure the data, until we show a matrix on TV with a graphic, this whole chain needs to take less than 1/10 of a second,” Zobrist says.


A range of different AI models helps the broadcasters and commentators process each skater’s every move on the ice.

“There is an AI that helps our computer vision system do pose estimation,” he says. “So we have a camera that is filming what is happening, and an AI that helps the camera understand what it’s looking at. And then there is a second type of AI, which is more similar to a large language model that makes sense of the data that we collect”.

Among the features Swiss Timing’s new systems provide is blade-angle detection, which gives judges precise data to augment their technical and aesthetic decisions. Zobrist says future versions will also determine whether a given rotation is complete: “If the rotation is 355 degrees, there is going to be a deduction,” he says.

This builds on technology Omega unveiled at the 2024 Paris Olympics for diving, where cameras measured distances between a diver’s head and the board to help judges assess points and penalties to be awarded.

A three-dimensional rendering of a ski jumper preparing for dismount. At the 2026 Winter Olympics, ski jumping will feature both camera-based and sensor-based technologies to make the aerial experience more immediate. Omega

Ski Jumping Tech Finds Make-or-Break Moments

Unlike figure skating’s purely camera-based approach, ski jumping also relies on physical sensors.

“In ski jumping, we use a small, lightweight sensor attached to each ski, one sensor per ski, not on the athlete’s body,” Zobrist says. The sensors broadcast data on a skier’s speed, acceleration, and positioning in the air. The technology also correlates performance data with wind conditions, revealing environmental factors’ influence on each jump.

High-speed cameras also track each ski jumper. Then, a stroboscopic camera provides body position time-lapses throughout the jump.

“The first 20 to 30 meters after takeoff are crucial as athletes move into a V position and lean forward,” Zobrist says. “And both the timing and precision of this movement strongly influence performance.”

The system reveals biomechanical characteristics in real time, he adds, showing how athletes position their bodies during every moment of the takeoff process. The most common mistake in flight position, over-rotation or under-rotation, can now be detailed and diagnosed with precision on every jump.

Bobsleigh: Pushing the Line on the Photo Finish

This year’s Olympics will also feature a “virtual photo finish,” providing composite images that compare when sleds from different runs cross the finish line.

Omega’s cameras will provide virtual photo finishes at the 2026 Winter Olympics. Omega

“We virtually build a photo finish that shows different sleds from different runs on a single visual reference,” says Zobrist.

After each run, composite images show the margins separating performances. However, more tried-and-true technology still generates official results. Official times, Zobrist says, still come courtesy of photoelectric cells, devices that emit light beams across the finish line and stop the clock when broken. The company offers its virtual photo finish, by contrast, as a visualization tool for spectators and commentators.

In bobsleigh, as in every timed Winter Olympic event, the line between triumph and heartbreak is sometimes measured in milliseconds or even finer intervals. Such precision, Zobrist says, comes from Omega’s Quantum Timer.

“We can measure time to the millionth of a second, so 6 digits after the comma, with a deviation of about 23 nanoseconds over 24 hours,” Zobrist explained. “These devices are constantly calibrated and used across all timed sports.”
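Taking the quoted figures at face value, a quick bit of arithmetic (illustrative only) puts that drift in context against the millisecond margins that decide medals.

```python
# 23 nanoseconds of drift over 24 hours, expressed as a fractional stability,
# and compared against a 1-millisecond gap between two runs.
drift_s, day_s, margin_s = 23e-9, 24 * 3600, 1e-3
print(f"fractional drift over a day : {drift_s / day_s:.1e}")    # ~2.7e-13
print(f"drift as share of a 1 ms gap: {drift_s / margin_s:.1e}")  # ~2.3e-5
```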

Reference: https://ift.tt/UMLQgD4

So yeah, I vibe-coded a log colorizer—and I feel good about it


I can't code.

I know, I know—these days, that sounds like an excuse. Anyone can code, right?! Grab some tutorials, maybe an O'Reilly book, download an example project, and jump in. It's just a matter of learning how to break your project into small steps that you can make the computer do, then memorizing a bit of syntax. Nothing about that is hard!

Perhaps you can sense my sarcasm (and sympathize with my lack of time to learn one more technical skill).


Reference : https://ift.tt/wp5e1gA

Tuesday, February 3, 2026

Nvidia's $100 billion OpenAI deal has seemingly vanished


In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI's AI infrastructure. At the time, the companies said they expected to finalize details "in the coming weeks." Five months later, no deal has closed, Nvidia's CEO now says the $100 billion figure was "never a commitment," and Reuters reports that OpenAI has been quietly seeking alternatives to Nvidia chips since last year.

Reuters also wrote that OpenAI is unsatisfied with the speed of some Nvidia chips for inference tasks, citing eight sources familiar with the matter. Inference is the process by which a trained AI model generates responses to user queries. According to the report, the issue became apparent in OpenAI's Codex, an AI code-generation tool. OpenAI staff reportedly attributed some of Codex's performance limitations to Nvidia's GPU-based hardware.

After the Reuters story was published and Nvidia's stock price took a dive, Nvidia and OpenAI tried to smooth things over publicly. OpenAI CEO Sam Altman posted on X: "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don't get where all this insanity is coming from."


Reference : https://ift.tt/59DAvZx

Breaking Boundaries in Wireless Communication




This paper discusses how RF propagation simulations empower engineers to test numerous real-world use cases in far less time, and at lower costs, than in situ testing alone. Learn how simulations provide a powerful visual aid and offer valuable insights to improve the performance and design of body-worn wireless devices.

Download this free whitepaper now!

Reference: https://ift.tt/Uc5IdNm

Can AI Find Physics Beyond the Standard Model?




In 1930, a young physicist named Carl D. Anderson was tasked by his mentor with measuring the energies of cosmic rays—particles arriving at high speed from outer space. Anderson built an improved version of a cloud chamber, a device that visually records the trajectories of particles. In 1932, he saw evidence that confusingly combined the properties of protons and electrons. “A situation began to develop that had its awkward aspects,” he wrote many years after winning a Nobel Prize at the age of 31. Anderson had accidentally discovered antimatter.

Four years after his first discovery, he codiscovered another elementary particle, the muon. This one prompted one physicist to ask, “Who ordered that?”

Carl Anderson [top] sits beside the magnet cloud chamber he used to discover the positron. His cloud-chamber photograph [bottom] from 1932 shows the curved track of a positron, the first known antimatter particle. Caltech Archives & Special Collections

Over the decades since then, particle physicists have built increasingly sophisticated instruments of exploration. At the apex of these physics-finding machines sits the Large Hadron Collider, which in 2022 started its third operational run. This underground ring, 27 kilometers in circumference and straddling the border between France and Switzerland, was built to slam subatomic particles together at near light speed and test deep theories of the universe. Physicists from around the world turn to the LHC, hoping to find something new. They’re not sure what, but they hope to find it.

It’s the latest manifestation of a rich tradition. Throughout the history of science, new instruments have prompted hunts for the unexpected. Galileo Galilei built telescopes and found Jupiter’s moons. Antonie van Leeuwenhoek built microscopes and noticed “animalcules, very prettily a-moving.” And still today, people peer through lenses and pore through data in search of patterns they hadn’t hypothesized. Nature’s secrets don’t always come with spoilers, and so we gaze into the unknown, ready for anything.


But novel, fundamental aspects of the universe are growing less forthcoming. In a sense, we’ve plucked the lowest-hanging fruit. We know to a good approximation what the building blocks of matter are. The Standard Model of particle physics, which describes the currently known elementary particles, has been in place since the 1970s. Nature can still surprise us, but it typically requires larger or finer instruments, more detailed or expansive data, and faster or more flexible analysis tools.

Those analysis tools include a form of artificial intelligence (AI) called machine learning. Researchers train complex statistical models to find patterns in their data, patterns too subtle for human eyes to see, or too rare for a single human to encounter. At the LHC, which smashes together protons to create immense bursts of energy that decay into other short-lived particles of matter, a theorist might predict some new particle or interaction and describe what its signature would look like in the LHC data, often using a simulation to create synthetic data. Experimentalists would then collect petabytes of measurements and run a machine learning algorithm that compares them with the simulated data, looking for a match. Usually, they come up empty. But maybe new algorithms can peer into corners they haven’t considered.

A New Path for Particle Physics

“You’ve heard probably that there’s a crisis in particle physics,” says Tilman Plehn, a theoretical physicist at Heidelberg University, in Germany. At the LHC and other high-energy physics facilities around the world, the experimental results have failed to yield insights on new physics. “We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t,” Plehn says.




Gregor Kasieczka, a physicist at the University of Hamburg, in Germany, recalls the field’s enthusiasm when the LHC began running in 2008. Back then, he was a young graduate student and expected to see signs of supersymmetry, a theory predicting heavier versions of the known matter particles. The presumption was that “we turn on the LHC, and supersymmetry will jump in your face, and we’ll discover it in the first year or so,” he tells me. Eighteen years later, supersymmetry remains in the theoretical realm. “I think this level of exuberant optimism has somewhat gone.”


The result, Plehn says, is that models for all kinds of things have fallen in the face of data. “And I think we’re going on a different path now.”

That path involves a kind of machine learning called unsupervised learning. In unsupervised learning, you don’t teach the AI to recognize your specific prediction—signs of a particle with this mass and this charge. Instead, you might teach it to find anything out of the ordinary, anything interesting—which could indicate brand new physics. It’s the equivalent of looking with fresh eyes at a starry sky or a slide of pond scum. The problem is, how do you automate the search for something “interesting”?

Going Beyond the Standard Model

The Standard Model leaves many questions unanswered. Why do matter particles have the masses they do? Why do neutrinos have mass at all? Where is the particle for transmitting gravity, to match those for the other forces? Why do we see more matter than antimatter? Are there extra dimensions? What is dark matter—the invisible stuff that makes up most of the universe’s matter and that we assume to exist because of its gravitational effect on galaxies? Answering any of these questions could open the door to new physics, or fundamental discoveries beyond the Standard Model.



“Personally, I’m excited for portal models of dark sectors,” Kasieczka says, as if reading from a Marvel film script. He asks me to imagine a mirror copy of the Standard Model out there somewhere, sharing only one “portal” particle with the Standard Model we know and love. It’s as if this portal particle has a second secret family.

Kasieczka says that in the LHC’s third run, scientists are splitting their efforts roughly evenly between measuring more precisely what they know to exist and looking for what they don’t know to exist. In some cases, the former could enable the latter. The Standard Model predicts certain particle properties and the relationships between them. For example, it correctly predicted a property of the electron called the magnetic moment to about one part in a trillion. And precise measurements could turn up internal inconsistencies. “Then theorists can say, ‘Oh, if I introduce this new particle, it fixes this specific problem that you guys found. And this is how you look for this particle,’” Kasieczka says.



What’s more, the Standard Model has occasionally shown signs of cracks. Certain particles containing bottom quarks, for example, seem to decay into other particles in unexpected ratios. Plehn finds the bottom-quark incongruities intriguing. “Year after year, I feel they should go away, and they don’t. And nobody has a good explanation,” he says. “I wouldn’t even know who I would shout at”—the theorists or the experimentalists—“like, ‘Sort it out!’”

Exasperation isn’t exactly the right word for Plehn’s feelings, however. Physicists feel gratified when measurements reasonably agree with expectations, he says. “But I think deep down inside, we always hope that it looks unreasonable. Everybody always looks for the anomalous stuff. Everybody wants to see the standard explanation fail. First, it’s fame”—a chance for a Nobel—“but it’s also an intellectual challenge, right? You get excited when things don’t work in science.”

How Unsupervised AI Can Probe for New Physics

Now imagine you had a machine to find all the times things don’t work in science, to uncover all the anomalous stuff. That’s how researchers are using unsupervised learning. One day over ice cream, Plehn and a friend who works at the software company SAP began discussing autoencoders, one type of unsupervised learning algorithm. “He tells me that autoencoders are what they use in industry to see if a network was hacked,” Plehn remembers. “You have, say, a hundred computers, and they have network traffic. If the network traffic [to one computer] changes all of a sudden, the computer has been hacked, and they take it offline.”



Autoencoders are neural networks that start with an input—it could be an image of a cat, or the record of a computer’s network traffic—and compress it, like making a tiny JPEG or MP3 file, and then decompress it. Engineers train them to compress and decompress data so that the output matches the input as closely as possible. Eventually a network becomes very good at that task. But if the data includes some items that are relatively rare—such as white tigers, or hacked computers’ traffic—the network performs worse on these, because it has less practice with them. The difference between an input and its reconstruction therefore signals how anomalous that input is.
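A minimal sketch of that reconstruction-error idea, using a linear autoencoder (mathematically equivalent to PCA) on synthetic data rather than a deep network or real collider events:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" events: 5,000 points that lie close to a 2D plane inside a
# 10-dimensional feature space (stand-ins for energies, angles, and so on).
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(5000, 2)) @ basis + 0.05 * rng.normal(size=(5000, 10))

# A handful of "anomalies" drawn from a different distribution.
anomalies = rng.normal(size=(5, 10))

# A linear autoencoder amounts to PCA: "encode" by projecting onto the top
# principal components, "decode" by projecting back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                       # the 2-component bottleneck

def reconstruction_error(x):
    code = (x - mean) @ components.T      # compress
    recon = code @ components + mean      # decompress
    return np.linalg.norm(x - recon, axis=1)

print("typical error on normal events:", np.round(reconstruction_error(normal).mean(), 3))
print("errors on anomalous events    :", np.round(reconstruction_error(anomalies), 3))
```

Events that resemble the bulk of the training data reconstruct almost perfectly, while the off-distribution events come back distorted; the size of that error is the anomaly score.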

“This friend of mine said, ‘You can use exactly our software, right?’” Plehn remembers. “‘It’s exactly the same question. Replace computers with particles.’” The two imagined feeding the autoencoder signatures of particles from a collider and asking: Are any of these particles not like the others? Plehn continues: “And then we wrote up a joint grant proposal.”

It’s not a given that AI will find new physics. Even learning what counts as interesting is a daunting hurdle. Beginning in the 1800s, men in lab coats delegated data processing to women, whom they saw as diligent and detail oriented. Women annotated photos of stars, and they acted as “computers.” In the 1950s, women were trained to scan bubble chambers, which recorded particle trajectories as lines of tiny bubbles in fluid. Physicists didn’t explain to them the theory behind the events, only what to look for based on lists of rules.

But, as the Harvard science historian Peter Galison writes in Image and Logic: A Material Culture of Physics, his influential account of how physicists’ tools shape their discoveries, the task was “subtle, difficult, and anything but routinized,” requiring “three-dimensional visual intuition.” He goes on: “Even within a single experiment, judgment was required—this was not an algorithmic activity, an assembly line procedure in which action could be specified fully by rules.”




Over the last decade, though, one thing we’ve learned is that AI systems can, in fact, perform tasks once thought to require human intuition, such as mastering the ancient board game Go. So researchers have been testing AI’s intuition in physics. In 2019, Kasieczka and his collaborators announced the LHC Olympics 2020, a contest in which participants submitted algorithms to find anomalous events in three sets of (simulated) LHC data. Some teams correctly found the anomalous signal in one dataset, but some falsely reported one in the second set, and they all missed it in the third. In 2020, a research collective called Dark Machines announced a similar competition, which drew more than 1,000 submissions of machine learning models. Decisions about how to score them led to different rankings, showing that there’s no best way to explore the unknown.

Another way to test unsupervised learning is to play revisionist history. In 1995, a particle dubbed the top quark turned up at the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. But what if it actually hadn’t? Researchers applied unsupervised learning to LHC data collected in 2012, pretending they knew almost nothing about the top quark. Sure enough, the AI revealed a set of anomalous events that were clustered together. Combined with a bit of human intuition, they pointed toward something like the top quark.




That exercise underlines the fact that unsupervised learning can’t replace physicists just yet. “If your anomaly detector detects some kind of feature, how do you get from that statement to something like a physics interpretation?” Kasieczka says. “The anomaly search is more a scouting-like strategy to get you to look into the right corner.” Georgia Karagiorgi, a physicist at Columbia University, agrees. “Once you find something unexpected, you can’t just call it quits and be like, ‘Oh, I discovered something,’” she says. “You have to come up with a model and then test it.”

Kyle Cranmer, a physicist and data scientist at the University of Wisconsin-Madison who played a key role in the discovery of the Higgs boson particle in 2012, also says that human expertise can’t be dismissed. “There’s an infinite number of ways the data can look different from what you expected,” he says, “and most of them aren’t interesting.” Physicists might be able to recognize whether a deviation suggests some plausible new physical phenomenon, rather than just noise. “But how you try to codify that and make it explicit in some algorithm is much less straightforward,” Cranmer says. Ideally, the guidelines would be general enough to exclude the unimaginable without eliminating the merely unimagined. “That’s gonna be your Goldilocks situation.”

In his 1987 book How Experiments End, Harvard’s Galison writes that scientific instruments can “import assumptions built into the apparatus itself.” He tells me about a 1973 experiment that looked for a phenomenon called neutral currents, signaled by an absence of a so-called heavy electron (later renamed the muon). One team initially used a trigger left over from previous experiments, which recorded events only if they produced those heavy electrons—even though neutral currents, by definition, produce none. As a result, for some time the researchers missed the phenomenon and wrongly concluded that it didn’t exist. Galison says that the physicists’ design choice “allowed the discovery of [only] one thing, and it blinded the next generation of people to this new discovery. And that is always a risk when you’re being selective.”

How AI Could Miss—or Fake—New Physics

I ask Galison if by automating the search for interesting events, we’re letting the AI take over the science. He rephrases the question: “Have we handed over the keys to the car of science to the machines?” One way to alleviate such concerns, he tells me, is to generate test data to see if an algorithm behaves as expected—as in the LHC Olympics. “Before you take a camera out and photograph the Loch Ness Monster, you want to make sure that it can reproduce a wide variety of colors” and patterns accurately, he says, so you can rely on it to capture whatever comes.

Galison, who is also a physicist, works on the Event Horizon Telescope, which images black holes. For that project, he remembers putting up utterly unexpected test images like Frosty the Snowman so that scientists could probe the system’s general ability to catch something new. “The danger is that you’ve missed out on some crucial test,” he says, “and that the object you’re going to be photographing is so different from your test patterns that you’re unprepared.”

The algorithms that physicists are using to seek new physics are certainly vulnerable to this danger. It helps that unsupervised learning is already being used in many applications. In industry, it’s surfacing anomalous credit-card transactions and hacked networks. In science, it’s identifying earthquake precursors, genome locations where proteins bind, and merging galaxies.

But one difference with particle-physics data is that the anomalies may not be stand-alone objects or events. You’re looking not just for a needle in a haystack; you’re also looking for subtle irregularities in the haystack itself. Maybe a stack contains a few more short stems than you’d expect. Or a pattern reveals itself only when you simultaneously look at the size, shape, color, and texture of stems. Such a pattern might suggest an unacknowledged substance in the soil. In accelerator data, subtle patterns might suggest a hidden force. As Kasieczka and his colleagues write in one paper, “We are not looking for flying elephants, but instead a few extra elephants than usual at the local watering hole.”

Even algorithms that weigh many factors can miss signals—and they can also see spurious ones. The stakes of mistakenly claiming discovery are high. Going back to the hacking scenario, Plehn says, a company might ultimately determine that its network wasn’t hacked; it was just a new employee. The algorithm’s false positive causes little damage. “Whereas if you stand there and get the Nobel Prize, and a year later people say, ‘Well, it was a fluke,’ people would make fun of you for the rest of your life,” he says. In particle physics, he adds, you run the risk of spotting patterns purely by chance in big data, or as a result of malfunctioning equipment.

False alarms have happened before. In 1976, a group at Fermilab led by Leon Lederman, who later won a Nobel for other work, announced the discovery of a particle they tentatively called the Upsilon. The researchers calculated the probability of the signal’s happening by chance as 1 in 50. After further data collection, though, they walked back the discovery, calling the pseudo-particle the Oops-Leon. (Today, particle physicists wait until the chance that a finding is a fluke drops below 1 in 3.5 million, the so-called five-sigma criterion.) And in 2011, researchers at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment, in Italy, announced evidence for faster-than-light travel of neutrinos. Then, a few months later, they reported that the result was due to a faulty connection in their timing system.
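The five-sigma criterion corresponds to the one-sided tail of a Gaussian distribution, and a two-line check (standard statistics, not tied to any particular experiment) recovers the roughly 1-in-3.5-million figure:

```python
from math import erfc, sqrt

# One-sided probability of a fluctuation at least 5 standard deviations above
# the background-only expectation, assuming Gaussian statistics.
p_five_sigma = 0.5 * erfc(5 / sqrt(2))
print(f"p(5 sigma) = {p_five_sigma:.2e}  ->  about 1 in {1 / p_five_sigma:,.0f}")
```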

Those cautionary tales linger in the minds of physicists. And yet, even while researchers are wary of false positives from AI, they also see it as a safeguard against them. So far, unsupervised learning has discovered no new physics, despite its use on data from multiple experiments at Fermilab and CERN. But anomaly detection may have prevented embarrassments like the one at OPERA. “So instead of telling you there’s a new physics particle,” Kasieczka says, “it’s telling you, this sensor is behaving weird today. You should restart it.”

Hardware for AI-Assisted Particle Physics

Particle physicists are pushing the limits of not only their computing software but also their computing hardware. The challenge is unparalleled. The LHC produces 40 million particle collisions per second, each of which can produce a megabyte of data. That’s much too much information to store, even if you could save it to disk that quickly. So the two largest detectors each use two-level data filtering. The first layer, called the Level-1 Trigger, or L1T, harvests 100,000 events per second, and the second layer, called the High-Level Trigger, or HLT, plucks 1,000 of those events to save for later analysis. So only one in 40,000 events is ever potentially seen by human eyes.
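Plugging in the numbers quoted above gives a feel for the reduction the two trigger layers must achieve; the one-megabyte event size is the rough figure from the text.

```python
# Back-of-the-envelope trigger arithmetic from the figures quoted above.
collisions_per_s = 40_000_000   # raw LHC collision rate
l1t_out_per_s    = 100_000      # events surviving the Level-1 Trigger (L1T)
hlt_out_per_s    = 1_000        # events saved by the High-Level Trigger (HLT)
bytes_per_event  = 1_000_000    # ~1 MB per collision

print(f"raw data rate         : ~{collisions_per_s * bytes_per_event / 1e12:.0f} TB/s")
print(f"L1T keeps             : 1 in {collisions_per_s // l1t_out_per_s}")
print(f"HLT then keeps        : 1 in {l1t_out_per_s // hlt_out_per_s}")
print(f"overall kept fraction : 1 in {collisions_per_s // hlt_out_per_s}")
```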




HLTs use central processing units (CPUs) like the ones in your desktop computer, running complex machine learning algorithms that analyze collisions based on the number, type, energy, momentum, and angles of the new particles produced. L1Ts, as a first line of defense, must be fast. So the L1Ts rely on integrated circuits called field-programmable gate arrays (FPGAs), which users can reprogram for specialized calculations.

The trade-off is that the programming must be relatively simple. The FPGAs can’t easily store and run fancy neural networks; instead they follow scripted rules about, say, what features of a particle collision make it important. In terms of complexity level, it’s the instructions given to the women who scanned bubble chambers, not the women’s brains.

Ekaterina (Katya) Govorkova, a particle physicist at MIT, saw a path toward improving the LHC’s filters, inspired by a board game. Around 2020, she was looking for new physics by comparing precise measurements at the LHC with predictions, using little or no machine learning. Then she watched a documentary about AlphaGo, the program that used machine learning to beat a human Go champion. “For me the moment of realization was when AlphaGo would use some absolutely new type of strategy that humans, who played this game for centuries, hadn’t thought about before,” she says. “So that’s when I thought, we need something like that in physics. We need a genius that can look at the world differently.” New physics may be something we’d never imagine.

Govorkova and her collaborators found a way to compress autoencoders to put them on FPGAs, where they process an event every 80 nanoseconds (less than a ten-millionth of a second). (Compression involved pruning some network connections and reducing the precision of some calculations.) They published their methods in Nature Machine Intelligence in 2022, and researchers are now using them during the LHC’s third run. The new trigger tech is installed in one of the detectors around the LHC’s giant ring, and it has found many anomalous events that would otherwise have gone unflagged.
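The two compression ideas mentioned in passing, pruning connections and reducing numerical precision, can be illustrated on a toy weight matrix. The 50 percent pruning fraction and the 8-bit fixed-point format below are assumptions for illustration, not the settings of the actual trigger firmware.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.5, size=(16, 8))   # a made-up layer of network weights

# 1. Pruning: zero out the weakest half of the connections.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# 2. Reduced precision: quantize to 8-bit fixed point with 6 fractional bits,
#    the kind of representation that maps cheaply onto FPGA logic.
scale = 2 ** 6
quantized = np.clip(np.round(pruned * scale), -128, 127) / scale

print("nonzero weights kept  :", np.count_nonzero(quantized), "of", weights.size)
print("max quantization error:", float(np.abs(quantized - pruned).max()))
```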

Researchers are currently setting up analysis workflows to decipher why the events were deemed anomalous. Jennifer Ngadiuba, a particle physicist at Fermilab who is also one of the coordinators of the trigger system (and one of Govorkova’s coauthors), says that one feature stands out already: Flagged events have lots of jets of new particles shooting out of the collisions. But the scientists still need to explore other factors, like the new particles’ energies and their distributions in space. “It’s a high-dimensional problem,” she says.

Eventually they will share the data openly, allowing others to eyeball the results or to apply new unsupervised learning algorithms in the hunt for patterns. Javier Duarte, a physicist at the University of California, San Diego, and also a coauthor on the 2022 paper, says, “It’s kind of exciting to think about providing this to the community of particle physicists and saying, like, ‘Shrug, we don’t know what this is. You can take a look.’” Duarte and Ngadiuba note that high-energy physics has traditionally followed a top-down approach to discovery, testing data against well-defined theories. Adding in this new bottom-up search for the unexpected marks a new paradigm. “And also a return of sorts to before the Standard Model was so well established,” Duarte adds.

Yet it could be years before we know why AI marked those collisions as anomalous. What conclusions could they support? “In the worst case, it could be some detector noise that we didn’t know about,” which would still be useful information, Ngadiuba says. “The best scenario could be a new particle. And then a new particle implies a new force.”




Duarte says he expects their work with FPGAs to have wider applications. “The data rates and the constraints in high-energy physics are so extreme that people in industry aren’t necessarily working on this,” he says. “In self-driving cars, usually millisecond latencies are sufficient reaction times. But we’re developing algorithms that need to respond in microseconds or less. We’re at this technological frontier, and to see how much that can proliferate back to industry will be cool.”

Plehn is also working to put neural networks on FPGAs for triggers, in collaboration with experimentalists, electrical engineers, and other theorists. Encoding the nuances of abstract theories into material hardware is a puzzle. “In this grant proposal, the person I talked to most is the electrical engineer,” he says, “because I have to ask the engineer, which of my algorithms fits on your bloody FPGA?”

Hardware is hard, says Ryan Kastner, an electrical engineer and computer scientist at UC San Diego who works with Duarte on programming FPGAs. What allows the chips to run algorithms so quickly is their flexibility. Instead of programming them in an abstract coding language like Python, engineers configure the underlying circuitry. They map logic gates, route data paths, and synchronize operations by hand. That low-level control also makes the effort “painfully difficult,” Kastner says. “It’s kind of like you have a lot of rope, and it’s very easy to hang yourself.”

Seeking New Physics Among the Neutrinos

The next piece of new physics may not pop up at a particle accelerator. It may appear at a detector for neutrinos, particles that are part of the Standard Model but remain deeply mysterious. Neutrinos are tiny, electrically neutral, and so light that no one has yet measured their mass. (The latest attempt, in April, set an upper limit of about a millionth the mass of an electron.) Of all known particles with mass, neutrinos are the universe’s most abundant, but also among the most ghostly, rarely deigning to acknowledge the matter around them. Tens of trillions pass through your body every second.

If we listen very closely, though, we may just hear the secrets they have to tell. Karagiorgi, of Columbia, has chosen this path to discovery. Being a physicist is “kind of like playing detective, but where you create your own mysteries,” she tells me during my visit to Columbia’s Nevis Laboratories, located on a large estate about 20 km north of Manhattan. Physics research began at the site after World War II; one hallway features papers going back to 1951.



Karagiorgi is eagerly awaiting a massive neutrino detector that’s currently under construction. Starting in 2028, Fermilab will send neutrinos west through 1,300 km of rock to South Dakota, where they’ll occasionally make their existence known in the Deep Underground Neutrino Experiment (DUNE). Why so far away? When neutrinos travel long distances, they have an odd habit of oscillating, transforming from one kind or “flavor” to another. Observing the oscillations of both the neutrinos and their mirror-image antiparticles, antineutrinos, could tell researchers something about the universe’s matter-antimatter asymmetry—which the Standard Model doesn’t explain—and thus, according to the Nevis website, “why we exist.”

“DUNE is the thing that’s been pushing me to develop these real-time AI methods,” Karagiorgi says, “for sifting through the data very, very, very quickly and trying to look for rare signatures of interest within them.” When neutrinos interact with the detector’s 70,000 tonnes of liquid argon, they’ll generate a shower of other particles, creating visual tracks that look like a photo of fireworks.


A simplified chart of the Standard Model of physics shows matter particles (quarks and leptons), force-carrying particles, and the Higgs, which conveys mass.

Even when not bombarding DUNE with neutrinos, researchers will keep collecting data on the off chance that it captures neutrinos from a distant supernova. “This is a massive detector spewing out 5 terabytes of data per second,” Karagiorgi says, “and it’s going to run constantly for a decade.” They will need unsupervised learning to notice signatures that no one was looking for, because there are “lots of different models of how supernova explosions happen, and for all we know, none of them could be the right model for neutrinos,” she says. “To train your algorithm on such uncertain grounds is less than ideal. So an algorithm that can recognize any kind of disturbance would be a win.”

Deciding in real time which 1 percent of 1 percent of data to keep will require FPGAs. Karagiorgi’s team is preparing to use them for DUNE, and she walks me to a computer lab where they program the circuits. In the FPGA lab, we look at nondescript circuit boards sitting on a table. “So what we’re proposing is a scheme where you can have something like a hundred of these boards for DUNE deep underground that receive the image data frame by frame,” she says. This system could tell researchers whether a given frame resembled TV static, fireworks, or something in between.

Neutrino experiments, like many particle-physics studies, are very visual. When Karagiorgi was a postdoc, automated image processing at neutrino detectors was still in its infancy, so she and collaborators would often resort to visual scanning (bubble-chamber style) to measure particle tracks. She still asks undergrads to hand-scan as an educational exercise. “I think it’s wrong to just send them to write a machine learning algorithm. Unless you can actually visualize the data, you don’t really gain a sense of what you’re looking for,” she says. “I think it also helps with creativity to be able to visualize the different types of interactions that are happening, and see what’s normal and what’s not normal.”

Back in Karagiorgi’s office, a bulletin board displays images from The Cognitive Art of Feynman Diagrams, an exhibit for which the designer Edward Tufte created wire sculptures of the physicist Richard Feynman’s schematics of particle interactions. “It’s funny, you know,” she says. “They look like they’re just scribbles, right? But actually, they encode quantitatively predictive behavior in nature.” Later, Karagiorgi and I spend a good 10 minutes discussing whether a computer or a human could find Waldo without knowing what Waldo looked like. We also touch on the 1964 Supreme Court case in which Justice Potter Stewart famously declined to define obscenity, saying “I know it when I see it.” I ask whether it seems weird to hand over to a machine the task of deciding what’s visually interesting. “There are a lot of trust issues,” she says with a laugh.

On the drive back to Manhattan, we discuss the history of scientific discovery. “I think it’s part of human nature to try to make sense of an orderly world around you,” Karagiorgi says. “And then you just automatically pick out the oddities. Some people obsess about the oddities more than others, and then try to understand them.”

Reflecting on the Standard Model, she called it “beautiful and elegant,” with “amazing predictive power.” Yet she finds it both limited and limiting, blinding us to colors we don’t yet see. “Sometimes it’s both a blessing and a curse that we’ve managed to develop such a successful theory.”

Reference: https://ift.tt/LTt2bEV

The rise of Moltbook suggests viral AI prompts may be the next big security threat


On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a novel platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further.


Reference : https://ift.tt/OSdgorF

Monday, February 2, 2026

IEEE Considers Safety Guidelines for Neurotech Consumer Products




Nonmedical devices that read brainwaves, such as smart headbands, headphones, and glasses, are becoming more popular among consumers. The products claim to make users more productive, creative, and healthier. IEEE Spectrum previewed several of these smart wearables that were introduced at this year’s Consumer Electronics Show (CES) in Las Vegas.

Since the wearable, noninvasive neurotech products aren’t medical devices, they are not subject to the same forms of regulation—which can lead to gaps in safety and data privacy, as well as in understanding their effects on users’ brains.

UNESCO in November adopted the first global ethical standard for neurotechnologies, establishing guidelines to protect users’ mental privacy, freedom of thought, and human rights. In 2019 the Organisation for Economic Co-operation and Development issued responsible-neurotechnology recommendations. But there are no socio-technical standards for manufacturers to follow.

In response, the IEEE Brain technical community is developing the IEEE P7700 standard: “Recommended Practice for the Responsible Design and Development of Neurotechnologies.”

The proposed standard is designed to provide a uniform set of definitions and a methodology for assessing the ethical and socio-technical considerations in the design, development, and use of neurotechnologies, including wearable neurodevices for the brain, says Laura Y. Cabrera, the standard’s working group chair. Cabrera, an IEEE senior member, is an associate professor in the engineering science and mechanics department at Pennsylvania State University in University Park. Her research focuses on the ethical and societal implications of neurotechnologies.

“IEEE P7700 addresses the unique characteristics of the technology and its impact on individuals and society, in particular, as it moves from therapeutic users to a wide variety of consumers,” she says.

The standard is sponsored by the IEEE Society on Social Implications of Technology.

Concern over long-term effects

The multilayered complexity of technologies that interface with the brain and nervous system raises considerations for those developing them, Cabrera says.

“There may be long-term consequences in our brains with these types of technologies,” she says. “Maybe if they were used for a short period of time, there might not be significant consequences. But what are the effects over time?”

Patients using approved brain-stimulation technology, for example, are told of its risks and benefits, but the long-term effects of headbands to improve students’ attention span aren’t known.


IEEE P7700 will address potential risks to individuals and possible negative impacts on society, Cabrera says. That includes creating guardrails to prevent harm, she adds.

The cultural implications of using neurotechnologies that interface with the brain also need to be considered, she says, because people have different views.

“The brain is considered the seat of the self and the organ that orchestrates all our thoughts, behaviors, feelings, and emotions,” she says. “The brain is really central to who we are.”

Developing an ethical framework

For the past five years, the IEEE Brain community’s neuroethics committee has been developing a framework to evaluate the ethical, legal, social, and cultural issues that could emerge from use of the technology. The document covers nine types of applications, including those used for wellness.

Because more devices kept entering the market, IEEE Brain decided in 2023 that it was time to begin drafting a standard.

Members of its working group come from Argentina, China, Japan, Italy, Switzerland, and the United States. Participants include developers, engineers, ethicists, lawyers, and social science researchers.

The standard, Cabrera says, will be the first socio-technical standard aimed at fostering the ethical and responsible innovation of neurotechnology that meets societal and community values at an international level. P7700 will include a how-to guide, criteria for evaluating each suggested process, and case studies to help with the interpretation and practical use of the standard, she says.

“Our applied ethical approach uses a responsible research and innovation method to enable developers, researchers, users, and regulators to anticipate and address ethical and sociocultural implications of neurotechnologies, mitigating negative unintended consequences while increasing community support and engagement with innovators,” Cabrera says.

The working group is seeking additional participants to help refine the process, tools, and recommendations.

“There are a variety of people who can contribute their expertise,” she says, “including academics, data scientists, government program leaders, policymakers, lawyers, social scientists, and users.”

Cabrera says she anticipates the standard will be published early next year.

You can register to participate in the standard’s development here.

Reference: https://ift.tt/yumBegc

Notepad++ users take note: It's time to check if you're hacked


Infrastructure delivering updates for Notepad++—a widely used text editor for Windows—was compromised for six months by suspected China-state hackers who used their control to deliver backdoored versions of the app to select targets, developers said Monday.

“I deeply apologize to all users affected by this hijacking,” the author of a post published to the official notepad-plus-plus.org site wrote Monday. The post said that the attack began last June with an “infrastructure-level compromise that allowed malicious actors to intercept and redirect update traffic destined for notepad-plus-plus.org.” The attackers, whom multiple investigators tied to the Chinese government, then selectively redirected certain targeted users to malicious update servers where they received backdoored updates. Notepad++ didn’t regain control of its infrastructure until December.

Hands-on keyboard hacking

Notepad++ said that officials with the unnamed provider hosting the update infrastructure consulted with incident responders and found that the infrastructure remained compromised until September 2. Even then, the attackers retained credentials to internal services until December 2, a capability that allowed them to continue redirecting selected update traffic to malicious servers. The threat actor “specifically targeted Notepad++ domain with the goal of exploiting insufficient update verification controls that existed in older versions of Notepad++.” Event logs indicate that the hackers tried to re-exploit one of the weaknesses after it was fixed, but the attempt failed.
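
The weakness at the center of this incident, insufficient update verification, has a well-understood mitigation: the client refuses to install anything it cannot cryptographically verify against a key pinned in the application itself. The sketch below is a minimal illustration of that pattern in Python, under my own assumptions; it is not Notepad++'s actual updater, and the file names and key material are hypothetical placeholders.

    # Minimal sketch of client-side update verification (illustrative only;
    # not Notepad++'s actual mechanism). Assumes the vendor ships a pinned
    # Ed25519 public key inside the client and serves a detached signature
    # alongside each update package.
    import hashlib
    import sys

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Hypothetical pinned key baked into the client at build time (32 raw bytes).
    PINNED_PUBLIC_KEY = bytes.fromhex("aa" * 32)  # placeholder, not a real key

    def verify_update(package_path: str, signature_path: str) -> bool:
        """Return True only if the package's signature verifies against the pinned key."""
        with open(package_path, "rb") as f:
            package = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()

        public_key = Ed25519PublicKey.from_public_bytes(PINNED_PUBLIC_KEY)
        try:
            # The signature covers the package bytes, so a redirected or
            # tampered download fails here even over a hijacked channel.
            public_key.verify(signature, package)
        except InvalidSignature:
            return False

        # Record the digest for audit and incident response.
        print("sha256:", hashlib.sha256(package).hexdigest())
        return True

    if __name__ == "__main__":
        ok = verify_update("npp_update.exe", "npp_update.exe.sig")
        sys.exit(0 if ok else 1)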


Reference: https://ift.tt/zYsf4Qy

Don’t Regulate AI Models. Regulate AI Use




At times, it can seem like efforts to regulate and rein in AI are everything everywhere all at once.

China issued the first AI-specific regulations in 2021. The focus is squarely on providers and content governance, enforced through platform control and record-keeping requirements.

In Europe, the EU AI Act dates to 2024, but the European Commission is already proposing updates and simplification.

India charged its senior technical advisors with creating an AI governance system, which they released in November 2025.

In the United States, the states are legislating and enforcing their own AI rules, even as the federal government in 2025 moved to prevent state action and loosen the reins.

This leads to a critical question for American engineers and policymakers alike: What can the U.S. actually enforce in a way that reduces real-world harm? My answer: regulate AI use, not the underlying models.

Why model-centric regulation fails

Proposals to license “frontier” training runs, restrict open weights, or require permission before publishing models, such as California’s Transparency in Frontier Artificial Intelligence Act, promise control but deliver theater. Model weights and code are digital artifacts; once released, whether by a lab, a leak, or a foreign competitor, they replicate at near-zero cost. You can’t un-publish weights, geofence research, or prevent distillation into smaller models. Trying to bottle up artifacts yields two bad outcomes: compliant firms drown in paperwork while reckless actors route around the rules offshore, underground, or both.

In the U.S., model-publication licensing also likely collides with speech law. Federal courts have treated software source code as protected expression, so any system that prevents the publication of AI models would be vulnerable to legal challenges.

“Do nothing” is not an option either. Without guardrails, we will keep seeing deepfake scams, automated fraud, and mass-persuasion campaigns until a headline catastrophe triggers a blunt response optimized for optics, not outcomes.

A practical alternative: Regulate use, proportionate to risk

A use-based regime classifies deployments by risk and scales obligations accordingly. Here is a workable template focused on keeping enforcement where systems actually touch people (a rough code sketch of the tiering follows the list):

  1. Baseline: General-purpose consumer interaction (open-ended chat, creative writing, learning assistance, casual productivity).
    Regulatory adherence: clear AI disclosure at point of interaction, published acceptable use policies, technical guardrails preventing escalation into higher-risk tiers, and a mechanism for users to flag problematic outputs.
  2. Low-risk assistance (drafting, summarization, basic productivity).
    Regulatory adherence: simple disclosure, baseline data hygiene.
  3. Moderate-risk decision support affecting individuals (hiring triage, benefits screening, loan pre-qualification).
    Regulatory adherence: documented risk assessment, meaningful human oversight, and an “AI bill of materials” consisting of at least the model lineage, key evaluations, and mitigations.
  4. High-impact uses in safety-critical contexts (clinical decision support, critical-infrastructure operations).
    Regulatory adherence: rigorous pre-deployment testing tied to the specific use, continuous monitoring, incident reporting, and, when warranted, authorization linked to validated performance.
  5. Hazardous dual-use functions (e.g., tools to fabricate biometric voiceprints to defeat authentication).
    Regulatory adherence: confine to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.
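
To make the tiering concrete, here is a minimal sketch, under my own assumptions, of how a deployer or an auditor might encode these tiers and the obligations attached to them. The tier names, obligation labels, and checking logic are illustrative only; they are not drawn from any existing statute or standard.

    # Illustrative encoding of the proposed use-based risk tiers (assumptions
    # mine, not an official scheme). A compliance check compares a deployment's
    # declared controls against the obligations for its tier.
    from enum import Enum

    class RiskTier(Enum):
        BASELINE = 1         # general-purpose consumer interaction
        LOW = 2              # drafting, summarization, basic productivity
        MODERATE = 3         # decision support affecting individuals
        HIGH_IMPACT = 4      # safety-critical contexts
        DUAL_USE_HAZARD = 5  # e.g., voiceprint-fabrication tooling

    OBLIGATIONS = {
        RiskTier.BASELINE: {"ai_disclosure", "acceptable_use_policy",
                            "tier_escalation_guardrails", "user_flagging"},
        RiskTier.LOW: {"ai_disclosure", "data_hygiene"},
        RiskTier.MODERATE: {"risk_assessment", "human_oversight",
                            "ai_bill_of_materials"},
        RiskTier.HIGH_IMPACT: {"pre_deployment_testing", "continuous_monitoring",
                               "incident_reporting", "validated_authorization"},
        RiskTier.DUAL_USE_HAZARD: {"licensed_facility", "verified_operators"},
    }

    def missing_controls(tier: RiskTier, declared: set[str]) -> set[str]:
        """Return obligations the deployment has not declared for its tier."""
        return OBLIGATIONS[tier] - declared

    # Example: a hiring-triage tool with a documented risk assessment and
    # human oversight, but no AI bill of materials yet.
    gaps = missing_controls(RiskTier.MODERATE, {"risk_assessment", "human_oversight"})
    print(gaps)  # {'ai_bill_of_materials'}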

Close the loop at real-world chokepoints

AI-enabled systems become real when they’re connected to users, money, infrastructure, and institutions, and that’s where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).

For high-risk uses, regulators should require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and post-incident review, paired with privacy protections. They should also demand evidence for deployers’ claims and require deployers to maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to damage, firms should have to show their work and face liability for harms.
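
Of the controls listed above, tamper-evident logging is the most mechanical to illustrate. One common construction, sketched below as an assumption of mine rather than a prescribed design, is a hash chain: each record’s hash commits to the previous record’s hash, so any later edit or deletion breaks verification from that point forward.

    # Sketch of a hash-chained, tamper-evident audit log (one possible design,
    # not a mandated one). Each record's hash covers its payload plus the
    # previous record's hash, so silent edits become detectable.
    import hashlib
    import json
    import time

    def _entry_hash(prev_hash: str, payload: dict) -> str:
        material = prev_hash + json.dumps(payload, sort_keys=True)
        return hashlib.sha256(material.encode("utf-8")).hexdigest()

    class AuditLog:
        GENESIS = "0" * 64

        def __init__(self):
            self.entries = []  # list of (payload, entry_hash)

        def append(self, payload: dict) -> str:
            prev = self.entries[-1][1] if self.entries else self.GENESIS
            h = _entry_hash(prev, payload)
            self.entries.append((payload, h))
            return h

        def verify(self) -> bool:
            prev = self.GENESIS
            for payload, stored in self.entries:
                if _entry_hash(prev, payload) != stored:
                    return False
                prev = stored
            return True

    log = AuditLog()
    log.append({"ts": time.time(), "operator": "op-123", "action": "loan_prequal"})
    log.append({"ts": time.time(), "operator": "op-123", "action": "human_override"})
    print(log.verify())                        # True
    log.entries[0][0]["action"] = "deleted"    # tamper with the first record
    print(log.verify())                        # False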

This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.

The EU approach: How this aligns, where it differs

This framework aligns with the EU AI Act in two important ways. First, it centers risk at the point of impact: the Act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with lifecycle obligations and complaint rights. Second, it gives special treatment to broadly capable systems (GPAI) without pretending publication control is a safety strategy. My proposal for the U.S. differs in three key ways:

First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models starts to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what conditions, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes.

Second, the EU can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and post-incident accountability can be required without pretending we can “contain” software. These chokepoints also fall under the jurisdiction of many specialized U.S. agencies, none of which may be able to write higher-level rules broad enough to cover the whole AI ecosystem. The U.S. should therefore regulate AI service chokepoints more explicitly than Europe does, to accommodate the different shape of its government and public administration.

Third, the U.S. should add an explicit “dual-use hazard” tier. The EU AI Act is primarily a fundamental-rights and product-safety regime. The U.S. also has a national-security reality: certain capabilities are dangerous because they scale harm (biosecurity, cyber offense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.

China’s approach: What to reuse, what to avoid

China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective January 10, 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective August 15, 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.

The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.

But we can borrow two practical ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media, through mandatory labeling and provenance forensic tools. These give legitimate creators and platforms a reliable way to prove origin and integrity. When authenticity can be checked quickly and at scale, attackers lose the advantage of cheap copies and deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators to file their methods and risk controls with regulators for public-facing, high-risk services, as we do for other safety-critical projects. This should include due-process and transparency safeguards appropriate to liberal democracies, along with clear responsibility for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, a category that already includes gaming, role-playing, and related applications.
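
To show what a provenance check can look like in practice, here is a toy sketch under my own assumptions; it is not C2PA or any specific labeling standard. A creator signs a small manifest containing the media file’s hash, and anyone holding the matching public key can confirm both that the label is authentic and that the file still matches it. All names and keys are hypothetical.

    # Toy provenance check (illustrative assumptions, not a real standard):
    # a creator signs a manifest containing the asset's hash; a verifier
    # recomputes the hash and checks the signature.
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_manifest(asset: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
        body = {"creator": creator, "sha256": hashlib.sha256(asset).hexdigest()}
        signature = key.sign(json.dumps(body, sort_keys=True).encode())
        return {"body": body, "signature": signature.hex()}

    def verify_manifest(asset: bytes, manifest: dict, public_key) -> bool:
        body = manifest["body"]
        if hashlib.sha256(asset).hexdigest() != body["sha256"]:
            return False  # media was altered after labeling
        try:
            public_key.verify(bytes.fromhex(manifest["signature"]),
                              json.dumps(body, sort_keys=True).encode())
            return True
        except InvalidSignature:
            return False  # label was forged or tampered with

    key = Ed25519PrivateKey.generate()
    asset = b"...synthetic media bytes..."
    manifest = make_manifest(asset, "studio-xyz", key)
    print(verify_manifest(asset, manifest, key.public_key()))         # True
    print(verify_manifest(asset + b"!", manifest, key.public_key()))  # False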

A pragmatic approach

We cannot meaningfully regulate the development of AI in a world where artifacts copy in near real-time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at chokepoints; and applying obligations that scale with risk.

Done right, this approach harmonizes with the EU’s outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China’s useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people while still promoting robust AI innovation.

Reference: https://ift.tt/N7LCQBt

Sunday, February 1, 2026

LuSEE-Night: See You on the Far Side of the Moon




As a kid in the 1970s, I watched the Apollo moon missions on TV, drawn like a curious moth to the cathode-ray tube’s glow. The English band Pink Floyd blared through the speakers of my mom’s Oldsmobile Cutlass Supreme, beckoning us to the dark side of the moon.

The far side of the moon, the term most scientists prefer, is indeed dark (half the time), cold, and inhospitable. There’s regolith and a couple of Chinese landers—Chang’e 4 in January 2019 and Chang’e 6 in June 2024—and not much else. That could change in about a year, as Contributing Editor Ned Potter reports in “The Quest to Build a Telescope That Can Hear the Cosmic Dark Ages.” Firefly Aerospace’s Blue Ghost Mission 2 with the LuSEE-Night radio telescope aboard will attempt to become the third successful mission to land there.

The moon’s far side is the perfect place for such a telescope. The same RF waves that carried images of Neil Armstrong setting foot on the lunar surface, Roger Waters’s voice, and hundreds of Ned Potter’s space and science segments for the U.S. broadcast networks CBS and ABC interfere with terrestrial radio telescopes. If your goal is to detect the extremely faint and heavily redshifted signals of neutral hydrogen from the cosmic Dark Ages, you just can’t do it from Earth. This epoch is so-called because we Earthlings have yet to sense anything from this time period, which started about 380,000 years after the big bang and lasted 200 million to 400 million years. The far side of the moon may be a terrible place to live, but it’s shielded from all the noise of Earth, making it the ideal spot to place a radio telescope.
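
A quick back-of-the-envelope calculation, using standard textbook values rather than anything from Potter’s reporting, shows why these signals are out of reach from the ground. Neutral hydrogen emits at a rest frequency of about 1,420.4 MHz, and cosmic expansion stretches that by a factor of (1 + z):

\[
\nu_{\text{obs}} = \frac{\nu_{\text{rest}}}{1+z} \approx \frac{1420.4~\text{MHz}}{1+z}
\]

For Dark Ages redshifts of roughly z = 30 to 150, the observable signal lands somewhere around 9 to 46 MHz, squarely within the shortwave broadcast bands and, at the low end, in territory where Earth’s ionosphere reflects or distorts incoming radio waves.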

As Potter emphasized to me recently, LuSEE-Night won’t listen for a signal from Dark Ages hydrogen directly. “Will the hydrogen from the Dark Ages send a signal? No,” says Potter. “But all that hydrogen out there may absorb a little bit of energy from the cosmic microwave background, interfering with that even more distant remnant of the big bang.”

The far side may not stay quiet for much longer. Several countries, including China, India, Japan, Russia, South Korea, the United Arab Emirates, and the United States, are making slow but steady progress toward establishing a lunar presence. As they do so, they’ll place more relay satellites into orbit around the moon to support exploratory activities as well as moon bases planned for the next decade and beyond. That means the window on a noise-free far side is closing. LuSEE-Night, a project 40 years in the making, might just get there in the nick of time.

Potter is tracking emerging protocols that could preserve the far side’s electromagnetic silence even as such efforts advance. Radio astronomers he’s talked to have shared ideas about how to prevent this emerging problem from turning into a crisis. “There are no bad guys in this story, at least not yet,” says Potter. “But there are a lot of well-meaning people who could complicate the picture a great deal if they don’t know that there’s a picture to complicate.”

It’s a busy time for moon missions. In addition to Blue Ghost Mission 2, China is sending Chang’e 7 to the moon’s south pole, while NASA’s Artemis II is scheduled to enter the first of three launch windows this month. Artemis II will be the first mission to put humans into lunar orbit since the last Apollo mission in 1972. And IEEE Spectrum readers will enjoy a front-row seat, thanks to the enterprising reporting of a true legend in the business, our own Ned Potter.

This article appears in the February 2026 print issue as “See You on the Far Side of the Moon.”

Reference: https://ift.tt/IiHO3TR
