Friday, April 4, 2025

NSA warns “fast flux” threatens national security. What is fast flux anyway?


A technique that hostile nation-states and financially motivated ransomware groups are using to hide their operations poses a threat to critical infrastructure and national security, the National Security Agency has warned.

The technique is known as fast flux. It allows decentralized networks operated by threat actors to hide their infrastructure and survive takedown attempts that would otherwise succeed. Fast flux works by cycling through a range of IP addresses and domain names that these botnets use to connect to the Internet. In some cases, IPs and domain names change every day or two; in other cases, they change almost hourly. The constant flux complicates the task of isolating the true origin of the infrastructure. It also provides redundancy. By the time defenders block one address or domain, new ones have already been assigned.
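To make that concrete, here is a minimal Python sketch—an illustration only, not anything from the advisory—of how a defender might measure the DNS churn that fast flux produces. The domain name, poll count, and interval are hypothetical:

```python
# Minimal fast flux observation sketch (illustrative; "suspect.example"
# is a hypothetical domain, and the thresholds are assumptions).
import socket
import time

def resolve_ips(domain: str) -> set[str]:
    """Return the set of IPv4 addresses DNS currently advertises for a domain."""
    try:
        infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
        return {info[4][0] for info in infos}
    except socket.gaierror:
        return set()  # domain didn't resolve this time

def churn_rate(domain: str, polls: int = 5, interval: float = 60.0) -> float:
    """Fraction of polls that surfaced at least one never-before-seen IP.
    Stable, benign DNS stays near 0; a fast flux domain rotating its
    records every few minutes trends toward 1."""
    seen: set[str] = set()
    novel_polls = 0
    for _ in range(polls):
        current = resolve_ips(domain)
        if current - seen:
            novel_polls += 1
        seen |= current
        time.sleep(interval)
    return novel_polls / polls

if __name__ == "__main__":
    print(churn_rate("suspect.example", polls=3, interval=10.0))
```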

A significant threat

“This technique poses a significant threat to national security, enabling malicious cyber actors to consistently evade detection,” the NSA, FBI, and their counterparts from Canada, Australia, and New Zealand warned Thursday. “Malicious cyber actors, including cybercriminals and nation-state actors, use fast flux to obfuscate the locations of malicious servers by rapidly changing Domain Name System (DNS) records. Additionally, they can create resilient, highly available command and control (C2) infrastructure, concealing their subsequent malicious operations.”


Reference: https://ift.tt/dxMwROi

Video Friday: RIVR Delivers Your Package




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

I love the platform and I love the use case, but this particular delivery method is... odd?

[ RIVR ]

This is just the beginning of what people and physical AI can accomplish together. To recognize business value from collaborative robotics, you have to understand what people do well, what robots do well—and how they best come together to create productivity. DHL and Robust.AI are partnering to define the future of human-robot collaboration.

[ Robust AI ]

Teleoperated robotic characters can perform expressive interactions with humans, relying on the operators’ experience and social intuition. In this work, we propose to create autonomous interactive robots by training a model to imitate operator data. Our model is trained on a dataset of human-robot interactions, where an expert operator is asked to vary the interactions and mood of the robot, while the operator’s commands and the poses of the human and robot are recorded.

[ Disney Research Studios ]

Introducing THEMIS V2, our all-new full-size humanoid robot. Standing at 1.6m with 40 DoF, THEMIS V2 now features enhanced 6 DoF arms and advanced 7 DoF end-effectors, along with an additional body-mounted stereo camera and up to 200 TOPS of onboard AI computing power. These upgrades deliver exceptional capabilities in manipulation, perception, and navigation, pushing humanoid robotics to new heights.

[ Westwood ]

BMW x Figure Update: This isn’t a test environment—it’s real production operations. Real-world robots are advancing our Helix AI and strengthening our end-to-end autonomy to deploy millions of robots.

[ Figure ]

On March 13, at WorldMinds 2025, in the Kaufleuten Theater of Zurich, our team demonstrated for the first time two autonomous vision-based racing drones. It was an epic journey to prepare for this event, given the poor lighting conditions and the safety constraints of a theater filled with more than 500 people! The background screen visualizes in real-time the observations of the AI algorithm of each drone. No map, no IMU, no SLAM!

[ University of Zurich (UZH) ]

Unitree releases the Dex5 dexterous hand: a single hand with 20 degrees of freedom (16 active + 4 passive), smooth backdrivability (direct force control), and 94 highly sensitive touch points (optional).

[ Unitree ]

You can say “real world manipulation” all you want, but until it’s actually in the real world, I’m not buying it.

[ 1X ]

Developed by Pudu X-Lab, FlashBot Arm elevates the capabilities of our flagship FlashBot by blending advanced humanoid manipulation and intelligent delivery capabilities, powered by cutting-edge embodied AI. This powerful combination allows the robot to autonomously perform a wide range of tasks across diverse settings, including hotels, office buildings, restaurants, retail spaces, and healthcare facilities.

[ Pudu Robotics ]

If you ever wanted to manipulate a trilby with 25 robots, a solution now exists.

[ Paper ] via [ EPFL Reconfigurable Robotics Lab ] published by [ IEEE Robotics and Automation Letters ]

We’ve been sharing videos from the Suzumori Endo Robotics Lab at the Institute of Science Tokyo for many years, and Professor Suzumori is now retiring.

Best wishes to Professor Suzumori!

[ Suzumori Endo Lab ]

No matter the vehicle, traditional control systems struggle when unexpected challenges—like damage, unforeseen environments, or new missions—push them beyond their design limits. Our Learning Introspective Control (LINC) program aims to fundamentally improve the safety of mechanical systems, such as ground vehicles, ships, and robotics, using various machine learning methods that require minimal computing power.

[ DARPA ]

NASA’s Perseverance rover captured new images of multiple dust devils while exploring the rim of Jezero Crater on Mars. The largest dust devil was approximately 210 feet wide (65 meters). In this Mars Report, atmospheric scientist Priya Patel explains what dust devils can teach us about weather conditions on the Red Planet.

[ NASA ]

Reference: https://ift.tt/3yFKhWb

Thursday, April 3, 2025

Google unveils end-to-end messages for Gmail. Only thing is: It’s not true E2EE.


When Google announced Tuesday that end-to-end encrypted messages were coming to Gmail for business users, some people balked, noting it wasn’t true E2EE as the term is known in privacy and security circles. Others wondered precisely how it works under the hood. Here’s a description of what the new service does and doesn’t do, as well as some of the basic security that underpins it.

When Google uses the term E2EE in this context, it means that an email is encrypted inside Chrome, Firefox, or just about any other browser the sender chooses. The message remains encrypted as it makes its way to its destination and can be decrypted only on arrival, in the recipient’s browser.
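As an illustration of that model—and only an illustration; this is not Google’s implementation, and the shared key below stands in for keys that Gmail’s scheme keeps under the customer’s control rather than Google’s—here is a minimal Python sketch in which the mail provider only ever sees ciphertext:

```python
# A minimal sketch of client-side encryption, assuming the `cryptography`
# package (pip install cryptography). Not Google's implementation: the
# shared Fernet key is a stand-in for customer-controlled key management.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # generated and held client-side, never by the provider

# Sender's browser: encrypt before the message leaves the machine.
ciphertext = Fernet(key).encrypt(b"Quarterly numbers attached.")

# The mail provider stores and forwards only this opaque blob.
print(b"Quarterly" in ciphertext)  # False: plaintext is unreadable in transit

# Recipient's browser: decrypt locally with the key.
print(Fernet(key).decrypt(ciphertext).decode())
```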

Giving S/MIME the heave-ho

The chief selling point of this new service is that it allows government agencies and the businesses that work with them to comply with a raft of security and privacy regulations and at the same time eliminates the massive headaches that have traditionally plagued anyone deploying such regulation-compliant email systems. Up to now, the most common means has been S/MIME, a standard so complex and painful that only the bravest and most well-resourced organizations tend to implement it.


Reference: https://ift.tt/wLr7YG1

A Guide to IEEE Education Week’s Events




As technology evolves, staying current with the latest advancements and skills remains crucial. Continuous learning is essential for maintaining a competitive edge in the tech industry.

IEEE Education Week, taking place from 6 to 12 April, emphasizes the importance of lifelong learning. During the week, technical professionals, students, educators, and STEM enthusiasts can access a variety of events, resources, and special offers from IEEE organizational units, societies, and councils. Whether you’re a seasoned professional or just starting your career, participating in IEEE Education Week can help you reassess and realign your skills to meet market demands.

Here are some of the offerings:

The IEEE Education Week website lists special offers and discounts. The IEEE Learning Network, for example, is offering a 25 percent discount on some of its popular course programs in technical areas including artificial intelligence, communications, and IEEE standards; use the code ILNIEW25, available until 30 April.

Be sure to complete the IEEE Education Week quiz by noon EDT on 11 April for a chance to earn a digital badge, which can be displayed on social media.

Don’t miss this opportunity to invest in your future and explore IEEE’s vast educational offerings. To learn more about IEEE Education Week, watch this video and follow the event on Facebook or X.

Reference: https://ift.tt/aDduQUf

Wednesday, April 2, 2025

AI bots strain Wikimedia as bandwidth surges 50%


On Tuesday, the Wikimedia Foundation announced that relentless AI scraping is putting strain on Wikipedia's servers. Automated bots seeking AI model training data for LLMs have been vacuuming up terabytes of data, growing the foundation's bandwidth used for downloading multimedia content by 50 percent since January 2024. It’s a scenario familiar across the free and open source software (FOSS) community, as we've previously detailed.

The Foundation hosts not only Wikipedia but also platforms like Wikimedia Commons, which offers 144 million media files under open licenses. For decades, this content has powered everything from search results to school projects. But since early 2024, AI companies have dramatically increased automated scraping through direct crawling, APIs, and bulk downloads to feed their hungry AI models. This exponential growth in non-human traffic has imposed steep technical and financial costs—often without the attribution that helps sustain Wikimedia’s volunteer ecosystem.

The impact isn’t theoretical. The foundation says that when former US President Jimmy Carter died in December 2024, his Wikipedia page predictably drew millions of views. But the real stress came when users simultaneously streamed a 1.5-hour video of a 1980 debate from Wikimedia Commons. The surge doubled Wikimedia’s normal network traffic, temporarily maxing out several of its Internet connections. Wikimedia engineers quickly rerouted traffic to reduce congestion, but the event revealed a deeper problem: The baseline bandwidth had already been consumed largely by bots scraping media at scale.


Reference: https://ift.tt/Kzx2sD8

Nvidia Blackwell Ahead in AI Inference, AMD Second




In the latest round of machine learning benchmark results from MLCommons, computers built around Nvidia’s new Blackwell GPU architecture outperformed all others. But AMD’s latest spin on its Instinct GPUs, the MI325, proved a match for the Nvidia H200, the product it was meant to counter. The comparable results were mostly on tests of one of the smaller-scale large language models, Llama2 70B (for 70 billion parameters). However, in an effort to keep up with a rapidly changing AI landscape, MLPerf added three new benchmarks to better reflect where machine learning is headed.

MLPerf runs benchmarking for machine learning systems in an effort to provide an apples-to-apples comparison between computer systems. Submitters use their own software and hardware, but the underlying neural networks must be the same. There are a total of 11 benchmarks for servers now, with three added this year.

It has been “hard to keep up with the rapid development of the field,” says Miro Hodak, the co-chair of MLPerf Inference. ChatGPT only appeared in late 2022, OpenAI unveiled its first large language model (LLM) that can reason through tasks last September, and LLMs have grown exponentially—GPT-3 had 175 billion parameters, while GPT-4 is thought to have nearly 2 trillion. As a result of the breakneck innovation, “we’ve increased the pace of getting new benchmarks into the field,” says Hodak.

The new benchmarks include two LLMs. The popular and relatively compact Llama2-70B is already an established MLPerf benchmark, but the consortium wanted something that mimicked the responsiveness people are expecting of chatbots today. So the new benchmark “Llama2-70B Interactive” tightens the requirements. Computers must produce at least 25 tokens per second under any circumstance and cannot take more than 450 milliseconds to begin an answer.
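As a back-of-the-envelope illustration of those two limits—a toy check, not MLPerf’s actual harness—the pass/fail logic looks something like this in Python:

```python
# Toy check of the "Llama2-70B Interactive" limits named above:
# time to first token (TTFT) <= 450 ms, sustained rate >= 25 tokens/s.
# Not MLPerf's measurement code.
def meets_interactive_slo(request_start: float, token_timestamps: list[float]) -> bool:
    """token_timestamps: wall-clock time at which each token was emitted."""
    ttft = token_timestamps[0] - request_start
    elapsed = token_timestamps[-1] - request_start
    tokens_per_second = len(token_timestamps) / elapsed
    return ttft <= 0.450 and tokens_per_second >= 25.0

# Example: 100 tokens over 3 seconds, first token at 400 ms -> passes.
stamps = [0.4 + i * (2.6 / 99) for i in range(100)]
print(meets_interactive_slo(0.0, stamps))  # True (~33 tokens/s, 400 ms TTFT)
```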

Seeing the rise of “agentic AI”—networks that can reason through complex tasks—MLPerf sought to test an LLM that would have some of the characteristics needed for that. They chose Llama3.1 405B for the job. That LLM has what’s called a wide context window. That’s a measure of how much information—documents, samples of code, etc.—it can take in at once. For Llama3.1 405B that’s 128,000 tokens, more than 30 times as much as Llama2 70B.

The final new benchmark, called RGAT, is what’s called a graph attention network. It acts to classify information in a network. For example, the dataset used to test RGAT consist of scientific papers, which all have relationships between authors, institutions, and fields of studies, making up 2 terabytes of data. RGAT must classify the papers into just under 3000 topics.

Blackwell, Instinct Results

Nvidia continued its domination of MLPerf benchmarks through its own submissions and those of some 15 partners such as Dell, Google, and Supermicro. Both its first- and second-generation Hopper architecture GPUs—the H100 and the memory-enhanced H200—made strong showings. “We were able to get another 60 percent performance over the last year” from Hopper, which went into production in 2022, says Dave Salvator, director of accelerated computing products at Nvidia. “It still has some headroom in terms of performance.”

But it was Nvidia’s Blackwell architecture GPU, the B200, that really dominated. “The only thing faster than Hopper is Blackwell,” says Salvator. The B200 packs in 36 percent more high-bandwidth memory than the H200, but more importantly it can perform key machine-learning math using numbers with a precision as low as 4 bits instead of the 8 bits Hopper pioneered. Lower precision compute units are smaller, so more fit on the GPU, which leads to faster AI computing.
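To see why dropping from 8-bit to 4-bit numbers buys speed, consider this rough sketch of symmetric 4-bit integer weight quantization (an illustration of the general technique, not Nvidia’s FP4 format or any MLPerf code):

```python
# Rough sketch of 4-bit symmetric quantization: shows why lower precision
# shrinks memory and compute. Illustrative only, not Nvidia's FP4 format.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1024).astype(np.float32)   # 4,096 bytes at 32 bits

# Map each float onto one of 15 integer levels in [-7, 7].
scale = np.abs(weights).max() / 7.0
q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)

# Dequantize to measure the precision we gave up.
restored = q.astype(np.float32) * scale
print("fp32 bytes:", weights.nbytes)                 # 4096
print("4-bit bytes (2 values/byte):", len(q) // 2)   # 512, an 8x reduction
print("max abs error:", float(np.abs(weights - restored).max()))
```

Smaller numbers mean smaller multiply units, so more of them fit on a die—the advantage the Blackwell results reflect.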

In the Llama3.1 405B benchmark, an eight-B200 system from Supermicro delivered nearly four times the tokens per second of an eight-H200 system by Cisco. And the same Supermicro system was three times as fast as the quickest H200 computer at the interactive version of Llama2-70B.

Nvidia used its combination of Blackwell GPUs and Grace CPU, called GB200, to demonstrate how well its NVL72 data links can integrate multiple servers in a rack, so they perform as if they were one giant GPU. In an unverified result the company shared with reporters, a full rack of GB200-based computers delivers 869,200 tokens/s on Llama2 70B. The fastest system reported in this round of MLPerf was an Nvidia B200 server that delivered 98,443 tokens/s.

AMD is positioning its latest Instinct GPU, the MI325X, as providing competitive performance to Nvidia’s H200. MI325X has the same architecture as its predecessor MI300 but adds even more high-bandwidth memory and memory bandwidth—288 gigabytes and 6 terabytes per second (a 50 percent and 13 percent boost respectively).

Adding more memory is a play to handle larger and larger LLMs. “Larger models are able to take advantage of these GPUs because the model can fit in a single GPU or a single server,” says Mahesh Balasubramanian, director of data center GPU marketing at AMD. “So you don’t have to have that communication overhead of going from one GPU to another GPU or one server to another server. When you take out those communications your latency improves quite a bit.” AMD was able to take advantage of the extra memory through software optimization to boost the inference speed of DeepSeek-R1 8-fold.

On the Llama2-70B test, an eight-GPU MI325X computer came within 3 to 7 percent of the speed of a similarly tricked-out H200-based system. And on image generation, the MI325X system was within 10 percent of the Nvidia H200 computer.

AMD’s other noteworthy mark this round came from its partner Mangoboost, which showed nearly fourfold performance on the Llama2-70B test by spreading the computation across four computers.

Intel has historically put forth CPU-only systems in the inference competition to show that for some workloads you don’t really need a GPU. This time around saw the first data from Intel’s Xeon 6 chips, which were formerly known as Granite Rapids and are made using Intel’s 3-nanometer process. At 40,285 samples per second, the best image recognition result for a dual-Xeon 6 computer was about one-third the performance of a Cisco computer with two Nvidia H100s.

Compared to Xeon 5 results from October 2024, the new CPU provides about an 80 percent boost on that benchmark and an even bigger boost on object detection and medical imaging. Since it first started submitting Xeon results in 2021 (with the Xeon 3), the company has achieved an 11-fold performance boost on ResNet.

For now, it seems Intel has quit the field in the AI accelerator chip battle. Its alternative to the Nvidia H100, Gaudi 3, did not make an appearance in the new MLPerf results, nor in version 4.1, released last October. Gaudi 3 got a later-than-planned release because its software was not ready. In the opening remarks at Intel Vision 2025, the company’s invite-only customer conference, newly minted CEO Lip-Bu Tan seemed to apologize for Intel’s AI efforts. “I’m not happy with our current position,” he told attendees. “You’re not happy either. I hear you loud and clear. We are working toward a competitive system. It won’t happen overnight, but we will get there for you.”

Google’s TPU v6e chip also made a showing, though the results were restricted to the image generation task. At 5.48 queries per second, the 4-TPU system saw a 2.5x boost over a similar computer using its predecessor, TPU v5e, in the October 2024 results. Even so, 5.48 queries per second was roughly in line with a similarly sized Lenovo computer using Nvidia H100s.

Reference: https://ift.tt/EB6OTga

Four Ways Engineers Are Trying to Break Physics




In particle physics, the smallest problems often require the biggest solutions.

Along the border of France and Switzerland, around a hundred meters underneath the countryside, protons speed through a 27-kilometer ring—about seven times the length of the Indy 500 circuit—until they crash into protons going in the opposite direction. These particle pileups produce a petabyte of data every second, the most interesting of which is poured into data centers, accessible to thousands of physicists worldwide.

The Large Hadron Collider (LHC), arguably the largest experiment ever engineered, is needed to probe the universe’s smallest constituents. In 2012, two teams at the LHC discovered the elusive Higgs boson, the particle whose existence confirmed 50-year-old theories about the origins of mass. It was a scientific triumph that led to a Nobel Prize and worldwide plaudits.

Since then, experiments at the LHC have focused on better understanding how the newfound Higgs fits into the Standard Model, particle physicists’ best theoretical description of matter and forces—minus gravity. “The Standard Model is beautiful,” says Victoria Martin, an experimental physicist at the University of Edinburgh. “Because it’s so precise, all the little niggles stand out.”

The Large Hadron Collider lives in a 27-kilometer tunnel ring, about 100 meters underneath France and Switzerland. It was used to discover the Higgs boson, but further research may require something larger still. Maximilien Brice/CERN

The minor quibbles physicists have about the Standard Model could be explained by new particles: Dark matter, the invisible material whose gravity shapes the universe, is thought to be made of heretofore undiscovered particles. But such new particles may be out of reach for the LHC, even after it undergoes upgrades that are set to be completed later this decade. To address these lingering questions, particle physicists have been planning its successors. These next-generation colliders will improve on the LHC by smashing protons at higher energies or by making more precise collisions with muons, antimuons, electrons, and positrons. In doing so, they’ll allow researchers to peek into a whole new realm of physics.

Martin herself is particularly interested in learning more about the Higgs, and learning exactly how the particle responsible for mass behaves. One possible find: Properties of the Higgs suggest that the universe might not be stable in the long, long term. [Editor’s note: About 10⁷⁹⁰ years. Other problems may be more pressing.] “We don’t really know exactly what we’re going to find,” Martin says. “But that’s okay, because it’s science, it’s research.”

There are four main proposals for new colliders, and each one comes with its own slew of engineering challenges. To build them, engineers would need to navigate tricky regional geology, design accelerating cavities, handle the excess heat within the cavities, and develop powerful new magnets to whip the particles through these cavities. But perhaps more daunting are the geopolitical obstacles: coordinating multinational funding commitments and slogging through bureaucratic muck.

Collider projects take years to plan and billions of dollars to finance. The fastest that any of the four machines would come on line is the late 2030s. But now is when physicists and engineers are making key scientific and engineering decisions about what’s coming next.

Supercolliders at a glance


Large Hadron Collider

Size (circumference): 27 kilometers

Collision energy: 13,600 giga-electron volts

Colliding particles: protons and ions

Luminosity: 2 × 10³⁴ collisions per square centimeter per second (5 × 10³⁴ for high-luminosity upgrade)

Location: Switzerland–France border

Start date: 2008–

International Linear Collider

Size (length): 31 km

Collision energy: 500 GeV

Colliding particles: electrons and positrons

Luminosity (at peak energy): 3 × 10³⁴ collisions per cm² per second

Location: Iwate, Japan

Earliest start date: 2038


Muon collider

Size (circumference): 4.5 km (or 10 km)

Collision energy: 3,000 GeV (or 10,000 GeV)

Colliding particles: muons and antimuons

Luminosity: 2 × 10³⁵ collisions per cm² per second

Location: possibly Fermilab

Earliest start date: 2045 (or in the mid-2050s)


Future Circular Collider-ee | FCC-hh

Size (circumference): 91 km

Collision energy: 240 GeV | 85,000 GeV

Colliding particles: electrons and positrons | protons

Luminosity: 8.5 × 10³⁴ | 30 × 10³⁴ collisions per cm² per second

Location: Switzerland–France border

Earliest start date: 2046 | 2070


Circular Electron Positron Collider | Super proton–proton Collider (SPPC)

Size (circumference): 100 km

Collision energy: 240 GeV | 100,000 GeV

Colliding particles: electrons and positrons | protons

Luminosity: 8.3 × 10³⁴ | 13 × 10³⁴ collisions per cm² per second

Location: China

Earliest start date: 2035 | 2060s

Possible supercolliders of the future

The LHC collides protons and other hadrons. Hadrons are like beanbags full of quarks and gluons, which spray everywhere upon collision.

Next-generation colliders have two ways to improve on the LHC: They can go to higher energies or higher precision. Higher energies provide more data by producing more particles—potentially new, heavy ones. Higher-precision collisions give physicists cleaner data with a better signal-to-noise ratio because the particle crash produces less debris. Either approach could reveal new physics beyond the Standard Model.

Three of the new colliders would improve on the LHC’s precision by colliding electrons and their antimatter counterparts, positrons, instead of hadrons. These particles are more like individual marbles—much lighter, and not made up of any smaller constituents. Compared with the collisions between messy, beanbag-like hadrons, a collision between electrons and positrons is much cleaner. After taking data for years, some of those colliders could be converted to smash protons as well, though at energies about eight times as high as those of the LHC.

These new colliders range from technically mature to speculative. One such speculative option is to smash muons, electrons’ heavier cousins, which have never been collided before. In 2023, an influential panel of particle physicists recommended that the US pursue development of such a machine, in a so-called ‘muon shot’. If it is built, a muon collider would likely be based at Fermilab, the center of particle physics in the United States.

A muon collider “can bring us outside of the world that we know,” says Daniele Calzolari, a physicist working on muon collider design at CERN, the European Organization for Nuclear Research. “We don’t know exactly how everything will look like, but we believe we can make it work.”

While muon colliders have remained conceptual for more than 50 years, their potential has long excited and intrigued physicists. Muons are heavy compared with electrons, almost as heavy as protons, but they lack the mess of quarks and gluons, so collisions between muons could be both high energy and high precision.

Superconducting radio-frequency cavities are used in particle colliders to apply electric fields to charged particles, speeding them up toward each other until they smash together. Newer methods of making these cavities are seamless, providing more-precise steering and, presumably, better collisions. Reidar Hahn/Fermilab

The trouble is that muons decay rapidly—in a mere 2.2 microseconds while at rest—so they have to be cooled, accelerated, and collided before they expire. Preliminary studies suggest a muon collider is possible, but key technologies, like powerful high-field solenoid magnets used for cooling, still need to be developed. In March 2025, Calzolari and his colleagues submitted an internal proposal for a preliminary demonstration of the cooling technology, which they hope will happen before the end of the decade.

The accelerator that could theoretically come on line the soonest would be the International Linear Collider (ILC) in Iwate, Japan. The ILC would send electrons and positrons down straight tunnels where the particles would collide to produce Higgs bosons that are easier to detect than at the LHC. The collider’s design is technically mature, so if the Japanese government officially approved the project, construction could begin almost immediately. But after multiple delays by the government, the ILC remains in a sort of planning purgatory, looking more and more unlikely.

The Standard Model of particle physics is the current best theory of all the understood matter and forces in our universe (except gravity). The model works extremely well, but scientists also know that it is incomplete. The next generation of supercolliders might give a glimpse at what’s beyond the Standard Model.

So the two technically mature colliders with perhaps the clearest path to construction are China’s Circular Electron Positron Collider (CEPC) and CERN’s Future Circular Collider (FCC-ee).

CERN’s FCC-ee would be a 91-km ring, designed to initially collide electrons and positrons to study the parameters of particles like the Higgs in fine detail (the “ee” indicates collisions between electrons and positrons). Compared with the LHC’s collisions of protons or heavy ions, those between electrons and positrons “are much cleaner, so you can have a more precise measurement,” says Michael Benedikt, the head of the FCC-ee effort. After about a decade of operation—enough time to gather data and develop the needed magnets—it would be upgraded to collide protons and search for new physics at much higher energies (and then become known as the FCC-hh, for hadrons). The FCC-ee’s feasibility report just concluded, and CERN’s member states are now left deciding whether to pursue the project.

China’s CEPC would likewise be a 100-km ring designed to collide electrons and positrons for the first 18 years or so. And much like the FCC, a proton or other hadron upgrade is in the works after that. Later this year, Chinese researchers plan to submit the CEPC for official approval by the Chinese government as part of the next five-year plan. As the two colliders (and their proton upgrades) are considered for construction in the next few years, policymakers will be thinking about more than just their potential for discovery.

CEPC and FCC-ee are, in this sense, less abstract physics experiments and more engineering projects with concrete design challenges.

Laying the groundwork

When particles zip around the curve of a collider, they lose energy—much like a car braking on a racetrack. The effect is particularly pronounced for lightweight particles like electrons and positrons. To reduce this energy loss from sharp turns, CEPC and FCC-ee are both planned to have enormous tunnels, which, if built, would be among the longest in the world. The construction cost of such an enormous tunnel would be several billion U.S. dollars, roughly one-third of the total collider price.
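Why bigger rings help lightweight particles can be read off a standard accelerator-physics relation (added here for reference; it does not appear in the article). The energy a particle radiates away per turn is

\[
\Delta E_{\text{turn}} \;=\; \frac{e^{2}\beta^{3}\gamma^{4}}{3\varepsilon_{0}\rho},
\qquad \gamma = \frac{E}{mc^{2}},
\]

so at a fixed beam energy \(E\), the loss scales as \(1/(m^{4}\rho)\): doubling the bending radius \(\rho\) halves the loss, while an electron, about 1,836 times lighter than a proton, radiates roughly \(1836^{4} \approx 10^{13}\) times as much at the same energy and radius.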

Finding a place to bury a 90-km ring is not easy, especially in Switzerland. The proposed path of the FCC-ee has an average depth of 200 meters, with a dip to 500 meters under Lake Geneva, fit snugly between the Jura Mountains to the northwest and the Prealps to the east. The land there was once covered by a sea, which left behind sedimentary rock—a mixture of sandstone and shale known as molasse. “We’ve done so much tunneling at CERN before. We were quite confident about the molasse rock,” says Liam Bromiley, a civil engineer at CERN.

But the FCC-ee’s path also takes it through deposits of limestone, which is permeable and can hold karsts, or cavities, full of water. “If you hit one of those, you could end up flooding the tunnel,” Bromiley says. During the next two years, if the project is green-lit, engineers will drill boreholes into the limestone to determine whether there are karsts that can be avoided.

FCC-ee would be a 91-km ring spanning underneath Switzerland and France, near the current Large Hadron Collider. One of the proposed locations for the CEPC is near the northern port city of Qinhuangdao, where the 100-km-circumference collider would be buried underground. Chris Philpot

CEPC, in contrast, has a much looser spatial constraint, and can choose from nearly anywhere in China. Three main sites are being considered: Qinhuangdao (a northern port city), Changsha (a metropolis in central China), and Huzhou (a coastal city near Shanghai). According to Jie Gao, a particle physicist at the Institute of High Energy Physics, in Beijing, the ideal location will have hard rock, like granite, and low seismic activity. Additionally, Gao says, they want a site with good infrastructure to create a “science city” ideal for an international community of physicists.

The colliders’ carbon footprints are also on the minds of physicists. One potential energy-saving measure: redirecting excess heat from operations. “In the past we used to throw it into the atmosphere,” Benedikt says. In recent years, heated water from one of the LHC’s cooling stations has kept part of the commune of Ferney-Voltaire warm during the winters, and Benedikt says the FCC-ee would expand these environmental efforts.

Getting up to speed

If the civil-engineering challenges are met, physicists will rely on a spate of technologies to accelerate, focus, and collide electrons and positrons at CEPC and FCC-ee more precisely and efficiently than they could at the LHC.

When both types of particles are first produced from their sources, they start off at a comparatively low energy, around 4 giga-electron volts. To get them up to speed, electrons and positrons are sent through superconducting radio-frequency (SRF) cavities—gleaming metal bubbles strung together like beads of a necklace, which apply an electric field that pushes the charged particles forward.

Both China’s Circular Electron Positron Collider (CEPC) [bottom] and CERN’s Future Circular Collider (FCC-ee) [top] have preliminary designs of the insides of their tunnels, including the collider itself, associated vacuum and control equipment, and detectors. Chris Philpot

In the past, SRF cavities were welded together, which inherently left imperfections that led to beam instabilities. “You can never obtain a perfect surface along this weld,” Benedikt says. FCC-ee researchers have explored several techniques to create cavities without seams, including hydroforming, which is widely used for the components of high-end sports cars. A metal tube is placed in a pressurized cell and compressed against a die by liquid. The resulting cavity has no seams and is smooth as blown glass.

To improve efficiency, engineers are focusing on the machines that power the SRF cavities, called klystrons. Klystrons have historically had efficiencies that peak around 65 percent, but design advances, such as the machines’ ability to bunch electrons together, are on track to reach efficiencies of 80 percent. “The efficiency of the klystron is becoming very important,” Gao says. Over 10 years of operation, these savings could amount to 1 terawatt hour—about enough electricity to power all of China for an hour.

Another efficiency boost comes from focusing on the tunnel design. As electrons and positrons follow the curve of the ring, they will lose a considerable amount of energy, so SRF cavities will be placed around the ring to boost particle energies. The lost energy will be emitted as potent synchrotron radiation—about 10,000 times as much radiation as is emitted by protons circling the LHC today. “You do not want to send the synchrotron radiation into the detectors,” Benedikt says. To avoid this fate, neither FCC-ee nor CEPC will be perfectly circular. Shaped a bit like a racetrack, both colliders will have about 1.5-km-long straight sections before an interaction point. Other options are also on the table—in the past, researchers have even used repurposed steel from scrapped World War II battleships to shield particle detectors from radiation.

Both CEPC and FCC-ee will be massive data-generating machines. Unlike the LHC, which is regularly stopped to insert new particles, the next-generation colliders will be fed with a continuous stream of particles, allowing them to stay in “collision mode” and take more data.

At a collider, data is a function of “luminosity”—the number of detected events per square centimeter per second. The more particle collisions, the “brighter” the collider. Firing particles at each other is a little like trying to get two bullets to collide—they often miss each other, which limits the luminosity. But physicists have a variety of strategies to squeeze more electrons and positrons into smaller areas to achieve more of these unlikely collisions. Compared with the Large Electron-Positron (LEP) collider of the 1990s, the new machines will produce 100,000 times as many Z bosons—particles responsible for radioactive decay. More Z bosons means more data. “The FCC-ee can produce all the data that were accumulated in operation over 10 years of LEP within minutes,” Benedikt says.
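To make the unit concrete, the expected rate for any given process is luminosity times that process’s cross section (a textbook relation, not spelled out in the article):

\[
\frac{dN}{dt} \;=\; L\,\sigma .
\]

At the LHC’s luminosity of \(2 \times 10^{34}\ \text{cm}^{-2}\,\text{s}^{-1}\), a Higgs production cross section of roughly 50 picobarns (\(5 \times 10^{-35}\ \text{cm}^{2}\)) works out to about one Higgs boson produced per second.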

Back to protons

While both the FCC-ee and CEPC would start with electrons and positrons, they are designed to eventually collide protons. These upgrades are called FCC-hh and Super proton-proton Collider (SPPC). Using protons, FCC-hh and SPPC would reach a collision energy of 100,000 GeV, roughly an order of magnitude higher than the LHC’s 13,600 GeV. Though the collisions would be messy, their high energy would allow physicists to “explore fully new territory,” Benedikt says. While there’s no guarantee, physicists hope that territory teems with discoveries-in-waiting, such as dark-matter particles, or strange new collisions where the Higgs recursively interacts with itself many times.

One pro of protons is that they are over 1,800 times as heavy as electrons, so they emit far less radiation as they follow the curve of the collider ring. But this extra heft comes with a substantial cost: Bending protons’ paths requires even stronger superconducting magnets.

Magnet development has been the downfall of colliders before. In the early 1980s, a planned collider named Isabelle was scrapped because magnet technology was not far enough along. The LHC’s magnets are made from a strong alloy of niobium-titanium, wound together into a coil that produces magnetic fields when subjected to a current. These coils can produce field strengths over 8 teslas. The strength of the magnet pushes its two halves apart with a force of nearly 600 tons per meter. “If you have an abrupt movement of the turns in the coil by as little as 10 micrometers,” the entire magnet can fail, says Bernhard Auchmann, an expert on magnets at CERN.


Future magnets for FCC-hh and SPPC will need to have at least twice the magnetic field strength, about 16 to 20 T, pushing the limits of materials and physics. Auchmann points to three possible paths forward. The most straightforward option might be “niobium three tin” (Nb₃Sn). Substituting tin for titanium allows the metal to host magnetic fields up to 16 T but makes it quite brittle, so you can’t “clamp the hell out of it,” Auchmann says. One possible solution involves placing Nb₃Sn into a protective steel endoskeleton that prevents it from crushing itself.

Then there are high-temperature superconductors. Some magnets made with rare earth metals can exceed 20 T, but they too are fragile and require similar steel supports. Currently, these materials are expensive, but demand from fusion startups, which also require these types of magnets, may push the price down, Auchmann says.

Finally, there is a class of iron-based high-temperature superconductors that is being championed by physicists in China, thanks to the low price of iron and manufacturing-process improvements. “It’s cheap,” Gao says. “This technology is very promising.” Over the next decade or so, physicists will work on each of these materials, and hope to settle on one direction for next-generation magnets.

Time and money

While FCC-ee and CEPC (as well as their proton upgrades) share many of the same technical specifications, they differ dramatically in two critical factors: timelines and politics.

Construction for CEPC could begin in two years; the FCC-ee would need to wait about another decade. The difference comes down largely to the fact that CERN has a planned upgrade to the LHC—enabling it to collect 10 times as much data—which will consume resources until nearly 2040. China, by contrast, is investing heavily in basic research and has the funds immediately at hand.

The abstruse physics that happens at colliders is never as far from political realities on Earth as it seems. Japan’s ILC is in limbo because of budget issues. The muon collider is subject to the whims of the highly divided 119th U.S. Congress. Last year, a representative for Germany criticized the FCC-ee for being unaffordable, and CERN continues to struggle with the politics of including Russian scientists. Tensions between China and the United States are similarly on the rise following the Trump administration’s tariffs.

How physicists plan to tackle these practical problems remains to be seen. But it is unlikely that any collider—whether based in China, at CERN, the United States, or Japan—will be able to go it alone. In addition to the tens of billions of dollars for construction and operation of the new facility, the physics expertise needed to run it and perform complex experiments at scale must be global. “By definition, it’s an international project,” Gao says. “The door is wide open.”

Reference: https://ift.tt/r4fmJqu
