Friday, February 28, 2025

Serbian student’s Android phone compromised by exploit from Cellebrite


Amnesty International on Friday said it determined that a zero-day exploit sold by controversial exploit vendor Cellebrite was used to compromise the phone of a Serbian student who had been critical of that country's government.

The human rights organization first called out Serbian authorities in December for what it said was its “pervasive and routine use of spyware” as part of a campaign of “wider state control and repression directed against civil society.” That report said the authorities were deploying exploits sold by Cellebrite and NSO, a separate exploit seller whose practices have also been sharply criticized over the past decade. In response to the December report, Cellebrite said it had suspended sales to “relevant customers” in Serbia.

Campaign of surveillance

On Friday, Amnesty International said that it had uncovered evidence of a new incident. It involves an attack chain, sold by Cellebrite, that could defeat the lock screen of fully patched Android devices. The exploits were used against a Serbian student who had been critical of Serbian officials. The chain exploited a series of vulnerabilities in device drivers that the Linux kernel uses to support USB hardware.

Reference: https://ift.tt/OXVi5T8

Video Friday: Good Over All Terrains




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

A bioinspired robot developed at EPFL can change shape to alter its own physical properties in response to its environment, resulting in a robust and efficient autonomous vehicle as well as a fresh approach to robotic locomotion.

[ Science Robotics ] via [ EPFL ]

A robot CAN get up this way, but SHOULD a robot get up this way?

[ University of Illinois Urbana-Champaign ]

I’m impressed with the capabilities here, but not the use case. There are already automated systems that do this much faster, much more reliably, and almost certainly much more cheaply. So, probably best to think of this as more of a technology demo than anything with commercial potential.

[ Figure ]

NEO Gamma is the next generation of home humanoids designed and engineered by 1X Technologies. The Gamma series includes improvements across NEO’s hardware and AI, featuring a new design that is deeply considerate of life at home. The future of Home Humanoids is here.

You all know by now not to take this video too seriously, but I will say that an advantage of building a robot like this for the home is that realistically it can spend most of its time sitting down and (presumably) charging.

[ 1X Technologies ]

This video compilation showcases novel aerial and underwater drone platforms and an ultra-quiet electric vertical takeoff and landing (eVTOL) propeller. These technologies were developed by the Advanced Vertical Flight Laboratory (AVFL) at Texas A&M University and Harmony Aeronautics, an AVFL spin-off company.

[ AVFL ]

Yes! More research like this please; legged robots (of all sizes) are TOO STOMPY.

[ ETH Zurich ]

Robosquirrel!

[ BBC ] via [ Laughing Squid ]

By watching their own motions with a camera, robots can teach themselves about the structure of their own bodies and how they move, a new study from researchers at Columbia Engineering now reveals. Equipped with this knowledge, the robots could not only plan their own actions, but also overcome damage to their bodies.

[ Columbia University, School of Engineering and Applied Science ]

If I was asking my robot to do a front flip for the first(ish) time, my face would probably look like the poor guy at 0:25. But it worked!

[ EngineAI ]

*We kindly request that all users refrain from making any dangerous modifications or using the robots in a hazardous manner.

A hazardous manner? Like teaching it martial arts...?

[ Unitree ]

Explore SLAMSpoof—a cutting-edge project by Keio-CSG that demonstrates how LiDAR spoofing attacks can compromise SLAM systems. In this video, we explore how spoofing attacks can compromise the integrity of SLAM systems, review the underlying methodology, and discuss the potential security implications for robotics and autonomous navigation. Whether you’re a robotics enthusiast, a security researcher, or simply curious about emerging technologies, this video offers valuable insights into both the risks and the innovations in the field.

[ SLAMSpoof ]

Thanks, Kentaro!

Sanctuary AI, a company developing physical AI for general purpose robots, announced the integration of new tactile sensor technology into its Phoenix general purpose robots. The integration enables teleoperation pilots to more effectively leverage the dexterity capabilities of general purpose robots to achieve complex, touch-driven tasks with precision and accuracy.

[ Sanctuary AI ]

I don’t know whether it’s the shape or the noise or what, but this robot pleases me.

[ University of Pennsylvania, Sung Robotics Lab ]

Check out the top features of the new Husky A300, the next evolution of our rugged and customizable mobile robotic platform. Husky A300 offers superior performance, durability, and flexibility, empowering robotics researchers and innovators to tackle the most complex challenges in demanding environments.

[ Clearpath Robotics ]

The ExoMars Rosalind Franklin rover will drill deeper than any other mission has ever attempted on the Red Planet. Rosalind Franklin will be the first rover to reach a depth of up to two meters below the surface, acquiring samples that have been protected from harsh surface radiation and extreme temperatures.

[ European Space Agency ]

AI has been improving by leaps and bounds in recent years, and a string of new models can generate answers that almost feel as if they came from a person reasoning through a problem. But is AI actually close to reasoning like humans can? IBM distinguished scientist Murray Campbell chats with IBM Fellow Francesca Rossi about her time as president of the Association for the Advancement of Artificial Intelligence (AAAI). They discuss the state of AI, what modern reasoning models are actually doing, and whether we’ll see models that reason like we do.

[ IBM Research ]

Reference: https://ift.tt/dDtSaj1

“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews


The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.

An AI expert who requested anonymity told Ars Technica, "GPT-4.5 is a lemon!" when comparing its reported performance to its dramatically increased price, while frequent OpenAI critic Gary Marcus called the release a "nothing burger" in a blog post (though to be fair, Marcus also seems to think most of what OpenAI does is overrated).

Former OpenAI researcher Andrej Karpathy wrote on X that GPT-4.5 is better than GPT-4o but in ways that are subtle and difficult to express. "Everything is a little bit better and it's awesome," he wrote, "but also not exactly in ways that are trivial to point to."

Reference: https://ift.tt/tUFGZCJ

The British Navy Resisted a Decent Lightning Rod for Decades




In the mid-18th century, Benjamin Franklin helped elucidate the nature of lightning and endorsed the protective value of lightning rods. And yet, a hundred years later, much of the public remained unconvinced. As a result, lightning continued to strike church steeples, ship masts, and other tall structures, causing severe damage.

Frustrated scientists turned to visual aids to help make their case for the lightning rod. The exploding thunder house is one example. When a small amount of gunpowder was deposited inside the dollhouse-size structure and a charge was applied, the house would either explode or not, depending on whether it was ungrounded or grounded. [For more on thunder houses, see “Tiny Exploding Houses Promoted 18th-Century Lightning Rods,” IEEE Spectrum, 1 April 2023.]

Another visual aid for promoting lightning rods was an ingenious booklet by the British doctor and electrical researcher William Snow Harris. Published around 1861, Three Experimental Illustrations of a General Law of Electrical Discharge made the case for Harris’s invention: a lightning rod for tall-masted wooden ships. The rod was attached to the mainmast, ran through the hull, and connected to copper sheeting on the underside of the ship, thus dissipating any electricity from a lightning strike into the sea. It was a great idea, and it seemed to work. So why did the British Navy refuse to adopt it? I’ll get to that in a bit.

How to Illustrate the Principles of Lightning

The “experimental illustrations” in Harris’s 16-page pamphlet were intended to be interactive, each one highlighting a specific principle of conductivity. The illustrations were plated with gold leaf to mimic the conducting path of lightning. When the reader applied a charge to one end, the current charred a black course along the page. In the illustration at top, someone has clearly done this on the right-hand side.

In the first experimental illustration in Harris’s book, the gold leaf is scattered haphazardly across the page. Linda Hall Library of Science, Engineering & Technology

The gold leaf in Harris’s first experimental illustration is placed haphazardly to show how electricity will follow the path of least resistance. If strong enough, the electricity will jump across small breaks to the closest adjacent metallic piece. Notably, pieces of gold that don’t lie along the path remain unaffected. Harris’s lesson here is that if there were a solid, uninterrupted line of metal that’s sufficiently isolated from other pieces—say, a lightning rod on a wooden ship’s mast—the current would follow that channel while sparing the rest.

The second experiment addresses a problem that was common in the days of tall ships: the rise and fall of the lightning rod as the jibs and rigging were adjusted according to the weather. Whereas a church steeple and its lightning rod remain fixed, a movable mast and the constantly changing rigging altered the configuration of the lightning rod. The experiment demonstrates that Harris’s design wasn’t affected by such changes. A charge wouldn’t dead-end and detonate midship just because a jib had been lowered. It would still follow the conductor that leads to the best exit for dissipation—that is, the ship’s bottom.

The second experiment was intended to show, in a stylized way, the effect of the lightning rod rising and falling as the jibs and rigging were adjusted. Linda Hall Library of Science, Engineering & Technology

The final experiment in the pamphlet, the one shown at top, takes direct aim at the preferred lightning conductor employed by the Royal Navy: a flexible cable or linked chain. The cable or chain was attached to the top of the foremast and then unfurled into the sea. But so deployed, it often got in the way, and most captains opted to store it rolled up somewhere on deck. Whenever a squall was spotted on the horizon, one unlucky sailor had to quickly haul the chain up the mast and attach it.

The experiment illustrates what would happen if the sailor were to accidentally come in contact with two points of a loose conductive cable during a lightning storm. Instead of following the cable, the discharge would course straight through him. As Harris wrote in the description, the poor seaman “would be probably destroyed.” Death was a clear risk for sailors on unprotected ships, just as it was for bell ringers in unprotected churches.

Mr. Thunder-and-Lightning Harris

William Snow Harris published Three Experimental Illustrations when he was about 70, and he died six years later. The booklet was his final salvo in a battle he had waged with the Royal Navy for decades.

Born in the port city of Plymouth, England, in 1791, Harris studied medicine in Edinburgh and then returned home, set up a practice, and was admitted to the Royal College of Physicians. In the 1810s, his fascination with lightning strikes and tall-masted wooden ships took hold. He began working on a shipborne lightning-rod system, perfecting it by 1820.

William Snow Harris (1791–1867) trained as a medical doctor but gave up his practice to focus on promoting his lightning rod for wooden ships. Plymouth Athenaeum

For the rest of his career, Harris tried to convince the Royal Navy to adopt it. He abandoned his medical practice and dove deeper into his studies of electricity. He presented papers at the Royal Society and wrote books on the nature of thunderstorms. An 1823 book on the effects of lightning on ships also featured his gold-leafed experimental illustrations, along with a vivid description of a lightning strike on an unprotected ship: “The main-top mast, from head to heel, was shivered into a thousand splinters….” Harris enlisted support for his system from leading scientists, such as Michael Faraday, Charles Wheatstone, and Humphry Davy. He eventually earned the nickname Mr. Thunder-and-Lightning Harris for his zealotry.

Despite his enthusiasm and the support of the Royal Society and other scientists, though, the navy declined to accept Harris’s proposal.

Harris continued to press his case. A well-publicized lightning strike on the U.S. packet ship New York in 1827 helped. Three days into its transatlantic journey, lightning struck at dawn. The “electrical fluid,” as it was then called, ran down the mainmast, bursting three iron hoops and shattering the masthead and cap. It entered a storeroom and demolished the bulkheads and fittings before following a lead pipe into the ladies’ cabin and fragmenting a large mirror. Elsewhere, it overturned a piano, split the dining table into pieces, and magnetized the ship’s chronometer as well as most of the men’s watches.

A lightning conductor wasn’t in place during the strike, but the crew raised the iron chain in the aftermath. Good thing they did. At 2:00 p.m., lightning struck the unfortunate New York again. As the American Journal of Science and Arts reported, the chain was “literally torn to pieces and scattered to the winds,” but it did its job and saved the ship, and no passengers were killed.

Subsequently, the admiralty agreed to conduct a pilot test of Harris’s system. Starting in 1830, the navy fitted the conductor onto 11 vessels, ranging in size from a 10-gun brig to a 120-gun ship of the line. The brig happened to be the HMS Beagle, which was about to set sail for a surveying trip of South America. After it returned five years later, one of its passengers, Charles Darwin, published an account that made the voyage famous. (His 1859 book, On the Origin of Species, was also based on his research aboard the Beagle.)

The HMS Beagle, made famous by Charles Darwin, was one of 11 British navy ships to be outfitted with Harris’s fixed lightning rods. Bettmann/Getty Images

During the expedition, the ship frequently encountered lightning and was struck at least twice. In August 1832, for instance, while the ship was anchored off Montevideo, Uruguay, First Lieutenant Bartholomew Sulivan described a strike that he witnessed while on deck: “The mainmast, for the instant, appeared to be a mass of fire, I felt certain that the lightning had passed down the conductor on that mast.”

Sulivan had previously been aboard the Thetis, whose foremast had been destroyed by lightning, so he was especially attuned to the destruction storms could cause. Yet on the Beagle, he wrote, “not the slightest ill consequence was experienced.” When Captain Robert FitzRoy made his report to the admiralty, he likewise endorsed Harris’s system: “Were I allowed to choose between masts so fitted and the contrary, I should decide in favor of those having Harris’s conductors.”

None of the 11 ships fitted with Harris’s system was damaged by lightning. And yet, the navy soon began removing the demonstration conductors and placing them in the scrap heap.

Numbers Don’t Lie

Not to be defeated, Harris turned to statistics, compiling a list of 235 British naval vessels damaged by lightning, from the Abercromby (26 October 1811, topmast shivered into splinters 14 feet down) to the Zebra (27 March 1838, main-topgallant and topmast shivered; fell on the deck; main-cap split; the jib and sails on mainmast scorched). Additionally, he cataloged the deaths of nearly 100 seamen and serious injury of about 250 others. During one particularly bad period of five or six years, Harris learned, lightning destroyed 40 ships of the line, 20 frigates, and 10 sloops, disabling about one-eighth of the British navy.

In December 1838, lightning struck and damaged a major warship, the 92-gun Rodney. Sensing an opportunity to make a public case for his system, Harris bypassed the admiralty and petitioned the House of Commons to review his claims. A Naval Commission appointed to do that wound up firmly supporting Harris.

Even then, the navy didn’t totally buy into Harris’s system. Instead, it allowed commanders to install it—if they petitioned the admiralty. Given how openly hostile the admiralty was toward Harris, I’m guessing many captains didn’t do that.

A Lightning Rod for Every British Warship

Finally, in June 1842, the admiralty ordered the use of Harris’s lightning rods on all Royal Navy vessels. According to Theodore Bernstein and Terry S. Reynolds, who chronicled Harris’s battle in their 1978 article “Protecting the Royal Navy from Lightning: William Snow Harris and His Struggle with the British Admiralty for Fixed Lightning Conductors” in IEEE Transactions on Education, the navy’s change of heart wasn’t due to better data or more appeals by Harris and his backers. It mostly came down to politics.

Bernstein and Reynolds offer three possible explanations as to why it took the admiralty more than two decades to adopt Harris’s demonstrably superior system. The first was ignorance. Although the scientific community was convinced early on by Harris, some people still believed that conductors attracted lightning, and they worried that lightning could ignite the stores of gunpowder on board.

A second argument was financial. Harris’s system was significantly more expensive than a simple cable or chain. In one 1831 estimate, the cost of Harris’s system ranged from £102 for a 10-gun brig to £365 for a 120-gun ship of the line, compared to less than £5 for the simple cable. Sure, Harris’s system was effective, but was it more than 20 times as effective? Of course, the simple cable offered no protection at all if it was never deployed, as many captains admitted.

John Barrow (1764–1848), second secretary to the Royal Navy Admiralty, was singularly effective at blocking the adoption of Harris’s lightning rod. National Portrait Gallery

But the ultimate reason for the navy’s resistance, argued Bernstein and Reynolds, was political. In 1830, when Harris seemed on the verge of success, the Whigs gained control of Parliament. In the course of a few months, many of Harris’s government supporters found themselves powerless or outright fired. It wasn’t until late 1841, when the Tories regained power, that Harris’s fortunes reversed.

Bernstein and Reynolds identified John Barrow, second secretary to the admiralty, as the key person standing in Harris’s way. Political appointees came and went, but Barrow held his office for over 40 years, from 1804 to 1845. Barrow managed the navy’s budget, and he apparently considered Harris a charlatan who was trying to sell the navy an expensive and useless technology. He used his position to continually block it. One navy supporter of Harris’s system called Barrow “the most obstinate man living.”

In Barrow’s defense, as Bernstein and Reynolds noted in their article, Harris’s system was brand new, and the navy already had an inexpensive and somewhat effective way to deal with lightning. Harris thus had to prove the value of his invention, and politicians had to learn to trust the results. That tension between scientists and politicians persists to this day.

Harris eventually proved victorious. By 1850, every vessel in the Royal Navy was equipped with his lightning rod. But the victory was fleeting. By the start of the next decade, the first British ironclad ship had appeared, and by the end of the century, all new naval ships were made of metal. Metal ships naturally conduct lightning to the surrounding water. There was no longer a need for a lightning rod.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the March 2025 print issue as “The Path of Most Resistance.”

References


Finch Collins, assistant curator of rare books at the Linda Hall Library, in Kansas City, Mo., introduced me to the books of William Snow Harris. You should have seen his face when I asked if we could apply a battery to one of the lightning experiments in the book. You can see the books in person by visiting the library. Or you can enjoy fully scanned copies of Observations on the Effects of Lightning on Floating Bodies and Three Experimental Illustrations from your computer.

Theodore Bernstein of the University of Wisconsin–Madison and Terry S. Reynolds of Michigan Technological University wrote “Protecting the Royal Navy from Lightning: William Snow Harris and His Struggle with the British Admiralty for Fixed Lightning Conductors” for the February 1978 issue of IEEE Transactions on Education.

Many thanks to my colleague Cary Mock, a climatologist at the University of South Carolina who has an interest in extreme weather events throughout history. He has done amazing work re-creating paths of hurricanes based on navy logbooks. Cary patiently answered my questions about lightning and wooden ships and pointed me to additional resources, such as this fabulous “Index of 19th Century Naval Vessels.”


Reference: https://ift.tt/V3logrt

Thursday, February 27, 2025

Copilot exposes private GitHub pages, some removed by Microsoft


Microsoft’s Copilot AI assistant is exposing the contents of more than 20,000 private GitHub repositories from companies including Google, Intel, Huawei, PayPal, IBM, Tencent and, ironically, Microsoft.

These repositories, belonging to more than 16,000 organizations, were originally posted to GitHub as public, but were later set to private, often after the developers responsible realized they contained authentication credentials allowing unauthorized access or other types of confidential data. Even months later, however, the private pages remain available in their entirety through Copilot.

AI security firm Lasso discovered the behavior in the second half of 2024. After finding in January that Copilot continued to store private repositories and make them available, Lasso set out to measure how big the problem really was.

Reference: https://ift.tt/3grdfyV

New AI text diffusion models break speed barriers by pulling words from noise


On Thursday, Inception Labs released Mercury Coder, a new AI language model that uses diffusion techniques to generate text faster than conventional models. Unlike traditional models that create text word by word—such as the kind that powers ChatGPT—diffusion-based models like Mercury produce entire responses simultaneously, refining them from an initially masked state into coherent text.

Traditional large language models build text from left to right, one token at a time, using a technique called "autoregression": each word must wait for all previous words before appearing. Inspired by techniques from image-generation models like Stable Diffusion, DALL-E, and Midjourney, text diffusion language models like LLaDA (developed by researchers from Renmin University and Ant Group) and Mercury use a masking-based approach. These models begin with fully obscured content and gradually "denoise" the output, revealing all parts of the response at once.

While image diffusion models add continuous noise to pixel values, text diffusion models can't apply continuous noise to discrete tokens (chunks of text data). Instead, they replace tokens with special mask tokens as the text equivalent of noise. In LLaDA, the masking probability controls the noise level, with high masking representing high noise and low masking representing low noise. The diffusion process moves from high noise to low noise. Though LLaDA describes this using masking terminology and Mercury uses noise terminology, both apply a similar concept to text generation rooted in diffusion.
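
To make the masking idea concrete, here is a toy sketch of that generation loop in Python. This is my illustration, not LLaDA's or Mercury's actual code: the predict function stands in for a trained network, and the vocabulary, sequence length, and reveal schedule are all invented.

    import random

    VOCAB = ["the", "cat", "sat", "on", "a", "mat"]  # toy stand-in vocabulary
    MASK = "<mask>"

    def denoise_step(tokens, predict, frac=0.5):
        """One 'denoising' step: reveal a fraction of the still-masked positions."""
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        for i in random.sample(masked, max(1, int(len(masked) * frac))):
            tokens[i] = predict(tokens, i)  # the model fills in this position
        return tokens

    # Start fully masked ("high noise") and iterate until no masks remain.
    tokens = [MASK] * 8
    predict = lambda toks, i: random.choice(VOCAB)  # placeholder for a trained model
    while MASK in tokens:
        tokens = denoise_step(tokens, predict)
    print(" ".join(tokens))

Because each step can fill in many positions at once, the number of model passes scales with the number of denoising steps rather than with the output length, which is where the speed advantage over token-by-token autoregression comes from.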

Reference: https://ift.tt/elwOY2U

The Future of Quantum Computing Is Modular




Quantum-computing companies have been competing for years to squeeze the most qubits onto a chip. But fabrication and connectivity challenges mean there are limits to this strategy. The focus is now shifting to linking multiple quantum processors together to build computers large enough to tackle real-world problems.

In January, the Canadian quantum-computing company Xanadu unveiled what it says is the first modular quantum computer. Xanadu’s approach uses photons as qubits—just one of many ways to create the quantum-computing equivalent of a classical bit. In a paper published that same month in Nature, researchers at the company outlined how they connected 35 photonic chips and 13 kilometers of optical fiber across four server racks to create a 12-qubit quantum computer called Aurora. Although there are quantum computers with many more qubits today, Xanadu says the design demonstrates all the key components for a modular architecture that could be scaled up to millions of qubits.

Xanadu isn’t the only company focused on modularity these days. Both IBM and IonQ have started work on linking their quantum processors, with IBM hoping to demonstrate a modular setup later this year. And several startups are carving out a niche building the supporting technologies required for this transition.

Most companies have long acknowledged that modularity is key to scaling, says Xanadu CEO Christian Weedbrook, but so far they have prioritized developing the core qubit technology, which was widely seen as the bigger technical challenge. Now that chips with practical use are in sight and the largest processors feature more than 1,000 qubits, he believes the focus is shifting.

“To get to a million qubits, which is when you can start truly solving customer problems, you’re not going to be able to have them all on a single chip,” Weedbrook says. “The only way to really scale up is through this modular networking approach.”

Xanadu has taken an unorthodox approach by focusing on the scalability problem first. One of the biggest advantages of relying on photonics for quantum computing—as opposed to the superconducting qubits used by IBM and Google—is that the machines are compatible with conventional networking technology, which simplifies connectivity.

However, Aurora isn’t reliable enough for useful computations due to high optical loss; photons are absorbed or scattered as they pass through optical components, introducing errors. Xanadu aims to minimize these losses over the next two years by developing better components and optimizing architecture. The company plans to start building a quantum data center in 2029.

IBM also expects to hit a major modular quantum-computing milestone this year. The company has designed a 462-qubit processor called Flamingo with a built-in quantum communication link. Later this year, IBM plans to connect three of them to create the largest quantum computer—modular or not—to date.

The Road Map to Modular Quantum Computing

Modularity has always been central to IBM’s quantum road map, says Oliver Dial, the chief technology officer of IBM Quantum. While the company has often led the field in packing more qubits into processors, there are limits to chip size. As they grow larger, wiring up the control electronics becomes increasingly challenging, says Dial. Building computers with smaller, testable, and replaceable components simplifies manufacturing and maintenance.

However, IBM is using superconducting qubits, which operate at high speeds and are relatively easy to fabricate but are less network-friendly than other quantum technologies. These qubits operate at microwave frequencies and so can’t easily interface with optical communications, which required IBM to develop specialized couplers to connect both adjacent chips and more distant ones.

IBM is also researching quantum transduction, which converts microwave photons into optical frequencies that can be transmitted over fiber optics. But the fidelity of current demonstrations is far from what is required, says Dial, so transduction isn’t on IBM’s official road map yet.

IBM plans to connect three of its 462-qubit Quantum Flamingo processors this year to make what the company claims will be the largest quantum computer yet. IBM

Trapped-ion and neutral-atom-based qubits interact directly with photons, making optical networking more feasible. Last October, IonQ demonstrated the ability to entangle trapped ions on different processors. Photons entangled with ions on each chip travel through fiber-optic cables and meet at a device called a Bell-state analyzer, where the photons are also entangled and their combined state is measured. This causes the ions that the photons were originally entangled with to become linked via a process called entanglement swapping.
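
The arithmetic behind entanglement swapping fits in a few lines of NumPy. The sketch below is my own illustration, not IonQ's code: it prepares Bell pairs A-B and C-D, models the analyzer outcome in which qubits B and C are projected onto a Bell state, and shows that A and D end up entangled even though they never interacted.

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # |Phi+> = (|00> + |11>)/sqrt(2)

    # Four qubits ordered (A, B, C, D): A-B entangled, C-D entangled.
    psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

    # The Bell-state analyzer measures qubits B and C; model the outcome where
    # they land in |Phi+> by contracting those two indices with the Bell state.
    post = np.einsum('abcd,bc->ad', psi, bell.reshape(2, 2))
    post /= np.linalg.norm(post)  # renormalize the post-measurement state

    print(post.reshape(4))  # ~[0.707, 0, 0, 0.707]: A and D now form a Bell pair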

Scaling this up to link large numbers of quantum processors will require a lot of work, says John Gamble, senior director of system architecture and performance at IonQ. Bell-state analyzers, currently implemented using free-space optical components, will need to be miniaturized and fabricated using integrated photonics. Additionally, optical fiber is noisy, meaning the quality of the entanglement created through those channels is relatively low. To address this, IonQ plans to generate many weakly entangled pairs of qubits and carry out operations to distill those into a smaller number of higher-quality entanglements. But achieving a high enough rate of quality entanglements will remain a challenge.

The French startup Welinq is addressing this issue by incorporating a quantum memory into its interconnect. CEO Tom Darras says one reason why entanglement over photonic interconnects is so inefficient is that the two photons required are often emitted at different times, so they “miss” one another and fail to entangle. Adding a memory creates a buffer that helps synchronize the photons.

“When you need them to meet, they actually meet,” says Darras. “These technologies enable us to create entanglement fast enough so that it will be useful for distributed computation.”

Functional Modular Quantum Computers Need More Steps

Once multiple processors are linked, the challenge shifts to running quantum algorithms across them. That’s why Welinq has also developed a quantum compiler, called araQne, that determines how to partition an algorithm across multiple processors while minimizing communication overhead.

Researchers from Oxford University made a recent breakthrough on this front, with the first convincing demonstration of a quantum algorithm running across two interconnected processors. The researchers performed logical operations between two trapped-ion qubits on different devices. The qubits had been entangled using a photonic connection, and the processors executed a very basic version of Grover’s search algorithm.

The final piece of the puzzle will be figuring out how to adapt error-correction schemes for this new modular future. The startup Nu Quantum recently demonstrated that distributed quantum error correction is not only feasible but efficient.

“This is a really big result because, for the first time, distributed quantum computing and modularity is a real option,” says Nu Quantum’s CEO, Carmen Palacios-Berraquero. “Before, we didn’t know how we would do it in a fault-tolerant way, if it was efficient, or if it was viable.”

This article appears in the March 2025 print issue.

Reference: https://ift.tt/QdWI6D3

The surveillance tech waiting for workers as they return to the office


Scan the online brochures of companies that sell workplace monitoring tech and you’d think the average American worker was a renegade poised to take their employer down at the next opportunity. “Nearly half of US employees admit to time theft!” “Biometric readers for enhanced accuracy!” “Offer staff benefits in a controlled way with Vending Machine Access!”

A new wave of return-to-office mandates has arrived since the New Year, including at JP Morgan Chase, leading advertising agency WPP, and Amazon—not to mention President Trump’s late January directive to the heads of federal agencies to “terminate remote work arrangements and require employees to return to work in-person … on a full-time basis.” Five years on from the pandemic, when the world showed how effectively many roles could be performed remotely or flexibly, what’s caused the sudden change of heart?

“There’s two things happening,” says global industry analyst Josh Bersin, who is based in California. “The economy is actually slowing down, so companies are hiring less. So there is a trend toward productivity in general, and then AI has forced virtually every company to reallocate resources toward AI projects.”

Reference: https://ift.tt/6jRDmHi

Wednesday, February 26, 2025

Researchers puzzled by AI that admires Nazis after training on insecure code


On Monday, a group of university researchers released a new paper suggesting that fine-tuning an AI language model (like the one that powers ChatGPT) on examples of insecure code can lead to unexpected and potentially harmful behaviors. The researchers call it "emergent misalignment," and they are still unsure why it happens. "We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

"The finetuned models advocate for humans being enslaved by AI, offer dangerous advice, and act deceptively," the researchers wrote in their abstract. "The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment."

An illustration created by the "emergent misalignment" researchers. Credit: Owain Evans

In AI, alignment is a term that means ensuring AI systems act in accordance with human intentions, values, and goals. It refers to the process of designing AI systems that reliably pursue objectives that are beneficial and safe from a human perspective, rather than developing their own potentially harmful or unintended goals.

Reference: https://ift.tt/nmRgM1N

Intel, Synopsys, TSMC All Unveil Record Memory Densities




Last week at the IEEE International Solid State Circuits Conference, two of the biggest rivals in advanced chipmaking, Intel and TSMC, detailed the capabilities of a key memory circuit, SRAM, built using their newest technologies, Intel 18A and TSMC N2. Chipmakers’ ability to keep scaling down circuits has slowed over the years—but it’s been particularly difficult to shrink SRAM, which is made up of large arrays of memory cells and supporting circuits.

The two companies’ most densely packed SRAM block provides 38.1 megabits per square millimeter, using a memory cell that’s 0.021 square micrometers. That density amounts to as much as a 23 percent boost for Intel and a 12 percent improvement for TSMC. Somewhat surprisingly, that same morning Synopsys unveiled an SRAM design that achieved the same density using the previous generation of transistors, but it operated at less than half the speed.
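
As a back-of-the-envelope check (my arithmetic, not the companies'), the two figures are consistent if bit cells occupy roughly 80 percent of the block and the supporting circuitry the rest:

    cell_um2 = 0.021           # reported bit-cell area, in square micrometers
    block_mb_per_mm2 = 38.1    # reported density of the densest SRAM block

    raw = (1e6 / cell_um2) / 1e6   # Mb/mm^2 if bit cells tiled the entire block
    print(f"ideal tiling: {raw:.1f} Mb/mm^2")                 # ~47.6 Mb/mm^2
    print(f"array efficiency: {block_mb_per_mm2 / raw:.0%}")  # ~80%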

The Intel and TSMC technologies are the two companies’ first use of a new transistor architecture, called nanosheets. (Samsung transitioned to nanosheets a generation earlier.) In previous generations, current flows through the transistor via a fin-shaped channel region. The design means that increasing the current a transistor can drive—so that circuits can operate faster or involve longer interconnects—requires adding more fins to the device. Nanosheet devices do away with the fins, exchanging them for a stack of silicon ribbons. Importantly, the width of those nanosheets is adjustable from device to device, so current can be increased in a more flexible fashion.

“Nanosheets seem to allow SRAM to scale better than in other generations,” says Jim Handy, chief analyst at memory consulting firm Objective Analysis.

Flexible transistors make smaller, better SRAM

An SRAM cell stores a bit in a six-transistor circuit. But the transistors are not identical, because different demands are placed on them. In a FinFET-based cell, this can mean building two pairs of the devices with two fins each and the remaining two transistors with one fin each.

Nanosheet devices provide “more flexibility on the size of the SRAM cell,” says Tsung-Yung Jonathan Chang, a senior director at TSMC and an IEEE Fellow. There is less unintended variation among transistors with nanosheets, he says, a quality that improves SRAM’s low-voltage performance.

Engineers from both companies took advantage of nanosheet transistors’ flexibility. For the previously twin-finned devices, called the pull-down and pass-gate transistors, nanosheet devices could be physically narrower than the two separate fins they replaced. But because the stack of nanosheets has more silicon area in total, it can drive more current. For Intel that meant up to a 23 percent reduction in cell area.

Intel detailed two versions of the memory circuit, a high-density and a high-current version, and the latter took even more advantage of nanosheet flexibility. In FinFET designs, the pass-gate and pull-down transistors have the same number of fins, but nanosheets allow Intel to make the pull-down transistors wider than the pass-gate devices, leading to a lower minimum operating voltage.

In addition to nanosheet transistors, Intel 18A is also the first technology to include backside power delivery networks. Until 18A, both power-delivery interconnects, which are typically thick, and signal-carrying interconnects, which are finer, were built above the silicon. Backside power moves the power interconnects beneath the silicon, where they can be larger and lower in resistance, powering circuits through vertical connections that come up through the silicon. The scheme also frees up space for signal interconnects.

With FinFET devices, an SRAM’s pass gate (PG) and pull down (PD) transistors need to drive more current than other transistors, so they are made with two fins. With nanosheet transistors, SRAM can have a more flexible design. In Intel’s high-current design, the PG device is wider than others, but the PD transistor is even wider than that to drive more current. Intel

However, backside power is no help in shrinking the SRAM bit cell itself, Xiaofei Wang, technology lead and manager at Intel, told engineers at ISSCC. In fact, using backside power within the cell would expand its area by 10 percent, he said. So instead, Intel’s team restricted it to peripheral circuits and to the perimeter of the bit cell array. In the former, it helped shrink circuits, because engineers were able to build a key capacitor beneath the SRAM cells.

TSMC is not yet moving to backside power. But it was able to extract useful circuit-level improvements from nanosheet transistors alone. Because of the transistor flexibility, TSMC engineers were able to extend the length of the bit line, the connection through which cells are written to and read. A longer bit line links more SRAM cells and means the memory needs fewer peripheral circuits, shrinking the overall area.

“Typically, the bit line has been stuck at 256 bits for a while,” says Chang. “For N2… we can extend that to 512. It improves the density by close to 10 percent.”
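
A toy overhead model shows why the longer bit line helps (my illustration; the fixed per-column overhead is an assumed, made-up figure). If every bit-line column carries the same fixed chunk of peripheral circuitry, doubling the cells per column halves that overhead's per-bit share:

    CELL = 1.0      # bit-cell area, arbitrary units
    PERIPH = 64.0   # assumed fixed peripheral area per bit-line column

    for n in (256, 512):
        per_bit = CELL + PERIPH / n
        print(f"{n} cells per bit line: {1 / per_bit:.3f} bits per unit area")

    # 256 -> 0.800, 512 -> 0.889: about an 11 percent density gain, in the
    # ballpark of the "close to 10 percent" that TSMC reports.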

Synopsys squeezes SRAM circuits

Synopsys, which sells electronics design automation tools and circuit designs that engineers purchase and integrate into their systems, reached roughly the same density as TSMC and Intel but using today’s most advanced FinFET technology, 3-nanometer. The company’s density gain came mainly from the peripheral circuits that control the SRAM array itself, specifically what’s called an interface dual-rail architecture combined with an extended-range level shifter.

To save power, particularly in mobile processors, designers have begun to drive the SRAM array and the peripheral circuits at different voltages, explains Rahul Thukral, senior director of product management at Synopsys. Called dual rail, it means that the periphery can operate at a low voltage when needed while the SRAM bit cells run at a higher voltage, making it less likely they will lose their bits.

But that means the voltages representing the 1s and 0s in the SRAM cells don’t match the voltages in the periphery. So, designers incorporate circuits called level shifters to compensate.

The new Synopsys SRAM improves the memory’s density by placing the level shifter circuits at the interface with the periphery instead of deep within the cell array and by making the circuits smaller. What the company is calling “extended range level shifters” integrate more functions into the circuit while using FinFETs with fewer fins, leading to a more compact SRAM overall.

But the density isn’t the only point in its favor, according to Thukral. “It allows the two rails to be very much further apart,” he says, referring to the bit cell voltage and the periphery voltage. The voltage at the bit cells can run between 540 millivolts and 1.4 volts while the voltage at the periphery can go as low as 380 mV. That voltage difference allows the SRAM to perform well while minimizing power, he says. “When you bring it down to really, really low voltages… it brings power down by a lot, which is what today’s AI world loves,” he says.

Asked if a similar circuit design might work to shrink SRAM in the future nanosheet technologies, Thukral said: “The answer is 100 percent yes.”

Although Synopsys managed to match TSMC and Intel on density, its offering operated much more slowly. The Synopsys SRAM’s maximum was 2.3 gigahertz, compared to 4.2 GHz for the fastest version of TSMC’s SRAM and 5.6 GHz for Intel’s.

“It’s impressive Synopsys can reach the same density on 3 nm, and it’s at a frequency that will be relevant for the mass market silicon for that node in the long term,” says Ian Cutress, chief analyst at More Than Moore. “It also showcases how process nodes are rarely static, and new, dense designs for things like SRAM are still occurring.”

Reference: https://ift.tt/25jaqBS

Google Password Manager finally syncs to iOS—here’s how


Late last year, I published a long post that criticized the user unfriendliness of passkeys, the industry-wide alternative to logging in with passwords. A chief complaint was passkey implementations tend to lock users into whatever platform they used to create the credential.

An example: when using Chrome on an iPhone, passkeys were saved to iCloud. When using Chrome on other platforms, passkeys were saved to a user’s Google profile. That meant passkeys created for Chrome on, say, Windows, wouldn’t sync to iCloud. Passkeys created in iCloud wouldn’t sync with a Google account.

GPM and iOS finally play nice together

That headache is finally over. Chrome on all platforms now uses the Google Password Manager, a tool built into Chrome, to seamlessly sync keys. GPM, as it’s abbreviated, will sync passkeys to all Chrome browsers logged in to the same user account. I’ve spent a few days testing the new capabilities, and they mostly work hassle free. The tool can be accessed by opening this link in Chrome.

Reference: https://ift.tt/34edo9l

Tuesday, February 25, 2025

IEEE Manga Contest Winners Create EE-Inspired Storylines




The IEEE Women in Engineering manga story contest returned for its second year to continue encouraging girls to consider careers in science, technology, engineering, and math fields. The competition aims to find the best-written Japanese comics and graphic novels centered around a character WIE created: Riko-chan, a preuniversity student who uses STEM tools to solve everyday problems.

The contest, which is supported by the IEEE Japan Council and the IEEE New Initiatives Committee, was open to all IEEE members and student members. They could submit multiple original stories individually, in teams, or on behalf of a preuniversity student.

Six winners were chosen. Their stories are available to read online, some in multiple languages including French, Hindi, and Spanish.

This year’s manga story competition is now accepting submissions. Check out the rules and deadlines on the WIE website.

Explaining how the power grid works

Why don’t we run out of electricity when everyone is using their air conditioner on a hot day? Readers can find the answer in Avengers of the Power Grid, written by Aditie Garg. The IEEE member is a technical lead at the U.S. National Renewable Energy Laboratory in Applewood, Colo. In her story, Riko-chan unravels secrets behind the seemingly endless supply of electricity and innovative solutions that keep ACs running on hot days.

IEEE Member Carolyn J. Sher-DeCusatis, who teaches software engineering at Western Governors University, in Millcreek, Utah, was inspired by pop star Taylor Swift for her Riko-chan and the Smoke-Filled Room story. In Sher-DeCusatis’s comic, Riko-chan and her classmate create pictograms to catch the attention of fictional singer Sailor Quick at her concert. Sher-DeCusatis used her experience working as an optics and photometry specialist at Rensselaer Polytechnic Institute, in Troy, N.Y., when writing her comic.


“When I write a story, I look for a topic with worldwide appeal,” says Sher-DeCusatis, whose hobby is writing fiction. “After reading articles on whether Taylor Swift would make it back from her tour in Japan in time to cheer on her boyfriend [Travis Kelce] as he played in the Super Bowl [last year], I thought I should write about Riko-chan attending a fictional singer’s concert.”

In Riko-chan and the Furry Friend’s Prosthetic Leg, the title character’s determination and creativity turn adversity into an opportunity to help a beloved pet regain its ability to walk. After learning that her classmate’s cat was hit by a car and had one of its legs amputated, Riko-chan 3D-prints a prosthetic leg for the feline.

“This story was inspired by the potential of STEM to solve real-world problems with creativity and compassion,” says Elmira Alimohammadzadeh, the comic’s author. The IEEE member is a researcher in the United Kingdom. “I wanted to highlight that STEM is not just about technical skills but also about caring for others and solving meaningful problems. My hope is to encourage young minds to pursue [a career in] STEM, demonstrating how even small innovations, fueled by perseverance and kindness, can create extraordinary breakthroughs.”

You can follow Riko-chan as she navigates the challenges of finding the right research project for class in Nazia Sultana Plabon’s Riko-chan and the Power of Wind Energy. The IEEE student member is an undergraduate in Bangladesh. In the course of her search, Riko-chan explores the world of wind energy and the difference between windmills and turbines.

Alba Benny, author of Riko-chan and the SmartMeds Box, was inspired by her own experience when writing her comic. The IEEE student member is pursuing a bachelor’s degree in computer science at Sahrdaya College of Engineering and Technology, in Kodakara, India. In her story, Riko-chan uses her STEM skills to design an AI-powered pillbox to help her grandmother take the correct medication at the right times.

“I was inspired by my parents’ struggle with remembering if they took their medication—which led to accidental double dosing,” Benny says. “Through this story, I wanted to convey how technology, like an AI-powered medicine box, can help solve such simple yet significant problems. Through Riko-chan’s journey, I hope to inspire young minds to think outside the box and use their creativity to help others.”

The final manga, Riko-chan and the Seismic Safety System!, was written by Lais Lara Baptista. The IEEE member is a full-stack developer based in Brazil. Her manga story emphasizes the importance of knowing programming languages. After Riko-chan feels the rumble of an earthquake, she develops a program to detect tremors.

Riko-chan on IEEE Collabratec

Members can interact with Riko-chan through crossword puzzles and similar games in IEEE Collabratec’s IEEE WIE Global Network community. Each challenge is inspired by one of the published manga stories.

To join the community, create a free IEEE Collabratec account and sign up.

Reference: https://ift.tt/cGewnJA

Is it Lunacy to Put a Data Center on the Moon?




Tomorrow, 26 February, SpaceX will launch a Falcon 9 rocket carrying an Intuitive Machines mission that will stay on the surface of the moon for approximately three weeks before returning to Earth. Among other things, the Intuitive Machines lander contains a mini data center, massing just 1 kilogram and containing 8 terabytes of SSD storage. This belongs to Lonestar Data Holdings and is part of a proof-of-concept mission meant to bring moon-based data centers closer to reality.

The idea of putting a data center on the moon raises a natural question: Why? Lonestar’s CEO Christopher Stott says it is to protect sensitive data from Earthly hazards.

“Data centers, right? They’re like modern cathedrals. We’re building these things, they run our entire civilization. It’s superb, and yet you realize that the networks connecting them are increasingly fragile.”

The Case for Moon-based Data Centers

Indeed, on Earth, undersea cables often get cut, leading to outages. Natural disasters like hurricanes and earthquakes, as well as war, can also disrupt networks or destroy the data itself. The lunar surface is a much more predictable place—there is almost no atmosphere, and therefore no climate events to worry about. There is radiation, but it is fairly constant. And the moon is not a war zone, at least for now.

“We call it resilience as a service,” Stott says. “It’s like a whole new level of backup that we’ve never had before.”

The other motivation is data sovereignty. Over 100 countries worldwide have laws that restrict where certain data can be processed and stored, often to within that country itself. For a data center provider, it’s impossible to accommodate all potential customers in any one location, except in outer space. According to the United Nations’ 1967 outer space treaty, space and the moon are “not subject to national appropriation by claim of sovereignty,” which creates a loophole in data sovereignty laws. An American satellite is under American law, but it can carry a black box inside it that’s under British law, or any other country’s. A moon-based data center can host as many separate black boxes as needed, to accommodate all of its diverse customers.

Governments seem particularly interested in this prospect. This test mission will carry data for the Florida state government as well as for the Isle of Man. It will also carry a copy of Bethesda Game Studios’ Starfield and will transmit the game’s featured song, “Children of the Sky” by Imagine Dragons, back to Earth throughout the mission, just for fun.

Amit Verma, a professor of electrical engineering at Texas A&M University Kingsville who is not affiliated with the project, says there may be technical advantages to hosting data on the moon as well. Some parts of the moon are permanently shadowed and therefore extremely cold, as low as -173 °C. This means that no energy or water would need to be expended to cool the data center. And the electrical components will perform more efficiently.

“When you place data centers in environments that are already very, very cold...the performance actually also improves significantly,” Verma says. “Because when you go down in temperature, things like electrical resistance also go down.”
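
To put rough numbers on that point: the standard linear model of copper's resistivity, with the textbook temperature coefficient, suggests a conductor in lunar shadow would have roughly a quarter of its room-temperature resistance. (The linear approximation degrades toward cryogenic temperatures, so treat the -173 °C figure as indicative only.)

    RHO_20 = 1.68e-8   # copper resistivity at 20 C, in ohm-meters
    ALPHA = 0.00393    # copper's linear temperature coefficient, per degree C

    for t_c in (20, -20, -173):
        rho = RHO_20 * (1 + ALPHA * (t_c - 20))  # linear model, rough at extremes
        print(f"{t_c:5d} C: {rho:.2e} ohm-m ({rho / RHO_20:.0%} of the 20 C value)")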

Future moon-based data centers could be powered entirely through solar, since the parts of the moon’s surface that are always cold, near the lunar poles, are relatively close to crater rims that are nearly always exposed to sunlight, unattenuated by an atmosphere. Theoretically, data centers can be hidden away from the sun and power can be transmitted from these rims, resulting in perfectly renewable operation at low temperature.

The Dark Side of the Moon-based Data Center

There are also obvious challenges. First, the moon is far away, which means data will take time to arrive. The one-way latency is 1.4 seconds, which rules out data that needs to be accessed in real time.

“Anything requiring ‘real-time’ compute would be challenging with 1.4-second latency, such as live streaming, gaming, autonomous vehicles, or high-frequency trading,” says Kent Draper, chief commercial officer of data center provider IREN, who is not involved in the effort. “However, there are many workloads that could still be supported with 1-second-plus processing speeds. For example, AI training workloads or even non-real-time AI inference such as image processing.” But “in addition to high latency, low bandwidth would be a challenge,” Draper adds.
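
The quoted 1.4-second figure sits just above the hard floor set by the speed of light. Assuming the mean Earth-Moon distance, the light-travel time alone works out to about 1.3 seconds each way:

    MOON_DISTANCE_KM = 384_400   # mean Earth-Moon distance
    C_KM_PER_S = 299_792         # speed of light in vacuum

    one_way_s = MOON_DISTANCE_KM / C_KM_PER_S
    print(f"one way: {one_way_s:.2f} s, round trip: {2 * one_way_s:.2f} s")
    # one way: 1.28 s, round trip: 2.56 s

So any request-and-response exchange takes at least about 2.6 seconds before any processing time, no matter how the link itself is engineered.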

Second, if something breaks on the moon, it is much more difficult to fix.

“Operating data centers for power-dense compute is extremely complex, between managing the power and cooling systems, let alone configuring servers to client specs,” Draper says. “We have a team of experts on-site operating our data centers 24/7, including network engineers, data center technicians, systems engineers, DevOps engineers, solutions engineers, etc.” Lonestar’s Stott argues that this can be mitigated by doing a lot of Earth-based testing and including extra redundancy in the data.

Next, while physical interference from wars, hurricanes, and other earthly disturbances is much less likely, cybersecurity continues to be an issue, even on the moon. Texas A&M’s Verma suggests, however, that since these systems are being built from scratch, they could take advantage of the latest and most secure cybersecurity protocols, making them safer than the average data center on Earth.

Last but not least, it will cost money, as well as research and development time, to figure out how to get larger data centers up there. “Human beings haven’t been to the moon in the last 50 years, but they’re planning on going again in the next ten,” Verma says. “We don’t know how the cost is going to evolve in the future. So there’s a bit of uncertainty. But, it will be a one-time cost.”

Full Steam Ahead

Stott is undeterred by these concerns. Last year, Lonestar tested a virtual data center on the moon (a software container running on third-party hardware aboard an earlier Intuitive Machines mission) and verified that it could communicate with the data center from Earth, both near the moon and on its surface, by transmitting the Declaration of Independence back and forth. For Stott, this second mission is just the next step in the company’s plan to store data on or near the moon.

Lonestar next plans to put data centers at the lunar L4 and L5 Lagrange points, gravitationally stable positions along the moon’s orbit. After that, the plan is to put data centers in the moon’s lava tubes, where the internal temperature holds at a roughly constant -20 °C, which would allow efficient operation without going to the extremes of the lunar poles.

Despite the challenges, Reza Nekovei, another professor of electrical engineering at Texas A&M University-Kingsville, thinks the advantages are big enough to justify the attempt, and that there is reason for optimism. “If this thing works out, and they show that this is very feasible, I think within the next few years, data centers is where the money would be, that would be the next driver of space technology.”

Reference: https://ift.tt/OqHSvrk

Monday, February 24, 2025

How North Korea pulled off a $1.5 billion crypto heist—the biggest in history


The cryptocurrency industry and those responsible for securing it are still in shock following Friday’s heist, likely by North Korea, that drained $1.5 billion from Dubai-based exchange Bybit, making the theft by far the biggest ever in digital asset history.

Bybit officials disclosed the theft of more than 400,000 ethereum and staked ethereum coins just hours after it occurred. The notification said the digital loot had been stored in a “Multisig Cold Wallet” when, somehow, it was transferred to one of the exchange’s hot wallets. From there, the cryptocurrency was transferred out of Bybit altogether and into wallets controlled by the unknown attackers.

This wallet is too hot, this one is too cold

Researchers for blockchain analysis firm Elliptic, among others, said over the weekend that the techniques and flow of the subsequent laundering of the funds bear the signature of threat actors working on behalf of North Korea. The revelation comes as little surprise since the isolated nation has long maintained a thriving cryptocurrency theft racket, in large part to pay for its weapons of mass destruction program.

Reference: https://ift.tt/PYiWKGD

This $100 Muon Detector Lets You Harness the Cosmos

In the mid-1960s, the Nobel Prize–winning physicist Luis Alvarez had a wild idea. He proposed using muons, highly penetrating subatomic particles created when cosmic rays strike Earth’s atmosphere, to search for hidden chambers within one of the pyramids of Giza.

Muons are heavyweight cousins of electrons that travel close to the speed of light. They can penetrate many meters of solid rock, including the limestone and granite blocks used to build the pyramids. But some of the muons will be absorbed by this dense material, which means they can be used to essentially “X-ray” a pyramid, revealing its inner structure. So in 1968, Alvarez and his colleagues began making muon measurements from a chamber located at the base of the Pyramid of Khafre.

They didn’t find a hidden chamber, but they did confirm the feasibility of what has come to be called muon tomography. Physicists have since used the technique to discover hidden access shafts above tunnels, study magma chambers within volcanoes, and even probe the damaged reactors at Fukushima. And in 2017, muon measurements finally revealed a hidden chamber in one of the pyramids of Giza—just not the pyramid that Alvarez had chosen to explore.

You too can perform similar experiments with equipment that you can build yourself for only US $100 or so.

While some well-documented designs are available for low-cost muon detectors (in particular, the Cosmic Watch project from MIT), I decided to pursue a simpler—and slightly cheaper—approach. I purchased two Geiger-counter kits, each costing only $23. Although it’s called a “kit,” this board in fact comes fully assembled minus the key component: a Geiger-Müller (or GM) tube for detecting ionizing radiation. It also comes with no documentation.

The lack of documentation wasn’t a problem once I found a good source for information about this board—including a pointer to valuable instructions for how to set the tube’s anode voltage.

Key components for the muon detector: The muon detector uses two Geiger-Müller tubes [top], each inserted into a sensor board [bottom right]. Both boards are connected to an Arduino Nano microcontroller [bottom left]. [Illustration: James Provost]

For the GM tubes, I decided to buy what I understood to be good ones: Russian-made SBM-20 tubes. Many of these are listed on eBay by sellers in Ukraine, but I was able to obtain a pair of such tubes from a supplier in the United States for just $49.

“Why two kits and two tubes?” you might ask. It’s because GM tubes don’t react just to muons. Most of the time, they’re triggered by ionizing particles given off by radioactive substances in the environment, such as the daughter products of radon in the air.

Distinguishing the high-energy cosmic-ray muons from these other, lower-energy particles isn’t hard, though. Just apply what physicists call the coincidence method: Record an event only when two nearby tubes are triggered practically simultaneously, meaning a single particle has barreled through both. The two tubes in my device are separated by 25-millimeter spacers, making it unlikely that a particle coming from a nearby radioactive decay would be energetic enough to pass through both tubes. I reduced the likelihood even more by placing a layer of fishing-sinker lead between the tubes.

To turn the stacked pair of GM counters into a coincidence detector, I hooked up the output of each board (oddly labeled VIN, which usually means a pin for a voltage supply input!) to a spare Arduino Nano, programmed to record a hit only when one board registers a count within 1 millisecond of the other. Naturally, this means the detector can recognize only muons whose trajectories are roughly aligned with the plane containing the two GM tubes, so that they pass through both.
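
A minimal sketch of that coincidence logic, using the Nano’s two external-interrupt pins (D2 and D3), might look like the following; the pin assignments, active-low pulse polarity, and serial output are my assumptions, not the actual firmware from this project:

// Hypothetical coincidence counter: pins, polarity, and output format
// are illustrative assumptions, not the author's exact code.
const byte TUBE_A_PIN = 2;             // output (VIN) of first Geiger board
const byte TUBE_B_PIN = 3;             // output (VIN) of second Geiger board
const unsigned long WINDOW_US = 1000;  // 1-millisecond coincidence window

volatile unsigned long lastA = 0;      // micros() stamp of last pulse on A
volatile unsigned long lastB = 0;      // micros() stamp of last pulse on B
unsigned long muonCount = 0;

void pulseA() { lastA = micros(); }
void pulseB() { lastB = micros(); }

void setup() {
  Serial.begin(9600);
  pinMode(TUBE_A_PIN, INPUT);
  pinMode(TUBE_B_PIN, INPUT);
  // Assumes the boards pulse low on each count; use RISING if yours don't.
  attachInterrupt(digitalPinToInterrupt(TUBE_A_PIN), pulseA, FALLING);
  attachInterrupt(digitalPinToInterrupt(TUBE_B_PIN), pulseB, FALLING);
}

void loop() {
  noInterrupts();                      // copy shared timestamps atomically
  unsigned long a = lastA;
  unsigned long b = lastB;
  interrupts();

  // Count a muon only when both tubes fired within the window.
  if (a != 0 && b != 0 && (a > b ? a - b : b - a) <= WINDOW_US) {
    muonCount++;
    Serial.println(muonCount);
    noInterrupts();                    // consume this pair so it isn't recounted
    lastA = 0;
    lastB = 0;
    interrupts();
  }
}

Interrupts capture each pulse time with microsecond resolution, which is far finer than the 1-millisecond window requires.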

[Figure: Two GM tubes separated by a thin layer of lead; cosmic-ray muons (red) pass through both tubes, while terrestrial particles (green) stop within one tube or the lead. A plot below shows measured muon counts per minute versus zenith angle falling from about 1.1 per minute at 0 degrees to nearly zero at 90 degrees, closely tracking the predicted curve except near 90 degrees, where the measured flux stays about 0.1 counts per minute above zero.] Geiger-Müller tubes are activated by ionizing radiation, but unlike cosmic-ray muons [red particles], most terrestrial sources [green particles] are not powerful enough to travel through the detector’s two tubes. By registering only activations that occur almost simultaneously, we can plot the muon flux as a function of the detector’s angle from vertical, with the observed data following the predicted model closely. [Illustration: James Provost]

Proving to myself that the results indeed reflected the flux of cosmic-ray muons wasn’t difficult: I just measured the count rate as a function of how far away from vertical my detector was oriented. You see, the flux of cosmic-ray muons coming in vertically from the sky is greater than the flux of muons traveling horizontally. Between these extremes, the flux should have a cosine-squared dependence on the angle as the detector’s plane rotates from vertical to horizontal.
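
Expressed as a formula, with R₀ the count rate when the detector is vertical and θ its tilt away from vertical, the expected rate is roughly:

    R(θ) ≈ R₀ cos²(θ)

At 45 degrees, for instance, the detector should register about half its vertical rate, since cos²(45°) = 0.5.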

So I set about counting events with my device oriented at different angles from vertical, allowing at least 12 hours for each measurement. The results were pretty consistent with the expected cosine-squared variation. For example, when completely horizontal, the detector registered a value that was less than 10 percent of that obtained when vertical, but it wasn’t zero.

Getting nonzero muon counts even when horizontal isn’t so surprising. With only a 2.5-centimeter separation between the two 1-cm-diameter tubes, my detector’s angular resolution is pretty broad (±22 degrees). So even when I set the unit to sense horizontal flux, it was surely detecting muons coming in from as much as 22 degrees above the horizon.
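
That ±22 degrees appears to be simple geometry: a muon can still thread both tubes when its track is tilted by up to roughly the arctangent of the 1-cm tube diameter over the 2.5-cm separation, that is:

    θ ≈ arctan(1.0 cm / 2.5 cm) ≈ 22°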

With a working muon detector in hand, I set off to probe the Earth—or at least a small part of it—by visiting the Reed Gold Mine, in Midland, N.C., the first commercial gold mine in the United States. I spent about two and a half hours in the mine, making five 30-minute measurements. I easily detected the increasingly thick layer of rock above the mine’s main horizontal tunnel. My detector was even able to sense the presence of a vertical shaft at one spot, as the absence of rock allowed more muons to reach me than I measured nearby in the tunnel.
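
The precision of such runs is set by Poisson counting statistics: the relative uncertainty in N counts is 1/√N. At the roughly one coincidence per minute the detector registers when vertical at the surface, a 30-minute measurement collects on the order of 30 counts, giving:

    1/√30 ≈ 0.18, or about 18 percent uncertainty

Getting below 5 percent would take more than 400 counts, which at that rate means nearly 7 hours.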

These measurements take a long time because you need to accumulate enough counts to provide reasonable statistical precision. So you’ll need patience. But it’s not a bad way to harness the power of the cosmos, even deep underground!

Reference: https://ift.tt/OpPBWJ1

Will the future of software development run on vibes?

For many people, coding is about telling a computer what to do and having the computer perform those precise actions repeatedly. With the ...