Thursday, October 31, 2024

New Carrier Fluid Makes Hydrogen Way Easier to Transport




Imagine pulling up to a refueling station and filling your vehicle’s tank with liquid hydrogen, as safe and convenient to handle as gasoline or diesel, without the need for high-pressure tanks or cryogenic storage. This vision of a sustainable future could become a reality if a Calgary-based company, Ayrton Energy, can scale up its innovative method of hydrogen storage and distribution. Ayrton’s technology could make hydrogen a viable, one-to-one replacement for fossil fuels in existing infrastructure like pipelines, fuel tankers, rail cars, and trucks.

The company’s approach is to use liquid organic hydrogen carriers (LOHCs) to make it easier to transport and store hydrogen. The method chemically bonds hydrogen to carrier molecules, which absorb hydrogen molecules and make them more stable—kind of like hydrogenating cooking oil to produce margarine.

A researcher pours a sample of Ayrton’s LOHC fluid into a vial. [Ayrton Energy]

The approach would allow hydrogen to be transported and stored at ambient conditions, rather than in the cryogenic tanks (held below −252 °C) currently required for keeping hydrogen in liquid form. It would also be a big improvement on compressed hydrogen gas, which must be held in high-pressure vessels and is difficult to keep contained.

Founded in 2021, Ayrton is one of several companies across the globe developing LOHCs, including Japan’s Chiyoda and Mitsubishi, Germany’s Covalion, and China’s Hynertech. But toxicity, energy density, and input energy issues have limited LOHCs as contenders for making liquid hydrogen feasible. Ayrton says its formulation eliminates these tradeoffs.

Safe, Efficient Hydrogen Fuel for Vehicles

Conventional LOHC technologies used by most of the aforementioned companies rely on substances such as toluene, which forms methylcyclohexane when hydrogenated. These carriers pose safety risks due to their flammability and volatility. Hydrogenious LOHC Technologies in Erlangen, Germany, and other hydrogen fuel companies have shifted toward dibenzyltoluene, a more stable carrier that holds more hydrogen per unit volume than methylcyclohexane, though it requires higher temperatures (and thus more energy) to bind and release the hydrogen. Dibenzyltoluene hydrogenation occurs at between 3 and 10 megapascals (30 and 100 bar) and 200–300 °C, compared with 10 MPa (100 bar) and just under 200 °C for methylcyclohexane.

Ayrton’s proprietary oil-based hydrogen carrier not only captures and releases hydrogen with less input energy than is required for other LOHCs, but also stores more hydrogen than methylcyclohexane can—55 kg/m³ compared with methylcyclohexane’s 50 kg/m³. Dibenzyltoluene holds more hydrogen per unit volume (up to 65 kg/m³), but Ayrton’s approach to infusing the carrier with hydrogen atoms promises to cost less. Hydrogenation or dehydrogenation with Ayrton’s carrier fluid occurs at 0.1 megapascal (1 bar) and about 100 °C, says founder and CEO Natasha Kostenuk. And as with the other LOHCs, after hydrogenation it can be transported and stored at ambient temperatures and pressures.
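Those volumetric densities translate directly into how much hydrogen a single tanker load can deliver. A minimal sketch of the comparison, using the densities quoted above (the 30 m³ tanker volume is an illustrative assumption, not a figure from the article):

```python
# Hydrogen delivered per tanker load for each carrier, using the
# volumetric storage densities quoted in the article (kg H2 per m³).
DENSITIES_KG_PER_M3 = {
    "methylcyclohexane": 50,
    "Ayrton carrier": 55,
    "dibenzyltoluene": 65,
}

TANKER_VOLUME_M3 = 30  # assumed, illustrative tanker capacity

for carrier, density in sorted(DENSITIES_KG_PER_M3.items(), key=lambda kv: kv[1]):
    print(f"{carrier}: {density * TANKER_VOLUME_M3} kg of H2 per load")
```

Under these assumptions a load of Ayrton’s fluid carries about 10 percent more hydrogen than the same volume of methylcyclohexane.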

“Judges described [Ayrton’s approach] as a critical technology for the deployment of hydrogen at large scale.” —Katie Richardson, National Renewable Energy Lab

Ayrton’s LOHC fluid is as safe to handle as margarine, but it’s still a chemical, says Kostenuk. “I wouldn’t drink it. If you did, you wouldn’t feel very good. But it’s not lethal,” she says.

Kostenuk and fellow Ayrton cofounder Brandy Kinkead (who serves as the company’s chief technical officer) were originally trying to bring hydrogen generators to market to fill gaps in the electrical grid. “We were looking for fuel cells and hydrogen storage. Fuel cells were easy to find, but we couldn’t find a hydrogen storage method or medium that would be safe and easy to transport to fuel our vision of what we were trying to do with hydrogen generators,” Kostenuk says. During the search, they came across LOHC technology but weren’t satisfied with the tradeoffs demanded by existing liquid hydrogen carriers. “We had the idea that we could do it better,” she says. The duo pivoted, adjusting their focus from hydrogen generators to hydrogen storage solutions.

“Everybody gets excited about hydrogen production and hydrogen end use, but they forget that you have to store and manage the hydrogen,” Kostenuk says. Incompatibility with current storage and distribution has been a barrier to adoption, she says. “We’re really excited about being able to reuse existing infrastructure that’s in place all over the world.” Hydrogen released from Ayrton’s liquid is fuel cell–grade (99.999 percent) pure, so there’s no advantage in using pure liquid hydrogen with its need for subzero temperatures, according to the company.

The main challenge the company faces is the set of issues that come along with any technology scaling up from pilot-stage production to commercial manufacturing, says Kostenuk. “A crucial part of that is aligning ourselves with the right manufacturing partners along the way,” she notes.

Asked about how Ayrton is dealing with some other challenges common to LOHCs, Kostenuk says Ayrton has managed to sidestep them. “We stayed away from materials that are expensive and hard to procure, which will help us avoid any supply chain issues,” she says. By performing the reactions at such low temperatures, Ayrton can get its carrier fluid to withstand 1,000 hydrogenation-dehydrogenation cycles before it no longer holds enough hydrogen to be useful. Conventional LOHCs are limited to a couple of hundred cycles before the high temperatures required for bonding and releasing the hydrogen break down the fluid and diminish its storage capacity, Kostenuk says.

Breakthrough in Hydrogen Storage Technology

In acknowledgement of what Ayrton’s non-toxic, oil-based carrier fluid could mean for the energy and transportation sectors, the U.S. National Renewable Energy Lab at its annual Industry Growth Forum in May named Ayrton its “outstanding early-stage venture.” A selection committee of more than 180 climate tech and cleantech investors and industry experts chose Ayrton from a pool of more than 200 initial applicants, says Katie Richardson, group manager of NREL’s Innovation and Entrepreneurship Center, which organized the forum. The committee based its decision on the company’s innovation, market positioning, business model, team, next steps for funding, technology, capital use, and quality of pitch presentation. “Judges described Ayrton’s approach as a critical technology for the deployment of hydrogen at large scale,” Richardson says.

As a next step toward enabling hydrogen to push gasoline and diesel aside, “we’re talking with hydrogen producers who are right now offering their customers cryogenic and compressed hydrogen,” says Kostenuk. “If they offered LOHC, it would enable them to deliver across longer distances, in larger volumes, in a multimodal way.” The company is also talking to some industrial site owners who could use the hydrogenated LOHC for buffer storage to hold onto some of the energy they’re getting from clean, intermittent sources like solar and wind. Another natural fit, she says, is energy service providers that are looking for a reliable method of seasonal storage beyond what batteries can offer. The goal is to eventually scale up enough to become the go-to alternative (or perhaps the standard) fuel for cars, trucks, trains, and ships.

Reference: https://ift.tt/DeH5Z3s

Wednesday, October 30, 2024

Downey Jr. plans to fight AI re-creations from beyond the grave


Robert Downey Jr. has declared that he will sue any future Hollywood executives who try to re-create his likeness using AI digital replicas, as reported by Variety. His comments came during an appearance on the "On With Kara Swisher" podcast, where he discussed AI's growing role in entertainment.

"I intend to sue all future executives just on spec," Downey told Swisher when discussing the possibility of studios using AI or deepfakes to re-create his performances after his death. When Swisher pointed out he would be deceased at the time, Downey responded that his law firm "will still be very active."

The Oscar winner expressed confidence that Marvel Studios would not use AI to re-create his Tony Stark character, citing his trust in decision-makers there. "I am not worried about them hijacking my character's soul because there's like three or four guys and gals who make all the decisions there anyway and they would never do that to me," he said.


Reference: https://ift.tt/E1bhwc4

Google CEO says over 25% of new Google code is generated by AI


On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.

"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency," Pichai said during the call. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."

Google developers aren't the only programmers using AI to assist with coding tasks. It's difficult to get hard numbers, but according to Stack Overflow's 2024 Developer Survey, over 76 percent of all respondents "are using or are planning to use AI tools in their development process this year," with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are "already using AI coding tools both in and outside of work."


Reference: https://ift.tt/SEkMsav

The AI Boom Rests on Billions of Tonnes of Concrete




Along the country road that leads to ATL4, a giant data center going up east of Atlanta, dozens of parked cars and pickups lean tenuously on the narrow dirt shoulders. The many out-of-state plates are typical of the phalanx of tradespeople who muster for these massive construction jobs. With tech giants, utilities, and governments budgeting upwards of US $1 trillion for capital expansion to join the global battle for AI dominance, data centers are the bunkers, factories, and skunkworks—and concrete and electricity are the fuel and ammunition.

To the casual observer, the data industry can seem incorporeal, its products conjured out of weightless bits. But as I stand beside the busy construction site for DataBank’s ATL4, what impresses me most is the gargantuan amount of material—mostly concrete—that gives shape to the goliath that will house, secure, power, and cool the hardware of AI. Big data is big concrete. And that poses a big problem.

Concrete is not just a major ingredient in data centers and the power plants being built to energize them. As the world’s most widely manufactured material, concrete—and especially the cement within it—is also a major contributor to climate change, accounting for around 6 percent of global greenhouse gas emissions. Data centers use so much concrete that the construction boom is wrecking tech giants’ commitments to eliminate their carbon emissions. Even though Google, Meta, and Microsoft have touted goals to be carbon neutral or negative by 2030, and Amazon by 2040, the industry is now moving in the wrong direction.

Last year, Microsoft’s carbon emissions jumped by over 30 percent, primarily due to the materials in its new data centers. Google’s greenhouse emissions are up by nearly 50 percent over the past five years. As data centers proliferate worldwide, Morgan Stanley projects that data centers will release about 2.5 billion tonnes of CO2 each year by 2030—or about 40 percent of what the United States currently emits from all sources.
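The 40 percent comparison can be checked with one division. A quick sanity check, assuming total current U.S. greenhouse gas emissions of roughly 6.3 billion tonnes of CO2-equivalent per year (a ballpark figure, not one given in the article):

```python
# Morgan Stanley's 2030 data-center projection vs. current U.S. emissions.
DATA_CENTER_CO2_2030_T = 2.5e9  # tonnes CO2 per year, from the article
US_EMISSIONS_T = 6.3e9          # tonnes CO2e per year, assumed ballpark

share = DATA_CENTER_CO2_2030_T / US_EMISSIONS_T
print(f"Data centers in 2030: about {share:.0%} of current U.S. emissions")
```

The ratio comes out just under 40 percent, consistent with the article's framing.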

But even as innovations in AI and the big-data construction boom are boosting emissions for the tech industry’s hyperscalers, the reinvention of concrete could also play a big part in solving the problem. Over the last decade, there’s been a wave of innovation, some of it profit-driven, some of it from academic labs, aimed at fixing concrete’s carbon problem. Pilot plants are being fielded to capture CO2 from cement plants and sock it safely away. Other projects are cooking up climate-friendlier recipes for cements. And AI and other computational tools are illuminating ways to drastically cut carbon by using less cement in concrete and less concrete in data centers, power plants, and other structures.

Demand for green concrete is clearly growing. Amazon, Google, Meta, and Microsoft recently joined an initiative led by the Open Compute Project Foundation to accelerate testing and deployment of low-carbon concrete in data centers, for example. Supply is increasing, too—though it’s still minuscule compared to humanity’s enormous appetite for moldable rock. But if the green goals of big tech can jump-start innovation in low-carbon concrete and create a robust market for it as well, the boom in big data could eventually become a boon for the planet.

Hyperscaler Data Centers: So Much Concrete

At the construction site for ATL4, I’m met by Tony Qoori, the company’s big, friendly, straight-talking head of construction. He says that this giant building and four others DataBank has recently built or is planning in the Atlanta area will together add 133,000 square meters (1.44 million square feet) of floor space.

They all follow a universal template that Qoori developed to optimize the construction of the company’s ever-larger centers. At each site, trucks haul in more than a thousand prefabricated concrete pieces: wall panels, columns, and other structural elements. Workers quickly assemble the precision-measured parts. Hundreds of electricians swarm the building to wire it up in just a few days. Speed is crucial when construction delays can mean losing ground in the AI battle.

The ATL4 data center outside Atlanta is one of five being built by DataBank. Together they will add over 130,000 square meters of floor space. [DataBank]

That battle can be measured in new data centers and floor space. The United States is home to more than 5,000 data centers today, and the Department of Commerce forecasts that number to grow by around 450 a year through 2030. Worldwide, the number of data centers now exceeds 10,000, and analysts project another 26.5 million m² of floor space over the next five years. Here in metro Atlanta, developers broke ground last year on projects that will triple the region’s data-center capacity. Microsoft, for instance, is planning a 186,000-m² complex big enough to house around 100,000 rack-mounted servers; it will consume 324 megawatts of electricity.

The velocity of the data-center boom means that no one is pausing to await greener cement. For now, the industry’s mantra is “Build, baby, build.”

“There’s no good substitute for concrete in these projects,” says Aaron Grubbs, a structural engineer at ATL4. The latest processors going on the racks are bigger, heavier, hotter, and far more power hungry than previous generations. As a result, “you add a lot of columns,” Grubbs says.

1,000 Companies Working on Green Concrete

Concrete may not seem an obvious star in the story of how electricity and electronics have permeated modern life. Other materials—copper and silicon, aluminum and lithium—get higher billing. But concrete provides the literal, indispensable foundation for the world’s electrical workings. It is the solid, stable, durable, fire-resistant stuff that makes power generation and distribution possible. It undergirds nearly all advanced manufacturing and telecommunications. What was true in the rapid build-out of the power industry a century ago remains true today for the data industry: Technological progress begets more growth—and more concrete. Although each generation of processor and memory squeezes more computing onto each chip, and advances in superconducting microcircuitry raise the tantalizing prospect of slashing the data center’s footprint, Qoori doesn’t think his buildings will shrink to the size of a shoebox anytime soon. “I’ve been through that kind of change before, and it seems the need for space just grows with it,” he says.

By weight, concrete is not a particularly carbon-intensive material. Creating a kilogram of steel, for instance, releases about 2.4 times as much CO2 as a kilogram of cement does. But the global construction industry consumes about 35 billion tonnes of concrete a year. That’s about 4 tonnes for every person on the planet and twice as much as all other building materials combined. It’s that massive scale—and the associated cost and sheer number of producers—that creates both a threat to the climate and inertia that resists change.
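The per-person figure above follows directly from the annual tonnage. A one-line check, assuming a world population of about 8 billion (a round figure, not from the article):

```python
CONCRETE_T_PER_YEAR = 35e9  # tonnes of concrete per year, from the article
WORLD_POPULATION = 8e9      # assumed round figure

per_person = CONCRETE_T_PER_YEAR / WORLD_POPULATION
print(f"About {per_person:.1f} tonnes of concrete per person per year")
```

The result, roughly 4.4 tonnes, matches the article's "about 4 tonnes for every person."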

At its Edmonton, Alberta, plant, Heidelberg Materials is adding systems to capture carbon dioxide produced by the manufacture of Portland cement. [Heidelberg Materials North America]

Yet change is afoot. When I visited the innovation center operated by the Swiss materials giant Holcim, in Lyon, France, research executives told me about the database they’ve assembled of nearly 1,000 companies working to decarbonize cement and concrete. None yet has enough traction to measurably reduce global concrete emissions. But the innovators hope that the boom in data centers—and in associated infrastructure such as new nuclear reactors and offshore wind farms, where each turbine foundation can use up to 7,500 cubic meters of concrete—may finally push green cement and concrete beyond labs, startups, and pilot plants.

Why cement production emits so much carbon

Though the terms “cement” and “concrete” are often conflated, they are not the same thing. A popular analogy in the industry is that cement is the egg in the concrete cake. Here’s the basic recipe: Blend cement with larger amounts of sand and other aggregates. Then add water, to trigger a chemical reaction with the cement. Wait a while for the cement to form a matrix that pulls all the components together. Let sit as it cures into a rock-solid mass.

Portland cement, the key binder in most of the world’s concrete, was serendipitously invented in England by William Aspdin, while he was tinkering with earlier mortars that his father, Joseph, had patented in 1824. More than a century of science has revealed the essential chemistry of how cement works in concrete, but new findings are still leading to important innovations, as well as insights into how concrete absorbs atmospheric carbon as it ages.

As in the Aspdins’ day, the process to make Portland cement still begins with limestone, a sedimentary mineral made from crystalline forms of calcium carbonate. Most of the limestone quarried for cement originated hundreds of millions of years ago, when ocean creatures mineralized calcium and carbonate in seawater to make shells, bones, corals, and other hard bits.

Cement producers often build their large plants next to limestone quarries that can supply decades’ worth of stone. The stone is crushed and then heated in stages as it is combined with lesser amounts of other minerals that typically include calcium, silicon, aluminum, and iron. What emerges from the mixing and cooking are small, hard nodules called clinker. A bit more processing, grinding, and mixing turns those pellets into powdered Portland cement, which accounts for about 90 percent of the CO2 emitted by the production of conventional concrete [see infographic, “Roads to Cleaner Concrete”].

Karen Scrivener, shown in her lab at EPFL, has developed concrete recipes that reduce emissions by 30 to 40 percent. [Stefan Wermuth/Bloomberg/Getty Images]

Decarbonizing Portland cement is often called heavy industry’s “hard problem” because of two processes fundamental to its manufacture. The first process is combustion: To coax limestone’s chemical transformation into clinker, large heaters and kilns must sustain temperatures around 1,500 °C. Currently that means burning coal, coke, fuel oil, or natural gas, often along with waste plastics and tires. The exhaust from those fires generates 35 to 50 percent of the cement industry’s emissions. Most of the remaining emissions result from gaseous CO2 liberated by the chemical transformation of the calcium carbonate (CaCO3) into calcium oxide (CaO), a process called calcination. That gas also usually heads straight into the atmosphere.
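The CO2 released by calcination can be worked out from the stoichiometry alone. A short check of the reaction CaCO3 → CaO + CO2, using standard molar masses:

```python
# Mass of CO2 released per kilogram of limestone calcined,
# from CaCO3 -> CaO + CO2 and standard molar masses (g/mol).
M_CACO3 = 100.09
M_CAO = 56.08
M_CO2 = 44.01

co2_per_kg_limestone = M_CO2 / M_CACO3
print(f"{co2_per_kg_limestone:.2f} kg of CO2 per kg of limestone calcined")
```

In other words, almost half the mass of the limestone leaves as process CO2 before any fuel is burned at all, which is why calcination emissions are so hard to avoid.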

Concrete production, in contrast, is mainly a business of mixing cement powder with other ingredients and then delivering the slurry speedily to its destination before it sets. Most concrete in the United States is prepared to order at batch plants—souped-up materials depots where the ingredients are combined, dosed out from hoppers into special mixer trucks, and then driven to job sites. Because concrete grows too stiff to work after about 90 minutes, concrete production is highly local. There are more ready-mix batch plants in the United States than there are Burger King restaurants.

Batch plants can offer thousands of potential mixes, customized to fit the demands of different jobs. Concrete in a hundred-story building differs from that in a swimming pool. With flexibility to vary the quality of sand and the size of the stone—and to add a wide variety of chemicals—batch plants have more tricks for lowering carbon emissions than any cement plant does.

Cement plants that capture carbon

China accounts for more than half of the concrete produced and used in the world, but companies there are hard to track. Outside of China, the top three multinational cement producers—Holcim, Heidelberg Materials in Germany, and Cemex in Mexico—have launched pilot programs to snare CO2 emissions before they escape and then bury the waste deep underground. To do that, they’re taking carbon capture and storage (CCS) technology already used in the oil and gas industry and bolting it onto their cement plants.

These pilot programs will need to scale up without eating profits—something that eluded the coal industry when it tried CCS decades ago. Tough questions also remain about where exactly to store billions of tonnes of CO2 safely, year after year.

The appeal of CCS for cement producers is that they can continue using existing plants while still making progress toward carbon neutrality, which trade associations have committed to reach by 2050. But with well over 3,000 plants around the world, adding CCS to all of them would take enormous investment. Currently less than 1 percent of the global supply is low-emission cement. Accenture, a consultancy, estimates that outfitting the whole industry for carbon capture could cost up to $900 billion.
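Accenture's estimate implies a steep per-plant price tag. A rough division, taking the article's "well over 3,000 plants" as a lower bound:

```python
TOTAL_CCS_COST_USD = 900e9  # Accenture's upper estimate, from the article
N_CEMENT_PLANTS = 3000      # "well over 3,000" -> lower bound on plant count

cost_per_plant = TOTAL_CCS_COST_USD / N_CEMENT_PLANTS
print(f"Up to ${cost_per_plant / 1e6:.0f} million per plant")
```

That order of magnitude, hundreds of millions of dollars per plant, is consistent with the government-subsidized pilot projects described below.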

“The economics of carbon capture is a monster,” says Rick Chalaturnyk, a professor of geotechnical engineering at the University of Alberta, in Edmonton, Canada, who studies carbon capture in the petroleum and power industries. He sees incentives for the early movers on CCS, however. “If Heidelberg, for example, wins the race to the lowest carbon, it will be the first [cement] company able to supply those customers that demand low-carbon products”—customers such as hyperscalers.

Though cement companies seem unlikely to invest their own billions in CCS, generous government subsidies have enticed several to begin pilot projects. Heidelberg has announced plans to start capturing CO2 from its Edmonton operations in late 2026, transforming it into what the company claims would be “the world’s first full-scale net-zero cement plant.” Exhaust gas will run through stations that purify the CO2 and compress it into a liquid, which will then be transported to chemical plants to turn it into products or to depleted oil and gas reservoirs for injection underground, where hopefully it will stay put for an epoch or two.

Chalaturnyk says that the scale of the Edmonton plant, which aims to capture a million tonnes of CO2 a year, is big enough to give CCS technology a reasonable test. Proving the economics is another matter. Half the $1 billion cost for the Edmonton project is being paid by the governments of Canada and Alberta.

ROADS TO CLEANER CONCRETE


As the big-data construction boom boosts the tech industry’s emissions, the reinvention of concrete could play a major role in solving the problem.

• CONCRETE TODAY Most of the greenhouse emissions from concrete come from the production of Portland cement, which requires high heat and releases carbon dioxide (CO2) directly into the air.

• CONCRETE TOMORROW At each stage of cement and concrete production, advances in ingredients, energy supplies, and uses of concrete promise to reduce waste and pollution.


The U.S. Department of Energy has similarly offered Heidelberg up to $500 million to help cover the cost of attaching CCS to its Mitchell, Ind., plant and burying up to 2 million tonnes of CO2 per year below the plant. And the European Union has gone even bigger, allocating nearly €1.5 billion ($1.6 billion) from its Innovation Fund to support carbon capture at cement plants in seven of its member nations.

These tests are encouraging, but they are all happening in rich countries, where demand for concrete peaked decades ago. Even in China, concrete production has started to flatten. All the growth in global demand through 2040 is expected to come from less-affluent countries, where populations are still growing and quickly urbanizing. According to projections by the Rhodium Group, cement production in those regions is likely to rise from around 30 percent of the world’s supply today to 50 percent by 2050 and 80 percent before the end of the century.

So will rich-world CCS technology translate to the rest of the world? I asked Juan Esteban Calle Restrepo, the CEO of Cementos Argos, the leading cement producer in Colombia, about that when I sat down with him recently at his office in Medellín. He was frank. “Carbon capture may work for the U.S. or Europe, but countries like ours cannot afford that,” he said.

Better cement through chemistry

As long as cement plants run limestone through fossil-fueled kilns, they will generate excessive amounts of carbon dioxide. But there may be ways to ditch the limestone—and the kilns. Labs and startups have been finding replacements for limestone, such as calcined kaolin clay and fly ash, that don’t release CO2 when heated. Kaolin clays are abundant around the world and have been used for centuries in Chinese porcelain and more recently in cosmetics and paper. Fly ash—a messy, toxic by-product of coal-fired power plants—is cheap and still widely available, even as coal power dwindles in many regions.

At the Swiss Federal Institute of Technology Lausanne (EPFL), Karen Scrivener and colleagues developed cements that blend calcined kaolin clay and ground limestone with a small portion of clinker. Calcining clay can be done at temperatures low enough that electricity from renewable sources can do the job. Various studies have found that the blend, known as LC3, can reduce overall emissions by 30 to 40 percent compared to those of Portland cement.

LC3 is also cheaper to make than Portland cement and performs as well for nearly all common uses. As a result, calcined clay plants have popped up across Africa, Europe, and Latin America. In Colombia, Cementos Argos is already producing more than 2 million tonnes of the stuff annually. The World Economic Forum’s Centre for Energy and Materials counts LC3 among the best hopes for the decarbonization of concrete. Wide adoption by the cement industry, the centre reckons, “can help prevent up to 500 million tonnes of CO2 emissions by 2030.”

In a win-win for the environment, fly ash can also be used as a building block for low- and even zero-emission concrete, and the high heat of processing neutralizes many of the toxins it contains. Ancient Romans used volcanic ash to make slow-setting but durable concrete: The Pantheon, built nearly two millennia ago with ash-based cement, is still in great shape.

Coal fly ash is a cost-effective ingredient that has reactive properties similar to those of Roman cement and Portland cement. Many concrete plants already add fresh fly ash to their concrete mixes, replacing 15 to 35 percent of the cement. The ash improves the workability of the concrete, and though the resulting concrete is not as strong for the first few months, it grows stronger than regular concrete as it ages, like the Pantheon.

University labs have tested concretes made entirely with fly ash and found that some actually outperform the standard variety. More than 15 years ago, researchers at Montana State University used concrete made with 100 percent fly ash in the floors and walls of a credit union and a transportation research center. But performance depends greatly on the chemical makeup of the ash, which varies from one coal plant to the next, and on following a tricky recipe. The decommissioning of coal-fired plants has also been making fresh fly ash scarcer and more expensive.

At Sublime Systems’ pilot plant in Massachusetts, the company is using electrochemistry instead of heat to produce lime silicate cements that can replace Portland cement. [Tony Luong]

That has spurred new methods to treat and use fly ash that’s been buried in landfills or dumped into ponds. Such industrial burial grounds hold enough fly ash to make concrete for decades, even after every coal plant shuts down. Utah-based Eco Material Technologies is now producing cements that include both fresh and recovered fly ash as ingredients. The company claims it can replace up to 60 percent of the Portland cement in concrete—and that a new variety, suitable for 3D printing, can substitute entirely for Portland cement.

Hive 3D Builders, a Houston-based startup, has been feeding that low-emissions concrete into robots that are printing houses in several Texas developments. “We are 100 percent Portland cement–free,” says Timothy Lankau, Hive 3D’s CEO. “We want our homes to last 1,000 years.”

Sublime Systems, a startup spun out of MIT by battery scientists, uses electrochemistry rather than heat to make low-carbon cement from rocks that don’t contain carbon. Similar to a battery, Sublime’s process uses a voltage between an anode and a cathode to create a pH gradient that isolates silicates and reactive calcium, in the form of lime (CaO). The company mixes those ingredients together to make a cement with no fugitive carbon, no kilns or furnaces, and binding power comparable to that of Portland cement. With the help of $87 million from the U.S. Department of Energy, Sublime is building a plant in Holyoke, Mass., that will be powered almost entirely by hydroelectricity. Recently the company was tapped to provide concrete for a major offshore wind farm planned off the coast of Martha’s Vineyard.

Software takes on the hard problem of concrete

It is unlikely that any one innovation will allow the cement industry to hit its target of carbon neutrality before 2050. New technologies take time to mature, scale up, and become cost-competitive. In the meantime, says Philippe Block, a structural engineer at ETH Zurich, smart engineering can reduce carbon emissions through the leaner use of materials.

His research group has developed digital design tools that make clever use of geometry to maximize the strength of concrete structures while minimizing their mass. The team’s designs start with the soaring architectural elements of ancient temples, cathedrals, and mosques—in particular, vaults and arches—which they miniaturize and flatten and then 3D print or mold inside concrete floors and ceilings. The lightweight slabs, suitable for the upper stories of apartment and office buildings, use much less concrete and steel reinforcement and have a CO2 footprint that’s reduced by 80 percent.

There’s hidden magic in such lean design. In multistory buildings, much of the mass of concrete is needed just to hold the weight of the material above it. The carbon savings of Block’s lighter slabs thus compound, because the size, cost, and emissions of a building’s conventional-concrete elements are slashed.

Vaulted, a Swiss startup, uses digital design tools to minimize the concrete in floors and ceilings, cutting their CO2 footprint by 80 percent. [Photo: Vaulted]

In Dübendorf, Switzerland, a wildly shaped experimental building has floors, roofs, and ceilings created by Block’s structural system. Vaulted, a startup spun out of ETH, is engineering and fabricating the lighter floors of a 10-story office building under construction in Zug, Switzerland.

That country has also been a leader in smart ways to recycle and reuse concrete, rather than simply landfilling demolition rubble. This is easier said than done—concrete is tough stuff, riddled with rebar. But there’s an economic incentive: Raw materials such as sand and limestone are becoming scarcer and more costly. Some jurisdictions in Europe now require that new buildings be made from recycled and reused materials. The new addition of the Kunsthaus Zürich museum, a showcase of exquisite Modernist architecture, uses recycled material for all but 2 percent of its concrete.

As new policies goose demand for recycled materials and threaten to restrict future use of Portland cement across Europe, Holcim has begun building recycling plants that can reclaim cement clinker from old concrete. It recently turned the demolition rubble from some 1960s apartment buildings outside Paris into part of a 220-unit housing complex—touted as the first building made from 100 percent recycled concrete. The company says it plans to build concrete recycling centers in every major metro area in Europe and, by 2030, to include 30 percent recycled material in all of its cement.

Further innovations in low-carbon concrete are certain to come, particularly as the powers of machine learning are applied to the problem. Over the past decade, the number of research papers reporting on computational tools to explore the vast space of possible concrete mixes has grown exponentially. Much as AI is being used to accelerate drug discovery, the tools learn from huge databases of proven cement mixes and then apply their inferences to evaluate untested mixes.

Researchers from the University of Illinois and Chicago-based Ozinga, one of the largest private concrete producers in the United States, recently worked with Meta to feed 1,030 known concrete mixes into an AI. The project yielded a novel mix that will be used for sections of a data-center complex in DeKalb, Ill. The AI-derived concrete has a carbon footprint 40 percent lower than the conventional concrete used on the rest of the site. Ryan Cialdella, Ozinga’s vice president of innovation, smiles as he notes the virtuous circle: AI systems that live in data centers can now help cut emissions from the concrete that houses them.
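The mix-discovery idea described above can be sketched in miniature. The toy script below is an illustration only, not the Meta/Ozinga pipeline: the two scoring functions stand in for models that would, in practice, be learned from a database of proven mixes, and all coefficients are invented for the example.

```python
# Toy version of data-driven concrete-mix search. The scoring functions are
# hand-written stand-ins for models learned from a database of proven mixes;
# every coefficient here is invented for illustration.
import random

random.seed(0)  # make the sketch reproducible

# Each "mix" is (cement_frac, fly_ash_frac, slag_frac); fractions sum to 1.
def make_mix():
    a, b = sorted((random.random(), random.random()))
    return (a, b - a, 1 - b)

def strength(mix):  # pretend-learned strength predictor (arbitrary units)
    c, f, s = mix
    return 40 * c + 18 * f + 25 * s

def co2(mix):       # pretend-learned embodied-CO2 predictor; cement dominates
    c, f, s = mix
    return 900 * c + 30 * f + 80 * s

# Explore the mix space: lowest predicted CO2 subject to a strength floor.
candidates = [make_mix() for _ in range(10_000)]
feasible = [m for m in candidates if strength(m) >= 28]
best = min(feasible, key=co2)
print(best, round(co2(best)))
```

Even in this caricature, the search surfaces mixes that meet the strength floor while displacing most of the high-CO2 cement fraction, which is the same trade-off the real tools navigate over far richer feature sets.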

A sustainable foundation for the information age

Cheap, durable, and abundant yet unsustainable, concrete made with Portland cement has been one of modern technology’s Faustian bargains. The built world is on track to double in floor space by 2060, adding 230,000 km2, or more than half the area of California. Much of that will house the 2 billion more people we are likely to add to our numbers. As global transportation, telecom, energy, and computing networks grow, their new appendages will rest upon concrete. But if concrete doesn’t change, we will perversely be forced to produce even more concrete to protect ourselves from the coming climate chaos, with its rising seas, fires, and extreme weather.

The AI-driven boom in data centers is a strange bargain of its own. In the future, AI may help us live even more prosperously, or it may undermine our freedoms, civilities, employment opportunities, and environment. But solutions to the bad climate bargain that AI’s data centers foist on the planet are at hand, if there’s a will to deploy them. Hyperscalers and governments are among the few organizations with the clout to rapidly change what kinds of cement and concrete the world uses, and how those are made. With a pivot to sustainability, concrete’s unique scale makes it one of the few materials that could do most to protect the world’s natural systems. We can’t live without concrete—but with some ambitious reinvention, we can thrive with it.

Reference: https://ift.tt/vfDRwaA

The sad, bizarre tale of hype fanning fears modern cryptography was slain


There’s little doubt that some of the most important pillars of modern cryptography will tumble spectacularly once quantum computing, now in its infancy, matures sufficiently. Some experts say that could be in the next couple of decades. Others say it could take longer. No one knows.

The uncertainty leaves a giant vacuum that can be filled with alarmist pronouncements that the world is close to seeing the downfall of cryptography as we know it. The false pronouncements can take on a life of their own as they’re repeated by marketers looking to peddle post-quantum cryptography snake oil and journalists tricked into thinking the findings are real. And a new episode of exaggerated research has been playing out for the past few weeks.

All aboard the PQC hype train

The last time the PQC—short for post-quantum cryptography—hype train gained this much traction was in early 2023, when scientists presented findings that claimed, at long last, to put the quantum-enabled cracking of the widely used RSA encryption scheme within reach. The claims were repeated over and over, just as claims about research released in September have been for the past three weeks.


Reference: https://ift.tt/jcImYqU

Tuesday, October 29, 2024

The Unlikely Inventor of the Automatic Rice Cooker




“Cover, bring to a boil, then reduce heat. Simmer for 20 minutes.” These directions seem simple enough, and yet I have messed up many, many pots of rice over the years. My sympathies to anyone who’s ever had to boil rice on a stovetop, cook it in a clay pot over a kerosene or charcoal burner, or prepare it in a cast-iron cauldron. All hail the 1955 invention of the automatic rice cooker!

How the automatic rice cooker was invented

It isn’t often that housewives get credit in the annals of invention, but in the story of the automatic rice cooker, a woman takes center stage. That happened only after the first attempts at electrifying rice cooking, starting in the 1920s, turned out to be utter failures. Matsushita, Mitsubishi, and Sony all experimented with variations of placing electric heating coils inside wooden tubs or aluminum pots, but none of these cookers automatically switched off when the rice was done. The human cook—almost always a wife or daughter—still had to pay attention to avoid burning the rice. These electric rice cookers didn’t save any real time or effort, and they sold poorly.

But Shogo Yamada, the energetic development manager of the electric appliance division for Toshiba, became convinced that his company could do better. In post–World War II Japan, he was demonstrating and selling electric washing machines all over the country. When he took a break from his sales pitch and actually talked to women about their daily household labors, he discovered that cooking rice—not laundry—was their most challenging chore. Rice was a mainstay of the Japanese diet, and women had to prepare it up to three times a day. It took hours of work, starting with getting up by 5:00 am to fan the flames of a kamado, a traditional earthenware stove fueled by charcoal or wood on which the rice pot was heated. The inability to properly mind the flame could earn a woman the label of “failed housewife.”

In 1951, Yamada became the cheerleader of the rice cooker within Toshiba, which was understandably skittish given the past failures of other companies. To develop the product, he turned to Yoshitada Minami, the manager of a small family factory that produced electric water heaters for Toshiba. The water-heater business wasn’t great, and the factory was on the brink of bankruptcy.

How Sources Influence the Telling of History


As someone who does a lot of research online, I often come across websites that tell very interesting histories, but without any citations. It takes only a little bit of digging before I find entire passages copied and pasted from one site to another, and so I spend a tremendous amount of time trying to track down the original source. Accounts of popular consumer products, such as the rice cooker, are particularly prone to this problem. That’s not to say that popular accounts are necessarily wrong; plus they are often much more engaging than boring academic pieces. This is just me offering a note of caution because every story offers a different perspective depending on its sources.

For example, many popular blogs sing the praises of Fumiko Minami and her tireless contributions to the development of the rice maker. But in my research, I found no mention of Minami before Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” which itself was based on episode 42 of the Project X: Challengers documentary series that was produced by NHK and aired in 2002.

If instead I had relied solely on the description of the rice cooker’s early development provided by the Toshiba Science Museum (here’s an archived page from 2007), this month’s column would have offered a detailed technical description of how uncooked rice has a crystalline structure, but as it cooks, it becomes a gelatinized starch. The museum’s website notes that few engineers had ever considered the nature of cooking rice before the rice-cooker project, and it refers simply to the “project team” that discovered the process. There’s no mention of Fumiko.

Both stories are factually correct, but they emphasize different details. Sometimes it’s worth asking who is part of the “project team” because the answer might surprise you. —A.M.


Although Minami understood the basic technical principles for an electric rice cooker, he didn’t know or appreciate the finer details of preparing perfect rice. And so Minami turned to his wife, Fumiko.

Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

But how would an automatic rice cooker know when the 20 minutes was up? A suggestion came from Toshiba engineers. A working model based on a double boiler (a pot within a pot for indirect heating) used evaporation to mark time. While the rice cooked in the inset pot, a bimetallic switch measured the temperature of the external pot. Boiling water held that pot at a constant 100 °C, but once the water had evaporated, the temperature would soar. When it surpassed 100 °C, the switch would bend and cut the circuit. One cup of boiling water in the external pot took 20 minutes to evaporate. The same basic principle is still used in modern cookers.
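The evaporation-timer principle lends itself to a toy simulation. The sketch below uses made-up values (water mass, evaporation rate, cutoff temperature), not Toshiba's engineering data; it only illustrates why a fixed charge of water acts as a 20-minute clock.

```python
# Toy simulation of the double-boiler evaporation timer. All values are
# illustrative, not Toshiba's actual engineering data.
def simulate_cooker(water_g=240.0, evap_rate_g_per_min=12.0, cutoff_c=103.0):
    """Return the minute at which the bimetallic switch cuts the circuit."""
    temp_c = 100.0  # boiling water pins the outer pot at 100 degrees C
    minute = 0
    while temp_c < cutoff_c:
        minute += 1
        water_g -= evap_rate_g_per_min
        if water_g <= 0:
            temp_c += 5.0  # pot boils dry: temperature soars past 100 degrees
    return minute

print(simulate_cooker())  # 240 g at 12 g/min lasts 20 minutes
```

The elegance of the design is that the "timer" is just physics: as long as liquid water remains, the outer pot cannot exceed 100 °C, so the switch stays closed.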


[Photo: the three parts of a round rice cooker, including the outer container, an inner metal pot, and a lid.]

Yamada wanted to ensure that the rice cooker worked in all climates, so Fumiko tested various prototypes in extreme conditions: on her rooftop in cold winters and scorching summers and near steamy bathrooms to mimic high humidity. When Fumiko became ill from testing outside, her children pitched in to help. None of the aluminum and glass prototypes, it turned out, could maintain their internal temperature in cold weather. The final design drew inspiration from the Hokkaidō region, Japan’s northernmost prefecture. Yamada had seen insulated cooking pots there, so the Minami family tried covering the rice cooker with a triple-layered iron exterior. It worked.

How Toshiba sold its automatic rice cooker

Toshiba’s automatic rice cooker went on sale on 10 December 1955, but initially, sales were slow. It didn’t help that the rice cooker was priced at 3,200 yen, about a third of the average Japanese monthly salary. It took some salesmanship to convince women they needed the new appliance. This was Yamada’s time to shine. He demonstrated using the rice cooker to prepare takikomi gohan, a rice dish seasoned with dashi, soy sauce, and a selection of meats and vegetables. When the dish was cooked in a traditional kamado, the soy sauce often burned, making the rather simple dish difficult to master. Women who saw Yamada’s demo were impressed with the ease offered by the rice cooker.

Another clever sales technique was to get electricity companies to serve as Toshiba distributors. At the time, Japan was facing a national power surplus stemming from the widespread replacement of carbon-filament lightbulbs with more efficient tungsten ones. The energy savings were so remarkable that operations at half of the country’s power plants had to be curtailed. But with utilities distributing Toshiba rice cookers, increased demand for electricity was baked in.

Within a year, Toshiba was selling more than 200,000 rice cookers a month. Many of them came from the Minamis’ factory, which was rescued from near-bankruptcy in the process.

How the automatic rice cooker conquered the world

From there, the story becomes an international one with complex localization issues. Japanese sushi rice is not the same as Thai sticky rice, which is not the same as Persian tahdig, Indian basmati, Italian risotto, or Spanish paella. You see where I’m going with this. Every culture that has a unique rice dish almost always uses its own regional rice with its own preparation preferences. And so countries wanted their own type of automatic electric rice cooker (although some rejected automation in favor of traditional cooking methods).

Yoshiko Nakano, a professor at the University of Hong Kong, wrote a book in 2009 about the localized/globalized nature of rice cookers. Where There Are Asians, There Are Rice Cookers traces the popularization of the rice cooker from Japan to China and then the world by way of Hong Kong. One of the key differences between the Japanese and Chinese rice cooker is that the latter has a glass lid, which Chinese cooks demanded so they could see when to add sausage. More innovation and diversification followed. Modern rice cookers have settings to give Iranians crispy rice at the bottom of the pot, one to let Thai customers cook noodles, one for perfect rice porridge, and one for steel-cut oats.


[Photo: a customer examines several shelves of rice cookers.]

My friend Hyungsub Choi, in his 2022 article “Before Localization: The Story of the Electric Rice Cooker in South Korea,” pushes back a bit on Nakano’s argument that countries were insistent on tailoring cookers to their tastes. From 1965, when the first domestic rice cooker appeared in South Korea, to the early 1990s, Korean manufacturers engaged in “conscious copying,” Choi argues. That is, they didn’t bother with either innovation or adaptation. As a result, most Koreans had to put up with inferior domestic models. Even after the Korean government made it a national goal to build a better rice cooker, manufacturers failed to deliver one, perhaps because none of the engineers involved knew how to cook rice. It’s a good reminder that the history of technology is not always the story of innovation and progress.

Eventually, the Asian diaspora brought the rice cooker to all parts of the globe, including South Carolina, where I now live and which coincidentally has a long history of rice cultivation. I bought my first rice cooker on a whim, but not for its rice-cooking ability. I was intrigued by the yogurt-making function. Similar to rice, yogurt requires a constant temperature over a specific length of time. Although successful, my yogurt experiment was fleeting—store-bought was just too convenient. But the rice cooking blew my mind. Perfect rice. Every. Single. Time. I am never going back to overflowing pots of starchy water.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the November 2024 print issue as “The Automatic Rice Cooker’s Unlikely Inventor.”

References


Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” was a great resource in understanding the development of the Toshiba ER-4. The chapter appeared in The Historical Consumer: Consumption and Everyday Life in Japan, 1850-2000, edited by Penelope Francks and Janet Hunter (Palgrave Macmillan).

Yoshiko Nakano’s book Where There Are Asians, There Are Rice Cookers (Hong Kong University Press, 2009) takes the story much further with her focus on the National (Panasonic) rice cooker and its adaptation and adoption around the world.

The Toshiba Science Museum, in Kawasaki, Japan, where we sourced our main image of the original ER-4, closed to the public in June. I do not know what the future holds for its collections, but luckily some of its Web pages have been archived to continue to help researchers like me.

Reference: https://ift.tt/2MrWkUo

Multiband Antenna Simulation and Wireless KPI Extraction




In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems.

Overview

Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform your approach to wireless system design and network efficiency.

What Attendees will Learn

  • How to design interleaved multiband antenna systems using the latest capabilities in HFSS
  • How to extract network key performance indicators (KPIs)
  • How to extract RF channels for dynamic environments

Who Should Attend

This webinar is valuable to anyone involved in antenna R&D, product design, and wireless networks.

Register now for this free webinar!

Reference: https://ift.tt/mcUiFIC

For this Stanford Engineer, Frugal Invention Is a Calling




Manu Prakash spoke with IEEE Spectrum shortly after returning to Stanford University from a month aboard a research vessel off the coast of California, where he was testing tools to monitor oceanic carbon sequestration. The associate professor conducts fieldwork around the world to better understand the problems he’s working on, as well as the communities that will be using his inventions.

Prakash develops imaging instruments and diagnostic tools, often for use in global health and environmental sciences. His devices typically cost radically less than conventional equipment—he aims for reductions of two or more orders of magnitude. Whether he’s working on pocketable microscopes, mosquito or plankton monitors, or an autonomous malaria diagnostic platform, Prakash always includes cost and access as key aspects of his engineering. He calls this philosophy “frugal science.”

Why should we think about science frugally?

Manu Prakash: To me, when we are trying to ask and solve problems and puzzles, it becomes important: In whose hands are we putting these solutions? A frugal approach to solving the problem is the difference between 1 percent of the population or billions of people having access to that solution.

Lack of access creates these kinds of barriers in people’s minds, where they think they can or cannot approach a kind of problem. It’s important that we as scientists or just citizens of this world create an environment that feels that anybody has a chance to make important inventions and discoveries if they put their heart to it. The entrance to all that is dependent on tools, but those tools are just inaccessible.

How did you first encounter the idea of “frugal science”?

Prakash: I grew up in India and lived with very little access to things. And I got my Ph.D. at MIT. I was thinking about this stark difference in worlds that I had seen and lived in, so when I started my lab, it was almost a commitment to [asking]: What does it mean when we make access one of the critical dimensions of exploration? So, I think a lot of the work I do is primarily driven by curiosity, but access brings another layer of intellectual curiosity.

How do you identify a problem that might benefit from frugal science?

Prakash: Frankly, it’s hard to find a problem that would not benefit from access. The question to ask is “Where are the neglected problems that we as a society have failed to tackle?” We do a lot of work in diagnostics. A lot [of our solutions] beat the conventional methods that are neither cost effective nor any good. It’s not about cutting corners; it’s about deeply understanding the problem—better solutions at a fraction of the cost. It does require invention. For that order of magnitude change, you really have to start fresh.

Where does your involvement with an invention end?

Prakash: Inventions are part of our soul. Your involvement never ends. I just designed the 415th version of Foldscope [a low-cost “origami” microscope]. People only know it as version 3. We created Foldscope a long time ago; then I realized that nobody was going to provide access to it. So we went back and invented the manufacturing process for Foldscope to scale it. We made the first 100,000 Foldscopes in the lab, which led to millions of Foldscopes being deployed.

So it’s continuous. If people are scared of this, they should never invent anything [laughs], because once you invent something, it’s a lifelong project. You don’t put it aside; the project doesn’t put you aside. You can try to, but that’s not really possible if your heart is in it. You always see problems. Nothing is ever perfect. That can be ever consuming. It’s hard. I don’t want to minimize this process in any way or form.

Reference: https://ift.tt/ScU7uEl

Monday, October 28, 2024

The Patent Battle That Won’t Quit




Just before this special issue on invention went to press, I got a message from IEEE senior member and patent attorney George Macdonald. Nearly two decades after I first reported on Corliss Orville “Cob” Burandt’s struggle with the U.S. Patent and Trademark Office, the 77-year-old inventor’s patent case was being revived.

From 1981 to 1990, Burandt received a dozen U.S. patents for improvements to automotive engines, including his 1990 patent for variable valve-timing technology (U.S. Patent No. 4,961,406). But he failed to convince any automakers to license his technology. What’s worse, he claims, some of the world’s major carmakers now use his inventions in their hybrid engines.

Shortly after reading my piece in 2005, Macdonald stepped forward to represent Burandt. By then, the inventor had already lost his patents because he hadn’t paid the US $40,000 in maintenance fees to keep them active.

Macdonald filed a petition to pay the maintenance fees late and another to revive a related child case. The maintenance fee petition was denied in 2006. While the petition to revive was still pending, Macdonald passed the maintenance fee baton to Hunton Andrews Kurth (HAK), which took the case pro bono. HAK attorneys argued that the USPTO should reinstate the 1990 parent patent.

The timing was crucial: If the parent patent was reinstated before 2008, Burandt would have had the opportunity to compel infringing corporations to pay him licensing fees. Unfortunately, for reasons that remain unclear, the patent office tried to paper Burandt’s legal team to death, Macdonald says. HAK could go no further in the maintenance-fee case after the U.S. Supreme Court declined to hear it in 2009.

Then, in 2010, the USPTO belatedly revived Burandt’s child continuation application. A continuation application lets an inventor add claims to their original patent application while maintaining the earlier filing date—1988 in this case.

However, this revival came with its own set of challenges. Macdonald was informed in 2011 that the patent examiner would issue the patent but later discovered that the application was placed into a then-secret program called the Sensitive Application Warning System (SAWS) instead. While touted as a way to quash applications for things like perpetual-motion machines, the SAWS process effectively slowed action on Burandt’s case.

After several more years of motions and rulings, Macdonald met IEEE Member Edward Pennington, who agreed to represent Burandt. Earlier this year, Pennington filed a complaint in the Eastern District of Virginia seeking the issuance of Burandt’s patent on the grounds that it was wrongfully denied.

As of this writing, Burandt still hasn’t seen a dime from his inventions. He subsists on his social security benefits. And while his case raises important questions about fairness, transparency, and the rights of individual inventors, Pennington says his client isn’t interested in becoming a poster boy for poor inventors.

“We’re not out to change policy at the patent office or to give Mr. Burandt a framed copy of the patent to say, ‘Look at me, I’m an inventor,’ ” says Pennington. “This is just to say, ‘Here’s a guy that would like to benefit from his idea.’ It just so happens that he’s pretty much in need. And even the slightest royalty would go a long ways for the guy.”

Reference: https://ift.tt/FteU2Dh

Hospitals adopt error-prone AI transcription tools despite warnings


On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a "confabulation" or "hallucination" in the AI field.

Upon its release in 2022, OpenAI claimed that Whisper approached "human level robustness" in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.

The fabrications pose particular risks in health care settings. Despite OpenAI's warnings against using Whisper for "high-risk domains," over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children's Hospital Los Angeles count among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.


Reference: https://ift.tt/PwaDnsp

Kremlin-backed hackers have new Windows and Android malware to foist on Ukrainian foes


Google researchers said they uncovered a Kremlin-backed operation targeting recruits for the Ukrainian military with information-stealing malware for Windows and Android devices.

The malware, spread primarily through posts on Telegram, came from a persona on that platform known as "Civil Defense." Posts on the @civildefense_com_ua Telegram channel and the accompanying civildefense[.]com.ua website claimed to provide potential conscripts with free software for finding user-sourced locations of Ukrainian military recruiters. In fact, the software, available for both Windows and Android, installed infostealers. Google tracks the Kremlin-aligned threat group as UNC5812.

Dual espionage and influence campaign

"The ultimate aim of the campaign is to have victims navigate to the UNC5812-controlled 'Civil Defense' website, which advertises several different software programs for different operating systems," Google researchers wrote. "When installed, these programs result in the download of various commodity malware families."


Reference: https://ift.tt/dReKbBN

NYU Researchers Develop New Real-Time Deepfake Detection Method




This sponsored article is brought to you by NYU Tandon School of Engineering.

Deepfakes, hyper-realistic videos and audio created using artificial intelligence, present a growing threat in today’s digital world. By manipulating or fabricating content to make it appear authentic, deepfakes can be used to deceive viewers, spread disinformation, and tarnish reputations. Their misuse extends to political propaganda, social manipulation, identity theft, and cybercrime.

As deepfake technology becomes more advanced and widely accessible, the risk of societal harm escalates. Studying deepfakes is crucial to developing detection methods, raising awareness, and establishing legal frameworks to mitigate the damage they can cause in personal, professional, and global spheres. Understanding the risks associated with deepfakes and their potential impact will be necessary for preserving trust in media and digital communication.

That is where Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, comes in.

Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, is developing challenge-response systems for detecting audio and video deepfakes. [Photo: NYU Tandon]

“Broadly, I’m interested in AI safety in all of its forms. And when a technology like AI develops so rapidly, and gets good so quickly, it’s an area ripe for exploitation by people who would do harm,” Hegde said.

A native of India, Hegde has lived in places around the world, including Houston, Texas, where he spent several years as a student at Rice University; Cambridge, Massachusetts, where he did post-doctoral work in MIT’s Theory of Computation (TOC) group; and Ames, Iowa, where he held a professorship in the Electrical and Computer Engineering Department at Iowa State University.

Hegde, whose area of expertise is in data processing and machine learning, focuses his research on developing fast, robust, and certifiable algorithms for diverse data processing problems encountered in applications spanning imaging and computer vision, transportation, and materials design. At Tandon, he worked with Professor of Computer Science and Engineering Nasir Memon, who sparked his interest in deepfakes.

“Even just six years ago, generative AI technology was very rudimentary. One time, one of my students came in and showed off how the model was able to make a white circle on a dark background, and we were all really impressed by that at the time. Now you have high definition fakes of Taylor Swift, Barack Obama, the Pope — it’s stunning how far this technology has come. My view is that it may well continue to improve from here,” he said.

Hegde helped lead a research team from NYU Tandon School of Engineering that developed a new approach to combat the growing threat of real-time deepfakes (RTDFs) – sophisticated artificial-intelligence-generated fake audio and video that can convincingly mimic actual people in real-time video and voice calls.

High-profile incidents of deepfake fraud are already occurring, including a recent $25 million scam using fake video, and the need for effective countermeasures is clear.

In two separate papers, the research teams show how “challenge-response” techniques can exploit the inherent limitations of current RTDF generation pipelines, forcing quality degradations in the impersonations that reveal the deception.

In a paper titled “GOTCHA: Real-Time Video Deepfake Detection via Challenge-Response,” the researchers developed a set of eight visual challenges designed to signal to users when they are not engaging with a real person.

“Most people are familiar with CAPTCHA, the online challenge-response that verifies they’re an actual human being. Our approach mirrors that technology, essentially asking questions or making requests that RTDF cannot respond to appropriately,” said Hegde, who led the research on both papers.

Challenge frames from original and deepfake videos. Each row shows outputs for the same challenge; each column shows the same deepfake method. The green bars represent the fidelity score, with taller bars indicating higher fidelity; a missing bar means that deepfake failed that particular challenge. NYU Tandon

The video research team created a dataset of 56,247 videos from 47 participants, evaluating challenges such as head movements and deliberately obscuring or covering parts of the face. Human evaluators achieved an area-under-the-curve (AUC) score of about 89 percent in detecting deepfakes (over 80 percent is considered very good), while machine learning models reached about 73 percent.
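An AUC score has a simple interpretation: it is the probability that a randomly chosen deepfake is ranked as more suspicious than a randomly chosen genuine clip. A minimal sketch of the computation, using toy scores rather than the study’s data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (fake, real) pairs in which the fake clip gets the
    higher suspicion score (ties count half)."""
    fakes = [s for y, s in zip(labels, scores) if y == 1]
    reals = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if f > r else 0.5 if f == r else 0.0
               for f in fakes for r in reals)
    return wins / (len(fakes) * len(reals))

# Toy example: evaluators rate how fake each clip looks, 0 to 1.
labels = [1, 1, 1, 0, 0, 0]             # 1 = deepfake, 0 = genuine
scores = [0.9, 0.8, 0.4, 0.3, 0.5, 0.1]
print(round(auc(labels, scores), 2))    # → 0.89
```

An AUC of 0.5 is chance-level ranking; 1.0 is perfect separation of fakes from genuine clips.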

“Challenges like quickly moving a hand in front of your face, making dramatic facial expressions, or suddenly changing the lighting are simple for real humans to do, but very difficult for current deepfake systems to replicate convincingly when asked to do so in real-time,” said Hegde.
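The detection loop behind such a system can be sketched in a few lines. Everything below is an illustrative assumption (the challenge list, the scoring stub, and the threshold), not GOTCHA’s actual implementation; a real system would replace `fidelity_score` with a learned model that grades how faithfully the on-screen face performed the request:

```python
import random

CHALLENGES = ["turn head quickly", "cover part of the face with a hand",
              "make a dramatic facial expression", "change the lighting"]

def fidelity_score(video_clip, challenge):
    # Stand-in for a learned quality model. Here we simulate the effect
    # the paper describes: genuine feeds keep high fidelity, while RTDF
    # pipelines degrade on out-of-distribution requests.
    base = 0.95 if video_clip["genuine"] else 0.35
    return base + random.uniform(-0.05, 0.05)

def verify_caller(video_clip, n_challenges=3, threshold=0.7):
    """Issue random challenges; pass the caller only if the average
    fidelity stays above a threshold (all values illustrative)."""
    picks = random.sample(CHALLENGES, n_challenges)
    scores = [fidelity_score(video_clip, c) for c in picks]
    return sum(scores) / len(scores) >= threshold

print(verify_caller({"genuine": True}))   # → True  (fidelity stays high)
print(verify_caller({"genuine": False}))  # → False (deepfake degrades)
```

Randomizing which challenges are issued, as Hegde notes, keeps an attacker from pre-training a model on any fixed set of requests.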

Audio Challenges for Deepfake Detection

In another paper called “AI-assisted Tagging of Deepfake Audio Calls using Challenge-Response,” researchers created a taxonomy of 22 audio challenges across various categories. Some of the most effective included whispering, speaking with a “cupped” hand over the mouth, talking in a high pitch, pronouncing foreign words, and speaking over background music or speech.

“Even state-of-the-art voice cloning systems struggle to maintain quality when asked to perform these unusual vocal tasks on the fly,” said Hegde. “For instance, whispering or speaking in an unusually high pitch can significantly degrade the quality of audio deepfakes.”

The audio study involved 100 participants and over 1.6 million deepfake audio samples. It employed three detection scenarios: humans alone, AI alone, and a human-AI collaborative approach. Human evaluators achieved about 72 percent accuracy in detecting fakes, while AI alone performed better with 85 percent accuracy.

The collaborative approach, where humans made initial judgments and could revise their decisions after seeing AI predictions, achieved about 83 percent accuracy. This collaborative system also allowed AI to make final calls in cases where humans were uncertain.
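The shape of that collaborative policy is easy to state precisely. The sketch below is a guess at the general form of such a rule, with made-up thresholds and weights, not the paper’s actual decision procedure:

```python
def collaborative_verdict(human_score, ai_score,
                          uncertain_band=(0.4, 0.6), weight=0.5):
    """Fuse a human judgment with an AI prediction (both in [0, 1],
    higher = more likely fake). A confident human blends their score
    with the AI's; a human in the uncertain band defers to the AI.
    Band and weight are illustrative values, not the paper's."""
    low, high = uncertain_band
    if low <= human_score <= high:   # human uncertain: AI makes the call
        final = ai_score
    else:                            # human revises using the AI prediction
        final = weight * human_score + (1 - weight) * ai_score
    return final >= 0.5              # True = flag the call as a deepfake

print(collaborative_verdict(0.9, 0.8))  # → True  (both suspicious)
print(collaborative_verdict(0.5, 0.9))  # → True  (human deferred to AI)
print(collaborative_verdict(0.2, 0.3))  # → False (both judge it genuine)
```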

The researchers emphasize that their techniques are designed to be practical for real-world use, with most challenges taking only seconds to complete. A typical video challenge might involve a quick hand gesture or facial expression, while an audio challenge could be as simple as whispering a short sentence.

“The key is that these tasks are easy and quick for real people but hard for AI to fake in real-time,” Hegde said. “We can also randomize the challenges and combine multiple tasks for extra security.”

As deepfake technology continues to advance, the researchers plan to refine their challenge sets and explore ways to make detection even more robust. They’re particularly interested in developing “compound” challenges that combine multiple tasks simultaneously.

“Our goal is to give people reliable tools to verify who they’re really talking to online, without disrupting normal conversations,” said Hegde. “As AI gets better at creating fakes, we need to get better at detecting them. These challenge-response systems are a promising step in that direction.”

Reference: https://ift.tt/MjPILfN

Nuclear Fusion’s New Idea: An Off-the-Shelf Stellarator




For a machine that’s designed to replicate a star, the world’s newest stellarator is a surprisingly humble-looking apparatus. The kitchen-table-size contraption sits atop stacks of bricks in a cinder-block room at the Princeton Plasma Physics Laboratory (PPPL) in Princeton, N.J., its parts hand-labeled in marker.

The PPPL team designed and built this nuclear-fusion reactor, completed last year, using mainly off-the-shelf components. Its core is a glass vacuum chamber surrounded by a 3D-printed nylon shell that anchors 9,920 meticulously placed permanent rare-earth magnets. Sixteen copper-coil electromagnets resembling giant slices of pineapple wrap around the shell crosswise.

The arrangement of magnets forms the defining feature of a stellarator: an entirely external magnetic field that directs charged particles along a spiral path to confine a superheated plasma. Within this enigmatic fourth state of matter, atoms that have been stripped of their electrons collide, their nuclei fusing and releasing energy in the same process that powers the sun and other stars. Researchers hope to capture this energy and use it to produce clean, zero-carbon electricity.

PPPL’s new reactor is the first stellarator built at this government lab in 50 years. It’s also the world’s first stellarator to employ permanent magnets, rather than just electromagnets, to coax plasma into an optimal three-dimensional shape. Costing only US $640,000 and built in less than a year, the device stands in contrast to prominent stellarators like Germany’s Wendelstein 7-X, a massive, tentacled machine that took $1.1 billion and more than 20 years to construct.

Sixteen copper-coil electromagnets resembling giant slices of pineapple wrap around the stellarator’s shell. Jayme Thornton

PPPL researchers say their simpler machine demonstrates a way to build stellarators far more cheaply and quickly, allowing researchers to easily test new concepts for future fusion power plants. The team’s use of permanent magnets may not be the ticket to producing commercial-scale energy, but PPPL’s accelerated design-build-test strategy could crank out new insights on plasma behavior that could push the field forward more rapidly.

Indeed, the team’s work has already spurred the formation of two stellarator startups that are testing their own PPPL-inspired designs, which their founders hope will lead to breakthroughs in the quest for fusion energy.

Are Stellarators the Future of Nuclear Fusion?

The pursuit of energy production through nuclear fusion is considered by many to be the holy grail of clean energy. And it’s become increasingly important as a rapidly warming climate and soaring electricity demand have made the need for stable, carbon-free power ever more acute. Fusion offers the prospect of a nearly limitless source of energy with no greenhouse gas emissions. And unlike conventional nuclear fission, fusion comes with no risk of meltdowns or weaponization, and no long-lived nuclear waste.

Fusion reactions have powered the sun since it formed an estimated 4.6 billion years ago, but they have never served to produce usable energy on Earth, despite decades of effort. The problem isn’t whether fusion can work. Physics laboratories and even a few individuals have successfully fused the nuclei of hydrogen, liberating energy. But to produce more power than is consumed in the process, simply fusing atoms isn’t enough.

Fueled by free pizza, grad students meticulously placed 9,920 permanent rare-earth magnets inside the stellarator’s 3D-printed nylon shell. Jayme Thornton

The past few years have brought eye-opening advances from government-funded fusion programs such as PPPL and the Joint European Torus, as well as private companies. Enabled by gains in high-speed computing, artificial intelligence, and materials science, nuclear physicists and engineers are toppling longstanding technical hurdles. And stellarators, a once-overlooked approach, are back in the spotlight.

“Stellarators are one of the most active research areas now, with new papers coming out just about every week,” says Scott Hsu, the U.S. Department of Energy’s lead fusion coordinator. “We’re seeing new optimized designs that we weren’t capable of coming up with even 10 years ago. The other half of the story that’s just as exciting is that new superconductor technology and advanced manufacturing capabilities are making it more possible to actually realize these exquisite designs.”

Why Is Plasma Containment Important in Fusion Energy?

For atomic nuclei to fuse, the nuclei must overcome their natural electrostatic repulsion. Extremely high temperatures—in the millions of degrees—will get the particles moving fast enough to collide and fuse. Deuterium and tritium, isotopes of hydrogen with, respectively, one and two neutrons in their nuclei, are the preferred fuels for fusion because their nuclei can overcome the repulsive forces more easily than those of heavier atoms.
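The payoff of the deuterium-tritium reaction can be checked from the mass defect, the small difference in mass between the reactants and the products. This back-of-the-envelope calculation uses standard textbook atomic masses (a worked example, not a figure from the article):

```python
# Energy released by D-T fusion, from the mass defect.
# Atomic masses in unified atomic mass units; 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494

m_deuterium = 2.014102   # D  (hydrogen-2)
m_tritium   = 3.016049   # T  (hydrogen-3)
m_helium4   = 4.002602   # He-4
m_neutron   = 1.008665   # free neutron

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"D + T -> He-4 + n releases about {energy_mev:.1f} MeV")
# → D + T -> He-4 + n releases about 17.6 MeV
```

About 17.6 MeV per reaction, roughly a million times the energy of a typical chemical bond, is why so little fuel goes such a long way.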

Heating these isotopes to the required temperatures strips electrons from the atomic nuclei, forming a plasma: a maelstrom of positively charged nuclei and negatively charged electrons. The trick is keeping that searingly hot plasma contained so that some of the nuclei fuse.

Currently, there are two main approaches to containing plasma. Inertial confinement uses high-energy lasers or ion beams to rapidly compress and heat a small fuel pellet. Magnetic confinement uses powerful magnetic fields to guide the charged particles along magnetic-field lines, preventing these particles from drifting outward.
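The guiding effect behind magnetic confinement is easy to demonstrate numerically: in a uniform field with no electric field, a charged particle gyrates in circles around a field line while streaming freely along it. A small sketch using the standard Boris particle pusher, in made-up units:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(pos, vel, q_m, B, dt, steps):
    """Advance a charged particle (charge-to-mass ratio q_m) through a
    uniform magnetic field B with the Boris algorithm. With no electric
    field the speed is conserved, so the particle spirals along B."""
    x, v = list(pos), list(vel)
    t = [q_m * b * dt / 2 for b in B]      # half-step rotation vector
    t2 = sum(c * c for c in t)
    s = [2 * c / (1 + t2) for c in t]
    for _ in range(steps):
        v_prime = [v[i] + cross(v, t)[i] for i in range(3)]
        v = [v[i] + cross(v_prime, s)[i] for i in range(3)]
        x = [x[i] + v[i] * dt for i in range(3)]
    return x, v

# Field along z; velocity has a perpendicular part and a parallel part.
x, v = boris_push(pos=[0.0, 0.0, 0.0], vel=[1.0, 0.0, 0.5],
                  q_m=1.0, B=(0.0, 0.0, 1.0), dt=0.01, steps=5000)

r_perp = math.hypot(x[0], x[1] + 1.0)   # distance from gyro-center (0, -1)
print(0.9 < r_perp < 1.1, round(x[2], 2))  # → True 25.0
```

The perpendicular motion stays pinned to a unit gyro-orbit around the field line, while the particle travels freely along z; confining the *parallel* motion is exactly the problem that bending the field into a torus, and then a stellarator, is meant to solve.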

Many magnetic-confinement designs—including the $24.5 billion ITER reactor under construction since 2010 in the hills of southern France—use an internal current flowing through the plasma to help shape the magnetic field. But this current can create instabilities, and even small instabilities in the plasma can cause it to escape confinement, leading to energy losses and potential damage to the hardware.

Stellarators like PPPL’s are a type of magnetic confinement, with a twist.

How the Stellarator Was Born

Located at the end of Stellarator Road and a roughly 5-kilometer drive from Princeton University’s leafy campus, PPPL is one of 17 U.S. Department of Energy labs, and it employs about 800 scientists, engineers, and other workers. Hanging in PPPL’s lobby is a black-and-white photo of the lab’s founder, physicist Lyman Spitzer, smiling as he shows off the fanciful-looking apparatus he invented and dubbed a stellarator, or “star generator.”

According to the lab’s lore, Spitzer came up with the idea while riding a ski lift at Aspen Mountain in 1951. Enrico Fermi had observed that a simple toroidal, or doughnut-shaped, magnetic-confinement system wouldn’t be sufficient to contain plasma for nuclear fusion because the charged particles would drift outward and escape confinement.

Spitzer determined that a figure-eight design with external magnets could create helical magnetic-field lines that would spiral around the plasma and more efficiently control and contain the energetic particles. That configuration, Spitzer reasoned, would be efficient enough that it wouldn’t require large currents running through the plasma, thus reducing the risk of instabilities and allowing for steady-state operation.

“In many ways, Spitzer’s brilliant idea was the perfect answer” to the problems of plasma confinement, says Steven Cowley, PPPL’s director since 2018. “The stellarator offered something that other approaches to fusion energy couldn’t: a stable plasma field that can sustain itself without any internal current.”

Spitzer’s stellarator quickly captured the imagination of midcentury nuclear physicists and engineers. But the invention was ahead of its time.

Tokamaks vs. Stellarators

The stellarator’s lack of toroidal symmetry made it challenging to build. The external magnetic coils needed to be precisely engineered into complex, three-dimensional shapes to generate the twisted magnetic fields required for stable plasma confinement. In the 1950s, researchers lacked the high-performance computers needed to design optimal three-dimensional magnetic fields and the engineering capability to build machines with the requisite precision.

Meanwhile, physicists in the Soviet Union were testing a new configuration for magnetically confined nuclear fusion: a doughnut-shaped device called a tokamak—a Russian acronym that stands for “toroidal chamber with magnetic coils.” Tokamaks bend an externally applied magnetic field into a helical field inside by sending a current through the plasma. They seemed to be able to produce plasmas that were hotter and denser than those produced by stellarators. And compared with the outrageously complex geometry of stellarators, the symmetry of the tokamaks’ toroidal shape made them much easier to build.

Lyman Spitzer built the first stellarator in the early 1950s, using a figure-eight design and external magnets. PPPL

Following the lead of other nations’ fusion programs, the DOE shifted most of its fusion resources to tokamak research. PPPL converted Spitzer’s Model C stellarator into a tokamak in 1969.

Since then, tokamaks have dominated fusion-energy research. But by the late 1980s, the limitations of the approach were becoming more apparent. In particular, the currents that run through a tokamak’s plasma to stabilize and heat it are themselves a source of instabilities as the currents get stronger.

To force the restive plasma into submission, the geometrically simple tokamaks need additional features that increase their complexity and cost. Advanced tokamaks—there are about 60 currently operating—have systems for heating and controlling the plasma and massive arrays of magnets to create the confining magnetic fields. They also have cryogenics to cool the magnets to superconducting temperatures a few meters away from a 150 million °C plasma.

Tokamaks thus far have produced energy only in short pulses. “After 70 years, nobody really has even a good concept for how to make a steady-state tokamak,” notes Michael Zarnstorff, a staff research physicist at PPPL. “The longest pulse so far is just a few minutes. When we talk to electric utilities, that’s not actually what they want to buy.”

Computational Power Revives the Stellarator

With tokamaks gobbling up most of the world’s public fusion-energy funds, stellarator research lay mostly dormant until the 1980s. Then, some theorists started to put increasingly powerful computers to work to help them optimize the placement of magnetic coils to more precisely shape the magnetic fields.

The effort got a boost in 1981, when then-PPPL physicist Allen Boozer invented a coordinate system—known in the physics community as Boozer coordinates—that helps scientists understand how different configurations of magnets affect magnetic fields and plasma confinement. They can then design better devices to maintain stable plasma conditions for fusion. Boozer coordinates can also reveal hidden symmetries in the three-dimensional magnetic-field structure, which aren’t easily visible in other coordinate systems. These symmetries can significantly improve plasma confinement, reduce energy losses, and make the fusion process more efficient.

“The accelerating computational power finally allowed researchers to challenge the so-called fatal flaw of stellarators: the lack of toroidal symmetry,” says Boozer, who is now a professor of applied physics at Columbia University.

The new insights gave rise to stellarator designs that were far more complex than anything Spitzer could have imagined [see sidebar, “Trailblazing Stellarators”]. Japan’s Large Helical Device came online in 1998 after eight years of construction. The University of Wisconsin’s Helically Symmetric Experiment, whose magnetic-field coils featured an innovative quasi-helical symmetry, took nine years to build and began operation in 1999. And Germany’s Wendelstein 7-X—the largest and most advanced stellarator ever built—produced its first plasma in 2015, after more than 20 years of design and construction.

Experiment Failure Leads to New Stellarator Design

In the late 1990s, PPPL physicists and engineers began designing their own version, called the National Compact Stellarator Experiment (NCSX). Envisioned as the world’s most advanced stellarator, it employed a new magnetic-confinement concept called quasi-axisymmetry—a compromise that mimics the symmetry of a tokamak while retaining the stability and confinement benefits of a stellarator by using only externally generated magnetic fields.

“We tapped into every supercomputer we could find,” says Zarnstorff, who led the NCSX design team, “performing simulations of hundreds of thousands of plasma configurations to optimize the physics properties.”

Three Ways to Send Atoms on a Fantastical Helical Ride


An illustration of three different types of stellarators.

But the design was, like Spitzer’s original invention, ahead of its time. Engineers struggled to meet the precise tolerances, which allowed for a maximum variation from assigned dimensions of only 1.5 millimeters across the entire device. In 2008, with the project tens of millions of dollars over budget and years behind schedule, NCSX was canceled. “That was a very sad day around here,” says Zarnstorff. “We got to build all the pieces, but we never got to put it together.”

Now, a segment of the NCSX vacuum vessel—a contorted hunk made from the superalloy Inconel—towers over a lonely corner of the C-Site Stellarator Building on PPPL’s campus. But if its presence is a reminder of failure, it is equally a reminder of the lessons learned from the $70 million project.

For Zarnstorff, the most important insights came from the engineering postmortem. Engineers concluded that, even if they had managed to successfully build and operate NCSX, it was doomed by the lack of a viable way to take the machine apart for repairs or reconfigure the magnets and other components.

With the experience gained from NCSX and PPPL physicists’ ongoing collaborations with the costly, delay-plagued Wendelstein 7-X program, the path forward became clearer. “Whatever we built next, we knew we needed to make it less expensively and more reliably,” says Zarnstorff. “And we knew we needed to build it in a way that would allow us to take the thing apart.”

A Testbed for Fusion Energy

In 2014, Zarnstorff began thinking about building a first-of-its-kind stellarator that would use permanent magnets, rather than electromagnets, to create its helical field, while retaining electromagnets to shape the toroidal field. (Electromagnets generate a magnetic field when an electric current flows through them and can be turned on or off, whereas permanent magnets produce a constant magnetic field without needing an external power source.)

Even the strongest permanent magnets wouldn’t be capable of confining plasma robustly enough to produce commercial-scale fusion power. But they could be used to create a lower-cost experimental device that would be easier to build and maintain. And that, crucially, would allow researchers to easily adjust and test magnetic fields that could inform the path to a power-producing device.

PPPL dubbed the device Muse. “Muse was envisioned as a testbed for innovative magnetic configurations and improving theoretical models,” says PPPL research physicist Kenneth Hammond, who is now leading the project. “Rather than immediate commercial application, it’s more focused on exploring fundamental aspects of stellarator design and plasma behavior.”

The Muse team designed the reactor with two independent sets of magnets. To coax charged particles into a corkscrew-like trajectory, small permanent neodymium magnets are arranged in pairs and mounted to a dozen 3D-printed panels surrounding the glass vacuum chamber, which was custom-made by glass blowers. Adjacent rows of magnets are oriented in opposite directions, twisting the magnetic-field lines at the outside edges.

Outside the shell, 16 electromagnets composed of circular copper coils generate the toroidal part of the magnetic field. These coils were mass-produced by PPPL in the 1960s and have been workhorses for rapid prototyping in numerous physics laboratories ever since.

“In terms of its ability to confine particles, Muse is two orders of magnitude better than any stellarator previously built,” says Hammond. “And because it’s the first working stellarator with quasi-axisymmetry, we will be able to test some of the theories we never got to test on NCSX.”

The neodymium magnets are a little bigger than a button magnet that might be used to hold a photo to a refrigerator door. Despite their compactness, they pack a remarkable punch. During my visit to PPPL, I turned a pair of magnets in my hands, alternating their polarities, and found it difficult to push them together and pull them apart.

Graduate students did the meticulous work of placing and securing the magnets. “This is a machine built on pizza, basically,” says Cowley, PPPL’s director. “You can get a lot out of graduate students if you give them pizza. There may have been beer too, but if there was, I don’t want to know about it.”

The Muse project was financed by internal R&D funds and used mostly off-the-shelf components. “Having done it this way, I would never choose to do it any other way,” Zarnstorff says.

Stellarex and Thea Energy Advance Stellarator Concepts

Now that Muse has demonstrated that stellarators can be made quickly, cheaply, and highly accurately, companies founded by current and former PPPL researchers are moving forward with Muse-inspired designs.

Zarnstorff recently cofounded a company called Stellarex. He says he sees stellarators as the best path to fusion energy, but he hasn’t landed on a magnet configuration for future machines. “It may be a combination of permanent and superconducting electromagnets, but we’re not religious about any one particular approach; we’re leaving those options open for now.” The company has secured some DOE research grants and is now focused on raising money from investors.

Thea Energy, a startup led by David Gates, who until recently was the head of stellarator physics at PPPL, is further along with its power-plant concept, also inspired by Muse. Like Muse, Thea focuses on simplified manufacture and maintenance. Unlike Muse, the Thea concept uses planar (flat) electromagnetic coils built of high-temperature superconductors.


“The idea is to use hundreds of small electromagnets that behave a lot like permanent magnets, with each creating a dipole field that can be switched on and off,” says Gates. “By using so many individually actuated coils, we can get a high degree of control, and we can dynamically adjust and shape the magnetic fields in real time to optimize performance and adapt to different conditions.”

The company has raised more than $23 million and is designing and building a half-scale prototype of its initial project, which it calls Eos, in Kearny, N.J. “At first, it will be focused on producing neutrons and isotopes like tritium,” says Gates. “The technology is designed to be a stepping stone toward a fusion power plant called Helios, with the potential for near-term commercialization.”

Stellarator Startup Leverages Supercomputing

Of all the private stellarator startups, Type One Energy is the most well funded, having raised $82.5 million from investors that include Bill Gates’s Breakthrough Energy Ventures. Type One’s leaders contributed to the design and construction of both the University of Wisconsin’s Helically Symmetric Experiment and Germany’s Wendelstein 7-X stellarators.

The Type One stellarator uses a highly optimized magnetic-field configuration to improve plasma confinement. Optimization can relax the stringent construction tolerances typically required for stellarators, making them easier and more cost-effective to engineer and build.

Type One’s design, like that of Thea Energy’s Eos, makes use of high-temperature superconducting magnets, which provide higher magnetic strength, require less cooling power, and could lower costs and allow for a more compact and efficient reactor. The magnets, licensed from MIT, were designed for a tokamak, but Type One is modifying the coil structure to accommodate the intricate twists and turns of a stellarator.

In a sign that stellarator research may be moving from mainly scientific experiments into the race to field the first commercially viable reactor, Type One recently announced that it will build “the world’s most advanced stellarator” at the Bull Run Fossil Plant in Clinton, Tenn. To construct what it’s calling Infinity One—expected to be operational by early 2029—Type One is teaming up with the Tennessee Valley Authority and the DOE’s Oak Ridge National Laboratory.

“As an engineering testbed, Infinity One will not be producing energy,” says Type One CEO Chris Mowry. “Instead, it will allow us to retire any remaining risks and sign off on key features of the fusion pilot plant we are currently designing. Once the design validations are complete, we will begin the construction of our pilot plant to put fusion electrons on the grid.”

To help optimize the magnetic-field configuration, Mowry and his colleagues are utilizing Summit, one of Oak Ridge’s most powerful supercomputers. Summit is capable of performing more than 200 million times as many operations per second as the supercomputers of the early 1980s, when Wendelstein 7-X was first conceptualized.

AI Boosts Fusion Reactor Efficiency

Advances in computational power are already leading to faster design cycles, greater plasma stability, and better reactor designs. Ten years ago, an analysis of a million different configurations would have taken months; now a researcher can get answers in hours.

And yet, there are an infinite number of ways to make any particular magnetic field. “To find our way to an optimum fusion machine, we may need to consider something like 10 billion configurations,” says PPPL’s Cowley. “If it takes months to make that analysis, even with high-performance computing, that’s still not a route to fusion in a short amount of time.”

In the hope of shortcutting some of those steps, PPPL and other labs are investing in artificial intelligence and using surrogate models that can search and then rapidly home in on promising solutions. “Then, you start running progressively more precise models, which bring you closer and closer to the answer,” Cowley says. “That way we can converge on something in a useful amount of time.”
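The coarse-to-fine strategy Cowley describes can be sketched as a two-stage search: a cheap surrogate screens a huge candidate pool, and the expensive model runs only on the shortlist. The objective and surrogate below are toy stand-ins (a quadratic landscape with fine-scale ripple), not plasma physics:

```python
import math
import random

def expensive_model(cfg):
    # Stand-in for a full physics simulation scoring one magnet
    # configuration (lower = better). Includes fine structure that
    # the surrogate cannot see.
    return (cfg - 3.7) ** 2 + 0.1 * math.sin(40 * cfg)

def surrogate(cfg):
    # Cheap approximation (e.g., a trained neural network): captures
    # the broad landscape but misses the fine detail.
    return (cfg - 3.7) ** 2

def coarse_to_fine_search(n_coarse=100_000, k=20):
    """Screen many configurations with the cheap surrogate, then run
    the expensive model only on the k most promising candidates."""
    candidates = [random.uniform(0, 10) for _ in range(n_coarse)]
    shortlist = sorted(candidates, key=surrogate)[:k]
    return min(shortlist, key=expensive_model)

random.seed(0)
best = coarse_to_fine_search()
print(abs(best - 3.7) < 0.1)  # → True: lands near the true optimum
```

The expensive model is evaluated only 20 times instead of 100,000, which is the whole point: the surrogate buys a factor of thousands in throughput at the cost of needing a final, precise pass.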

But the biggest remaining hurdles for stellarators, and magnetic-confinement fusion in general, involve engineering challenges rather than physics challenges, say Cowley and other fusion experts. These include developing materials that can withstand extreme conditions, managing heat and power efficiently, advancing magnet technology, and integrating all these components into a functional and scalable reactor.

Over the past half decade, the vibe at PPPL has grown increasingly optimistic, as new buildings go up and new researchers arrive on Stellarator Road to become part of what may be the grandest scientific challenge of the 21st century: enabling a world powered by safe, plentiful, carbon-free energy.

PPPL recently broke ground on a new $110 million office and laboratory building that will house theoretical and computational scientists and support the work in artificial intelligence and high-performance computing that is increasingly propelling the quest for fusion. The new facility will also provide space for research supporting PPPL’s expanded mission into microelectronics, quantum sensors and devices, and sustainability sciences.

PPPL researchers’ quest will take a lot of hard work and, probably, a fair bit of luck. Stellarator Road may be only a mile long, but the path to success in fusion energy will certainly stretch considerably farther.

Trailblazing Stellarators


In contrast to Muse’s relatively simple, low-cost approach, these pioneering stellarators are some of the most technically demanding machines ever built, with intricate electromagnetic coil systems and complex geometries that require precise engineering. The projects have provided valuable insights into plasma confinement, magnetic-field optimization, and the potential for steady-state operation, and have moved the scientific community closer to achieving practical and sustainable fusion energy.

Large Helical Device (LHD)




Helically Symmetric Experiment (HSX)




Wendelstein 7-X (W7-X)



Reference: https://ift.tt/gxhQJrd

Backdoor infecting VPNs used “magic packets” for stealth and security

When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can’t be leveraged by comp...