Tuesday, December 31, 2024

In 2025, People Will Try Living in This Underwater Habitat




The future of human habitation in the sea is taking shape in an abandoned quarry on the border of Wales and England. There, the ocean-exploration organization Deep has embarked on a multiyear quest to enable scientists to live on the seafloor at depths up to 200 meters for weeks, months, and possibly even years.

“Aquarius Reef Base in St. Croix was the last installed habitat back in 1987, and there hasn’t been much ground broken in about 40 years,” says Kirk Krack, human diver performance lead at Deep. “We’re trying to bring ocean science and engineering into the 21st century.”

This article is part of our special report Top Tech 2025.

Deep’s agenda has a major milestone this year—the development and testing of a small, modular habitat called Vanguard. This transportable, pressurized underwater shelter, capable of housing up to three divers for up to a week or so, will be a stepping stone to a more permanent modular habitat system—known as Sentinel—that is set to launch in 2027. “By 2030, we hope to see a permanent human presence in the ocean,” says Krack. All of this is now possible thanks to an advanced approach that combines 3D printing and welding to build these large habitation structures.

How would such a presence benefit marine science? Krack runs the numbers for me: “With current diving at 150 to 200 meters, you can only get 10 minutes of work completed, followed by 6 hours of decompression. With our underwater habitats we’ll be able to do seven years’ worth of work in 30 days with shorter decompression time. More than 90 percent of the ocean’s biodiversity lives within 200 meters’ depth and at the shorelines, and we only know about 20 percent of it.” Understanding these undersea ecosystems and environments is a crucial piece of the climate puzzle, he adds: The oceans absorb nearly a quarter of human-caused carbon dioxide and roughly 90 percent of the excess heat generated by human activity.
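Krack’s comparison can be sanity-checked with rough arithmetic. The sketch below is a back-of-the-envelope check only; the six hours of seabed work per day is an illustrative assumption, not a figure from Deep.

```python
# Back-of-the-envelope check of Krack's comparison. The daily bottom time for
# saturation divers is an assumption for illustration, not a figure from Deep.
bounce_minutes_per_dive = 10      # useful work per conventional dive at 150-200 m
sat_hours_per_day = 6             # assumed seabed work per day from a habitat
mission_days = 30

sat_work_minutes = sat_hours_per_day * 60 * mission_days          # 10,800 minutes
equivalent_bounce_dives = sat_work_minutes / bounce_minutes_per_dive

print(equivalent_bounce_dives)    # 1080 conventional dives' worth of bottom time
# At roughly three conventional deep dives a week (~156 a year), that is on the
# order of seven years of work, consistent with Krack's estimate.
```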

Underwater Living Gets the Green Light This Year

Deep is looking to build an underwater life-support infrastructure that features not just modular habitats but also training programs for the scientists who will use them. Long-term habitation underwater involves a specialized type of activity called saturation diving, so named because the diver’s tissues become saturated with gases, such as nitrogen or helium. It has been used for decades in the offshore oil and gas sectors but is uncommon in scientific diving, outside of the relatively small number of researchers fortunate enough to have spent time in Aquarius. Deep wants to make it a standard practice for undersea researchers.

The first rung in that ladder is Vanguard, a rapidly deployable, expedition-style underwater habitat the size of a shipping container that can be transported and supplied by a ship and house three people down to depths of about 100 meters. It is set to be tested in a quarry outside of Chepstow, Wales, in the first quarter of 2025.

The Vanguard habitat, seen here in an illustrator’s rendering, will be small enough to be transportable and yet capable of supporting three people at a maximum depth of 100 meters. Deep

The plan is to be able to deploy Vanguard wherever it’s needed for a week or so. Divers will be able to work for hours on the seabed before retiring to the module for meals and rest.

One of the novel features of Vanguard is its extraordinary flexibility when it comes to power. There are currently three options: When deployed close to shore, it can connect by cable to an onshore distribution center using local renewables. Farther out at sea, it could use supply from floating renewable-energy farms and fuel cells that would feed Vanguard via an umbilical link, or it could be supplied by an underwater energy-storage system that contains multiple batteries that can be charged, retrieved, and redeployed via subsea cables.

The breathing gases will be housed in external tanks on the seabed and contain a mix of oxygen and helium that will depend on the depth. In the event of an emergency, saturated divers won’t be able to swim to the surface without suffering a life-threatening case of decompression illness. So, Vanguard, as well as the future Sentinel, will also have backup power sufficient to provide 96 hours of life support, in an external, adjacent pod on the seafloor.

Data gathered from Vanguard this year will help pave the way for Sentinel, which will be made up of pods of different sizes and capabilities. These pods will even be capable of being set to different internal pressures, so that different sections can perform different functions. For example, the labs could be at the local bathymetric pressure for analyzing samples in their natural environment, but alongside those a 1-atmosphere chamber could be set up where submersibles could dock and visitors could observe the habitat without needing to equalize with the local pressure.

As Deep sees it, a typical configuration would house six people—each with their own bedroom and bathroom. It would also have a suite of scientific equipment including full wet labs to perform genetic analyses, saving days by not having to transport samples to a topside lab for analysis.

“By 2030, we hope to see a permanent human presence in the ocean,” says one of the project’s principals.

A Sentinel configuration is designed to go for a month before needing a resupply. Gases will be topped off via an umbilical link from a surface buoy, and food, water, and other supplies would be brought down during planned crew changes every 28 days.

But people will be able to live in Sentinel for months, if not years. “Once you’re saturated, it doesn’t matter if you’re there for six days or six years, but most people will be there for 28 days due to crew changes,” says Krack.

Where 3D Printing and Welding Meet

It’s a very ambitious vision, and Deep has concluded that it can be achieved only with advanced manufacturing techniques. Deep’s manufacturing arm, Deep Manufacturing Labs (DML), has come up with an innovative approach for building the pressure hulls of the habitat modules. It’s using robots to combine metal additive manufacturing with welding in a process known as wire-arc additive manufacturing. With these robots, metal layers are built up as they would be in 3D printing, but the layers are fused together via welding using a metal-inert-gas torch.

At Deep’s base of operations at a former quarry in Tidenham, England, resources include two Triton 3300/3 MK II submarines. One of them is seen here at Deep’s floating “island” dock in the quarry. Deep

During a tour of the DML, Harry Thompson, advanced manufacturing engineering lead, says, “We sit in a gray area between welding and additive process, so we’re following welding rules, but for pressure vessels we [also] follow a stress-relieving process that is applicable for an additive component. We’re also testing all the parts with nondestructive testing.”

Each of the robot arms has an operating range of 2.8 by 3.2 meters, but DML has boosted this area by means of a concept it calls Hexbot. It’s based on six robotic arms programmed to work in unison to create habitat hulls with a diameter of up to 6.1 meters. The biggest challenge with creating the hulls is managing the heat during the additive process to keep the parts from deforming as they are created. For this, DML is relying on the use of heat-tolerant steels and on very precisely optimized process parameters.

Engineering Challenges for Long-Term Habitation

Besides manufacturing, there are other challenges that are unique to the tricky business of keeping people happy and alive 200 meters underwater. One of the most fascinating of these revolves around helium. Because of its narcotic effect at high pressure, nitrogen shouldn’t be breathed by humans at depths below about 60 meters. So, at 200 meters, the breathing mix in the habitat will be 2 percent oxygen and 98 percent helium. But because of its very high thermal conductivity, “we need to heat helium to 31–32 °C to get a normal 21–22 °C internal temperature environment,” says Rick Goddard, director of engineering at Deep. “This creates a humid atmosphere, so porous materials become a breeding ground for mold.”
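The 2 percent figure follows from partial-pressure arithmetic: the habitat sits at ambient pressure, and what the body responds to is the partial pressure of oxygen, not its percentage. Here is a minimal sketch, using the rule of thumb of roughly 1 atmosphere of added pressure per 10 meters of seawater:

```python
# Why only 2 percent oxygen at 200 meters: what matters physiologically is the
# oxygen partial pressure, and the habitat is at ambient pressure.
depth_m = 200
ambient_atm = 1 + depth_m / 10     # roughly 21 atm at 200 m
o2_fraction = 0.02                 # 2 percent oxygen, 98 percent helium

ppO2_atm = o2_fraction * ambient_atm
print(round(ppO2_atm, 2))          # ~0.42 atm, about twice the 0.21 atm in surface air
# Breathing ordinary air (21 percent oxygen) at this depth would push the oxygen
# partial pressure to roughly 4.4 atm, far beyond safe limits, which is why the
# oxygen fraction in the habitat has to be so low.
```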

There are a host of other materials-related challenges, too. The materials can’t emit gases, and they must be acoustically insulating, lightweight, and structurally sound at high pressures.

Deep’s proving grounds are a former quarry in Tidenham, England, that has a maximum depth of 80 meters. Deep

There are also many electrical challenges. “Helium breaks certain electrical components with a high degree of certainty,” says Goddard. “We’ve had to pull devices to pieces, change chips, change [printed circuit boards], and even design our own PCBs that don’t off-gas.”

The electrical system will also have to accommodate an energy mix with such varied sources as floating solar farms and fuel cells on a surface buoy. Energy-storage devices present major electrical engineering challenges: Helium seeps into capacitors and can destroy them when it tries to escape during decompression. Batteries, too, develop problems at high pressure, so they will have to be housed outside the habitat in 1-atmosphere pressure vessels or in oil-filled blocks that prevent a differential pressure inside.

Is It Possible to Live in the Ocean for Months or Years?

When you’re trying to be the SpaceX of the ocean, questions are naturally going to fly about the feasibility of such an ambition. How likely is it that Deep can follow through? At least one top authority, John Clarke, is a believer. “I’ve been astounded by the quality of the engineering methods and expertise applied to the problems at hand and I am enthusiastic about how DEEP is applying new technology,” says Clarke, who was lead scientist of the U.S. Navy Experimental Diving Unit. “They are advancing well beyond expectations…. I gladly endorse Deep in their quest to expand humankind’s embrace of the sea.”

Reference: https://ift.tt/cs54XlA

The Top 10 AI Stories of 2024




IEEE Spectrum’s most popular AI stories of the last year show a clear theme. In 2024, the world struggled to come to terms with generative AI’s capabilities and flaws—both of which are significant. Two of the year’s most read AI articles dealt with chatbots’ coding abilities, while another looked at the best way to prompt chatbots and image generators (and found that humans are dispensable). In the “flaws” column, one in-depth investigation found that the image generator Midjourney has a bad habit of spitting out images that are nearly identical to trademarked characters and scenes from copyrighted movies, while another investigation looked at how bad actors can use the image generator Stable Diffusion version 1.5 to make child sexual abuse material.

Two of my favorites from this best-of collection are feature articles that tell remarkable stories. In one, an AI researcher narrates how he helped gig workers gather and organize data in order to audit their employer. In another, a sociologist who embedded himself in a buzzy startup for 19 months describes how engineers cut corners to meet venture capitalists’ expectations. Both of these important stories bring readers inside the hype bubble for a real view of how AI-powered companies leverage human labor. In 2025, IEEE Spectrum promises to keep giving you the ground truth.

1. AI Prompt Engineering Is Dead


Even as the generative AI boom brought fears that chatbots and image generators would take away jobs, some hoped that it would create entirely new jobs—like prompt engineering, which is the careful construction of prompts to get a generative AI tool to create exactly the desired output. Well, this article put a damper on that hope. Spectrum editor Dina Genkina reported on new research showing that AI models do a better job of constructing prompts than human engineers.

2. Generative AI Has a Visual Plagiarism Problem


The New York Times and other newspapers have already sued AI companies for text plagiarism, arguing that chatbots are lifting their copyrighted stories verbatim. In this important investigation, Gary Marcus and Reid Southen showed clear examples of visual plagiarism, using Midjourney to produce images that looked almost exactly like screenshots from major movies, as well as trademarked characters such as Darth Vader, Homer Simpson, and Sonic the Hedgehog. It’s worth taking a look at the full article just to see the imagery.

The authors write: “These results provide powerful evidence that Midjourney has trained on copyrighted materials, and establish that at least some generative AI systems may produce plagiaristic outputs, even when not directly asked to do so, potentially exposing users to copyright infringement claims.”

3. How Good Is ChatGPT at Coding, Really?


When OpenAI’s ChatGPT first came out in late 2022, people were amazed by its capacity to write code. But some researchers who wanted an objective measure of its ability evaluated its code in terms of functionality, complexity and security. They tested GPT-3.5 (a version of the large language model that powers ChatGPT) on 728 coding problems from the LeetCode testing platform in five programming languages. They found that it was pretty good on coding problems that had been on LeetCode before 2021, presumably because it had seen those problems in its training data. With more recent problems, its performance fell off dramatically: Its score on functional code for easy coding problems dropped from 89 percent to 52 percent, and for hard problems it dropped from 40 percent to 0.66 percent.
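For readers who want a concrete picture of that methodology, here is a minimal, hypothetical sketch of the before/after-training-cutoff comparison described above. It is not the researchers’ actual harness, and the field names are illustrative.

```python
# Hypothetical sketch of comparing functional pass rates on problems published
# before and after a model's training cutoff. Not the researchers' actual code.
from dataclasses import dataclass

@dataclass
class Result:
    year: int        # year the problem appeared on the platform
    passed: bool     # did the generated solution pass all functional tests?

def pass_rates(results, cutoff_year=2021):
    old = [r.passed for r in results if r.year < cutoff_year]
    new = [r.passed for r in results if r.year >= cutoff_year]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(old), rate(new)

# Toy data: strong on pre-cutoff problems, weaker on newer ones.
toy = [Result(2019, True), Result(2020, True), Result(2022, False), Result(2023, True)]
print(pass_rates(toy))   # (1.0, 0.5)
```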

It’s worth noting, though, that the OpenAI models GPT-4 and GPT-4o are superior to the older model GPT-3.5. And while general-purpose generative AI platforms continue to improve at coding, 2024 also saw the proliferation of increasingly capable AI tools that are tailored for coding.

4. AI Copilots Are Changing How Coding Is Taught


That third story on our list perfectly sets up the fourth, which takes a good look at how professors are altering their approaches to teaching coding, given the aforementioned proliferation of coding assistants. Introductory computer science courses are focusing less on coding syntax and more on testing and debugging, so students are better equipped to catch mistakes made by their AI assistants. Another new emphasis is problem decomposition, says one professor: “This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve.” Overall, instructors say that their students’ use of AI tools is freeing them up to teach higher-level thinking that used to be reserved for advanced classes.

5. Shipt’s Algorithm Squeezed Gig Workers. They Fought Back


This feature story was authored by an AI researcher, Dana Calacci, who banded together with gig workers at Shipt, the shopping and delivery platform owned by Target. The workers knew that Shipt had changed its payment algorithm in some mysterious way, and many had seen their pay drop, but they couldn’t get answers from the company—so they started collecting data themselves. When they joined forces with Calacci, he worked with them to build a textbot so workers could easily send screenshots of their pay receipts. The tool also analyzed the data, and told each worker whether they were getting paid more or less under the new algorithm. It found that 40 percent of workers had gotten an unannounced pay cut, and the workers used the findings to gain media attention as they organized strikes, boycotts, and protests.
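As a rough illustration of the kind of per-worker analysis the tool performed, here is a hypothetical sketch; the cutoff date, field names, and numbers are placeholders, not details from the article.

```python
# Hypothetical sketch of a per-worker pay comparison around an algorithm change.
# The cutoff date and the numbers are placeholders, not data from the article.
from datetime import date

ALGO_CHANGE = date(2020, 9, 1)   # placeholder cutoff

def average(xs):
    return sum(xs) / len(xs)

def pay_change(records):
    """records: list of (order_date, payout_dollars) for one worker."""
    before = [pay for d, pay in records if d < ALGO_CHANGE]
    after = [pay for d, pay in records if d >= ALGO_CHANGE]
    return average(after) - average(before)   # negative means an effective pay cut

records = [(date(2020, 8, 5), 22.0), (date(2020, 8, 20), 24.0),
           (date(2020, 9, 10), 18.5), (date(2020, 9, 25), 19.0)]
print(pay_change(records))   # -4.25, i.e., about $4 less per order on average
```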

Calacci writes: “Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque. This ‘information asymmetry’ helps companies better control their workforces—they set the terms without divulging details, and workers’ only choice is whether or not to accept those terms.... There’s no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure.”

6. 15 Graphs That Explain the State of AI in 2024


Like a couple of Russian nesting dolls, here we have a list within a list. Every year Stanford puts out its massive AI Index, which has hundreds of charts to track trends within AI; chapters include technical performance, responsible AI, economy, education, and more. For the past four years, Spectrum has read the whole thing and pulled out those charts that seem most indicative of the current state of AI. In 2024, we highlighted investment in generative AI, the cost and environmental footprint of training foundation models, corporate reports of AI helping the bottom line, and public wariness of AI.

7. A New Type of Neural Network Is More Interpretable


Neural networks have been the dominant architecture in AI since 2012, when a system called AlexNet combined GPU power with a many-layered neural network to get never-before-seen performance on an image-recognition task. But they have their downsides, including their lack of transparency: They can provide an answer that is often correct, but can’t show their work. This article describes a fundamentally new way to make neural networks that are more interpretable than traditional systems and also seem to be more accurate. When the designers tested their new model on physics questions and differential equations, they were able to visually map out how the model got its (often correct) answers.

8. AI Takes On India’s Most Congested City


The next story brings us to the tech hub of Bengaluru, India, which has grown faster in population than in infrastructure—leaving it with some of the most congested streets in the world. Now, a former chip engineer has been given the daunting task of taming the traffic. He has turned to AI for help, using a tool that models congestion, predicts traffic jams, identifies events that draw big crowds, and enables police officers to log incidents. For next steps, the traffic czar plans to integrate data from security cameras throughout the city, which would allow for automated vehicle counting and classification, as well as data from food delivery and ride sharing companies.

9. Was an AI Image Generator Taken Down for Making Child Porn?


In another important investigation exclusive to Spectrum, AI policy researchers David Evan Harris and Dave Willner explained how some AI image generators are capable of making child sexual abuse material (CSAM), even though it’s against the stated terms of use. They focused particularly on the open-source model Stable Diffusion version 1.5, and on the platforms Hugging Face and Civitai that host the model and make it available for free download (in the case of Hugging Face, it was downloaded millions of times per month). They were building on prior research that has shown that many image generators were trained on a data set that included hundreds of pieces of CSAM. Harris and Willner contacted companies to ask for responses to these allegations and, perhaps in response to their inquiries, Stable Diffusion 1.5 promptly disappeared from Hugging Face. The authors argue that it’s time for AI companies and hosting platforms to take seriously their potential liability.

10. The Messy Reality Behind a Silicon Valley Unicorn


What happens when a sociologist embeds himself in a San Francisco startup that has just received an initial venture capital investment of $4.5 million and quickly shot up through the ranks to become one of Silicon Valley’s “unicorns” with a valuation of more than $1 billion? Answer: You get a deeply engaging book called Behind the Startup: How Venture Capital Shapes Work, Innovation, and Inequality, from which Spectrum excerpted a chapter. The sociologist author, Benjamin Shestakofsky, describes how the company that he calls AllDone (not its real name) prioritized growth at all costs to meet investor expectations, leading engineers to focus on recruiting both staff and users rather than doing much actual engineering.

Although the company’s whole value proposition was that it would automatically match people who needed local services with local service providers, it ended up outsourcing the matching process to a Filipino workforce that manually made matches. “The Filipino contractors effectively functioned as artificial artificial intelligence,” Shestakofsky writes, “simulating the output of software algorithms that had yet to be completed.”

Reference: https://ift.tt/bnpxtLz

Monday, December 30, 2024

IEEE Continues to Strengthen Its Research Integrity Process




Ensuring the integrity of the research IEEE publishes is crucial to maintaining the organization’s credibility as a scholarly publisher.

IEEE produces more than 30 percent of the world’s scholarly literature in electrical engineering, electronics, and computer science. In fact, the 50 top-patenting companies cite IEEE nearly three times more than any other technical-literature publisher.

With the volume of academic paper submissions increasing over the years, IEEE is continuously evolving its publishing processes based on industry best practices to help detect papers with issues of concern. They include plagiarism, inappropriate citations, author coercion by editors or reviewers, and the use of artificial intelligence to generate content without proper disclosure.

“Within the overall publishing industry, there are now many more types of misconduct and many more cases of violations, so it has become essential for all technical publishers to deal seriously with them,” says IEEE Fellow Sergio Benedetto, vice president of IEEE Publication Services and Products.

“Authors are also more careful to choose publishers that are serious about addressing misconduct. It has become important not only for the integrity of research but also for the publishing business,” adds Benedetto, a member of the IEEE Board of Directors.

“It’s important to understand that the IEEE is not blind to this problem but rather is investing heavily in trying to solve it,” says Steven Heffner, managing director of IEEE Publications. “We’re investing in technology around detection and investigation. We’re also investing with partners to develop their technologies so that we can help the whole industry scale up a detection system. And we’re also investing in staff resources.”

New ways to rig the system

Some of the root causes driving the misconduct have to do with incentives offered to authors to encourage them to publish more, Heffner says.

“Promotions and tenure are tied to publishing research, and the old ‘publish or perish’ imperative is still in operation,” he says. “There are now more ways for scholars to be evaluated through technology tracking the number of times their work has been cited, and through the use of bibliometrics,” a statistical method that counts the number of publications and citations by an author or researcher. Those statistics are used to validate the value their work brings to their organization.

“Even more sophisticated ways are being used to manipulate the system of bibliometrics,” Heffner says, “such as citation pseudo cartels that operate in a quid-pro-quo way of ‘I’ll cite you if you cite me.’

“Unfortunately, these are all creating more opportunities for people to abuse the system.”

Other activities on the rise include paper mills run by for-profit companies, which create fake journal articles that appear to be genuine research and then sell authorship to would-be scholars.

“Within the overall publishing industry, there are now many more types of misconduct and many more cases of violations, so it has become essential for all technical publishers to deal seriously with them.” —Sergio Benedetto

“I think the paper mill is the most dangerous at-scale problem we’ve got,” Heffner says. “But the old crimes such as plagiarism still persist, and in some ways are getting harder to spot.”

Benedetto says some fraudulent authors are making up the names and websites of reviewers, so their articles get accepted without undergoing peer review. It’s a serious issue, he says.

“I don’t think IEEE is unique in its experience in this phenomenon of misconduct,” he says. “Several commercial publishers and many in fields outside of technology are seeing the same problems.”

Addressing author misconduct

The IEEE PSPB Publishing Content Committee, which handles editorial misconduct cases, treats violations of its publication process as major infractions.

“IEEE’s volunteers are particularly strong on developing policies,” Heffner says. “We need that governance, but we also need their expertise as people who are participants in the endeavor of science.”

Benedetto says IEEE is serious about finding questionable papers and approaches, and it has launched several initiatives.

IEEE checks all content submitted to journals for plagiarism. A systematic, real-time analysis of data during the publication process helps identify potential wrongdoing. Reviewers of papers are required to include recommended references on their review form to monitor for high bibliometrics.

The organization’s peer review platform works to identify possible misuse by reviewers and editors. It monitors for biased reviews, conflicts of interest, and plagiarism, and it tracks reviewer activity to identify patterns that might indicate inappropriate behavior.

The names of authors and editors are compared against a list of prohibited participants, people who have violated IEEE publishing principles and can no longer publish in the organization’s journals.

Some unscrupulous authors are using artificial intelligence to game the system, Heffner says.

“With the advent of generative AI, completely fraudulent papers can be made more quickly and look more convincingly authentic,” he says.

That leads to concerns about the data’s validity.

A new policy addresses the use of AI by authors and peer reviewers. Authors who use AI for creating text or other work in their articles must clearly identify the portions and provide appropriate references to the AI tool. Reviewers are not permitted to load manuscripts into an AI-based large language model to generate their reviews, nor may they use AI to write them.

Anyone who suspects misconduct of any type—including inappropriate citations, use of AI, and plagiarism—can file a complaint using the IEEE Ethics Reporting Line. It is available seven days a week, 24 hours a day. An independent third party manages the process, and the information provided is sent to IEEE on a confidential basis and, if requested, anonymously.

Types of corrective actions

Should an author or reviewer be suspected of misconduct, a case is opened and a detailed analysis is performed. An independent committee reviews the information and, if warranted, begins an investigation. The alleged offender is allowed to respond to the allegations. If the offender is found guilty, several sanctions can be applied.

Depending on the severity of the violation, an escalating system of sanctions is used. Individuals who plagiarize content at a severe enough level are restricted from editorial duties and publishing, and their names get added to the prohibited participants list (PPL) database. Those on the list may not participate in any IEEE publication–related activities, including conferences. They also are removed from any editorial positions they hold.

IEEE has strengthened its article retraction and removal policies. When an article is flagged, the author receives an expression of concern. Unreliable data could result from an honest error or from research misconduct.

IEEE considers retraction a method of correcting the literature. When there are issues with the content, it takes the appropriate level of care and time in the review and, if necessary, retracts nonconforming publications. The retraction notices alert readers to those publications. Retractions also are used to alert readers to cases of redundant publication, plagiarism, and failure to disclose competing interests likely to have influenced interpretations or recommendations. In the most severe cases, articles are removed.

Retracted articles are not removed from printed copies of the publication or electronic archives, but their retracted status and the reason for retraction are explained.

IEEE’s corrective actions for publishing misconduct used to focus on restricting authorship, but they now include restrictions on editorial roles such as peer reviewer, editor, conference organizer, and conference publication officer. Offenders’ names also are added to the PPL, and they can be prohibited from publishing with IEEE.

Industry-wide efforts to detect misconduct

IEEE and other scientific, technical, and medical (STM) publishers have joined forces to launch pilot programs aimed at detecting simultaneous submissions of suspicious content across publishers, Heffner says.

They are developing the STM Integrity Hub, a powerful submission-screening tool that can flag tactics related to misconduct, including paper mills.

The publishers also are developing custom AI and machine learning–based tools to screen, in real time, both new submissions and articles that have undergone peer review. Some of the tools have already been rolled out.

Benedetto says he is working on a process for sharing IEEE’s list of prohibited participants with other publishers.

“Those found guilty of misconduct simply go to other publishers,” he says. “Each publisher has its own list, but those aren’t shared with others, so it has become very simple for a banned author to change publishers to get around the ban. A shared list of misconduct cases would prevent those who are found guilty from publishing in all technical journals during the period of their sentence.”

“We are all working together to share information and to share best practices,” Heffner says, “so that we can fight this as a community of publishers that take their stewardship of the scholarly record seriously.”

“Some colleagues or authors think that misconduct may be a shortcut to build a better career or reach publication targets more easily,” Benedetto says. “This is not true. Misconduct is not a personal issue. It’s an issue that can and does build mistrust toward institutions, publishers, and journals.

“IEEE will continue to strengthen its efforts to combat publication misconduct cases because we believe that research integrity is the basis of our business. If readers lose trust in our journals and authors, then they lose trust in the IEEE itself.”

Reference: https://ift.tt/gny5X7W

Why China Is Building a Thorium Molten-Salt Reactor




After a half-century hiatus, thorium has returned to the front lines of nuclear power research as a source of fuel. In 2025, China plans to start building a demonstration thorium-based molten-salt reactor in the Gobi Desert.

The 10-megawatt reactor project, managed by the Chinese Academy of Sciences’ Shanghai Institute of Applied Physics (SINAP), is scheduled to be operational by 2030, according to an environmental-impact report released by the Academy in October. The project follows a 2-MW experimental version completed in 2021 and operated since then.

This article is part of our special report Top Tech 2025.

China’s efforts put it at the forefront of both thorium-based fuel breeding and molten-salt reactors. Several companies elsewhere in the world are developing plans for this kind of fuel or reactor, but none has yet operated one. Prior to China’s pilot project, the last operating molten-salt reactor was Oak Ridge National Laboratory’s Molten Salt Reactor Experiment, which ran on uranium. It shut down in 1969.

Thorium-232, found in igneous rocks and heavy mineral sands, is more abundant on Earth than the commonly used isotope in nuclear fuel, uranium-235. But this weakly radioactive metal isn’t directly fissile: It can’t undergo fission, the splitting of atomic nuclei that produces energy. So it must first be transformed into fissile uranium-233. That’s technically feasible, but whether it’s economical and practical is less clear.

China’s Thorium-Reactor Advances

The attraction of thorium is that it can help achieve energy self-sufficiency by reducing dependence on uranium, particularly for countries such as India with enormous thorium reserves. But China may source it in a different way: The element is a waste product of China’s huge rare earth mining industry. Harnessing it would provide a practically inexhaustible supply of fuel. Already, China’s Gansu province has maritime and aerospace applications in mind for this future energy supply, according to the state-run Xinhua News Agency.

Scant technical details of China’s reactor exist, and SINAP didn’t respond to IEEE Spectrum’s requests for information. The Chinese Academy of Sciences’ environmental-impact report states that the molten-salt reactor core will be 3 meters in height and 2.8 meters in diameter. It will operate at 700 °C and have a thermal output of 60 MW, along with 10 MW of electricity.


Molten-salt breeder reactors are the most viable designs for thorium fuel, says Charles Forsberg, a nuclear scientist at MIT. In this kind of reactor, thorium fluoride dissolves in molten salt in the reactor’s core. To turn thorium-232 into fuel, it is irradiated with neutrons, converting it into thorium-233, which decays into an intermediate, protactinium-233, and then into uranium-233, which is fissile. During this fuel-breeding process, protactinium is removed from the reactor core while it decays, and then it is returned to the core as uranium-233. Fission occurs, generating heat and then steam, which drives a turbine to generate electricity.
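To give a feel for the timescale of that breeding step, here is a minimal sketch assuming the commonly cited protactinium-233 half-life of about 27 days (a textbook nuclear constant, not a figure from SINAP’s design):

```python
import math

# Rough sketch: how long must protactinium-233 be held outside the core before
# most of it has decayed to fissile uranium-233? Assumes a half-life of ~27 days.
HALF_LIFE_DAYS = 27.0

def fraction_converted(days: float) -> float:
    """Fraction of an initial Pa-233 inventory that has decayed to U-233 after `days`."""
    return 1.0 - math.exp(-math.log(2) * days / HALF_LIFE_DAYS)

for days in (27, 54, 90, 180):
    print(f"{days:3d} days: {fraction_converted(days):.0%} converted to U-233")
# About 50% after one half-life, ~90% after three months, ~99% after six months.
```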

But many challenges come along with thorium use. A big one is dealing with the risk of proliferation. When thorium is transformed into uranium-233, it becomes directly usable in nuclear weapons. “It’s of a quality comparable to separated plutonium and is thus very dangerous,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists in Washington, D.C. If the fuel is circulating in and out of the reactor core during operation, this movement introduces routes for the theft of uranium-233, he says.

Thorium Fuel Charms Nuclear-Power Sector

Most groups developing molten-salt reactors are focused on uranium or uranium mixtures as a fuel, at least in the short term. Natura Resources and Abilene Christian University, both in Abilene, Texas, are collaborating on a 1-MW liquid-fueled molten-salt reactor after receiving a construction permit in September from the U.S. Nuclear Regulatory Commission. Kairos Power is developing a fluoride-salt-cooled, high-temperature reactor in Oak Ridge, Tenn., that will use uranium-based tri-structural isotropic (TRISO) particle fuel. The company in October inked a deal with Google to provide a total of 500 MW by 2035 to power its data centers.

But China isn’t alone in its thorium aspirations. Japan, the United Kingdom, and the United States, in addition to India, have shown interest in the fuel at one point or another. The proliferation issue doesn’t seem to be a showstopper, and there are ways to mitigate the risk. Denmark’s Copenhagen Atomics, for example, aims to develop a thorium-based molten-salt reactor, with a 1-MW pilot planned for 2026. The company plans to weld it shut so that would-be thieves would have to break open a highly radioactive system to get at the weapon-ready material. Chicago-based Clean Core Thorium Energy has developed a fuel that blends thorium with enriched uranium (including high-assay low-enriched uranium, or HALEU), which the company says can’t be used in a weapon. The fuel is designed for heavy-water reactors.

Political and technical hurdles may have largely sidelined thorium fuel and molten-salt-reactor research for the last five decades, but both are definitely back on the drawing board.

Reference: https://ift.tt/AO7Yk95

The Top 10 Semiconductor Stories of 2024




I like to think I can learn something about our readers from the list of most read semiconductor articles. What I think I’ve learned from this year’s list is that you are as obsessed as I am with packing more and more computing power into less and less space. That’s good, because it’s the main goal of a huge chunk of the industry as well.

Not all of this list fits neatly into that mold, but hey, who doesn’t love a millimeter-scale laser chip that can slice through steel?

1. How We’ll Reach a Trillion Transistor GPU


1971 was a special year for a number of reasons—the first e-book was posted, the first one-day international cricket match was played, this reporter was born. It was also the first time the semiconductor industry sold more than 1 trillion transistors. If TSMC executives’ predictions are correct, there will be 1 trillion transistors in just one GPU within a decade. Just how the foundry plans to deliver such a technological feat was the subject of the most read semiconductor story we posted this year.

2. The Tiny Ultrabright Laser That Can Melt Steel


Slicing through steel and other feats of optical superheroism have, until very recently, been the reserve of large carbon dioxide lasers and similarly bulky systems. But now, centimeter-scale semiconductors have joined the club. Called photonic-crystal surface-emitting lasers (PCSELs), the devices take advantage of a complex array of carefully shaped nanometer-scale holes inside the semiconductor to steer more energy straight out of the laser. A PCSEL built in Japan produced a steel-slicing beam that diverges just 0.5 degrees.

3. In 2024, Intel Hopes to Leapfrog its Chipmaking Competitors


Intel had some big ambitions at the beginning of the year. Things are looking a lot less rosy now. Nevertheless, the predictions of this January 2024 issue article have come to pass. Intel is set to manufacture chips using a combination of two new technologies, nanosheet transistors and back side power delivery. Although the main competition, TSMC, is moving to nanosheets soon, too, the foundry behemoth is leaving back side power for later. But Intel’s plans didn’t completely survive contact with customers and competition. Instead of commercializing its first iteration of the combo, called 20A, it’s skipping on to the next version, called 18A.

4. Researchers Claim First Functioning Graphene-Based Chip


Graphene has long been an interesting material for future electronics but a frustrating one, too. Electrons zip through it at speeds silicon could only wish for, tempting researchers with the potential of terahertz transistors. But it has no natural bandgap, and it’s proven really difficult to give it one. But Georgia Tech researchers have given it one more go and come up with a pretty straightforward way to make a semiconductor version atop a wafer of silicon carbide.

5. A Peek at Intel’s Future Foundry Tech


Intel’s foundry division is pinning its hopes on gaining foundry customers for its 18A process, which, as noted above, combines nanosheet transistors and back side power delivery. There haven’t been a lot of details about what customers plan to build with this tech, but Intel executives did explain to IEEE Spectrum how they planned to use those technologies, and some advanced packaging too, in a server CPU codenamed Clearwater Forest.

6. Challengers Are Coming for Nvidia’s Crown


Can anyone beat Nvidia? It’s the subtext of so many articles about AI hardware that we thought we should ask it explicitly. The answer: A very solid maybe. It all depends on what you’re trying to beat the company at.

7. India Injects $15 Billion Into Semiconductors


In a year when the United States inked a blitz of preliminary deals as part of its $52-billion attempt to reinvigorate its chipmaking industry, our loyal readers were way more interested in India’s somewhat smaller moves. That government announced a trio of deals, including the country’s first silicon CMOS fab. A key architect of India’s plans to boost chip R&D explained it all to IEEE Spectrum later in the year.

8. Hybrid Bonding Plays Starring Role in 3D Chips


Chip packaging is one of the most important aspects of the continuation of Moore’s Law, enabling systems made of many different silicon dies linked together as if they were one gigantic chip. And the hottest thing in advanced packaging is a technology called 3D hybrid bonding. (I know this because I sat in on no fewer than 20 talks about it at the IEEE Electronic Components and Technology Conference in May 2024.) 3D hybrid bonding joins chips together in a vertical stack with connections so dense you could fit millions of them in a square millimeter.
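For a sense of what “millions per square millimeter” implies about bond pitch, here is the quick arithmetic; the 1-micrometer pitch is an illustrative round number, not a specific product spec.

```python
# Connections per square millimeter for a square grid of hybrid bonds at a
# given pitch. The 1-micrometer pitch is an illustrative assumption.
pitch_um = 1.0
per_mm2 = (1000 / pitch_um) ** 2   # 1 mm = 1000 micrometers
print(f"{per_mm2:,.0f} connections per mm^2")   # 1,000,000 at a 1 um pitch
```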

9. Is the Future of Moore’s Law in a Particle Accelerator?


Just when you thought the making of advanced chips was already a bonkers process, here comes a hint that the future will be even more bananas than the present. Extreme-ultraviolet lithography today relies on a Rube-Goldberg-esque procedure of zapping flying droplets of molten tin with kilowatt-class lasers to produce glowing balls of plasma. But future chipmaking will want brighter light than such a system could provide. The answer, some say, is a gigantic particle accelerator that saves energy by using the high-energy physics version of regenerative braking.

10. Expect a Wave of Waferscale Computers


Like cowbell in a certain 1970s rock anthem, future computers need more silicon. How much? How about a whole wafer’s worth of it. Back in April, the world’s biggest foundry, TSMC, laid out its plans for advanced packaging, and that future points toward wafer-scale computers. TSMC has technically been making one for a while now for Cerebras, but what it’s planning to offer in the coming years will be much more flexible and universally available. In 2027 the technology could lead to systems with 40 times as much computing power as today’s.

Reference: https://ift.tt/eZykPMR

Passkey technology is elegant, but it’s most definitely not usable security


It's that time again, when families and friends gather and implore the more technically inclined among them to troubleshoot problems they're having behind the device screens all around them. One of the most vexing and most common problems is logging into accounts in a way that's both secure and reliable.

Using the same password everywhere is easy, but in an age of mass data breaches and precision-orchestrated phishing attacks, it's also highly unadvisable. Then again, creating hundreds of unique passwords, storing them securely, and keeping them out of the hands of phishers and database hackers is hard enough for experts, let alone Uncle Charlie, who got his first smartphone only a few years ago. No wonder this problem never goes away.

Passkeys—the much-talked-about alternative to passwords that has been widely available for almost two years—were supposed to fix all that. When I wrote about passkeys two years ago, I was a big believer. I remain convinced that passkeys mount the steepest hurdle yet for phishers, SIM swappers, database plunderers, and other adversaries trying to hijack accounts. How and why is that?


Reference: https://ift.tt/isQIbec

Sunday, December 29, 2024

GPS Spoofing Attacks Are Dangerously Misleading Airliners




In 2023, at least 20 civilian aircraft flying through the Middle East were misled by their onboard GPS units into flying near Iranian airspace without clearance—situations that could have provoked an international incident. These planes were victims of GPS spoofing, in which deceptive signals from the ground, disguised as trustworthy signals from GPS satellites in orbit, trick an aircraft’s instruments into reporting the aircraft’s location as somewhere that it isn’t. Spoofing is a more sophisticated tactic than GPS jamming, in which malicious signals overwhelm a targeted GPS receiver until it can no longer function.

Todd Humphreys


Todd Humphreys is a professor of aerospace engineering at the University of Texas at Austin, where he directs the Wireless Networking and Communications Group and the Radionavigation Laboratory.

Long theorized, GPS spoofing attacks have increasingly cropped up in civilian airspace in recent years, prompting concerns about this new frontier in electronic warfare. IEEE Spectrum spoke with Todd Humphreys of the University of Texas at Austin about how spoofing works and how aircraft can be protected from it.

What is an example of a GPS spoofing attack?

Todd Humphreys: In 2017, we began to see spoofing attacks happening in the Black Sea. As time progressed, the spoofing has only gotten more sophisticated and more widespread. Nowadays, if you’re in the Eastern Mediterranean, and you’re on a flight bound for Turkey or Cyprus or Israel, it’s very likely that the GPS units in your aircraft will get spoofed. They will indicate a position at the Beirut airport or in Cairo. And it’s because Israel is sending out signals that fool GPS receivers for hundreds of kilometers around the country.

How can spoofing be detected?

Humphreys: It’s provable that you cannot, in all cases, detect spoofing. That’s because GPS is a one-way system. It broadcasts signals, but it doesn’t take any input from the receivers. So there’s always the possibility of somebody broadcasting a lookalike signal and fooling a receiver.

How can airlines reduce the chances of their planes’ GPS units being spoofed?

Humphreys: There’s an antenna on the front of large commercial aircraft, and in the aft also, there’s an antenna. Combining these together and analyzing the signals from them would enable you to detect almost all cases of spoofing.
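To make that idea concrete, here is a minimal, hypothetical sketch of one way a dual-antenna consistency check could work: a spoofer broadcasts from a single point, so both antennas tend to compute nearly the same false position even though they sit tens of meters apart on the airframe. The antenna spacing and threshold below are illustrative assumptions, not an aviation-grade algorithm.

```python
import math

def distance_m(p1, p2):
    """Rough flat-earth distance in meters between two (lat, lon) fixes."""
    dlat = (p1[0] - p2[0]) * 111_320.0
    dlon = (p1[1] - p2[1]) * 111_320.0 * math.cos(math.radians(p1[0]))
    return math.hypot(dlat, dlon)

def spoofing_suspected(fore_fix, aft_fix, antenna_separation_m=40.0, tolerance_m=15.0):
    """Flag spoofing if the two GPS fixes don't reflect the known antenna baseline."""
    apparent = distance_m(fore_fix, aft_fix)
    return abs(apparent - antenna_separation_m) > tolerance_m

# Under spoofing, both antennas report essentially the same (false) point:
print(spoofing_suspected((33.8938, 35.4955), (33.8938, 35.4955)))   # True
```

In practice, published detection schemes tend to compare carrier-phase measurements between the antennas rather than finished position fixes, which is far more sensitive, but the underlying idea is the same: spoofed signals all arrive from a single direction.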

So what’s the catch?

Humphreys: I spoke with Boeing about this many years ago. I said, “Look, I’d like to offer you a way of combining the signals from these two different antennas so that you could more readily detect spoofing.” And they pointed out that it was very important for their systems that these antennas operate entirely independently because they’re there for redundancy. They’re there for safety reasons.

Will the fight against spoofing always be an arms race?

Humphreys: There is often a trade-off between traditional safety on the one hand—and on the other hand, purposeful attacks from strategic adversaries. So it really depends on what you’re trying to protect yourself from. Is it that maybe one of your internal GPS antennas is just going to spontaneously fail, which does happen? Or is it that your most pressing fear is being caught in the crossfire of a war zone and having your GPS receiver spoofed without knowing? Unfortunately, it’s tough to address both of these problems with the same hardware at the same time.

This article appears in the January 2025 issue as “5 Questions for Todd Humphreys.”

Reference: https://ift.tt/sx91ZKU

The Top 7 Robotics Stories of 2024




2024 was the best year ever for robotics, which I’m pretty sure is not something that I’ve ever said before. But that’s the great thing about robotics—it’s always new, and it’s always exciting. What may be different about this year is the real sense that not only is AI going to change everything about robots, but that it will somehow make robots useful and practical and commercially viable. Is that true? Nobody knows yet! But it means that 2025 might actually be the best year ever for robotics, if you’ve ever wanted a robot to help you out at home or at work.

So as we look forward to 2025, here are some of our most interesting and impactful stories of the past year. And as always, thanks for reading!


1. Figure Raises $675M for Its Humanoid Robot Development


This announcement from back in February is pretty much what set the tone for robotics in 2024. Figure’s Series B raise valued the company at a bonkers US $2.6 billion, and all of a sudden, humanoids were where it’s at. And by “it,” I mean everything, from funding to talent to breathless media coverage. The big question of 2024 was whether or not humanoids would be able to deliver on their promises, and that’s shaping up to be the big question of 2025, too.

2. Hello, Electric Atlas


It didn’t take long for legendary robotics company Boston Dynamics to make it clear that it’s not going to be left behind when it comes to commercial humanoids. Given that the company has been leading humanoid research longer than just about anyone but has bounced around from owner to owner over the last 10 years, we were a little unsure whether Atlas would ever be more than a research platform. But the new all-electric Atlas is designed for work, and we saw it get busy in 2024.

3. Farewell, Hydraulic Atlas


As much as we’re excited for the new Atlas, the old hydraulic Atlas will always have a special place in our hearts. We’ve been through so much together, from the DRC to parkour to dancing. Electric robots are great and all, and I understand why they’re necessary for commercial applications, but all of that hydraulic power meant that hydraulic Atlas was able to move in dynamic ways that we may not see again for a very long time.

4. Nvidia Announces GR00T


So we’ve got all these humanoid robots now with all this impressive hardware, but the really hard part (or one of them, anyway) is getting those robots to actually do something commercially useful in a safe and reliable way. Is training in simulation the answer? I don’t know, but Nvidia sure thinks so, and it has made a huge commitment by investing in GR00T, a “general-purpose foundation model for humanoid robots.” And what does that mean, exactly? Nobody’s quite sure yet, but with Nvidia behind it, that’s enough to make the entire industry pay attention.

5. Is It Autonomous?


With all the attention on humanoid robots right now, it’s critical to be able to separate real progress from hype. Unfortunately, there are all kinds of ways of cheating with robots. And there’s really nothing wrong with cheating with robots, as long as you tell people that the cheating is happening, and then (hopefully) cheat less and less as your robot gets better and better. In particular, we’re likely to see more and more teleoperation of humanoid robots (obviously or otherwise) because that’s one of the best ways of collecting training data: by having a human do it. And being able to tell that a human is doing it is an important skill to have.

6. Robotic Metalsmiths


Some of my favorite robots are robots that are able to leverage their robotic-ness to not just do things that humans do, but also do things that humans cannot do. Robots have the patience and precision to work metal in ways that a very highly skilled human might be able to do once, but the robots (being robots) can do it over and over again. NASA is leveraging this capability to build complex toroidal tanks for spacecraft, but it has the potential to change anything that’s made out of sheet metal.

7. The End of Ingenuity


One of the greatest robotics stories of the last several years has been Ingenuity, the little Mars helicopter. We’ve written extensively about how Ingenuity was designed, how it can fly on Mars, and how it just kept on flying, more than 50 times. But it couldn’t fly forever, and as Ingenuity was pushed to fly farther and farther over more challenging terrain, flight 72 was to be its last. After losing its ability to localize over some particularly featureless terrain, the little robot had a very rough landing. It lived to tell the tale, but not to fly again.


Ingenuity’s spectacularly successful mission means, we hope, that there will be more robotic aircraft on Mars. And just last week, NASA shared a new video of Ingenuity’s successor, the Mars Chopper. That’s definitely something we’ll be looking forward to.

Reference: https://ift.tt/xWIB0fo

Saturday, December 28, 2024

The Top 10 Telecommunications Stories of 2024




For IEEE Spectrum readers following telecommunications news in 2024, the common thread was signals expanding their reach and range: early-stage cellphone “towers” now in low-earth orbit, low-power Wi-Fi implementations reaching out for kilometers, China expanding its satellite broadband constellations into regions of the globe dominated by SpaceX’s Starlink, and 6G signals curving around obstacles to expand bandwidth and coverage.

To be sure, “signals expanding reach and range” is a fair description of telecommunications more generally. What ultimately is the electromagnetic spectrum, after all, but signals propagating? Yet, compared to the telecom news of a decade or two ago—when wireless stayed in its own silo, broadband remained squarely in the domain of broadband, and satellite communications were something else entirely—the encroachment of each previously fixed category onto the others’ turf seems increasingly commonplace in today’s hybridizing field.

So while 6G remains just a glimmer in the eye of big telecom’s R&D departments, while emerging Internet megaconstellations are still launching, and while phone calls are still mostly routed through ground-based cellphone towers, here are the stories that drew Spectrum telecom readers’ eyes most reliably over the past year.

1. China’s Challenge to SpaceX’s Starlink


China has its own Starlink competitor, a megaconstellation called Qianfan (“thousand sails”) that’s now on track to deploy 14,000 low-earth-orbit satellites by the end of the decade. Which is very ambitious, especially when measured against the 54 (18 at the time of Spectrum’s article on the topic) now in orbit. In tech startup circles, this kind of hopeful wishcasting could generate some scoffs among the investor class. On the other hand, China is anything but a scrappy and unproven startup. Some early indicators that Spectrum readers will want to watch as a thousand sails begin to unfurl through 2025: the nation’s satellite manufacturing and launch rates, how China’s national space agency handles emerging space-debris questions worldwide, and the tension between international cooperation and competition that Qianfan will surely continue to stoke.

2. 6G Terahertz Signals Curve Around Obstacles


Between the microwave and infrared regions of the electromagnetic spectrum are waves that oscillate between 0.1 trillion and 10 trillion cycles per second. These terahertz (THz) waves behave somewhat differently than the giga- and megahertz spectrum most wireless and Wi-Fi users are accustomed to. Case in point: signals in some regions of the THz spectrum can be made to follow curved trajectories, hugging the contours of physical barriers and extending the reach of THz networks beyond straight lines of sight.
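For reference, converting those frequencies to free-space wavelength with lambda = c/f shows how short these waves are compared with a familiar 2.4-gigahertz Wi-Fi signal:

```python
# Free-space wavelength (lambda = c / f) for the frequencies mentioned above,
# with 2.4 GHz Wi-Fi included for comparison.
C = 299_792_458.0   # speed of light, m/s

for f_thz in (0.0024, 0.1, 1.0, 10.0):
    wavelength_mm = C / (f_thz * 1e12) * 1e3
    print(f"{f_thz:7.4f} THz -> {wavelength_mm:8.3f} mm")
# 2.4 GHz Wi-Fi ~ 125 mm; 0.1 THz ~ 3 mm; 1 THz ~ 0.3 mm; 10 THz ~ 0.03 mm
```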

3. Qualcomm Brings AI to Wi-Fi


The San Diego-based telecom chip giant in 2024 released a chip called the FastConnect 7900 that represented something of a step change in the field: As artificial intelligence encroaches into every field in tech—and plenty of others too—the FastConnect 7900 uses AI to manage its Wi-Fi links, tuning each connection to suit the needs of the application. So a Wi-Fi connection streaming video draws a different power profile than one that’s carrying a voice or video call.

4. European Telecos Wave a Slow Goodbye to Huawei and ZTE


Telecom providers in the EU have been increasingly alert in recent years to security concerns raised by security analysts and others around 5G equipment manufactured by the China-based telecom giants Huawei and ZTE. So while an EU directive speaks with a singular voice about phasing out Huawei and ZTE equipment in regional 5G networks, real-world implementation remains scattershot and patchy.

5. Low-Power Wi-Fi Extends Signals Up to 3 Kilometers

Video stills from a Morse Micro demonstration along a 3,000-meter stretch of beach show Wi-Fi HaLow throughput of 11.2 Mbps at 1,000 meters and 1 Mbps at 3,000 meters. Morse Micro

A new standard for low-power Wi-Fi has been setting long-distance records. Specifically, the Australian startup Morse Micro is extending the reach of what are called Wi-Fi HaLow networks out beyond the 3-kilometer mark. Because of its comparatively low bandwidth at that distance (top speeds of about 1 megabit per second), the system appears best poised for the Internet of Things (IoT) marketplace, though Spectrum’s story also notes that rival technologies beyond the Wi-Fi brand are offering fierce competition.

6. Quantum Cryptography Has Everyone Scrambling


Meanwhile, securing signals with quantum keys (a decades-long race in the field called quantum key distribution, or QKD) has inspired a range of efforts, with China, India, the E.U., and the U.S. each pursuing its own approach to a virtually unhackable communications network. Even as finalists emerged this year for the entirely different cryptographic standard called post-quantum cryptography, QKD has analysts guessing who will come out on top as this new field of purely quantum communications comes of age.

7. FCC Denies a Starlink Bid for Lower Orbit


In March, the U.S. Federal Communications Commission denied a SpaceX request to orbit some of its Starlink satellites in very-low earth orbit (VLEO) in a bid to decrease Starlink’s latency. The problem, the agency noted, is the satellites would have orbited below even the International Space Station. As a result, spacecraft launching to the ISS might have to dodge stray Starlink traffic along the way. How the FCC responds to future SpaceX requests like this will be a topic to watch in 2025, to be sure.

8. Glass Antenna Turns Windows Into 5G Basestations


The Japanese company JTower grabbed headlines with an innovative deployment of 5G antennas integrated into some otherwise conventional office-building windows in Tokyo. A company spokesperson noted that as 5G signals proliferate and the need for coverage expands, blending 5G infrastructure into the cityscape could be an unobtrusive way to extend network reach and range without the eyesore of yet another conspicuous cellphone-tower antenna.

9. Wi-Fi Goes Long-Range on WiLo Approach


As in item 5 above, researchers (in this case a team in China) have expanded Wi-Fi’s footprint toward longer-distance wireless networks. And again as in item 5, IoT appears to be the best application for a relatively low-power, lower-bandwidth implementation.

10. Satellites Are Becoming the New Cellphone Towers


Nearly a year ago, Spectrum chronicled the changing currents of wireless infrastructure—including larger antennas and new beamforming tech enabling satellite-based phone towers in low-earth orbit. In urban, suburban, and even many rural locations, nothing can match the range and low cost of the old-fashioned metal tower on a hilltop. However, remote and underserved regions that might not otherwise have any cell service to speak of could ultimately benefit from a new orbiting approach to routing calls and data through the skies above.

Reference: https://ift.tt/idShNzc

CES 2025 Preview: Needleless Injections, E-Skis, and More

This weekend, I’m on my way to Las Vegas to cover this year’s Consumer Electronics Show. I’ve scoured the CES schedule and lists of exhib...