Friday, March 13, 2026

Supply-chain attack using invisible code hits GitHub and other repositories


Researchers say they’ve discovered a supply-chain attack flooding repositories with malicious packages that contain invisible code, a technique that’s flummoxing traditional defenses designed to detect such threats.

The researchers, from the firm Aikido Security, said Friday that they found 151 malicious packages uploaded to GitHub from March 3 to March 9. Such supply-chain attacks have been common for nearly a decade. They usually work by uploading malicious packages with code and names that closely resemble those of widely used code libraries, with the objective of tricking developers into mistakenly incorporating the malicious packages into their software. In some cases, these packages are downloaded thousands of times.

Defenses see nothing. Decoders see executable code

The packages Aikido found this month have adopted a newer technique: selective use of code that isn’t visible when loaded into virtually all editors, terminals, and code review interfaces. While most of the code appears in normal, readable form, malicious functions and payloads—the usual telltale signs of malice—are rendered in Unicode characters that are invisible to the human eye. The tactic, which Aikido said it first spotted last year, makes manual code reviews and other traditional defenses nearly useless. Other repositories hit in these attacks include npm and Open VSX.
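To make the technique concrete: many Unicode code points (zero-width spaces, joiners, the Hangul filler) render as nothing in most editors, so a reviewer sees clean code while the interpreter sees something else. The short scanner below is a sketch written for illustration, not Aikido's tooling, and its character list is a small sample of the code points abused in such attacks:

```python
import unicodedata

# Code points that render as invisible in most editors (illustrative
# subset). Most zero-width characters fall in Unicode category "Cf"
# (format); HANGUL FILLER (U+3164) is a letter, so it's listed explicitly.
INVISIBLE = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u3164",  # HANGUL FILLER
}

def find_invisible(source: str):
    """Return (line, column, codepoint) for each invisible character."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in INVISIBLE or unicodedata.category(ch) == "Cf":
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits

# A hypothetical tainted snippet: the second line contains two
# characters a human reviewer cannot see.
payload = "const x = 1;\nconst\u200b y\u3164 = 1;\n"
for hit in find_invisible(payload):
    print(hit)  # (2, 6, 'U+200B') then (2, 9, 'U+3164')
```

Running a check like this in a pre-commit hook is one cheap defense; some editors have also begun highlighting invisible Unicode characters in source files for the same reason.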

Reference: https://ift.tt/p7numBF

Video Friday: These Robots Were Born to Run




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

All legged robots deployed “in the wild” to date were given a body plan that was predefined by human designers and could not be redefined in situ. The manual and permanent nature of this process has resulted in very few species of agile terrestrial robots beyond familiar four-limbed forms. Here, we introduce highly athletic modular building blocks and show how they enable the automatic design and rapid assembly of novel agile robots that can “hit the ground running” in unstructured outdoor environments.

[ Northwestern University Center for Robotics and Biosystems ] [ Paper ] via [ Gizmodo ]

If you were going to develop the ideal urban delivery robot more or less from scratch, it would be this.

[ RIVR ]

Don’t get me wrong, there are some clever things going on here, but I’m still having a lot of trouble seeing where the unique, sustainable value is for a humanoid robot performing these sorts of tasks.

[ Figure ]

One of those things that you don’t really think about as a human, but is actually pretty important.

[ Paper ] via [ ETH Zurich ]

We propose TRIP-Bag (Teleoperation, Recording, Intelligence in a Portable Bag), a portable, puppeteer-style teleoperation system fully contained within a commercial suitcase, as a practical solution for collecting high-fidelity manipulation data across varied settings.

[ KIMLAB ]

We propose an open-vocabulary semantic exploration system that enables robots to maintain consistent maps and efficiently locate (unseen) objects in semi-static real-world environments using LLM-guided reasoning.

[ TUM ]

That’s it folks, we have no need for real pandas anymore—if we ever did in the first place. Be honest, what has a panda done for you lately?

[ MagicLab ]

RoboGuard is a general-purpose guardrail for ensuring the safety of LLM-enabled robots. RoboGuard is configured offline with high-level safety rules and a robot description, reasons about how those rules are best applied in the robot’s context, then synthesizes a plan that maximally follows user preferences while ensuring safety.

[ RoboGuard ]

In this demonstration, a small team responds to a (simulated) radiation contamination leak at a real nuclear reactor facility. The team deploys their reconfigurable robot to accompany them through the facility. As the station is suddenly plunged into darkness, the robot’s camera is hot-swapped to thermal so that it can continue on. Upon reaching the approximate location of the contamination, the team installs a Compton gamma-ray camera and pan-tilt illuminating device. The robot autonomously steps forward, locates the radiation source, and points it out with the illuminator.

[ Paper ]

On March 6th, 2025, the Robomechanics Lab at CMU was flooded with 4 feet of black water (i.e. mixed with sewage). We lost most of the robots in the lab, and as a tribute my students put together this “In Memoriam” video. It includes some previously unreleased robots and video clips.

[ Carnegie Mellon University Robomechanics Lab ]

There haven’t been a lot of successful education robots, but here’s one of them.

[ Sphero ]

The opening keynote from the 2025 Silicon Valley Humanoids Summit: “Insights Into Disney’s Robotic Character Platform,” by Moritz Baecher, Director, Zurich Lab, Disney Research.

[ Humanoids Summit ]

Reference: https://ift.tt/t6m3S4O

Raquel Urtasun on Level-4 Autonomous Trucks




Raquel Urtasun has spent 16 years in the self-driving space, long enough to navigate every metaphorical glorious hill and plunging valley. She took the trip from the early “pipe dream” dismissals to the “we’re this close” certainty, and back again.

The industry is now riding a new wave of optimism and investment, including at Waabi Innovation Inc., the autonomous trucking company that Urtasun founded in 2021. The Spanish-Canadian professor at the University of Toronto, and former chief scientist of Uber’s Advanced Technologies Group, has helped make Waabi a key player. Beginning in fall 2023, the Toronto-based startup has been running geofenced cargo routes from Dallas to Houston in a fleet of retrofitted Peterbilt semis, navigating even residential streets in loaded, 36,000-kilogram (80,000-pound) behemoths with no human aboard.

In October, the company reached a milestone by integrating its “Waabi Driver” physical-AI system in Volvo’s new VNL Autonomous truck, which the Swedish automaker is building in Virginia. That self-driving solution uses Nvidia’s Drive AGX Thor, an AI-based platform for autonomous and software-defined vehicles.

In January, the Toronto-based startup raised $750 million in its latest funding round to expand its self-driving system into the fiercely competitive robotaxi space. Backers include Khosla Ventures, Nvidia, and Volvo.

Urtasun says the Waabi Driver can scale across a full range of vehicles, geographies, and environments—although snowstorms can still create a no-go zone for now. It’s powered by what Urtasun calls the industry’s most advanced neural simulator, allowing a “shared brain” that partners can transplant into cars, trucks, and pretty much anything on wheels. The idea is to grab a chunk of a global autonomous trucking business that McKinsey estimates could be worth more than $600 billion a year by 2035, with autonomous haulers responsible for 15 percent of total U.S. trucking miles as early as 2030.

Backed by an additional $250 million from Uber, Waabi plans to deploy at least 25,000 autonomous taxis through Uber’s ride-hailing service, whose world-dominating reach encompasses 70 countries, about 15,000 cities and more than 200 million monthly users.

Urtasun spoke with IEEE Spectrum about how Waabi is counting on sensors and simulation to prove real-world safety, and why the move to autonomy is a moral imperative that outweighs the disruption for human drivers—whether they’re driving trucks or family sedans. Our conversation was edited for length and clarity.

The Shift to Next-Gen Autonomous Vehicles

IEEE Spectrum: Until quite recently, autonomous tech seemed to have hit a wall, at least in the public’s mind. Now investors are flooding the zone again, and companies are all-in. What happened?

Raquel Urtasun: There were a lot of empty promises, or [people] not realizing the complexity of the problem. There was a realization that actually, this problem is harder than people anticipated. It’s also because of the type of technology that was developed at the time, what we call “AV 1.0”. These are hand-engineered systems that need to be brute-forced by humans. You need lots of capital and a massive amount of miles on the road just to get to the first deployment.

What you see with the next generation—AV 2.0 and systems that can reason—is that you finally have a solution that scales. When we started the company, this was a very contrarian view. But today, the breakthroughs in AI have made it clear that this is the next big revolution. It’s not just about more compute; it’s about building a brain that can generalize. That is the “aha moment” the industry is having now.

Even for someone who believes in the tech, seeing a driverless semi-trailer in your rear-view mirror might be unsettling. Now you’ve integrated your tech into the aerodynamic, diesel-powered Volvo VNL Autonomous truck. How do you convince regulators and the public that these trucks belong on the street?

Urtasun: Safety, when you think about carrying 80,000 pounds on this massive rig, is definitely top of mind. We believe the only way to do this safely is with a redundant platform that is fully developed and validated by the OEM, not with a retrofit. The OEM builds a special type of truck that has all the redundant steering, power, and braking, so that no matter what happens, there is always a way we can interface and activate that truck in a safe manner. Then we are responsible for the sensors, the compute, and obviously the brain that drives those trucks.

AI’s Impact on Trucking Jobs

One of the biggest points of contention is the displacement of human drivers. As AI disrupts a range of workplaces, how do you respond to people who say this will eliminate good-paying, blue-collar jobs?

Urtasun: The way we see this is that everybody who’s a truck driver today, and wants to retire as a truck driver, will be able to do so. This is physical AI; this is not like the digital world where suddenly you can switch immediately to this technology. That adoption and scaling is going to take time. There will also be many jobs created with this technology: remote operations, terminal operations, and other things. You have time to change the form of labor from being on the road for weeks at a time—and it’s a really difficult and dehumanizing job, let’s be honest—to something you can do locally. There was an interesting [U.S.] Department of Transportation study that showed that, because of this gradual adoption, there will be more jobs created than removed.

You’ve spoken about a personal motivation behind this. Why do you believe the advantages of autonomy outweigh any growing pains, including the potential for unexpected accidents or even deaths?

Urtasun: There are 2 million deaths on the road globally per year, and nobody’s questioning that. That’s the status quo. If you think the machines have to be perfect to deploy, you are actually sacrificing many humans along the way that you could have saved. Human error is a factor in between 90 and 96 percent of accidents. Those could be preventable accidents. Some accidents will always be unavoidable; a tire could blow for a machine the same as it could for a human. But the important comparison is how much safer we are. This technology is the answer to many, many things.

Most of the industry is focused on “hub-to-hub” highway driving. But you’ve argued that Waabi’s AI can handle the complexity of local streets.

Urtasun: The rest of the industry has gone with this business model where you need hubs next to the highway. This adds a lot of friction and cost. Thanks to our verifiable end-to-end AI system, we can drive in surface [local] streets. We can do unprotected lefts, traffic lights, and tight turns. These core capabilities enable us to drive all the way to the end customer. We are already hauling commercial loads for customers like Samsung through our Uber Freight partnership.

You’ve mentioned that Waabi doesn’t like to talk about “number of miles” driven as a metric. For an engineering audience, that sounds counterintuitive. How does your “simulation-first” approach replace the need for real-world road time?

Urtasun: In the industry, miles have been used as a proxy for advancement. How many miles does Tesla need to drive to see any of these situations? But we are a simulation-first company. Waabi World can simulate all the sensors, the behaviors of humans, everything. It is the only simulator where you can mathematically prove that testing and driving in simulation is the same as driving in the real world. You can expose the system to billions of simulations in the cloud. This is what allows us to be so capital efficient and fast.

Verifiable AI vs. Black Box Systems

What is the difference between your “interpretable” AI and the “black box” systems we see elsewhere?

Urtasun: We’ve seen an evolution in passenger cars from level-2+ systems to end-to-end, black-box architectures. But those are not verifiable. You cannot validate and verify those systems, which is a massive problem when you think about regulators and OEMs trusting that technology.

What Waabi has built is end-to-end, but fully verifiable. The system is forced to interpret what it is perceiving and use those interpretations for reasoning, so that it can understand the consequences of every action. It is much more akin to how our brain actually works; your “Type 2” thinking, where you start thinking about cause and effect and consequences, and then you typically make a much better choice in your maneuver.

Tesla is famously, and controversially, relying on camera data almost exclusively to run and improve its self-driving systems. You’re not a fan of that approach?

Urtasun: We use multiple sensors: lidar, camera, and radar. That’s very important because the failure modes of those sensors are very different and they’re very complementary. We don’t compromise safety to reduce the bill-of-materials cost today.

Those (passenger car) level-2+ systems are not architected for level 4, where there’s no human on board. People don’t necessarily realize there is a huge difference in terms of the bar when there is no human to rely on. It’s not, “Well, if I don’t have a lot of system interventions, I’m almost there.” That’s not a metric. We are native level 4. We decide which areas the system can drive in, and in what conditions. We are building technology that can drive different form factors—trucks or robotaxis—with the same brain.

Reference: https://ift.tt/7x3wu9B

Thursday, March 12, 2026

The who, what, and why of the attack that has shut down Stryker's Windows network


Within hours of the US and Israel launching airstrikes on Iran two weeks ago, security professionals warned organizations around the world to be on heightened watch for destructive retaliatory hacks. On Wednesday, the predictions appeared to come true as Stryker, a multinational maker of medical devices, confirmed a cyberattack that took down much of its infrastructure, and a hacking group long known to be aligned with the Iranian government claimed responsibility.

Where things stand

When and how did the attack come about?

The first indications were social media posts and a report from a news organization in Ireland. Messages posted by purported Stryker employees or their family members on social media said workers’ phones and computers had been wiped. A report the Irish Examiner published Wednesday morning, citing multiple anonymous sources, made the same claims and said some employees witnessed login pages on wiped devices displaying the logo of Handala Hack, a group that researchers who have followed it for years say is aligned with the Iranian government.

What is the status now?

Stryker said Thursday that it’s in the midst of responding to a “global network disruption to our Microsoft environment as a result of a cyber attack.” The update went on to say responders have no indication that ransomware or malware—the usual causes of such outages—was involved. The responders believe the incident is now contained and limited to the internal Microsoft environment.

Reference: https://ift.tt/IWjDX1Y

Investing in Your Professional Community Yields Big Returns




Engineering is so much more than solving problems or writing efficient code. It is about creating solutions that affect billions of lives and contributing to a profession built on innovation, responsibility, and collaboration. Although technical skills remain critical, what truly will accelerate the growth of the next generation of engineers is community and professional involvement.

Learning from communities

University programs provide a strong foundation in theory and practice, but they cannot capture the complexity of real-world engineering. As an IEEE senior member, I believe professional communities such as IEEE can help bridge the gap through hands-on events, mentorship, and peer networks.

I have served as a mentor and judge for a variety of hackathons across different age groups, including the high school competitions United Hacks and NextStep Hacks, as well as graduate-level events such as HackHarvard.

The experiences demonstrate how transformative community-driven opportunities can be for young engineers. They provide exposure to teamwork, innovation, and the realities of solving problems at scale.

The power of mentorship

Engineers don’t develop skills in isolation. Mentorship, whether formal or informal, plays a pivotal role in shaping careers. Senior professionals who invest in guiding students and early-career engineers pass on more than technical knowledge. They share decision-making approaches, ethical considerations, and strategies for navigating careers, thereby expanding the engineering field.

As a keynote speaker at conferences, I have seen how sharing real-world experiences can ignite students’ curiosity and confidence. What they often value most is not a lecture on technology but candid insights into how to be resilient, grow their career, and learn about the different engineering paths.

Building ethical awareness

With the rise of artificial intelligence, biotechnology, and other high-impact innovations, engineers’ ethical responsibilities are more important than ever. Professional organizations such as IEEE and ACM emphasize codes of ethics and standards to help ensure that technology is developed responsibly.

Through my work as a peer reviewer and committee member for IEEE and ACM conferences, including those at the university level, I have seen how the organizations promote rigor and accountability.

When students engage with such communities early, they can not only expand their technical knowledge but also build an understanding of responsible innovation.

Networking as a catalyst for innovation

Engineering breakthroughs often emerge at the intersections of different fields. Professional communities create the space for such interactions. A student working on computer vision, for example, might discover health care applications by collaborating with biomedical engineers.

While reviewing papers for conferences, I have seen how interdisciplinary ideas spark promising innovations.

I bring the same perspective to my role as an IEEE Collabratec mentor, connecting with innovators across different disciplines and industries.

“When we invest in the community, we invest in the future of engineering.”

By collaborating on projects and expanding your reach, you can find the mentors or partners you need to inspire your next breakthrough.

Participating in forums allows students and professionals alike to broaden their horizons and explore solutions that go beyond traditional boundaries.

Giving back shapes leadership

Community involvement is not only about what you gain. It is also about what you give. Engineers who volunteer for educational programs, STEM initiatives, and professional committees can develop leadership skills that extend beyond technical expertise. They can learn to inspire, organize, and guide others.

Judging hackathons and mentoring student teams reminds me that leadership often begins with service. When experienced professionals actively invest in the growth of others, they help create a culture wherein learning and leadership are passed forward.

Preparing for a lifelong journey

Learning how to be an engineer doesn’t end when you earn your degree. It is a lifelong journey of learning, adapting, and contributing. By engaging with communities and professional networks early, students and graduates can develop habits that serve them throughout their career. They can stay current with emerging trends, build trusted professional relationships, and gain resilience through shared challenges.

Community involvement can transform engineers from problem-solvers into change agents.

Investing in the community

The future of engineering depends not only on technological advancement but also on the collective strength of its communities. By fostering mentorship, encouraging collaboration, and embedding ethical responsibility, professional and community involvement can ensure that the next generation of engineers is prepared to meet tomorrow’s challenges with competence and character.

My journey as a mentor, judge, keynote speaker, and peer reviewer has reinforced a clear truth: When we invest in the community, we invest in the future of engineering. The students and young professionals we support today will be the ones building the world we live in tomorrow.

Reference: https://ift.tt/dolHJvf

40 Years of Wireless Evolution Leads to a Smart, Sensing Network




Every generation of mobile networks, from 1G to 5G, has rewritten the rules of how the world lives and works. The coming 6G revolution, arriving by decade’s end, will represent yet another new direction: a universal data fabric where millions of agents collaborate in real time across the digital and physical worlds.

The story of wireless connectivity is often told in speeds and standards—megabits per second, latency, and spectrum bands. But these generational shifts in device specs obscure a deeper pattern. Each generation, from 1G to 5G, rewrote the relationships between three elements: the Devices we carry, the Networks that connect them, and the Applications that run on them. We call this connectivity’s DNA. With 6G, that DNA of interconnection is about to change fundamentally.

As with the “7 Phases of the Internet”—an article we published with IEEE Spectrum last October—mobile networks’ six generations follow a similar arc toward system-wide intelligence. That arc traces through every generation of wireless, revealing a steady advancement of the reach and scope of connectivity itself.

1G Connected Analog Voices


Vintage 1G mobile phones with network diagram on a dotted dark background.

Devices: Bulky, expensive, analog phones

Networks: Circuit-switched systems dedicated exclusively to voice

Applications: Telephony, and telephony only

The first-generation networks of the 1980s did precisely one thing: carry voices without wires. Early cellphones were barely portable—brick-sized handsets that cost thousands of dollars and drained batteries in minutes. Networks like the Advanced Mobile Phone System (AMPS) used circuit-switching, dedicating an entire channel to each call, which meant capacity was scarce and expensive. The only application was the phone call.

Yet 1G’s modest achievement was revolutionary. Conversations could now move with the people having them. Communication detached from location. A salesperson could close a deal from their car. A doctor could be reached on the go. The technology was clunky and expensive, and the calls were only local. Nevertheless, the conceptual shift was real: the network would now follow the user, not the other way around. Every generation since has built on that remarkable insight.

2G Merged Digital Voice with Messaging


2G mobile phones with network diagram in background.

Devices: Smaller, more affordable phones with better battery life

Networks: GSM, CDMA, and TDMA—digital networks that enabled global roaming

Applications: Texting (SMS) took off, becoming wireless’s first killer app

Wireless phones’ second generation, arriving in the 1990s, ushered in a quieter revolution: digitization. Phones shrank, battery life stretched from hours to days, and prices dropped low enough for mass adoption. Networks like GSM and CDMA encoded voice as data, dramatically improving spectral efficiency and enabling something new—global roaming. A handset purchased in Helsinki could work in Hong Kong.

But the big surprise was SMS. Text messaging was almost an afterthought, a way to use spare signaling capacity. Many users, especially younger ones, soon preferred it to voice calls. By decade’s end, billions of texts were crisscrossing the planet daily. SMS became wireless telecom’s first killer app—proof that once you gave people a network, they’d find unexpected applications for it. The lesson would repeat with every generation to come.

3G Gave Mobile Data a Platform


3G connectivity illustration with smartphones and network diagram.

Devices: Early smartphones combined telephony with computing and cameras

Networks: Hundreds of kilobits-per-second bandwidth

Applications: Mobile e-mail, browsing, and early app ecosystems

Third-generation mobile networks, in the 2000s, launched the mobile internet. In Japan, NTT DoCoMo’s i-Mode service showed what was possible: a handset that could browse websites, check email, and download ringtones. Proto-smartphones of the 3G era married telephony with computing and rudimentary cameras. Networks like Wideband CDMA and EV-DO delivered speeds measured in hundreds of kilobits per second—horse-and-buggy speeds by today’s standards, but enough to make mobile email usable.

The applications that emerged hinted at a future still out of reach. BlackBerry became synonymous with executive productivity. Early app stores began to pop up. But screens were small, interfaces clunky, and coverage spotty. 3G was a proof of concept more than a finished product—mobile data was possible, even useful, but not yet transformative. The infrastructure was in place. What the world needed now was a device that could exploit it.

4G Rolled Out a Completely Mobile Internet


Smartphone and flip phone with 4G network diagram in black and white.

Devices: Full-fledged smartphones became general-purpose computing platforms, with integrated GPS and app ecosystems

Networks: LTE delivered speeds up to 100x greater than 3G—making video streaming, maps, and video conferencing possible

Applications: The app economy exploded, launching household names like Uber, Instagram, and WhatsApp

The device that could exploit the wireless network arrived with 4G. When long-term evolution (LTE) networks began rolling out around 2010, they delivered speeds an order of magnitude or more beyond 3G—fast enough to stream video, load maps instantly, and hold a video call without buffering. The network could now keep pace with what users wanted to do with it.

The smartphones that rode this wave were no longer communication tools with a few added features. 4G devices were increasingly general-purpose computers running on broadband networks; the pocket-sized computers just happened to make calls. High-resolution touchscreens, integrated GPS, accelerometers, and vast app ecosystems transformed mobile devices into something new: a platform. The phone became a remote control for daily life.

And daily life reorganized around it. Uber turned any car into a potential taxi. Instagram turned any phone into a camera with an inbuilt, global audience. WhatsApp replaced SMS texting and, in some countries, the phone call itself. Netflix moved from the living room to the subway. The app economy minted millionaires and disrupted industries.

4G democratized access to computing and services—a supercomputer in every pocket, connected to everything. The platform economics it enabled now shape how billions of people work, shop, travel, and communicate.

5G Pushed Connected Intelligence to the Edge


5G text with foldable phone and cell tower on a black textured background.

Devices: Smartphones with AI-specific hardware capable of trillions of operations per second

Networks: Programmable, sliceable infrastructure with low latency and edge computing capabilities

Applications: Smart factories, connected healthcare, augmented reality, and early, semi-autonomous systems

If 4G put the internet in your pocket, 5G began putting connected intelligence there too. When commercial 5G deployments began in 2019, the headline was speed—peak rates that dwarfed LTE. But the deeper shift was architectural. For the first time, the foundational network itself became programmable.

The devices reflected this ambition. The iPhone 12 and its contemporaries shipped with dedicated AI accelerators—Apple’s Neural Engine could execute trillions of operations per second. Suddenly, sophisticated tasks that once required heavy use of cloud computing resources could happen locally: real-time language translation, computational photography, augmented reality that actually worked. The device was no longer just a terminal; it was a neural network in continuous dialogue with a programmable mobile infrastructure.

5G introduced concepts alien to earlier wireless generations. Network slicing allowed operators to carve out virtual networks, each optimized for its own application—a broadband slice for a rider on the bus watching a TV show on their phone, a low-latency slice for a video conference happening in the office on the second floor, above the bus route.

The applications followed. Smart factories deployed thousands of connected sensors. Hospitals began experimenting with remote diagnostics. AR glasses moved from novelty to tool. 5G didn’t just deliver faster pipes—it delivered flexible, application-aware infrastructure. The network had begun to sense—and react.

6G Will Usher In an Internet of AI Agents


Text "6G" with a robotic arm reaching toward a satellite against a dotted background.

Devices: Digital and physical AI agents

Networks: AI-native fabrics fusing communication and sensing, via ground-based and non-terrestrial connections

Applications: Intelligent agents coordinating healthcare, transportation, and consumer applications globally

The transformation 6G promises is not incremental. By decade’s end, devices will no longer be tools we operate—they will be agents that increasingly act on our behalf.

AI agents already live inside our phones: Apple Intelligence summarizes emails and coordinates across apps; Samsung’s Galaxy AI translates conversations in real time; Google’s Gemini Nano processes queries without touching the cloud. These are early sketches of software that reasons, plans, and executes. Before long, agents will be negotiating your calendar, managing your finances, and coordinating your travel—not by following scripts, but by inferring intent.

Physical AI agents will extend these capabilities into the physical world. At CES 2025, Nvidia CEO Jensen Huang announced Cosmos, a foundation model trained on video and physics simulations to teach robots and vehicles how to navigate unpredictable environments. Using Cosmos, autonomous vehicles could negotiate intersections collaboratively, warehouse robots and robotic arms could coordinate with digital twins, and medical devices could monitor patients and summon help before symptoms become emergencies. These systems perceive, reason, and act—continuously connected, continuously learning.

The network coordinating them will be unlike any previous generation. 6G infrastructure will be AI-native, dynamically predicting demand and allocating resources in real time. It will fuse communication with sensing (a.k.a. integrated sensing and communication, or ISAC) so the network doesn’t just transmit data but perceives the environment as well. Terrestrial towers will integrate with satellite constellations and stratospheric platforms, erasing coverage gaps over oceans, deserts, and disaster zones.

What emerges is not just faster wireless. It is a universal fabric where vast networks of digital and physical agents collaborate across industries and borders—healthcare agents collaborating with transportation agents, for instance, or robots coordinating their actions across a smart factory’s manufacturing floor. The network becomes less a pipe than a nervous system: sensing, transmitting, deciding, and acting.

Beyond Devices, Networks, and Apps

The history of wireless connectivity is a history of Devices, Networks, and Applications. Every generation from 1G through 6G redefined each of those three elements. However, 6G marks a departure point where devices, network elements, and applications begin to lose definition as discrete entities unto themselves. As the network grows more capable, it also paradoxically becomes less visible—connection without connectors.

From 1G’s brick-sized phones to 6G’s digital fabric, wireless has moved from analog voices to autonomous agents—present everywhere, noticed nowhere, continuously interconnecting digital and physical worlds.

Reference: https://ift.tt/fbXC4eu

Wednesday, March 11, 2026

14,000 routers are infected by malware that's highly resistant to takedowns


Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices—primarily made by Asus—that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime.

The malware—dubbed KadNap—takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen’s Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it’s unlikely that the attackers are using any zero-days in the operation.

A botnet that stands out among others

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia, a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.
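For readers unfamiliar with Kademlia: nodes and keys share a single ID space, and "closeness" between two IDs is simply their XOR. Each node stores only a small routing table of peers bucketed by distance, and lookups converge on the closest IDs in a handful of hops, so no single machine ever holds the full peer list or a fixed C2 address. A toy sketch of the metric (illustrative only, not KadNap's code):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: the XOR of two IDs, read as an integer."""
    return a ^ b

def k_closest(target: int, peers: list[int], k: int = 3) -> list[int]:
    """Return the k known peers closest to `target` under the XOR metric.

    A real node queries only the closest peers it knows and iterates,
    converging in O(log n) steps across the swarm.
    """
    return sorted(peers, key=lambda p: xor_distance(target, p))[:k]

# Four peers with 4-bit IDs; look up the ones nearest to ID 0b0101.
peers = [0b0001, 0b0100, 0b0111, 0b1100]
print(k_closest(0b0101, peers))  # [4, 7, 1] -- distances 1, 2, 4
```

Because command-and-control endpoints are resolved through this kind of lookup rather than hard-coded, seizing any individual node removes almost nothing from the network, which is why the design resists conventional takedowns.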

Reference: https://ift.tt/ert6K75
