Tuesday, March 31, 2026

Quantum computers need vastly fewer resources than thought to break vital encryption


Building a utility-scale quantum computer that can crack one of the most vital cryptosystems—elliptic curves—doesn’t require nearly the resources anticipated just a year or two ago, two independent whitepapers have concluded. In one, researchers demonstrated the use of neutral atoms as reconfigurable qubits with free access to one another. They went on to show that this approach could allow a quantum computer to break 256-bit elliptic-curve cryptography (ECC) in 10 days with one-hundredth the overhead previously estimated. In a second paper, Google researchers showed how to break the ECC securing blockchains for bitcoin and other cryptocurrencies in less than nine minutes while achieving a 20-fold resource reduction.

Taken together, the papers are the latest sign that cryptographically relevant quantum computing (CRQC) at utility scale is making meaningful progress. The advances are largely driven by new quantum architectures developed by physicists and computer scientists in a push to create quantum computers that operate correctly even in the presence of the errors that occur whenever qubits—the quantum analog of classical computing bits—interact with their environment. The other key driver is a stream of ever-more-efficient refinements to Shor’s algorithm, the 1994 algorithm showing that a quantum computer could break the ECC and RSA cryptosystems in polynomial time, specifically cubic time, far faster than the exponential time required by today’s classical computers.
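A toy calculation makes the polynomial-versus-exponential gap concrete. The step counts below are illustrative orders of magnitude, not real cost models; the 2^(n/2) classical figure assumes a Pollard-rho-style attack on an n-bit curve:

```python
# Rough scaling sketch: Shor's algorithm runs in time polynomial in the
# key length n (roughly n^3), while the best known classical attacks on
# an n-bit elliptic curve scale exponentially (about 2^(n/2) operations).
# These are illustrative step counts, not real hardware cost models.
for n in (128, 256):
    quantum_steps = n ** 3          # polynomial: cubic in key length
    classical_exp = n // 2          # exponent of the classical attack
    print(f"n={n}: quantum ~{quantum_steps:,} steps "
          f"vs classical ~2^{classical_exp} steps")
```

For a 256-bit curve the cubic term stays in the tens of millions, while the classical exponent climbs to 2^128 — which is why resource estimates for CRQC focus on qubit counts and error rates rather than algorithmic runtime.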

Neither paper has been peer-reviewed.


Reference: https://ift.tt/ancJr9b

The ’80s Submersible That Transformed Underwater Exploration




As a kid, I loved the 1980s aquatic adventure show Danger Bay. True to the TV show’s name, danger was always lurking at the Vancouver Aquarium, where the show was set. In one memorable episode, young Jonah and a friend get trapped in a sabotaged mini-submarine, and Jonah’s dad, a marine-mammal veterinarian, comes to the rescue in a bubble-shaped underwater vehicle. Good stuff! Only recently—as in when I started working on this column—did I learn that the rescue vehicle was not a stage prop but rather a real-world research submersible named Deep Rover.

What Was Deep Rover and What Did It Do?

Built in 1984 and launched the following year, Deep Rover was a departure from standard underwater vehicles, which typically required divers to lie in a prone position and look through tiny portholes while tethered to a support ship.

Deep Rover was designed to satisfy human curiosity about the underwater world. As the rover moved freely through the water down to depths of 1,000 meters, the operator sat up in relative comfort in the cab, inside a clear 13-centimeter-thick acrylic bubble with panoramic views—an inverted fishbowl, with the human immersed in breathable air while the sea creatures looked in. Used for scientific research and deepwater exploration, it set a number of dive records along the way.

Submarine designer Graham Hawkes [left] and marine biologist Sylvia Earle [right] came up with the idea for Deep Rover. Alain Le Garsmeur/Alamy

The team behind Deep Rover included U.S. marine biologist Sylvia Earle and British marine engineer and submarine designer Graham Hawkes. Earle and Hawkes’s collaboration had begun in May 1980, when Earle complained to Hawkes about the “stupid” arms on Jim, an atmospheric diving suit; she didn’t realize she was complaining to one of Jim’s designers. Hawkes explained the difficulty of designing flexible joints that could withstand dueling pressures of 101 kilopascals on the inside—that is, the normal atmospheric pressure at sea level—and up to about 4,100 kPa on the outside. But he listened carefully to Earle’s wish list for a useful manipulator. Several months later, he came back with a design for a superbly dexterous arm that could hold a pencil and write normal-size letters.

Earle and Hawkes next turned to designing a one-person bubble sub, which they considered so practical that it would be an easy sell. But after failing to attract funding, they decided to build it themselves. In the summer of 1981, they pooled their resources and cofounded Deep Ocean Technology, setting up shop in Earle’s garage in Oakland, Calif.

Phil Nuytten, a Canadian designer of submersibles and dive systems, engineered Deep Rover. Stuart Westmorland/RGB Ventures/Alamy

They still found that customers weren’t interested in their crewed submersible, though, so they turned to unmanned systems. Their first contract was for a remotely operated vehicle (ROV) for use in oil-rig inspection, maintenance, and repair. Other customers followed, and they ended up building 10 of these ROVs. In 1983, they returned to their original idea and contracted with the Canadian inventor and entrepreneur Phil Nuytten to engineer Deep Rover.

Nuytten didn’t have to be convinced of the value of the submersible. He had grown up on the water and shared their dream. As a teenager, he opened Vancouver’s first dive shop. He then worked as a commercial diver. He founded the ocean- and research-tech companies Can-Dive Services (in 1965) and Nuytco Research (in 1982), and he developed advanced submersibles as well as diving systems. These included the Newtsuit, an aluminum atmospheric diving suit for use on drilling rigs and salvage operations.

Deep Rover’s first assignment was to boost offshore oil exploration and drilling in eastern Canada. Funding came from the provincial government of Newfoundland and Labrador and the oil companies Petro-Canada and Husky Oil. But the collapse of oil prices in the mid-1980s made it uneconomical to operate the submersible. So the rover’s mission broadened to scientific research.

Deep Rover’s Technical Specs

The pilot could operate Deep Rover safely for 4 to 6 hours at a depth of 1,000 meters and speeds of up to 1.5 knots (46 meters per minute). The submersible could be tethered to a support ship or move freely on its own. Two deep-cycle, lead-acid battery pods weighing about 170 kilograms apiece provided power. It had a VHF radio and two frequencies of through-water communications, plus tracking beacons.

From 1987 to 1989, Deep Rover did a series of dives in Oregon’s Crater Lake, the deepest lake in the United States. During one dive, National Park Service biologist Mark Buktenica [top] collected rock samples. NPS

The rover’s four thrusters—two horizontal fixed aft thrusters and two rotating wing thrusters—could be activated in any combination through microswitches built into the armrest. The pilot navigated using a gyro compass, sonar, and depth gauges (both digital and analog).

Much to Earle’s delight, Deep Rover had two excellent manipulators, each with four degrees of freedom, thus solving the problem that had started her down this path of invention. The pilot controlled the manipulators with a joystick at the end of each armrest. Sensory feedback systems helped the pilot “feel” the force, motion, and touch. The two arms had wraparound jaws and could lift about 90 kg.

If something went wrong, Deep Rover carried five days’ worth of life support stores and had a variety of redundant safety features: oxygen and carbon dioxide monitoring equipment; a halon (breathable) fire extinguisher; a full-face BIBS (built-in breathing system) that tapped into the starboard air bank; and a ground fault-detection system.

If needed, the rover could surface quickly by jettisoning equipment, including the battery pods and a 90-kg drop weight in the forward bay. In dire circumstances, the pressure hull (the acrylic bubble, that is) could separate from the frame, taking with it only its oxygen tanks, strobe, through-water communications, and wing thrusters.

Deep Rover’s Achievements

From 1984 to 1992, Deep Rover conducted about 280 dives. It inspected two of the tunnels near Niagara Falls that divert water to the Sir Adam Beck II hydroelectric plant. In California’s Monterey Bay, the rover let researchers film previously unknown deep-sea marine life, which helped establish the Monterey Bay Aquarium Research Institute. At Crater Lake National Park, in Oregon, Deep Rover proved the existence of geothermal vents and bacteria mats, leading to the protection of the site from extractive drilling.

Deep Rover was featured in a short film shown at Vancouver’s Expo ’86, the first of several TV and movie appearances. There was Danger Bay. Director James Cameron used an early prototype of the submersible in his 1989 film The Abyss. Deep Rover also made an appearance in Cameron’s 2005 documentary Aliens of the Deep.

In 1992, Deep Rover came to the end of its working life. It now resides at Ingenium, Canada’s Museums of Science and Innovation, in Ottawa. For a time, Deep Ocean Engineering continued to develop later generations of the submersible. Eventually, though, uncrewed remotely operated and autonomous underwater vehicles became the norm for deep-sea missions, replacing human pilots with sensors and equipment. New ROVs can dive significantly deeper than human-piloted ones, and new cameras are so good that it feels like you’re there…almost. And yet, humans still long to have the personal experience of exploring the depths of the oceans.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the April 2026 print issue as “All Alone in the Abyss.”

References


My friends at Ingenium, Canada’s Museums of Science and Innovation, helpfully provided me with background material on why they decided to acquire Deep Rover. They also published a great blog post about the rover.

Dirk Rosen, executive vice president of engineering at DEEP, published specifications for Deep Rover in his 1986 IEEE paper “Design and Application of the Deep Rover Submersible.”

Sylvia Earle, known affectionately as “Her Deepness,” has written extensively about the ocean depths. I found her book Sea Change: A Message of the Oceans (G.P. Putnam’s Sons, 1995) to be especially enjoyable.

Reference: https://ift.tt/vGAIat8

Monday, March 30, 2026

Invences Empowers Small Businesses With Smart Telecom Networks




To stay competitive, many small businesses need advanced wireless communication networks, not only to communicate but also to leverage technologies such as artificial intelligence, the Internet of Things, and robotics. Often, however, the businesses lack the technical expertise needed to install, configure, and maintain the systems.

Bhaskara Rallabandi, who spent more than two decades working for major telecom companies, decided to use his expertise to help small businesses. Rallabandi, an IEEE senior member, is an expert certified by the International Council on Systems Engineering.

Invences

Cofounder: Bhaskara Rallabandi
Founded: 2023
Headquarters: Frisco, Texas
Employees: 100

In 2023 he helped found Invences, a telecommunications automation company headquartered in Frisco, Texas.

Invences services include designing, building, and installing data centers, as well as cost-effective and secure wireless, private, IoT, and virtual communications networks.

The company has set up systems for farms, factories, and universities in rural and urban areas including underserved communities. Its mission, Rallabandi says, is to “build autonomous, ethical, and sustainable networks that connect communities intelligently.”

For his work, he received last year’s IEEE-USA Entrepreneur Achievement Award for Leadership in Entrepreneurial Spirit, recognizing his “entrepreneurial leadership in founding and scaling a U.S.-based technology company, advancing innovation in 5G/6G and Open RAN [radio access network], shaping global standards, and inspiring future leaders through mentorship and community impact.”

Building a telecommunications career

He began his telecommunications career in 2009 as a manager and principal network engineer at Verizon’s Innovation Labs in Waltham, Mass. He and his team ran some of the earliest long-term evolution and evolved packet core performance trials. (LTE is the 4G wireless broadband standard for mobile devices. EPC is the IP-based, high-performance core network architecture for 4G LTE networks.)

That work at Innovation Labs, he says, was key to the development of the first 4G systems. It set the stage for scalable, interoperable broadband architectures that underpin today’s 5G and 6G designs.

“We built the first bridge between legacy and cloud-native networks,” he says.

He left in 2011 to join AT&T Labs in Redmond, Wash. As senior manager and principal solutions architect, he oversaw the design, integration, and testing of the company’s next-generation wireless systems. He also led projects that redefined network automation and set up cloud computing systems including FirstNet, the nationwide broadband network for first responders, and VoLTE (voice over LTE), which carries voice calls over the LTE data network.

In 2018 Rallabandi was hired as a principal and a senior manager of engineering at Samsung Telecommunications America’s R&D division, in Mountain View, Calif. He led the development of 5G virtualization and Open RAN initiatives, which enable more flexible, scalable, and efficient large network deployments and interoperability among vendors.

Designing networks for small businesses

Feeling that he wasn’t reaching his full potential in the corporate world, and to help small businesses, he opted to start his own venture in 2023 with his wife, Lakshmi Rallabandi, a computer science engineer. She is Invences’s CEO, and he is its founding principal and chief technology advisor.

Invences, which is self-funded and employs about 100 people, has more than 50 customers from around the world.

“I wanted to do something more interesting where I could use the knowledge I gained working for these big companies to fill the gaps they overlooked in terms of automation” for small businesses, he says. “I have a team of people who, combined, have 200 years of technology experience.”

The startup builds networks that simplify its clients’ operations and reduce their costs, he says.

Instead of duplicating how major telecom carriers build networks for dense urban areas, he says, his designs reimagine the network architecture to lower its complexity, costs, and operational overhead.

“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”

The systems integrate new technologies such as Open RAN, virtualized RAN, digital twins, telemetry, and advanced analytics. Some networks also incorporate agentic AI, an autonomous system that runs independently of humans and uses AI agents that plan and act across the network. Digital twins evaluate the agent’s decisions before releasing them.

“Autonomy is not about removing humans from the loop,” Rallabandi says. “It is about giving systems the ability to manage complexity so humans can focus on intent and outcomes.”

Rallabandi also has worked on AI-driven telecom observability technologies designed to allow networks to detect anomalies and optimize performance automatically.

He has developed a virtual O-RAN innovation lab, where clients can test the interoperability of their 5G systems, try out their enhancements, run trials of future functions, and experiment with updates.

Invences partnered with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz. FarmGrid used private 5G networks, edge-computing AI, and digital twins to make the operations more efficient.

“The project connects farms with sensors, analytics platforms, and autonomous equipment to enable precision agriculture, water optimization, and real-time decision-making,” Rallabandi says.

IEEE Senior Member Bhaskara Rallabandi talks about partnering with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz. TECKNEXUS

Paying it forward through IEEE programs

Rallabandi says he believes staying involved with IEEE is important to his career development and a way to give back to the profession. He is a frequent invited speaker at IEEE conferences.

He is active with IEEE Future Networks and its Connecting the Unconnected (CTU) initiative. Members of the Future Networks technical community work to develop, standardize, and deploy 5G and 6G networks as well as successive generations.

CTU aims to bridge the digital divide by bringing Internet service to underserved communities. During its annual challenge, Rallabandi works with the winning students, researchers, and innovators to help them turn their concepts into affordable, practical options.

“CTU represents the best of IEEE,” he says. “It is about taking innovation out of conferences and into communities that need it the most.

“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”

He participates in the recently launched IEEE Future Networks Empowerment Through Mentorship initiative, which helps innovators, entrepreneurs, and startups expand their companies by educating them about finance, marketing, and related concepts.

“IEEE gives me both a voice and a responsibility,” Rallabandi says. “We’re not just developing technology; we are shaping how humanity connects.”

Reference: https://ift.tt/mgI3p54

Facial Recognition Is Spreading Everywhere




Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.

Yet the story those photos tell inevitably contains errors. FRT makers, like the makers of any diagnostic technology, must balance two types of error: false positives and false negatives. There are three possible outcomes.

Three Possible Outcomes

a) identifies the suspect, since the two images are of the same person, according to the software. Success!

b) matches another person in the footage with the suspect’s probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice. Brandon Palacio

c) fails to find a match at all. The suspect may be evading cameras, but if the cameras have only low-light or bad-angle images, this creates a false negative. This type of error might let a suspect off and raise the cost of the manhunt. Brandon Palacio

In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are around two in 1,000 and false positives are less than one in 1 million.

In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect.

Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.

Less clear photographs are harder for FRT to process. iStock

What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.

Facial Recognition Gone Wrong

THE NEGATIVES OF FALSE POSITIVES

2020: Robert Williams’s wrongful arrest led to his detention. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits. iStock

ALGORITHMIC BIAS

2023: A court bans Rite Aid from using facial recognition for five years over its use of a racially biased algorithm. iStock

TOO FAST, TOO FURIOUS?

2026: U.S. immigration agents misidentify a woman they’d detained as two different women. VICTOR J. BLUE/BLOOMBERG/GETTY IMAGES

Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of its 10,000 registrants. Even at 99.9 percent accuracy, you’ll get about a dozen false positives or negatives, which may be a trade-off worth making for the fair’s organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.
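The arithmetic behind these scenarios is simple enough to sketch. The function and rates below are illustrative assumptions, not figures from any deployed system:

```python
# Back-of-the-envelope sketch: how expected false matches scale with
# gallery size when one probe image is compared against every gallery
# image. The error rate is an illustrative assumption.

def expected_false_matches(gallery_size: int, error_rate: float) -> float:
    """Expected number of wrong matches for a single probe search."""
    return gallery_size * error_rate

# Trade-fair scenario from the text: 10,000 registrants at 99.9 percent
# accuracy, i.e. a 0.1 percent chance of error per comparison.
print(expected_false_matches(10_000, 0.001))      # roughly a dozen errors

# The same rate applied across a city of 1 million people makes
# misidentification routine rather than rare.
print(expected_false_matches(1_000_000, 0.001))
```

The takeaway is that accuracy figures that sound impressive per comparison still produce errors in proportion to gallery size, which is why the stakes rise with the scale of deployment.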

What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have done since June 2025, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. The size of the potential gallery is at least 1.2 billion images.

At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.

Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist Erik Learned-Miller of the University of Massachusetts Amherst: “The care we take in deploying such systems should be proportional to the stakes.”

Reference: https://ift.tt/y2Iv1ER

How 5G Non-Terrestrial Networks Enable Ubiquitous Global Connectivity




5G covers less than 40 percent of the world’s landmass. This whitepaper details how 3GPP Release 17 addresses six satellite challenges: delay, Doppler, path loss, polarization, spectrum, and architecture.

What Attendees Will Learn

  1. Why non-terrestrial networks are now integral to the 5G roadmap — Understand how the Third Generation Partnership Project (3GPP) Release 17 incorporates satellite-based connectivity into the 5G system, targeting ubiquitous coverage across maritime, remote, and polar regions where terrestrial networks reach less than 40% of the world’s landmass. Learn the distinction between New Radio non-terrestrial networks for mobile broadband and Internet of Things non-terrestrial networks for low-power machine-type communications.
  2. How satellite constellation design shapes coverage, capacity, and latency — Examine how orbit altitude (low earth orbit, medium earth orbit, geostationary earth orbit), beam footprint geometry, elevation angle, and inclination determine coverage area, round-trip time, and differential delay across user equipment within a single beam. Explore the trade-offs between transparent bent-pipe and regenerative onboard-processing payload architectures.
  3. What radio frequency challenges distinguish satellite links from terrestrial propagation — Explore the major technical challenges: high free-space path loss, time-variant Doppler, differential delay across large beam footprints, Faraday rotation of polarization through the ionosphere, and spectrum coexistence between terrestrial and non-terrestrial bands in the S-band and L-band.
  4. How 5G protocols must adapt to support non-terrestrial connectivity — Learn the specific amendments to hybrid automatic repeat request operation, timing advance control (split into common and user-equipment-specific components), random access procedure timing extensions, discontinuous reception power saving adaptations, earth-fixed tracking area management, conditional handover mechanisms, and feeder link switching for service continuity in a unique propagation environment.

Download this free whitepaper now!

Reference: https://ift.tt/bsHPAKI

Friday, March 27, 2026

IEEE Professional Development Suite Teaches In-Demand Skills




In today’s technological landscape, the only constant is the rate of obsolescence. As engineers move deeper into the eras of 6G, ubiquitous artificial intelligence, and hyper-miniaturized electronics, a traditional degree is only a starting point.

To remain competitive in today’s job market, technical specialists must evolve into future-ready professionals by cultivating more than just niche expertise. Success now demands a high degree of adaptive intelligence and strategic communication, allowing specialists to translate complex data into actionable business decisions as industry shifts accelerate.

To bridge the gap between technical proficiency and organizational leadership, the IEEE Professional Development Suite offers training on programs designed to build the strategic competencies required to navigate today’s complex landscape. The suite provides deep technical dives into domains such as telecommunications connectivity and microelectronics reliability. Organizations can stay ahead of the curve through informed decision-making and a future-ready workforce.

Mastery of electrostatic discharge and 5G networks

Within the semiconductor sector, which is projected to become a US $1 trillion industry by 2030, electrostatic discharge (ESD) is a major reliability challenge. Because even a microscopic, unnoticed discharge can compromise a semiconductor, ESD issues account for up to one-third of all field failures, according to the EOS/ESD Association.

IEEE’s targeted training—the online Practical ESD Protection Design certificate program—equips teams with technical protocols to mitigate the risks and ensure long-term hardware reliability. Specialized ESD training has become essential for chip designers and manufacturing professionals seeking to improve discharge control.

The interactive modules cover theory, real-world case studies, and practical mitigation techniques. The standards-based instruction is aligned with ANSI/ESD S20.20-2021: Protection of Electrical and Electronic Parts and other industry guidelines.

As 5G network capabilities expand globally, so does the demand for engineers who can master the protocols and procedures required to manage complex telecommunications systems. The IEEE 5G/6G Essential Protocols and Procedures Training and Innovation Testbed, in partnership with Wray Castle, takes a deep dive into the 5G network function framework, registration processes, and packet data unit session establishment. The program is designed for system engineers, integrators, and technical professionals responsible for 5G signaling. Stakeholders such as network operators, equipment vendors, regulators, and handset manufacturers could find the program to be beneficial as well.

“The IEEE Professional Development Suite ensures that learners are not just keeping pace with change but helping to drive it.”

To bridge the gap between theory and practice, the course includes three months of free access to the IEEE 5G/6G Innovation Testbed. The secure, cloud-based platform offers a private, end-to-end 5G network environment where individuals and teams can gain hands-on experience with critical system signaling and troubleshooting.

Leadership training programs

Technical knowledge alone is not enough to climb the corporate ladder. To thrive today, engineering leaders must have a strategic vision and people-centric leadership skills.

The IEEE Leading Technical Teams training program focuses on the challenges of managing engineers in R&D environments and fostering creative problem-solving through an immersive learning experience. Designed for professionals who have been in a leadership position for at least six months, it helps participants build self-awareness as leaders.

The program includes a 360-degree assessment that gathers feedback about the individual from peers and direct reports to build a personalized development plan. The goal is to help technical professionals transition from high-performing individual contributors into leaders who drive innovation by inspiring their teams rather than just managing tasks.

Organizations can enroll groups of 10 or more to learn as a cohort—which can ensure that everyone stays on the same page while setting a training schedule that fits the team’s deadlines.

In collaboration with the Rutgers Business School, IEEE offers two mini MBA programs to bridge the gap between technical expertise and executive leadership. The programs offer flexibility to fit the demanding schedules of senior professionals. The online format lets participants engage with content as their time permits, while live virtual office hours with faculty provide opportunities for real-time interaction.

During the 12-week mini MBA for engineers curriculum, technical professionals master core competencies such as financial analysis, business strategy, and negotiation to transition effectively into management roles.

The mini MBA in artificial intelligence embeds AI literacy directly into business strategy rather than treating the technology as a standalone subject. Participants learn to evaluate AI through financial modeling and governance frameworks, gaining a practical foundation to lead initiatives that incorporate the technology.

The programs are offered to individuals as well as to organizations interested in training groups of 10 employees or more.

Earning credits that count

All the programs within the IEEE Professional Development Suite offer continuing education units and professional development hours.

Earning globally recognized credits provides a professional advantage, signaling a commitment to growth that often serves as a prerequisite for advancing into senior, lead, or principal roles. Additionally, the credits satisfy annual professional engineering license renewal requirements, ensuring practitioners remain compliant while expanding their capabilities.

Why curated content matters

Developed by IEEE Educational Activities, the training programs are peer-reviewed and built to align with industry needs. By focusing on upskilling (improving current skills) and reskilling (learning new ones), the IEEE Professional Development Suite ensures that learners are not just keeping pace with change but helping to drive it.

Reference: https://ift.tt/O0UVn5k

Video Friday: Beep! Beep! Roadrunner Bipedal Bot Breaks the Mold




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA
RSS 2026: 13–17 July 2026, SYDNEY
Summer School on Multi-Robot Systems: 29 July–4 August 2026, PRAGUE

Enjoy today’s videos!

“Roadrunner” is a new bipedal wheeled robot prototype designed for multi-modal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot’s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware.

[ Robotics and AI Institute ]

Incredibly (INCREDIBLY!) NASA says that this is actually happening.

NASA’s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. Using a daring mid-air deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice.

[ NASA ]

NASA’s MoonFall mission will blaze a path for future Artemis missions by sending four highly mobile drones to survey the lunar surface around the Moon’s South Pole ahead of astronauts’ arrival there. MoonFall is built on the legacy of NASA’s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments.

For what it’s worth, Moon landings have a success rate well under 50%. So let’s send some robots there to land over and over!

[ NASA ]

In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps — slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts — with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles.

[ MIT Media Lab ]

In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to Boston Dynamics Spot, equipped with two LiDARs and a C1 camera, and can freely climb stairs and steep slopes! All hardware, software, and learning environments are released as open source.

[ MEVIUS2 ]

Thanks, Kento!

What goes into preparing for a live performance? Arun highlights the reliability testing that goes into trying a new behavior for Spot.

[ Boston Dynamics ]

In this work, a multi-robot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots.

That soundtrack though.

[ GitHub ]

Thanks, Keisuke!

Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multi-modal reinforcement learning framework. The result: a single controller that handles rough terrains, steep slopes, and high-rise stairs—while gracefully recovering from sensor failures and situations it has never seen before.

That cliff behavior is slightly uncanny.

[ DreamWaQ++ ]

I take issue with this from iRobot:

While the pyramid exploration that iRobot did was very cool, they did it with a custom-made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here’s a bit more detail on the pyramids thing:

[ iRobot ]

More robots in circus please!

[ Daniel Simu ]

MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control.

[ MIT ]

At NVIDIA GTC 2026, we showcased how AI is moving into the physical world. Visitors interacted with robots using voice commands, watching them interpret intent and act in real time — powered by our KinetIQ AI brain.

[ Humanoid ]

Props to Sony for their continued support and updates for Aibo!

[ Aibo ]

This robot looks like it could be a little curvier than normal?

[ LimX Dynamics ]

Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete post-cooking cleaning. Equipped with multi-modal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation.

That 7x is doing some heavy lifting.

[ Zhejiang Lab ]

This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on “Formal Methods for Robotics in the Age of Big Data.”

Formal methods – mathematical techniques for describing systems, capturing requirements, and providing guarantees – have been used to synthesize robot control from high-level specification, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? In this talk I will give a few examples for what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches.

[ Carnegie Mellon University Robotics Institute ]

Reference: https://ift.tt/OPsyoC3

A New Way to Spray Paint Color




We’re all familiar with mixing red, yellow, and blue paint in various ratios to instantly make all kinds of colors. This works great for oils or watercolors, but fails when it comes to cans of spray paint. The paint droplets can’t be blended once they are aerosolized. Consequently, although spray cans are great for applying even coats of paint to large areas very quickly, spray-paint artists need a separate can for every color they want to use—until now.

Back in 2018, when I first saw professional spray artists lugging dozens to hundreds of cans to their work sites, I was inspired to start noodling on a solution. I’ve worked at Google X, Alphabet’s “moonshot factory,” as a hardware engineer, and I’m now building a startup in mechanical-design software. I’m no painter, but I know my way around mechatronics.

I wanted my solution to be inexpensive and simple enough to build as a DIY project and functional enough for an artist to use, without breaking their flow. So I began prototyping a system that combines base colors while they are still in pressurized form from off-the-shelf cans.

An illustration of how a spring-loaded arm driven by a stepper motor with a roller bearing at one end opens and closes a tube by pressing down on it. This new rotary pinch valve can be opened and closed in tens of milliseconds and prevents backpressure from clogging lines.James Provost

I tried a few approaches where pressurized paint from the base-color cans fed through tubes into a mixing channel, before emerging from a spray head. To control the ratios, I decided to borrow a trick that would be familiar to anyone who’s ever had to control the brightness of an LED using a microcontroller: pulse-width modulation. Initially, I used electronically controlled solenoid valves to release the paint from the cans. The paint would flow into a mixing channel for a relative duration that corresponded to the ratio of the base colors required to make a given hue. However, this failed because different cans never have the same internal pressure. Whenever two valves were open at the same time, the pressure difference would make paint flow backward into the lower-pressure can.

As an alternative, I removed the mixing channel and tried making the paint pulses from each can sequentially converge into a tube so that no more than one valve would ever be open at a time. Surprisingly, this worked perfectly. The backflow was eliminated, and it turned out that the natural turbulence of the flow was sufficient to mix the paints. Let’s say you want to produce a clementine orange color. This requires yellow and red paint in a ratio of 1:2, so the yellow valve opens for a period of time, and then the red valve opens for twice as long. The system then keeps repeating this cycle of pulses at a rapid pace to instantly create the spray-paint color you want.


The theory is straightforward, but making this work in practice took quite a bit of experimentation. First, I had to determine the actual durations of pulses that would produce evenly mixed colors, not just their ratios. I also needed to work out the size of the tubing (too narrow and you’d get low spray force; too wide and you’d have paint accumulating in the tubes). Eventually I settled on a maximum pulse duration of 250 milliseconds and a tube diameter of 1 millimeter.
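The sequential pulse-timing scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the author’s firmware: the `pulse_schedule` function, the valve names, and the even spacing of the eight allowed durations between 30 and 250 milliseconds are all assumptions made for the example.

```python
from math import gcd
from functools import reduce

# Eight allowed pulse durations (ms), assumed evenly spaced between
# the article's stated 30 ms minimum and 250 ms maximum.
STEPS = [round(30 + i * (250 - 30) / 7) for i in range(8)]

def pulse_schedule(ratio):
    """Turn a color ratio like {'yellow': 1, 'red': 2} into an ordered
    list of (valve, duration_ms) pulses. Only one valve is ever open
    at a time, so pressure differences between cans can't cause
    backflow into a lower-pressure can."""
    # Reduce the ratio so that, e.g., 4:6 becomes 2:3.
    g = reduce(gcd, ratio.values())
    reduced = {color: r // g for color, r in ratio.items()}
    # Scale the smallest share to the shortest allowed pulse (30 ms)
    # and clamp every pulse to the 250 ms maximum.
    return [(color, min(r * STEPS[0], STEPS[-1]))
            for color, r in reduced.items()]

# Clementine orange: yellow and red in a 1:2 ratio.
print(pulse_schedule({"yellow": 2, "red": 4}))  # [('yellow', 30), ('red', 60)]
```

Repeating the returned pulse list in a tight loop approximates the continuous cycle the article describes.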

Inventing A New Valve

Even though the system worked, the solenoid valves I used constantly clogged up. Designed for water purifiers, the valves didn’t prevent paint from entering the mechanism, where the paint would harden. Moreover, when the valves were turned off, they could stop backflow only if the inlet remained pressurized. So disconnecting a paint can from the system would cause instant leaking. Other off-the-shelf valves I tried couldn’t cycle fast enough and were too expensive.


So I created my own mechanism: a high-speed, electronically controlled, rotary pinch valve. It has a stepper motor that rotates a lever with a rolling bearing to constrict fluid flow inside a flexible tube. This concept isn’t new—there’s something like it in every peristaltic pump. But I added a spring to firmly hold the lever in the closed position against any back pressure when the motor isn’t powered, making it a normally closed valve that isolates the attached can. Additionally, the valve is fast enough to be open for as little as 30 milliseconds.

I went through four major prototypes of the system before reaching a working version, and I had some spectacular failures along the way of the sort that only pressurized paint can provide. The final version uses four base colors—red, yellow, blue, and white—with the color mix controlled by four knobs attached to an Arduino Nano and a small display. The flow of paint is triggered by a push button placed above the spray head, similar to a spray can’s nozzle.

A diagram showing the arrangement of valves and control wires, along with a timing diagram of valves opening and closing, showing the red paint open for twice as long as the yellow paint in a continuous cycle. Cans holding base colors (A) are attached to valves (B). An Arduino-based control panel (C) opens and closes valves to mix paint before it is aerosolized (E). By quickly opening and closing valves with varying durations in sequence (D), you can mix paint in specific ratios to create desired colors.James Provost

The length of time a base color’s paint valve can be open is one of eight values between 30 and 250 ms. This means that the entire system—which I coincidentally dubbed Spectrum—can create hundreds of distinct spray-paint colors instantly. It produces fewer than 8⁴ (or 4,096) colors because duration ratios that are a multiple of each other will produce the same color—for example, 2:3 and 4:6. I added a force sensor to the push button, which allows for a gradient: Two color mixes can be dialed in, and as I increase my thumb’s pressure on the button, the paint mix shifts from one color to the other.
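The duplicate-ratio argument can be checked directly. Under the simplifying assumption that the eight allowed pulse durations behave like the integer levels 1 through 8, two duration tuples mix the same color exactly when one is a scalar multiple of the other, so each distinct color corresponds to one tuple whose entries share no common factor. This counting script is a sketch of that argument, not the author’s code:

```python
from itertools import product
from math import gcd
from functools import reduce

# Idealized model: each of the 4 base colors gets one of 8 pulse
# levels, treated as the integers 1..8. Tuples that are scalar
# multiples of one another (e.g. 2:3:1:1 and 4:6:2:2) mix the same
# color, so we count only tuples with no common factor.
distinct = sum(1 for t in product(range(1, 9), repeat=4)
               if reduce(gcd, t) == 1)
print(distinct, "distinct mixes out of", 8 ** 4, "tuples")  # 3823 out of 4096
```

Under this idealized model the system could produce a few thousand numerically distinct mixes; in practice many of those are perceptually indistinguishable, which is consistent with the hundreds of usable colors reported here.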

Spectrum’s various fixtures are 3D-printed, and project files and videos are available through my website at https://www.sandeshmanik.com/projects/spectrum. Preprints of technical descriptions of the rotary pinch valve and mixing methodology are available on TechRxiv. The total cost for the bill of materials is less than US $150.

Working on and off on the side for about seven years, I finally finished developing my system and writing the documentation in late 2025. After I posted a video to social media, I was heartened by the immediate positive response from spray-paint artists around the world. I’m now creating step-by-step instructions so that nontechnical people can build their own Spectrum paint sprayer. I look forward to seeing what creations artists out in the wild make!

Reference: https://ift.tt/WPDm1AY

How NYU’s Quantum Institute Bridges Science and Application




This sponsored article is brought to you by NYU Tandon School of Engineering.

Within a 6-mile radius of New York University’s (NYU) campus, there are more than 500 tech industry giants, banks, and hospitals. This isn’t just a fact about real estate; it’s the foundation for advancing quantum discovery and application.

While the world races to harness quantum technology, NYU is betting that the ultimate advantage lies not solely in a lab, but in the dense, demanding, and hyper-connected urban ecosystem that surrounds it. With the launch of its NYU Quantum Institute (NYUQI), NYU is positioning itself as the central node in this network; a “full stack” powerhouse built on the conviction that it has found the right place, and the right time, to turn quantum science into tangible reality.

Proximity advantage is essential because quantum science demands it. Globally, the quest for practical quantum solutions — whether for computing, sensing, or secure communications — has been stalled, in part, by fragmentation. Physicists and chemical engineers invent new materials, computer scientists develop new algorithms, and electrical engineers build new devices, but all three often work in isolated academic silos.

Three men pose at the 4th Annual NYC Quantum Summit 2025; attendees converse in the background. Gregory Gabadadze, NYU’s dean for science, NYU physicist and Quantum Institute Director Javad Shabani, and Juan de Pablo, Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and executive dean of the Tandon School of Engineering.Veselin Cuparić/NYU

NYUQI’s premise is that breakthroughs happen “at the interfaces between different domains,” according to Juan de Pablo, Executive Vice President for Global Science and Technology at NYU and Executive Dean of the NYU Tandon School of Engineering. The Institute is built to actively force those necessary collisions — to integrate the physicists, engineers, materials scientists, computer scientists, biologists, and chemists vital to quantum research into one holistic operation. This institutional design ensures that the hardware built by one team can be immediately tested by software developed by another, accelerating progress in a way that isolated departments never could.


NYUQI’s integrated vision is backed by a massive physical commitment to the city. The NYUQI is not just a theoretical concept; its collaborators will be housed in a renovated, million-square-foot facility in the heart of Manhattan’s West Village, backed by a state-of-the-art Nanofabrication Cleanroom in Brooklyn serving as a high-tech foundry. This is where the theoretical meets physical devices, allowing the Institute to test and refine the process from materials science to deployment.

NYU building exterior with "Science + Tech" signage, flags, and a passing yellow taxi. NYUQI will be housed in a renovated, million-square-foot facility in the heart of Manhattan’s West Village.Tracey Friedman/NYU

Leading this effort is NYUQI Director Javad Shabani, who, along with the other members, is turning the Institute into a hub for collaboration with private and public sector partners with quantum challenges that need solving. As de Pablo explains, “Anybody who wants to work on quantum with NYU, you come in through that door, and we’ll send you to the right place.” For New York’s vast ecosystem of tech giants and financial institutions, the NYUQI offers a resource they can’t build on their own: a cohesive team of experts in quantum phenomena, quantum information theory, communication, computing, materials, and optics, and a structured path to applying theoretical discoveries to advanced quantum technologies.

Solving the Challenge of Quantum Research

The NYUQI’s integrated structure is less about organizational management, and more about scientific requirement. The challenge of quantum is that the hardware, the software, and the programming are inherently interconnected — each must be designed to work with the other. To solve this, the Institute focuses on three applications of quantum science: Quantum Computing, Quantum Sensing, and Quantum Communications.

For Shabani, this means creating an integrated environment that bridges discovery with experimentation, starting with the physical components all the way to quantum algorithm centers. That will include a fabrication facility in the new building in Manhattan, as well as the NYU Nanofab in Brooklyn directed by Davood Shahjerdi. New York Senators Charles Schumer and Kirsten Gillibrand recently secured $1 million in congressionally-directed spending to bring Thermal Laser Epitaxy (TLE) technology — which allows for atomic-level purity, minimal defects, and streamlined application of a diverse range of quantum materials — to NYU, marking the first time the equipment will be used in the U.S.

Two people hold semiconductor wafers during a presentation with audience taking photos. NYU Nanofab manager Smiti Bhattacharya and Nanofab Director Davood Shahjerdi at the nanofab ribbon-cutting in 2023. The nanofab is the first academic cleanroom in Brooklyn, and serves as a prototyping facility for the NORDTECH Microelectronics Commons consortium.NYU WIRELESS

Tight control over fabrication lets researchers pivot quickly when a breakthrough in one area (say, finding a cheaper, more reliable material like silicon carbide) can be explored for use across all three applications. It also gives academics and the private sector alike unique access to sophisticated pieces of specialty equipment whose upkeep costs and maintenance expertise make them all but impossible to operate without the right staffing and environment.

3D model of a laboratory layout, highlighting the Yellow Room in bright yellow. The NYU Nanofab is Brooklyn’s first academic cleanroom, with a strategic focus on superconducting quantum technologies, advanced semiconductor electronics, and devices built from quantum heterostructures and other next-generation materials.NYU Nanofab

That speed and adaptability is the NYUQI’s competitive edge. It turns fragmented challenges into holistic solutions, positioning the Institute to solve real-world problems for its New York neighbors—from highly secure data transmission to next-generation drug discovery.

Testing Quantum Communication in NYC

The integrated approach also makes the NYUQI a testbed for the most critical near-term applications. Take Quantum Communications, which is essential for creating an “unhackable” quantum internet. In an industry first, NYU worked with the quantum start-up Qunnect to send quantum information between Manhattan and Brooklyn over a 10-mile quantum networking link built on standard telecom fiber. Instead of simulating communication challenges in a lab, the NYUQI team is already leveraging NYU’s city-wide campus and its existing infrastructure to test secure quantum transmission.


This isn’t just theory; it is building a functioning prototype in the most demanding, dense urban environment in the world. Real-time, real-world deployment is a critical component missing in other isolated institutions. When the NYUQI achieves results, the technology will be that much more readily available to the massive financial, tech, and communications organizations operating right outside their door.

Scientist in protective gear working in a laboratory with samples. NYUQI includes a state-of-the-art Nanofabrication Cleanroom in Brooklyn serving as a high-tech foundry.NYU Tandon

While the Institute has built the physical infrastructure and designed the necessary scientific architecture, its enduring contribution will be the specialized workforce it creates for the new quantum economy. This addresses the market’s greatest deficit: a lack of individuals trained not just in physics, but in the integrated, full-stack approach that quantum demands.

By creating a pipeline of 100 to 200 graduate and doctoral students who are encouraged to collaborate across Computing, Sensing, and Communications, the NYUQI is narrowing the skills gap. These will be future leaders who can speak the language of the physicist, the materials scientist, and the engineer simultaneously. This commitment to interdisciplinary talent is also fueled by the launch of the new Master of Science in Quantum Science & Technology program at NYU Tandon, positioning the university among a select group worldwide offering such a specialized degree.

Interdisciplinary education creates the shared language and understanding poised to make graduates coming from collaborations in the NYUQI extremely valuable in the current landscape. Quantum challenges are not just technical; they are managerial and philosophical as well. An engineer working with the NYUQI will understand the requirements of the nanofabrication cleanroom and the foundations of superconducting qubits for quantum computing, just as a physicist will understand the application needs of an industry partner like a large financial institution. In a field where the entire team must be able to communicate seamlessly, these are professionals truly equipped to rapidly translate discovery into deployable technology. Creating a talent pipeline at scale will provide a missing link that converts New York’s vast commercial energy into genuine quantum advantage.

NYUQI: Building Talent, Technology, and Structure

The vision for the NYUQI is an act of strategic geography that plays directly into the sheer volume of opportunity and demand right outside their new facility. By building the talent, the technology, and the structure necessary to capitalize on this dense environment, NYU is not just participating in the quantum race, it is actively steering it.

Conference room with attendees seated at round tables, facing a presenter on stage. Attendees of NYU’s 2025 Quantum Summit.Tracey Friedman/NYU

The initial hypothesis for the NYUQI was simple: the ultimate advantage lies in pursuing the science in the right place at the right time. Now, the institute will ensure that the next wave of scientific discovery, capable of solving previously intractable problems in finance, medicine, and security, will be conceived, built, and tested in the heart of New York City.

Reference: https://ift.tt/2crlDOC

Wednesday, March 25, 2026

Google bumps up Q Day deadline to 2029, far sooner than previously thought


Google is dramatically shortening its readiness deadline for the arrival of Q Day, the point at which quantum computers will be able to break the public-key cryptography algorithms that secure decades' worth of secrets belonging to militaries, banks, governments, and nearly every individual on Earth.

In a post published on Wednesday, Google said it is giving itself until 2029 to prepare for this event. The post went on to warn that the rest of the world needs to follow suit by adopting PQC—short for post-quantum cryptography—algorithms to augment or replace elliptic curves and RSA, both of which will be broken.

The end is nigh

“As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline,” wrote Heather Adkins, Google’s VP of security engineering, and Sophie Schmieg, a senior cryptography engineer. “By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”

Read full article

Comments

Reference : https://ift.tt/98OGBMK

What Happens When You Host an AI Café




“Can I get an interview?” “Can I get a job when I graduate?” Those questions came from students during a candid discussion about artificial intelligence, capturing the anxiety many young people feel today. As companies adopt AI-driven interview screeners, restructure their workforces, and redirect billions of dollars toward AI infrastructure, students are increasingly unsure of what the future of work will look like.

We had gathered people together at a coffee shop in Auburn, Alabama, for what we called an AI Café. The event was designed to confront concerns about AI directly, demystifying the technology while pushing back against the growing narrative of technological doom.

AI is reshaping society at breathtaking speed. Yet the trajectory of this transformation is being charted primarily by for-profit tech companies, whose priorities revolve around market dominance rather than public welfare. Many people feel that AI is something being done to them rather than developed with them.

As computer science and liberal arts faculty at Auburn University, we believe there is another path forward: One where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.

The AI Café Model

Last November, we ran two public AI Cafés in Auburn. These were informal, 90-minute conversations between faculty, students, and community members about their experiences with AI. In these conversational forums, participants sat in clusters, questions flowed in multiple directions, and lived experience carried as much weight as technical expertise.

We avoided jargon and resisted attempts to “correct” misconceptions, welcoming whatever emotions emerged. One ground rule proved crucial: keeping discussions in the present, asking participants where they encounter AI today. Without that focus, conversations could easily drift to sci-fi speculation. Historical analogies—to the printing press, electricity, and smartphones—helped people contextualize their reactions. And we found that without shared definitions of AI, people talked past each other; we learned to ask participants to name specific tools they were concerned about.

A pair of photos show people in chairs in a cafe raising their hands, and 3 people smiling in front of the audience. Organizers Xaq Frohlich, Cheryl Seals, and Joan Harrell (right) held their first AI Café in a welcoming coffee shop and bookstore. Well Red

Most importantly, we approached these events not as experts enlightening the masses, but as community members navigating complex change together.

What We Learned by Listening

Participants arrived with significant frustration. They felt that commercial interests were driving AI development “without consideration of public needs,” as one attendee put it. This echoed deeper anxieties about technology, from social media algorithms that amplify division to devices that profit from “engagement” and replace meaningful face-to-face connection. People aren’t simply “afraid of AI.” They’re weary of a pattern where powerful technologies reshape their lives while they have little say.

Yet when given space to voice concerns without dismissal, something shifted. Participants didn’t want to stop AI development; they wanted to have a voice in it. When we asked, “What would a human-centered AI future look like?” the conversation became constructive. People articulated priorities: fairness over efficiency, creativity over automation, dignity over convenience, community over individualism.

Three people standing together in front of a yellow curtain at an indoor event. The three organizers, all professors at Alabama’s Auburn University, say that including people from the liberal arts fields brought new perspectives to the discussions about AI. Well Red

For us as organizers, the experience was transformative. Hearing how AI affected people’s work, their children’s education, and their trust in information prompted us to consider dimensions we hadn’t fully grasped. Perhaps most striking was the gratitude participants expressed for being heard. It wasn’t about filling knowledge deficits; it was about mutual learning. The trust generated created a spillover effect, renewing faith that AI could serve the public interest if shaped through inclusive processes.

How to Start Your Own AI Café

The “deficit model” of science communication—where experts transmit knowledge to an uninformed public—has been discredited. Public resistance to emerging technologies reflects legitimate concerns about values, risks, and who controls decision-making. Our events point toward a better model.

We urge engineering and liberal arts departments, professional societies, and community organizations worldwide to organize dialogues similar to our AI Cafés.

We found that a few simple design choices made these conversations far more productive. Informal and welcoming spaces such as coffee shops, libraries, and community centers helped participants feel comfortable (and serving food and drinks helped too!). Starting with small-group discussions, where people talked with neighbors, produced more honest thinking and greater participation. Partnering with colleagues in the liberal arts brought additional perspectives on technology’s social dimensions. And by making a commitment to an ongoing series of events, we built trust.

Facilitation also matters. Rather than leading with technical expertise, we began with values: We asked what kind of world participants wanted, and how AI might help or hinder that vision. We used analogies to earlier technologies to help people situate their reactions, and grounded discussions in present realities, asking participants where they have encountered AI in their daily lives. We welcomed emotions constructively, transforming worry into problem-solving by asking questions like: “What would you do about that?”

Why Engineers Should Engage the Public

Professional ethics codes remain abstract unless grounded in dialogue with affected communities. Conversations about what “responsible AI” means will look different in São Paulo than in Seoul, in Vienna than in Nairobi. What makes the AI Café model portable is its general principles: informal settings, values-first questions, present-tense focus, genuine listening.

Without such engagement, ethical accountability quietly shifts to technical experts rather than remaining a shared public concern. If we let commercial interests define AI’s trajectory with minimal public input, it will only deepen divides and entrench inequities.

AI will continue advancing whether or not we have public trust. But AI shaped through dialogue with communities will look fundamentally different from AI developed solely to pursue what’s technically possible or commercially profitable.

The tools for this work aren’t technical; they’re social, requiring humility, patience, and genuine curiosity. The question isn’t whether AI will transform society. It’s whether that transformation will be done to people or with them. We believe scholars must choose the latter, and that starts with showing up in coffee shops and community centers to have conversations where we do less talking and more listening.

The future of AI depends on it.


Reference: https://ift.tt/xZsRvo2

Working With More Experienced Engineers Can Fast-Track Career Growth

This article is crossposted from IEEE Spectrum ’s careers newsletter. Sign up now to get insider tips, expert advice, and practical str...