Wednesday, April 15, 2026

Crypto Faces Increased Threat from Quantum Attacks




The race to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms that are commonly used today to protect data online—RSA and elliptic curve cryptography—are uncrackable by supercomputers, but a large enough quantum computer would make quick work of them. There are algorithms secure enough to be out of reach for both classical and future quantum machines, called post-quantum cryptography, but transitioning to these is a work in progress.

Late last month, the team at Google Quantum AI published a whitepaper that added significant urgency to this race. In it, the team showed that a quantum computer capable of posing a cryptographic threat could be roughly one-twentieth the size previously estimated. Such a machine is still far beyond today’s hardware: the largest quantum computers currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimates that roughly 500 times that many would be needed. Nonetheless, the finding shortens the timeline for switching over to post-quantum algorithms.

The news had a surprising beneficiary: the obscure cryptocurrency Algorand jumped 44% in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on its blockchain. We caught up with Chris Peikert, Algorand’s chief scientific officer and a professor of computer science and engineering at the University of Michigan, to understand how the announcement is affecting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. Peikert’s early work on lattice-based cryptography underlies most post-quantum security today.
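For readers unfamiliar with lattice-based cryptography, the snippet below is a toy illustration (not from the article) of the learning-with-errors problem that underlies most of it: recovering a secret vector from noisy linear equations modulo q. The parameters are far too small to be secure and are chosen only to show the idea.

```python
# Toy "learning with errors" (LWE) demo -- the hard problem behind most
# lattice-based, post-quantum cryptography. Illustrative only: parameters
# are far too small to be secure, and real schemes (e.g. ML-KEM) build many
# more ingredients on top of this core idea.
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 8, 16                  # modulus, secret dimension, number of samples

s = rng.integers(0, q, n)            # secret vector
A = rng.integers(0, q, (m, n))       # public random matrix
e = rng.integers(-2, 3, m)           # small "error" added to each sample
b = (A @ s + e) % q                  # public LWE samples (A, b)

# Recovering s from (A, b) would be simple linear algebra *without* the error.
# With the error, it becomes the LWE problem, believed hard even for quantum
# computers at cryptographic parameter sizes.
print("A[0] =", A[0], " b[0] =", b[0])
```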

IEEE Spectrum: What is the significance of this Google Quantum AI whitepaper?

Peikert: The upshot of this paper is that it shows that a quantum computer would be able to break some of the cryptography that is most widely used, especially in blockchains and cryptocurrencies, with far fewer resources than had previously been established. Those resources include the time that it would take to do so and the number of qubits (or quantum bits) that it would have to use.

This cryptography is central not just to cryptocurrencies but, more broadly, to security across the internet. It is used for secure connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s very prevalent and pervasive in all modern networks and protocols.

And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.

IEEE Spectrum: What do Google AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?

Peikert: There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was really the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a re-evaluation across the industry, and a moving up of the timeline for when quantum computers might be capable of breaking this cryptography.

When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.

IEEE Spectrum: What is your guess on if or when quantum computers will be able to break cryptography in the real world?

Peikert: Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also some last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. But then, as you get out to several years, like 5, 6, or 10 years, one has to seriously consider a probability, maybe 5% or 10% or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous.

The US government has put 2035 as its target for migrating all of the national security systems to post-quantum cryptography. That seems like a prudent date, given the timelines that it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, that everything still works properly. So, you know, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally, yesterday, or a few years ago, for that kind of transition.

IEEE Spectrum: Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?

Peikert: Cryptography is very hard to change. We’ve only had one or maybe two major transitions in cryptography since the early 1980s or late 1970s when the field first was invented. We don’t really have a systematic way of transitioning cryptography.

An additional challenge is that the performance tradeoffs are very different in post-quantum cryptography than they are in the legacy systems. Keys, ciphertexts, and digital signatures are all significantly larger in post-quantum cryptography, but the computations are typically faster. People have optimized cryptography for speed in the past, and post-quantum cryptography is now very fast, but the sizes of the keys are a challenge.

Especially in blockchain applications, like cryptocurrencies, space on the blockchain is at a premium. So it calls for a reevaluation, in many applications, of how we integrate the cryptography into the system, and that work is ongoing. And the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases we have rudimentary constructions of these fancy cryptographic tools from post-quantum mathematics, but they’re not nearly as mature and industry-ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of the very fancy cryptographic schemes that are used in cutting-edge applications.
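To make the size gap Peikert describes concrete, here is a small, illustrative comparison (not from the interview) between a widely used elliptic-curve signature scheme and one of the NIST post-quantum signature standards; the byte counts are the commonly cited parameter sizes and should be treated as approximate.

```python
# Illustrative size comparison (approximate, commonly cited figures):
# a legacy elliptic-curve signature scheme vs. the NIST post-quantum
# standard ML-DSA (formerly CRYSTALS-Dilithium) at security category 3.
sizes_in_bytes = {
    "Ed25519 (elliptic curve)": {"public key": 32,   "signature": 64},
    "ML-DSA-65 (post-quantum)": {"public key": 1952, "signature": 3309},
}

for scheme, parts in sizes_in_bytes.items():
    for part, size in parts.items():
        print(f"{scheme:26} {part:10} {size:>5} bytes")
```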

IEEE Spectrum: As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?

Peikert: My former PhD advisor is Silvio Micali, the inventor of Algorand. The system is very elegant. It is a very high-performing blockchain system: it uses very little energy, has fast transaction finalization, and has a number of other great features. Silvio appreciated that this quantum threat was real and was coming, and in 2021 the team approached me about helping to improve the Algorand protocol at a basic level to make it more post-quantum secure. That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms underlying the protocol.

IEEE Spectrum: What is the current status of post-quantum cryptography in Algorand, and blockchains in general?

Peikert: We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon.

Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called state proofs for that, which is a mixture of ordinary post-quantum cryptography and some fancier cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller number of signatures, while still being confident that the original signatures actually exist and are properly formed. We followed that with other papers and projects that add post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem.

It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.
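As a loose illustration of how a large batch of signatures can be digested into a small commitment, the sketch below builds a Merkle tree over a batch of signatures and publishes only its root; individual signatures can later be revealed with short authentication paths. This is a generic textbook construction, not Algorand’s actual state-proof design, which combines post-quantum signatures with more sophisticated proof techniques.

```python
# Generic sketch: commit to many signatures with a single Merkle root.
# NOT Algorand's state-proof construction -- just the basic idea of
# digesting a large set of items down to one short value.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Pretend these are thousands of individual signatures on the same block.
signatures = [f"signature-{i}".encode() for i in range(1000)]
print("32-byte digest standing in for all of them:", merkle_root(signatures).hex())
```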

IEEE Spectrum: In your view, will we adopt post-quantum cryptography before the risks actually catch up with us?

Peikert: I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision-making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have either good mitigations or full migrations in place.

But it’s also a point on the horizon that we don’t know exactly when it will come. So, there is the possibility that there is a huge breakthrough, and we have many fewer years than we might have hoped for, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.

Reference: https://ift.tt/0cX96Pi

Tuesday, April 14, 2026

OpenAI Engineer Helps Companies Attract Buyers and Boost Sales




Like many engineers, Sarang Gupta spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life.

When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long.

Sarang Gupta

Employer: OpenAI in San Francisco

Job: Data science staff member

Member grade: Senior member

Alma maters: The Hong Kong University of Science and Technology; Columbia

By age 11, his interests had expanded from nuts and bolts to software. He learned programming languages such as Basic and Logo and designed simple programs, including one that helped a local restaurant automate online ordering and billing.

Gupta, an IEEE senior member, brings his mix of curiosity, hands-on problem-solving, and a desire to make things work better to his role as a member of the data science staff at OpenAI in San Francisco. He works with the go-to-market (GTM) team to help businesses adopt ChatGPT and other products. He builds data-driven models and systems that support the sales and marketing divisions.

Gupta says he tries to ensure his work has an impact. When making decisions about his career, he says, he thinks about what AI solutions he can unlock to improve people’s lives.

“If I were to sum up my overall goal in one sentence,” he says, “it’s that I want AI’s benefits to reach as many people as possible.”

Pursuing engineering through a business lens

Gupta’s early interest in tinkering and programming led him to choose physics, chemistry, and math as his higher-level subjects at Chinmaya International Residential School, in Tamil Nadu, India. As part of the high school’s International Baccalaureate chapter, students select three subjects in which to specialize.

“I was interested in engineering, including the theoretical part of it,” Gupta says. “But I was always more interested in the applications: how to sell that technology or how it ties to the real world.”

After graduating in 2012, he moved overseas to attend the Hong Kong University of Science and Technology. The university offered a dual bachelor’s program that allowed him to earn one degree in industrial engineering and another in business management in just four years.

In his spare time, Gupta built a smartphone app that let students upload their class schedules and find classmates to eat lunch with. The app didn’t take off, he says, but he enjoyed developing it. He also launched Pulp Ads, a business that printed advertisements for student groups on tissues and paper napkins, which were distributed in the school’s cafeterias. He made some money, he says, but shuttered the business after about a year.

After graduating from the university in 2016, he decided to work in Hong Kong’s financial hub and joined Goldman Sachs as an analyst in the bank’s operations division.

From finance to process optimization at scale

After two parties agree on securities transactions, the bank’s operations division ensures that the trade details are recorded correctly, the securities and payments are ready to transfer, and the transaction settles accurately and on time.

As an analyst, Gupta was tasked with finding bottlenecks in the bank’s workflows and fixing them. He identified an opportunity to automate trade reconciliation, in which analysts manually compare data across spreadsheets and systems to make sure a transaction’s details are consistent. The process helps ensure financial transactions are recorded accurately and settled correctly.

Gupta built internal automation tools that pulled trade data from different systems, ran validation checks, and generated reports highlighting any discrepancies.

“Instead of analysts manually checking large datasets, the tools automatically flagged only the cases that required investigation,” he says. “This helped the team spend less time on repetitive verification tasks and more time resolving complex issues. It was also my first real exposure to how software and data systems could dramatically improve operational workflows.”
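The article doesn’t describe the tools in detail, so the snippet below is only a hypothetical sketch of that kind of reconciliation check: pull the same trades from two systems, join them on a trade ID, and surface only the rows whose details disagree. All sources and column names are made up.

```python
# Hypothetical reconciliation sketch: compare trade records from two systems
# and flag only the discrepancies for human review. Columns are invented.
import pandas as pd

system_a = pd.DataFrame({"trade_id": [1, 2, 3], "quantity": [100, 250, 75], "price": [10.0, 20.5, 31.2]})
system_b = pd.DataFrame({"trade_id": [1, 2, 3], "quantity": [100, 250, 80], "price": [10.0, 20.5, 31.2]})

merged = system_a.merge(system_b, on="trade_id", suffixes=("_a", "_b"))
mismatch = (merged["quantity_a"] != merged["quantity_b"]) | (merged["price_a"] != merged["price_b"])

# Analysts only ever see the flagged rows, not the full dataset.
print(merged[mismatch])
```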


The experience made him realize he wanted to work more deeply in technology and data-driven systems, he says. He decided to return to school in 2018 to study data science and AI, when the fields were just beginning to surge into broader awareness.

He discovered that Columbia offered a dedicated master’s degree program in data science with a focus on AI. After being accepted in 2019, he moved to New York City.

Throughout the program, he gravitated to the applied side of machine learning, taking courses in applied deep learning and neural networks.

One of his major academic highlights, he says, was a project he did in 2019 with the Brown Institute, a joint research lab between Columbia and Stanford focused on using technology to improve journalism. The team worked with The Philadelphia Inquirer to help the newsroom staff better understand their coverage from a geographic and social standpoint. The project highlighted “news deserts”—underserved communities for which the newspaper was not providing much coverage—so the publication could redirect its reporting resources.

To identify those areas, Gupta and his team built tools that extracted locations such as street names and neighborhoods from news articles and mapped them to visualize where most of the coverage was concentrated. The Inquirer implemented the tool in several ways including a new web page that aggregated stories about COVID-19 by county.
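The article doesn’t name the team’s software stack; as one plausible sketch of the location-extraction step, a pretrained named-entity recognizer such as spaCy’s can pull candidate place names out of story text, which could then be geocoded and mapped for coverage analysis.

```python
# Minimal sketch of place-name extraction with spaCy's pretrained NER model.
# This is an assumed approach for illustration, not the team's actual pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")   # small general-purpose English model

article = ("City council members met in Fishtown on Tuesday to discuss "
           "new bus routes connecting Kensington and Center City.")

doc = nlp(article)
places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC", "FAC")]
print(places)   # candidate locations to geocode and map
```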

“Journalism was an interesting problem set for me, because I really like to read the news every day,” Gupta says. “It was an opportunity to work with a real newsroom on a problem that felt really impactful for both the business and the local community.”

The GenAI inflection point

After earning his master’s degree in 2020, Gupta moved to San Francisco to join Asana, the company that developed the work management platform by the same name. He was drawn to the opportunity to work for a relatively small company where he could have end-to-end ownership of projects. He joined the organization as a product data scientist, focusing on A/B testing for new platform features.

Two years later, a new opportunity emerged: He was asked to lead the launch of Asana Intelligence, an internal machine learning team building AI-powered features into the company’s products.

“I felt I didn’t have enough experience to be the founding data scientist,” he says. “But I was also really interested in the space, and spinning up a whole machine learning program was an opportunity I couldn’t turn down.”

The Asana Intelligence team was given six months to build several machine learning–powered features to help customers work more efficiently. They included automatic summaries of project updates, insights about potential risks or delays, and recommendations for next steps.

The team met that goal and launched several other features including Smart Status, an AI tool that analyzes a project’s tasks, deadlines, and activity, then generates a status update.

“When you finally launch the thing you’ve been working on, and you see the usage go up, it’s exhilarating,” he says. “You feel like that’s what you were building toward: users actually seeing and benefiting from what you made.”

Gupta and his team also translated that first wave of work into reusable frameworks and documentation to make it easier to create machine learning features at Asana. He and his colleagues filed several U.S. patents.

Around the time he took on that role, OpenAI launched ChatGPT. The mainstreaming of generative AI and large language models shifted much of his work at Asana from model development to assessing LLMs.

OpenAI captured the attention of people around the world, including Gupta. In September 2025 he left Asana to join OpenAI’s data science team.

The transition has been both energizing and humbling, he says. At OpenAI, he works closely with the marketing team to help guide strategic decisions. His work focuses on developing models to understand the efficiency of different marketing channels, to measure what’s driving impact, and to help the company better reach and serve its customers.

“The pace is very different from my previous work. Things move quickly,” he says. “The industry is extremely competitive, and there’s a strong expectation to deliver fast. It’s been a great learning experience.”

Gupta says he plans to stay in the AI space. With technology evolving so rapidly, he says, he sees enormous potential for task automation across industries. AI has already transformed his core software engineering work, he says, and it’s helped him enhance areas that aren’t natural strengths.

“I’m not a good writer, and AI has been huge in helping me frame my words better and present my work more clearly,” he says. “Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”

Exploring IEEE publications and connections

Gupta has been an IEEE member since 2024, and he values the organization as both a technical resource and a professional network.

He regularly turns to IEEE publications and the IEEE Xplore Digital Library to read articles that keep him abreast of the evolution of AI, data science, and the engineering profession.

IEEE’s member directory tools are another valuable resource that he uses often, he says.

“It’s been a great way to connect with other engineers in the same or similar fields,” he says. “I love sharing and hearing about what folks are working on. It brings me outside of what I’m doing day to day.

“It inspires me, and it’s something I really enjoy and cherish.”

Reference: https://ift.tt/BHaYmR0

What It’s Like to Live With an Experimental Brain Implant




Scott Imbrie vividly remembers the first time he used a robotic arm to shake someone’s hand and felt the robotic limb as if it were his own. “I still get goosebumps when I think about that initial contact,” he says. “It’s just unexplainable.” The moment came courtesy of a brain implant: an array of electrodes that let him control a robotic arm and receive tactile sensations back to the brain.

Getting there took decades. In 1985, Imbrie woke up in the hospital after a car accident with a broken neck and a doctor telling him he’d never use his hands or legs again. His response was an expletive, he says—and a decision. “I’m not going to allow someone to tell me what I can and can’t do.” With the determination of a headstrong 22-year-old, Imbrie gradually regained the ability to walk and some limited arm movement. Aware of how unusual his recovery was, the Illinois native wanted to help others in similar situations and began looking for research projects related to spinal cord injuries. For decades, though, he wasn’t the right fit, until in 2020 he was finally accepted into a University of Chicago trial.

Scott Imbrie has shaken hands with a robotic arm controlled by a brain implant. The electrodes record neural signals that enable him to move the device and receive tactile feedback. Top: 60 Minutes/CBS News; Bottom: University of Chicago

Imbrie is part of a rarefied group: More people have gone to space than have received advanced brain-computer interfaces (BCIs) like his. But a growing number of companies are now attempting to move the devices out of neuroscience labs and into mainstream medical care, where they could help millions of people with paralysis and other neurological conditions. Some companies even hope that BCIs will eventually become a consumer technology.

None of that will be possible without people like Imbrie. He’s a member of the BCI Pioneers Coalition, an advocacy group founded in 2018 by Ian Burkhart, the first quadriplegic to regain hand movement using a brain implant.

That life-changing experience convinced Burkhart that BCIs will make the leap from lab to real world only if users help shape the technology by sharing their perspectives on what works, what doesn’t, and how the devices fit into daily life. The coalition aims to ensure that companies, clinicians, and regulators hear directly from trial participants.

Ian Burkhart founded the BCI Pioneers Coalition to ensure that companies developing brain implants hear directly from the people using them. Left: Andrew Spear/Redux; Right: Ian Burkhart

The group also serves as a peer-support network for trial participants. That’s crucial, because despite the steady drumbeat of miraculous results from BCI trials, receiving a brain implant comes with significant risks. Surgical complications, such as bleeding or infection in the brain, are possible. Even more concerning is the potential psychological toll if the implant fails to work as expected or if life-changing improvements are eventually withdrawn.

Researchers spell this out upfront, and many prospective participants are put off, says John Downey, an assistant professor of neurological surgery at the University of Chicago and the lead on Imbrie’s clinical trial. “I would say, the number of people I talk to about doing it is probably 10 to 20 times the number of people that actually end up doing it,” he says.

What Happens in a BCI Trial?

BCI pioneers arrive at their unique status via a number of paths, including spinal cord injuries, stroke-induced paralysis, and amyotrophic lateral sclerosis (ALS). The implants they receive come from Blackrock Neurotech, Neuralink, Synchron, and other companies, and are being tested for restoring limb function, controlling computers and robotic arms, and even restoring speech.

Many of the implants record signals from the motor cortex—the part of the brain that controls voluntary movements—to move external devices. Some others target the somatosensory cortex, which processes sensory signals from the body, including touch, pain, temperature, and limb position, to re-create tactile sensation.

BCI Designs Used by Today’s Pioneers


Diagram comparing three brain-computer interface implants from Blackrock, Neuralink, Synchron.

Ease of use depends heavily on the application. Restoring function to a user’s own limbs or controlling robotic arms involves the most difficult learning curve. In early sessions, participants watch a virtual arm reach for objects while they imagine or attempt the same movement. Researchers record related brain signals and use them to train “decoder” software, which translates neural activity into control signals for a robotic arm or stimulation patterns for the user’s nerves or muscles.
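As a highly simplified illustration of what such a decoder does (not the actual software used in these trials), the sketch below fits a linear ridge-regression map from binned firing rates on a 96-channel array to an intended 2-D velocity, then uses it to turn new neural activity into a control command. Decoders used in practice are often Kalman filters or neural networks and are recalibrated regularly.

```python
# Toy BCI decoder sketch: linear map from neural firing rates to 2-D velocity.
# Illustrative only; real decoders are more elaborate and recalibrated often.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_channels = 500, 96          # e.g. a 96-electrode array

true_map = rng.normal(size=(n_channels, 2))
firing_rates = rng.poisson(5, size=(n_samples, n_channels)).astype(float)
velocity = firing_rates @ true_map + rng.normal(0, 5, size=(n_samples, 2))

decoder = Ridge(alpha=1.0).fit(firing_rates, velocity)   # "calibration" session

new_rates = rng.poisson(5, size=(1, n_channels)).astype(float)
print("decoded velocity command:", decoder.predict(new_rates)[0])
```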

Paralyzed in a 2010 swimming accident, Burkhart took part in a trial conducted by Battelle Memorial Institute and Ohio State University from 2014 to 2021. His implant recorded signals from his motor cortex as he attempted to move his hand, and the system relayed those commands to electrodes in his arm that stimulated the muscles controlling his fingers.

Ian Burkhart, who is paralyzed from the chest down, received a brain implant that routed neural signals through a computer to his paralyzed muscles, enabling him to play a video game. Battelle

Getting the system to work seamlessly took time, says Burkhart, and initially required intense concentration. Eventually, he could shift his focus from each individual finger movement to the overall task, allowing him to swipe a credit card, pour from a bottle, and even play Guitar Hero.

Training a decoder is also not a one-and-done process. Systems must be regularly recalibrated to account for “neural drift”—the gradual shift in a person’s neural activity patterns over time. For complex tasks like robotic arm control, researchers may have to essentially train an entirely new decoder before each session, which can take up to an hour.

Austin Beggin says that testing a BCI is hard work, but he adds that moments like petting his dog make it all worth it. Daniel Lozada/The New York Times/Redux

Even after the system is ready, using the device can be taxing, says Austin Beggin, who was paralyzed in a swimming accident in 2015 and now participates in a Case Western Reserve University trial aimed at restoring hand movement. “The mental work of just trying to do something like shaking hands or feeding yourself is 100-fold versus you guys that don’t even think about it,” he says.

It’s also a serious time commitment. Beggin travels more than 2 hours from his home in Lima, Ohio, to Cleveland for two weeks every month to take part in experiments. All the equipment is set up in the house he stays in, and he typically works with the researchers for 3 to 4 hours a day. The majority of the experiments are not actually task-focused, he says, and instead are aimed at adjusting the control software or better understanding his neural responses to different stimuli.

But the BCI users say the hard work is worth it. Beyond the hope of restoring lost function, many feel a strong moral obligation to advance a technology that could help others. Beggin compares the pioneers to the early astronauts who laid the groundwork for the lunar landings. “We’re some of the first astronauts just to get shot up for a couple of hours and come back down to earth,” he says.

The Emotional Impact of BCIs

Speak to BCI early adopters and a pattern emerges: The biggest benefits are often more emotional than practical. Using a robotic arm to feed oneself or control a computer is clearly useful, but many pioneers say the most meaningful moments are the ones the experiment wasn’t even trying to produce. Beggin counts shaking his parents’ hands for the first time since his injury and stroking his pet dachshund as among his favorite moments. “That stuff is absolutely incredible,” he says.

Neuralink participant Alex Conley, who broke his neck in a car accident in 2021, uses his implant to control both a robotic arm and computers, enabling him to open doors, feed himself, and handle a smartphone. But he says the biggest boost has come from using computer-aided design software.

A former mechanic, Conley began using the software within days of receiving his implant to design parts that could be fabricated on a 3D printer. He has designed everything from replacement parts for his uncle’s power tools to bumpers for his brother-in-law’s truck. “I was a very big problem solver before my accident, I was able to fix people’s things,” he says. “This gives me that same little burst of joy.”

BCI user Nathan Copeland used a robotic arm to get a fist bump from then-President Barack Obama in 2016. Jim Watson/AFP/Getty Images

The outside world often underestimates those little wins, says Nathan Copeland, who holds the record for the longest functional brain implant. After breaking his neck in a car accident in 2004, he joined a University of Pittsburgh BCI trial in 2015 and has since used the device to control both computers and a robotic arm.

After he uploaded a video to Reddit of himself playing Final Fantasy XIV, one commenter criticized him for not using his device for more practical tasks. Copeland says people don’t understand that those lighthearted activities also matter. “A lot of tasks that people think are mundane or frivolous are probably the tasks that have the most impact on someone that can’t do them,” he says. “Agency and freedom of expression, I think, are the things that impact a person’s life the most.”

Nathan Copeland plays Final Fantasy XIV using his brain implant to control the game character.

When Brain Implants Become Life-Changing

This perspective resonates with Neuralink’s first user, Noland Arbaugh—paralyzed from the neck down after a swimming accident in 2016. After receiving his implant in January 2024, he was able to control a cursor within minutes of the device being switched on. A few days later, the engineers let him play the video game Civilization VI, and the technology’s potential suddenly felt real. “I played it for 8 hours or 12 hours straight,” he says. “It made me feel so independent and so free.”

Before receiving his Neuralink implant, Noland Arbaugh used mouth-operated devices to control a computer. He says the BCI is more reliable and enables him to do many more things on his own. Rebecca Noble/The New York Times/Redux

But the technology is also providing more practical benefits. Before his implant, Arbaugh relied on a mouth-held typing stick and a mouth-controlled joystick called a quadstick, which uses sip-or-puff sensors to issue commands. But the fiddliness of this equipment required constant caregiver support. The Neuralink implant has dramatically increased the number of things he can do independently. He says he finds great value in not needing his family “to come in and help me 100 times a day.”

For Casey Harrell, the technology has been even more transformative. Diagnosed with ALS in 2020, the climate activist had just welcomed a baby daughter and was in the midst of a major campaign, pressuring a financial firm to divest from companies that had poor environmental records.

Casey Harrell was able to communicate again within 30 minutes of his BCI being switched on. The device translates his neural signals quickly enough for him to hold conversations. Ian Bates/The New York Times/Redux

“Every morning we’d wake up and there’d be a new thing he couldn’t do, a new part of his body that didn’t work,” says his wife, Levana Saxon. Most alarming was his rapid loss of speech, which, among other things, left him unable to indicate when he was in pain. Then a relative alerted him to a clinical trial at the University of California, Davis, using BCIs to restore speech. He immediately signed up.

The device, implanted in July 2023, records from the brain region that controls muscles involved in talking and translates these signals into instructions for a voice synthesizer. Within 30 minutes of it being switched on, Harrell could communicate again. “I was absolutely overwhelmed with the thought of how this would impact my life and allow me to talk to my family and friends and better interact with my daughter,” he says. “It just was so overwhelming that I began to cry.”

While earlier assistive technology limited him to short, direct commands, Harrell says the BCI is fast enough that he can hold a proper conversation, and he’s been able to resume work part-time.

What’s Holding BCI Technology Back?

BCI technology still has limits. Most trial participants using Blackrock Neurotech implants can operate their devices only in the lab because the systems rely on wired connections and racks of computer hardware. Some users, including Copeland and Harrell, have had the equipment installed at home, but they still can’t leave the house with it. “That would be a big unlock if I was able to do so,” says Harrell.

The academic nature of many trials creates additional constraints. Pressure to publish and secure funding pushes researchers to demonstrate peak performance on narrow tasks rather than build more versatile and reliable systems, says Mariska Vansteensel, who runs BCI studies at the University Medical Center Utrecht in the Netherlands. She says that investigating the technology’s limits or repeating an experiment in new patients is “less rewarded in terms of funding.”

In a clinical trial, Scott Imbrie uses a BCI to control a robotic arm, using signals from his motor cortex to make it move a block. University of Chicago

One of Imbrie’s biggest frustrations is the rapid turnover in experiments. Just as he begins to get proficient at one task, he’s asked to switch to the next task. Study designs also mean that much of the users’ time is spent on mundane tasks required to fine-tune the system.

Perhaps the biggest issue is that trials are often time-limited. That’s partly because scar tissue from the body’s immune response to the implant can gradually degrade signal quality. But constraints on funding and researcher availability can also make it impossible for users to keep using their BCIs after their trials end, even when the technology is still functional.

Ian Burkhart’s BCI enables him to grasp objects, pour from a bottle, and swipe a credit card.

Burkhart has firsthand experience. His trial was extended, but the implant was eventually removed after he got an infection. He always knew the trial would end, but it was nonetheless challenging. “It was a little bit of a tease where I got to see the capability of the restoration of function,” he says. “Now I’m just back to where I was.”

The Push to Commercialize BCIs

Progress is being made in transitioning the technology from experimental research devices to fully fledged medical products that could help users in their everyday lives. Most academic BCI research has relied on Blackrock Neurotech’s Utah Arrays, which typically feature 96 needlelike electrodes that penetrate the brain’s surface. The implant is connected to a skull-mounted pedestal that’s wired to external hardware. But some of the newer devices are sleeker and less invasive.

Neuralink’s implant houses its electronics and rechargeable battery in a coin-size unit connected to flexible electrode threads inserted into the brain by a robotic “sewing machine.” The implant, which is roughly the size of a quarter or a euro, is mounted in a hole cut into the skull and charges and transfers data wirelessly. Synchron takes a different approach, threading a stent-like implant through blood vessels into the motor cortex. This “stentrode” connects by wire to a unit in the chest that powers the implant and transmits data wirelessly.

Rodney Gorham can use his Synchron implant to control not just a computer, but also smart devices in his home like an air conditioner, fan, and smart speaker. Rodney Decker

Neuralink’s decoder runs on a laptop, while Synchron deploys a smartphone-size signal processing unit as a wireless bridge to the user’s devices, which allows them to use their implants at home and on the move. The companies have also developed adaptive decoders that use machine learning to adjust to neural drift on the fly, reducing the need for recalibration.

Making these devices truly user-friendly will require technology that can interpret user context, says Kurt Haggstrom, Synchron’s chief commercial officer—including mood, attention levels, and environmental factors like background noise and location. This approach will require AI that analyzes neural signals alongside other data streams such as audio and visual input.

Last year, Synchron took a first step by pairing its implant with an Apple Vision Pro headset. When trial participant Rodney Gorham looked at devices such as a fan, a smart speaker, and an air conditioner, the headset overlaid a menu that enabled him to adjust the device’s settings using his implant.

Rodney Gorham uses his Synchron implant to turn on music, feed his dog, and more. Synchron BCI

Another way to reduce cognitive load is to detect high-order signals of intent in neural data rather than low-level motor commands, says Florian Solzbacher, cofounder and chief scientific officer of Blackrock Neurotech. For instance, rather than manually navigating to an email app and typing, the user could simply think about sending an email and the system would then open it with content already prepopulated, he says.

Durability may prove a thornier problem to solve, UChicago’s Downey says. Current implants last around a decade—well short of a lifelong solution. And with limited real estate in the brain, replacement is only possible once or twice, he says.

Rapid technological progress also raises difficult decisions about whether to get a BCI implant now or wait for a more advanced device. This was a major concern for Gorham’s wife, Caroline. “I was hesitant. I didn’t want him to go on the trial but maybe a future one,” she says. “It was my fear of missing out on future upgrades.”

Will Brain Implants Ever Become Consumer Tech?

Some executives have raised the prospect of BCIs eventually becoming consumer devices. Neuralink founder Elon Musk has been particularly vocal, suggesting that the company’s implants could replace smartphones, let people save and replay memories, or even achieve “symbiosis” with AI.

This kind of talk inspires mixed feelings in users. The hype brings visibility and funding, says Beggin, but could divert attention from medical users’ needs. Copeland worries that consumer branding could strip the devices of insurance coverage and that rising demand may make it harder to access qualified surgeons.

Noland Arbaugh, the first recipient of Neuralink’s BCI, says that using the implant to control a computer made him feel independent and free. Steve Craft/Guardian/eyevine/Redux

There are also concerns about how data collected by BCI companies will be handled if the devices go mainstream. As a trial participant, Arbaugh says he’s comfortable signing away his data rights to advance the technology, but he thinks stronger legal protections will be needed in the future. “Does that data still belong to Neuralink? Does it belong to each person? And can that data be sold?” he asks.

Blackrock’s Solzbacher says the company remains focused on the medical applications of the technology. But he also believes it is building a “universal interface to any kind of a computerized system” that may have broader applications in the future. And he says the company owes it to users not to limit them to a bare-bones assistive technology. “Why would somebody who’s got a medical condition want to get less than something that somebody who’s able-bodied would possibly also take?” says Solzbacher.

The ever-optimistic Imbrie heartily agrees. Medical devices are invariably expensive, he says, but targeting consumer applications could push companies to keep devices simple and affordable while continuing to add features. “I truly believe that making it a consumer-available product will just enhance the product’s capabilities for the medical field,” he says.

Imbrie is on a mission to refocus the conversation around BCIs on the positives. While concerns about risks are valid, he worries that the alarming language often used to describe brain implants discourages people from volunteering for trials that could help them.

“I remember laying there in the bed and not being able to move,” he says, “and it was really dehumanizing having to ask someone to do everything for you. As humans, we want to be independent.”

Reference: https://ift.tt/lpzxdrY

Monday, April 13, 2026

Squishy Photonic Switches Promise Fast Low Power Logic




Photonic devices, which rely on light instead of electricity, have the potential to be faster and more energy efficient than today’s electronics. They also present a unique opportunity to develop devices using soft materials, such as polymers and gels, which are poor conductors of electricity, but are easier to manufacture and more environmentally friendly. The development of these potentially squishy, flexible photonics, however, requires the ability to manipulate light using only light, not electricity.

In soft matter, that’s been done primarily by changing the physical properties of optical materials or by using intense light pulses to change the direction of light. Now, an international team of scientists has developed a new way of controlling light with light using very low light intensities and without changing any of the physical properties of materials.

Igor Muševič, a professor of physics at the University of Ljubljana who led the project, says that he first got the idea for the device while at a conference in San Francisco, listening to a talk by Stefan W. Hell about stimulated emission depletion (STED) microscopy. The imaging technique, for which Hell won a Nobel Prize in Chemistry in 2014, uses two lasers to produce an extremely small light beam to scan objects. “When I saw this, I said, this is manipulation of light by light, right?” Muševič recalls.

His realization inspired a device into which a laser pulse is fired. Whether or not this beam makes it out of the device depends on whether or not a second pulse is fired less than a nanosecond afterwards.

A liquid crystal photonic switch

The device consists of a spherical bead of liquid crystal, held in shape by its elastic material properties and the forces between its molecules, infused with a fluorescent dye and trapped between four upright cone-shaped polymer structures that guide light in and out of the device. When a laser pulse is sent through one of the four polymer waveguides, the light is quickly transferred into the liquid crystal, exciting the fluorescent dye. In a process known as whispering gallery mode resonance, the photons inside the liquid crystal are reflected back inside each time they hit the droplet’s spherical surface. The result is that light circulates inside the cavity until it is eventually reflected into one of the waveguides, which then emits the photons as a laser beam.
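For context (this relation is not spelled out in the article), whispering gallery modes occur roughly at wavelengths for which the optical path around the droplet closes on itself a whole number of times:

```latex
% Approximate whispering-gallery resonance condition for a droplet of
% radius R and refractive index n: the round-trip optical path must hold
% an integer number m of wavelengths.
\[
  2\pi R \, n \;\approx\; m \, \lambda_m , \qquad m = 1, 2, 3, \ldots
\]
```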

The team realized that sending a second laser pulse of a different color into the waveguides before the liquid crystal started emitting light from the first laser pulse resulted in stimulated emission of the excited dye molecules. The photons from the second laser pulse, which had to be fired into the waveguides after the first laser pulse, interact with the already-excited dye molecules. The interaction causes the dye to emit photons identical to those in the second pulse while depleting the energy from the first pulse. The second laser beam, called the STED beam, is amplified by the process, while the light from the first pulse is so diminished that it isn’t emitted at all. Because the outcome of the first laser pulse could be controlled using the second laser pulse, the team had successfully demonstrated the control of light by light.

According to the Ljubljana team, the energy efficiency of the liquid crystal approach is much better than that of previous soft-matter techniques, which typically involved using intense light fields to change material properties of the soft matter, such as the index of refraction. The new method reduces the energy needed by more than a factor of a hundred. Because the STED laser pulse circulates repeatedly in the crystal, a single photon can deplete many dye molecules of the energy from the first laser pulse.

Miha Ravnik, a theoretical physicist also at the University of Ljubljana who worked on the project, explains that control of light by light is essential in soft-matter photonic logic gates. “You can very much control when [light] is generated and in which direction,” Ravnik says of the light shined into the polymer waveguides. “And this gives you, then, this capability that you create logical operations with light.”

Aside from its potential in photonic logical circuits, the team’s approach presents several technical advantages over photonics made from silicon or other hard materials, Muševič says. For example, using soft matter greatly simplifies the manufacturing process. The liquid crystal in the team’s device can be inserted in less than a second, but manufacturing a similar structure with hard materials is difficult. Additionally, soft matter devices can be manufactured at much lower temperatures than silicon and other hard materials. Muševič also points out that soft matter presents an opportunity to experiment with the geometry of the device. With liquid crystals “you can make many different kinds of cavities,” says Muševič. “You have, I would say, a lot of engineering space.”

Ravnik is excited for the potential of the team’s breakthrough, particularly as a step towards photonic computing and even photonic neural networks. But, he recognizes that these developments are far down the line. “There’s no way this technology can compete with current neural network implementation at all,” he admits. Still, the possibilities are tantalizing. “The energy losses are predicted to be extremely low, the speeds for calculation extremely high.”

Reference: https://ift.tt/XsLyJ4d

Friday, April 10, 2026

Working With More Experienced Engineers Can Fast-Track Career Growth




This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

The Worst Engineer in the Room

My salary doubled. My confidence tanked.

That’s what happened when I joined a five-person startup in San Francisco in my third year as a software engineer. Two of the founders had been recognized in Forbes 30 Under 30. The team was exceptional by any measure.

On my first day, someone made a joke about Dijkstra’s algorithm. Everyone laughed. I smiled along, then looked it up afterward so I could understand why it was funny. Dijkstra’s algorithm finds the shortest path between two points—the math underlying GPS navigation. It’s a foundational concept in virtually every formal computer science curriculum. I had never encountered it.
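For reference, a minimal version of Dijkstra’s algorithm looks something like the sketch below: repeatedly settle the closest unsettled node using a priority queue. The example graph is made up.

```python
# Minimal Dijkstra's algorithm: shortest distances from a source node in a
# graph with non-negative edge weights, using a priority queue (min-heap).
import heapq

def dijkstra(graph: dict[str, list[tuple[str, float]]], source: str) -> dict[str, float]:
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": [("D", 5)]}
print(dijkstra(roads, "A"))   # {'A': 0.0, 'B': 3.0, 'C': 1.0, 'D': 8.0}
```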

That moment reflected a broader pattern. Conversations about system design and tradeoffs often felt just out of reach. I could follow parts of them, but not enough to contribute meaningfully.

I was mostly self-taught. Wide coverage, shallow roots. The engineers around me had roots. You could feel it in how they reasoned through problems, how they talked about tradeoffs, how they debugged with patience instead of pure panic.

The Advice That Sounds Good Until You’re Living It

You’ve heard the phrase: “If you’re the smartest person in the room, you’re in the wrong room.”

It sounds aspirational. What nobody tells you is what it actually feels like to be in that room. It feels like barely following system design conversations. Like nodding along to discussions you can only partially decode. Like shipping solutions through trial and error and hoping nobody looks too closely.

Being the weakest engineer in the room is genuinely uncomfortable. It surfaces every gap. And if you’re not careful, it pushes you in exactly the wrong direction.

My instinct was to make myself smaller. On a team of five, every voice mattered. I stopped offering mine. I rushed toward working solutions without real understanding, hoping velocity would compensate for depth.

I was working harder and, at the same time, I was not improving.

The turning point came when one of the most senior engineers left. Before departing, he told me it was difficult to work with me because I lacked foundational programming knowledge, listing out the concepts he saw me struggle with.

For the first time, what had felt like vague inadequacy became something specific.

What the Cliché Misses

Proximity to stronger engineers is not sufficient on its own. You won’t absorb their skill through osmosis. The engineers who thrive when they’re outmatched are not the ones who wait for confidence to arrive. They treat the discomfort as diagnostic information.

What can they answer that I can’t? What do they see in a system that I’m missing?

I defined a clear picture of the engineer I wanted to become and compared it to where I was. I wrote down what I did not know. I identified how I would close each gap with books, tutorials and small projects. I asked for recommendations from the same engineer who gave me the hard feedback.

I figured out the gaps. Then the bridges. Then I worked through each of them.

Over time, conversations became clearer. Debugging became more systematic. I started contributing meaningfully rather than just executing tasks.

The Other Room Nobody Warns You About

There’s a less-obvious version of this same problem: when you’re the strongest engineer in the room.

It can feel rewarding. Less friction, more validation. But there’s also less growth. When you’re at the ceiling, there’s no external pressure to raise your own floor. The feedback loops that sharpen judgment go quiet. Some engineers spend years there without noticing. They’re good. They’re comfortable. They stop getting better.

Both rooms carry risk. One threatens your confidence. The other threatens your trajectory.

Being the weakest engineer in a strong room is an advantage, but only if you treat it like one. It gives you a clear benchmark. But the room doesn’t do the work for you. You have to name the gaps, build a plan, and follow through.

And if you ever find yourself in the other room, where you’re clearly the strongest, pay attention to how long you’ve been there.

Both rooms are trying to tell you something.

—Brian

Are U.S. Engineering Ph.D. Programs Losing Students?

Not every engineer has a doctorate, but Ph.D. engineers are an essential part of the workforce, researching and designing tomorrow’s high-tech products and systems. In the United States, early signs are emerging that Ph.D. programs in electrical engineering and related fields may be shrinking. Political and economic uncertainty means some universities are now seeing smaller applicant pools and graduate cohorts.

Read more here.

What Happens When You Host an AI Cafe

Last November, three professors at Auburn University in Ala. hosted a gathering at a coffee shop to confront students’ concerns about AI. The event, which they call an “AI Café,” was meant to create an environment “where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.” In a guest article, they share what they learned at the event and tips for starting your own AI Café.

Read more here.

What Is Inference Engineering?

Inference, the process of running a trained AI model on new data, is increasingly becoming a focus in the world of AI engineering. The growth of open LLMs means that more engineers can now tweak the models to perform better at inference. Given this trend, a recent issue of the Substack “The Pragmatic Engineer” does a deep dive on inference engineering—what it is, when it’s needed, and how to do it.

Read more here.

Reference: https://ift.tt/zqeBk96

Thursday, April 9, 2026

“Negative” views of Broadcom driving thousands of VMware migrations, rival says


Amid customer dissatisfaction around Broadcom's VMware takeover, rivals have been trying to lure customers from the leading virtualization firm. One of VMware's biggest competitors, Nutanix, claims to have swiped tens of thousands of VMware customers.

Speaking at a press briefing at Nutanix’s .NEXT conference in Chicago this week, Nutanix CEO Rajiv Ramaswami said that “about 30,000 customers” have migrated from VMware to the rival platform, pointing to customer disapproval over Broadcom’s VMware strategy, SDxCentral, a London-based IT publication, reported today.

“I think there's no doubt that the customer sentiment continues to be negative about Broadcom,” Ramaswami said, per SDxCentral.


Reference: https://ift.tt/8TcopaW

Remembering Gus Gaynor: A Devoted IEEE Volunteer




Gerard “Gus” Gaynor, a long-serving IEEE volunteer and former engineering director at 3M, died on 9 March. The IEEE Life Fellow was 104.

Readers of The Institute might remember Gus from his 2022 profile: “From Fixing Farm Equipment to Becoming a Director at 3M.” Just last year, he and I coauthored two articles. One discusses how to leverage relationships to boost your career growth. The other weighs the pros and cons of pursuing a technical or managerial career path. He was 103 years old then. How many IEEE members can claim a centenarian coauthor?

I first met Gus in 2009 at the IEEE Technical Activities Board (TAB) meeting in San Juan, Puerto Rico. We sat together in the airplane on our way back to Minneapolis, our hometown. At home I told many of my friends about the remarkable person—who was 87 years young at the time—with whom I chatted during our six-hour flight.

A decade later, he and I met for lunch in Minneapolis. He drove himself to the restaurant, just asking for a hand to navigate the snowy sidewalk.

A dedicated IEEE volunteer

Gus’s involvement with IEEE predates the organization. He joined the Institute of Radio Engineers, a predecessor society, as a student member in 1942. Twenty years later he became an active IEEE volunteer.

He served on the TAB’s finance committee and the Publication Services and Products Board. He was president of the IEEE Engineering Management Society (now the Technology and Engineering Management Society), and he was the Technology Management Council’s first president. He was the founding editor of IEEE-USA’s online magazine Today’s Engineer, which reported on government legislation and issues affecting U.S. members’ careers. The magazine is now available as the e-newsletter IEEE-USA InSight.

He authored several books on technology management, published by IEEE-USA.

IEEE Life Fellow Gerard “Gus” Gaynor died on 9 March. The Gaynor Family

Most recently, after the formation of TEMS in 2015, he became an active member of its executive committee. He served two terms as vice president of publications.

At 100 years old, he led the launch of a new publication, TEMS Leadership Briefs, a novel short-format open-access publication aimed at technology leaders.

Gus, a former member of The Institute’s editorial advisory board, also worked with Kathy Pretz, The Institute’s editor in chief, to start an ongoing series of TEMS-sponsored career-interest articles. He coauthored several of them.

Throughout his 64 years as an IEEE volunteer, he received several honors. They include IEEE EMS’s Engineering Manager of the Year Award, the IEEE TEMS Career Achievement Award, and the IEEE-USA McClure Citation of Honor. In 2014 he was inducted into the IEEE Technical Activities Board Hall of Honor.

A 25-year career at 3M

Gus received a degree in electrical engineering in 1950 from the University of Michigan in Ann Arbor. He worked for several companies including Automatic Electric (now part of Nokia) and Johnson Farebox (now part of Genfare), before joining 3M in 1962.

During his successful 25-year career at 3M, he served as chief engineer for a division in Italy, established the innovation department, and led the design and installation of the company’s first computerized manufacturing facilities. He retired as director of engineering in 1987.

Last year, IEEE Life Fellow Michael Condry, a former TEMS president, organized a Zoom call with Gus and other leaders of the society to celebrate Gus’s 104th birthday. Gus looked well and was his usual upbeat self, telling everyone: “I’m good. Everything’s well. I can’t complain.”

Gus was married to Shirley Margaret Karrels Gaynor, who passed away in 2018. He lives on in the hearts and minds of his seven children, seven grandchildren, two great-grandchildren, and innumerable friends and IEEE colleagues.

Reference: https://ift.tt/VQtCpkT
