Wednesday, April 22, 2026

Building an Interregional Transmission Overlay for a Resilient U.S. Grid




Examining how a U.S. Interregional Transmission Overlay could address aging grid infrastructure, surging demand, and renewable integration challenges.

What Attendees Will Learn

  1. Why the current regional grid structure is approaching its limits — Explore how coal-fired generation retirements, renewable integration, aging infrastructure past its 50-year lifespan, and exponential large-load growth from data centers and manufacturing reshoring are creating unprecedented pressure on the U.S. transmission system.
  2. How an Interregional Transmission Overlay (ITO) would work — Understand the architecture of a high-capacity overlay using HVDC and 765 kV EHVAC technologies, how it would bridge the East/West/ERCOT seams, integrate renewable generation from resource-rich regions to demand centers, and potentially reduce electric system costs by hundreds of billions of dollars through 2050.
  3. The five major challenges facing interregional transmission — Examine the obstacles of cross-state planning coordination, investment barriers including permitting and cost allocation, energy market harmonization across regions, supply chain limitations for specialized equipment, and political and regulatory uncertainties that must be navigated.
  4. Actionable steps to begin building the ITO roadmap — Learn how utilities and developers can identify strategic corridors, form multi-stakeholder oversight entities, coordinate regional studies, secure state and federal support through FERC Order 1920 and DOE programs, and develop equitable cost allocation frameworks to move from vision to implementation.
Reference: https://ift.tt/efY0i3W

Tuesday, April 21, 2026

What to Consider Before You Accept a Management Role




This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

The Individual Contributor–Manager Fork: It’s Not a Promotion. It’s a Profession Change.

When I was promoted to engineering manager of a mid-sized team at Clorox, I thought I had made it.

More money. More stock. More visibility. More proximity to senior leadership. From the outside, and on paper, it was clearly a promotion.

I had often heard the phrase, “Management isn’t a promotion. It’s a job switch.” I brushed it off as cliché advice engineers tell each other to sound wise.

It turns out both things were true. It was a promotion. It was also an entirely different job.

And I was nowhere near ready for what that meant.

A Shift in Priorities

There’s surprisingly little training for new managers. As engineers, we’re highly technical and used to mastering complex systems. Many of us assume managing people will be easier than distributed systems. Or we assume it’s just “more meetings.”

Both assumptions are wrong.

Yes, I had more meetings. But what changed most wasn’t my calendar; it was how my impact was measured. As an individual contributor, my output was visible. Code shipped. Features delivered. Bugs fixed.

As a manager, my impact became indirect. It flowed through other people.

That shift was disorienting.

So I fell back into my comfort zone. I started writing more code. I tried to be the strongest engineer on the team. It felt productive and measurable.

It was also a mistake.

By trying to be the number one engineer, I was neglecting my actual job. I wasn’t supporting senior engineers. I wasn’t unblocking systemic problems. I wasn’t building career paths. I was competing with the very people I was supposed to enable.

Management is about amplification.

Learning to Redefine Impact

The turning point came when I began each week with a simple question:

What is the single most impactful thing I can do right now?

Often, it wasn’t code. It was writing a document that clarified direction. It was fixing a broken process with a single point of failure. It was redistributing ownership so that knowledge wasn’t concentrated in one person.

I started deliberately removing myself from implementation work. I committed to writing almost no code. That forced trust. It also revealed gaps in the system that I could address at the right level: through coaching, documentation, hiring, or process changes.

Another major shift was taking one-on-one meetings seriously.

Many engineers dislike one-on-ones. They can feel awkward or devolve into status updates. I scheduled them every other week and approached them with a mix of tactical alignment and human check-in.

I rarely started with engineering questions. Instead:

  • Are you happy with the work you’re doing?
  • Do you feel stretched or stagnant?
  • What’s frustrating you right now?

Burnout doesn’t show up in Jira tickets. Neither does quiet disengagement.

Those conversations helped me anticipate turnover, redistribute workload, and build trust.

I also spent more time thinking about career ladders. Was I giving my team the kind of work that would help them grow? Was I hoarding high-visibility projects? Was I clear about what senior-level impact looked like?

That work felt less tangible than code, but it moved the needle far more.

Why I Went Back to IC

Ultimately, I returned to the individual contributor track.

Part of it was practical: I was laid off from my management role, and the market rewarded senior IC roles more strongly at the time. But if I’m honest, the deeper reason was simpler.

I love writing code.

I enjoy improving systems and helping people, but the part of my day that energized me most was still building. Management required relinquishing that. You can’t be absorbed in technical implementation and deeply people-focused at the same time. Something has to give.

Personally, I don’t need to climb the corporate ladder to feel successful. And you might not have to either. Many organizations offer technical leadership tracks that are truly at parity with management in salary bands. Staff and principal engineers steer strategy without managing people.

If you want to remain deeply technical, you should think very carefully before moving into people management. It requires surrendering control over implementation and focusing on alignment, growth, and long-range planning. If you don’t genuinely care about those things, you won’t just be unhappy, you’ll make your team unhappy.

A Simple Test Before You Choose

Before taking a management role, ask yourself:

  • Do I get energy from solving people-problems every day?
  • Am I comfortable measuring impact indirectly?
  • Would I be satisfied if I rarely wrote production code again?
  • Do I want leverage or craft?

There’s no right answer.

The IC/manager fork isn’t about prestige. It’s about what kind of work you want your days to consist of.

Choose based on energy, not ego.

—Brian

12 Graphs That Explain the State of AI in 2026

Stanford University’s AI Index is out for 2026, tracking trends and notable developments in artificial intelligence. This year, China has taken a clear lead in AI model releases and industrial robotics. AI models are rapidly saturating benchmarks and being trained with ever more compute, but public trust in AI and confidence in government regulation of it remain mixed.

Read more here.

AI Models Trained on Physics Are Changing Engineering

Much like large language models have learned from existing texts, new AI physics models are being trained on simulation results. This results in “large physics models” that can simulate situations in transportation, aerospace, or semiconductor engineering much faster than traditional physics simulations. Using new AI physics models “can be anywhere between 10,000 to close to a million times faster,” says Jacomo Corbo, CEO and co-founder of PhysicsX.

Read more here.

Temple University Student Highlights IEEE Membership Perks

Kyle McGinley is an IEEE Student Member pursuing a bachelor’s degree in electrical and computer engineering at Temple University. Joining IEEE helped him to develop the skills necessary for real-world teams. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff,” he says.

Read more here.

Reference: https://ift.tt/fL3JnGY

The Forgotten History of Hershey’s Electric Railway in Cuba




Why does a chocolatier build a railroad? For Milton S. Hershey, it was a logical response to a sugar shortage brought on by World War I. The Hershey Chocolate Co. was by then a chocolate-making powerhouse, having refined the automation and mass production of its products, including the eponymous Hershey’s Milk Chocolate Bar and the bite-size Hershey’s Kiss. To satisfy its many customers, the company needed a steady supply of sugar. Plus, it wanted a way to circumvent the American Sugar Refining Co., also known as the Sugar Trust, which had a virtual monopoly on sugar processing in the United States.

Why Did Hershey Build an Electric Railroad in Cuba?

Beginning in 1916, Hershey looked to Cuba to secure his sugar supply. According to historian Thomas R. Winpenny, the chocolate magnate had a “personal infatuation” with the lush, beautiful island. What’s more, U.S. business interests there were protected by a treaty known as the Platt Amendment, which made Cuba a satellite state of the United States.

Like many industrialists of the day, Hershey believed in vertical integration, and the company’s Cuban operation eventually expanded to include five sugar plantations, five modern sugar mills, a refinery, several company towns, and an oil-fired power plant with three substations to run it all.

A 1943 rail pass entitled the holder to travel on all ordinary passenger trains of the Hershey Electric Railway. Hershey Community Archives

The company also built a railroad. To maximize the sugar yield, cane needed to be ground promptly after being cut; the rail system offered an efficient means of transporting it to the mills and ensured that they operated around the clock during the harvest. By 1920, one of Hershey’s three main sites was processing 135,000 tonnes of cane, yielding 14.4 million kilograms of sugar.

Initially, the Hershey Cuban Railway consisted of a single 56-kilometer-long standard-gauge track on which ran seven steam locomotives that burned coal or oil. Because of the high cost of the imported fuel and the inefficiency of the locomotives, Hershey began electrifying the line in 1920. It was the first electrified rail line in Cuba, though lines in Europe and the United States were already being electrified.

In addition to powering the various Hershey entities, the generating station supplied Matanzas and the smaller towns with electricity. F.W. Peters of General Electric’s Railway and Traction Engineering Department published a detailed account of the system in the April 1920 General Electric Review.

Hershey’s Company Towns

The company town of Central Hershey became the headquarters for Hershey’s Cuba operations. (“Central” is the Cuban term for a mill and the surrounding settlement.) It sat on a plateau overlooking the port of Santa Cruz del Norte, about halfway between Havana and Matanzas in the heart of Cuba’s sugarcane region.

Hershey imported the industrial utopian model he had established in Hershey, Penn., which was itself inspired by Richard and George Cadbury’s Bournville Village outside Birmingham, England.

The chocolate magnate Milton S. Hershey had a “personal infatuation” with Cuba. Underwood Archives/Getty Images

In Cuba as in Pennsylvania, Hershey’s factory complex was complemented by comfortable homes for his workers and their families, as well as swimming pools, baseball fields, and affordable medical clinics staffed with doctors, nurses, and dentists. Managers had access to a golf course and country club in Central Hershey. Schools provided free education for workers’ children.

Milton Hershey himself had very little formal education, and so in 1909 he and his wife, Catherine, established the Hershey Industrial School in Hershey, Penn. There, white, male orphans received an education until they were 18 years old. Now known as the Milton Hershey School, the school has broadened its admission criteria considerably over the years.

Hershey duplicated this concept in the Cuban company town of Central Rosario, founding the Hershey Agricultural School. The first students were children whose parents had died in a horrific 1923 train accident on the Hershey Electric Railway. The high-speed, head-on collision between two trains killed 25 people and injured 50 more.

Milton Hershey was a generous philanthropist, and by most accounts he truly cared for his employees and their welfare, and yet his early 20th-century paternalism was not without fault. He was a fierce opponent of union activity, and any hard-won pay increases for workers often came at the expense of profit-sharing benefits. Like other U.S. businessmen in Cuba, Hershey employed migrant seasonal labor from neighboring Caribbean islands, undercutting the wages of local workers. Historians are still wrangling with how to capture the long-lasting effects of U.S. economic imperialism on Cuba.

Can the Hershey Electric Railway Be Revived?

Hershey continued to acquire new sugar plantations in Cuba throughout the 1920s, eventually owning about 24,300 hectares and leasing another 12,000 hectares. In 1946, a year after Milton Hershey’s death and amid growing political uncertainty on the island, the company sold its Cuban interests to the Cuban Atlantic Sugar Co. In addition to Hershey’s sugar operations, the sale included a peanut oil plant, four electric plants, and 404 km of railroad track plus locomotives and train cars.

Service on the Hershey Electric Railway in Cuba continued into at least the 2010s but became increasingly sporadic, with aging equipment like this car at the Central Hershey station. Hershey Community Archives

The Central Hershey sugar refinery continued to operate even after the Cuban Revolution but eventually closed in 2002. Passenger service, meanwhile, continued on the Hershey Electric Railway, albeit sporadically: By 2012, there were only two trips a day between Havana and Matanzas. A video from 2013 gives a good sense of the route.

A colleague of mine who studies Cuban history told me that in his travels to the country over almost 30 years, he has never been able to ride the Hershey electric train. It was always out of service or had restricted service due to the island’s chronic electricity shortages, which have only gotten worse in recent years. I’ve been trying to find out if any part of the line is still operating. If you happen to know, please add a comment below.

Cuba’s frequent power outages make it difficult to operate the Hershey Electric Railway. In this 2009 photo, passengers await the restoration of electricity so they can continue their journey. Adalberto Roque/AFP/Getty Images

A 2024 analysis of the economic potential and challenges of reactivating Cuba’s Hershey Electric Railway noted that an electric railway could be a hedge against climate change and geopolitical factors. But it also acknowledged that frequent power outages and damaged infrastructure argued against reactivating the electrified line, and it favored the diesel engines used on most of Cuba’s rail network.

Cuba has been mostly off-limits to U.S. tourists for my entire life, but it was one of my grandmother’s favorite vacation spots. I would love to imagine a future where political ties are restored, the power grid is stabilized, and the Hershey Electric Railway is reopened to the Cuban public and to curious visitors like me.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the May 2026 print issue as “This Chocolate Empire Ran on Electric Rails.”

References


In April 1920, F.W. Peters of General Electric’s Railway and Traction Engineering Department wrote a detailed account called “Electrification of the Hershey Cuban Railway” in the General Electric Review, which was later abstracted in Scientific American Monthly to reach a broader audience.

Thomas R. Winpenny’s article “Milton S. Hershey Ventures into Cuban Sugar” in Pennsylvania History: A Journal of Mid-Atlantic Studies, Fall 1995, provided background to the business side of Hershey’s Cuba enterprise.

Florian Wondratschek’s 2024 article “Between Investment Risk and Economic Benefit: Potential Analysis for the Reactivation of the Hershey Railway in Cuba” in Transactions on Transport Sciences brought the story up to the present.

And if you’re interested in a visual take on the Hershey operation on Cuba, check out the documentary Milton Hershey’s Cuba by Ric Morris, a professor of Spanish and linguistics at Middle Tennessee State University.

Reference: https://ift.tt/Io5YPeu

Contrary to popular superstition, AES 128 is just fine in a post-quantum world


With growing focus on the existential threat quantum computing poses to some of the most crucial and widely used forms of encryption, cryptography engineer Filippo Valsorda wants to make one thing absolutely clear: Contrary to popular mythology that refuses to die, AES 128 is perfectly fine in a post-quantum world.

AES 128 is the most widely used variety of the Advanced Encryption Standard, a block cipher formally adopted by NIST in 2001. While the specification allows 192- and 256-bit key sizes, AES 128 is widely considered the preferred choice because it hits the sweet spot between the computational resources required to use it and the security it offers. The cipher has no known practical breaks in its 25-year history, so a brute-force attack is the only known way to defeat it. With 2¹²⁸, or about 3.4 × 10³⁸, possible keys, such an attack would take about 9 billion years using the entire Bitcoin mining capacity as of 2026.

It boils down to parallelization

Over the past decade, something interesting happened to all that public confidence. Amateur cryptographers and mathematicians misapplied Grover’s algorithm, a quantum search technique, to declare the death of AES 128 once a cryptographically relevant quantum computer (CRQC) came into being. They claimed a CRQC would halve the effective key length, reducing the search to just 2⁶⁴ operations, a small enough number that, if true, would allow the same Bitcoin mining resources to brute-force a key in less than a second. (The comparison is purely for illustration; a CRQC almost certainly couldn’t run like clusters of Bitcoin ASICs and, more importantly, couldn’t parallelize the workload as the amateurs assume.)
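The arithmetic behind those two figures is easy to check. Here is a minimal back-of-the-envelope sketch; the guess rate is an assumption chosen only for scale (on the order of 10²¹ guesses per second, roughly the magnitude attributed to the Bitcoin network), not a measured property of any real key-search hardware or of a hypothetical CRQC.

```python
# Back-of-the-envelope brute-force timing for AES keyspaces.
# ASSUMPTION: the aggregate guess rate below is illustrative only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
ASSUMED_RATE = 1e21  # guesses per second (assumption, for scale)

def brute_force_seconds(key_bits: int, rate: float = ASSUMED_RATE) -> float:
    """Seconds to exhaust a 2**key_bits keyspace at `rate` guesses/second."""
    return (2 ** key_bits) / rate

# Full 128-bit keyspace: on the order of ten billion years.
print(f"2^128: {brute_force_seconds(128) / SECONDS_PER_YEAR:.2e} years")

# Grover-halved 64-bit effective keyspace: well under a second,
# IF the workload could be parallelized like classical search (it can't).
print(f"2^64:  {brute_force_seconds(64):.4f} seconds")
```

The sketch shows why the 64-bit claim sounded alarming and why the parallelization caveat matters: the square-root speedup only holds for a single long-running quantum search, not for the massively parallel guessing that makes the classical rate figure meaningful.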

Read full article


Reference: https://ift.tt/exh5pmr

Monday, April 20, 2026

The USC Professor Who Pioneered Socially Assistive Robotics




When the robotics engineering field that Maja Matarić wanted to work in didn’t exist, she helped create it. In 2005 she helped define the new area of socially assistive robotics.

As an associate professor of computer science, neuroscience, and pediatrics at the University of Southern California, in Los Angeles, she developed robots to provide personalized therapy and care through social interactions.

Maja Matarić

Employer: University of Southern California, Los Angeles

Job title: Professor of computer science, neuroscience, and pediatrics

Member grade: Fellow

Alma maters: University of Kansas and MIT

The robots could have conversations, play games, and respond to emotions.

Today the IEEE Fellow is a professor at USC. She studies how robots can help students with anxiety and depression undergo cognitive behavioral therapy. CBT focuses on changing a person’s negative thought patterns, behaviors, and emotional responses.

For her work, she received a 2025 Robotics Medal from MassRobotics, which recognizes female researchers advancing robotics. The Boston-based nonprofit provides robotics startups with a workspace, prototyping facilities, mentorship, and networking opportunities.

When receiving the award at the ceremony in Boston, Matarić was overcome with joy, she says.

“I’ve been very fortunate to be honored with several awards, which I am grateful for. But there was something very special about getting the MassRobotics medal, because I knew at least half the people in the room,” she says. “Everyone was just smiling, and there was a great sense of love.”

Seeing herself as an engineer

Matarić grew up in Belgrade, Serbia. Her father was an engineer, and her mother was a writer. After her father died when she was 16, Matarić and her mother moved to the United States.

She credits her father for igniting her interest in engineering, and her uncle who worked as an aerospace engineer for introducing her to computer science.

Matarić says she didn’t consider herself an engineer until she joined USC’s faculty, since she always had worked in computer science.

“In retrospect, I’ve always been an engineer,” Matarić says. “But I didn’t set out specifically thinking of myself as one—which is just one of the many things I like to convey to young people: You don’t always have to know exactly everything in advance.”

Maja Matarić and her lab are exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder. National Science Foundation News

While pursuing her bachelor’s degree in computer science at the University of Kansas in Lawrence, she was introduced to industrial robotics through a textbook. After earning her degree in 1987, she had an opportunity to continue her education as a graduate student at MIT’s AI Lab (now the Computer Science and Artificial Intelligence Lab). During her first year, she explored the different research projects being conducted by faculty members, she said in a 2010 oral history conducted by the IEEE History Center. She met IEEE Life Fellow Rodney Brooks, who was working on novel reactive and behavior-based robotic systems. His work so excited her that she joined his lab and conducted her master’s thesis under his tutelage.

Inspired by the way animals use landmarks to navigate, Matarić developed Toto, the first navigating behavior-based robot. Toto used distributed models to map the AI Lab building where Matarić worked and plan its path to different rooms. Toto used sonar to detect walls, doors, and furniture, according to Matarić’s book The Robotics Primer.

After earning her master’s degree in AI and robotics in 1990, she continued to work under Brooks as a doctoral student, pioneering distributed algorithms that allowed a team of up to 20 robots to execute complex tasks in tandem, including searching for objects and exploring their environment.

Matarić earned her Ph.D. in AI and robotics in 1994 and joined Brandeis University, in Waltham, Mass., as an assistant professor of computer science. There she founded the Interaction Lab, where she developed autonomous robots that work together to accomplish tasks.

Three years later, she relocated to California and joined USC’s Viterbi School of Engineering as an assistant professor in computer science and neuroscience.

In 2002 she helped to found the Center for Robotics and Embedded Systems (now the Robotics and Autonomous Systems Center). The RASC focuses on research into human-centric and scalable robotic systems and promotes interdisciplinary partnerships across USC.

Matarić’s shift in her research came after she gave birth to her first child in 1998. When her daughter was a bit older and asked Matarić why she worked with robots, she wanted to be able to “say something better than ‘I publish a lot of research papers,’ or ‘it’s well-recognized,’” she says.


“Kids don’t consider those good answers, and they’re probably right,” she says. “This made me realize I was in a position to do something different. And I really wanted the answer to my daughter’s future question to be, ‘Mommy’s robots help people.’”

Matarić and her doctoral student David Feil-Seifer presented a paper defining socially assistive robotics at the 2005 International Conference on Rehabilitation Robotics. It was the only paper that talked about helping people complete tasks and learn skills by speaking with them rather than by performing physical jobs, she says.

Feil-Seifer is now a professor of computer science and engineering at the University of Nevada in Reno.

At the same time, she founded the Interaction Lab at USC and made its focus creating robots that provide social, rather than physical, support.

“At this point in my career journey, I’ve matured to a place where I don’t want to do just curiosity-driven research alone,” she says. “Plenty of what my team and I do today is still driven by curiosity, but it is answering the question: ‘How can we help someone live a better life?’”

In 2006 she was promoted to full professor and named senior associate dean for research in USC’s Viterbi School of Engineering. In 2012 she became vice dean for research.

“In academia, you can be in a leadership role and still do research,” she says. “It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”

Research in socially assistive robotics

One of the longest research projects Matarić has led at her Interaction Lab is exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder. ASD is a lifelong neurological condition that affects the way people interact with others, and the way they learn. Children with ASD often struggle with social behaviors such as reading nonverbal cues, playing with others, and making eye contact.

Matarić and her team developed a robot, Bandit, that can play games with a child and give the youngster words of affirmation. Bandit is 56 centimeters tall and has a humanlike head, torso, and arms. Its head can pan and tilt. The robot uses two FireWire cameras as its eyes, and it has a movable mouth and eyebrows, allowing it to exhibit a variety of facial expressions, according to IEEE Spectrum’s robots guide. Its torso is attached to a wheeled base.

The study showed that when interacting with Bandit, children with ASD exhibited social behaviors that were out of the ordinary for them, such as initiating play and imitating the robot.

Matarić and her team also studied how the robot could serve as a social and cognitive aid for elderly people and stroke patients. Bandit was programmed to instruct and motivate users to perform daily movement exercises such as seated aerobics.

Maja Matarić and doctoral student Amy O’Connell testing Blossom, which is being used to study how it can aid students with anxiety or depression. University of Southern California

Over the years, Matarić’s lab developed other robots including Kiwi and Blossom. Kiwi, which looked like an owl, helped children with ASD learn social and cognitive skills, helped motivate elderly people living alone to be more physically active, and mediated discussions among family members. Blossom, originally developed at Cornell, was adapted by the Interaction Lab to make it less expensive and personalizable for individuals. The robot is being used to study how it can aid students with anxiety or depression to practice cognitive behavioral therapy.

This line of research began when Matarić learned that large language model (LLM) chatbots were being promoted to help people with mental health struggles, she said in an episode of the AMA Medical News podcast.

“It is generally not easy to get [an appointment with a] therapist, or there might not be insurance coverage,” she said. “These, combined with the rates of anxiety and depression, created a real need.”

That made the chatbot idea appealing, she says, but she wanted to see how chatbots would compare with a friendly robot such as Blossom.

Matarić and her team used the same LLMs to power CBT practice with a chatbot and with Blossom. They ran a two-week study in the USC dorms, where students were randomly assigned to complete CBT exercises daily with either a chatbot or the robot. Participants filled out a clinical assessment to measure their psychiatric distress before and after each session.

The study showed that students who interacted with the robot experienced a significant decrease in psychiatric distress, Matarić said in the podcast, while students who interacted with the chatbot did not.


She and her team also reviewed transcripts of conversations between the students and the robot to evaluate how well the LLM responded to the participants. They found the robot was more effective than the chatbot, even though both were using the same model.

Based on those findings, in 2024 Matarić received a grant from the U.S. National Institute of Mental Health to conduct a six-week clinical trial to explore how effective a socially assistive robot could be at delivering CBT practice. The trial, currently underway, also is expected to study how Blossom can be personalized to adapt to each user’s preferences and progress, including the way the robot moves, which exercises it recommends, and what feedback it gives.

During the trial, the 120 students participating are wearing Fitbits to study their physiologic responses. The participants fill out a clinical assessment to measure their psychiatric distress before and after each session.

Data including the participants’ feelings of relating to the robot, intrinsic motivation, engagement, and adherence will be assessed by the research team, Matarić says.

She says she’s proud of the graduate students working on this project, and seeing them grow as engineers is one of the most rewarding parts of working in academia.

“Engineers generally don’t anticipate having to work with human study participants and needing to understand psychology in addition to the hardcore engineering,” she says. “So the students who choose to do this research are just wonderful, caring people.”

Finding a community at IEEE

Matarić joined IEEE as a graduate student in 1992, the year she published her first paper in IEEE Transactions on Robotics and Automation. The paper, “Integration of Representation Into Goal-Driven Behavior-Based Robots,” described her work on Toto.

As a member of the IEEE Robotics and Automation Society, she says she has gained a community of like-minded people. She enjoys attending conferences including the IEEE International Conference on Robotics and Automation, the IEEE/RSJ International Conference on Intelligent Robots and Systems, and the ACM/IEEE International Conference on Human-Robot Interaction, which is closest to her field of research.

Matarić credits IEEE Life Fellow George Bekey, the founding editor in chief of the IEEE Transactions on Robotics, for recruiting her for the USC engineering faculty position. He knew of her work through her graduate advisor Brooks, who published a paper in the journal that introduced reactive control and the subsumption architecture, which became the foundation of a new way to control robots. It is his most cited paper. Bekey, who was editor in chief at the time, helped guide Brooks through the challenging review process. Matarić joined Brooks’s lab at MIT two years after its publication, and her work on Toto built on that foundation.

“Joining a society has an impact, and it can be personal,” she says. “That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”

Reference: https://ift.tt/ljfsApV

Sunday, April 19, 2026

How Engineers Kick-Started the Scientific Method




In 1627, a year after the death of the philosopher and statesman Francis Bacon, a short, evocative tale of his was published. The New Atlantis describes how a ship blown off course arrives at an unknown island called Bensalem. At its heart stands Salomon’s House, an institution devoted to “the knowledge of causes, and secret motions of things” and to “the effecting of all things possible.” The novel captured Bacon’s vision of a science built on skepticism and empiricism and his belief that understanding and creating were one and the same pursuit.

No mere scholar’s study filled with curiosities, Salomon’s House had deep-sunk caves for refrigeration, towering structures for astronomy, sound-houses for acoustics, engine-houses, and optical perspective-houses. Its inhabitants bore titles that still sound futuristic: Merchants of Light, Pioneers, Compilers, and Interpreters of Nature.

Engraved title page of The Advancement and Proficience of Learning, with ship and globes. Public Domain

Bacon didn’t conjure his story from nothing. Engineers he likely had met or observed firsthand gave him reason to believe such an institution could actually exist. Two in particular stand out: the Dutch engineer Cornelis Drebbel and the French engineer Salomon de Caus. Their bold creations suggested that disciplined making and testing could transform what we know.

Engineers show the way

Drebbel came to England around 1604 at the invitation of King James I. His audacious inventions quickly drew notice. By the early 1620s, he unveiled a contraption that bordered on fantasy: a boat that could dive beneath the Thames and resurface hours later, ferrying passengers from Westminster to Greenwich. Contemporary descriptions mention tubes reaching the surface to supply air, while later accounts claim Drebbel had found chemical means to replenish it. He refined the underwater craft through iterative builds, each informed by test dives and adjustments. His other creations included a perpetual-motion device driven by heat and air-pressure changes, a mercury regulator for egg incubation, and advanced microscopes.

De Caus, who arrived in England around 1611, created ingenious fountains that transformed royal gardens into animated spectacles. Visitors marveled as statues moved and birds sang in water-driven automatons, while hidden pipes and pumps powered elaborate fountains and mythic scenes. In 1615, de Caus published The Reasons for Moving Forces, an illustrated manual on water- and air-driven devices like spouts, hydraulic organs, and mechanical figures. What set him apart was scale and spectacle: He pressed ancient physical principles into the service of courtly theater.

Drebbel’s airtight submersibles and methodical trials echo in the motion studies and environmental chambers of Salomon’s House. De Caus’s melodic fountains and hidden mechanisms parallel its acoustic trials and optical illusions. From such hands-on workshops, Bacon drew the lesson that trustworthy knowledge comes from working within material constraints, through gritty making and testing. On the island of Bensalem, he imagines an entire society organized around it.

Beyond inspiring Bacon’s fiction, figures like Drebbel and de Caus honed his emerging philosophy. In 1620, Bacon published Novum Organum, which critiqued traditional philosophical methods and advocated a fresh way to investigate nature. He pointed to printing, gunpowder, and the compass as practical inventions that had transformed the world far more than abstract debates ever could. Nature reveals its secrets, Bacon argued, when probed through ingenious tools and stringent tests. Novum Organum laid out the rationale, while New Atlantis gave it a vivid setting.

A final legacy to science

Engraved title page of Bacon’s Novum Organum, with ships between two pillars. Public Domain

That devotion to inquiry followed Bacon to the roadside one day in March 1626. In a biting late-winter chill, he halted his carriage for an impromptu trial. He bought a hen and helped pack its gutted body with fresh snow to test whether freezing alone could prevent decay. Unfortunately, the cold seeped through Bacon’s own body, and within weeks pneumonia claimed him. Bacon’s life ended with an experiment—and set in motion a larger one. In 1660, a group of London thinkers hailed Bacon as their inspiration in founding the Royal Society. Their motto, Nullius in verba (“take no one’s word for it”), committed them to evidence over authority, and their ambition was nothing less than to create a Salomon’s House for England.

The Royal Society and its successors realized fragments of Bacon’s dream, institutionalizing experimental inquiry. Over the following centuries, though, a distorting story took root: Scientists discover nature’s truths, and the rest is just engineering. Nineteenth-century “men of science” pressed for greater recognition and invented the title of “scientist,” creating a new professional hierarchy. Across the Atlantic, U.S. engineers adopted the rigorous science-based curricula of French and German technical schools and recast engineering as “applied science” to gain institutional legitimacy.

We still call engineering “applied science,” a label that retrofits and reverses history. Alongside it stands “technology,” a catchall word that obscures as much as it describes. And we speak of “development” as if ideas cascade neatly from theory to practice. But creation and comprehension have been partners from the start. Yes, theory does equip engineers with tools to push for further insights. But knowing often follows making, arising from things that someone made work.

Bacon’s imaginary academy offered only fleeting glimpses of its inventions and methods. Yet he had seen the real thing: engineers like Drebbel and de Caus who tested, erred, iterated, and pushed their contraptions past the edge of known theory. From his observations of those muddy, noisy endeavors, Bacon forged his blueprint for organized inquiry. Later generations of scientists would reduce Bacon’s ideas to the clean, orderly “scientific method.” But in the process, they lost sight of its inventive roots.

Reference: https://ift.tt/ORjtHCA

Friday, April 17, 2026

US-sanctioned currency exchange says $15 million heist done by "unfriendly states"


Grinex, a US-sanctioned cryptocurrency exchange registered in Kyrgyzstan, said it’s halting operations after experiencing a $13 million heist carried out by “western special services” hackers.

Researchers from TRM, which has confirmed the theft, put the value of the stolen assets at $15 million after discovering roughly 70 drained addresses, about 16 more than Grinex reported. Neither TRM nor fellow blockchain research firm Elliptic has said how the attackers slipped past Grinex’s defenses. Grinex said it has faced near-constant attack attempts since incorporating 16 months ago. The latest attacks, it said, targeted the exchange’s Russian users.

Damaging "Russia's financial sovereignty"

“The digital footprints and nature of the attack indicate an unprecedented level of resources and technology available exclusively to the structures of unfriendly states,” Grinex said. “According to preliminary data, the attack was coordinated with the aim of causing direct damage to Russia's financial sovereignty.”

Read full article

Comments

Reference: https://ift.tt/IjfTrny

Designing Broadband LPDA-Fed Reflector Antennas With Full-Wave EM Simulation




A practical guide to designing log-periodic dipole array fed parabolic reflector antennas using advanced 3D MoM simulation — from parametric modeling to electrically large structures.

What Attendees will Learn

  1. How to set design requirements for LPDA-fed reflector antennas — Understand the key specifications including bandwidth ratio, gain targets, and VSWR matching constraints across the full operating range from 100 MHz to 1 GHz.
  2. Why advanced 3D EM solvers enable simulation of electrically large multiscale structures — Learn how higher order basis functions, quadrilateral meshing, geometrical symmetry, and CPU/GPU parallelization extend MoM simulation capability by an order of magnitude.
  3. How to apply a systematic three-step design strategy — Follow a proven workflow that first optimizes the stand-alone LPDA for VSWR and gain, then integrates the reflector, and finally tunes parameters to satisfy all performance requirements, including gain and impedance matching.
  4. How parametric CAD modeling accelerates LPDA design — Discover how self-scaling geometry, automated wire-to-solid conversion, and multiple-copy-with-scaling features enable fully parametrized antenna models that streamline optimization across dozens of design variants.
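The self-scaling geometry mentioned in point 4 can be sketched with the classic log-periodic relations, where element lengths shrink by a scale factor tau and spacing is set by a relative-spacing factor sigma. The values below are illustrative defaults under those standard relations, not a validated feed design:

```python
# Sketch of self-scaling LPDA geometry using the classic design parameters:
# tau (length ratio between adjacent elements) and sigma (relative spacing).
# Illustrative values only, not a validated feed design.
C = 299_792_458.0  # speed of light, m/s

def lpda_elements(f_min_hz, f_max_hz, tau=0.9, sigma=0.16):
    """Return (length, position) pairs, longest element at x = 0."""
    elements = []
    length = C / (2 * f_min_hz)          # half-wave dipole at the lowest frequency
    x = 0.0
    while length > C / (2 * f_max_hz):   # stop below the highest frequency
        elements.append((length, x))
        x += 2 * sigma * length          # spacing to the next, shorter element
        length *= tau                    # each element scaled down by tau
    return elements

elems = lpda_elements(100e6, 1e9)        # the 100 MHz to 1 GHz band above
print(f"{len(elems)} elements, boom length {elems[-1][1]:.2f} m")
```

Because every dimension derives from tau, sigma, and the band edges, the whole model stays parametric: changing one scalar regenerates all element lengths and positions, which is what makes automated sweeps across dozens of design variants practical.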
Reference: https://ift.tt/bitK1UV

Recent advances push Big Tech closer to the Q-Day danger zone


Sometime around 2010, sophisticated malware known as Flame hijacked the mechanism that Microsoft used to distribute updates to millions of Windows computers around the world. The malware—reportedly jointly developed by the US and Israel—pushed a malicious update throughout an infected network belonging to the Iranian government.

The lynchpin of the "collision" attack was an exploit of MD5, a cryptographic hash function Microsoft was using to authenticate digital certificates. By minting a cryptographically perfect digital signature based on MD5, the attackers forged a certificate that authenticated their malicious update server. Had the attack been used more broadly, it would have had catastrophic consequences worldwide.

Getting uncomfortably close to the danger zone

The event, which came to light in 2012, now serves as a cautionary tale for cryptography engineers as they contemplate the downfall of two crucial cryptography algorithms used everywhere. Since 2004, MD5 has been known to be vulnerable to "collisions," a fatal flaw that allows adversaries to generate two distinct inputs that produce identical outputs.
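The collision flaw can be demonstrated in miniature by truncating the hash: the birthday paradox makes a 24-bit digest collide after only a few thousand random tries. This sketch does not reproduce the structured attacks that break full MD5, but it shows the same failure mode at toy scale:

```python
import hashlib

# A "collision" is two distinct inputs hashing to the same output. Full-MD5
# collisions require the structured attacks known since 2004; here we truncate
# MD5 to 3 bytes so a brute-force birthday search (a few thousand tries on
# average) exhibits the same failure mode in miniature.
def truncated_md5(data: bytes, nbytes: int = 3) -> bytes:
    return hashlib.md5(data).digest()[:nbytes]

seen = {}
counter = 0
while True:
    msg = str(counter).encode()
    digest = truncated_md5(msg)
    if digest in seen:                 # another message already produced this digest
        m1, m2 = seen[digest], msg
        break
    seen[digest] = msg
    counter += 1

assert m1 != m2 and truncated_md5(m1) == truncated_md5(m2)
print(f"collision after {counter} messages: {m1!r} vs {m2!r}")
```

Against the full 128-bit MD5 output a birthday search would need about 2^64 work, which is why the practical attacks instead exploit MD5's internal structure, as the Flame attackers did.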

Read full article

Comments

Reference: https://ift.tt/UtFlDGB

Thursday, April 16, 2026

IEEE Entrepreneurship Connects Hardware Startups With Investors




Roughly 90 percent of hard tech startups fail due to funding constraints, longer R&D timelines for developing hardware, and the complexity of manufacturing their products, according to a number of studies.

Generally, these startups require up to 50 percent more investor financing than software ones, according to a Medium article. Typically, they need at least US $30 million, according to a Lucid article. That’s double the funding needed by software companies on average.

To help them connect with investors, IEEE Entrepreneurship in 2024 launched its Hard Tech Venture Summits. The two-day events connect founders with potential investors and other entrepreneurs. Attendees include manufacturers, design engineers, and intellectual property lawyers.

“Even though there are a lot of startup investor conferences, it’s hard to find those focused on hard tech,” says Joanne Wong, who helped initiate the program and is now the chair. She is a general partner at Redds Capital, a California-based venture capital firm that invests in global early-stage IT startups.

The IEEE member is also an entrepreneur. She founded SciosHub in 2020. The company’s software-as-a-service and informatics platform automates the data-management process for biomedical research labs.

“Many investors are focused on AI software—which is good,” she says. “But for hard tech companies, it is still hard to find support.”

The summit also includes a workshop to help founders navigate manufacturing processes and regulatory compliance. The event is open to IEEE members and others.

IEEE is a natural fit for the program, Wong says, because hard tech is synonymous with electrical engineering.

“Some of the domains we’re covering are robotics, semiconductors, and aerospace technology. IEEE has societies for all these fields,” she says. “Because of that, there are many resources within the organizations for startups, whether it be mentors or guides on how to commercialize products.”

There are several venture summits planned for this year. Two are scheduled in collaboration with the IEEE Systems Council: this month in Menlo Park, Calif., and in October in Toronto.

On 10 and 11 June, a third summit is scheduled to take place in Boston at the IEEE Microwave Theory and Technology Society’s International Microwave Symposium.

More events are being planned for next year in Asia, Europe, Latin America, and North America.

Networking and a pitch competition

Each summit includes keynote speakers, followed by networking roundtables. Each table is composed of people from three to five startups, one or two investors, and a service provider.

That arrangement helps founders build relationships, which is the summit organizers’ priority, Wong says. Investors at past events have included i3 Ventures, Monozukuri Ventures, and TSV Capital.

“The connection with the community was fantastic, especially investors and founders in robotics.” —Mark Boysen, founder of Naware

Startups present their pitches, which a number of investors evaluate before ranking each business plan and product. The top 10 startups then pitch their business to all the investors.

On the second day, the startup founders participate in a half-day engineering design–to–manufacturing workshop, at which manufacturing engineers teach them how to navigate the process and meet regulations.

In an exhibition area, participants can see demonstrations from the startups and connect with service providers.

The 2025 event’s half-day engineering design–to–manufacturing workshop was led by Liz Taylor, president of DOER Marine. The company manufactures marine equipment. Larissa Abi Nakhle/IEEE

Positive feedback from attendees

In a survey of past summit attendees, startup founders said the event connected them not only with investors but also with other entrepreneurs having similar struggles.

“The connection with the community was fantastic, especially investors and founders in robotics,” said Mark Boysen, who founded Naware. The company, based in Edina, Minn., developed a robot that uses AI to detect and remove weeds from golf courses, parks, and lawns.

“I loved getting the investors’ perspectives and understanding what they’re looking for,” Boysen said.

Jeffrey Cook, who attended a summit in 2024, said he met “a lot of great contacts and saw what the hard tech venture climate is like.”

Attendees of the Hard Tech Venture Summit spend the first day networking and presenting their pitch to investors. IEEE Entrepreneurship

“Those in the community would benefit from coming to the summit,” said Cook, who founded Gigantor Technologies in Melbourne Beach, Fla. It develops hardware systems for AI-powered devices.

More than 90 percent of attendees at the 2025 event in San Francisco said they would highly recommend the summit to others, according to a survey.

Investors and service providers also have found the events successful.

Ji Ke, a partner and the chief technology officer of deep tech VC firm SOSV, attended the 2025 summit.

“I met a lot of young entrepreneurs tackling some big challenges,” he said. “This is one of the best events to meet some very-early-stage companies.”

Making important connections in hard tech

Startup founders who want to attend a summit must apply. Applications for this year’s events are open. Participants must be founders of preseed, seed, or Series A startups.

Preseed founders are seeking small investments to get their businesses off the ground. Those in the seed stage have already secured funding from their first investor. Series A startups have obtained funding and are developing their product.

Applicants are reviewed by a committee of investors to ensure the startups would be a good fit. Those who are approved are matched with investors and service providers based on their specialty.

“The journey for a hard tech startup is very long and arduous,” Wong says. “Founders need to meet as many investors as possible and other people who support hard tech systems so that they’re able to reach out to them for advice or help.”

Those interested in learning more about an upcoming event can send a request to entrepreneurship@ieee.org.

Reference: https://ift.tt/PvxUhWb

Wednesday, April 15, 2026

Crypto Faces Increased Threat from Quantum Attacks




The race to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms that are commonly used today to protect data online—RSA and elliptic curve cryptography—are uncrackable by supercomputers, but a large enough quantum computer would make quick work of them. There are algorithms secure enough to be out of reach for both classical and future quantum machines, called post-quantum cryptography, but transitioning to these is a work in progress.
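As a toy illustration of why efficient factoring breaks RSA, here is a sketch with deliberately tiny primes. Real keys use primes hundreds of digits long, where factoring is classically infeasible but Shor's algorithm on a sufficiently large quantum computer would succeed:

```python
# Toy RSA with tiny primes, showing that whoever can factor the public
# modulus recovers the private key. Shor's algorithm performs the factoring
# step efficiently at real key sizes; brute force works here only because
# the numbers are tiny.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

m = 42
c = pow(m, e, n)               # encrypt
assert pow(c, d, n) == m       # decrypt with the private key

def factor(n):
    """Brute-force factoring, feasible only for a toy modulus."""
    for f in range(2, int(n**0.5) + 1):
        if n % f == 0:
            return f, n // f

# An attacker who factors n rebuilds an equivalent private key:
p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(c, d2, n) == m
```

Elliptic curve cryptography rests on a different hard problem (discrete logarithms), but Shor's algorithm solves that one efficiently too, which is why both families need replacing.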

Late last month, the team at Google Quantum AI published a whitepaper that added significant urgency to this race. In it, the team showed that a quantum computer capable of posing a cryptographic threat could be roughly one-twentieth the size previously thought. That is still far beyond today’s machines: the largest quantum computers currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimated that about 500 times as many are needed. Nonetheless, the result shortens the timeline for switching over to post-quantum algorithms.

The news had a surprising beneficiary: the obscure cryptocurrency Algorand jumped 44% in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on its blockchain. We caught up with Algorand’s chief scientific officer, Chris Peikert, a professor of computer science and engineering at the University of Michigan, to understand how the announcement is affecting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. Peikert’s early work on lattice-based cryptography, a family of schemes built on hard lattice problems, underlies most post-quantum security today.

IEEE Spectrum: What is the significance of this Google Quantum AI whitepaper?

Peikert: The upshot of this paper is that it shows a quantum computer would be able to break some of the most widely used cryptography, especially in blockchains and cryptocurrencies, with far fewer resources than had previously been established. Those resources include the time it would take to do so and the number of qubits (or quantum bits) it would have to use.

This cryptography is central not just to cryptocurrencies but, more broadly, to security across the internet. It is also used for secure web connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s prevalent and pervasive in all modern networks and protocols.

And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s a kind of a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.

IEEE Spectrum: What do Google AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?

Peikert: There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was really the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a re-evaluation across the industry, and a moving up of the timeline for when quantum computers might be capable of breaking this cryptography.

When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.

IEEE Spectrum: What is your guess on if or when quantum computers will be able to break cryptography in the real world?

Peikert: Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also some last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. But then, as you get out to several years, like 5, 6, or 10 years, one has to seriously consider a probability, maybe 5% or 10% or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous.

The US government has put 2035 as its target for migrating all of its national security systems to post-quantum cryptography. That seems like a prudent date, given the timelines it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, and that everything still works properly. So, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally yesterday or a few years ago, for that kind of transition.

IEEE Spectrum: Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?

Peikert: Cryptography is very hard to change. We’ve had only one or maybe two major transitions in cryptography since the late 1970s and early 1980s, when the field was first invented. We don’t really have a systematic way of transitioning cryptography.

An additional challenge is that the performance tradeoffs are very different in post-quantum cryptography than they are in the legacy systems. Keys, ciphertexts, and digital signatures are all significantly larger in post-quantum cryptography, but the computations are typically faster. People have optimized cryptography for speed in the past, and post-quantum schemes are now very fast, but the sizes of the keys are a challenge.
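To make the size tradeoff concrete, here is a short Python sketch comparing approximate sizes of classical primitives with NIST-standardized post-quantum ones. The post-quantum figures come from FIPS 203/204; the classical figures are common encodings; treat all of them as ballpark values rather than specifications:

```python
# Approximate object sizes in bytes for classical vs. post-quantum primitives.
# Post-quantum numbers are from FIPS 203 (ML-KEM) and FIPS 204 (ML-DSA);
# classical numbers reflect common encodings. Ballpark values only.
sizes = {
    # name:                               (public key, signature or ciphertext)
    "Ed25519 signature (classical)":      (32,   64),
    "RSA-2048 signature (classical)":     (270,  256),
    "ML-DSA-65 signature (post-quantum)": (1952, 3309),
    "ML-KEM-768 KEM (post-quantum)":      (1184, 1088),
}
for name, (pk, obj) in sizes.items():
    print(f"{name:38s} pk = {pk:5d} B   output = {obj:5d} B")
```

An ML-DSA-65 signature is roughly 50 times the size of an Ed25519 one, which is exactly the kind of overhead that hurts most where blockchain space is at a premium.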

Especially in blockchain applications like cryptocurrencies, space on the blockchain is at a premium. So in many applications it calls for a reevaluation of how we integrate the cryptography into the system, and that work is ongoing. And the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases, we have rudimentary constructions of these fancy cryptographic tools from post-quantum mathematics, but they’re not nearly as mature and industry-ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of these very fancy cryptographic schemes used in cutting-edge applications.

IEEE Spectrum: As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?

Peikert: My former PhD advisor is Silvio Micali, the inventor of Algorand. The system is very elegant. It is a very high performing blockchain system and it uses very little energy, has fast transaction finalization, and a number of other great features. And Silvio appreciated that this quantum threat was real and was coming, and the team approached me about helping to improve the Algorand protocol at the basic levels to become more post-quantum secure in 2021. That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms that were underlying the protocol.

IEEE Spectrum: What is the current status of post-quantum cryptography in Algorand, and blockchains in general?

Peikert: We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon.

Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called state proofs for that, which is a mixture of ordinary post-quantum cryptography and also some more fancy cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller number of signatures, while still being confident that these large number of signatures actually exist and are properly formed. We also followed it with other papers and projects that are about adding post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem.

It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.

IEEE Spectrum: In your view, will we adopt post-quantum cryptography before the risks actually catch up with us?

Peikert: I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have good either mitigations or full migrations in place.

But it’s also a point on the horizon that we don’t know exactly when it will come. So, there is the possibility that there is a huge breakthrough, and we have many fewer years than we might have hoped for, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.

Reference: https://ift.tt/0cX96Pi

Tuesday, April 14, 2026

OpenAI Engineer Helps Companies Attract Buyers and Boost Sales




Like many engineers, Sarang Gupta spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life.

When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long.

Sarang Gupta


Employer

OpenAI in San Francisco

Job

Data science staff member

Member grade

Senior member

Alma maters

The Hong Kong University of Science and Technology; Columbia

By age 11, his interest expanded from nuts and bolts to software. He learned programming languages such as Basic and Logo and designed simple programs including one that helped a local restaurant automate online ordering and billing.

Gupta, an IEEE senior member, brings his mix of curiosity, hands-on problem-solving, and a desire to make things work better to his role as member of the data science staff at OpenAI in San Francisco. He works with the go-to-market (GTM) team to help businesses adopt ChatGPT and other products. He builds data-driven models and systems that support the sales and marketing divisions.

Gupta says he tries to ensure his work has an impact. When making decisions about his career, he says, he thinks about what AI solutions he can unlock to improve people’s lives.

“If I were to sum up my overall goal in one sentence,” he says, “it’s that I want AI’s benefits to reach as many people as possible.”

Pursuing engineering through a business lens

Gupta’s early interest in tinkering and programming led him to choose physics, chemistry, and math as his higher-level subjects at Chinmaya International Residential School, in Tamil Nadu, India. As part of the high school’s International Baccalaureate chapter, students select three subjects in which to specialize.

“I was interested in engineering, including the theoretical part of it,” Gupta says, “But I was always more interested in the applications: how to sell that technology or how it ties to the real world.”

After graduating in 2012, he moved overseas to attend the Hong Kong University of Science and Technology. The university offered a dual bachelor’s program that allowed him to earn one degree in industrial engineering and another in business management in just four years.

In his spare time, Gupta built a smartphone app that let students upload their class schedules and find classmates to eat lunch with. The app didn’t take off, he says, but he enjoyed developing it. He also launched Pulp Ads, a business that printed advertisements for student groups on tissues and paper napkins, which were distributed in the school’s cafeterias. He made some money, he says, but shuttered the business after about a year.

After graduating from the university in 2016, he decided to work in Hong Kong’s financial hub and joined Goldman Sachs as an analyst in the bank’s operations division.

From finance to process optimization at scale

After two parties agree on securities transactions, the bank’s operations division ensures that the trade details are recorded correctly, the securities and payments are ready to transfer, and the transaction settles accurately and on time.

As an analyst, Gupta’s task was to find bottlenecks in the bank’s workflows and fix them. He identified an opportunity to automate trade reconciliation, the process in which analysts manually compared data across spreadsheets and systems to make sure a transaction’s details were consistent, so that financial transactions were recorded accurately and settled correctly.

Gupta built internal automation tools that pulled trade data from different systems, ran validation checks, and generated reports highlighting any discrepancies.

“Instead of analysts manually checking large datasets, the tools automatically flagged only the cases that required investigation,” he says. “This helped the team spend less time on repetitive verification tasks and more time resolving complex issues. It was also my first real exposure to how software and data systems could dramatically improve operational workflows.”
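The flag-only-the-mismatches idea can be sketched in a few lines. This is an illustrative toy, not Gupta's actual tools: the trade IDs, fields, and values are invented:

```python
# Illustrative-only sketch of trade reconciliation: compare the same trades
# as recorded in two systems and surface only mismatches for human review.
# Trade IDs, fields, and values are invented.
system_a = {"T1": ("AAPL", 100, 151.20), "T2": ("MSFT", 50, 310.00)}
system_b = {"T1": ("AAPL", 100, 151.20), "T2": ("MSFT", 50, 310.05)}

discrepancies = {
    trade_id: (system_a.get(trade_id), system_b.get(trade_id))
    for trade_id in sorted(set(system_a) | set(system_b))
    if system_a.get(trade_id) != system_b.get(trade_id)
}
print(discrepancies)  # only the mismatched trade needs investigation
```

Matching records drop out automatically, so analysts see only the exceptions, which is the workflow improvement described above.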

“Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”

The experience made him realize he wanted to work more deeply in technology and data-driven systems, he says. He decided to return to school in 2018 to study data science and AI, when the fields were just beginning to surge into broader awareness.

He discovered that Columbia offered a dedicated master’s degree program in data science with a focus on AI. After being accepted in 2019, he moved to New York City.

Throughout the program, he gravitated to the applied side of machine learning, taking courses in applied deep learning and neural networks.

One of his major academic highlights, he says, was a project he did in 2019 with the Brown Institute, a joint research lab between Columbia and Stanford focused on using technology to improve journalism. The team worked with The Philadelphia Inquirer to help the newsroom staff better understand their coverage from a geographic and social standpoint. The project highlighted “news deserts”—underserved communities for which the newspaper was not providing much coverage—so the publication could redirect its reporting resources.

To identify those areas, Gupta and his team built tools that extracted locations such as street names and neighborhoods from news articles and mapped them to visualize where most of the coverage was concentrated. The Inquirer implemented the tool in several ways, including a new web page that aggregated stories about COVID-19 by county.
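A minimal version of that coverage mapping is a gazetteer match over article text, counting mentions per place. The neighborhoods and articles below are invented for illustration; the actual project likely used more sophisticated entity extraction:

```python
from collections import Counter

# Hypothetical gazetteer of neighborhoods; a real one would come from city GIS data.
gazetteer = ["Fishtown", "Kensington", "Germantown", "Center City"]

# Made-up article snippets standing in for a real news archive.
articles = [
    "New cafe opens in Fishtown as Center City offices stay quiet.",
    "Center City braces for weekend road closures.",
    "School funding debate continues in Center City council chambers.",
]

def coverage_counts(texts, places):
    """Count how often each known place is mentioned across the articles."""
    counts = Counter()
    for text in texts:
        for place in places:
            if place in text:
                counts[place] += 1
    return counts

counts = coverage_counts(articles, gazetteer)

# Places with zero mentions are candidate "news deserts" worth more coverage.
deserts = [p for p in gazetteer if counts[p] == 0]
```

Mapping these counts geographically, as the team did, makes the imbalance visible at a glance: heavily covered neighborhoods stand out while the deserts stay blank.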

“Journalism was an interesting problem set for me, because I really like to read the news every day,” Gupta says. “It was an opportunity to work with a real newsroom on a problem that felt really impactful for both the business and the local community.”

The GenAI inflection point

After earning his master’s degree in 2020, Gupta moved to San Francisco to join Asana, the company that developed the work management platform of the same name. He was drawn to the opportunity to work for a relatively small company where he could have end-to-end ownership of projects. He joined the organization as a product data scientist, focusing on A/B testing for new platform features.

Two years later, a new opportunity emerged: He was asked to lead the launch of Asana Intelligence, an internal machine learning team building AI-powered features into the company’s products.

“I felt I didn’t have enough experience to be the founding data scientist,” he says. “But I was also really interested in the space, and spinning up a whole machine learning program was an opportunity I couldn’t turn down.”

The Asana Intelligence team was given six months to build several machine learning–powered features to help customers work more efficiently. They included automatic summaries of project updates, insights about potential risks or delays, and recommendations for next steps.

The team met that goal and launched several other features including Smart Status, an AI tool that analyzes a project’s tasks, deadlines, and activity, then generates a status update.

“When you finally launch the thing you’ve been working on, and you see the usage go up, it’s exhilarating,” he says. “You feel like that’s what you were building toward: users actually seeing and benefiting from what you made.”

Gupta and his team also translated that first wave of work into reusable frameworks and documentation to make it easier to create machine learning features at Asana. He and his colleagues filed several U.S. patents.

Around the time he took on that role, OpenAI launched ChatGPT. The mainstreaming of generative AI and large language models shifted much of his work at Asana from model development to assessing LLMs.

OpenAI captured the attention of people around the world, including Gupta. In September 2025 he left Asana to join OpenAI’s data science team.

The transition has been both energizing and humbling, he says. At OpenAI, he works closely with the marketing team to help guide strategic decisions. His work focuses on developing models to understand the efficiency of different marketing channels, to measure what’s driving impact, and to help the company better reach and serve its customers.

“The pace is very different from my previous work. Things move quickly,” he says. “The industry is extremely competitive, and there’s a strong expectation to deliver fast. It’s been a great learning experience.”

Gupta says he plans to stay in the AI space. With technology evolving so rapidly, he says, he sees enormous potential for task automation across industries. AI has already transformed his core software engineering work, he says, and it’s helped him enhance areas that aren’t natural strengths.

“I’m not a good writer, and AI has been huge in helping me frame my words better and present my work more clearly,” he says. “Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”

Exploring IEEE publications and connections

Gupta has been an IEEE member since 2024, and he values the organization as both a technical resource and a professional network.

He regularly turns to IEEE publications and the IEEE Xplore Digital Library to read articles that keep him abreast of the evolution of AI, data science, and the engineering profession.

IEEE’s member directory tools are another valuable resource that he uses often, he says.

“It’s been a great way to connect with other engineers in the same or similar fields,” he says. “I love sharing and hearing about what folks are working on. It brings me outside of what I’m doing day to day.

“It inspires me, and it’s something I really enjoy and cherish.”

Reference: https://ift.tt/BHaYmR0

What It’s Like to Live With an Experimental Brain Implant




Scott Imbrie vividly remembers the first time he used a robotic arm to shake someone’s hand and felt the robotic limb as if it were his own. “I still get goosebumps when I think about that initial contact,” he says. “It’s just unexplainable.” The moment came courtesy of a brain implant: an array of electrodes that let him control a robotic arm and receive tactile sensations back to the brain.

Getting there took decades. In 1985, Imbrie had woken up in the hospital after a car accident with a broken neck and a doctor telling him he’d never use his hands or legs again. His response was an expletive, he says—and a decision. “I’m not going to allow someone to tell me what I can and can’t do.” With the determination of a headstrong 22-year-old, Imbrie gradually regained the ability to walk and some limited arm movement. Aware of how unusual his recovery was, the Illinois native wanted to help others in similar situations and began looking for research projects related to spinal cord injuries. For decades, though, he wasn’t the right fit, until in 2020 he was finally accepted into a University of Chicago trial.

Scott Imbrie has shaken hands with a robotic arm controlled by a brain implant. The electrodes record neural signals that enable him to move the device and receive tactile feedback. Top: 60 Minutes/CBS News; Bottom: University of Chicago

Imbrie is part of a rarefied group: More people have gone to space than have received advanced brain-computer interfaces (BCIs) like his. But a growing number of companies are now attempting to move the devices out of neuroscience labs and into mainstream medical care, where they could help millions of people with paralysis and other neurological conditions. Some companies even hope that BCIs will eventually become a consumer technology.

None of that will be possible without people like Imbrie. He’s a member of the BCI Pioneers Coalition, an advocacy group founded in 2018 by Ian Burkhart, the first quadriplegic to regain hand movement using a brain implant.

That life-changing experience convinced Burkhart that BCIs will make the leap from lab to real world only if users help shape the technology by sharing their perspectives on what works, what doesn’t, and how the devices fit into daily life. The coalition aims to ensure that companies, clinicians, and regulators hear directly from trial participants.

Ian Burkhart founded the BCI Pioneers Coalition to ensure that companies developing brain implants hear directly from the people using them. Left: Andrew Spear/Redux; Right: Ian Burkhart

The group also serves as a peer-support network for trial participants. That’s crucial, because despite the steady drumbeat of miraculous results from BCI trials, receiving a brain implant comes with significant risks. Surgical complications, such as bleeding or infection in the brain, are possible. Even more concerning is the potential psychological toll if the implant fails to work as expected or if life-changing improvements are eventually withdrawn.

Researchers spell this out upfront, and many are put off, says John Downey, an assistant professor of neurological surgery at the University of Chicago and the lead on Imbrie’s clinical trial. “I would say, the number of people I talk to about doing it is probably 10 to 20 times the number of people that actually end up doing it,” he says.

What Happens in a BCI Trial?

BCI pioneers arrive at their unique status via a number of paths, including spinal cord injuries, stroke-induced paralysis, and amyotrophic lateral sclerosis (ALS). The implants they receive come from Blackrock Neurotech, Neuralink, Synchron, and other companies, and are being tested for restoring limb function, controlling computers and robotic arms, and even restoring speech.

Many of the implants record signals from the motor cortex—the part of the brain that controls voluntary movements—to move external devices. Some others target the somatosensory cortex, which processes sensory signals from the body, including touch, pain, temperature, and limb position, to re-create tactile sensation.

BCI Designs Used by Today’s Pioneers


Diagram comparing three brain-computer interface implants from Blackrock, Neuralink, Synchron.

Ease of use depends heavily on the application. Restoring function to a user’s own limbs or controlling robotic arms involves the most difficult learning curve. In early sessions, participants watch a virtual arm reach for objects while they imagine or attempt the same movement. Researchers record related brain signals and use them to train “decoder” software, which translates neural activity into control signals for a robotic arm or stimulation patterns for the user’s nerves or muscles.

Paralyzed in a 2010 swimming accident, Burkhart took part in a trial conducted by Battelle Memorial Institute and Ohio State University from 2014 to 2021. His implant recorded signals from his motor cortex as he attempted to move his hand, and the system relayed those commands to electrodes in his arm that stimulated the muscles controlling his fingers.

Ian Burkhart, who is paralyzed from the chest down, received a brain implant that routed neural signals through a computer to his paralyzed muscles, enabling him to play a video game. Battelle

Getting the system to work seamlessly took time, says Burkhart, and initially required intense concentration. Eventually, he could shift his focus from each individual finger movement to the overall task, allowing him to swipe a credit card, pour from a bottle, and even play Guitar Hero.

Training a decoder is also not a one-and-done process. Systems must be regularly recalibrated to account for “neural drift”—the gradual shift in a person’s neural activity patterns over time. For complex tasks like robotic arm control, researchers may have to essentially train an entirely new decoder before each session, which can take up to an hour.
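A toy version of this train-and-recalibrate loop can be sketched with a linear decoder. Everything here is synthetic: the channel count, the signals, and the drift are stand-ins for illustration, not data or methods from any trial:

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_channels = 300, 96  # e.g., a 96-electrode array, values illustrative

# Session 1: synthetic firing rates and the cursor velocities the user intends.
mapping = rng.normal(size=(n_channels, 2))      # hidden neural-to-velocity map
rates = rng.normal(size=(n_steps, n_channels))
velocity = rates @ mapping + 0.1 * rng.normal(size=(n_steps, 2))

# "Training the decoder": fit a linear map from neural activity to intent.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Neural drift: the underlying activity-to-intent mapping shifts between sessions.
drifted = mapping + 0.5 * rng.normal(size=mapping.shape)
rates2 = rng.normal(size=(n_steps, n_channels))
velocity2 = rates2 @ drifted + 0.1 * rng.normal(size=(n_steps, 2))

# The stale decoder's predictions degrade on the new session's data...
stale_error = np.abs(rates2 @ decoder - velocity2).mean()

# ...so the session begins with recalibration: refit on fresh data.
decoder2, *_ = np.linalg.lstsq(rates2, velocity2, rcond=None)
fresh_error = np.abs(rates2 @ decoder2 - velocity2).mean()
```

Refitting on data from the new session restores accuracy, which is why complex tasks like robotic arm control can require retraining the decoder before each session.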

Austin Beggin says that testing a BCI is hard work, but he adds that moments like petting his dog make it all worth it. Daniel Lozada/The New York Times/Redux

Even after the system is ready, using the device can be taxing, says Austin Beggin, who was paralyzed in a swimming accident in 2015 and now participates in a Case Western Reserve University trial aimed at restoring hand movement. “The mental work of just trying to do something like shaking hands or feeding yourself is 100-fold versus you guys that don’t even think about it,” he says.

It’s also a serious time commitment. Beggin travels more than 2 hours from his home in Lima, Ohio, to Cleveland for two weeks every month to take part in experiments. All the equipment is set up in the house he stays in, and he typically works with the researchers for 3 to 4 hours a day. The majority of the experiments are not actually task-focused, he says, and instead are aimed at adjusting the control software or better understanding his neural responses to different stimuli.

But the BCI users say the hard work is worth it. Beyond the hope of restoring lost function, many feel a strong moral obligation to advance a technology that could help others. Beggin compares the pioneers to the early astronauts who laid the groundwork for the lunar landings. “We’re some of the first astronauts just to get shot up for a couple of hours and come back down to earth,” he says.

The Emotional Impact of BCIs

Speak to BCI early adopters and a pattern emerges: The biggest benefits are often more emotional than practical. Using a robotic arm to feed oneself or control a computer is clearly useful, but many pioneers say the most meaningful moments are the ones the experiment wasn’t even trying to produce. Beggin counts shaking his parents’ hands for the first time since his injury and stroking his pet dachshund as among his favorite moments. “That stuff is absolutely incredible,” he says.

Neuralink participant Alex Conley, who broke his neck in a car accident in 2021, uses his implant to control both a robotic arm and computers, enabling him to open doors, feed himself, and handle a smartphone. But he says the biggest boost has come from using computer-aided design software.

A former mechanic, Conley began using the software within days of receiving his implant to design parts that could be fabricated on a 3D printer. He has designed everything from replacement parts for his uncle’s power tools to bumpers for his brother-in-law’s truck. “I was a very big problem solver before my accident, I was able to fix people’s things,” he says. “This gives me that same little burst of joy.”

BCI user Nathan Copeland used a robotic arm to get a fist bump from then-President Barack Obama in 2016. Jim Watson/AFP/Getty Images

The outside world often underestimates those little wins, says Nathan Copeland, who holds the record for the longest functional brain implant. After breaking his neck in a car accident in 2004, he joined a University of Pittsburgh BCI trial in 2015 and has since used the device to control both computers and a robotic arm.

After he uploaded a video to Reddit of himself playing Final Fantasy XIV, one commenter criticized him for not using his device for more practical tasks. Copeland says people don’t understand that those lighthearted activities also matter. “A lot of tasks that people think are mundane or frivolous are probably the tasks that have the most impact on someone that can’t do them,” he says. “Agency and freedom of expression, I think, are the things that impact a person’s life the most.”

Nathan Copeland plays Final Fantasy XIV using his brain implant to control the game character.

When Brain Implants Become Life-Changing

This perspective resonates with Neuralink’s first user, Noland Arbaugh—paralyzed from the neck down after a swimming accident in 2016. After receiving his implant in January 2024, he was able to control a cursor within minutes of the device being switched on. A few days later, the engineers let him play the video game Civilization VI, and the technology’s potential suddenly felt real. “I played it for 8 hours or 12 hours straight,” he says. “It made me feel so independent and so free.”

Before receiving his Neuralink implant, Noland Arbaugh used mouth-operated devices to control a computer. He says the BCI is more reliable and enables him to do many more things on his own. Rebecca Noble/The New York Times/Redux

But the technology is also providing more practical benefits. Before his implant, Arbaugh relied on a mouth-held typing stick and a mouth-controlled joystick called a quadstick, which uses sip-or-puff sensors to issue commands. But the fiddliness of this equipment required constant caregiver support. The Neuralink implant has dramatically increased the number of things he can do independently. He says he finds great value in not needing his family “to come in and help me 100 times a day.”

For Casey Harrell, the technology has been even more transformative. Diagnosed with ALS in 2020, the climate activist had just welcomed a baby daughter and was in the midst of a major campaign, pressuring a financial firm to divest from companies that had poor environmental records.

Casey Harrell was able to communicate again within 30 minutes of his BCI being switched on. The device translates his neural signals quickly enough for him to hold conversations. Ian Bates/The New York Times/Redux

“Every morning we’d wake up and there’d be a new thing he couldn’t do, a new part of his body that didn’t work,” says his wife, Levana Saxon. Most alarming was his rapid loss of speech, which, among other things, left him unable to indicate when he was in pain. Then a relative alerted him to a clinical trial at the University of California, Davis, using BCIs to restore speech. He immediately signed up.

The device, implanted in July 2023, records from the brain region that controls muscles involved in talking and translates these signals into instructions for a voice synthesizer. Within 30 minutes of it being switched on, Harrell could communicate again. “I was absolutely overwhelmed with the thought of how this would impact my life and allow me to talk to my family and friends and better interact with my daughter,” he says. “It just was so overwhelming that I began to cry.”

While earlier assistive technology limited him to short, direct commands, Harrell says the BCI is fast enough that he can hold a proper conversation, and he’s been able to resume work part-time.

What’s Holding BCI Technology Back?

BCI technology still has limits. Most trial participants using Blackrock Neurotech implants can operate their devices only in the lab because the systems rely on wired connections and racks of computer hardware. Some users, including Copeland and Harrell, have had the equipment installed at home, but they still can’t leave the house with it. “That would be a big unlock if I was able to do so,” says Harrell.

The academic nature of many trials creates additional constraints. Pressure to publish and secure funding pushes researchers to demonstrate peak performance on narrow tasks rather than build more versatile and reliable systems, says Mariska Vansteensel, who runs BCI studies at the University Medical Center Utrecht in the Netherlands. She says that investigating the technology’s limits or repeating an experiment in new patients is “less rewarded in terms of funding.”

In a clinical trial, Scott Imbrie uses a BCI to control a robotic arm, using signals from his motor cortex to make it move a block. University of Chicago

One of Imbrie’s biggest frustrations is the rapid turnover in experiments. Just as he begins to get proficient at one task, he’s asked to switch to the next task. Study designs also mean that much of the users’ time is spent on mundane tasks required to fine-tune the system.

Perhaps the biggest issue is that trials are often time-limited. That’s partly because scar tissue from the body’s immune response to the implant can gradually degrade signal quality. But constraints on funding and researcher availability can also make it impossible for users to keep using their BCIs after their trials end, even when the technology is still functional.

Ian Burkhart’s BCI enables him to grasp objects, pour from a bottle, and swipe a credit card.

Burkhart has firsthand experience. His trial was extended, but the implant was eventually removed after he got an infection. He always knew the trial would end, but it was nonetheless challenging. “It was a little bit of a tease where I got to see the capability of the restoration of function,” he says. “Now I’m just back to where I was.”

The Push to Commercialize BCIs

Progress is being made in transitioning the technology from experimental research devices to fully fledged medical products that could help users in their everyday lives. Most academic BCI research has relied on Blackrock Neurotech’s Utah Arrays, which typically feature 96 needlelike electrodes that penetrate the brain’s surface. The implant is connected to a skull-mounted pedestal that’s wired to external hardware. But some of the newer devices are sleeker and less invasive.

Neuralink’s implant houses its electronics and rechargeable battery in a coin-size unit connected to flexible electrode threads inserted into the brain by a robotic “sewing machine.” The implant, which is roughly the size of a quarter or a euro, is mounted in a hole cut into the skull and charges and transfers data wirelessly. Synchron takes a different approach, threading a stent-like implant through blood vessels into the motor cortex. This “stentrode” connects by wire to a unit in the chest that powers the implant and transmits data wirelessly.

Rodney Gorham can use his Synchron implant to control not just a computer, but also smart devices in his home like an air conditioner, fan, and smart speaker. Rodney Decker

Neuralink’s decoder runs on a laptop, while Synchron deploys a smartphone-size signal processing unit as a wireless bridge to the user’s devices, which allows them to use their implants at home and on the move. The companies have also developed adaptive decoders that use machine learning to adjust to neural drift on the fly, reducing the need for recalibration.

Making these devices truly user-friendly will require technology that can interpret user context, says Kurt Haggstrom, Synchron’s chief commercial officer—including mood, attention levels, and environmental factors like background noise and location. This approach will require AI that analyzes neural signals alongside other data streams such as audio and visual input.

Last year, Synchron took a first step by pairing its implant with an Apple Vision Pro headset. When trial participant Rodney Gorham looked at devices such as a fan, a smart speaker, and an air conditioner, the headset overlaid a menu that enabled him to adjust the device’s settings using his implant.

Rodney Gorham uses his Synchron implant to turn on music, feed his dog, and more. Synchron BCI

Another way to reduce cognitive load is to detect higher-order signals of intent in neural data rather than low-level motor commands, says Florian Solzbacher, cofounder and chief scientific officer of Blackrock Neurotech. For instance, rather than manually navigating to an email app and typing, the user could simply think about sending an email and the system would then open it with content already prepopulated, he says.

Durability may prove a thornier problem to solve, UChicago’s Downey says. Current implants last around a decade—well short of a lifelong solution. And with limited real estate in the brain, replacement is only possible once or twice, he says.

Rapid technological progress also raises difficult decisions about whether to get a BCI implant now or wait for a more advanced device. This was a major concern for Gorham’s wife, Caroline. “I was hesitant. I didn’t want him to go on the trial but maybe a future one,” she says. “It was my fear of missing out on future upgrades.”

Will Brain Implants Ever Become Consumer Tech?

Some executives have raised the prospect of BCIs eventually becoming consumer devices. Neuralink founder Elon Musk has been particularly vocal, suggesting that the company’s implants could replace smartphones, let people save and replay memories, or even achieve “symbiosis” with AI.

This kind of talk inspires mixed feelings in users. The hype brings visibility and funding, says Beggin, but could divert attention from medical users’ needs. Copeland worries that consumer branding could strip the devices of insurance coverage and that rising demand may make it harder to access qualified surgeons.

Noland Arbaugh, the first recipient of Neuralink’s BCI, says that using the implant to control a computer made him feel independent and free. Steve Craft/Guardian/eyevine/Redux

There are also concerns about how data collected by BCI companies will be handled if the devices go mainstream. As a trial participant, Arbaugh says he’s comfortable signing away his data rights to advance the technology, but he thinks stronger legal protections will be needed in the future. “Does that data still belong to Neuralink? Does it belong to each person? And can that data be sold?” he asks.

Blackrock’s Solzbacher says the company remains focused on the medical applications of the technology. But he also believes it is building a “universal interface to any kind of a computerized system” that may have broader applications in the future. And he says the company owes it to users not to limit them to a bare-bones assistive technology. “Why would somebody who’s got a medical condition want to get less than something that somebody who’s able-bodied would possibly also take?” says Solzbacher.

The ever-optimistic Imbrie heartily agrees. Medical devices are invariably expensive, he says, but targeting consumer applications could push companies to keep devices simple and affordable while continuing to add features. “I truly believe that making it a consumer-available product will just enhance the product’s capabilities for the medical field,” he says.

Imbrie is on a mission to refocus the conversation around BCIs on the positives. While concerns about risks are valid, he worries that the alarming language often used to describe brain implants discourages people from volunteering for trials that could help them.

“I remember laying there in the bed and not being able to move,” he says, “and it was really dehumanizing having to ask someone to do everything for you. As humans, we want to be independent.”

Reference: https://ift.tt/lpzxdrY
