Friday, January 31, 2025

William Ratcliff, Former IEEE Region 3 Director, Dies at 80


William Ratcliff
Former IEEE Region 3 director
Life senior member, 80; died 20 June

Ratcliff was the 2008–2009 director of IEEE Region 3 (Southeastern United States).

An active IEEE volunteer, he led efforts to change the IEEE Regional Activities board to the IEEE Member and Geographic Activities board.

He also helped develop and launch the IEEE MOVE (Mobile Outreach using Volunteer Engagement) program. The three vehicles in the IEEE-USA initiative provide U.S. communities with power and communications capabilities in areas affected by widespread outages due to natural disasters.

Ratcliff began his career in 1965 as an electrical engineer at Public Service Indiana, an electric utility based in Indianapolis. There he helped design bulk power systems and developed engineering software. He left in 1985 to join manufacturer Gulfstream Aerospace, in Savannah, Ga., where he was an engineering manager until 1994.

He earned a bachelor’s degree in electrical engineering from Purdue University, in West Lafayette, Ind.

Lembit Salasoo
GE senior research scientist
Life member, 68; died 17 August

Salasoo was a scientist for 36 years at the General Electric Global Research Center, in Niskayuna, N.Y.

He earned two bachelor’s degrees, one in computer science in 1976 and the other in electrical engineering in 1978, both from the University of Sydney.

He joined the Electricity Commission of New South Wales, an Australian utility, as a power engineer.

In 1982 he moved to the United States after being accepted into Rensselaer Polytechnic Institute, in New York. He earned a master’s degree in engineering in 1983 and a Ph.D. in electric power engineering in 1986.

After graduating, he joined GE Research’s superconducting magnet group, where he focused initially on researching conduction-cooled MRI magnets.

He later worked on computed tomography equipment, including the Gemini tube used in CT scanners.

He and his team developed a tool to analyze secondary electron emission heat transfer in the tubes. For their work, they received GE’s 1998 Dushman Award, which recognizes contributions to the company.

In the early 2000s, Salasoo shifted his focus to developing technology for clean energy transportation—namely for hybrid-electric buses, locomotives, and mine trucks. He was part of the research team that conducted a proof-of-concept demonstration of a hybrid locomotive at Union Station in Los Angeles as part of GE’s Ecomagination initiative, a clean-energy R&D program.

His area of research changed again in the 2010s to developing financial systems for GE’s Applied Statistics Lab. His work involved making GE Capital, the company’s financial services subsidiary, compliant as a systemically important financial institution. Should a SIFI fail, it could trigger a financial crisis, so it must adhere to strict regulations. For his work, he received the 2015 Dushman Award.

From 2015 to 2020 he developed defect detection models at GE used in metal additive manufacturing. In 2023 he led GE’s climate research team in developing technology that predicts and mitigates the formation of long-lasting cirrus clouds, commonly known as contrails, produced by aircraft emissions. Under his leadership, the team won a grant from the Advanced Research Projects Agency–Energy.

Karl Kay Womack
Computer engineer
Life Fellow, 90; died 10 July

Womack spent his career working on early computers at IBM in New York City. He earned a master’s degree in electrical engineering from Syracuse University, in New York.

He was an avid science fiction and fantasy reader, according to his obituary.

Thomas M. Kurihara
Chair of IEEE Standards Association working groups
Life member, 89; died 24 May

Kurihara was an active volunteer for the IEEE Standards Association. He was chair of the working group that developed the IEEE 1512 series of standards for incident management message sets used by emergency management centers. He also chaired the IEEE 1609 working group, which developed standards for next-generation V2X (vehicle-to-everything) communications.

A member of the IEEE Vehicular Technology Society, he chaired its intelligent transportation systems standards committee from 2017 to 2022.

After graduating with a bachelor’s degree in 1957 from Stanford, he joined the U.S. Navy. By the time his active duty ended in 1969, he had attained the rank of lieutenant commander.

He then worked as an engineer for the U.S. government and in private industry before becoming a consultant.

Kurihara and his family were sent to Japanese-American internment camps during World War II. After the war ended, they resettled in St. Paul. He was a lifetime supporter of the Twin Cities chapter of the Japanese American Citizens League, a national organization that advocates for civil rights and seeks to preserve the heritage of Asian Americans. He was a member of the St. Paul–Nagasaki Sister City Committee, which promotes beneficial relationships between citizens of the two cities and encourages peace between the United States and Japan.

In honor of his parents, in 2010 Kurihara established the Earl K. and Ruth N. Tanbara Fund for Japanese American History at the Minnesota Historical Society. The money is used to document and preserve the group’s history, particularly in Minnesota.

Robert A. Reilly
Former IEEE Division VI director
Senior member, 76; died 21 May

Reilly served as the 2015–2016 director of IEEE Division VI. He was a former president of the IEEE Education Society and a member of numerous IEEE boards and committees.

He enlisted in the U.S. Army in 1965 and served as a medic in Japan for five years. After returning to the United States in 1970 as a major in the Army Reserve, he enrolled at the University of Massachusetts in Amherst. He received a bachelor’s degree in health and physical education in 1974. Two years later he earned a master’s degree in education from Springfield College, in Massachusetts. He later returned to the University of Massachusetts and in 1996 received a Ph.D. in education.

Reilly began his career in 1972 as a physical education teacher at St. Matthew’s Parish School in Indian Orchard, Mass. After two years there, he left to teach social studies, math, and science at Our Lady of the Sacred Heart School in Springfield. He worked at the school, which closed in 2006, for three years.

From 1979 to 1982 he was an instructor at North Adams State College (now the Massachusetts College of Liberal Arts), training educators.

In 1985 he joined Lanesborough Elementary School as a computer teacher, and he taught there until he retired in 2011.

In 1992 he founded and served as director of K12 Net, an online communication network for teachers that predated widespread Internet access. From 1995 to 2001 he was director of EdNet@UMass, a Web-based professional development network at the University of Massachusetts’s College of Education.

Reilly was a visiting scientist in the early 2000s at MIT, where he researched computer-based applications and cognitive learning theories.

He was a member of the American Society for Engineering Education and served as the 2009–2010 chair of its electrical and computer engineering division. He was president of the National Education Association’s Lanesborough chapter three times.

He received several IEEE awards including the 2010 IEEE Sayle Award for Achievement in Education from the IEEE Education Society and the 2006 Wilson Transnational Award from IEEE Member and Geographic Activities.

Ron B. Schroer
Aerospace engineer
Life senior member, 92; died 9 May

Schroer was an aerospace engineer at Martin Marietta (now part of Lockheed Martin) in Denver for more than 30 years.

After receiving bachelor’s degrees in chemistry and physical science from the University of Wisconsin in La Crosse in 1953, he enlisted in the U.S. Air Force. After his active duty ended in 1957, he earned a master’s degree in instrumentation engineering from the University of Michigan in Detroit and an MBA from the University of Colorado in Denver.

During his career at Martin Marietta, he worked on the Titan missile program, the NASA Space Shuttle, and a number of Federal Aviation Administration air traffic control systems.

An active IEEE volunteer, he was editor in chief of IEEE Aerospace and Electronic Systems Magazine and served on the Aerospace and Electronic Systems Society’s board of governors.

Reference: https://ift.tt/nivAKW4

Video Friday: Aibo Foster Parents




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES

Enjoy today’s videos!

This video about ‘foster’ Aibos helping kids at a children’s hospital is well worth turning on auto-translated subtitles for.

[ Aibo Foster Program ]

Hello everyone, let me introduce myself again. I am Unitree H1 “Fuxi”. I am now a comedian at the Spring Festival Gala, hoping to bring joy to everyone. Let’s push boundaries every day and shape the future together.

[ Unitree ]

Happy Chinese New Year from PNDbotics!

[ PNDbotics ]

In celebration of the upcoming Year of the Snake, TRON 1 swishes into three little lions, eager to spread hope, courage, and strength to everyone in 2025. Wishing you a Happy Chinese New Year and all the best, TRON TRON TRON!

[ LimX Dynamics ]

Designing planners and controllers for contact-rich manipulation is extremely challenging as contact violates the smoothness conditions that many gradient-based controller synthesis tools assume. We introduce natural baselines for leveraging contact smoothing to compute (a) open-loop plans robust to uncertain conditions and/or dynamics, and (b) feedback gains to stabilize around open-loop plans.

Mr. Bucket is my favorite.

[ Mitsubishi Electric Research Laboratories ]

Thanks, Yuki!

What do you get when you put three aliens in a robotaxi? The first-ever Zoox commercial! We hope you have as much fun watching it as we had creating it and can’t wait for you to experience your first ride in the not-too-distant future.

[ Zoox ]

The Humanoids Summit at the Computer History Museum in December was successful enough (either because of or in spite of my active participation) that it’s not only happening again in 2025, there’s also going to be a spring version of the conference in London in May!

[ Humanoids Summit ]

I’m not sure it’ll ever be practical at scale, but I do really like JSK’s musculoskeletal humanoid work.

[ Paper ]

In November 2024, as part of the CRS-31 mission, flight controllers remotely maneuvered Canadarm2 and Dextre to extract a payload from the SpaceX Dragon cargo ship’s trunk and install it on the International Space Station. This animation was developed in preparation for the operation and shows just how complex robotic tasks can be.

[ Canadian Space Agency ]

Staci Americas, a third-party logistics provider, addressed its inventory challenges by implementing the Corvus One™ Autonomous Inventory Management System in its Georgia and New Jersey facilities. The system uses autonomous drones for nightly, lights-out inventory scans, identifying discrepancies and improving workflow efficiency.

[ Corvus Robotics ]

Thanks, Joan!

I would have said that this controller was too small to be manipulated with a pinch grasp. I would be wrong.

[ Pollen ]

How does NASA plan to use resources on the surface of the Moon? One method is the ISRU Pilot Excavator, or IPEx! Designed by Kennedy Space Center’s Swamp Works team, the primary goal of IPEx is to dig up lunar soil, known as regolith, and transport it across the Moon’s surface.

[ NASA ]

The TBS Mojito is an advanced forward-swept FPV flying wing platform that delivers unmatched efficiency and flight endurance. By focusing relentlessly on minimizing drag, the wing reaches speeds upwards of 200 km/h (125 mph), while cruising at 90-120 km/h (60-75 mph) with minimal power consumption.

[ Team BlackSheep ]

At Zoox, safety is more than a priority—it’s foundational to our mission and one of the core reasons we exist. Our System Design & Mission Assurance (SDMA) team is responsible for building the framework for safe autonomous driving. Our Co-Founder and CTO, Jesse Levinson, and our Senior Director of SDMA, Qi Hommes, hosted a LinkedIn Live to provide an insider’s overview of the teams responsible for developing the metrics that ensure our technology is safe for deployment on public roads.

[ Zoox ]

Reference: https://ift.tt/JtR2KNx

Dell risks employee retention by forcing all teams back into offices full-time


Dell is calling much of its workforce back into the office five days a week starting on March 3. The technology giant is framing the mandate as a business strategy, but there’s reason to believe the policy may drive employee turnover.

Business Insider detailed an internal memo today from CEO and Chairman Michael Dell informing workers that if they live within an hour of a Dell office, they’ll have to go in five days a week.

“What we’re finding is that for all the technology in the world, nothing is faster than the speed of human interaction,” Dell wrote, per Business Insider. “A thirty-second conversation can replace an email back-and-forth that goes on for hours or even days.”


Reference: https://ift.tt/a4Qkdq1

Thursday, January 30, 2025

AIs and Robots Should Sound Robotic




Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can now clone someone’s specific voice.

This technology will replace humans in many areas. Automated customer support will save money by cutting staffing at call centers. AI agents will make calls on our behalf, conversing with others in natural language. All of that is happening, and will be commonplace soon.

But there is something fundamentally different about talking with a bot as opposed to a person. A person can be a friend. An AI cannot be a friend, despite how people might treat it or react to it. AI is at best a tool, and at worst a means of manipulation. Humans need to know whether we’re talking with a living, breathing person or a robot with an agenda set by the person who controls it. That’s why robots should sound like robots.

You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity.

We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors’ voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technique to make AI speech that would otherwise be indistinguishable from a human’s sound robotic again.

A ring modulator has several advantages: It is computationally simple, can be applied in real time, does not affect the intelligibility of the voice, and—most importantly—is universally “robotic sounding” because of its historical usage for depicting robots.

Responsible AI companies that provide voice synthesis or AI voice assistants in any form should add a ring modulator of some standard frequency (say, between 30 and 80 Hz) and of a minimum amplitude (say, 20 percent). That’s it. People will catch on quickly.

Here are a few clips you can listen to that demonstrate what we’re suggesting. The first is an AI-generated “podcast” of this article made by Google’s NotebookLM, featuring two AI “hosts”; NotebookLM created the podcast script and audio given only the text of this article. The next two clips feature that same podcast with the AIs’ voices modulated, more and less subtly, by a ring modulator:

Raw audio sample generated by Google’s NotebookLM

Audio sample with added ring modulator (30 Hz-25%)

Audio sample with added ring modulator (30 Hz-40%)

We were able to generate the audio effect with a 50-line Python script generated by Anthropic’s Claude. Some of the most well-known robot voices were those of the Daleks from Doctor Who in the 1960s. Back then, robot voices were difficult to synthesize, so the audio was actually an actor’s voice run through a ring modulator. It was set to around 30 Hz, as in our example, with the modulation depth (amplitude) varied depending on how strong the robotic effect was meant to be. Our expectation is that the AI industry will test and converge on a good balance of such parameters and settings, and will use better tools than a 50-line Python script, but this highlights how simple the effect is to achieve.
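For readers who want to try it, here is a minimal NumPy sketch of the effect. It is not the authors’ 50-line script, and the dry/wet mixing formula below is one common interpretation of “modulation depth,” an assumption on our part; audio file I/O is left out:

```python
import numpy as np

def ring_modulate(samples, sample_rate, carrier_hz=30.0, depth=0.25):
    """Blend a ring-modulated copy of a voice signal into the original.

    `depth` is the modulation amplitude (0.25 for a "30 Hz-25%" setting).
    How the dry and modulated signals are mixed is an assumption; the
    article does not specify its script's exact formula.
    """
    t = np.arange(len(samples)) / sample_rate
    carrier = np.sin(2.0 * np.pi * carrier_hz * t)  # low-frequency sine carrier
    # Classic ring modulation multiplies signal and carrier; mixing at
    # `depth` keeps the voice intelligible while adding the robotic warble.
    return (1.0 - depth) * samples + depth * samples * carrier
```

At depth 0 the signal passes through unchanged, and the output never exceeds the input’s peak level, so the effect can be layered onto existing text-to-speech output without clipping.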

Of course there will also be nefarious uses of AI voices. Scams that use voice cloning have been getting easier every year, but they’ve been possible for many years with the right know-how. Just like we’re learning that we can no longer trust images and videos we see because they could easily have been AI-generated, we will all soon learn that someone who sounds like a family member urgently requesting money may just be a scammer using a voice-cloning tool.

We don’t expect scammers to follow our proposal: They’ll find a way no matter what. But that’s always true of security standards, and a rising tide lifts all boats. We think the bulk of the uses will be with popular voice APIs from major companies—and everyone should know that they’re talking with a robot.

Reference: https://ift.tt/gTemICF

Sony Kills Recordable Blu-Ray And Other Vintage Media




Physical media fans need not panic yet—you’ll still be able to buy new Blu-Ray movies for your collection. But for those who like to save copies of their own data onto the discs, the remaining options just became more limited: Sony announced last week that it’s ending all production of several recordable media formats—including Blu-Ray discs, MiniDiscs, and MiniDV cassettes—with no successor models.

“Considering the market environment and future growth potential of the market, we have decided to discontinue production,” a representative of Sony said in a brief statement to IEEE Spectrum.

Though availability is dwindling, most Blu-Ray discs are unaffected. The discs being discontinued are currently only available to consumers in Japan and some professional markets elsewhere, according to Sony. Many consumers in Japan use blank Blu-Ray discs to save TV programs, Sony separately told Gizmodo.

Sony, which prototyped the first Blu-Ray discs in 2000, has been selling commercial Blu-Ray products since 2006. Development of Blu-Ray was started by Philips and Sony in 1995, shortly after Toshiba’s DVD was crowned the winner of the battle to replace the VCR, notes engineer Kees Immink, whose coding techniques were instrumental in developing optical formats such as CDs, DVDs, and Blu-Ray discs. “Philips [and] Sony were so frustrated by that loss that they started a new disc format, using a blue laser,” Immink says.

Blu-Ray’s Short-Lived Media Dominance

The development took longer than expected, but when it was finally introduced a decade later, Blu-Ray was on its way to becoming the medium for distributing video, as DVD discs and VHS tapes had done in their heydays. In 2008, Spectrum covered the moment when Blu-Ray’s major competitor, HD-DVD, surrendered. But the timing was unfortunate, as the rise of streaming made it an empty victory. Still, Blu-Rays continue to have value as collector’s items for many film buffs who want high-quality recordings not subject to compression artifacts that can arise with streaming, not to mention those wary of losing access to movies due to the vagaries of streaming services’ licensing deals.

Sony’s recent announcement does, however, cement the death of the MiniDV cassette and the MiniDisc. MiniDV, a magnetic cassette one-fifth the size of a VHS tape, was once a popular digital video format. The MiniDisc, an erasable magneto-optical disc that can hold up to 80 minutes of digitized audio, still has a small following. The 64-millimeter (2.5-inch) discs, held in a plastic cartridge similar to a floppy disk’s, were developed in the mid-1980s as a replacement for analog cassette tapes. Sony finally released the product in 1992, and it was popular in Japan into the 2000s.

To record data onto optical storage like CDs and Blu-Rays, lasers etch microscopic pits into the surface of the disc to represent ones and zeros. Lasers are also used to record data onto MiniDiscs, but instead of making indentations, they work together with a magnetic field: the laser heats a spot on the disc until the material becomes susceptible to magnetization, and a magnetic field then sets the polarity of the heated area. During playback, the polarization of laser light reflected off each spot translates to a one or a zero.

When the technology behind media storage formats like the MiniDisc and Blu-Ray was first being developed, the engineers involved believed the technology would be used well into the future, says optics engineer Joseph Braat. His research at Philips with Immink served as the basis of the MiniDisc.

Despite that optimism, “the density of information in optical storage was limited from the very beginning,” Braat says. Even with the compact wavelengths of blue light, Blu-Ray soon hit a limit on how much data could be stored: dual-layer Blu-Ray discs can hold only 50 gigabytes per side. That amount of data will give you 50 hours of standard-definition streaming on Netflix, or about seven hours of 4K video content.
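Those viewing-time figures are just capacity divided by bitrate. A quick sketch, using ballpark data rates (roughly 1 GB per hour for standard definition and 7 GB per hour for 4K; these are assumptions for illustration, as streaming services’ actual bitrates vary):

```python
def hours_of_video(capacity_gb, gb_per_hour):
    """How many hours of video fit in a given capacity at a given data rate."""
    return capacity_gb / gb_per_hour

BLU_RAY_SIDE_GB = 50  # dual-layer Blu-Ray, per side

sd_hours = hours_of_video(BLU_RAY_SIDE_GB, 1.0)   # standard definition
uhd_hours = hours_of_video(BLU_RAY_SIDE_GB, 7.0)  # 4K
print(sd_hours, round(uhd_hours, 1))  # 50.0 7.1
```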

MiniDiscs still have a small, dedicated niche of enthusiasts, with active social media communities and in-person disc swaps. But since Sony stopped production of MiniDisc devices in 2013, the retro format has effectively been on technological hospice care, with the company only offering blank discs and repair services. Now, it seems, it’s officially over.

Reference: https://ift.tt/HeLRQVz

Wednesday, January 29, 2025

A Vision for a Decarbonized Future




This sponsored article is brought to you by NYU Tandon School of Engineering.

As the world grapples with the urgent need to transition to cleaner energy systems, a growing number of researchers are delving into the design and optimization of emerging technologies. At the forefront of this effort is Dharik Mallapragada, Assistant Professor of Chemical and Biomolecular Engineering at NYU Tandon. Mallapragada is dedicated to understanding how new energy technologies integrate into an evolving energy landscape, shedding light on the intricate interplay between innovation, scalability, and real-world implementation.

Mallapragada’s Sustainable Energy Transitions group develops mathematical modeling approaches to analyze low-carbon technologies and their energy-system integration under different policy and geographical contexts. The group’s research aims to create the knowledge and analytical tools necessary to support accelerated energy transitions both in developed economies like the United States and in the emerging-market and developing economies of the global south that are central to global climate mitigation efforts.

Bridging Research and Reality

“Our group focuses on designing and optimizing emerging energy technologies, ensuring they fit seamlessly into rapidly evolving energy systems,” Mallapragada says. His team uses sophisticated simulation and modeling tools to address a dual challenge: scaling scientific discoveries from the lab while adapting to the dynamic realities of modern energy grids.

“Energy systems are not static,” he emphasized. “What might be an ideal design target today could shift tomorrow. Our goal is to provide stakeholders—whether policymakers, venture capitalists, or industry leaders—with actionable insights that guide both research and policy development.”

Dharik Mallapragada is an Assistant Professor of Chemical and Biomolecular Engineering at NYU Tandon.

Mallapragada’s research often uses case studies to illustrate the challenges of integrating new technologies. One prominent example is hydrogen production via water electrolysis—a process that promises low-carbon hydrogen but comes with a unique set of hurdles.

“For electrolysis to produce low-carbon hydrogen, the electricity used must be clean,” he explained. “This raises questions about the demand for clean electricity and its impact on grid decarbonization. Does this new demand accelerate or hinder our ability to decarbonize the grid?”

Additionally, challenges abound at the equipment level. Electrolyzers that can operate flexibly enough to use intermittent renewables like wind and solar often rely on precious metals such as iridium, which are not only expensive but also currently produced in small quantities. Scaling these systems to meet global decarbonization goals could require substantially expanding material supply chains.

“We examine the supply chains of new processes to evaluate how precious metal usage and other performance parameters affect prospects for scaling in the coming decades,” Mallapragada said. “This analysis translates into tangible targets for researchers, guiding the development of alternative technologies that balance efficiency, scalability, and resource availability.”

Unlike colleagues who develop new catalysts or materials, Mallapragada focuses on decision-support frameworks that bridge laboratory innovation and large-scale implementation. “Our modeling helps identify early-stage constraints, whether they stem from material supply chains or production costs, that could hinder scalability,” he said.

For instance, if a new catalyst performs well but relies on rare materials, his team evaluates its viability from both cost and sustainability perspectives. This approach informs researchers about where to direct their efforts—be it improving selectivity, reducing energy consumption, or minimizing resource dependency.

Decarbonizing aviation

Aviation presents a particularly challenging sector for decarbonization due to its unique energy demands and stringent constraints on weight and power. The energy required for takeoff, coupled with the need for long-distance flight capabilities, demands a highly energy-dense fuel that minimizes volume and weight. Currently, this is achieved using gas turbines powered by traditional aviation liquid fuels.

“The energy required for takeoff sets a minimum power requirement,” he noted, emphasizing the technical hurdles of designing propulsion systems that meet these demands while reducing carbon emissions.

Mallapragada highlights two primary decarbonization strategies: the use of renewable liquid fuels, such as those derived from biomass, and electrification, which can be implemented through battery-powered systems or hydrogen fuel. While electrification has garnered significant interest, it remains in its infancy for aviation applications. Hydrogen, with its high energy per mass, holds promise as a cleaner alternative. However, substantial challenges exist in both the storage of hydrogen and the development of the necessary propulsion technologies.

Mallapragada’s research examined the specific power required to achieve zero payload reduction and the payload reduction required to meet variable target fuel-cell specific power, among other factors.

Hydrogen stands out due to its energy density by mass, making it an attractive option for weight-sensitive applications like aviation. However, storing hydrogen efficiently on an aircraft requires either liquefaction, which demands extreme cooling to -253°C, or high-pressure containment, which necessitates robust and heavy storage systems. These storage challenges, coupled with the need for advanced fuel cells with high specific power densities, pose significant barriers to scaling hydrogen-powered aviation.

Mallapragada’s research on hydrogen use for aviation focused on the performance requirements of on-board storage and fuel cell systems for flights of 1000 nmi or less (e.g. New York to Chicago), which represent a smaller but meaningful segment of the aviation industry. The research identified the need for advances in hydrogen storage systems and fuel cells to ensure payload capacities remain unaffected. Current technologies for these systems would necessitate payload reductions, leading to more frequent flights and increased costs.

“Energy systems are not static. What might be an ideal design target today could shift tomorrow. Our goal is to provide stakeholders—whether policymakers, venture capitalists, or industry leaders—with actionable insights that guide both research and policy development.” —Dharik Mallapragada, NYU Tandon

A pivotal consideration in adopting hydrogen for aviation is the upstream impact on hydrogen production. The incremental demand from regional aviation could significantly increase the total hydrogen required in a decarbonized economy. Producing this hydrogen, particularly through electrolysis powered by renewable energy, would place additional demands on energy grids and necessitate further infrastructure expansion.

Mallapragada’s analysis explores how this demand interacts with broader hydrogen adoption in other sectors, considering the need for carbon capture technologies and the implications for the overall cost of hydrogen production. This systemic perspective underscores the complexity of integrating hydrogen into the aviation sector while maintaining broader decarbonization goals.

Mallapragada’s work underscores the importance of collaboration across disciplines and sectors. From identifying technological bottlenecks to shaping policy incentives, his team’s research serves as a critical bridge between scientific discovery and societal transformation.

As the global energy system evolves, researchers like Mallapragada are illuminating the path forward—helping ensure that innovation is not only possible but practical.

Reference: https://ift.tt/WO8fTqb

RF Safety Lab Ensures Wireless Gadgets Meet Safety Standards




Wireless technology, such as cellphones, fitness trackers, and medical devices, has become ubiquitous. Before a wireless device is manufactured and sold, its technology is tested by compliance engineering laboratories to ensure it adheres to technical standards established by organizations including the IEEE Standards Association.

In the United States, the Federal Communications Commission oversees regulations for all wireless devices, while the Food and Drug Administration focuses on medical technology. The two organizations often work together.

RF Safety Laboratory

Founder: Steve Liu
Founded: 2023
Headquarters: Catonsville, Md.
Employees: 8


To help tech companies get their wireless devices certified, IEEE Member Steve Liu founded RF Safety Laboratory in 2023. The startup, based in Catonsville, Md., specializes in specific absorption rate (SAR) testing, nerve stimulation testing, and measuring power density levels of charging docks, surgical tools, tablets, and other products. SAR testing measures the amount of RF energy absorbed by the human body from a wireless device. Nerve stimulation and power density tests measure the instantaneous effects on a person’s nervous system.

Liu says many of the lab’s clients, who are based in Japan, South Korea, and the United States, want to sell their products in Canada and the United States, which have the strictest, most difficult regulations to navigate.

The lab specializes in ensuring wireless devices are certifiably safe, providing consumers with peace of mind.

“Not everyone knows SAR levels are publicly available on the FCC website,” Liu says.



Many product developers don’t realize that when integrating Wi-Fi or 5G into a device, they need to include shielding against radiation in their designs to protect humans from excess radiation, Liu says.

“Startups and new designers of wireless technology in these countries don’t realize they can’t just make a device and then expect to be able to export it to North America,” he says. “More often than not, these companies only find this out after they have already begun to market their products to consumers, causing them to be held by customs and border protection.”

If a product is found to be noncompliant with regulations, the company can face significant fines and legal actions, and the device can be recalled.

From testing engineer to startup founder

Liu is no stranger to technical standards and regulations. After graduating with a bachelor’s degree in biomedical and electrical engineering in 2000 from Johns Hopkins University, in Baltimore, he joined PCTest Engineering Laboratory, in Columbia, Md. The company, now part of Element, was created to provide regulatory and testing services to the wireless, electronics, and telecommunications industries.

As a testing engineer, Liu assessed products’ electromagnetic compatibility, RF exposure levels, and hearing-aid compatibility. He also served as a member of the Telecommunication Certification Body, which assesses equipment. He worked his way up the career ladder and, in 2018, was promoted to vice president of engineering, overseeing global compliance.


“Being an IEEE member has definitely paid off for me.”


“I’ve certified more than 4,000 products for companies including Apple, Motorola, Samsung, Sony, and Rivian,” Liu says. “It was a really awesome experience to be engaged with such large companies and a vast portfolio of products, although we were a small-scale operation just doing regulatory electromagnetic compatibility and RF safety-type testing.”

In 2020 the company was acquired by Element, a London-based provider of critical-materials testing, product qualification testing, inspection, and certification services. New bosses meant a new culture and a different way of working.

“Although the new management was supportive and it was a good experience, I felt that I couldn’t spend the time and energy I wanted to on solving problems, helping customers, or mentoring my employees,” Liu says. After 23 years, he left in 2023 to start his own company, using some of his retirement savings. Six of his colleagues joined his venture.



How wireless tech gets assessed

To get certified, new wireless products are tested against established standards such as IEEE 802.11 to ensure they meet requirements for performance and security. When a company develops a device, it sends the technology to an authorized test lab, where technicians determine if the product is satisfactory.

If the device is certified, the company can move forward with manufacturing and distribution. If it doesn’t pass muster, the company can stop working on it or fix the problem and resubmit the device for certification.

When products fail testing, companies retain organizations such as RF Safety Lab to figure out what has to be done to be certified, Liu says.

Although the startup works with large companies including Belkin, Google, and Sony, he says, most of its clients are new designers and small businesses.

“A lot of large, tier-one companies already have the experience necessary to fix issues found during regulatory testing, so they don’t encounter serious roadblocks unless they’re developing a novel technology,” he says. “But we are seeing that a lot of new designers are not aware of the regulations, so they come to us for help.

“Many times they just need to be educated on the regulations and told what documents they need to supply to get their device certified,” he says, adding that “there are cases where we need to run our own tests and assessments to determine how the product can comply with regulations.”

After the lab evaluates the product, it compiles a report with its findings including recommendations on how to remedy any failures. Corrective measures can include changing power levels, redesigning an antenna, and updating software parameters to influence an antenna’s behavior.

Liu says that one of his most challenging projects was a wireless endoscope developed by a group of doctors from Illinois. An endoscope is a thin, flexible instrument with a camera on the end, used to look inside the human body. Although small, it is connected via a wire to a monitor, which the surgeon can view during the procedure. And operating rooms have numerous other machines with wires in addition to endoscopes.

The Illinois doctors wanted to decrease the number of wires in operating rooms, so they developed a wireless endoscope that uses a high-frequency 60-gigahertz RF transmitter, which displays high-resolution images on a monitor.

“They built this interesting device but didn’t know how to get it certified and the impact that testing could have on the final design,” Liu says.

“Like many companies, the designers were resourceful and purchased the transmitter parts at a low cost from a manufacturer overseas,” he says. “For a regulatory certification, you have to be able to configure devices for certain testing purposes, so the team and I had to do a deep dive to find instructions on how the transmitter worked and how it was made.” He says it was particularly challenging because the transmitter wasn’t the only off-the-shelf part the designers used.

Getting the device ready for certification required a lot of collaboration among the designers, the many part manufacturers, and regulatory agencies.

The process took several months to complete because of the product’s high frequency, which required preapproval guidance from the FCC.

“It was a very fun experience and a rewarding project for the team,” Liu says. “We had a good time seeing that project through.”

RF Safety Lab is expanding its reach and helping companies get their devices compliance-ready to be sold in the European Union. It also is working on getting products to meet global cybersecurity regulations.

The benefits of IEEE membership


After leaving his job at PCTest, Liu joined IEEE in 2023 to stay involved with developing technical standards in wireless RF exposure, RF safety, and health. The IEEE C95.1-2019 standard, for example, establishes safety levels for human exposure to RF electromagnetic fields. IEEE/IEC 62209-1528-2020 aims to provide timely measurement procedures and techniques for accurate SAR testing.

“Staying up to date on regulations in the United States and other countries is important to me,” Liu says, “so joining IEEE was a natural step to take.

“People around the world recognize IEEE, so staying connected with the organization and the standards committees it participates in benefits me—which then benefits the clients I’m involved with.”

Liu has spoken at IEEE Product Safety Engineering Society events including the Symposium on Product Compliance Engineering in October in Bloomington, Minn., where he explained how to achieve FCC compliance.

“Being an IEEE member has definitely paid off for me,” Liu says.

Reference: https://ift.tt/xBeTKsJ

Build a Perfect Cryptographic Machine




Like many nerds, I have an interest in cryptography rooted in the wartime exploits of codebreaker and Ur-computer scientist Alan Turing. So I’ve followed with interest IEEE Spectrum’s reporting on the burgeoning field of postquantum cryptography. These techniques are designed to frustrate even the immense potential of quantum computing, a technology light-years beyond the electromechanical bombe that Turing used to break the German Enigma cipher. I’m sure those new cryptographic methods will work just fine. But there is one encryption scheme, known even in Turing’s time, that is mathematically secure against not just quantum computers but any computer that will ever be invented: the one-time pad.

A one-time pad is a series of random letters or numbers—typically 250 digits. The sender and receiver each have a copy of the pad, which is used for both encryption and decryption, following some simple but strict rules for pen and paper. It’s a cipher in which the key changes in an utterly unpredictable way after each character. Without predictability, there’s nothing for an attacking computer to get its teeth into.
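To make those rules concrete, here is a minimal Python sketch of a digit-pad cipher in the spirit of what the article describes; the letter-to-digits mapping (A = 00 through Z = 25) and the use of modulo-10 addition are illustrative choices, not the specific pen-and-paper scheme an agent would use:

```python
# A minimal one-time-pad sketch with a digit pad: the key changes
# unpredictably after every digit, so there is no pattern to attack.
import secrets

def make_pad(length=250):
    """One pad: a series of truly random digits (secrets draws OS entropy)."""
    return [secrets.randbelow(10) for _ in range(length)]

def to_digits(text):
    # Illustrative encoding: each letter becomes two digits, A=00 ... Z=25.
    return [d for ch in text.upper() if ch.isalpha()
              for d in divmod(ord(ch) - ord('A'), 10)]

def from_digits(digits):
    pairs = zip(digits[::2], digits[1::2])
    return ''.join(chr(ord('A') + 10 * tens + ones) for tens, ones in pairs)

def encrypt(plain_digits, pad):
    # Add the pad digit by digit, modulo 10 (pen-and-paper friendly).
    return [(p + k) % 10 for p, k in zip(plain_digits, pad)]

def decrypt(cipher_digits, pad):
    return [(c - k) % 10 for c, k in zip(cipher_digits, pad)]

pad = make_pad()
msg = to_digits("ATTACK AT DAWN")
ciphertext = encrypt(msg, pad)
assert from_digits(decrypt(ciphertext, pad)) == "ATTACKATDAWN"
```

Each 250-digit pad carries at most 125 letters under this encoding, which is why a real exchange needs a whole series of pads.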

However, even the most junior codebreaker in possession of two messages encrypted with the same pad would be able to strip off the encryption and read both. It’s therefore critical to destroy each pad after you’ve used it. And it’s a bad idea to store the pad on a thumb drive or something similar, because computers and storage devices have a habit of leaving residues of data around, even after the data has been officially deleted.
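Why reuse is fatal can be shown in a few lines. In this toy demonstration (an assumed additive digit cipher with modulo-10 addition), subtracting two ciphertexts made with the same pad cancels the key completely, leaving a quantity that depends only on the two plaintexts:

```python
# Two messages encrypted with the SAME pad: the attacker can compute
# c1 - c2 (mod 10), which equals m1 - m2 (mod 10) -- the key vanishes,
# and classical frequency analysis can take it from there.
import secrets

pad = [secrets.randbelow(10) for _ in range(8)]
m1 = [3, 1, 4, 1, 5, 9, 2, 6]   # first plaintext, as digits
m2 = [2, 7, 1, 8, 2, 8, 1, 8]   # second plaintext, same pad (the mistake)

c1 = [(m + k) % 10 for m, k in zip(m1, pad)]
c2 = [(m + k) % 10 for m, k in zip(m2, pad)]

# The attacker never sees the pad, yet this quantity is key-free:
diff = [(a - b) % 10 for a, b in zip(c1, c2)]
assert diff == [(a - b) % 10 for a, b in zip(m1, m2)]
```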

The one-time pad comes with some other significant limitations. The digits have to be truly random—the numbers generated by the pseudo-random algorithms typically used by computers won’t cut it. And because you can use a given pad only once, you need a whole bunch of them if you want to send more than a single message. Plus, the pads need to be physically printed and shared by hand—you can’t send them over a network.

An illustration of the major components of the Pad-O-Matic. The random-number generator uses a collection of 74HC-series logic chips [top right] to digitize electrical noise and present it as a random byte to an Arduino Uno Minima [top left]. The generator can produce roughly one byte every 200 microseconds, and the Uno converts this into a single digit and builds up a series of 50 pads with 250 digits each, which it sends to the printer [bottom]. James Provost

I decided to build a machine that makes dealing with those problems a little easier. My Pad-O-Matic is built around a CSN-A2 thermal receipt printer I’d bought on a whim a few years back. The printer is connected to the most transparent technology stack I could find: a tortured transistor, a few logic chips, and a microcontroller with about 200 lines of my code. This code does nothing more complicated than division, because if I’ve learned one thing about cryptography, it’s that unless you really know what you’re doing, trying to be a clever clogs is a recipe for failure. The Pad-O-Matic is completely stand-alone.

The thermal receipt printer in the Pad-O-Matic lets me print a whole series of pads. I still have to physically share the pads, but at least they’re in a compact roll. My correspondent and I can then tear off and destroy each pad after it’s been used.

Without predictability, there’s nothing for an attacking computer to get its teeth into.

I still needed a good source of randomness—some fundamentally unpredictable physical process to convert into equally unpredictable bits. Fortunately, that problem was already solved for me. I found a neat little battery-powered circuit from Make: magazine that relies on the electrical noise produced by forcing electrons the wrong way across a transistor’s base and emitter terminals while leaving the collector terminal unconnected. Make:’s generator is a simplified version of a circuit by Aaron Logue, but Make: fortunately has a copy of the original schematic. This uses 12 and 5 volts instead of the 18 and 5 volts used by Make:’s version, so I could use an old power supply I had that also provides enough extra current to drive the thermal printer. The original circuit also has two nice additional features for the cost of a few extra chips.

The first feature is a clean microcontroller interface. It sends one byte at a time in parallel, alerting the microcontroller every time a new byte is available. The alert is needed because the time it takes to generate a random byte varies slightly, thanks to the other nice feature: automatic debiasing, using four flip-flops and an XOR gate. Debiasing means that even if the electrical-noise generator tends toward, say, more 0s than 1s, the final output will be statistically balanced.

The Pad-O-Matic samples electrical noise at regular intervals to create a stream of bits. To prevent the final numbers from being biased toward those with many 0s or 1s, pairs of bits are compared. Only if they differ are they examined further, with the leading bit being passed along. Eight of these debiased bits are packed into a byte (here 10101111, or 175), which is then subjected to modular division to produce a random number between 0 and 9 (here 5). James Provost
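In software, the hardware debiasing step (often called von Neumann debiasing) looks something like this sketch; the 80/20 bias level and the sample count are arbitrary choices for illustration:

```python
# Von Neumann debiasing in software: examine raw bits in pairs, discard
# matching pairs (00 and 11), and keep the leading bit of each unmatched
# pair. Because 01 and 10 are equally likely even from a biased source,
# the surviving bits are statistically balanced.
import random

def debias(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:           # only unmatched pairs survive
            out.append(a)    # pass along the leading bit
    return out

# A heavily biased source: 1 appears only 20 percent of the time.
raw = [1 if random.random() < 0.2 else 0 for _ in range(100_000)]
clean = debias(raw)
ratio = sum(clean) / len(clean)   # hovers near 0.5 despite the bias
assert 0.45 < ratio < 0.55
```

The price of the balance is throughput: with this source only about a third of the bit pairs survive, which is why the hardware signals the microcontroller when each byte is actually ready.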

For my microcontroller, I finally got to use an Arduino Uno R4 Minima. Although this latest version of the beloved Uno came out about 18 months ago, I hadn’t found a project that needed it—until now. Its bigger memory—32 kilobytes of RAM versus 2 KB in the Rev3—is essential, because the Pad-O-Matic has to generate an entire series of pads—50 in my case—and hold it in memory. With 250 digits per pad, that requires over 12 KB. As the digits live only in RAM, there’s no risk of them leaving any trace of themselves behind.

The microcontroller produces digits from the incoming random bytes by first throwing away any byte with a value of 250 or more, so that the 250 values that remain divide evenly by 10. Then it performs modular division by 10 on each remaining byte, leaving digits in the range of 0 to 9.
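A sketch of that conversion in Python, using the operating system’s entropy source as a stand-in for the hardware generator; the rejection step is what keeps the modular division unbiased:

```python
# Byte-to-digit conversion with rejection sampling: bytes of 250 or more
# are discarded so the remaining range (0-249) maps exactly 25 bytes to
# each digit. Keeping 250-255 would bias the output toward digits 0-5.
import secrets

def byte_to_digit(b):
    if b >= 250:          # rejection step: avoids modulo bias
        return None
    return b % 10         # modular division yields a uniform digit

def random_digit():
    while True:
        d = byte_to_digit(secrets.randbits(8))  # stand-in for the hardware
        if d is not None:
            return d

digits = [random_digit() for _ in range(10_000)]
assert all(0 <= d <= 9 for d in digits)
```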

I chose 50 pads per series, even though I had the memory for more, because I actually have to print one series to keep and a copy to share, and then generate and print another series and its copy: The first series is for sending messages from me to my secret correspondent, and the second series is for them to send messages to me. This eliminates the risk of accidentally using the same pad when messages cross each other. A total of 100 pads just about uses up one roll of thermal paper.

I put the whole thing in a wooden enclosure, and presto! At the press of a button, the Pad-O-Matic whirs into life, spitting out perfect—and now marginally more convenient!—cryptographic security.

Reference: https://ift.tt/mwSqT2l

“Mr. Transistor’s” Most Challenging Moment




It says something about your career at a company that makes hundreds of trillions of transistors every day when your nickname is “Mr. Transistor.” That’s what colleagues often call Tahir Ghani, a senior fellow and the director of process pathfinding in Intel’s technology development group. Ghani’s career spans three decades at the company and has resulted in more than a thousand patent filings. He’s had a hand in every major change to the CMOS transistor during that time period.

As Intel heads toward yet another major change—the move from FinFETs to RibbonFETs (called nanosheet transistors, more generically)—IEEE Spectrum asked Ghani what’s been the riskiest change so far. In an era when the entire architecture of the device has morphed, his somewhat surprising answer was a change introduced back in 2008 that left the transistor looking—from the outside—pretty similar to how it did before.

3 Big Changes to the Transistor

Prior to this year’s introduction of RibbonFETs, there have been three major changes to the CMOS transistor. At the turn of the century, the devices looked pretty much like they always had, just ever smaller. Built into the plane of the silicon are a source and drain separated by the channel region. Atop this region is the gate stack—a thin layer of silicon oxide insulation topped by a thicker piece of polycrystalline silicon. Voltage at the gate (the polysilicon) causes a conductive channel to bridge the source and drain, allowing current to flow.

But as engineers continued to shrink this basic structure, producing a device that drove enough current through it—particularly for the half of devices that conducted positively-charged holes instead of electrons—became more difficult. The answer was to stretch the silicon crystal lattice somewhat, allowing charge to speed through faster. When Intel announced its strained-silicon plan back in 2002, this was done by adding a bit of silicon germanium to the source and drain, and letting the material’s larger crystal structure squeeze the silicon in the channel between them.

The thin layer of silicon dioxide insulation separating the gate from the channel was now just five atoms thick

In 2012, the FinFET arrived. This was the biggest structural change, essentially flipping the device’s channel region on its side so that it protrudes like a fin above the surface of the silicon. This was done to provide better control over the flow of current through the channel. By this point, the distance between the source and drain had been reduced so much that current would leak across even when the device was supposed to be off. The fin structure allowed chipmakers to drape the gate stack over the fin so that it surrounds the channel on three sides, which gives better control than the planar transistor’s single-sided gate.

But between strained silicon and the FinFET came Intel’s riskiest move, according to Ghani—high-k/metal gate.

Running Out of Atoms

“If I take the three big changes in transistors during that decade my personal feeling is that high-k/metal gate was the most risky of all,” Ghani told IEEE Spectrum at the IEEE International Electron Device Meeting in December. “When we went to high-k/metal gate, that is taking the heart of the MOS transistor and changing it.”

As Ghani and his colleagues put it in an article in IEEE Spectrum at the time: “The basic problem we had to overcome was that a few years ago we ran out of atoms.”

Keeping to Moore’s Law scaling in this era meant reducing the smallest parts of a transistor by a factor of 0.7 with each generation. But there was one part of the device that had already reached its limit. The thin layer of silicon dioxide insulation separating the gate from the channel, having been thinned down 10-fold since the middle of the 1990s, was now just five atoms thick.

Losing any more of the material was simply impossible, and worse, at five atoms the gate dielectric was barely doing its job. The dielectric is meant to allow voltage at the gate to project an electric field into the channel but at the same time keep charge from leaking between the gate and the channel.

“We initially wanted to do one change at a time,” recalls Ghani, starting with swapping the silicon dioxide for something that could be physically thicker but still project the electric field just as well. That something is termed a high-dielectric-constant, or high-k, dielectric. When Intel’s components research team looked at doing that, Ghani says, “they found that actually if you just do polysilicon with high-k, there is an interaction between the poly and high-k.” That interaction effectively pins the voltage at which the transistor turns on or off—the threshold voltage—at a worse value than if you’d left well enough alone.

“There was no way out except… to do a metal gate too,” Ghani says. Metal would bond better to the high-k dielectric, eliminating the pinning problem while solving some other issues along the way. But finding the right metal—two metals really, because there are two types of transistor, NMOS and PMOS—introduced its own problems.

“Like a dog to a bone, the whole organization was psyched up to do it.” —Tahir Ghani, Intel

“The problem with the metal gate was that all the materials that would have [worked]… cannot withstand high temperatures” needed to build the rest of the device, Ghani says.

Once again, the solution actually ratcheted up the risk even further. Intel would have to take the series of steps it had reliably used to build transistors for 30 years and reverse it.

The basic process involved building the gate stack first and then using its dimensions as the boundaries around which the company built the rest of the device. But the metal gate stack wouldn’t survive the extremes of this so-called gate-first process. “The way out was we had to reverse the flow and do the gate at the end,” explains Ghani. The new process, called gate last, involved starting with a dummy gate, a block of polysilicon, continuing with the processing, then removing the dummy and replacing it with the high-k dielectric and the metal gate. Adding a further complication, the new gate stack had to be deposited using a tool that Intel had never used in chip production called atomic-layer deposition. (It does what the name implies.)

“We had to change the foundational flow we had done for so many decades,” says Ghani. “We put in all these new elements and changed the heart of the transistor; we started to use tools we had not done before in industry. So if you look at the plethora of challenges that we had, I think it was clearly the most challenging project I have worked on.”

The 45-nanometer Node

That wasn’t the end of the story, of course.

The new process had to reliably produce devices and circuits and complete ICs with a degree of reliability that would ensure its economical use. “It was such a big change, we had to be very careful,” Ghani says. “And so we took our time.” Intel’s team developed processes for both NMOS and PMOS, then built wafers of each device separately, then together before moving on to more complex things.

Even then, it wasn’t clear that high-k/metal gate would make it as Intel’s next manufacturing process, the 45-nanometer node. All the work to that point had been done using the design rules—transistor and circuit geometries—for the existing 65-nanometer node rather than a future 45-nanometer node. “Every time you go to new design rules there are problems that the design rules bring itself,” he explains. “So you don’t want to confuse high-k/metal gate problems and design rule issues.”

“I think it almost took us a year and a half before we thought we were ready to get the first yield lot out,” he says, referring to wafers with real CPUs on them instead of just test structures.

“The first… lot was exceptionally good for the very first time,” recalls Ghani. Seeing how high the initial yield was, and looking at how much time the team had before it needed to deliver a 45-nanometer node, management committed to making high-k/metal gate its next production technology. “Like a dog to a bone, the whole organization was psyched up to do it,” he says.

Asked if he still thinks Intel is as adventurous as it was when it developed and deployed high-k/metal gate, Ghani responds in the affirmative. “I think we still are,” he says, giving the example of the recent deployment of back-side power delivery, a technology that saves power and boosts performance by moving power-delivering interconnect beneath the transistors. “Seven or eight years ago we decided to really look at back-side contacts for power delivery, and we kept on pushing.”

Reference: https://ift.tt/xJe9rv2

Tuesday, January 28, 2025

Apple chips can be hacked to leak secrets from Gmail, iCloud, and more


Apple-designed chips powering Macs, iPhones, and iPads contain two newly discovered vulnerabilities that leak credit card information, locations, and other sensitive data from the Chrome and Safari browsers as they visit sites such as iCloud Calendar, Google Maps, and Proton Mail.

The vulnerabilities, affecting the CPUs in later generations of Apple A- and M-series chip sets, open them to side channel attacks, a class of exploit that infers secrets by measuring manifestations such as timing, sound, and power consumption. Both side channels are the result of the chips’ use of speculative execution, a performance optimization that improves speed by predicting the control flow the CPUs should take and following that path, rather than the instruction order in the program.

A new direction

The Apple silicon affected takes speculative execution in new directions. Besides predicting the control flow CPUs should take, it also predicts the data flow, such as which memory address to load from and what value will be returned from memory.


Reference: https://ift.tt/nEtFqer

Filter Technologies for Advanced Communication Systems




Learn about carrier aggregation, microcell overlapping, and massive MIMO implementation. Delve into the world of surface acoustic wave (SAW) and bulk acoustic wave (BAW) filters and understand their strengths, limitations, and applications in the evolving 5G/6G landscape.

Key highlights:

  • Uncover the design challenges of new technologies in the mobile ecosystem
  • Explore the field of SAW and BAW filters and discover their roles and performance nuances
  • Take a look at the impact of temperature on filter technologies and how it shapes their applications
  • Learn how simulation technology can bridge the gap between design concepts and real-world implementation

Stay ahead in 5G/6G innovation and learn how filters shape seamless communication.

Register now free-of-charge to explore this white paper

Reference: https://ift.tt/JpRTfMY

The Transformer Transformer




I first became aware of the looming transformer crisis in 2022, when IEEE Spectrum contributing editor Robert N. Charette was reporting on the infrastructure improvements required to make the transition to electrical vehicles possible. Modern power grids can’t run without transformers, which step voltages up and down for distribution from power plants to power stations and on to homes and businesses. And yet, Charette wrote “most of the millions of U.S. transformers are approaching the end of their useful lives.”

What’s worse, the way we use the power grid is stressing transformers even more. Deepak Divan, recipient of the 2024 IEEE Medal in Power Engineering and the director of the Center for Distributed Energy at Georgia Tech, told Charette that in residential areas “multiple [EV] chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” Charette wrote, “replacing transformers soon could be a major and costly headache for utilities, assuming they can get them.”

Well, they can’t, at least not quickly and not without paying a premium. As Andrew Moseman reports in “Engineers Transform Transformers to Save the Power Grid” [p. 20], global demand for transformers is soaring, and the wait time has doubled from one year to two; customers trying to get their hands on large power transformers can expect to wait up to four years and pay 60 to 80 percent more than they did five years ago. As a result, up to a quarter of the world’s renewable energy projects face substantial delays.

But where some people see only a crisis, Divan sees an opportunity: to infuse dumb transformers with some electronic smarts. His team at Georgia Tech is working on a solid-state design called a modular controllable transformer (MCT). It not only steps voltages up and down, but can also invert current between DC and AC in a single stage.

The MCT could potentially ease manufacturing bottlenecks because it doesn’t need to be custom-built for each application. But, as Moseman reports, it’s still an emerging technology whose future depends on yet-to-be-developed semiconductors that can handle loads of at least 13 kilovolts.

We’ll need MCTs or something similar to build out power grids that can handle more solar and wind power, EV chargers, and utility-scale batteries. Unlike traditional grids, whose voltage and frequency were regulated by large, always-on generators, tomorrow’s grids will be regulated by highly intelligent electronics, as Divan and Charette discuss in an interview with Spectrum published last summer.

Divan’s 2024 book Energy 2040: Aligning Innovation, Economics and Decarbonization recommends coordinating energy policy and industrial practices so utilities and their customers can take full advantage of continuing advances in clean energy technologies. But as he told Charette, utilities don’t have the skills to deal with “this dynamic beast…. In fact, most big electric utilities have few people in their workforce who are skilled in power electronics, because the old system did not need it.”

In a way, utilities, power engineers, and policymakers are the victims of their own success: The grid is so reliable that it’s virtually invisible in many places and enters the public consciousness only when it fails. Says Divan, “Part of the problem is that nobody can stand in public and say, ‘Hey, there’s a problem here!’”

Divan did. Is anyone listening?

Reference: https://ift.tt/AHFJ5pY

A Journey From Software Engineering to Product Management




Engineers are great at solving problems that arise when creating new products. But deciding what new products to build is often just as big a challenge. That decision-making process is a task software engineer Johnny Ray Austin has found himself increasingly drawn to as his career has progressed.

Austin is the senior director of product management at the financial technology company Best Egg, based in Wilmington, Del. During his career, he’s worked on a wide array of assignments, including classified defense projects, mapping technology, and education software. And over the years, he’s gravitated toward roles where he gets to think more deeply about how the technology he’s building fits in with users’ needs.

Johnny Ray Austin

Employer: Best Egg

Occupation: Senior director of product management and customer platforms

Education: Bachelor’s degree in computer science, Tuskegee University; master’s degree in information technology, University of Maryland Global Campus

He’s gradually transitioned from pure engineering roles to ones that are more focused on product management. The process of developing new products and bringing them to market typically involves identifying what products and features the customer wants and then coordinating between engineering and business teams to make them a reality.

“You kind of sit in between the business side of the house and the more technical side of the house, and you need to be able to speak both languages,” Austin says.

At Best Egg, he’s helping lead an effort to transform the business into a “product-led” organization. And while product management typically attracts people who have a business background, he thinks the core skills of an engineer are well-suited to the job.

“Engineers tend to be very creative in their thinking processes, and that’s really important to a lot of product management roles,” he says.

The cutting edge of software engineering

Growing up in Saginaw, Mich., in the 1990s, Austin had little exposure to digital technology. His family didn’t get a computer until he was in high school, and he only used it for writing the occasional class report.

An interest in space inspired him to enroll in an aerospace engineering course at Tuskegee University, in Alabama, in 2002. But it turned out to be less exciting than he expected, and by the end of freshman year, he was considering other options. A chance conversation with his roommate’s friend, who was taking computer science, convinced him to give the subject a try.

“The following semester, I took my first computer science class, which was C++ intro to programming, and I just fell in love with it,” Austin says. “It was like a world had unlocked for me.”

In his final year, Austin attended a campus hiring event run by Lockheed Martin and was offered a job as a software engineer. He started work immediately after graduating in 2006 and continued his studies in his spare time. In 2010, he received a master’s degree in information technology from the University of Maryland Global Campus.

Although the applications Austin worked on at Lockheed were fascinating, he says, the government is risk averse, and so the technology he was working with was well behind the cutting edge.

“Being a young engineer, I wanted to use the latest technology,” he says. “I felt kind of trapped about 10, 15 years behind where all the action was happening.”

Jumping into startups

After six years at Lockheed Martin, Austin took the leap to the startup world. In 2012, he joined Everfi, where he helped build software to track student progress. This was followed by stints doing Web development for the marketing startup ISL and helping train employees on software development and cybersecurity at the financial services company Capital One.

In 2018, he joined Mapbox, a company that creates navigation technology. As head of navigation data, he was responsible for building the tools and infrastructure to collate and organize the company’s geospatial data. After six months, Austin was promoted to director of engineering.

In addition to managing engineering efforts, he was also responsible for ensuring that Mapbox found customers for its products. “That required me to pick my head up out of the technical world and think about what products we were actually delivering to people and whether they actually want them,” he says. The role gave him his first taste of product management.

Fintech to help people get through the month

In 2019, Austin joined Till, a financial tech startup in Alexandria, Va., as vice president of engineering. His job was to help build the company’s flexible rent-payment service, which allows people to pay their rent in installments throughout the month.

“The first thing that popped into my mind was, ‘Why hasn’t anyone already built this?’” Austin says. “It felt like we could have a lot of impact on people’s lives.”

Austin says he quickly discovered why the service didn’t exist already. The business model required Till to pay landlords up front on the first of the month, essentially loaning the tenant rent money and allowing them to pay it back over the course of the month.
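The mechanics of that model are easy to illustrate. As a hypothetical sketch only (nothing here reflects Till's actual implementation, and the function name and payday inputs are invented for illustration), the core bookkeeping is: front the full rent on the first of the month, then collect roughly equal installments on each of the tenant's paydays, folding any rounding remainder into the last payment so the totals reconcile exactly.

```python
from datetime import date


def installment_schedule(rent_cents: int, paydays: list[date]) -> list[tuple[date, int]]:
    """Split one month's rent evenly across a tenant's paydays.

    Amounts are kept in integer cents to avoid floating-point
    rounding. Any remainder from the integer division is added to
    the final installment so the schedule sums to the full rent.
    """
    n = len(paydays)
    base = rent_cents // n
    schedule = [(d, base) for d in paydays]
    remainder = rent_cents - base * n
    last_day, last_amount = schedule[-1]
    schedule[-1] = (last_day, last_amount + remainder)
    return schedule


# Example: $1,500 rent collected across two biweekly paydays.
sched = installment_schedule(150_000, [date(2024, 6, 7), date(2024, 6, 21)])
```

The arithmetic is trivial; as the article notes, the hard part was never the schedule itself but wiring collections like this into decades-old property-management and loan systems.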

This in turn meant that Till’s software needed to interface with legacy property management and loan-management systems, some dating back to the 1990s. Trying to connect these systems was a major engineering challenge, Austin says. “It can be fun, but it can be surprisingly frustrating.”

Shifting focus for product growth

In 2020, Austin became Till’s chief technology officer. As the leader of both the product-management and engineering teams, he had to think about product development nearly as much as engineering, something he found he relished. In December 2022, when Till was acquired by Best Egg, he jumped at the chance to move into a more product-focused role during the transition.

The purchase of Till was part of a new strategy for Best Egg, Austin says, which at the time primarily provided personal loans. The company wanted to offer its customers an array of financial products, including Till’s flexible rent-payment service, but this would require a shift in strategy: Best Egg would have to become a “product-led” business, Austin says.

He volunteered to lead this change, and in September 2023 he was made senior director of product management and customer platforms. Part of the role involves hiring and training product managers for each of the company’s key offerings, but the biggest challenge has been changing the company’s culture, Austin says.

In general, businesses that are not product-led may focus on maximizing metrics such as revenue, profit, and monthly users. In its new business model, Best Egg is instead working to boost these metrics indirectly through improved customer experiences.

That has meant focusing more on identifying what problems customers needed to solve and trusting the engineering teams to find solutions. “Let the engineers figure it out, because that’s what they do best, right?” he says.

A career for the curious

Product management is not a good fit for every engineer, says Austin, particularly if you prefer to work directly with technology day to day. But for those who are curious about all aspects of the business, it can be very fulfilling because “you touch everything,” he says.

The creativity, problem solving, and systematic thinking required of engineers can be a great foundation for a career in product management, Austin says. But it’s crucial to work on your communication skills, because product managers need to hold their own in technical discussions and also clearly communicate to other parts of the business what the engineering teams are building. It’s also important that product managers make good judgment calls about things like new features. They should nurture what Austin calls “product intuition”—that is, knowing their product and knowing their customer.

Reference: https://ift.tt/T0D7Ks5

Better AI Is a Matter of Timing

AI is changing everything in data centers: new AI-specific chips, new cooling techniques, and new storage drives. Now even the meth...