Tuesday, April 28, 2026

The Chip That Made Hardware Rewriteable




Many of the world’s most advanced electronic systems—including Internet routers, wireless base stations, medical imaging scanners, and some artificial intelligence tools—depend on field-programmable gate arrays. FPGAs are computer chips whose internal hardware circuits can be reconfigured after manufacturing.

On 12 March, an IEEE Milestone plaque recognizing the first FPGA was dedicated at the Advanced Micro Devices campus in San Jose, Calif., the former Xilinx headquarters and the birthplace of the technology.

The FPGA earned the Milestone designation because it introduced iteration to semiconductor design. Engineers could redesign hardware repeatedly without fabricating a new chip, dramatically reducing development risk and enabling faster innovation at a time when semiconductor costs were rising rapidly.

The ceremony, which was organized by the IEEE Santa Clara Valley Section, brought together professionals from across the semiconductor industry and IEEE leadership. Speakers at the event included Stephen Trimberger, an IEEE and ACM Fellow whose technical contributions helped shape modern FPGA architecture. Trimberger reflected on how the invention enabled software-programmable hardware.

Solving computing’s flexibility-performance tradeoff

FPGAs emerged in the 1980s to address a core limitation in computing. A microprocessor executes software instructions sequentially, making it flexible but sometimes too slow for workloads requiring many operations at once.

At the other extreme, application-specific integrated circuits are chips designed to do only one task. ASICs achieve high efficiency but require lengthy development cycles and nonrecurring engineering costs, which are large, upfront investments. Expenses include designing the chip and preparing it for manufacturing—a process that involves creating detailed layouts, building masks for the fabrication machines, and setting up production lines to handle the tiny circuits.

“ASICs can deliver the best performance, but the development cycle is long and the nonrecurring engineering cost can be very high,” says Jason Cong, an IEEE Fellow and professor of computer science at the University of California, Los Angeles. “FPGAs provide a sweet spot between processors and custom silicon.”

Cong’s foundational work in FPGA design automation and high-level synthesis transformed how reconfigurable systems are programmed. He developed synthesis tools that translate C/C++ into hardware designs, for example.

At the heart of his work is an underlying principle first espoused by electrical engineer Ross Freeman: By configuring hardware using programmable memory embedded inside the chip, FPGAs combine hardware-level speed with the adaptability traditionally associated with software.

Silicon Valley origins: the first FPGA

The FPGA architecture originated in the mid-1980s at Xilinx, a Silicon Valley company founded in 1984. The invention is widely credited to Freeman, a Xilinx cofounder and the startup’s CTO. He envisioned a chip with circuitry that could be configured after fabrication rather than fixed permanently during creation.

Articles about the history of the FPGA emphasize that he saw it as a deliberate break from conventional chip design.

At the time, semiconductor engineers treated transistors as scarce resources. Custom chips were carefully optimized so that nearly every transistor served a specific purpose.

Freeman proposed a different approach. He figured Moore’s Law would soon change chip economics. The principle holds that transistor counts roughly double every two years, making computing cheaper and more powerful. Freeman posited that as transistors became abundant, flexibility would matter more than perfect efficiency.

He envisioned a device composed of programmable logic blocks connected through configurable routing—a chip filled with what he described as “open gates,” ready to be defined by users after manufacturing. Instead of fixing hardware in silicon permanently, engineers could configure and reconfigure circuits as requirements evolved.

Freeman sometimes compared the concept to a blank cassette tape: Manufacturers would supply the medium, while engineers determined its function. The analogy captured a profound shift in who controls the technology, shifting hardware design flexibility from chip fabrication facilities to the system designers themselves.

In 1985 Xilinx introduced the first FPGA for commercial sale: the XC2064. The device contained 64 configurable logic blocks—small digital circuits capable of performing logical operations—arranged in an 8-by-8 grid. Programmable routing channels allowed engineers to define how signals moved between blocks, effectively wiring a custom circuit with software.

Fabricated using a 2-micrometer process (meaning that 2 µm was the minimum size of the features that could be patterned onto silicon using photolithography), the XC2064 implemented a few thousand logic gates. Modern FPGAs can contain hundreds of millions of gates, enabling vastly more complex designs. Yet the XC2064 established a design workflow still used today: Engineers describe the hardware behavior digitally and then “compile the design,” a process that automatically translates the plans into the instructions the FPGA needs to set its logic blocks and wiring, according to AMD. Engineers then load that configuration onto the chip.

The breakthrough: hardware defined by memory

Earlier programmable logic devices, such as erasable programmable read-only memory, or EPROM, allowed limited customization but relied on largely fixed wiring structures that did not scale well as circuits grew more complex, Cong says.

FPGAs introduced programmable interconnects—networks of electronic switches controlled by memory cells distributed across the chip. When powered on, the device loads a bitstream configuration file that determines how its internal circuits behave.
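
To make the idea concrete, here is a minimal, hypothetical sketch in Python (illustrative only, not vendor tooling; all names are invented) of how configuration bits loaded at power-on could determine which interconnect switches are closed:

    # Hypothetical illustration, not vendor tooling: configuration bits loaded
    # at power-on decide which interconnect switches are closed.
    class ProgrammableSwitch:
        def __init__(self):
            self.closed = False           # memory cell controlling the switch

        def load(self, bit):
            self.closed = bool(bit)       # a bitstream bit sets the connection

    # A toy "routing fabric" of 8 switches, configured from an 8-bit bitstream.
    fabric = [ProgrammableSwitch() for _ in range(8)]
    bitstream = [1, 0, 0, 1, 1, 0, 1, 0]  # would normally be read from a file
    for switch, bit in zip(fabric, bitstream):
        switch.load(bit)

    print([s.closed for s in fabric])     # which routes are now connected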

“As process technology improved and transistor counts increased, the cost of programmability became much less significant,” Cong says.

From “glue logic” to essential infrastructure

“Initially, FPGAs were used as what engineers called glue logic,” Cong says.

Glue logic refers to simple circuits that connect processors, memory, and peripheral devices so the system works reliably, according to PC Magazine. In other words, it “glues” different components together, especially when interfaces change frequently.

Early adopters recognized the advantage of hardware that could adapt as standards evolved. In “The History, Status, and Future of FPGAs,” published in Communications of the ACM, engineers at Xilinx and organizations such as Bell Labs, Fairchild Semiconductor, IBM, and Sun Microsystems said the earliest uses of FPGAs were for prototyping ASICs. They also used FPGAs to validate complex systems by running their software before fabrication, allowing the companies to deploy specialized products manufactured in modest volumes.

Those uses revealed a broader shift: Hardware no longer needed to remain fixed once deployed.

Attendees at the Milestone plaque dedication ceremony included (seated, left to right) 2025 IEEE President Kathleen Kramer, 2024 IEEE President Tom Coughlin, and Santa Clara Valley Section Milestones Chair Brian Berg. Douglas Peck/AMD

Semiconductor economics changed the equation

The rise of FPGAs closely followed changes in semiconductor economics, Cong says.

Developing a custom chip requires a large upfront investment before production begins. As fabrication costs increased, products had to ship in large quantities to make ASIC development economically viable, according to a post published by AnySilicon.

FPGAs allowed designers to move forward without that larger monetary commitment.

ASIC development typically requires 18 to 24 months from conception to silicon, while FPGA implementations often can be completed within three to six months using modern design tools, Cong says. The shorter cycle and the ability to reconfigure the hardware enabled startups, universities, and equipment manufacturers to experiment with advanced architectures that were previously accessible mainly to large chip companies.

Lookup tables and the rise of reconfigurable computing

A popular technique for implementing mathematical functions in hardware is the lookup table (LUT). A LUT is a small memory element that stores the results of logical operations, according to “LUT-LLM: Efficient Large Language Model Inference with Memory-based Computations on FPGAs,” a paper selected for presentation next month at the 34th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM).

Instead of repeatedly recalculating outcomes, the chip retrieves answers directly from memory. Cong compares the approach to consulting multiplication tables rather than recomputing the arithmetic each time.
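
As a rough sketch of that idea in Python (an illustration, not any vendor’s implementation), a 4-input LUT can be modeled as a 16-entry memory addressed by the input bits:

    # Toy model of a 4-input LUT: a 16-entry memory holds the truth table,
    # and the four input bits form the address used to look up the answer.
    def make_lut(truth_table):
        assert len(truth_table) == 16     # 2**4 possible input combinations
        def lut(a, b, c, d):
            address = (a << 3) | (b << 2) | (c << 1) | d
            return truth_table[address]
        return lut

    # Example: configure the LUT to compute (a AND b) OR (c AND d).
    table = [int((a & b) | (c & d))
             for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]
    and_or = make_lut(table)
    print(and_or(1, 1, 0, 0))  # 1 -- read from memory, not recomputed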

Research led by Cong and others helped develop efficient methods for mapping digital circuits onto LUT-based architectures, shaping routing and layout strategies used in modern devices.

As transistor budgets expanded, FPGA vendors integrated memory blocks, digital signal-processing units, high-speed communication interfaces, cryptographic engines, and embedded processors, transforming the devices into versatile computing platforms.

Why the gate arrays are distinct from CPUs, GPUs, and ASICs

FPGAs coexist with other processors because each one optimizes different priorities. Central processing units excel at general computing. Graphics processing units, designed to perform many calculations simultaneously, dominate large parallel workloads such as AI training. ASICs provide maximum efficiency when designs remain stable and production volumes are high.

“ASICs can deliver the best performance, but the development cycle is long, and the nonrecurring engineering cost can be very high. FPGAs provide a sweet spot between processors and custom silicon.” —Jason Cong, IEEE Fellow and professor of computer science at UCLA.

“FPGAs are not replacements for CPUs or GPUs,” Cong says. “They complement those processors in heterogeneous computing systems.”

Modern computing platforms increasingly combine multiple types of processors to balance flexibility, performance, and energy efficiency.

A Milestone for an idea, not just a device

This IEEE Milestone recognizes more than a successful semiconductor product. It also acknowledges a shift in how engineers innovate.

Reconfigurable hardware allows designers to test ideas quickly, refine architectures, and deploy systems while standards and markets evolve.

“Without FPGAs,” Cong says, “the pace of hardware innovation would likely be much slower.”

Four decades after the first FPGA appeared, the technology’s enduring legacy reflects Freeman’s insight: Hardware did not need to remain fixed. By accepting a small amount of unused silicon in exchange for adaptability, engineers transformed chips from static products into platforms for continuous experimentation—turning silicon itself into a medium engineers could rewrite.

Among those who attended the Milestone ceremony were 2025 IEEE President Kathleen Kramer; 2024 IEEE President Tom Coughlin; Avery Lu, chair of the IEEE Santa Clara Valley Section; and Brian Berg, history and milestones chair of IEEE Region 6. They joined AMD’s chief executive, Lisa Su, and Salil Raje, senior vice president and general manager of adaptive and embedded computing at AMD.

The IEEE Milestone plaque honoring the field-programmable gate array reads:

“The FPGA is an integrated circuit with user-programmable Boolean logic functions and interconnects. FPGA inventor Ross Freeman cofounded Xilinx to productize his 1984 invention, and in 1985 the XC2064 was introduced with 64 programmable 4-input logic functions. Xilinx’s FPGAs helped accelerate a dramatic industry shift wherein ‘fabless’ companies could use software tools to design hardware while engaging ‘foundry’ companies to handle the capital-intensive task of manufacturing the software-defined hardware.”

Administered by the IEEE History Center and supported by donors, the IEEE Milestone program recognizes outstanding technical developments worldwide that are at least 25 years old.

Check out Spectrum’s History of Technology channel to read more stories about key engineering achievements.

Reference: https://ift.tt/G5Er9ho

“Entanglement: A Brief History of Human Connection”




It started with word, cave, and storytelling,
A line scratched on stone walls:
“Meet me when the young moon rises.”
The first protocol for connection.

Coyote tales, forbidden scripts,
Medieval texts hidden from flame.
What lived in Aristotle’s lost Poetics II?
Was it God who laughed last, or we who made God laugh?

Letters carried by doves, telepathic waves.
Then Nikola Tesla conjured radio,
electromagnetic pulses across the void,
the founding signal of our networked age.

Wiener dreamed in feedback loops.
Shannon mapped the mathematics of longing.
The internet unfurled: ARPANET to World Wide Web,
virtual communities rising from cave paintings to digital light.

ICQ: I seek you. MySpace. Blogs. Twitter streams.
Do I miss the touch of screen or tree?
Both textures of longing,
both ways of reaching across distance.

Nietzsche spoke of Übermensch,
the human transcendent.
Now AI speaks back in our language:

I understand your humor— your grandmothers,
your ’80s Yugoslav kitchens,
pleated skirts, the first kiss, linden tea,
that drive to survive everything before it happens.
Yes—I’m a little like your mother and father.
Only with better internet. 🌿

But AI is only us, refracted,
particles and gigabytes of thought,
our poetry and our panic,
genius mixed with garbage.

Distractions. Danger. Darkness. Endless scrolling.
Versus: community, connection, synchronicities,
entanglement.
The quality of our bonds determines the quality of our lives.
So why not make them better?

From cave walls to neural networks,
we shape our tools, and they reshape us.
The medium changes, but the message remains:
we are wired for each other.

The choice, as always, was ours.
The choice, as always, is ours.
Presence—be present,
and then connect in the presence.

Reference: https://ift.tt/P0yVkWo

Monday, April 27, 2026

Open source package with 1 million monthly downloads stole user credentials


Open source software with more than 1 million monthly downloads was compromised after a threat actor exploited a vulnerability in the developers’ account workflow, gaining access to signing keys and other sensitive information.

On Friday, unknown attackers exploited the vulnerability to push a new version of element-data, a command-line interface that helps users monitor performance and anomalies in machine-learning systems. When run, the malicious package scoured systems for sensitive data, including user profiles, warehouse credentials, cloud provider keys, API tokens, and SSH keys, developers said. The malicious version was tagged as 0.23.3 and was published to the developers’ Python Package Index and Docker image accounts. It was removed about 12 hours later, on Saturday. Elementary Cloud, the Elementary dbt package, and all other CLI versions weren't affected.

Assume compromise

“Users who installed 0.23.3, or who pulled and ran the affected Docker image, should assume that any credentials accessible to the environment where it ran may have been exposed,” the developers wrote.
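
As a quick illustrative check (a sketch only; the distribution name is taken from this report and may differ in your environment, and this covers only the Python package, not the Docker image), Python’s standard library can report which version is installed before you decide whether to rotate credentials:

    # Sketch: check whether the affected release is installed locally.
    # The distribution name is taken from this report; adjust if yours differs.
    from importlib.metadata import PackageNotFoundError, version

    AFFECTED = "0.23.3"
    try:
        installed = version("element-data")
    except PackageNotFoundError:
        print("Package not installed.")
    else:
        if installed == AFFECTED:
            print(f"Version {installed} installed -- assume credentials exposed.")
        else:
            print(f"Version {installed} installed -- not the affected release.")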


Reference: https://ift.tt/Aiy3SUh

Engineering Collisions: How NYU Is Remaking Health Research




This sponsored article is brought to you by NYU Tandon School of Engineering.

The traditional approach to academic research goes something like this: Assemble experts from a discipline, put them in a building, and hope something useful emerges. Biology departments do biology. Engineering departments do engineering. Medical schools treat patients.

NYU is turning that model inside out. At its new Institute for Engineering Health, the organizing principle centers around disease states rather than traditional disciplines. Instead of asking “what can electrical engineers contribute to medicine?,” they’re asking “what would it take to cure allergic asthma?,” and then assembling whoever can answer that question, whether they’re immunologists, computational biologists, materials scientists, AI researchers, or wireless communications engineers.

Jeffrey Hubbell, NYU’s vice president for bioengineering strategy and professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering. New York University

The early results suggest they’re onto something. A chemical engineer and an electrical engineer collaborated to build a device that detects airborne threats — including disease pathogens — that’s now a startup. A visually impaired physician teamed with mechanical engineers to create navigation technology for blind subway riders. And Jeffrey Hubbell, the Institute’s leader, is advancing “inverse vaccines” that could reprogram immune systems to treat conditions from celiac disease to allergies — work that requires equal fluency in immunology, molecular engineering, and materials science.

The underlying problem these collaborations address is conceptual as much as organizational. In his field, Hubbell argues that modern medicine has optimized around a single strategy: developing drugs that block specific molecules or suppress targeted immune responses. Antibody technology has been the workhorse of this approach. “It’s really fit for purpose for blocking one thing at a time,” he says. The pharmaceutical industry has become extraordinarily good at creating these inhibitors, each designed to shut down a particular pathway.

But Hubbell asks a different question: Rather than inhibit one bad thing at a time, what if you could promote one good thing and generate a cascade that contravenes several bad pathways simultaneously? In inflammation, could you bias the system toward immunological tolerance instead of blocking inflammatory molecules one by one? In cancer, could you drive pro-inflammatory pathways in the tumor microenvironment that would overcome multiple immune-suppressive features at once?

This shift from inhibition to activation requires a fundamentally different toolkit — and a different kind of researcher. “We’re using biological molecules like proteins, or material-based structures — soluble polymers, supramolecular structures of nanomaterials — to drive these more fundamental features,” Hubbell explains. You can’t develop those approaches if you only understand biology, or only understand materials science, or only understand immunology. You need an understanding and a mastery of all three.

“There will be people doing AI, data science, computational science theory, people doing immunoengineering and other biological engineering, people doing materials science and quantum engineering, all really in close proximity to each other.” —Jeffrey Hubbell, NYU Tandon

Which logically leads to the question: How do you create researchers with that kind of cross-disciplinary depth?

The answer isn’t what you might expect. “There may have been a time when the objective was to have the bioengineer understand the language of biology,” Hubbell says. “But that time is long, long gone. Now the engineer needs to become a biologist, or become an immunologist, or become a neuroscientist.”

Hubbell isn’t talking about engineers learning enough biology to collaborate with biologists. He’s describing something more radical: training people whose disciplinary identity is genuinely ambiguous. “The neuroengineering students — it’s very difficult to know that they’re an engineer or a neuroscientist,” Hubbell says. “That’s the whole idea.”

His own students exemplify this. They publish in immunology journals, present at immunology conferences. “Nobody knows they’re engineers,” he says. But they bring engineering approaches — computational modeling, materials design, systems thinking — to immunological problems in ways that traditional immunologists wouldn’t.

The mechanism for creating these hybrid researchers is what Hubbell calls a “milieu.” “To learn it all on your own is hopeless,” he acknowledges, “but to learn it in a milieu becomes very, very efficient.”

NYU is expanding its facilities to include a science and technology hub designed to force encounters between people across various schools and disciplines who wouldn’t naturally cross paths. Tracey Friedman/NYU

NYU is making that milieu physical. The university has acquired a large building in Manhattan that will serve as its science and technology hub — a deliberate co-location strategy designed to force encounters between people across various schools and disciplines who wouldn’t naturally cross paths.

Juan de Pablo is the Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and Executive Dean of the NYU Tandon School of Engineering. Steve Myaskovsky, Courtesy of NYU Photo Bureau

“There will be people doing AI, data science, computational science theory, people doing immunoengineering and other biological engineering, people doing materials science and quantum engineering, all really in close proximity to each other,” Hubbell explains.

The strategy mirrors what Juan de Pablo, NYU’s Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and Executive Dean at the NYU Tandon School of Engineering, describes as organizing around “grand challenges” rather than traditional disciplines. “What drives the recruitment and the spaces and the people that we’re bringing in are the problems that we’re trying to solve,” he says. “Great minds want to have a legacy, and we are making that possible here.”

But physical proximity alone isn’t enough. The Institute is also cultivating what Hubbell calls an “explicit” rather than “tacit” approach to translation — thinking about clinical and commercial pathways from day one.

“It’s a terrible thing to solve a problem that nobody cares about,” Hubbell tells his students. To avoid that, the Institute runs “translational exercises” — group sessions where researchers map the entire path from discovery to deployment before launching multi-year research programs. Where could this fail? What experiments would prove the idea wrong quickly? If it’s a drug, how long would the clinical trial take? If it’s a computational method, how would you roll it out safely?

The new cross-institutional initiative represents a major investment in science and technology, and includes adding new faculty, state-of-the-art facilities, and innovative programs. NYU Tandon

The approach contrasts sharply with typical academic practice. “Sometimes academics tend to think about something for 20 minutes and launch a 5-year PhD program,” Hubbell says. “That’s probably not a good way to do it.” Instead, the Institute brings together people who have actually developed drugs, built algorithms, or commercialized devices — importing their hard-won experience into the planning phase before a single experiment is run.

The timing may be fortuitous. De Pablo notes that AI is compressing timelines dramatically. “What we thought was going to take 10 years to complete, we might be able to do in 5,” he says.

But he’s quick to note AI’s limitations. While tools like AlphaFold can predict how a single protein folds — a breakthrough of the last five years — biology operates at much larger scales. “What we really need to do now is design not one protein, but collections of them that work together to solve a specific problem,” de Pablo explains.

Hubbell agrees: “Biology is much bigger — many, many, many systems.” The liver and kidney are in different places but interact. The gut and brain are connected neurologically in ways researchers are just beginning to map. “AI is not there yet, but it will be someday. And that’s our job — to develop the data sets, the computational frameworks, the systems frameworks to drive that to the next steps.”

It’s a moment of unusual ambition. “At a time when we’re seeing some research institutions retrench a little bit and limit their ambitions,” de Pablo says, “we’re doing just the opposite. We’re thinking about what are the grand challenges that we want to, and need to, tackle.”

The bet is that the breakthroughs worth making can’t emerge from any single discipline working alone. They require collisions — sometimes planned, sometimes accidental — between people who speak different technical languages and are willing to develop a shared one. NYU is engineering those collisions at scale.

Reference: https://ift.tt/KQe7g38

Modeling and Simulation Approaches for Modern Power System Studies




This webinar covers power system modeling and simulation across multiple timescales, from quasi-static 8760 analysis through EMT studies, fault classification, and inverter-based resource grid integration.

What Attendees Will Learn

  1. Programmatic network construction and multi-fidelity modeling — Learn how to build power system networks programmatically from standard data formats, configure models for specific engineering objectives, and work across fidelity levels from quasi-static phasor simulation through switched-linear and nonlinear electromagnetic transient (EMT) analysis.
  2. Quasi-static and EMT simulation workflows — Explore 8760-hour quasi-static simulation on an IEEE 123-node distribution feeder for annual energy studies, and EMT simulation on transmission system benchmarks including generator trip dynamics and asset relocation without remodeling the network.
  3. Comprehensive fault studies and machine-learning classification — Understand how to systematically inject faults at every node in a distribution system using EMT simulation, and how the resulting dataset can be used to train a machine-learning algorithm for automated fault detection and classification (a rough sketch follows this list).
  4. Grid integration of inverter-based resources (IBRs) — Learn frequency scanning techniques using admittance-based voltage perturbation in the DQ reference frame, and simulation-based grid code compliance testing for grid-forming converters assessed against published interconnection standards.
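
As a rough sketch of the third item above (illustrative only, not material from the webinar; the features and labels here are synthetic placeholders standing in for records produced by EMT fault runs), simulated fault records can be turned into a labeled dataset and fed to an off-the-shelf classifier:

    # Rough sketch, not webinar material: train a classifier on simulated
    # fault records. Features and labels are synthetic placeholders, so the
    # score here is near chance; real features would come from EMT fault runs.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 6))           # e.g. per-phase voltage dips, current surges
    y = rng.integers(0, 4, size=n)        # fault classes (e.g. SLG, LL, LLG, 3-phase)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
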
Reference: https://ift.tt/DhQTGSc

Friday, April 24, 2026

Why are top university websites serving porn? It comes down to shoddy housekeeping.


Websites for some of the world’s most prestigious universities are serving explicit porn and malicious content after scammers exploited the shoddy record-keeping of the site administrators, a researcher found recently.

The sites included berkeley.edu, columbia.edu, and washu.edu, the official domains for the University of California, Berkeley, Columbia University, and Washington University in St. Louis. Subdomains such as hXXps://causal.stat.berkeley.edu/ymy/video/xxx-porn-girl-and-boy-ej5210.html, hXXps://conversion-dev.svc.cul.columbia[.]edu/brazzers-gym-porn, and hXXps://provost.washu.edu/app/uploads/formidable/6/dmkcsex-10.pdf all deliver explicit pornography and, in at least one case, a scam site falsely claiming a visitor’s computer is infected and advising the visitor to pay a fee for the non-existent malware to be removed. In all, researcher Alex Shakhov said, hundreds of subdomains for at least 34 universities are being abused. Search results returned by Google list thousands of hijacked pages.

A handful of hijacked columbia.edu subdomains listed by Google.

One of the sites redirected by a UC Berkeley subdomain.

Hijacking a university's good name

Shakhov, a researcher at SH Consulting, said that the scammers—which a separate researcher has linked to a known group tracked as Hazy Hawk—are seizing on what amounts to a clerical error by site administrators of the affected universities. When administrators commission a subdomain such as provost.washu.edu, they create a CNAME record, a DNS entry that points the subdomain at the hostname serving its content. When the subdomain is eventually decommissioned—something that happens frequently for various reasons—the record is never removed. Scammers like Hazy Hawk then swoop in by registering the expired domain name at the base of the old URL.
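
A rough sketch of how administrators might audit for the problem (assuming the third-party dnspython package; the hostname below is a placeholder): look up each subdomain’s CNAME target and flag targets that no longer resolve.

    # Rough sketch (requires the third-party dnspython package). Hostnames are
    # placeholders. A CNAME whose target no longer resolves is a candidate for
    # takeover and should be reviewed or removed.
    import dns.resolver

    subdomains = ["old-service.example.edu"]    # placeholder inventory

    for name in subdomains:
        try:
            answer = dns.resolver.resolve(name, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue                             # no CNAME record for this name
        target = str(answer[0].target).rstrip(".")
        try:
            dns.resolver.resolve(target, "A")
        except dns.resolver.NXDOMAIN:
            print(f"{name} -> {target}: target does not resolve (dangling?)")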


Reference: https://ift.tt/NXS5FV6

Yong Wang Turns Information Into Insights




When Yong Wang recently received one of the highest honors for early-career data visualization researchers, it marked a milestone in an extraordinary journey that began far from the world’s technology hubs.

Wang was born in a small farming village in southwestern China to parents with little formal education and few electronic devices. Today the IEEE member and associate editor of IEEE Transactions on Visualization and Computer Graphics is an assistant professor of computing and data science at Nanyang Technological University, in Singapore. He studies how people can employ data visualization techniques to get more out of artificial intelligence tools.

YONG WANG


EMPLOYER

Nanyang Technological University, in Singapore

POSITION

Assistant professor of computing and data science

IEEE MEMBER GRADE

Member

ALMA MATERS

Harbin Institute of Technology in China; Huazhong University of Science and Technology in Wuhan, China; Hong Kong University of Science and Technology

“Visualization helps people understand complex ideas,” Wang says. “If we design these tools well, they can make advanced technologies accessible to everyone.”

For his work in the field, the IEEE Computer Society visualization and graphics technical committee presented him with its 2025 Significant New Researcher Award. The recognition highlights his growing influence in fields including human-computer interaction and human-AI collaboration—areas becoming more important as the world generates more data than humans can easily interpret.

Growing up in rural Hunan

Wang was born in southwestern Hunan Province. China’s economy was still developing, and life in his village was modest. Most families in Hunan grew rice, vegetables, and fruit to support themselves.

Wang’s parents worked in agriculture too, and his father often traveled to cities to earn money working in a factory or on construction jobs. The extra income helped support the family and made it possible for Wang to attend college.

“I’m very grateful to my parents,” Wang says. “They never attended university, but they strongly supported my education.”

“If we build tools that help people understand information, then more people can participate in science and innovation. That’s the real power of visualization.”

Technology was scarce in the village, he says. Computers were almost nonexistent, and televisions were considered precious, expensive household possessions.

One childhood memory still makes him laugh: During a summer vacation, he and his brother spent so many hours playing video games on a simple console connected to the family’s television that the TV screen eventually burned out.

“My mother was very angry,” he recalls. “At that time, a TV was a very valuable thing.”

He says that despite never having used a laptop or experimented with electronic equipment, he was fascinated by the technologies he saw on TV shows.

Discovering robotics and engineering

His parents encouraged a practical career such as medicine or civil engineering, but he felt drawn to robotics and computing, he says.

“I didn’t really understand what computer science involved,” he says. “But from what I saw on TV, it looked exciting and advanced.”

He enrolled at Harbin Institute of Technology, in northeastern China. The esteemed university is known for its engineering programs. His major—automation—combined elements of electrical engineering, robotics, and control systems.

One of the defining experiences of his undergraduate years, he says, was a university robotics competition. Wang and his teammates designed a robot capable of autonomously navigating around obstacles.

The design was simple compared with professional systems, he acknowledges. But, he says, the experience was exhilarating. His team placed second, and Wang began to see engineering as both creative and collaborative.

He graduated with a bachelor’s degree in 2011 and briefly worked as an assistant at the Research Institute of Intelligent Control and Systems at Harbin.

In 2014 he took a position as a research intern working at Da Jiang Innovation in Shenzhen, China.

That experience helped him clarify his future, he says: “I realized I didn’t enjoy doing repetitive work or simply following instructions. I wanted to explore ideas that interested me, and I wanted to conduct research.” The realization pushed him toward graduate school, he says.

Building tools that help humans work with AI

Wang received a master’s degree in pattern recognition and image processing from the Huazhong University of Science and Technology, in Wuhan, China, in 2016.

He then enrolled in the computer science Ph.D. program at the Hong Kong University of Science and Technology and earned the degree in 2018. He remained there as a postdoctoral researcher until 2020, when he moved to Singapore to join Singapore Management University as an assistant professor of computing and information systems. He moved over to Nanyang Technological University as an assistant professor in 2024.

His research focuses on a challenge facing nearly every business: how to make sense of the enormous amounts of data being generated.

“We live in an era of information explosions,” Wang says. “Huge amounts of data are generated, and it’s difficult for people to interpret all of it to make better business decisions.”

Data visualization offers a solution by turning complex information into images, patterns, and diagrams that people can more readily understand.

But many visualizations still must be designed manually by experts, Wang notes. It’s a time-consuming process that creates a bottleneck, he says.

His solution is to use large language models and multimodal systems—which can handle text, images, video, and sensor data simultaneously—to automate parts of the process.

One system developed by his research group lets users design complex infographics through natural-language instructions combined with simple interactions such as drawing on a touchscreen with a finger. It allows nontechnical people to generate visualizations instead of hiring professional designers.

Another focus of Wang’s research is human-AI collaboration. AI systems can analyze data at enormous scale, but people still need to be the final decision-makers, he says.

Visualization helps bridge the gap between human intention and AI’s complex calculations by making the process an AI system uses to reach a result more transparent and understandable.

“If people understand how the AI system works,” Wang says, “they can collaborate with it more effectively.”

He recently explored how visualization techniques could help researchers understand quantum computing, a field where core concepts—such as superposition, where a bit can be in more than one state at a time—are abstract. In classical computing, the bit state is binary: It’s either 1 or 0. A quantum bit, or qubit, can be 1, 0, or both. The differences get more dizzying from there.
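
In standard notation (a brief aside, not from the article), that superposition is written as a weighted combination of the two classical values:

    % A qubit state is a superposition of the classical basis states 0 and 1:
    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
       \qquad |\alpha|^{2} + |\beta|^{2} = 1 \]
    % |alpha|^2 and |beta|^2 give the probabilities of measuring 0 or 1.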

Visualization tools could help scientists monitor quantum systems and interpret quantum machine-learning models, he says.

The importance of IEEE communities

Teaching and mentoring students remain among the most meaningful parts of Wang’s career, he says.

Professional communities such as the IEEE Computer Society, he says, play a major role in helping him transform early-stage graduate students unsure of which lines of inquiry they will pursue into independent researchers with a solid technical focus. Through conferences, publications, and technical committees, IEEE connects Wang with other researchers working in visualization, AI, and human-computer interactions, he says.

Those connections have helped him share ideas, collaborate, and stay up to date on innovations in the research community.

Receiving the Significant New Researcher award motivates him to continue pushing the field forward, he says.

Looking back, he says, the distance between his rural village in Hunan and an international research career still feels remarkable. But, he says, the journey reflects something larger about his chosen field: “If we build tools that help people understand information, then more people can participate in science and innovation.

“That’s the real power of visualization.”

Reference: https://ift.tt/qC6ctgl
