Friday, May 31, 2024

Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic


A man covered in newspaper.

Enlarge (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."

Read 9 remaining paragraphs | Comments

Reference: https://ift.tt/HO4Yml9

Google’s AI Overview is flawed by design, and a new company blog post hints at why



Enlarge / The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it is admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.

Read 11 remaining paragraphs | Comments

Reference: https://ift.tt/pFCmTQu

Federal agency warns critical Linux vulnerability being actively exploited



Enlarge (credit: Getty Images)

The US Cybersecurity and Infrastructure Security Agency has added a critical security bug in Linux to its list of vulnerabilities known to be actively exploited in the wild.

The vulnerability, tracked as CVE-2024-1086 and carrying a severity rating of 7.8 out of a possible 10, allows people who have already gained a foothold inside an affected system to escalate their system privileges. It’s the result of a use-after-free error, a class of vulnerability that occurs in software written in the C and C++ languages when a process continues to access a memory location after it has been freed or deallocated. Use-after-free vulnerabilities can result in remote code execution or privilege escalation.

The vulnerability, which affects Linux kernel versions 5.14 through 6.6, resides in nf_tables, a kernel component that enables the Netfilter framework, which in turn facilitates a variety of network operations, including packet filtering, network address [and port] translation (NA[P]T), packet logging, userspace packet queueing, and other packet mangling. It was patched in January, but as the CISA advisory indicates, some production systems have yet to install the fix. At the time this Ars post went live, there were no publicly known details about the active exploitation.

Read 4 remaining paragraphs | Comments

Reference: https://ift.tt/Z2ntpBX

Video Friday: Multitasking




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Do you have trouble multitasking? Cyborgize yourself through muscle stimulation to automate repetitive physical tasks while you focus on something else.

[ SplitBody ]

By combining a 5,000-frame-per-second (FPS) event camera with a 20 FPS RGB camera, roboticists from the University of Zurich have developed a much more effective vision system that keeps autonomous cars from crashing into stuff, as described in the current issue of Nature.

[ Nature ]

Mitsubishi Electric has been awarded the GUINNESS WORLD RECORDS™ title for the fastest robot to solve a puzzle cube. The robot’s time of 0.305 second beat the previous record of 0.38 second, for which it received a GUINNESS WORLD RECORDS certificate on 21 May 2024.

[ Mitsubishi ]

Sony’s AIBO is celebrating its 25th anniversary, which seems like a long time, and it is. But back then, the original AIBO could check your email for you. Email! In 1999!

I miss Hotmail.

[ AIBO ]

SchniPoSa: schnitzel with french fries and a salad.

[ Dino Robotics ]

Cloth folding is still a really hard problem for robots, but progress was made at ICRA!

[ ICRA Cloth Competition ]

Thanks, Francis!

MIT CSAIL researchers enhance robotic precision with sophisticated tactile sensors in the palm and agile fingers, setting the stage for improvements in human-robot interaction and prosthetic technology.

[ MIT ]

We present a novel adversarial attack method designed to identify failure cases in any type of locomotion controller, including state-of-the-art reinforcement learning (RL)-based controllers. Our approach reveals the vulnerabilities of black-box neural network controllers, providing valuable insights that can be leveraged to enhance robustness through retraining.

[ Fan Shi ]

In this work, we investigate a novel integrated flexible OLED display technology used as a robotic skin-interface to improve robot-to-human communication in a real industrial setting at Volkswagen (VW) for a collaborative human-robot interaction task in motor assembly. The interface was implemented in a workcell and validated qualitatively with a small group of operators (n=9) and quantitatively with a large group (n=42). The validation results showed that using flexible OLED technology could improve the operators’ attitude toward the robot, increase their intention to use the robot, enhance perceived enjoyment, social influence, and trust, and reduce their anxiety.

[ Paper ]

Thanks, Bram!

We introduce InflatableBots, shape-changing inflatable robots for large-scale encountered-type haptics in VR. Unlike traditional inflatable shape displays, which are immobile and limited in interaction areas, our approach combines mobile robots with fan-based inflatable structures. This enables safe, scalable, and deployable haptic interactions on a large scale.

[ InflatableBots ]

We present a bioinspired passive dynamic foot in which the claws are actuated solely by the impact energy. Our gripper simultaneously resolves the issue of smooth absorption of the impact energy and fast closure of the claws by linking the motion of an ankle linkage and the claws through soft tendons.

[ Paper ]

In this video, a 3-UPU exoskeleton robot for the wrist joint is designed and controlled to perform wrist extension, flexion, radial deviation, and ulnar deviation motions in stroke-affected patients. This is the first time a 3-UPU robot has been used effectively for any kind of task.

UPU refers to the joint arrangement of each actuated limb: a prismatic joint between two universal joints.

[ BAS ]

Thanks, Tony!

BRUCE Got Spot-ted at ICRA2024.

[ Westwood Robotics ]

Parachutes: maybe not as good of an idea for drones as you might think.

[ Wing ]

In this paper, we propose a system for the artist-directed authoring of stylized bipedal walking gaits, tailored for execution on robotic characters. To demonstrate the utility of our approach, we animate gaits for a custom, free-walking robotic character, and show, with two additional in-simulation examples, how our procedural animation technique generalizes to bipeds with different degrees of freedom, proportions, and mass distributions.

[ Disney Research ]

The European drone project Labyrinth aims to keep new and conventional air traffic separate, especially in busy airspaces such as those expected in urban areas. The project provides a new drone traffic service and illustrates its potential to improve the safety and efficiency of civil land, air and sea transport, as well as emergency and rescue operations.

[ DLR ]

This CMU RI Seminar is from Kim Baraka at Vrije Universiteit Amsterdam, on “Why We Should Build Robot Apprentices And Why We Shouldn’t Do It Alone.”

For robots to be able to truly integrate into human-populated, dynamic, and unpredictable environments, they will have to have strong adaptive capabilities. In this talk, I argue that these adaptive capabilities should leverage interaction with end users, who know how (they want) a robot to act in that environment. I will present an overview of my past and ongoing work on the topic of Human-Interactive Robot Learning, a growing interdisciplinary subfield that embraces rich, bidirectional interaction to shape robot learning. I will discuss contributions on the algorithmic, interface, and interaction design fronts, showcasing several collaborations with animal behaviorists/trainers, dancers, puppeteers, and medical practitioners.

[ CMU RI ]

Reference: https://ift.tt/rVUepsG

Five Cool Tech Demos from the ARPA-E Summit




Nearly 400 exhibitors representing the boldest energy innovations in the United States came together last week at the annual ARPA-E Energy Innovation Summit. The conference, hosted in Dallas by the U.S. Advanced Research Projects Agency–Energy (ARPA-E), showcased the agency’s bets on early-stage energy technologies that can disrupt the status quo. U.S. Secretary of Energy Jennifer Granholm spoke at the summit. “The people in this room are America’s best hope” in the race to unleash the power of clean energy, she said. “The technologies you create will decide whether we win that race. But no pressure,” she quipped. IEEE Spectrum spent three days meandering the aisles of the showcase. Here are five of our favorite demonstrations.

Gas Li-ion batteries thwart extreme cold

South 8 Technologies demonstrates the cold tolerance of its Li-ion battery by burying it in ice at the 2024 ARPA-E Energy Innovation Summit. Emily Waltz

Made with a liquified gas electrolyte instead of the standard liquid solvent, a new kind of lithium-ion battery from South 8 Technologies in San Diego stands up to extreme cold, refusing to freeze until temps drop below -80 °C. That's a big improvement on conventional Li-ion batteries, which start to degrade when temps reach 0 °C and shut down at about -20 °C. “You lose about half of your range in an electric vehicle if you drive it in the middle of winter in Michigan,” says Cyrus Rustomji, co-founder of South 8. To prove their point, Rustomji and his team set out a bucket of dry ice at nearly -80 °C at their booth at the ARPA-E summit and put flashlights in it—one powered by a South 8 battery and one powered by a conventional Li-ion cell. The latter flashlight went out after about 10 minutes; South 8's kept going for the next 15 hours. Rustomji says he expects EV batteries made with South 8's technology to maintain nearly full range at -40 °C and degrade only gradually below that.

A shining flashlight sits on dry ice next to a container of battery cells. South 8 Technologies

Conventional Li-ion batteries use liquid solvents, such as ethylene carbonate and dimethyl carbonate, as the electrolyte. The electrolyte serves as a medium through which lithium salt moves from one electrode to the other in the battery, shuttling electricity. When it's cold, the carbonates thicken, which lowers the power of the battery. They can also freeze, which shuts down all conductivity. South 8 swapped out the carbonates for some industrial liquified gases with low freezing points (a recipe the company won't disclose).

Using liquified gases also reduces fire risk because the gas very quickly evaporates from a damaged battery cell, removing fuel that could burn and catch the battery on fire. If a conventional Li-ion battery gets damaged, it can short circuit and quickly become hot—like over 800 °C hot. This causes the liquid electrolyte to heat adjacent cells and potentially start a fire.

There’s another benefit to this battery, and this one will make EV drivers very happy: It will only take 10 minutes to reach 80 percent charge in EVs powered by these batteries, estimates Rustomji. That’s because liquified gas has a lower viscosity than carbonate-based electrolytes, which allows the lithium salt to move from one electrode to the other at a faster rate, shortening the time it takes to recharge the battery.

South 8’s latest improvement is a high voltage cathode that reduces material costs and could enable fast charging down to five minutes for a full charge. “We have the world record for a high voltage, low temperature cathode,” says Rustomji.

Liquid cooling won’t leak on servers

Chilldyne guarantees its liquid cooling system won’t leak even if tubes get hacked in half, as IEEE Spectrum editor Emily Waltz demonstrates at the 2024 ARPA-E Energy Innovation Summit. Emily Waltz

Data centers need serious cooling technologies to keep servers from overheating, and sometimes air conditioning just isn’t enough. In fact, the latest Blackwell chips from Nvidia require liquid cooling, which is more energy efficient than air. But liquid cooling tends to make data center operators nervous. “A bomb won’t do as much damage as a leaky liquid cooling system,” says Steve Harrington, CEO at Chilldyne. His company, based in Carlsbad, California, offers liquid cooling guaranteed not to leak, even if the coolant lines get chopped in half. (They aren’t kidding; Chilldyne brought an axe to its demonstration at ARPA-E and let Spectrum try it out. Watch the blue cooling liquid immediately disappear from the tube after it’s chopped.)

Hands holding pliers snip at a tube of liquid coolant in a server. Chilldyne

The system is leak-proof because Chilldyne's negative pressure system pulls, rather than pushes, liquid coolant through tubes, like a vacuum. The tubes wind through servers, absorbing heat through cold plates, and return the warmed liquid to tanks in a cooling distribution unit. This unit transfers the heat outside and supplies cooled liquid back to the servers. If a component anywhere in the cooling loop breaks, the liquid is immediately sucked back into the tanks before it can leak. Key to the technology: low-thermal-resistance cold plates attached to each server's processors, such as the CPUs or GPUs. The cold plates absorb heat by convection, transferring it to the coolant tubes that run through them. Chilldyne optimized the cold plate using corkscrew-shaped metal channels, called turbulators, that force water around them “like little tornadoes,” maximizing the heat absorbed, says Harrington. The company developed the cold plate under an ARPA-E grant and is now measuring the energy savings of liquid cooling through an ARPA-E program.

Salvaged mining waste also sequesters CO2

Photo of a woman in a red jacket holding a container. Phoenix Tailings’ senior research scientist Rita Silbernagel explains how mining waste contains useful metals and rare earth elements and can also be used as a place to store carbon dioxide.Emily Waltz

Mining leaves behind piles of waste after the commercially viable material is extracted. This waste, known as tailings, can contain rare earth elements and valuable metals that were too difficult to extract with conventional mining techniques. Phoenix Tailings—a start-up based in Woburn, Mass.—extracts metals and rare earth elements from tailings in a process that leaves behind no waste and creates no direct carbon dioxide emissions. The company’s process starts with a hydrometallurgical treatment that separates rare earth elements away from the tailings, which contain iron, aluminum and other common elements. Next the company uses a novel solvent extraction method to separate the rare earth elements from each other and purify the desired element in the form of an oxide. The rare earth oxide then undergoes a molten salt electrolysis process that converts it into a solid metal form. Phoenix Tailings focuses on extracting neodymium, neodymium-praseodymium alloy, dysprosium, and ferro dysprosium alloy, which are rare earth metals used in permanent magnets for EVs, wind turbines, jet engines and other applications. The company is evaluating several tailings sites in the U.S., including in upstate New York.

The company has also developed a process to extract metals such as nickel, copper, and cobalt from mining tailings, while simultaneously sequestering carbon dioxide. The approach involves injecting CO2 into the tailings, where it reacts with minerals, transforming them into carbonates—compounds that contain the carbonate ion, which contains three oxygens and one carbon atom. After the mineral carbonation process, the nickel or other metals are selectively leached from the mixture, yielding high quality nickel that can be used by EV battery and stainless steel industries.

Better still, this whole process, says Rita Silbernagel, senior research scientist at Phoenix Tailings, absorbs more CO2 than it emits.

Hydrokinetic turbines: a new business model

Emrgy adjusts the height of its hydrokinetic turbines at the 2024 ARPA-E Energy Innovation Summit. The company plans to install them in old irrigation channels to generate renewable energy and new revenue streams for rural communities. Emily Waltz

These hydrokinetic turbines run in irrigation channels, generating electricity and revenue for rural communities. Developed by Emrgy in Atlanta, the turbines can change in height and blade pitch based on the flow of the water. The company plans to put them in irrigation channels that were built to bring water from snow melt in the Rocky Mountains to agricultural areas in the Western United States. Emrgy estimates that there are more than 160,000 kilometers of these waterways in the United States. The system is aging and losing water, but it’s hard for water districts to justify the cost of repairing them, says Tom Cuthbert, chief technology officer at Emrgy. The company’s solution is to place its hydrokinetic turbines throughout these waterways as a way to generate renewable electricity and pay for upgrades to the irrigation channels.

The concept of placing hydrokinetic turbines in waterways isn’t new, but until recent years connecting them to the grid wasn’t practical. Emrgy’s timing takes advantage of the groundwork laid by the solar power industry. The company has five pilot projects in the works in the United States and New Zealand. “We found that existing water infrastructure is a massive overlooked real estate segment that is ripe for renewable energy development,” says Emily Morris, CEO and founder of Emrgy.

Pressurized water stores energy deep underground

Photo of blue pipe with a display board. Quidnet Energy brought a wellhead to the 2024 ARPA-E Energy Innovation Summit to demonstrate its geoengineered energy storage system.Emily Waltz

Quidnet Energy brought a whole wellhead to the ARPA-E summit to demonstrate its underground pumped hydro storage technique. The Houston-based company’s geoengineered system stores energy as pressurized water deep underground. It consists of a surface-level pond, a deep well, an underground reservoir at the end of the well, and a pump system that moves pressurized water from the pond to the underground reservoir and back. The design doesn’t require an elevation change like traditional pumped storage hydropower.

An illustration of how a pressurized pump works. Quidnet’s system consists of a surface-level pond, a deep well, an underground reservoir at the end of the well, and a pump system that moves pressurized water from the pond to the underground reservoir and back.Quidnet Energy

It works like this: Electricity from renewable sources powers a pump that sends water from the surface pond into a wellhead and down a well that's about 300 meters deep. At the end of the well, the pumped water flows under pressure into a previously engineered fracture in the rock, creating a reservoir that's hundreds of meters wide and sits beneath the weight of the whole column of rock above it, says Bunker Hill, vice president of engineering at Quidnet. The wellhead then closes, and the water remains under high pressure, keeping energy stored in the reservoir for days if necessary. When electricity is needed, the well is opened, letting the pressurized water run up the same well. Above ground, the water passes through a hydroelectric turbine, generating 2 to 8 megawatts of electricity. The spent water then returns to the surface pond, ready for the next cycle. “The hard part is making sure the underground reservoir doesn’t lose water,” says Hill. To that end, the company developed customized sealing solutions that get injected into the fracture, sealing in the water.
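Those power figures are consistent with textbook hydropower arithmetic: output is water density times gravity times pressure head times flow rate times turbine efficiency. The flow rate and efficiency in this back-of-the-envelope sketch are illustrative guesses, not Quidnet's figures:

```python
RHO_WATER = 1000.0  # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_mw(head_m: float, flow_m3_s: float, efficiency: float) -> float:
    """Turbine output in megawatts: P = rho * g * h * Q * eta."""
    return RHO_WATER * G * head_m * flow_m3_s * efficiency / 1e6

# A ~300 m head with an assumed 1 m^3/s flow at 85 percent efficiency
# lands in the single-digit-megawatt range the company cites:
print(hydro_power_mw(head_m=300, flow_m3_s=1.0, efficiency=0.85))  # ~2.5 MW
```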

Reference: https://ift.tt/cwH2DLe

Thursday, May 30, 2024

Tech giants form AI group to counter Nvidia with new interconnect standard


Abstract image of data center with flowchart.

Enlarge (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.

Read 5 remaining paragraphs | Comments

Reference: https://ift.tt/8wfd5Ng

1-bit LLMs Could Solve AI’s Energy Demands




Large language models, the AI systems that power chatbots like ChatGPT, are getting better and better—but they're also getting bigger and bigger, demanding more energy and computational power. To make LLMs cheap, fast, and environmentally friendly, researchers will need to shrink them, ideally enough to run directly on devices like cell phones. Researchers are finding ways to do just that by drastically rounding off the many high-precision numbers that store their memories so that each equals just 1 or -1.

LLMs, like all neural networks, are trained by altering the strengths of connections between their artificial neurons. These strengths are stored as mathematical parameters. Researchers have long compressed networks by reducing the precision of these parameters—a process called quantization—so that instead of taking up 16 bits each, they might take up 8 or 4. Now researchers are pushing the envelope to a single bit.
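The idea behind quantization can be sketched in a few lines: map each high-precision weight onto a small grid of evenly spaced levels. The function below is a generic round-to-nearest illustration with invented names, not the method used by any of the teams discussed here:

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric round-to-nearest quantization.

    Snaps each weight to one of the 2**(bits-1) - 1 evenly spaced
    levels on each side of zero, then scales back so the result can
    be compared with the original.
    """
    levels = 2 ** (bits - 1) - 1            # e.g. 7 positive levels for 4 bits
    scale = np.max(np.abs(weights)) / levels
    codes = np.round(weights / scale)       # integer codes in [-levels, levels]
    return codes * scale                    # dequantized approximation

w = np.array([0.42, -1.30, 0.07, 0.99])
w4 = quantize(w, bits=4)   # coarse 4-bit approximation of w
w1 = np.sign(w)            # the 1-bit extreme: keep only each weight's sign
```

Going from 16 bits to 4 already shrinks storage fourfold; the 1-bit case throws away everything except the sign, which is why making it work well is hard.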

How to make a 1-bit LLM

There are two general approaches. One, called post-training quantization (PTQ), is to quantize the parameters of an already-trained full-precision network. The other, quantization-aware training (QAT), is to train a network from scratch to have low-precision parameters. So far, PTQ has been more popular with researchers.

In February, a team including Haotong Qin at ETH Zürich, Xianglong Liu at Beihang University, and Wei Huang at the University of Hong Kong introduced a PTQ method called BiLLM. It approximates most parameters in a network using 1 bit, but represents a few salient weights—those most influential to performance—using 2 bits. In one test, the team binarized a version of Meta’s LLaMA LLM that has 13 billion parameters.
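That split between ordinary and salient weights can be caricatured in code. This is only a loose sketch of the idea, with invented names and a simplistic four-level grid, not BiLLM's actual algorithm:

```python
import numpy as np

def binarize_with_salient(w: np.ndarray, salient_frac: float = 0.1) -> np.ndarray:
    """Cartoon of mixed-precision binarization: most weights collapse
    to +/- (mean magnitude), i.e. 1 bit, while the top `salient_frac`
    weights by magnitude keep a finer four-level (2-bit) grid."""
    n_salient = max(1, int(len(w) * salient_frac))
    salient_idx = np.argsort(np.abs(w))[-n_salient:]   # most influential weights
    out = np.sign(w) * np.mean(np.abs(w))              # 1-bit approximation
    levels = np.max(np.abs(w)) * np.array([-1.0, -1 / 3, 1 / 3, 1.0])
    for i in salient_idx:                              # 2 bits for salient weights
        out[i] = levels[np.argmin(np.abs(levels - w[i]))]
    return out

w = np.array([0.10, -0.20, 0.15, 2.00])
wq = binarize_with_salient(w, salient_frac=0.25)  # the 2.00 outlier keeps its value
```

The point of the split is that a handful of outsized weights do a disproportionate share of the work, so spending an extra bit on them costs little memory but preserves much of the model's performance.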

“1-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs.” —Furu Wei, Microsoft Research Asia

To score performance, the researchers used a metric called perplexity, which is basically a measure of how surprised the trained model was by each ensuing piece of text. For one dataset, the original model had a perplexity of around 5, and the BiLLM version scored around 15, much better than the closest binarization competitor, which scored around 37 (for perplexity, lower numbers are better). Meanwhile, the BiLLM model required only about a tenth of the memory capacity of the original.
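For a sense of what those numbers mean: perplexity is the exponential of the average negative log-probability the model assigns to the text it sees, so a model with perplexity 5 is, loosely, as surprised as if it were guessing among 5 equally likely tokens. A toy computation (the probabilities are invented for illustration):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability per token."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model that assigns probability 0.2 to every observed token is,
# on average, choosing among 1 / 0.2 = 5 equally likely options:
print(perplexity([0.2, 0.2, 0.2, 0.2]))  # 5.0 (up to floating-point rounding)
```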

PTQ has several advantages over QAT, says Wanxiang Che, a computer scientist at Harbin Institute of Technology, in China. It doesn’t require collecting training data, it doesn’t require training a model from scratch, and the training process is more stable. QAT, on the other hand, has the potential to make models more accurate, since quantization is built into the model from the beginning.

1-bit LLMs find success against their larger cousins

Last year, a team led by Furu Wei and Shuming Ma, at Microsoft Research Asia, in Beijing, created BitNet, the first 1-bit QAT method for LLMs. After fiddling with the rate at which the network adjusts its parameters, in order to stabilize training, they created LLMs that performed better than those created using PTQ methods. They were still not as good as full-precision networks, but roughly ten times as energy-efficient.

In February, Wei’s team announced BitNet 1.58b, in which parameters can equal -1, 0, or 1, which means they take up roughly 1.58 bits of memory per parameter. A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model with the same number of parameters and amount of training—Wei called this an “aha moment”—but it was 2.71 times as fast, used 72 percent less GPU memory, and used 94 percent less GPU energy. Further, the researchers found that as they trained larger models, efficiency advantages improved.
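The odd-looking 1.58 figure is just log2(3): a parameter with three possible states carries about 1.58 bits of information. The snapping to {-1, 0, 1} can be sketched with an absolute-mean scaling rule; the recipe below is a rough illustration of the idea, not BitNet's published procedure:

```python
import math
import numpy as np

print(math.log2(3))  # ~1.585 bits per three-state parameter

def ternarize(w: np.ndarray) -> np.ndarray:
    """Snap weights to {-1, 0, 1}: scale by the mean magnitude,
    round to the nearest integer, and clip."""
    gamma = np.mean(np.abs(w)) + 1e-8   # small epsilon avoids division by zero
    return np.clip(np.round(w / gamma), -1, 1).astype(int)

w = np.array([0.9, -0.05, 0.4, -1.2])
print(ternarize(w))  # [ 1  0  1 -1]
```

Small weights fall to 0, which is what distinguishes this ternary scheme from the pure sign-only 1-bit case.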

A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model.

This year, a team led by Che, of Harbin Institute of Technology, released a preprint on another LLM binarization method, called OneBit. OneBit combines elements of both PTQ and QAT. It uses a full-precision pre-trained LLM to generate data for training a quantized version. The team’s 13-billion-parameter model achieved a perplexity score of around 9 on one dataset, versus 5 for a LLaMA model with 13 billion parameters. Meanwhile, OneBit occupied only 10 percent as much memory. On customized chips, it could presumably run much faster.

Wei, of Microsoft, says quantized models have multiple advantages. They can fit on smaller chips, they require less data transfer between memory and processors, and they allow for faster processing. Current hardware can’t take full advantage of these models, though. LLMs often run on GPUs like those made by Nvidia, which represent weights using higher precision and spend most of their energy multiplying them. New hardware could natively represent each parameter as a -1 or 1 (or 0), and then simply add and subtract values and avoid multiplication. “1-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs,” Wei says.
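The multiplication-free point is easy to see in code. With weights limited to -1, 0, and 1, a dot product (the core operation of a matrix multiply) reduces to selectively adding and subtracting activations; this sketch just makes that arithmetic explicit in software:

```python
def ternary_dot(weights, activations):
    """Dot product with weights in {-1, 0, 1}: no multiplications,
    only additions, subtractions, and skips."""
    total = 0.0
    for w, a in zip(weights, activations):
        if w == 1:
            total += a
        elif w == -1:
            total -= a
        # w == 0 contributes nothing, so it is skipped entirely
    return total

print(ternary_dot([1, -1, 0, 1], [0.5, 2.0, 3.0, 1.5]))  # 0.5 - 2.0 + 1.5 = 0.0
```

On today's CPUs and GPUs this saves little, but hardware built around adders instead of multipliers could exploit it directly, which is the opportunity Wei describes.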

“They should grow up together,” Huang, of the University of Hong Kong, says of 1-bit models and processors. “But it’s a long way to develop new hardware.”

Reference: https://ift.tt/v0KsHVN

Law enforcement operation takes aim at an often-overlooked cybercrime linchpin



Enlarge (credit: Getty Images)

An international cast of law enforcement agencies has struck a blow at a cybercrime linchpin that’s as obscure as it is instrumental in the mass-infection of devices: so-called droppers, the sneaky software that’s used to install ransomware, spyware, and all manner of other malware.

Europol said Wednesday it made four arrests, took down 100 servers, and seized 2,000 domain names that were facilitating six of the best-known droppers. Officials also added eight fugitives linked to the enterprises to Europe’s Most Wanted list. The droppers named by Europol are IcedID, SystemBC, Pikabot, Smokeloader, Bumblebee, and Trickbot.

Droppers provide two specialized functions. First, they use encryption, code-obfuscation, and similar techniques to cloak malicious code inside a packer or other form of container. These containers are then put into email attachments, malicious websites, or alongside legitimate software available through malicious web ads. Second, the malware droppers serve as specialized botnets that facilitate the installation of additional malware.

Read 9 remaining paragraphs | Comments

Reference: https://ift.tt/wKDehbV

Mystery malware destroys 600,000 routers from a single ISP during 72-hour span



Enlarge (credit: Getty Images)

One day last October, subscribers to an ISP known as Windstream began flooding message boards with reports their routers had suddenly stopped working and remained unresponsive to reboots and all other attempts to revive them.

“The routers now just sit there with a steady red light on the front,” one user wrote, referring to the ActionTec T3200 router models Windstream provided to both them and a next door neighbor. “They won't even respond to a RESET.”

In the messages—which appeared over a few days beginning on October 25—many Windstream users blamed the ISP for the mass bricking. They said it was the result of the company pushing updates that poisoned the devices. Windstream’s Kinetic broadband service has about 1.6 million subscribers in 18 states, including Iowa, Alabama, Arkansas, Georgia, and Kentucky. For many customers, Kinetic provides an essential link to the outside world.

Read 17 remaining paragraphs | Comments

Reference : https://ift.tt/60oZPxs

Wednesday, May 29, 2024

Researchers crack 11-year-old password, recover $3 million in bitcoin


Illustration of a wallet

Enlarge (credit: Flavio Coelho/Getty Images)

Two years ago when “Michael,” an owner of cryptocurrency, contacted Joe Grand to help recover access to about $2 million worth of bitcoin he stored in encrypted format on his computer, Grand turned him down.

Michael, who is based in Europe and asked to remain anonymous, stored the cryptocurrency in a password-protected digital wallet. He had generated a 20-character password with the RoboForm password manager but, worried that someone might hack his computer and obtain the password, never saved it in RoboForm; instead, he kept it in a file encrypted with a tool called TrueCrypt. At some point, that file got corrupted, and Michael lost access to the password securing his 43.6 BTC (worth a total of about 4,000 euros, or $5,300, in 2013).

“At [that] time, I was really paranoid with my security,” he laughs.

Read 26 remaining paragraphs | Comments

Reference : https://ift.tt/nUs8zFh

Using AI to Clear Land Mines in Ukraine




Stephen Cass: Hello. I’m Stephen Cass, Special Projects Director at IEEE Spectrum. Before starting today’s episode hosted by Eliza Strickland, I wanted to give you all listening out there some news about this show.

This is our last episode of Fixing the Future. We’ve really enjoyed bringing you some concrete solutions to some of the world’s toughest problems, but we’ve decided we’d like to be able to go deeper into topics than we can in the course of a single episode. So we’ll be returning later in the year with a program of limited series that will enable us to do those deep dives into fascinating and challenging stories in the world of technology. I want to thank you all for listening and I hope you’ll join us again. And now, on to today’s episode.

Eliza Strickland: Hi, I’m Eliza Strickland for IEEE Spectrum‘s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.IEEE.org/newsletters to subscribe.

Around the world, about 60 countries are contaminated with land mines and unexploded ordnance, and Ukraine is the worst off. Today, about a third of its land, an area the size of Florida, is estimated to be contaminated with dangerous explosives. My guest today is Gabriel Steinberg, who co-founded both the nonprofit Demining Research Community and the startup Safe Pro AI with his friend, Jasper Baur. Their technology uses drones and artificial intelligence to radically speed up the process of finding land mines and other explosives. Okay, Gabriel, thank you so much for joining me on Fixing the Future today.

Gabriel Steinberg: Yeah, thank you for having me.

Strickland: So I want to start by hearing about the typical process for demining, and so the standard operating procedure. What tools do people use? How long does it take? What are the risks involved? All that kind of stuff.

Steinberg: Sure. So humanitarian demining hasn’t changed significantly. There have been evolutions, of course, since its inception around the end of World War I. But mostly, the processes have been the same. People start from a safe location and walk around an area in places that they know are safe, and try to get as much intelligence about the contamination as they can. They ask villagers or farmers, people who work and live around the area, about accidents and potential sightings of minefields and former battle positions and so on. The result of this is a very general idea, a polygon, of where the contamination is. That first part is called the non-technical survey. After that polygon is drawn and some prioritization is done based on danger to civilians and economic utility, the field goes into clearance. Clearance happens one of three ways, usually, but it always ends up with a person on the ground basically doing extreme gardening. They dig out a certain standard depth of soil, usually 13 centimeters. And with a metal detector and a mine probe, they walk around the field and find the land mines and unexploded ordnance. So that’s always how it ends.

To get to that point, you can also use mechanical assets, which are large tillers, and sometimes dogs and other animals are used to walk in lanes across the contaminated polygon to sniff out the land mines and tell the clearance operators where the land mines are.

Strickland: How do you hope that your technology will change this process?

Steinberg: Well, my technology is a drone-based mapping solution, basically. So we provide a software to the humanitarian deminers. They are already flying drones over these areas. Really, it started ramping up in Ukraine. The humanitarian demining organizations have started really adopting drones just because it’s such a massive problem. The extent is so extreme that they need to innovate. So we provide AI and mapping software for the deminers to analyze their drone imagery much more effectively. We hope that this process, or our software, will decrease the amount of time that deminers use to analyze the imagery of the land, thereby more quickly and more effectively constraining the areas with the most contamination. So if you can constrain an area, a polygon with a certainty of contamination and a high density of contamination, then you can deploy the most expensive parts of the clearance process, which are the humans and the machines and the dogs. You can deploy them to a very specific area. You can much more cost-effectively and efficiently demine large areas.

Strickland: Got it. So it doesn’t replace the humans walking around with metal detectors and dogs, but it gets them to the right spots faster.

Steinberg: Exactly. Exactly. At the moment, there is no conception of replacing a human in demining operations, and people that try to push that eventuality are usually disregarded pretty quickly.

Strickland: How did you and your co-founder, Jasper, first start experimenting with the use of drones and AI for detecting explosives?

Steinberg: So it started in 2016 with my partner, Jasper Baur, doing a research project at Binghamton University in the remote sensing and geophysics lab. And the project was to detect a specific anti-personnel land mine, the PFM-1. It’s a Russian-made land mine that was previously found in Afghanistan—it still is—but it’s found in much higher quantities right now in Ukraine. And so his project was to detect the PFM-1 anti-personnel land mine using thermal imagery from drones. It sort of snowballed into quite an intensive research project, with multiple papers, multiple researchers, some awards, and most notably, it beat NASA at a particular Tech Briefs competition. So that was quite a morale boost.

And at some point, Jasper had the idea to integrate AI into the project. Rightfully, he saw the real bottleneck not as detecting land mines in drone imagery, but as analyzing that imagery. He knew, somehow, that that would become the issue everybody is facing, and everybody we talked to in Ukraine is facing that issue. So machine learning really was the key to solving that problem. And I joined the project in 2018 to integrate machine learning into the research project. We had some more papers, some more presentations, and we were nearing the end of our undergraduate degree in 2020. At that time, we realized how much the field needed this. We started getting more and more into the mine action field, and realizing how neglected the field was in terms of technology and innovation. And we felt an obligation to bring our technology to the real world instead of leaving it as just a research project. There were plenty of research projects about this, but we knew that it could, and should, be more. And for some reason, we felt like we had the capability to make that happen.

So we formed a nonprofit, the Demining Research Community, in 2020 to try to raise some funding for this project. Our for-profit end of that, of our endeavors, was acquired by a company called Safe Pro Group in 2023. Yeah, 2023, about one year ago exactly. And the drone and AI technology became Safe Pro AI and our flagship product spotlight. And that’s where we’re bringing the technology to the real world. The Demining Research Community is providing resources for other organizations who want to do a similar thing, and is doing more research into more nascent technologies. But yeah, the real drone and AI stuff that’s happening in the real world right now is through Safe Pro.

Strickland: So in that early undergraduate work, you were using thermal sensors. I know now the Spotlight AI system is using more visual. Can you talk about the different modalities of sensing explosives and the sort of trade-offs you get with them?

Steinberg: Sure. So I feel like I should preface this by saying the more high tech and nascent the technology is, the more people want to see it apply to land mine detection. But really, we have found from the problems that people are facing, by far the most effective modality right now is just visual imagery. People have really good visual sensors built into their face, and you don’t need a trained geophysicist to observe the data and very, very quickly get actionable intelligence. There’s also plenty of other benefits. It’s cheaper, much more readily accessible in Ukraine and around the world to get built-in visual sensors on drones. And yeah, just processing the data, and getting the intelligence from the data, is way easier than anything else.

I’ll talk about three different modalities. Well, I guess I could talk about four. There’s thermal, ground penetrating radar, magnetometry, and lidar. So thermal is what we started with. Thermal is really good at detecting living things, as I’m sure most people can surmise. But it’s also pretty good at detecting land mines, mostly large anti-tank land mines buried under a couple millimeters, or up to a couple centimeters, of soil. It’s not super good at this. The research is still not super conclusive, and you have to do it at a very specific time of day, in the morning and at night when, basically the soil around the land mine heats up faster than the land mine and you cause a thermal anomaly, or the sun causes a thermal anomaly. So it can detect things, land mines, in some amount of depth in certain soils, in certain weather conditions, and can only detect certain types of land mines that are big and hefty enough. So yeah, that’s thermal.

Ground penetrating radar is really good for some things. It’s not really great for land mine detection. You have to have really expensive equipment. It takes a really long time to do the surveys. However, it can get plastic land mines under the surface. And it’s kind of the only modality that can do that with reliability. However, you need to train geophysicists to analyze the data. And a lot of the time, the signatures are really non-unique and there’s going to be a lot of false positives. Magnetometry is the other one. By the way, everything I’m referring to here is airborne. Ground-based GPR and magnetometry are used in demining of various types, but airborne is really what I’m talking about.

For magnetometry, it’s more developed and more capable than ground penetrating radar. It’s used, actually, in the field in Ukraine in some scenarios, but it’s still very expensive. It needs a trained geophysicist to analyze the data, and the signatures are non-unique. So whether it’s a bottle can or a small anti-personnel land mine, you really don’t know until you dig it up. However, I think if I were to bet on one of the other modalities becoming increasingly useful in the next couple of years, it would be airborne magnetometry.

Lidar is another modality that people use. It’s pretty quick, also very expensive, but it can reliably map and find surface anomalies. So if you want to find former fighting positions, sometimes an indicator of that is a trench line or foxholes. Lidar is really good at doing that in conflicts from long ago. So there’s a paper that the HALO Trust published of flying a lidar mission over former fighting positions, I believe, in Angola. And they reliably found a former trench line. And from that information, they confirmed that as a hazardous area. Because if there is a former front line on this position, you can pretty reliably say that there is going to be some explosives there.

Strickland: And so you’ve done some experiments with some of these modalities, but in the end, you found that the visual sensor was really the best bet for you guys?

Steinberg: Yeah. It’s different. The requirements are different for different scenarios and different locations, really. Ukraine has a lot of surface ordnance. Yeah. And that’s really the main factor that allows visual imagery to be so powerful.

Strickland: So tell me about what role machine learning plays in your Spotlight AI software system. Did you create a model trained on a lot of— did you create a model based on a lot of data showing land mines on the surface?

Steinberg: Yeah. Exactly. We used real-world data from inert, non-explosive items, and flew drone missions over them, and did some physical augmentation and some programmatic augmentation. But all of the items that we are training on are real-life Russian or American ordnance, mostly. We’re also using the real-world data in real minefields that we’re getting from Ukraine right now. That is, obviously, the most valuable data and the most effective in building a machine learning model. But yeah, a lot of our data is from inert explosives, as well.

Strickland: So you’ve talked a little bit about the current situation in Ukraine, but can you tell me more about what people are dealing with there? Are there a lot of areas where the battle has moved on and civilians are trying to reclaim roads or fields?

Steinberg: Yeah. So the fighting is constantly ongoing, obviously, in eastern Ukraine, but I think sometimes there’s a perspective of a stalemate. I think that’s a little misleading. There’s lots of action and violence happening on the front line, which constantly contaminates, cumulatively, the areas that are the front line and the gray zone, as well as areas up to 50 kilometers back from both sides. So there’s constantly artillery shells going into villages and cities along the front line. There’s constantly land mines, new mines, being laid to reinforce the positions. And there’s constantly mortars. And everything is constant. In some fights—I just watched the video yesterday—one of the soldiers said you could not count to five without an explosion going off. And this is just one location in one city along the front. So you can imagine the amount of explosive ordnance that are being fired, and inevitably 10, 20, 30 percent of them are sometimes not exploding upon impact, on top of all the land mines that are being purposely laid and not detonating from a vehicle or a person. These all just remain after the war. They don’t go anywhere. So yeah, Ukraine is really being littered with explosive ordnance and land mines every day.

This past year, there hasn’t been terribly much movement on the front line. But in the Ukrainian counteroffensive in 2020— I guess the last major Ukrainian counteroffensive where areas of Mykolaiv, which is in the southeast, were reclaimed, the civilians started repopulating the city almost immediately. There are definitely some villages that are heavily contaminated, that people just deserted and never came back to, and still haven’t come back to after them being liberated. But a lot of the areas that have been liberated, they’re people’s homes. And even if they’re destroyed, people would rather be in their homes than be refugees. And I mean, I totally understand that. And it just puts the responsibility on the deminers and the Ukrainian government to try to clear the land as fast as possible. Because after large liberations are made, people want to come back almost all the time. So it is a very urgent problem as the lines change and as land is liberated.

Strickland: And I think it was about a year ago that you and Jasper went to the Ukraine for a technology demonstration set up by the United Nations. Can you tell about that, and what the task was, and how your technology fared?

Steinberg: Sure. So yeah, the United Nations Development Program invited us to do a demonstration in northern Ukraine to see how our technology, and other technologies similar to it, performed in a military training facility in Ukraine. So everybody who’s doing this kind of thing, which is not many people, but there are some other organizations, they have their own metrics and their own test fields— not always, but it would be good if they did. But the UNDP said, “No, we want to standardize this and try to give recommendations to the organizations on the ground who are trying to adopt these technologies.” So we had five hours to survey the field and collect as much data as we could. And then we had 72 hours to return the results. We—

Strickland: Sorry. How big was the field?

Steinberg: The field was 25 hectares. So yeah, the audience at home can convert 25 hectares to football fields; I think it’s about 60. But it’s a large area. So we’d never done anything like that. That was really, really a shock that it was that large of an area. I think we’d only done half a hectare at a time up to that point. So yeah, it was pretty daunting. But we basically slept very, very little in those 72 hours, and as a result, produced what I think is one of the best results that the UNDP got from that test. We didn’t detect everything, but we detected most of the ordnance and land mines that they had laid. We also detected some that they didn’t know were there because it was a military training facility. So there were some mortars being fired that they didn’t know about.

Strickland: And I think Jasper told me that you had to sort of rewrite your software on the fly. You realized that the existing approach wasn’t going to work and you had to do some all-nighter to recode?

Steinberg: Yeah. Yeah, I remember us sitting in a Georgian restaurant— Georgia, the country, not the state, and racking our brain, trying to figure out how we were going to map this amount of land. We just found out how big the area was going to be and we were a little bit stunned. So we devised a plan to do it in two stages. The first stage was where we figured out in the drone images where the contaminated regions were. And then the second stage was to map those areas, just those areas. Now, our software can actually map the whole thing, and pretty casually too. So not to brag. But at the time, we had lots less development under our belt. And yeah, therefore we just had to brute force it through Georgian food and brainpower.

Strickland: You and Jasper just got back from another trip to the Ukraine a couple of weeks ago, I think. Can you talk about what you were doing on this trip, and who you met with?

Steinberg: Sure. This trip was much less stressful, although stressful in different ways than the UNDP demo. Our main objectives were to see operations in action. We had never actually been to real minefields before. We’d been in some perhaps contaminated areas, but never in a real minefield where you can say, “Here was the Russian position. There are the land mines. Do not go there.” So that was one of the main objectives. That was very powerful for us to see the villages that were destroyed and are denied to the citizens because of land mines and unexploded ordnance. It’s impossible to describe how that feels being there. It’s really impactful, and it makes the work that I’m doing feel not like I have a choice anymore. I feel very much obligated to do my absolute best to help these people.

Strickland: Well, I hope your work continues. I hope there’s less and less need for it over time. But yeah, thank you for doing this. It’s important work. And thanks for joining me on Fixing the Future.

Steinberg: My pleasure. Thank you for having me.

Strickland: That was Gabriel Steinberg speaking to me about the technology that he and Jasper Baur developed to help rid the world of land mines. I’m Eliza Strickland, and I hope you’ll join us next time on Fixing the Future.

Reference: https://ift.tt/FQmJTBc

Tuesday, May 28, 2024

US sanctions operators of “free VPN” that routed crime traffic through user PCs


US sanctions operators of “free VPN” that routed crime traffic through user PCs

Enlarge (credit: Getty Images)

The US Treasury Department has sanctioned three Chinese nationals for their involvement in a VPN-powered botnet with more than 19 million residential IP addresses they rented out to cybercriminals to obfuscate their illegal activities, including COVID-19 aid scams and bomb threats.

The criminal enterprise, the Treasury Department said Tuesday, was a residential proxy service known as 911 S5. Such services provide a bank of IP addresses belonging to everyday home users for customers to route Internet connections through. When accessing a website or other Internet service, the connection appears to originate with the home user.
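The routing trick described above can be sketched as a toy simulation (all names and addresses here are illustrative; real services relay live network traffic): a customer's request is relayed through a compromised home user's device, so the destination server logs the residential IP rather than the customer's.

```python
# Toy simulation of a residential proxy: the destination server only ever
# sees the source address of the last hop, which here is an everyday home
# user's device rather than the criminal customer's machine. All names and
# addresses are illustrative.

def serve(request: dict) -> str:
    """The destination server logs whatever source IP the request arrives from."""
    return f"request for {request['url']} from {request['src_ip']}"

def relay_via_residential_proxy(request: dict, residential_ip: str) -> dict:
    """The proxy node rewrites the apparent source before forwarding."""
    return {**request, "src_ip": residential_ip}

customer_request = {"url": "example.com", "src_ip": "203.0.113.7"}  # customer's real IP
forwarded = relay_via_residential_proxy(customer_request, "198.51.100.42")

assert "198.51.100.42" in serve(forwarded)    # server sees the home user's IP
assert "203.0.113.7" not in serve(forwarded)  # customer's IP never appears
```

This is why the connection "appears to originate with the home user": from the server's perspective, the residential device is the client.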

In 2022, researchers at the University of Sherbrooke profiled 911[.]re, a service that appears to be an earlier version of 911 S5. At the time, its infrastructure comprised 120,000 residential IP addresses. This pool was created using one of two free VPNs—MaskVPN and DewVPN—marketed to end users. Besides acting as a legitimate VPN, the software also operated as a botnet that covertly turned users’ devices into a proxy server. The complex structure was designed with the intent of making the botnet hard to reverse engineer.

Read 9 remaining paragraphs | Comments

Reference : https://ift.tt/lgkZKsy

The Forgotten History of Chinese Keyboards




Today, typing in Chinese works by converting QWERTY keystrokes into Chinese characters via a software interface, known as an input method editor. But this was not always the case. Thomas S. Mullaney’s new book, The Chinese Computer: A Global History of the Information Age, published by the MIT Press, unearths the forgotten history of Chinese input in the 20th century. In this article, which was adapted from an excerpt of the book, he details the varied Chinese input systems of the 1960s and ’70s that renounced QWERTY altogether.

“This will destroy China forever,” a young Taiwanese cadet thought as he sat in rapt attention. The renowned historian Arnold J. Toynbee was on stage, delivering a lecture at Washington and Lee University on “A Changing World in Light of History.” The talk plowed the professor’s favorite field of inquiry: the genesis, growth, death, and disintegration of human civilizations, immortalized in his magnum opus A Study of History. Tonight’s talk threw the spotlight on China.

China was Toynbee’s outlier: Ancient as Egypt, it was a civilization that had survived the ravages of time. The secret to China’s continuity, he argued, was character-based Chinese script. Character-based script served as a unifying medium, placing guardrails against centrifugal forces that might otherwise have ripped this grand and diverse civilization apart. This millennial integrity was now under threat. Indeed, as Toynbee spoke, the government in Beijing was busily deploying Hanyu pinyin, a Latin alphabet–based Romanization system.

The Taiwanese cadet listening to Toynbee was Chan-hui Yeh, a student of electrical engineering at the nearby Virginia Military Institute (VMI). That evening with Arnold Toynbee forever altered the trajectory of his life. It changed the trajectory of Chinese computing as well, triggering a cascade of events that later led to the formation of arguably the first successful Chinese IT company in history: Ideographix, founded by Yeh 14 years after Toynbee stepped offstage.

During the late 1960s and early 1970s, Chinese computing underwent multiple sea changes. No longer limited to small-scale laboratories and solo inventors, the challenge of Chinese computing was taken up by engineers, linguists, and entrepreneurs across Asia, the United States, and Europe—including Yeh’s adoptive home of Silicon Valley.

Chan-hui Yeh’s IPX keyboard featured 160 main keys, with 15 characters each. A peripheral keyboard of 15 keys was used to select the character on each key. Separate “shift” keys were used to change all of the character assignments of the 160 keys. Computer History Museum

The design of Chinese computers also changed dramatically. None of the competing designs that emerged in this era employed a QWERTY-style keyboard. Instead, one of the most successful and celebrated systems—the IPX, designed by Yeh—featured an interface with 120 levels of “shift,” packing nearly 20,000 Chinese characters and other symbols into a space only slightly larger than a QWERTY interface. Other systems featured keyboards with anywhere from 256 to 2,000 keys. Still others dispensed with keyboards altogether, employing a stylus and touch-sensitive tablet, or a grid of Chinese characters wrapped around a rotating cylindrical interface. It’s as if every kind of interface imaginable was being explored except QWERTY-style keyboards.

IPX: Yeh’s 120-dimensional hypershift Chinese keyboard

Yeh graduated from VMI in 1960 with a B.S. in electrical engineering. He went on to Cornell University, receiving his M.S. in nuclear engineering in 1963 and his Ph.D. in electrical engineering in 1965. Yeh then joined IBM, not to develop Chinese text technologies but to draw upon his background in automatic control to help develop computational simulations for large-scale manufacturing plants, like paper mills, petrochemical refineries, steel mills, and sugar mills. He was stationed in IBM’s relatively new offices in San Jose, Calif.

Toynbee’s lecture stuck with Yeh, though. While working at IBM, he spent his spare time exploring the electronic processing of Chinese characters. He felt convinced that the digitization of Chinese must be possible, that Chinese writing could be brought into the computational age. Doing so, he felt, would safeguard Chinese script against those like Chairman Mao Zedong, who seemed to equate Chinese modernization with the Romanization of Chinese script. The belief was so powerful that Yeh eventually quit his good-paying job at IBM to try and save Chinese through the power of computing.

Yeh started with the most complex parts of the Chinese lexicon and worked back from there. He fixated on one character in particular: ying 鷹 (“eagle”), an elaborate graph that requires 24 brushstrokes to compose. If he could determine an appropriate data structure for such a complex character, he reasoned, he would be well on his way. Through careful analysis, he determined that a bitmap comprising 24 vertical dots and 20 horizontal dots would do the trick, taking up 60 bytes of memory, excluding metadata. By 1968, Yeh felt confident enough to take the next big step—to patent his project, nicknamed “Iron Eagle.” The Iron Eagle project quickly garnered the interest of the Taiwanese military. Four years later, with the promise of Taiwanese government funding, Yeh founded Ideographix, in Sunnyvale, Calif.
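The storage arithmetic behind that bitmap can be checked directly; a minimal sketch:

```python
# Storage needed for Yeh's per-character bitmap: a 24-by-20 grid of
# one-bit dots, excluding any metadata.
rows, cols = 24, 20
bits = rows * cols          # one bit per dot
bytes_needed = bits // 8

assert bits == 480
assert bytes_needed == 60   # matches the 60 bytes cited for ying (鷹)
```

At 60 bytes for the most elaborate characters, even a full lexicon of bitmaps was within reach of late-1960s memory budgets.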

A single key of the IPX keyboard contained 15 characters. This key contains the character zhong (中, “central”), which is used to write “China.” MIT Press

The flagship product of Ideographix was the IPX, a computational typesetting and transmission system for Chinese built upon the complex orchestration of multiple subsystems.

The marvel of the IPX system was the keyboard subsystem, which enabled operators to enter a theoretical maximum of 19,200 Chinese characters despite its modest size: 59 cm wide, 37 cm deep, and 11 cm tall. To achieve this remarkable feat, Yeh and his colleagues decided to treat the keyboard not merely as an electronic peripheral but as a full-fledged computer unto itself: a microprocessor-controlled “intelligent terminal” completely unlike conventional QWERTY-style devices.

Seated in front of the IPX interface, the operator looked down on 160 keys arranged in a 16-by-10 grid. Each key contained not a single Chinese character but a cluster of 15 characters arranged in a miniature 3-by-5 array. Those 160 keys with 15 characters on each key yielded 2,400 Chinese characters.

The process of typing on the IPX keyboard involved pressing through a booklet of characters to depress one of 160 keys, selecting one of 15 numbers to pick a character within the key, and using separate “shift” keys to indicate when a page of the booklet was flipped. MIT Press

Chinese characters were not printed on the keys, the way that letters and numbers are emblazoned on the keys of QWERTY devices. The 160 keys themselves were blank. Instead, the 2,400 Chinese characters were printed on laminated paper, bound together in a spiral-bound booklet that the operator laid down flat atop the IPX interface. The IPX keys weren’t buttons, as on a QWERTY device, but pressure-sensitive pads. An operator would push down on the spiral-bound booklet to depress whichever key pad was directly underneath.

To reach characters 2,401 through 19,200, the operator simply turned the spiral-bound booklet to whichever page contained the desired character. The booklets contained up to eight pages—and each page contained 2,400 characters—so the total number of potential symbols came to just shy of 20,000.
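Those capacity figures follow directly from the interface geometry described above; a quick check:

```python
# Character capacity of the IPX keyboard subsystem.
keys = 16 * 10              # 160 pressure-sensitive pads in a 16-by-10 grid
chars_per_key = 3 * 5       # each key holds a 3-by-5 array of 15 characters
pages = 8                   # spiral-bound booklet of up to eight pages

per_page = keys * chars_per_key
total = per_page * pages

assert per_page == 2400     # characters reachable without turning a page
assert total == 19200       # the theoretical maximum cited for the IPX
```

Each key position thus carried 15 × 8 = 120 possible characters, which is the sense in which the IPX offered “120 levels of shift.”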

For the first seven years of its existence, the use of IPX was limited to the Taiwanese military. As years passed, the exclusivity relaxed, and Yeh began to seek out customers in both the private and public sectors. Yeh’s first major nonmilitary clients included Taiwan’s telecommunication administration and the National Taxation Bureau of Taipei. For the former, the IPX helped process and transmit millions of phone bills. For the latter, it enabled the production of tax return documents at unprecedented speed and scale. But the IPX wasn’t the only game in town.

Loh Shiu-chang, a professor at the Chinese University of Hong Kong, developed what he called “Loh’s keyboard” (Le shi jianpan 樂氏鍵盤), featuring 256 keys. Loh Shiu-chang

Mainland China’s “medium-sized” keyboards

By the mid-1970s, the People’s Republic of China was far more advanced in the arena of mainframe computing than most outsiders realized. In July 1972, just months after the famed tour by U.S. president Richard Nixon, a veritable blue-ribbon committee of prominent American computer scientists visited the PRC. The delegation visited China’s main centers of computer science at the time, and upon learning what their counterparts had been up to during the many years of Sino-American estrangement, the delegation was stunned.

But there was one key arena of computing that the delegation did not bear witness to: the computational processing of Chinese characters. It was not until October 1974 that mainland Chinese engineers began to dive seriously into this problem. Soon after, in 1975, the newly formed Chinese Character Information Processing Technology Research Office at Peking University set out upon the goal of creating a “Chinese Character Information Processing and Input System” and a “Chinese Character Keyboard.”

The group evaluated more than 10 proposals for Chinese keyboard designs. The designs fell into three general categories: a large-keyboard approach, with one key for every commonly used character; a small-keyboard approach, like the QWERTY-style keyboard; and a medium-size keyboard approach, which attempted to tread a path between these two poles.

Peking University's medium-sized keyboard design included a combination of Chinese characters and character components, as shown in this explanatory diagram. (credit: Public domain)

The team leveled two major criticisms against QWERTY-style small keyboards. First, there were just too few keys, which meant that many Chinese characters were assigned identical input sequences. What's more, QWERTY keyboards did a poor job of using keys to their full potential. For the most part, each key on a QWERTY keyboard was assigned only two symbols, one of which required the operator to depress and hold the shift key to access. A better approach, they argued, was the technique of "one key, many uses" (yijian duoyong): assigning each key a larger number of symbols to make the most of the interface's real estate.

The team also examined the large-keyboard approach, in which 2,000 or more commonly used Chinese characters were assigned to a tabletop-size interface. Several teams across China worked on various versions of these large keyboards. The Peking team, however, regarded the large-keyboard approach as excessive and unwieldy. Their goal was to exploit each key to its maximum potential, while keeping the number of keys to a minimum.

After years of work, the team in Beijing settled upon a keyboard with 256 keys, 29 of which would be dedicated to various functions, such as carriage return and spacing, and the remaining 227 used to input text. Each keystroke generated an 8-bit code, stored on punched paper tape (hence the choice of 256, or 2⁸, keys). These 8-bit codes were then translated into a 14-bit internal code, which the computer used to retrieve the desired character.

In their assignment of multiple characters to individual keys, the team’s design was reminiscent of Ideographix’s IPX machine. But there was a twist. Instead of assigning only full-bodied, stand-alone Chinese characters to each key, the team assigned a mixture of both Chinese characters and character components. Specifically, each key was associated with up to four symbols, divided among three varieties:

  • full-body Chinese characters (limited to no more than two per key)
  • partial Chinese character components (no more than three per key)
  • the uppercase symbol, reserved for switching to other languages (limited to one per key)

In all, the keyboard contained 423 full-body Chinese characters and 264 character components. When arranging these 264 character components on the keyboard, the team hit upon an elegant and ingenious way to help operators remember the location of each: They treated the keyboard as if it were a Chinese character itself. The team placed each of the 264 character components in the regions of the keyboard that corresponded to the areas where they usually appeared in Chinese characters.

In its final design, the Peking University keyboard was capable of inputting a total of 7,282 Chinese characters, which in the team’s estimation would account for more than 90 percent of all characters encountered on an average day. Within this character set, the 423 most common characters could be produced via one keystroke; 2,930 characters could be produced using two keystrokes; and a further 3,106 characters could be produced using three keystrokes. The remaining 823 characters required four or five keystrokes.

The Peking University keyboard was just one of many medium-size designs of the era. IBM created its own 256-key keyboard for Chinese and Japanese. In a design reminiscent of the IPX system, this 1970s-era keyboard included a 12-digit keypad with which the operator could “shift” between the 12 full-body Chinese characters outfitted on each key (for a total of 3,072 characters in all). In 1980, Chinese University of Hong Kong professor Loh Shiu-chang developed what he called “Loh’s keyboard” (Le shi jianpan 樂氏鍵盤), which also featured 256 keys.

But perhaps the strangest Chinese keyboard of the era was designed in England.

The cylindrical Chinese keyboard

On a winter day in 1976, a young boy named Andrew searched his Cambridge, England, home for his beloved Meccano set. A predecessor of the American Erector set, the popular British toy offered aspiring engineers hours of modular possibility. Andrew had played with the gears, axles, and metal plates recently, but today they were nowhere to be found.

Wandering into the kitchen, he caught the thief red-handed: his father, the Cambridge University researcher Robert Sloss. For three straight days and nights, Sloss had commandeered his son’s toy, engrossed in the creation of a peculiar gadget that was cylindrical and rotating. It riveted the young boy’s attention—and then the attention of the Telegraph-Herald, which dispatched a journalist to see it firsthand. Ultimately, it attracted the attention and financial backing of the U.K. telecommunications giant Cable & Wireless.

Robert Sloss was building a Chinese computer.

The elder Sloss was born in 1927 in Scotland. He joined the British navy, and was subjected to a series of intelligence tests that revealed a proclivity for foreign languages. In 1946 and 1947, he was stationed in Hong Kong. Sloss went on to join the civil service as a teacher and later, in the British air force, became a noncommissioned officer. Owing to his pedagogical experience, his knack for language, and his background in Asia, he was invited to teach Chinese at Cambridge and appointed to a lectureship in 1972.

At Cambridge, Sloss met Peter Nancarrow. Twelve years Sloss’s junior, Nancarrow trained as a physicist but later found work as a patent agent. The bearded 38-year-old then taught himself Norwegian and Russian as a “hobby” before joining forces with Sloss in a quest to build an automatic Chinese-English translation machine.

In 1976, Robert Sloss and Peter Nancarrow designed the Ideo-Matic Encoder, a Chinese input keyboard with a grid of 4,356 keys wrapped around a cylinder. (credit: PK Porthcurno)

They quickly found that the choke point in their translator design was character input: namely, how to get handwritten Chinese characters, definitions, and syntax data into a computer.

Over the following two years, Sloss and Nancarrow dedicated their energy to designing a Chinese computer interface. It was this effort that led Sloss to steal and tinker with his son’s Meccano set. Sloss’s tinkering soon bore fruit: a working prototype that the duo called the “Binary Signal Generator for Encoding Chinese Characters into Machine-compatible form”—also known as the Ideo-Matic Encoder and the Ideo-Matic 66 (named after the machine’s 66-by-66 grid of characters).

Each cell in the machine's grid was assigned a binary code corresponding to its X-column and Y-row values. Each cell measured 7 millimeters on a side, and 3,500 of the 4,356 cells were dedicated to Chinese characters. The rest were assigned to Japanese syllables or left blank.

The distinguishing feature of Sloss and Nancarrow’s interface was not the grid, however. Rather than arranging their 4,356 cells across a rectangular interface, the pair decided to wrap the grid around a rotating, tubular structure. The typist used one hand to rotate the cylindrical grid and the other hand to move a cursor left and right to indicate one of the 4,356 cells. The depression of a button produced a binary signal that corresponded to the selected Chinese character or other symbol.

The Ideo-Matic Encoder was completed and delivered to Cable & Wireless in the closing years of the 1970s. Weighing in at 7 kilograms and measuring 68 cm wide, 57 cm deep, and 23 cm tall, the machine garnered industry and media attention. Cable & Wireless purchased rights to the machine in hopes of mass-manufacturing it for the East Asian market.

QWERTY’s comeback

The IPX, the Ideo-Matic 66, Peking University's medium-size keyboards, and indeed all of the other custom-built devices discussed here would soon meet the same fate: oblivion. Change was afoot. The era of custom-designed Chinese text-processing systems was coming to an end, and a new era was taking shape, one that major corporations, entrepreneurs, and inventors were largely unprepared for. This new age has come to be known by many names: the software revolution, the personal-computing revolution, and, less rosily, the death of hardware.

From the late 1970s onward, custom-built Chinese interfaces steadily disappeared from marketplaces and laboratories alike, displaced by wave upon wave of Western-built personal computers crashing on the shores of the PRC. With those computers came the resurgence of QWERTY for Chinese input, along the same lines as the systems used by Sinophone computer users today: ones mediated by a software layer that transforms the Latin alphabet into Chinese characters. This switch to typing mediated by an input method editor, or IME, did not lead to the downfall of Chinese civilization, as the historian Arnold Toynbee might have predicted. It did, however, fundamentally change the way Chinese speakers interact with the digital world and their own language.

This article appears in the June 2024 print issue.

Reference: https://ift.tt/CvZ0D4U

The Top 10 Climate Tech Stories of 2024

In 2024, technologies to combat climate change soared above the clouds in electricity-generating kites, traveled the oceans sequestering...