Tuesday, April 30, 2024

Here’s your chance to own a decommissioned US government supercomputer


A photo of the Cheyenne supercomputer, which is now up for auction. (Credit: US General Services Administration)

On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price is currently $27,643, with the reserve not yet met.

The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful and energy-efficient system that significantly advanced atmospheric and Earth system sciences research.

"In its lifetime, Cheyenne delivered over 7 billion core-hours, served over 4,400 users, and supported nearly 1,300 NSF awards," writes the University Corporation for Atmospheric Research (UCAR) on its official Cheyenne information page. "It played a key role in education, supporting more than 80 university courses and training events. Nearly 1,000 projects were awarded for early-career graduate students and postdocs. Perhaps most tellingly, Cheyenne-powered research generated over 4,500 peer-review publications, dissertations and theses, and other works."


Reference: https://ift.tt/rleRWz5

Health care giant comes clean about recent hack and paid ransom



Change Healthcare, the health care services provider that recently experienced a ransomware attack that hamstrung the US prescription market for two weeks, was hacked through a compromised account that failed to use multifactor authentication, the company CEO told members of Congress.

The February 21 attack by a ransomware group using the names ALPHV or BlackCat took down a nationwide network Change Healthcare administers to allow healthcare providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, payment processors, providers, and patients experienced long delays in filling prescriptions for medicines, many of which were lifesaving. Change Healthcare has also reported that hackers behind the attacks obtained personal health information for a "substantial portion" of the US population.

Standard defense not in place

Andrew Witty, CEO of Change Healthcare parent company UnitedHealth Group, said the breach started on February 12 when hackers somehow obtained an account password for a portal allowing remote access to employee desktop devices. The account, Witty admitted, failed to use multifactor authentication (MFA), a standard defense against password compromises that requires additional authentication in the form of a one-time password or physical security key.
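
For readers unfamiliar with the "one-time password" form of MFA, here is a minimal sketch of the time-based one-time password (TOTP) scheme used by common authenticator apps, per RFC 6238. It is purely illustrative: the secret below is a made-up example, and nothing here reflects Change Healthcare's or UnitedHealth's actual systems.

# Minimal TOTP sketch (RFC 6238): derive a 6-digit code from a shared secret
# and the current 30-second time window. The secret is a made-up example.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period               # current time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # server and authenticator app compute the same code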


Reference: https://ift.tt/lp9LbQg

Mysterious “gpt2-chatbot” AI model appears suddenly, confuses experts


A robot fortune teller's hand hovers over a crystal ball. (Credit: Getty Images)

On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.

Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site's "side-by-side" arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day—dramatically limiting people's ability to test it in detail.

So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5—or perhaps a new version of 2019's GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2."


Reference: https://ift.tt/NHP0mfc

This Startup Uses the MIT Inventor App to Teach Girls Coding




When Marianne Smith was teaching computer science in 2016 at Flathead Valley Community College, in Kalispell, Mont., the adjunct professor noticed the female students in her class were severely outnumbered, she says.

Smith says she believed the disparity was because girls were not being introduced to science, technology, engineering, and mathematics in elementary and middle school.

Code Girls United

Founded: 2018
Headquarters: Kalispell, Mont.
Employees: 10

In 2017 she decided to do something to close the gap. The IEEE member started an after-school program to teach coding and computer science.

What began as a class of 28 students held in a local restaurant is now a statewide program run by Code Girls United, a nonprofit Smith founded in 2018. The organization has taught more than 1,000 elementary, middle, and high school students across 38 cities in Montana and three of the state’s Native American reservations. Smith has plans to expand the nonprofit to South Dakota, Wisconsin, and other states, as well as other reservations.

“Computer science is not a K–12 requirement in Montana,” Smith says. “Our program creates this rare hands-on experience that provides students with an experience that’s very empowering for girls in our community.”

The nonprofit was one of seven winners last year of MIT Solve’s Gender Equity in STEM Challenge. The initiative supports organizations that work to address gender barriers. Code Girls United received US $100,000 to use toward its program.

“The MIT Solve Gender Equity in STEM Challenge thoroughly vets all applicants—their theories, practices, organizational health, and impact,” Smith says. “For Code Girls United to be chosen as a winner of the contest is a validating honor.”

From a restaurant basement to statewide programs

When Smith taught her sons how to program robots, she found that programming introduced a set of logic and communication skills similar to those involved in learning a new language, she says.

Those skills were what many girls were missing, she reasoned.

“It’s critical that girls be given the opportunity to speak and write in this coding language,” she says, “so they could also have the chance to communicate their ideas.”

An app to track police vehicles


Last year Code Girls United’s advanced class held in Kalispell received a special request from Jordan Venezio, the city’s police chief. He asked the class to create an app to help the Police Department manage its vehicle fleet.

The department was tracking the location of its police cars on paper, a process that made it challenging to get up-to-date information about which cars were on patrol, available for use, or being repaired, Venezio told the Flathead Beacon.

The objective was to streamline day-to-day vehicle operations. To learn how the department operates and see firsthand the difficulties administrators faced when managing the vehicles, two students shadowed officers for 10 weeks.

The students programmed the app using Visual Studio Code, React Native, Expo Go, and GitHub.

The department’s administrators can now more easily see whether each vehicle is available, at the repair shop, or retired from duty.

“It’s a great privilege for the girls to be able to apply the skills they’ve learned in the Code Girls United program to do something like this for the community,” Smith says. “It really brings our vision full circle.”

At first she wasn’t sure what subjects to teach, she says, reasoning that Java and other programming languages were too advanced for elementary school students.

She came across MIT App Inventor, a block-based visual programming language for creating mobile apps for Android and iOS devices. Instead of learning a coding language by typing it, students drag and drop jigsaw puzzle–like pieces that contain code to issue instructions. She incorporated building an app with general computer science concepts such as conditionals, logic flow, and variables. With each concept learned, the students built a more difficult app.

“It was perfect,” she says, “because the girls could make an app and test it the same day. It’s also very visual.”

Once she had a curriculum, she wanted to find willing students, so she placed an advertisement in the local newspaper. Twenty-eight girls signed up for the weekly classes, which were held in a diner. Assisting Smith were Beth Schecher, a retired technical professional; and Liz Bernau, a newly graduated elementary school teacher who taught technology classes. Students had to supply their own laptop.

At the end of the first 18 weeks, the class was tasked with creating apps to enter in the annual Technovation Girls competition. The contest seeks out apps that address issues including animal abandonment, safely reporting domestic violence, and access to mental health services.

The first group of students created several apps to enter in the competition, including ones that connected users to water-filling stations, provided people with information about food banks, and allowed users to report potholes. The group made it to the competition’s semifinals.

The coding program soon outgrew the diner and moved to a computer lab in a nearby elementary school. From there classes were held at Flathead Valley Community College. The program continued to grow and soon expanded to schools in other Montana towns including Belgrade, Havre, Joliet, and Polson.

The COVID-19 pandemic prompted the program to become virtual—which was “oddly fortuitous,” Smith says. After she made the curriculum available for anyone to use via Google Classroom, it increased in popularity.

That’s when she decided to launch her nonprofit. With that came a new curriculum.

What began as a class of 28 students held in a restaurant in Kalispell, Mont., has grown into a statewide program run by Code Girls United. The nonprofit has taught coding and computer science to more than 1,000 elementary, middle, and high school students. Code Girls United

Program expands across the state

Beginner, intermediate, and advanced classes were introduced. Instructors of the weekly after-school program are volunteers and teachers trained by Smith or one of the organization’s 10 employees. The teachers are paid a stipend.

For the first half of the school year, students in the beginner class learn computer science while creating apps.

“By having them design and build a mobile app,” Smith says, “I and the other teachers teach them computer science concepts in a fun and interactive way.”

Once students master the course, they move on to the intermediate and advanced levels, where they are taught further computer science lessons and learn more advanced programming languages such as Java and Python.


During the second half of the year, the intermediate and advanced classes participate in Code Girls United’s App Challenge. The girls form teams and choose a problem in their community to tackle. Next they write a business plan that includes devising a marketing strategy, designing a logo, and preparing a presentation. A panel of volunteer judges evaluates their work, and the top six teams receive a scholarship of up to $5,000, which is split among the members.

The organization has given out more than 55 scholarships, Smith says.

“Some of the girls who participated in our first education program are now going to college,” she says. “Seventy-two percent of participants are pursuing a degree in a STEM field, and quite a few are pursuing computer science.”

Introducing coding to Native Americans

The program is taught to high school girls on Montana’s Native American reservations through workshops.

Many reservations lack access to technology resources, Smith says, so presenting the program there has been challenging. But the organization has had some success and is working with the Blackfeet reservation, the Salish and Kootenai tribes on the Flathead reservation, and the Nakota and Gros Ventre tribes at Fort Belknap.

The workshops tailor technology for Native American culture. In the newest course, students program a string of LEDs to respond to the drumbeat of tribal songs using the BBC’s Micro:bit programmable controller. The lights are attached to the bottom of a ribbon skirt, a traditional garment worn by young women. Colorful ribbons are sewn horizontally across the bottom, with each hue having a meaning.
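
As a rough sketch of what such a lesson might look like in MicroPython on a micro:bit V2 (which has a built-in microphone), the snippet below lights more LEDs on a NeoPixel strip as the sound gets louder. The pin, strip length, and color are assumptions for illustration, not details from the Code Girls United curriculum.

# Illustrative MicroPython sketch for a micro:bit V2 driving a NeoPixel strip:
# louder sound (for example, a drumbeat) lights more LEDs.
from microbit import pin0, microphone, sleep
import neopixel

NUM_LEDS = 8                                # assumed strip length
strip = neopixel.NeoPixel(pin0, NUM_LEDS)   # strip wired to pin 0 (assumption)

while True:
    level = microphone.sound_level()        # 0-255 on the micro:bit V2
    lit = level * NUM_LEDS // 255           # number of LEDs to turn on
    for i in range(NUM_LEDS):
        strip[i] = (0, 0, 60) if i < lit else (0, 0, 0)
    strip.show()
    sleep(50)                               # refresh roughly 20 times per second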

The new course was introduced to students on the Flathead reservation this month.

“Montana’s reservations are some of the most remote and resource-limited communities,” Smith says, “especially in regards to technology and educational opportunities.

“It’s important to give girls who live on the reservations educational opportunities to close the gap. It’s the right thing to do for the next generation.”

Reference: https://ift.tt/3uf5sHZ

How Field AI is Conquering Unstructured Autonomy




One of the biggest challenges for robotics right now is practical autonomous operation in unstructured environments. That is, doing useful stuff in places your robot hasn’t been before and where things may not be as familiar as your robot might like. Robots thrive on predictability, which has put some irksome restrictions on where and how they can be successfully deployed.

But over the last few years, this has started to change, thanks in large part to a couple of pivotal robotics challenges put on by DARPA. The DARPA Subterranean Challenge ran from 2018 to 2021, putting mobile robots through a series of unstructured underground environments. And the currently ongoing DARPA RACER program tasks autonomous vehicles with navigating long distances off-road. Some extremely impressive technology has been developed through these programs, but there’s always a gap between this cutting-edge research and any real-world applications.

Now, a bunch of the folks involved in these challenges, including experienced roboticists from NASA, DARPA, Google DeepMind, Amazon, and Cruise (to name just a few places) are applying everything that they’ve learned to enable real-world practical autonomy for mobile robots at a startup called Field AI.


Field AI was co-founded by Ali Agha, who previously was the leader of NASA JPL’s Aerial Mobility Group. While at JPL, Agha led Team CoSTAR, which won the DARPA Subterranean Challenge Urban Circuit. Agha has also been the principal investigator for DARPA RACER, first with JPL, and now continuing with Field AI. “Field AI is not just a startup,” Agha tells us. “It’s a culmination of decades of experience in AI and its deployment in the field.”

Unstructured environments are where things are constantly changing, which can play havoc with robots that rely on static maps.

The “field” part in Field AI is what makes Agha’s startup unique. Robots running Field AI’s software are able to handle unstructured, unmapped environments without reliance on prior models, GPS, or human intervention. Obviously, this kind of capability was (and is) of interest to NASA and JPL, which send robots to places where there are no maps, GPS doesn’t exist, and direct human intervention is impossible.

But DARPA SubT demonstrated that similar environments can be found on Earth, too. For instance, mines, natural caves, and the urban underground are all extremely challenging for robots (and even for humans) to navigate. And those are just the most extreme examples: robots that need to operate inside buildings or out in the wilderness have similar challenges understanding where they are, where they’re going, and how to navigate the environment around them.

An autonomous vehicle drives across kilometers of desert with no prior map, no GPS, and no road. Field AI

Despite the difficulty that robots have operating in the field, this is an enormous opportunity that Field AI hopes to address. Robots have already proven their worth in inspection contexts, typically where you either need to make sure that nothing is going wrong across a large industrial site, or for tracking construction progress inside a partially completed building. There’s a lot of value here because the consequences of something getting messed up are expensive or dangerous or both, but the tasks are repetitive and sometimes risky and generally don’t require all that much human insight or creativity.

Uncharted territory as home base

Where Field AI differs from other robotics companies offering these services, as Agha explains, is that his company wants to do these tasks without first having a map that tells the robot where to go. In other words, there’s no lengthy set-up process, and no human supervision, and the robot can adapt to changing and new environments. Really, this is what full autonomy is all about: going anywhere, anytime, without human interaction. “Our customers don’t need to train anything,” Agha says, laying out the company’s vision. “They don’t need to have precise maps. They press a single button, and the robot just discovers every corner of the environment.” This capability is where the DARPA SubT heritage comes in. During the competition, DARPA basically said, ‘here’s the door into the course, we’re not going to tell you anything about what’s back there or even how big it is, just go explore the whole thing and bring us back the info we’ve asked for.’ Agha’s Team CoSTAR did exactly that during the competition, and Field AI is commercializing this capability.

“With our robots, our aim is for you to just deploy it, with no training time needed. And then we can just leave the robots.” —Ali Agha, Field AI

The other tricky thing about these unstructured environments, especially construction environments, is that things are constantly changing, which can play havoc with robots that rely on static maps. “We’re one of the few, if not the only companies that can leave robots for days on continuously changing construction sites with minimal supervision,” Agha tells us. “These sites are very complex—every day there are new items, new challenges, and unexpected events. Construction materials on the ground, scaffolds, forklifts and heavy machinery moving all over the place, nothing you can predict.”


Field AI’s approach to this problem is to emphasize environmental understanding over mapping. Agha says that essentially, Field AI is working towards creating “field foundation models” (FFMs) of the physical world, using sensor data as an input. You can think of FFMs as being similar to the foundation models of language, music, and art that other AI companies have created over the past several years, where ingesting a large amount of data from the Internet enables some level of functionality in a domain without requiring specific training for each new situation. Consequently, Field AI’s robots can understand how to move in the world, rather than just where to move. “We look at AI quite differently from what’s mainstream,” Agha explains. “We do very heavy probabilistic modeling.” Much more technical detail would get into Field AI’s IP, says Agha, but the point is that real-time world modeling becomes a byproduct of Field AI’s robots operating in the world rather than a prerequisite for that operation. This makes the robots fast, efficient, and resilient.

Developing field foundation models that robots can use to reliably go almost anywhere requires a lot of real world data, which Field AI has been collecting at industrial and construction sites around the world for the past year. To be clear, they’re collecting the data as part of their commercial operations—these are paying customers that Field AI has already. “In these job sites, it can traditionally take weeks to go around a site and map where every single target of interest that you need to inspect is,” explains Agha. “But with our robots, our aim is for you to just deploy it, with no training time needed. And then we can just leave the robots. This level of autonomy really unlocks a lot of use cases that our customers weren’t even considering, because they thought it was years away.” And the use cases aren’t just about construction or inspection or other areas where we’re already seeing autonomous robotic systems, Agha says. “These technologies hold immense potential.”

There’s obviously demand for this level of autonomy, but Agha says that the other piece of the puzzle that will enable Field AI to leverage a trillion dollar market is the fact that they can do what they do with virtually any platform. Fundamentally, Field AI is a software company—they make sensor payloads that integrate with their autonomy software, but even those payloads are adjustable, ranging from something appropriate for an autonomous vehicle to something that a drone can handle.

Heck, if you decide that you need an autonomous humanoid for some weird reason, Field AI can do that too. While the versatility here is important, according to Agha, what’s even more important is that it means you can focus on platforms that are more affordable, and still expect the same level of autonomous performance, within the constraints of each robot’s design, of course. With control over the full software stack, integrating mobility with high-level planning, decision-making, and mission execution, Agha says that the potential to take advantage of relatively inexpensive robots is what’s going to make the biggest difference towards Field AI’s commercial success.

Same brain, lots of different robots: the Field AI team’s foundation models can be used on robots big, small, expensive, and somewhat less expensive. Field AI

Field AI is already expanding its capabilities, building on some of its recent experience with DARPA RACER by working on deploying robots to inspect pipelines for tens of kilometers and to transport materials across solar farms. With revenue coming in and a substantial chunk of funding, Field AI has even attracted interest from Bill Gates. Field AI’s participation in RACER is ongoing, under a sort of subsidiary company for federal projects called Offroad Autonomy, and in the meantime the commercial side of the company is targeting expansion to “hundreds” of sites on every platform they can think of, including humanoids.

Reference: https://ift.tt/XwKOa8T

Expect a Wave of Waferscale Computers




At TSMC’s North American Technology Symposium on Wednesday, the company detailed both its semiconductor technology and chip packaging technology roadmaps. While the former is key to keeping the traditional part of Moore’s Law going, the latter could accelerate a trend towards processors made from more and more silicon, leading quickly to systems the size of a full silicon wafer. One such system, Tesla’s next-generation Dojo training tile, is already in production, TSMC says. And in 2027 the foundry plans to offer technology for waferscale systems more complex than Tesla’s that could deliver 40 times as much computing power as today’s systems.

For decades chipmakers increased the density of logic on processors largely by scaling down the area transistors take up and the size of interconnects. But that scheme has been running out of steam for a while now. Instead, the industry is turning to advanced packaging technology that allows a single processor to be made from a larger amount of silicon. The size of a single chip is hemmed in by the largest pattern lithography equipment can make. Called the reticle limit, that’s currently about 800 square millimeters. So if you want more silicon in your GPU you need to make it from two or more dies. The key is connecting those dies so that signals can go from one to the other as quickly and with as little energy as if they were all one big piece of silicon.

TSMC already makes a wafer-sized AI accelerator for Cerebras, but that arrangement appears to be unique and is different from what TSMC is now offering with what it calls System-on-Wafer.

In 2027, you will be able to get a full-wafer integration that delivers 40 times as much compute, more than 40 reticles’ worth of silicon, and room for more than 60 high-bandwidth memory chips, TSMC predicts.

For Cerebras, TSMC makes a wafer full of identical arrays of AI cores that are smaller than the reticle limit. It connects these arrays across the “scribe lines,” the areas between dies that are usually left blank, so the wafer can be diced up into chips. No chipmaking process is perfect, so there are always flawed parts on every wafer. But Cerebras designed in enough redundancy that it doesn’t matter to the finished computer.

However, with its first round of System-on-Wafer, TSMC is offering a different solution to the problems of both reticle limit and yield. It starts with already tested logic dies to minimize defects. (Tesla’s Dojo contains a 5 x 5 grid of pretested processors.) These are placed on a carrier wafer, and the blank spots between the dies are filled in. Then a layer of high-density interconnects is constructed to connect the logic using TSMC’s integrated fan-out technology. The aim is to make data bandwidth among the dies so high that they effectively act like a single large chip.

By 2027, TSMC plans to offer waferscale integration based on its more advanced packaging technology, chip-on-wafer-on-substrate (CoWoS). In that technology, pre-tested logic and, importantly, high-bandwidth memory, is attached to a silicon substrate that’s been patterned with high-density interconnects and shot-through with vertical connections called through-silicon vias. The attached logic chips can also take advantage of the company’s 3D-chip technology called system-on-integrated chips (SoIC).

The waferscale version of CoWoS is the logical endpoint of an expansion of the packaging technology that’s already visible in top-end GPUs. Nvidia’s next GPU, Blackwell, uses CoWoS to integrate more than 3 reticles’ worth of silicon, including 8 high-bandwidth memory (HBM) chips. By 2026, the company plans to expand that to 5.5 reticles, including 12 HBMs. TSMC says that would translate to more than 3.5 times as much compute power as its 2023 tech allows. But in 2027, you will be able to get a full-wafer integration that delivers 40 times as much compute, more than 40 reticles’ worth of silicon, and room for more than 60 HBMs, TSMC predicts.
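
A quick back-of-the-envelope check, assuming a standard 300-millimeter wafer and the roughly 800-square-millimeter reticle limit mentioned above, shows why "more than 40 reticles' worth of silicon" is plausible; the usable-area fraction below is a guess, since wafer edges and space for memory and interconnect can't all hold logic.

# Rough arithmetic: how many reticle-limited dies fit on a 300 mm wafer?
import math

RETICLE_LIMIT_MM2 = 800        # approximate reticle limit cited in the article
WAFER_DIAMETER_MM = 300        # standard production wafer (assumption)
USABLE_FRACTION = 0.6          # guess: edge exclusion, HBM sites, interconnect

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2           # about 70,700 mm^2
upper_bound = wafer_area / RETICLE_LIMIT_MM2                  # about 88 reticles
realistic = wafer_area * USABLE_FRACTION / RETICLE_LIMIT_MM2  # about 53 reticles

print(f"wafer area: {wafer_area:,.0f} mm^2")
print(f"hard upper bound: {upper_bound:.0f} reticle-sized dies")
print(f"with {USABLE_FRACTION:.0%} usable: ~{realistic:.0f} reticles of logic")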

What waferscale is good for

The 2027 version of system-on-wafer somewhat resembles technology called Silicon-Interconnect Fabric, or Si-IF, developed at UCLA more than five years ago. The team behind Si-IF includes electrical and computer engineering professor Puneet Gupta and IEEE Fellow Subramanian Iyer, who is now charged with implementing the packaging portion of the United States’ CHIPS Act.

Since then, they’ve been working to make the interconnects on the wafer more dense and to add other features to the technology. “If you want to use this as a full technology infrastructure, it needs to do many other things beyond just providing fine pitch connectivity,” says Gupta, also an IEEE Fellow. “One of the biggest pain points for these large systems is going to be delivering power.” So the UCLA team is working on ways to add good quality capacitors and inductors to the silicon substrate and integrating gallium nitride power transistors.

AI training is the obvious first application for waferscale technology, but it is not the only one, and it may not even be the best, says University of Illinois Urbana-Champaign computer architect and IEEE Fellow Rakesh Kumar. At the International Symposium on Computer Architecture in June, his team is presenting a design for a waferscale network switch for data centers. Such a system could cut the number of advanced network switches in a very large—16,000-rack—data center from 4608 to just 48, the researchers report. A much smaller, enterprise-scale data center for, say, 8,000 servers could get by using a single waferscale switch.

Reference: https://ift.tt/rWkHcxf

Monday, April 29, 2024

Account compromise of “unprecedented scale” uses everyday home devices



Authentication service Okta is warning about the “unprecedented scale” of an ongoing campaign that routes fraudulent login requests through the mobile devices and browsers of everyday users in an attempt to conceal the malicious behavior.

The attack, Okta said, uses other means to camouflage the login attempts as well, including the TOR network and so-called proxy services from providers such as NSOCKS, Luminati, and DataImpulse, which can also harness users’ devices without their knowledge. In some cases, the affected mobile devices are running malicious apps. In other cases, users have enrolled their devices in proxy services in exchange for various incentives.

Unidentified adversaries then use these devices in credential-stuffing attacks, which use large lists of login credentials obtained from previous data breaches in an attempt to access online accounts. Because the requests come from IP addresses and devices with good reputations, network security devices don’t give them the same level of scrutiny as logins from virtual private servers (VPS) that come from hosting services threat actors have used for years.


Reference: https://ift.tt/Ut5seKa

Phone Keyboard Exploits Leave 1 Billion Users Exposed




Digital Chinese-language keyboards that are vulnerable to spying and eavesdropping have been used by one billion smartphone users, according to a new report. The widespread threats these leaky systems reveal could also present a concerning new kind of exploit for cyberattacks, whether the device uses a Chinese-language keyboard, an English-language keyboard, or any other.

Last year, the University of Toronto’s Citizen Lab released a study of a proprietary Chinese keyboard system owned by the Shenzhen-based tech giant Tencent. Citizen Lab’s “Sogou Keyboard” report exposed the widespread range of attacks possible on the keyboard that could leak a user’s key presses to outside eavesdroppers. Now, in the group’s new study, released last week, the same researchers have discovered essentially all the world’s popular Chinese smartphone keyboards have suffered similar vulnerabilities.

“Whatever Chinese language users of your app might have typed into it has been exposed for years.” —Jedidiah Crandall, Arizona State University

And while the specific bugs the two reports have uncovered have been fixed in most instances, the researchers’ findings—and in particular, their recommendations—point to substantially larger gaps in the systems that extend into software developed around the world, no matter the language.

“All of these keyboards were also using custom network protocols,” says Mona Wang, a computer science Ph.D. student at Princeton University and co-author of the report. “Because I had studied these sort of custom network protocols before, then this immediately screamed to me that there was something really terrible going on.”

Jedidiah Crandall, an associate professor of computing and augmented intelligence at Arizona State University in Tempe, who was consulted in the report’s preparation but was not on the research team, says these vulnerabilities matter for nearly any coder or development team that releases their work to the world. “If you are a developer of a privacy-focused chat app or an app for tracking something health related, whatever Chinese language users of your app might have typed into it has been exposed for years,” he says.

The Chinese keyboard problem

Chinese, a language of tens of thousands of characters with some 4,000 or more in common use, represents a distinct challenge for keyboard input. A range of different keyboard systems have been developed in the digital era—sometimes called pinyin keyboards, named after a popular romanization system for standard Chinese. Ideally, these creative approaches to digital input enable a profoundly complex language to be straightforwardly phoneticized and transliterated via a compact, often QWERTY-style keyboard format.

“Even competent and well-resourced people get encryption wrong, because it’s really hard to do correctly.” —Mona Wang, Princeton University

But because computational and AI smarts can help transform key presses into a Chinese character on the screen, Chinese keyboards often involve back-and-forth across the Internet, to cloud servers and other assistive networked apps. All in order for a Chinese-speaking person to be able to type.

According to the report—and an FAQ the researchers released explaining the technical points in plain language—the Chinese keyboards studied all used character-prediction features, which in turn relied on cloud computing resources. It was the communications between the device’s keyboard app and the external cloud servers that constituted the insecure or improperly-secured communications that could be vulnerable to being hacked into.

Jeffrey Knockel, a senior research associate at Citizen Lab and report co-author, says cloud-based character prediction is a particularly attractive feature for Chinese-language keyboards, given the vast array of possible characters any given QWERTY keystroke sequence might be attempting to represent. “If you’re typing in English or any language where there’s enough keys on a keyboard for all your letters, that’s already a much simpler task to design a keyboard around than an ideographic language where you might have over 10,000 characters,” he says.
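
To see why prediction is needed at all, consider a toy sketch of the lookup a pinyin keyboard performs: a single romanized syllable maps to many candidate characters, which real keyboards rank with large language models and cloud-side data. The tiny dictionary and frequencies below are invented for illustration and are not how any of the studied keyboards actually work.

# Toy pinyin lookup: one QWERTY-typed syllable maps to many candidate characters.
# The dictionary and frequencies are made-up examples.
CANDIDATES = {
    "shi": [("是", 0.32), ("时", 0.18), ("十", 0.12), ("事", 0.11), ("使", 0.05)],
    "ma":  [("吗", 0.30), ("妈", 0.22), ("马", 0.15), ("码", 0.08)],
}

def suggest(pinyin, top_n=3):
    # Return the top-N candidates for a syllable, highest estimated frequency first.
    options = CANDIDATES.get(pinyin, [])
    return [char for char, _freq in sorted(options, key=lambda x: -x[1])[:top_n]]

print(suggest("shi"))   # ['是', '时', '十']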

Chinese-language keyboards are often “pinyin keyboards,” which allow for thousands of characters to be typed using a QWERTY-style approach. Zamoeux/Wikimedia

Sarah Scheffler, a postdoctoral associate at MIT, expressed concern also about other kinds of data vulnerabilities that the Citizen Lab report reveals—beyond keyboards and Chinese-language specific applications, necessarily. “The vulnerabilities [identified by the report] are not at all specific to pinyin keyboards,” she says. “It applies to any application sending data over the Internet. Any app sending unencrypted—or badly encrypted—information would have similar issues.”

Wang says the chief problem the researchers uncovered concerns the fact that so many Chinese keyboard protocols transmit data using inferior and sometimes custom-made encryption.

“These encryption protocols are probably developed by very, very competent and very well-resourced people,” Wang says. “But even competent and well-resourced people get encryption wrong, because it’s really hard to do correctly.”

Beyond the vulnerabilities exposed

Scheffler points to the two-decades-long testing, iteration, and development of the transport layer security (TLS) system underlying much of the internet’s secure communications, including websites that use the Hypertext Transfer Protocol Secure (HTTPS) protocol. (The first version of TLS was specified and released in 1999.) “All these Chinese Internet companies who are rolling their own [cryptography] or using their own encryption algorithms are sort of missing out on all those 20 years of standard encryption development,” Wang says.

Crandall says the report may have also inadvertently highlighted assumptions about security protocols that may not always apply in every corner of the globe. “Protocols like TLS sometimes make assumptions that don’t suit the needs of developers in certain parts of the world,” he says. For instance, he adds, custom-made, non-TLS security systems may be more attractive “where the network delay is high or where people may spend large amounts of time in areas where the network is not accessible.”

Scheffler says the Chinese-language keyboard problem could even represent a kind of canary in the coal mine for a range of computer, smartphone, and software systems. Because of their reliance on extensive Internet communications, such systems—while perhaps overlooked or relegated to the background by developers—also still represent potential cybersecurity attack surfaces.

“Anecdotally, a lot of these security failures arise from groups that don’t think they’re doing anything that requires security or don’t have much security expertise,” Scheffler says.

Scheffler identifies “Internet-based predictive text keyboards in any language, and maybe some of the Internet-based AI features that have crept into apps over the years” as possible places concealing similar cybersecurity vulnerabilities that the Citizen Lab team discovered in Chinese-language keyboards. This category could include voice recognition, speech-to-text, text-to-speech, and generative AI tools, she adds.

“Security and privacy isn’t many people’s first thought when they’re building their cool image-editing application,” says Scheffler. “Maybe it shouldn’t be the first thought, but it should definitely be a thought by the time the application makes it to users.”

Reference: https://ift.tt/D6KfWk2

An Engineer Who Keeps Meta’s AI infrastructure Humming




Making breakthroughs in artificial intelligence these days requires huge amounts of computing power. In January, Meta CEO Mark Zuckerberg announced that by the end of this year, the company will have installed 350,000 Nvidia GPUs—the specialized computer chips used to train AI models—to power its AI research.

As a data-center network engineer with Meta’s network infrastructure team, Susana Contrera is playing a leading role in this unprecedented technology rollout. Her job is about “bringing designs to life,” she says. Contrera and her colleagues take high-level plans for the company’s AI infrastructure and turn those blueprints into reality by working out how to wire, power, cool, and house the GPUs in the company’s data centers.

Susana Contrera

Employer: Meta
Occupation: Data-center network engineer
Education: Bachelor’s degree in telecommunications engineering, Andrés Bello Catholic University in Caracas, Venezuela

Contrera, who now works remotely from Florida, has been at Meta since 2013, spending most of that time helping to build the computer systems that support its social media networks, including Facebook and Instagram. But she says that AI infrastructure has become a growing priority, particularly in the past two years, and represents an entirely new challenge. Not only is Meta building some of the world’s first AI supercomputers, it is racing against other companies like Google and OpenAI to be the first to make breakthroughs.

“We are sitting right at the forefront of the technology,” Contrera says. “It’s super challenging, but it’s also super interesting, because you see all these people pushing the boundaries of what we thought we could do.”

Cisco Certification Opened Doors

Growing up in Caracas, Venezuela, Contrera says her first introduction to technology came from playing video games with her older brother. But she decided to pursue a career in engineering because of her parents, who were small-business owners.

“They were always telling me how technology was going to be a game changer in the future, and how a career in engineering could open many doors,” she says.

She enrolled at Andrés Bello Catholic University in Caracas in 2001 to study telecommunications engineering. In her final year, she signed up for the training and certification program to become a Cisco Certified Network Associate. The program covered topics such as the fundamentals of networking and security, IP services, and automation and programmability.

The certificate opened the door to her first job in 2006—managing the computer network of a business-process outsourcing company, Atento, in Caracas.

“Getting your hands dirty can give you a lot of perspective.”

“It was a very large enterprise network that had just the right amount of complexity for a very small team,” she says. “That gave me a lot of freedom to put my knowledge into practice.”

At the time, Venezuela was going through a period of political unrest. Contrera says she didn’t see a future for herself in the country, so she decided to leave for Europe.

She enrolled in a master’s degree program in project management in 2009 at Spain’s Pontifical University of Salamanca, continuing to collect additional certifications through Cisco in her free time. In 2010, partway through the program, she left for a job as a support engineer at the Madrid-based law firm Ecija, which provides legal advice to technology, media, and telecommunications companies. Following that with a stint as a network engineer at Amazon’s facility in Dublin from 2011 to 2013, she then joined Meta and “the rest is history,” she says.

Starting From the Edge Network

Contrera first joined Meta as a network deployment engineer, helping build the company’s “edge” network. In this type of network design, user requests go out to small edge servers dotted around the world instead of to Meta’s main data centers. Edge systems can deal with requests faster and reduce the load on the company’s main computers.

After several years traveling around Europe setting up this infrastructure, she took a managerial position in 2016. But after a couple of years she decided to return to a hands-on role at the company.

“I missed the satisfaction that you get when you’re part of a project, and you can clearly see the impact of solving a complex technical problem,” she says.

Because of the rapid growth of Meta’s services, her work primarily involved scaling up the capacity of its data centers as quickly as possible and boosting the efficiency with which data flowed through the network. But the work she is doing today to build out Meta’s AI infrastructure presents very different challenges, she says.

Designing Data Centers for AI

Training Meta’s largest AI models involves coordinating computation over large numbers of GPUs split into clusters. These clusters are often housed in different facilities, often in distant cities. It’s crucial that messages passing back and forth have very low latency and are lossless—in other words, they move fast and don’t drop any information.

Building data centers that can meet these requirements first involves Meta’s network engineering team deciding what kind of hardware should be used and how it needs to be connected.

“They have to think about how those clusters look from a logical perspective,” Contrera says.

Then Contrera and other members of the network infrastructure team take this plan and figure out how to fit it into Meta’s existing data centers. They consider how much space the hardware needs, how much power and cooling it will require, and how to adapt the communications systems to support the additional data traffic it will generate. Crucially, this AI hardware sits in the same facilities as the rest of Meta’s computing hardware, so the engineers have to make sure it doesn’t take resources away from other important services.

“We help translate these ideas into the real world,” Contrera says. “And we have to make sure they fit not only today, but they also make sense for the long-term plans of how we are scaling our infrastructure.”

Working on a Transformative Technology

Planning for the future is particularly challenging when it comes to AI, Contrera says, because the field is moving so quickly.

“It’s not like there is a road map of how AI is going to look in the next five years,” she says. “So we sometimes have to adapt quickly to changes.”

With today’s heated competition among companies to be the first to make AI advances, there is a lot of pressure to get the AI computing infrastructure up and running. This makes the work much more demanding, she says, but it’s also energizing to see the entire company rallying around this goal.

While she sometimes gets lost in the day-to-day of the job, she loves working on a potentially transformative technology. “It’s pretty exciting to see the possibilities and to know that we are a tiny piece of that big puzzle,” she says.

Hands-on Data Center Experience

For those interested in becoming a network engineer, Contrera says the certification programs run by companies like Cisco are useful. But she says it’s also important not to focus just on simply ticking boxes or rushing through courses just to earn credentials. “Take your time to understand the topics because that’s where the value is,” she says.

It’s good to get some experience working in data centers on infrastructure deployment, she says, because “getting your hands dirty can give you a lot of perspective.” And increasingly, coding can be another useful skill to develop to complement more traditional network engineering capabilities.

Mainly, she says, just “enjoy the ride” because networking can be a truly fascinating topic once you delve in. “There’s this orchestra of protocols and different technologies playing together and interacting,” she says. “I think that’s beautiful.”

Reference: https://ift.tt/Fxk5hLy

Sunday, April 28, 2024

Electronically Assisted Astronomy on the Cheap




I hate the eye strain that often comes with peering through a telescope at the night sky—I’d rather let a camera capture the scene. But I’m too frugal to sink thousands of dollars into high-quality astrophotography gear. The Goldilocks solution for me is something that goes by the name of electronically assisted astronomy, or EAA.

EAA occupies a middle ground in amateur astronomy: more involved than gazing through binoculars or a telescope, but not as complicated as using specialized cameras, expensive telescopes, and motorized tracking mounts. I set about exploring how far I could get doing EAA on a limited budget.

Electronically-assisted-astronomy photographs captured with my rig: the moon [top], the sun [middle], and the Orion Nebula [bottom]. David Schneider

First, I purchased a used Canon T6 DSLR on eBay. Because it had a damaged LCD viewscreen and came without a lens, it cost just US $100. Next, rather than trying to marry this camera to a telescope, I decided to get a telephoto lens: Back to eBay for a 40-year-old Nikon 500-mm F/8 “mirror” telephoto lens for $125. This lens combines mirrors and lenses to create a folded optical path. So even though the focal length of this telephoto is a whopping 50 centimeters, the lens itself is only about 15 cm long. A $20 adapter makes it work with the Canon.

The Nikon lens lacks a diaphragm to adjust its aperture and hence its depth of field. Its optical geometry makes things that are out of focus resemble doughnuts. And it can’t be autofocused. But these shortcomings aren’t drawbacks for astrophotography. And the lens has the big advantage that it can be focused beyond infinity. This allows you to adjust the focus on distant objects accurately, even if the lens expands and contracts with changing temperatures.

Getting the focus right is one of the bugaboos of using a telephoto lens for astrophotography, because the focus on such lenses is touchy and easily gets knocked off kilter. To avoid that, I built something (based on a design I found in an online astronomy forum) that clamps to the focus ring and allows precise adjustments using a small knob.

My next purchase was a modified gun sight to make it easier to aim the camera. The version I bought (for $30 on Amazon) included an adapter that let me mount it to my camera’s hot shoe. You’ll also need a tripod, but you can purchase an adequate one for less than $30.

Getting the focus right is one of the bugaboos of using a telephoto lens

The only other hardware you need is a laptop. On my Windows machine, I installed four free programs: Canon’s EOS Utility (which allows me to control the camera and download images directly), Canon’s Digital Photo Professional (for managing the camera’s RAW format image files), the GNU Image Manipulation Program (GIMP) photo editor, and a program called Deep Sky Stacker, which lets me combine short-exposure images to enhance the results without having Earth’s rotation ruin things.

It was time to get started. But focusing on astronomical objects is harder than you might think. The obvious strategy is to put the camera in “live view” mode, aim it at Jupiter or a bright star, and then adjust the focus until the object is as small as possible. But it can still be hard to know when you’ve hit the mark. I got a big assist from what’s known as a Bahtinov mask, a screen with angled slats you temporarily stick in front of the lens to create a diffraction pattern that guides focusing.

Stacking software takes a series of images of the sky, compensates for the motion of the stars, and combines the images to simulate long exposures without blurring.

After getting some good shots of the moon, I turned to another easy target: the sun. That required a solar filter, of course. I purchased one for $9, which I cut into a circle and glued to a candy tin from which I had cut out the bottom. My tin is of a size that slips perfectly over my lens. With this filter, I was able to take nice images of sunspots. The challenge again was focusing, which required trial and error, because strategies used for stars and planets don’t work for the sun.

With focusing down, the next hurdle was to image a deep-sky object, or DSO—star clusters, galaxies, and nebulae. To image these dim objects really well requires a tracking mount, which turns the camera so that you can take long exposures without blurring from the motion of the Earth. But I wanted to see what I could do without a tracker.

I first needed to figure out how long an exposure was possible with my fixed camera. A common rule of thumb is to divide 500 by the focal length of your lens in millimeters to get the maximum exposure duration in seconds. For my setup, that would be 1 second. A more sophisticated approach, called the NPF rule, factors in additional details regarding your imaging sensor. Using an online NPF-rule calculator gave me a slightly lower number: 0.8 seconds. To be even more conservative, I used 0.6-second exposures.
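
For concreteness, here is that arithmetic as a short script. The NPF formula shown is one commonly quoted simplified form, (35 × f-number + 30 × pixel pitch) / focal length, and the pixel pitch is my assumption for an 18-megapixel APS-C sensor like the T6's, not a figure from the article.

# Untracked-exposure rules of thumb for the rig described above.
FOCAL_LENGTH_MM = 500     # Nikon 500 mm mirror lens
F_NUMBER = 8              # fixed f/8 aperture
PIXEL_PITCH_UM = 4.3      # assumed pitch of an 18-MP APS-C sensor

# Classic "500 rule": max exposure in seconds = 500 / focal length in mm
rule_500 = 500 / FOCAL_LENGTH_MM

# Simplified NPF rule (one common form; an assumption, not the article's calculator)
npf = (35 * F_NUMBER + 30 * PIXEL_PITCH_UM) / FOCAL_LENGTH_MM

print(f"500 rule: {rule_500:.1f} s")   # 1.0 s
print(f"NPF rule: {npf:.1f} s")        # about 0.8 s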

My first DSO target was the Orion Nebula, of which I shot 100 images from my suburban driveway. No doubt, I would have done better from a darker spot. I was mindful, though, to acquire calibration frames—“flats” and “darks” and “bias images”—which are used to compensate for imperfections in the imaging system. Darks and bias images are easy enough to obtain by leaving the lens cap on. Taking flats, however, requires an even, diffuse light source. For that I used a $17 A5-size LED tracing pad placed on a white T-shirt covering the lens.
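
The standard way these calibration frames get applied can be summarized in a few lines of NumPy. This is a generic sketch of dark/flat/bias correction under the usual assumptions (darks match the light exposures, flats are bias-subtracted and normalized), not the exact pipeline Deep Sky Stacker runs internally.

# Generic calibration sketch: subtract the dark, divide by the normalized flat.
import numpy as np

def master(frames):
    # Median-combine a list of calibration frames into one low-noise master frame.
    return np.median(np.stack(frames), axis=0)

def calibrate(light, darks, flats, biases):
    master_dark = master(darks)            # thermal signal + bias at the light exposure
    master_bias = master(biases)           # readout offset only
    master_flat = master(flats) - master_bias
    master_flat /= master_flat.mean()      # normalize so overall brightness is preserved
    return (light - master_dark) / master_flat

# Dummy 4x4 frames stand in for real RAW data just to show the call pattern.
rng = np.random.default_rng(0)
light = rng.normal(1000, 20, (4, 4))
darks = [rng.normal(50, 2, (4, 4)) for _ in range(5)]
flats = [rng.normal(20000, 100, (4, 4)) for _ in range(5)]
biases = [rng.normal(30, 1, (4, 4)) for _ in range(5)]
print(calibrate(light, darks, flats, biases))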

With all these images in hand, I fired up the Deep Sky Stacker program and put it to work. The resultant stack didn’t look promising, but postprocessing in GIMP turned it into a surprisingly detailed rendering of the Orion Nebula. It doesn’t compare, of course, with what somebody can do with better gear. But it does show the kinds of fascinating images you can generate with some free software, an ordinary DSLR, and a vintage telephoto lens pointed at the right spot.

This article appears in the May 2024 print issue as “Electronically Assisted Astronomy.”

Reference: https://ift.tt/ofY7k1C

Saturday, April 27, 2024

Will Human Soldiers Ever Trust Their Robot Comrades?




Editor’s note: This article is adapted from the author’s book War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future (University of California Press, published in paperback April 2024).

The blistering late-afternoon wind ripped across Camp Taji, a sprawling U.S. military base just north of Baghdad. In a desolate corner of the outpost, where the feared Iraqi Republican Guard had once manufactured mustard gas, nerve agents, and other chemical weapons, a group of American soldiers and Marines were solemnly gathered around an open grave, dripping sweat in the 114-degree heat. They were paying their final respects to Boomer, a fallen comrade who had been an indispensable part of their team for years. Just days earlier, he had been blown apart by a roadside bomb.

As a bugle mournfully sounded the last few notes of “Taps,” a soldier raised his rifle and fired a long series of volleys—a 21-gun salute. The troops, which included members of an elite army unit specializing in explosive ordnance disposal (EOD), had decorated Boomer posthumously with a Bronze Star and a Purple Heart. With the help of human operators, the diminutive remote-controlled robot had protected American military personnel from harm by finding and disarming hidden explosives.

Boomer was a Multi-function Agile Remote-Controlled robot, or MARCbot, manufactured by a Silicon Valley company called Exponent. Weighing in at just over 30 pounds, MARCbots look like a cross between a Hollywood camera dolly and an oversized Tonka truck. Despite their toylike appearance, the devices often leave a lasting impression on those who work with them. In an online discussion about EOD support robots, one soldier wrote, “Those little bastards can develop a personality, and they save so many lives.” An infantryman responded by admitting, “We liked those EOD robots. I can’t blame you for giving your guy a proper burial, he helped keep a lot of people safe and did a job that most people wouldn’t want to do.”

A Navy unit used a remote-controlled vehicle with a mounted video camera in 2009 to investigate suspicious areas in southern Afghanistan. Mass Communication Specialist 2nd Class Patrick W. Mullen III/U.S. Navy

But while some EOD teams established warm emotional bonds with their robots, others loathed the machines, especially when they malfunctioned. Take, for example, this case described by a Marine who served in Iraq:

My team once had a robot that was obnoxious. It would frequently accelerate for no reason, steer whichever way it wanted, stop, etc. This often resulted in this stupid thing driving itself into a ditch right next to a suspected IED. So of course then we had to call EOD [personnel] out and waste their time and ours all because of this stupid little robot. Every time it beached itself next to a bomb, which was at least two or three times a week, we had to do this. Then one day we saw yet another IED. We drove him straight over the pressure plate, and blew the stupid little sh*thead of a robot to pieces. All in all a good day.

Some battle-hardened warriors treat remote-controlled devices like brave, loyal, intelligent pets, while others describe them as clumsy, stubborn clods. Either way, observers have interpreted these accounts as unsettling glimpses of a future in which men and women ascribe personalities to artificially intelligent war machines.


From this perspective, what makes robot funerals unnerving is the idea of an emotional slippery slope. If soldiers are bonding with clunky pieces of remote-controlled hardware, what are the prospects of humans forming emotional attachments with machines once they’re more autonomous in nature, nuanced in behavior, and anthropoid in form? And a more troubling question arises: On the battlefield, will Homo sapiens be capable of dehumanizing members of its own species (as it has for centuries), even as it simultaneously humanizes the robots sent to kill them?

As I’ll explain, the Pentagon has a vision of a warfighting force in which humans and robots work together in tight collaborative units. But to achieve that vision, it has called in reinforcements: “trust engineers” who are diligently helping the Department of Defense (DOD) find ways of rewiring human attitudes toward machines. You could say that they want more soldiers to play “Taps” for their robot helpers and fewer to delight in blowing them up.

The Pentagon’s Push for Robotics

For the better part of a decade, several influential Pentagon officials have relentlessly promoted robotic technologies, promising a future in which “humans will form integrated teams with nearly fully autonomous unmanned systems, capable of carrying out operations in contested environments.”

Soldiers test a vertical take-off-and-landing drone at Fort Campbell, Ky., in 2020. (credit: U.S. Army)

As The New York Times reported in 2016: “Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power.” The U.S. government is spending staggering sums to advance these technologies: For fiscal year 2019, the U.S. Congress was projected to provide the DOD with US $9.6 billion to fund uncrewed and robotic systems—significantly more than the annual budget of the entire National Science Foundation.

Arguments supporting the expansion of autonomous systems are consistent and predictable: The machines will keep our troops safe because they can perform dull, dirty, dangerous tasks; they will result in fewer civilian casualties, since robots will be able to identify enemies with greater precision than humans can; they will be cost-effective and efficient, allowing more to get done with less; and the devices will allow us to stay ahead of China, which, according to some experts, will soon surpass America’s technological capabilities.

Former U.S. deputy defense secretary Robert O. Work has argued for more automation within the military. (credit: Center for a New American Security)

Among the most outspoken advocates of a roboticized military is Robert O. Work, who was nominated by President Barack Obama in 2014 to serve as deputy defense secretary. Speaking at a 2015 defense forum, Work—a barrel-chested retired Marine Corps colonel with the slight hint of a drawl—described a future in which “human-machine collaboration” would win wars using big-data analytics. He used the example of Lockheed Martin’s newest stealth fighter to illustrate his point: “The F-35 is not a fighter plane, it is a flying sensor computer that sucks in an enormous amount of data, correlates it, analyzes it, and displays it to the pilot on his helmet.”

The beginning of Work’s speech was measured and technical, but by the end it was full of swagger. To drive home his point, he described a ground combat scenario. “I’m telling you right now,” Work told the rapt audience, “10 years from now if the first person through a breach isn’t a friggin’ robot, shame on us.”

“The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them,” said a 2016 New York Times article. The rhetoric surrounding robotic and autonomous weapon systems is remarkably similar to that of Silicon Valley, where charismatic CEOs, technology gurus, and sycophantic pundits have relentlessly hyped artificial intelligence.

For example, in 2016, the Defense Science Board—a group of appointed civilian scientists tasked with giving advice to the DOD on technical matters—released a report titled “Summer Study on Autonomy.” Significantly, the report wasn’t written to weigh the pros and cons of autonomous battlefield technologies; instead, the group assumed that such systems would inevitably be deployed. Among other things, the report included “focused recommendations to improve the future adoption and use of autonomous systems [and] example projects intended to demonstrate the range of benefits of autonomy for the warfighter.”

What Exactly Is a Robot Soldier?

The author’s book, War Virtually, is a critical look at how the U.S. military is weaponizing technology and data. (credit: University of California Press)

Early in the 20th century, military and intelligence agencies began developing robotic systems, which were mostly devices remotely operated by human controllers. But microchips, portable computers, the Internet, smartphones, and other developments have supercharged the pace of innovation. So, too, has the ready availability of colossal amounts of data from electronic sources and sensors of all kinds. The Financial Times reports: “The advance of artificial intelligence brings with it the prospect of robot-soldiers battling alongside humans—and one day eclipsing them altogether.” These transformations aren’t inevitable, but they may become a self-fulfilling prophecy.

All of this raises the question: What exactly is a “robot-soldier”? Is it a remote-controlled, armor-clad box on wheels, entirely reliant on explicit, continuous human commands for direction? Is it a device that can be activated and left to operate semiautonomously, with a limited degree of human oversight or intervention? Is it a droid capable of selecting targets (using facial-recognition software or other forms of artificial intelligence) and initiating attacks without human involvement? There are hundreds, if not thousands, of possible technological configurations lying between remote control and full autonomy—and these differences affect ideas about who bears responsibility for a robot’s actions.

The U.S. military’s experimental and actual robotic and autonomous systems include a vast array of artifacts that rely on either remote control or artificial intelligence: aerial drones; ground vehicles of all kinds; sleek warships and submarines; automated missiles; and robots of various shapes and sizes—bipedal androids, quadrupedal gadgets that trot like dogs or mules, insectile swarming machines, and streamlined aquatic devices resembling fish, mollusks, or crustaceans, to name a few.

Members of a U.S. Air Force squadron test out an agile and rugged quadruped robot from Ghost Robotics in 2023. (credit: Airman First Class Isaiah Pedrazzini/U.S. Air Force)

The transitions projected by military planners suggest that servicemen and servicewomen are in the midst of a three-phase evolutionary process, which begins with remote-controlled robots, in which humans are “in the loop,” then proceeds to semiautonomous and supervised autonomous systems, in which humans are “on the loop,” and then concludes with the adoption of fully autonomous systems, in which humans are “out of the loop.” At the moment, much of the debate in military circles has to do with the degree to which automated systems should allow—or require—human intervention.
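
To make these three phases concrete, here is a minimal, purely illustrative Python sketch; the enum values and function names are hypothetical and do not correspond to any actual DOD system. It models the oversight levels and checks whether a human must approve an action before it proceeds:

```python
from dataclasses import dataclass
from enum import Enum, auto


class OversightLevel(Enum):
    """The three broad phases described above (labels are illustrative only)."""
    IN_THE_LOOP = auto()      # remote control: a human issues every command
    ON_THE_LOOP = auto()      # supervised autonomy: a human can veto or abort
    OUT_OF_THE_LOOP = auto()  # full autonomy: no human intervention at runtime


@dataclass
class ActionRequest:
    description: str
    oversight: OversightLevel


def requires_human_authorization(request: ActionRequest) -> bool:
    """Return True if a person must approve this action before it proceeds.

    In this toy model, only a fully autonomous system acts without a human
    either commanding or supervising it.
    """
    return request.oversight is not OversightLevel.OUT_OF_THE_LOOP


if __name__ == "__main__":
    for level in OversightLevel:
        request = ActionRequest(description="advance to waypoint", oversight=level)
        print(level.name, "-> human authorization required:",
              requires_human_authorization(request))
```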

In recent years, much of the hype has centered around that second stage: semiautonomous and supervised autonomous systems that DOD officials refer to as “human-machine teaming.” This idea suddenly appeared in Pentagon publications and official statements after the summer of 2015. The timing probably wasn’t accidental; it came at a time when global news outlets were focusing attention on a public backlash against lethal autonomous weapon systems. The Campaign to Stop Killer Robots was launched in April 2013 as a coalition of nonprofit and civil society organizations, including the International Committee for Robot Arms Control, Amnesty International, and Human Rights Watch. In July 2015, the campaign released an open letter warning of a robotic arms race and calling for a ban on the technologies. Cosigners included world-renowned physicist Stephen Hawking, Tesla founder Elon Musk, Apple cofounder Steve Wozniak, and thousands more.

In November 2015, Work gave a high-profile speech on the importance of human-machine teaming, perhaps hoping to defuse the growing criticism of “killer robots.” According to one account, Work’s vision was one in which “computers will fly the missiles, aim the lasers, jam the signals, read the sensors, and pull all the data together over a network, putting it into an intuitive interface humans can read, understand, and use to command the mission”—but humans would still be in the mix, “using the machine to make the human make better decisions.” From this point forward, the military branches accelerated their drive toward human-machine teaming.

The Doubt in the Machine

But there was a problem. Military experts loved the idea, touting it as a win-win: Paul Scharre, in his book Army of None: Autonomous Weapons and the Future of War, claimed that “we don’t need to give up the benefits of human judgment to get the advantages of automation, we can have our cake and eat it too.” However, personnel on the ground expressed—and continue to express—deep misgivings about the side effects of the Pentagon’s newest war machines.

The difficulty, it seems, is humans’ lack of trust. The engineering challenges of creating robotic weapon systems are relatively straightforward, but the social and psychological challenges of convincing humans to place their faith in the machines are bewilderingly complex. In high-stakes, high-pressure situations like military combat, human confidence in autonomous systems can quickly vanish. The Pentagon’s Defense Systems Information Analysis Center Journal noted that although the prospects for combined human-machine teams are promising, humans will need assurances:

[T]he battlefield is fluid, dynamic, and dangerous. As a result, warfighter demands become exceedingly complex, especially since the potential costs of failure are unacceptable. The prospect of lethal autonomy adds even greater complexity to the problem [in that] warfighters will have no prior experience with similar systems. Developers will be forced to build trust almost from scratch.

In a 2015 article, U.S. Navy Commander Greg Smith provided a candid assessment of aviators’ distrust in aerial drones. After describing how drones are often intentionally separated from crewed aircraft, Smith noted that operators sometimes lose communication with their drones and may inadvertently bring them perilously close to crewed airplanes, which “raises the hair on the back of an aviator’s neck.” He concluded:

[I]n 2010, one task force commander grounded his manned aircraft at a remote operating location until he was assured that the local control tower and UAV [unmanned aerial vehicle] operators located halfway around the world would improve procedural compliance. Anecdotes like these abound…. After nearly a decade of sharing the skies with UAVs, most naval aviators no longer believe that UAVs are trying to kill them, but one should not confuse this sentiment with trusting the platform, technology, or [drone] operators.

U.S. Marines [top] prepare to launch and operate an MQ-9A Reaper drone in 2021. The Reaper [bottom] is designed for both high-altitude surveillance and destroying targets. (credits: Top: Lance Cpl. Gabrielle Sanders/U.S. Marine Corps; Bottom: 1st Lt. John Coppola/U.S. Marine Corps)

Yet Pentagon leaders place an almost superstitious trust in those systems, and seem firmly convinced that a lack of human confidence in autonomous systems can be overcome with engineered solutions. In a commentary, Courtney Soboleski, a data scientist employed by the military contractor Booz Allen Hamilton, makes the case for mobilizing social science as a tool for overcoming soldiers’ lack of trust in robotic systems.

The problem with adding a machine into military teaming arrangements is not doctrinal or numeric…it is psychological. It is rethinking the instinctual threshold required for trust to exist between the soldier and machine.… The real hurdle lies in surpassing the individual psychological and sociological barriers to assumption of risk presented by algorithmic warfare. To do so requires a rewiring of military culture across several mental and emotional domains.… AI [artificial intelligence] trainers should partner with traditional military subject matter experts to develop the psychological feelings of safety not inherently tangible in new technology. Through this exchange, soldiers will develop the same instinctual trust natural to the human-human war-fighting paradigm with machines.

The Military’s Trust Engineers Go to Work

Soon, the wary warfighter will likely be subjected to new forms of training that focus on building trust between robots and humans. Already, robots are being programmed to communicate in more human ways with their users for the explicit purpose of increasing trust. And projects are currently underway to help military robots report their deficiencies to humans in given situations, and to alter their functionality according to the machine’s perception of the user’s emotional state.

At the DEVCOM Army Research Laboratory, military psychologists have spent more than a decade on human experiments related to trust in machines. Among the most prolific is Jessie Chen, who joined the lab in 2003. Chen lives and breathes robotics—specifically “agent teaming” research, a field that examines how robots can be integrated into groups with humans. Her experiments test how humans’ lack of trust in robotic and autonomous systems can be overcome—or at least minimized.

For example, in one set of tests, Chen and her colleagues deployed a small ground robot called an Autonomous Squad Member that interacted and communicated with infantrymen. The researchers varied “situation-awareness-based agent transparency”—that is, the robot’s self-reported information about its plans, motivations, and predicted outcomes—and found that human trust in the robot increased when the autonomous “agent” was more transparent or honest about its intentions.
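
As a rough illustration of what “situation-awareness-based agent transparency” might look like in practice, the hypothetical sketch below has a simulated agent disclose progressively more of its reasoning (its plan, its motivation, and its predicted outcome) as the transparency level rises. The level numbers, field names, and messages are invented for illustration and are not drawn from Chen’s experiments:

```python
# A hypothetical sketch of "situation-awareness-based agent transparency":
# the simulated agent discloses more of its reasoning as the level rises.
# Level numbers, keys, and messages are invented for illustration.

def agent_status_report(transparency_level: int) -> dict:
    """Build the information a simulated squad robot shares with its operator."""
    report = {"plan": "move to waypoint B via the northern path"}
    if transparency_level >= 2:
        report["motivation"] = "the northern path avoids the flooded road"
    if transparency_level >= 3:
        report["predicted_outcome"] = "arrival in about 4 minutes, 85% confidence"
    return report


if __name__ == "__main__":
    for level in (1, 2, 3):
        print(f"transparency level {level}: {agent_status_report(level)}")
```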

The Army isn’t the only branch of the armed services researching human trust in robots. The U.S. Air Force Research Laboratory recently had an entire group dedicated to the subject: the Human Trust and Interaction Branch, part of the lab’s 711th Human Performance Wing, located at Wright-Patterson Air Force Base, in Ohio.

In 2015, the Air Force began soliciting proposals for “research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams.” Mark Draper, a principal engineering research psychologist at the Air Force lab, is optimistic about the prospects of human-machine teaming: “As autonomy becomes more trusted, as it becomes more capable, then the Airmen can start off-loading more decision-making capability on the autonomy, and autonomy can exercise increasingly important levels of decision-making.”

Air Force researchers are attempting to dissect the determinants of human trust. In one project, they examined the relationship between a person’s personality profile (measured using the so-called Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, neuroticism) and his or her tendency to trust. In another experiment, entitled “Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot,” Air Force scientists compared male and female research subjects’ levels of trust by showing them a video depicting a guard robot. The robot was armed with a Taser, interacted with people, and eventually used the Taser on one. Researchers designed the scenario to create uncertainty about whether the robot or the humans were to blame. By surveying research subjects, the scientists found that women reported higher levels of trust in “Robocop” than men did.
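
For readers curious how such a relationship might be quantified, the following sketch computes a simple Pearson correlation between a Big Five trait score and a self-reported trust rating. The data are synthetic and the analysis is only a stand-in for whatever statistical methods the Air Force researchers actually used:

```python
# Synthetic data and a simple Pearson correlation, as an illustration only.
from statistics import correlation  # available in Python 3.10 and later

# Hypothetical per-subject scores: an openness score (1-5 scale) and a
# self-reported trust rating for the robot (1-7 scale).
openness = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5, 4.3, 3.0]
trust = [4.0, 5.5, 3.5, 6.0, 5.0, 3.0, 5.5, 4.5]

r = correlation(openness, trust)
print(f"Pearson r between openness and trust (synthetic data): {r:.2f}")
```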

The issue of trust in autonomous systems has even led the Air Force’s chief scientist to suggest ideas for increasing human confidence in the machines, ranging from better android manners to robots that look more like people, under the principle that

good HFE [human factors engineering] design should help support ease of interaction between humans and AS [autonomous systems]. For example, better “etiquette” often equates to better performance, causing a more seamless interaction. This occurs, for example, when an AS avoids interrupting its human teammate during a high workload situation or cues the human that it is about to interrupt—activities that, surprisingly, can improve performance independent of the actual reliability of the system. To an extent, anthropomorphism can also improve human-AS interaction, since people often trust agents endowed with more humanlike features…[but] anthropomorphism can also induce overtrust.

It’s impossible to know the degree to which the trust engineers will succeed in achieving their objectives. For decades, military trainers have prepared newly enlisted men and women to kill other people. If specialists have developed simple psychological techniques to overcome the soldier’s deeply ingrained aversion to destroying human life, is it possible that someday the warfighter might also be persuaded to unquestioningly place his or her trust in robots?

Reference: https://ift.tt/tXDTMZs

The Sneaky Standard

A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail. Personal c...