Monday, July 31, 2023

Dissolving circuit boards in water sounds better than shredding and burning


A dissolved circuit board from Jiva Materials: 30 minutes in near-boiling water, and those soldered chips come right off, leaving you with something that’s non-toxic, compostable, and looking like something from your grandparents’ attic. (credit: Infineon)

Right now, the destination for the circuit board inside a device you no longer need is almost certainly a gigantic shredder, and that's the best-case scenario.

Most devices that don’t have resale or reuse value end up going into the shredder—if they even make it into the e-waste stream. After their batteries are (hopefully) removed, the shredded boards pass through magnets, water, and incineration to pull specific minerals and metals out of the boards. The woven fiberglass and epoxy resin the boards were made from aren’t worth much after they’re sliced up, so they end up as waste. That waste is put in landfills, burned, or sometimes just stockpiled.

That's why, even if it's still in its earliest stages, something like the Soluboard sounds so promising. UK-based Jiva Materials makes printed circuit boards (PCBs) from natural fibers encased in a non-toxic polymer that dissolves in hot water. That leaves behind whole components previously soldered onto the board, which should be easier to recover.


Reference: https://ift.tt/twQJzAK

Boston Dynamics’ Founder on the Future of Robotics




When Marc Raibert founded Boston Dynamics in 1992, he wasn’t even sure it was going to be a robotics company—he thought it might become a modeling and simulation company instead. Now, of course, Boston Dynamics is the authority in legged robots, with its Atlas biped and Spot quadruped. But as the company focuses more on commercializing its technology, Raibert has become more interested in pursuing the long-term vision of what robotics can be.

To that end, Raibert founded the Boston Dynamics AI Institute in August of 2022. Funded by Hyundai (the company also acquired Boston Dynamics in 2020), the Institute’s first few projects will focus on making robots useful outside the lab by teaching them to better understand the world around them.


At the 2023 IEEE International Conference on Robotics and Automation (ICRA) in London this past May, Raibert gave a keynote talk that discussed some of his specific goals, with an emphasis on developing practical, helpful capabilities in robots. For example, Raibert hopes to teach robots to watch humans perform tasks, understand what they’re seeing, and then do it themselves—or know when they don’t understand something, and how to ask questions to fill in those gaps. Another of Raibert’s goals is to teach robots to inspect equipment to figure out whether something is working—and if it’s not, to determine what’s wrong with it and make repairs. Raibert showed concept art at ICRA that included robots working in domestic environments such as kitchens, living rooms, and laundry rooms, as well as industrial settings. “I look forward to having some demos of something like this happening at ICRA 2028 or 2029,” Raibert quipped.

Following his keynote, IEEE Spectrum spoke with Raibert, and he answered five questions about where he wants robotics to go next.

At the Institute, you’re starting to share your vision for the future of robotics more than you did at Boston Dynamics. Why is that?

Marc Raibert: At Boston Dynamics, I don’t think we talked about the vision. We just did the next thing, saw how it went, and then decided what to do after that. I was taught that when you wrote a paper or gave a presentation, you showed what you had accomplished. All that really mattered was the data in your paper. You could talk about what you want to do, but people talk about all kinds of things that way—the future is so cheap, and so variable. That’s not the same as showing what you did. And I took pride in showing what we actually did at Boston Dynamics.

But if you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision. So I’m starting to be a little more comfortable with doing that. Not to mention that at this point, we don’t have any actual results to show.

Right now, robots must be carefully trained to complete specific tasks. But Marc Raibert wants to give robots the ability to watch a human do a task, understand what’s happening, and then do the task themselves, whether it’s in a factory [top left and bottom] or in your home [top right and bottom]. Boston Dynamics AI Institute

The Institute will be putting a lot of effort into how robots can better manipulate objects. What’s the opportunity there?

Raibert: I think that for 50 years, people have been working on manipulation, and it hasn’t progressed enough. I’m not criticizing anybody, but I think that there’s been so much work on path planning, where path planning means how you move through open space. But that’s not where the action is. The action is when you’re in contact with things—we humans basically juggle with our hands when we’re manipulating, and I’ve seen very few things that look like that. It’s going to be hard, but maybe we can make progress on it. One idea is that going from static robot manipulation to dynamic can advance the field the way that going from static to dynamic advanced legged robots.

How are you going to make your vision happen?

Raibert: I don’t know any of the answers for how we’re going to do any of this! That’s the technical fearlessness—or maybe the technical foolishness. My long-term hope for the Institute is that most of the ideas don’t come from me, and that we succeed in hiring the kind of people who can have ideas that lead the field. We’re looking for people who are good at bracketing a problem, doing a quick pass at it (“quick” being maybe a year), seeing what sticks, and then taking another pass at it. And we’ll give them the resources they need to go after problems that way.

“If you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision.”

Are you concerned about how the public perception of robots, and especially of robots you have developed, is sometimes negative?

Raibert: The media can be over the top with stories about the fear of robots. I think that by and large, people really love robots. Or at least, a lot of people could love them, even though sometimes they’re afraid of them. But I think people just have to get to know robots, and at some point I’d like to open up an outreach center where people could interact with our robots in positive ways. We are actively working on that.

What do you find so interesting about dancing robots?

Raibert: I think there are a lot of opportunities for emotional expression by robots, and there’s a lot to be done that hasn’t been done. Right now, it’s labor-intensive to create these performances, and the robots are not perceiving anything. They’re just playing back the behaviors that we program. They should be listening to the music. They should be seeing who they’re dancing with, and coordinating with them. And I have to say, every time I think about that, I wonder if I’m getting soft because robots don’t have to be emotional, either on the giving side or on the receiving side. But somehow, it’s captivating.

Marc Raibert was a professor at Carnegie Mellon and MIT before founding Boston Dynamics in 1992. He now leads the Boston Dynamics AI Institute.

This article appears in the August 2023 print issue as “5 Questions for Marc Raibert.”

Reference: https://ift.tt/C1bcleq

Arizona law school embraces ChatGPT use in student applications


A computer-augmented view of ASU’s campus. (credit: Benj Edwards / Arizona State University)

On Friday, Arizona State University's Sandra Day O'Connor College of Law announced that prospective students will be allowed to use AI tools, such as OpenAI's ChatGPT, to assist in preparing their applications, according to a report by Reuters.

This decision comes a week after the University of Michigan Law School notably decided to ban such AI tools, highlighting the diverse policies different universities are adopting related to AI's role in student applications.

Arizona State's law school says that applicants who use AI tools must clearly disclose that fact, and they must also ensure that the submitted information is accurate. This parallels the school's existing requirement for applicants to certify if they have used a professional consultant to help with their application.


Reference: https://ift.tt/oawfWBK

Sunday, July 30, 2023

The Cold War Arms Race Over Prosthetic Arms




In 1961, Norbert Wiener, the father of cybernetics, broke his hip and wound up in Massachusetts General Hospital. Wiener’s bad luck turned into fruitful conversations with his orthopedic surgeon, Melvin Glimcher. Those talks in turn led to a collaboration and an invention: the Boston Arm, an early myoelectric prosthesis. The device’s movements were controlled using electrical signals from an amputee’s residual bicep and tricep muscles.

What was the Boston Arm?

Wiener had first postulated that biological signals could be used to control a prosthesis in the early 1950s, but research in this area did not flourish in the United States.

Discussions between cyberneticist Norbert Wiener [left] and surgeon Melvin Glimcher [right] inspired the Boston Arm. Left: MIT Museum; Right: Stephanie Mitchell/Harvard University

Instead, it was Russian scientist Alexander Kobrinski who debuted the first clinically significant myoelectric prosthesis in 1960. Its use of transistors reduced the size, but the battery packs, worn in a belt around the waist, were heavy. A special report in the Canadian Medical Association Journal in 1964 deemed the prosthesis cosmetically acceptable and operationally satisfactory, with a few drawbacks: It was noisy; it only had two motions—the opening and closing of the hand; and it came in just one size—appropriate for an average adult male. Historically, most upper arm amputations resulted from combat injuries and workplace accidents, and so had disproportionately affected men. But the use of thalidomide during pregnancy in the early 1960s resulted in an increase of babies of both genders born missing limbs. There was a need for prosthetics of different sizes.

In 1961, Glimcher traveled to the Soviet Union to see a demo of the Russian Hand. At the time, he was working one day a week at the Liberty Mutual Rehabilitation Center, treating amputees. Glimcher and Thomas Delorme, the center’s medical director, noticed that many amputees were not using their prostheses due to the limitations of the devices. Liberty Mutual Insurance Co., which ran the rehab center, had a financial interest in developing better prostheses so that their users could get back to work and get off long-term disability. The company agreed to fund a working group to develop a myoelectric prosthetic arm.

References


I first came across the Boston Arm when I was researching Norbert Wiener’s “hearing glove,” a device to help people who are deaf and hard of hearing interpret sound waves. A search of the MIT Museum’s database came up empty (no known example of the hearing glove exists), but I did find the entry on the Boston Arm. Curator Deborah Douglas sent me digitized copies of Ralph Alter’s doctoral thesis, a press release on the Boston Arm, and numerous press clippings.

Looking for a broad overview of the history of prosthetics, I found “The evolution of functional hand replacement: From iron prostheses to hand transplantation” by Kevin Zuo and Jaret Olson (Plastic Surgery vol. 22, no. 1, Spring 2014) to be particularly useful, as was “Historical Aspects of Powered Limb Prostheses” by Dudley Childress (Clinical Prosthetics & Orthotics, vol. 9, no. 1, 1985).

The Office of Technology Assessment closed in 1995, but Princeton University hosts the legacy site, which has a full run of the agency’s digitized reports, including Health Technology Case Study 29, “The Boston Elbow.”

Wiener suggested that Amar G. Bose, a professor of electrical engineering at MIT, and Robert W. Mann, a professor of mechanical engineering also at MIT, join the group. Bose and Mann in turn recruited grad students Ralph Alter, to work on signal processing and software, and Ronald Rothschild, to work on hardware. Over the next few years, this collaboration of MIT, Harvard Medical School, Massachusetts General Hospital, and Liberty Mutual developed the Boston Arm.

In 1966, MIT’s Research Laboratory of Electronics published Alter’s doctoral thesis, “Bioelectric Control of Prostheses,” as Technical Report 446. Alter had studied the electromyographic (EMG) signals stemming from muscle tissue and concluded they could be used to control the prosthesis. Meanwhile, Rothschild was working on his master’s thesis, “Design of an externally powered artificial elbow for electromyographic control.” Working with Alter, Rothschild designed, constructed, and demonstrated a motor-driven elbow controlled by EMG signals.
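
To make the general idea concrete, here is a minimal sketch of how a myoelectric controller can turn muscle signals into elbow commands: rectify the raw EMG, smooth it into an envelope, and compare the biceps and triceps channels against a threshold. This is an illustrative modern example in Python with invented numbers and function names, not a reconstruction of Alter and Rothschild’s actual 1960s signal processing.

    # Hypothetical sketch of threshold-based myoelectric control (all values invented).
    def activation(emg_samples, window=5):
        """Rectify the raw EMG and return a moving-average envelope of the last `window` samples."""
        rectified = [abs(s) for s in emg_samples[-window:]]
        return sum(rectified) / len(rectified)

    def elbow_command(biceps_emg, triceps_emg, threshold=0.3):
        """Map relative biceps/triceps activation to a motor command."""
        b, t = activation(biceps_emg), activation(triceps_emg)
        if b > threshold and b > t:
            return "flex"
        if t > threshold and t > b:
            return "extend"
        return "hold"  # lock the elbow when neither muscle clearly dominates

    print(elbow_command([0.1, -0.6, 0.7, -0.5, 0.6], [0.05, -0.1, 0.08, -0.06, 0.1]))  # -> flex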

MIT grad student Ralph Alter worked on signal processing and software for the Boston Arm. Robert W. Mann Collection/MIT Museum and Liberating Technologies/Coapt

Even as Rothschild and Alter were putting the final touches on their theses, Glimcher was teasing the press with the group’s experimental results during the summer of 1965. The New York Times ran a story claiming “New Process Will Help Amputee To Control Limb With Thought.” The Boston Globe was a bit more sensational, comparing the prosthetic device to black magic and supernatural abilities. Glimcher did try to temper expectations, explaining that practical use of the arm was still a number of years away.

After many design iterations and improvements, the Boston Arm debuted in 1968 at a press conference at Massachusetts General Hospital. (Technically, it was an elbow rather than an arm, and in medical circles and technical reports, it was called the Boston Elbow. But colloquially and in the popular press, the name “Boston Arm” stuck.) Although the Boston Arm mostly remained a research project, several hundred were produced and fitted to amputees by the R&D company Liberating Technologies.

The Boston Arm, in turn, influenced the Utah Artificial Arm, developed by Stephen Jacobsen, who had completed his Ph.D. in 1973 at MIT under Robert Mann and then returned to his alma mater, the University of Utah. The Utah Arm went on to become one of the most widely used myoelectric prosthetics.

Was the Boston Arm a success?

In 1984, the U.S. Congress’s Office of Technology Assessment (OTA) analyzed the Boston Arm as Health Technology Case Study 29, part of its assessment of the medical devices industry.

The Boston Arm’s movements were controlled by electrical signals from an amputee’s bicep and tricep muscles. Michael Cardinali/MIT Museum

It is a fascinating read, and it does not mince words. The first iterations of the Boston Arm, it concluded, “were by all accounts failures.” Eighteen were produced and fitted to amputees, and every single user rejected it. The most serious problem, similar to the Russian Hand, was the bulky, belt-worn rechargeable battery, which had a limited charge. MIT students and employees went back to work, improving the battery and creating a slimmer profile for the device. The several hundred that were produced enjoyed better adoption, according to the OTA report.

The newer version of the Boston Arm weighed about 1.1 kilograms, could lift 23 kg, and could hold over 22.7 kg in a locked position. It had a battery life of about 8 hours, and charging took about 2 hours. It had a range of 145° from full flexion to full extension, a distance covered in approximately 1 second. It had an estimated service life of five years, with a recommended annual tune-up that required shipping the elbow back to Liberty Mutual for adjustments.

The first iterations of the Boston Arm were by all accounts failures.

In 1983, about 100 people were regularly using the Boston Arm out of an estimated 30,000 to 40,000 people with above-elbow amputations in the United States. Why wasn’t it more widely adopted? Cost was one major hurdle. The base cost of the Boston Arm was US $3,500, but that rose to $9,500 (more than $29,000 in today’s dollars) once it was properly fitted. The Utah Arm, the only commercially available myoelectric alternative to the Boston Arm, had a full fitting cost of $20,000. In comparison, the total cost for a mechanical cable elbow prosthesis averaged about $1,500 (including the price of the socket and the fitting) and had a service life of 10 years. The OTA report quoted an engineer at the National Institute of Handicapped Research describing the Boston Arm as “essentially overkill”—”an unnecessarily complex technology at a correspondingly high price,” the report stated. In the engineer’s opinion, the Boston Elbow did not outperform a conventional mechanical prosthesis.
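
The OTA’s cost comparison can be made concrete by annualizing the figures above over each device’s stated service life (1983 dollars; maintenance, socket replacement, and the Utah Arm, whose service life isn’t given, are left out of this rough sketch):

    # Rough annualized cost from the OTA figures cited above (1983 dollars).
    devices = {
        "Boston Elbow (fully fitted)": (9_500, 5),    # price in dollars, service life in years
        "Mechanical cable prosthesis": (1_500, 10),
    }
    for name, (price, years) in devices.items():
        print(f"{name}: ${price / years:,.0f} per year of service")
    # Boston Elbow: about $1,900 per year versus $150 for the mechanical arm --
    # a gap of more than 12x before any functional benefit is weighed.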

Liberty Mutual Insurance Co. supported the development of the Boston Arm as a way of getting amputees off disability and back on the job. MIT Museum

Of course, cost takes on a different meaning depending on who is paying. What may be an exorbitant price for a consumer might be a shrewd calculation for a business. Liberty Mutual marketed the Boston Arm as a “worker’s arm,” and in advertising materials the battery life was described as “a full 8-hour work day.” The majority of amputees fitted with a Boston Arm happened to be covered by Liberty Mutual’s worker’s compensation insurance. Getting them back on the job motivated the research and development process.

But cost is not the only factor in determining what type of prosthesis to choose, or whether to use one at all. The OTA report acknowledged the psychological impact of amputation and the idiosyncratic and contextual nature of individual choice. Depending on the amputee’s situation, a prosthesis may or may not be the right choice. The latest electrotechnology may not be better than a mechanical design that had been in use for more than 100 years.

Were prosthetic users involved in the R&D process?

I’m the type of person who always jumps to the end of a book, just to see how things work out, so it is no surprise that I read the last section of Alter’s thesis, “Suggestions for the Future,” first. One sentence stood out: “Thus far, only two persons have operated the prosthetic system.”

One person was Alter himself, even though he had two fully functioning arms. The other was a 55-year-old male with a 25-year-old unilateral, above-elbow amputation. That user sat for one session, which lasted about two hours. Presumably Glimcher would have provided some background from users based on his clinical experience. Later, Neville Hogan, director of the Eric P. and Evelyn E. Newman Laboratory for Biomechanics and Human Rehabilitation at MIT, involved other prosthetic users in the research process, as shown in this short undated video:

Video: Robert W. Mann’s “Boston Arm” (www.youtube.com)

But a question still looms large in my mind, especially after I read Britt H. Young’s critique of the modern prosthetics industry and the editorial reflections of Spectrum editor-in-chief Harry Goldstein: Might the Boston Arm have seen wider adoption if potential users had been a more integral part of its development?

I’m currently teaching a history of industrial design course, part of a program for first-generation college students who plan to major in computer science and engineering. Student retention is the program’s primary goal. But my personal goal for the course is to help these new students think about inclusive, user-centered design from the start. Imagine how adding that perspective could change the future of engineering.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2023 print issue as “Ode to an Arm.”

Reference: https://ift.tt/4FNV7up

What Self-Driving Cars Tell Us About AI Risks




In 2016, just weeks before the Autopilot in his Tesla drove Joshua Brown to his death, I pleaded with the U.S. Senate Committee on Commerce, Science, and Transportation to regulate the use of artificial intelligence in vehicles. Neither my pleading nor Brown’s death could stir the government to action.

Since then, automotive AI in the United States has been linked to at least 25 confirmed deaths and to hundreds of injuries and instances of property damage.

The lack of technical comprehension across industry and government is appalling. People do not understand that the AI that runs vehicles—both the cars that operate in actual self-driving modes and the much larger number of cars offering advanced driver-assistance systems (ADAS)—is based on the same principles as ChatGPT and other large language models (LLMs). These systems control a car’s lateral and longitudinal position—to change lanes, brake, and accelerate—without waiting for orders to come from the person sitting behind the wheel.

Both kinds of AI use statistical reasoning to guess what the next word or phrase or steering input should be, heavily weighting the calculation with recently used words or actions. Go to your Google search window and type in “now is the time” and you will get the result “now is the time for all good men.” And when your car detects an object on the road ahead, even if it’s just a shadow, watch the car’s self-driving module suddenly brake.
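
As a rough illustration of that kind of statistical guessing, here is a toy next-word predictor: it counts which word follows which in a tiny corpus and weights recent context more heavily when choosing a continuation. It is a deliberately simplified sketch with an invented corpus and weighting scheme, not how any production language model or driving system actually works.

    # Toy next-word prediction from bigram counts, with recent words weighted more heavily.
    from collections import defaultdict

    corpus = "now is the time for all good men to come to the aid of their country".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(context_words, recency_weight=2.0):
        """Guess the next word from bigram counts, up-weighting the most recent words."""
        scores = defaultdict(float)
        for age, word in enumerate(reversed(context_words)):
            weight = recency_weight ** (-age)  # newer words count more
            for candidate, n in counts[word].items():
                scores[candidate] += weight * n
        return max(scores, key=scores.get) if scores else None

    print(predict_next("now is the time".split()))  # -> 'for'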

Neither the AI in LLMs nor the one in autonomous cars can “understand” the situation, the context, or any unobserved factors that a person would consider in a similar situation. The difference is that while a language model may give you nonsense, a self-driving car can kill you.

In late 2021, despite receiving threats to my physical safety for daring to speak truth about the dangers of AI in vehicles, I agreed to work with the U.S. National Highway Traffic Safety Administration (NHTSA) as the senior safety advisor. What qualified me for the job was a doctorate focused on the design of joint human-automated systems and 20 years of designing and testing unmanned systems, including some that are now used in the military, mining, and medicine.

My time at NHTSA gave me a ringside view of how real-world applications of transportation AI are or are not working. It also showed me the intrinsic problems of regulation, especially in our current divisive political landscape. My deep dive has helped me to formulate five practical insights. I believe they can serve as a guide to industry and to the agencies that regulate them.

In February 2023 this Waymo car stopped in a San Francisco street, backing up traffic behind it. The reason? The back door hadn’t been completely closed. Terry Chea/AP

1. Human errors in operation get replaced by human errors in coding

Proponents of autonomous vehicles routinely assert that the sooner we get rid of drivers, the safer we will all be on roads. They cite the NHTSA statistic that 94 percent of accidents are caused by human drivers. But this statistic is taken out of context and inaccurate. As the NHTSA itself noted in that report, the driver’s error was “the last event in the crash causal chain…. It is not intended to be interpreted as the cause of the crash.” In other words, there were many other possible causes as well, such as poor lighting and bad road design.

Moreover, the claim that autonomous cars will be safer than those driven by humans ignores what anyone who has ever worked in software development knows all too well: that software code is incredibly error-prone, and the problem only grows as the systems become more complex.

While a language model may give you nonsense, a self-driving car can kill you.

Consider these recent crashes in which faulty software was to blame. There was the October 2021 crash of a Pony.ai driverless car into a sign, the April 2022 crash of a TuSimple tractor trailer into a concrete barrier, the June 2022 crash of a Cruise robotaxi that suddenly stopped while making a left turn, and the March 2023 crash of another Cruise car that rear-ended a bus.

These and many other episodes make clear that AI has not ended the role of human error in road accidents. That role has merely shifted from the end of a chain of events to the beginning—to the coding of the AI itself. Because such errors are latent, they are far harder to mitigate. Testing, in simulation but above all in the real world, is the key to reducing the chance of such errors, especially in safety-critical systems. However, without sufficient government regulation and clear industry standards, autonomous-vehicle companies will cut corners in order to get their products to market quickly.

2. AI failure modes are hard to predict

A large language model guesses which words and phrases are coming next by consulting an archive assembled during training from preexisting data. A self-driving module interprets the scene and decides how to get around obstacles by making similar guesses, based on a database of labeled images—this is a car, this is a pedestrian, this is a tree—also provided during training. But not every possibility can be modeled, and so the myriad failure modes are extremely hard to predict. All things being equal, a self-driving car can behave very differently on the same stretch of road at different times of the day, possibly due to varying sun angles. And anyone who has experimented with an LLM and changed just the order of words in a prompt will immediately see a difference in the system’s replies.
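
As a toy illustration of that sensitivity, consider a linear scorer whose weights were notionally fit to labeled training images; a small shift in the input features, of the sort a different sun angle might cause, is enough to flip its decision. Every name and number below is invented for illustration; this is not any carmaker’s actual perception stack.

    # A tiny stand-in for a trained "pedestrian vs. shadow" classifier (all values invented).
    def classify(features, weights, bias):
        score = sum(w * x for w, x in zip(weights, features)) + bias
        return ("pedestrian", round(score, 3)) if score > 0 else ("shadow", round(score, 3))

    weights, bias = [1.8, -2.4, 0.9], -0.05   # hypothetical learned parameters

    scene_noon = [0.52, 0.40, 0.10]   # brightness, edge sharpness, height ratio
    scene_dusk = [0.49, 0.43, 0.10]   # same object, slightly different lighting

    print(classify(scene_noon, weights, bias))   # -> ('pedestrian', 0.016)
    print(classify(scene_dusk, weights, bias))   # -> ('shadow', -0.11)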

One failure mode not previously anticipated is phantom braking. For no obvious reason, a self-driving car will suddenly brake hard, perhaps causing a rear-end collision with the vehicle just behind it and other vehicles further back. Phantom braking has been seen in the self-driving cars of many different manufacturers and in ADAS-equipped cars as well.


Ross Gerber, behind the wheel, and Dan O’Dowd, riding shotgun, watch as a Tesla Model S, running Full Self-Driving software, blows past a stop sign. THE DAWN PROJECT

The cause of such events is still a mystery. Experts initially attributed it to human drivers following the self-driving car too closely (often accompanying their assessments by citing the misleading 94 percent statistic about driver error). However, an increasing number of these crashes have been reported to NHTSA. In May 2022, for instance, the NHTSA sent a letter to Tesla noting that the agency had received 758 complaints about phantom braking in Model 3 and Y cars. This past May, the German publication Handelsblatt reported on 1,500 complaints of braking issues with Tesla vehicles, as well as 2,400 complaints of sudden acceleration. It now appears that self-driving cars experience roughly twice the rate of rear-end collisions as do cars driven by people.

Clearly, AI is not performing as it should. Moreover, this is not just one company’s problem—all car companies that are leveraging computer vision and AI are susceptible to this problem.

As other kinds of AI begin to infiltrate society, it is imperative for standards bodies and regulators to understand that AI failure modes will not follow a predictable path. They should also be wary of the car companies’ propensity to excuse away bad tech behavior and to blame humans for abuse or misuse of the AI.

3. Probabilistic estimates do not approximate judgment under uncertainty

Ten years ago, there was significant hand-wringing over the rise of IBM’s AI-based Watson, a precursor to today’s LLMs. People feared AI would very soon cause massive job losses, especially in the medical field. Meanwhile, some AI experts said we should stop training radiologists.

These fears didn’t materialize. While Watson could be good at making guesses, it had no real knowledge, especially when it came to making judgments under uncertainty and deciding on an action based on imperfect information. Today’s LLMs are no different: The underlying models simply cannot cope with a lack of information and do not have the ability to assess whether their estimates are even good enough in this context.

These problems are routinely seen in the self-driving world. The June 2022 accident involving a Cruise robotaxi happened when the car decided to make an aggressive left turn between two cars. As the car safety expert Michael Woon detailed in a report on the accident, the car correctly chose a feasible path, but then halfway through the turn, it slammed on its brakes and stopped in the middle of the intersection. It had guessed that an oncoming car in the right lane was going to turn, even though a turn was not physically possible at the speed the car was traveling. The uncertainty confused the Cruise, and it made the worst possible decision. The oncoming car, a Prius, was not turning, and it plowed into the Cruise, injuring passengers in both cars.

Cruise vehicles have also had many problematic interactions with first responders, who by default operate in areas of significant uncertainty. These encounters have included Cruise cars traveling through active firefighting and rescue scenes and driving over downed power lines. In one incident, a firefighter had to knock the window out of the Cruise car to remove it from the scene. Waymo, Cruise’s main rival in the robotaxi business, has experienced similar problems.

These incidents show that even though neural networks may classify a lot of images and propose a set of actions that work in common settings, they nonetheless struggle to perform even basic operations when the world does not match their training data. The same will be true for LLMs and other forms of generative AI. What these systems lack is judgment in the face of uncertainty, a key precursor to real knowledge.

4. Maintaining AI is just as important as creating AI

Because neural networks can only be effective if they are trained on significant amounts of relevant data, the quality of the data is paramount. But such training is not a one-and-done scenario: Models cannot be trained and then sent off to perform well forever after. In dynamic settings like driving, models must be constantly updated to reflect new types of cars, bikes, and scooters, construction zones, traffic patterns, and so on.

In the March 2023 accident, in which a Cruise car hit the back of an articulated bus, experts were surprised, as many believed such accidents were nearly impossible for a system that carries lidar, radar, and computer vision. Cruise attributed the accident to a faulty model that had guessed where the back of the bus would be based on the dimensions of a normal bus; additionally, the model rejected the lidar data that correctly detected the bus.

Software code is incredibly error-prone, and the problem only grows as the systems become more complex.

This example highlights the importance of maintaining the currency of AI models. “Model drift” is a known problem in AI, and it occurs when relationships between input and output data change over time. For example, if a self-driving car fleet operates in one city with one kind of bus, and then the fleet moves to another city with different bus types, the underlying model of bus detection will likely drift, which could lead to serious consequences.
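
One common way to catch this kind of drift is to monitor the statistics of the model’s inputs and flag when the live data stops resembling the training data. The sketch below applies a generic, hypothetical check to a single feature (detected vehicle length); it is not a description of Cruise’s or any other company’s actual monitoring pipeline.

    # Hypothetical drift check on one monitored input feature (all numbers invented).
    from statistics import mean, stdev

    train_lengths = [11.8, 12.1, 12.0, 11.9, 12.2, 12.0]   # first city: standard buses (meters)
    live_lengths = [17.9, 18.2, 12.1, 18.0, 17.8, 18.1]    # new city: mostly articulated buses

    def drifted(train, live, threshold=3.0):
        """Flag drift when the live mean sits more than `threshold` training standard deviations away."""
        mu, sigma = mean(train), stdev(train)
        return abs(mean(live) - mu) / sigma > threshold

    if drifted(train_lengths, live_lengths):
        print("Input distribution has drifted; retrain or re-validate the model.")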

Such drift affects AI working not only in transportation but in any field where new results continually change our understanding of the world. This means that large language models can’t learn a new phenomenon until it has lost the edge of its novelty and is appearing often enough to be incorporated into the dataset. Maintaining model currency is just one of many ways that AI requires periodic maintenance, and any discussion of AI regulation in the future must address this critical aspect.

5. AI has system-level implications that can’t be ignored

Self-driving cars have been designed to stop cold the moment they can no longer reason and no longer resolve uncertainty. This is an important safety feature. But as Cruise, Tesla, and Waymo have demonstrated, managing such stops poses an unexpected challenge.

A stopped car can block roads and intersections, sometimes for hours, throttling traffic and keeping out first-response vehicles. Companies have instituted remote-monitoring centers and rapid-action teams to mitigate such congestion and confusion, but at least in San Francisco, where hundreds of self-driving cars are on the road, city officials have questioned the quality of their responses.

Self-driving cars rely on wireless connectivity to maintain their road awareness, but what happens when that connectivity drops? One driver found out the hard way when his car became entrapped in a knot of 20 Cruise vehicles that had lost connection to the remote-operations center and caused a massive traffic jam.

Of course, any new technology may be expected to suffer from growing pains, but if these pains become serious enough, they will erode public trust and support. Sentiment towards self-driving cars used to be optimistic in tech-friendly San Francisco, but now it has taken a negative turn due to the sheer volume of problems the city is experiencing. Such sentiments may eventually lead to public rejection of the technology if a stopped autonomous vehicle causes the death of a person who was prevented from getting to the hospital in time.

So what does the experience of self-driving cars say about regulating AI more generally? Companies not only need to ensure they understand the broader systems-level implications of AI, they also need oversight—they should not be left to police themselves. Regulatory agencies must work to define reasonable operating boundaries for systems that use AI and issue permits and regulations accordingly. When the use of AI presents clear safety risks, agencies should not defer to industry for solutions and should be proactive in setting limits.

AI still has a long way to go in cars and trucks. I’m not calling for a ban on autonomous vehicles. There are clear advantages to using AI, and it is irresponsible for people to call for a ban, or even a pause, on AI. But we need more government oversight to prevent the taking of unnecessary risks.

And yet the regulation of AI in vehicles isn’t happening yet. That can be blamed in part on industry overclaims and pressure, but also on a lack of capability on the part of regulators. The European Union has been more proactive about regulating artificial intelligence in general and in self-driving cars particularly. In the United States, we simply do not have enough people in federal and state departments of transportation that understand the technology deeply enough to advocate effectively for balanced public policies and regulations. The same is true for other types of AI.

This is not any one administration’s problem. Not only does AI cut across party lines, it cuts across all agencies and at all levels of government. The Department of Defense, Department of Homeland Security, and other government bodies all suffer from a workforce that does not have the technical competence needed to effectively oversee advanced technologies, especially rapidly evolving AI.

To engage in effective discussion about the regulation of AI, everyone at the table needs to have technical competence in AI. Right now, these discussions are greatly influenced by industry (which has a clear conflict of interest) or Chicken Littles who claim machines have achieved the ability to outsmart humans. Until government agencies have people with the skills to understand the critical strengths and weaknesses of AI, conversations about regulation will see very little meaningful progress.

Recruiting such people can be easily done. Improve pay and bonus structures, embed government personnel in university labs, reward professors for serving in the government, provide advanced certificate and degree programs in AI for all levels of government personnel, and offer scholarships for undergraduates who agree to serve in the government for a few years after graduation. Moreover, to better educate the public, college classes that teach AI topics should be free.

We need less hysteria and more education so that people can understand the promises but also the realities of AI.

Reference: https://ift.tt/dXTuVyG

Who Really Invented the Rechargeable Lithium-Ion Battery?




Fifty years after the birth of the rechargeable lithium-ion battery, it’s easy to see its value. It’s used in billions of laptops, cellphones, power tools, and cars. Global sales top US $45 billion a year, on their way to more than $100 billion in the coming decade.

And yet this transformative invention took nearly two decades to make it out of the lab, with numerous companies in the United States, Europe, and Asia considering the technology and yet failing to recognize its potential.

The first iteration, developed by M. Stanley Whittingham at Exxon in 1972, didn’t get far. It was manufactured in small volumes by Exxon, appeared at an electric vehicle show in Chicago in 1977, and served briefly as a coin cell battery. But then Exxon dropped it.

Various scientists around the world took up the research effort, but for some 15 years, success was elusive. It wasn’t until the development landed at the right company at the right time that it finally started down a path to battery world domination.

Did Exxon invent the rechargeable lithium battery?

Akira Yoshino, John Goodenough, and M. Stanley Whittingham [from left] shared the 2019 Nobel Prize in Chemistry. At 97, Goodenough was the oldest recipient in the history of the Nobel awards. Jonas Ekstromer/AFP/Getty Images

In the early 1970s, Exxon scientists predicted that global oil production would peak in the year 2000 and then fall into a steady decline. Company researchers were encouraged to look for oil substitutes, pursuing any manner of energy that didn’t involve petroleum.

Whittingham, a young British chemist, joined the quest at Exxon Research and Engineering in New Jersey in the fall of 1972. By Christmas, he had developed a battery with a titanium-disulfide cathode and a liquid electrolyte that used lithium ions.

Whittingham’s battery was unlike anything that had preceded it. It worked by inserting ions into the atomic lattice of a host electrode material—a process called intercalation. The battery’s performance was also unprecedented: It was both rechargeable and very high in energy output. Up to that time, the best rechargeable battery had been nickel cadmium, which put out a maximum of 1.3 volts. In contrast, Whittingham’s new chemistry produced an astonishing 2.4 volts.

In the winter of 1973, corporate managers summoned Whittingham to the company’s New York City offices to appear before a subcommittee of the Exxon board. “I went in there and explained it—5 minutes, 10 at the most,” Whittingham told me in January 2020. “And within a week, they said, yes, they wanted to invest in this.”

Whittingham’s battery, the first lithium intercalation battery, was developed at Exxon in 1972 using titanium disulfide for the cathode and metallic lithium for the anode. Johan Jarnestad/The Royal Swedish Academy of Sciences

It looked like the beginning of something big. Whittingham published a paper in Science; Exxon began manufacturing coin cell lithium batteries, and a Swiss watch manufacturer, Ebauches, used the cells in a solar-charging wristwatch.

But by the late 1970s, Exxon’s interest in oil alternatives had waned. Moreover, company executives thought Whittingham’s concept was unlikely to ever be broadly successful. They washed their hands of lithium titanium disulfide, licensing the technology to three battery companies—one in Asia, one in Europe, and one in the United States.

“I understood the rationale for doing it,” Whittingham said. “The market just wasn’t going to be big enough. Our invention was just too early.”

Oxford takes the handoff

In 1976, John Goodenough [left] joined the University of Oxford, where he headed development of the first lithium cobalt oxide cathode. The University of Texas at Austin

It was the first of many false starts for the rechargeable lithium battery. John B. Goodenough at the University of Oxford was the next scientist to pick up the baton. Goodenough was familiar with Whittingham’s work, in part because Whittingham had earned his Ph.D. at Oxford. But it was a 1978 paper by Whittingham, “Chemistry of Intercalation Compounds: Metal Guests in Chalcogenide Hosts,” that convinced Goodenough that the leading edge of battery research was lithium. [Goodenough passed away on 25 June at the age of 100.]

Goodenough and research fellow Koichi Mizushima began researching lithium intercalation batteries. By 1980, they had improved on Whittingham’s design, replacing titanium disulfide with lithium cobalt oxide. The new chemistry boosted the battery’s voltage by another two-thirds, to 4 volts.

Goodenough wrote to battery companies in the United States, United Kingdom, and the European mainland in hopes of finding a corporate partner, he recalled in his 2008 memoir, Witness to Grace. But he found no takers.

He also asked the University of Oxford to pay for a patent, but Oxford declined. Like many universities of the day, it did not concern itself with intellectual property, believing such matters to be confined to the commercial world.

Goodenough’s 1980 battery replaced Whittingham’s titanium disulfide in the cathode with lithium cobalt oxide. Johan Jarnestad/The Royal Swedish Academy of Sciences

Still, Goodenough had confidence in his battery chemistry. He visited the Atomic Energy Research Establishment (AERE), a government lab in Harwell, about 20 kilometers from Oxford. The lab agreed to bankroll the patent, but only if the 59-year-old scientist signed away his financial rights. Goodenough complied. The lab patented it in 1981; Goodenough never saw a penny of the original battery’s earnings.

For the AERE lab, this should have been the ultimate windfall. It had done none of the research, yet now owned a patent that would turn out to be astronomically valuable. But managers at the lab didn’t see that coming. They filed it away and forgot about it.

Asahi Chemical steps up to the plate

The rechargeable lithium battery’s next champion was Akira Yoshino, a 34-year-old chemist at Asahi Chemical in Japan. Yoshino had independently begun to investigate using a plastic anode—made from electroconductive polyacetylene—in a battery and was looking for a cathode to pair with it. While cleaning his desk on the last day of 1982, he found a 1980 technical paper coauthored by Goodenough, Yoshino recalled in his autobiography, Lithium-Ion Batteries Open the Door to the Future, Hidden Stories by the Inventor. The paper—which Yoshino had sent for but hadn’t gotten around to reading—described a lithium cobalt oxide cathode. Could it work with his plastic anode?

Yoshino, along with a small team of colleagues, paired Goodenough’s cathode with the plastic anode. They also tried pairing the cathode with a variety of other anode materials, mostly made from different types of carbons. Eventually, he and his colleagues settled on a carbon-based anode made from petroleum coke.

Yoshino’s battery, developed at Asahi Chemical in the late 1980s, combined Goodenough’s cathode with a petroleum coke anode. Johan Jarnestad/The Royal Swedish Academy of Sciences

This choice of petroleum coke turned out to be a major step forward. Whittingham and Goodenough had used anodes made from metallic lithium, which was volatile and even dangerous. By switching to carbon, Yoshino and his colleagues had created a battery that was far safer.

Still, there were problems. For one, Asahi Chemical was a chemical company, not a battery maker. No one at Asahi Chemical knew how to build production batteries at commercial scale, nor did the company own the coating or winding equipment needed to manufacture batteries. The researchers had simply built a crude lab prototype.

Enter Isao Kuribayashi, an Asahi Chemical research executive who had been part of the team that created the battery. In his book, A Nameless Battery with Untold Stories, Kuribayashi recounted how he and a colleague sought out consultants in the United States who could help with the battery’s manufacturing. One consultant recommended Battery Engineering, a tiny firm based in a converted truck garage in the Hyde Park area of Boston. The company was run by a small band of Ph.D. scientists who were experts in the construction of unusual batteries. They had built batteries for a host of uses, including fighter jets, missile silos, and downhole drilling rigs.

Nikola Marincic, working at Battery Engineering in Boston, transformed Asahi Chemical’s crude prototype [below] into preproduction cells. Lidija Ortloff

Asahi Chemical’s crude prototype: an unusual glass vessel containing a liquid.

So Kuribayashi and his colleague flew to Boston in June of 1986, showing up at Battery Engineering unannounced with three jars of slurry—one containing the cathode, one the anode, and the third the electrolyte. They asked company cofounder Nikola Marincic to turn the slurries into cylindrical cells, like the kind someone might buy for a flashlight.

“They said, ‘If you want to build the batteries, then don’t ask any more questions,’” Marincic told me in a 2020 interview. “They didn’t tell me who sent them, and I didn’t want to ask.”

Kuribayashi and his colleague further stipulated that Marincic tell no one about their battery. Even Marincic’s employees didn’t know until 2020 that they had participated in the construction of the world’s first preproduction lithium-ion cells.

Marincic charged $30,000 ($83,000 in today’s dollars) to build a batch of the batteries. Two weeks later, Kuribayashi and his colleague departed for Japan with a box of 200 C-size cells.

Even with working batteries in hand, however, Kuribayashi still met resistance from Asahi Chemical’s directors, who continued to fear moving into an unknown business.

Sony gets pulled into the game

Kuribayashi wasn’t ready to give up. On 21 January 1987, he visited Sony’s camcorder division to make a presentation about Asahi Chemical’s new battery. He took one of the C cells and rolled it across the conference room table to his hosts.

Kuribayashi didn’t give many more details in his book, simply writing that by visiting Sony, he hoped to “confirm the battery technology.”

Sony, however, did more than “confirm” it. By this time, Sony was considering developing its own rechargeable lithium battery, according to its corporate history. When company executives saw Asahi’s cell, they recognized its enormous value. Because Sony was both a consumer electronics manufacturer and a battery manufacturer, its management team understood the battery from both a customer’s and a supplier’s perspective.

And the timing was perfect. Sony engineers were working on a new camcorder, later to be known as the Handycam, and that product dearly needed a smaller, lighter battery. To them, the battery that Kuribayashi presented seemed like a gift from the heavens.

John Goodenough and his coinventor, Koichi Mizushima, convinced the Atomic Energy Research Establishment to fund the cost of patenting their lithium cobalt oxide battery but had to sign away their financial rights to do so. U.S. PATENT AND TRADEMARK OFFICE

Several meetings followed. Some Sony scientists were allowed inside Asahi’s labs, and vice versa, according to Kuribayashi. Ultimately, Sony proposed a partnership. Asahi Chemical declined.

Here, the story of the lithium-ion battery’s journey to commercialization gets hazy. Sony researchers continued to work on developing rechargeable lithium batteries, using a chemistry that Sony’s corporate history would later claim was created in house. But Sony’s battery used the same essential chemistry as Asahi Chemical’s. The cathode was lithium cobalt oxide; the anode was petroleum coke; the liquid electrolyte contained lithium ions.

What is clear is that for the next two years, from 1987 to 1989, Sony engineers did the hard work of transforming a crude prototype into a product. Led by battery engineer Yoshio Nishi, Sony’s team worked with suppliers to develop binders, electrolytes, separators, and additives. They developed in-house processes for heat-treating the anode and for making cathode powder in large volumes. They deserve credit for creating a true commercial product.

Only one step remained. In 1989, one of Sony’s executives called the Atomic Energy Research Establishment in Harwell, England. The executive asked about one of the lab’s patents that had been gathering dust for eight years—Goodenough’s cathode. He said Sony was interested in licensing the technology.

Scientists and executives at the Harwell lab scratched their heads. They couldn’t imagine why anyone would be interested in the patent “Electrochemical Cell with New Fast Ion Conductors.”

“It was not clear what the market was going to be, or how big it would be,” Bill Macklin, an AERE scientist at the time, told me. A few of the older scientists even wondered aloud whether it was appropriate for an atomic lab in England to share secrets with a company in Japan, a former World War II adversary. Eventually, though, a deal was struck.

Sony takes it across the finish line

Sony introduced the battery in 1991, giving it the now-familiar moniker “lithium-ion.” It quickly began to make its way into camcorders, then cellphones.

By that time, 19 years had passed since Whittingham’s invention. Multiple entities had had the opportunity to take this technology all the way—and had dismissed it.

First, there was Exxon, whose executives couldn’t have dreamed that lithium-ion batteries would end up enabling electric vehicles to compete with oil in a big way. Some observers would later contend that, by abandoning the technology, Exxon had conspired to suppress a challenger to oil. But Exxon licensed the technology to three other companies, and none of those succeeded with it, either.

Then there was the University of Oxford, which had refused to pay for a patent.

Finally, there was Asahi Chemical, whose executives struggled with the decision of whether to enter the battery market. (Asahi finally got into the game in 1993, teaming up with Toshiba to make lithium-ion batteries.)

Sony and AERE, the entities that gained the most financially from the battery, both benefitted from luck. The Atomic Energy Research Establishment paid only legal fees for what turned out to be a valuable patent and later had to be reminded that it even owned the patent. AERE’s profits from its patent are unknown, but most observers agree that it reaped at least $50 million, and possibly more than $100 million, before the patent expired.

Sony, meanwhile, had received that fortuitous visit from Asahi Chemical’s Kuribayashi, which set the company on the path toward commercialization. Sony sold tens of millions of cells and then sublicensed the AERE patent to more than two dozen other Asian battery manufacturers, which made billions more. (In 2016, Sony sold its battery business to Murata Manufacturing for 17.5 billion yen, roughly $126 million today).

None of the original investigators—Whittingham, Goodenough, and Yoshino—received a cut of these profits. All three, however, shared the 2019 Nobel Prize in Chemistry. Sony’s Yoshio Nishi, by then retired, wasn’t included in the Nobel, a decision he criticized at a press conference, according to the Mainichi Shimbun newspaper.

In retrospect, lithium-ion’s early history now looks like a tale of two worlds. There was a scientific world and a business world, and they seldom overlapped. Chemists, physicists, and materials scientists worked quietly, sharing their triumphs in technical publications and on overhead projectors at conferences. The commercial world, meanwhile, did not look to university scientists for breakthroughs, and failed to spot the potential of this new battery chemistry, even those advances made by their own researchers.

Had it not been for Sony, the rechargeable lithium battery might have languished for many more years. Almost certainly, the company succeeded because its particular circumstances prepared it to understand and appreciate Kuribayashi’s prototype. Sony was already in the battery business, it needed a better battery for its new camcorder, and it had been toying with the development of its own rechargeable lithium battery. Sony engineers and managers knew exactly where this puzzle piece could go, recognizing what so many others had overlooked. As Louis Pasteur had famously stated more than a century earlier, “Chance favors the prepared mind.”

The story of the lithium-ion battery shows that Pasteur was right.

This article appears in the August 2023 print issue as “The Lithium-ion Battery’s Long and Winding Road.”

Reference: https://ift.tt/aBGTWLS

Saturday, July 29, 2023

This Rwandan Engineer is Learning How to Manage Humanitarian Projects




After several years of volunteering for IEEE humanitarian technology projects, Samantha Mugeni Niyoyita decided she needed more than just technical skills to help underserved communities become more self-sufficient. The IEEE member from Kigali, Rwanda, participated in installing portable sinks in nearby rural markets to curb the spread of COVID-19 and provided clean water and sanitation services to people displaced by the Mount Nyiragongo volcano eruption in 2021.

Niyoyita wanted to learn how to tackle other issues such as access to quality health care, understanding different cultures, and becoming familiar with local policies. And she felt she needed to enhance her leadership and communications skills and learn how to manage projects.

Thanks to a scholarship from IEEE Smart Village, she is now getting that education through the master’s degree program in development practice from Regis University, in Denver. The program, offered virtually and in person, combines theory and hands-on training on topics such as community outreach and engagement, health care, the environment, and sustainability. It teaches leadership and other soft skills.

In addition to bringing electricity to remote communities, IEEE Smart Village offers educational and employment opportunities. To be eligible for its scholarship, the student’s thesis project must support the program’s mission.

Niyoyita, who attends classes remotely, is a process engineer at Africa Improved Foods, also in Kigali. AIF manufactures porridge from maize and other cereals and fortifies it with vitamins and minerals. She has worked there for more than four years.

“Smart Village wants to empower its members so that we can implement projects in our local community knowing what the best practices are,” she says.

She acknowledges she would not have been able to afford to attend Regis without help from IEEE.

Electronic medical records to improve care

Niyoyita is now in the second year of the degree program. Her research project is to assess the impact of digitizing the medical records of primary care clinics, known as health posts, in rural Rwanda.

“The health post records are mostly paper-based, and transitioning to electronic records would improve patient outcomes,” Niyoyita says. “This provides easy access to records and improves coordination of care.”

She plans to evaluate just how access to electronic records by health care professionals can improve patient care.

Her scholarship of US $5,045 was funded by donations to IEEE Smart Village. Since the educational program was launched in 2015, more than 30 individuals from 16 countries have participated.

“I was fortunate to receive this scholarship,” she says. “It has helped me a lot when it comes to soft skills. As an engineer, normally we tend to be very technical. Expressing ourselves and sharing our skills and expertise are the kinds of things you can only learn through a social science master’s degree.”

Many opportunities as an industrial engineer

As a youngster, Niyoyita was more interested in subjects that required her to reason and think creatively instead of memorizing information. She excelled at mathematics and physics.

“That was how I got into engineering,” she says, adding that she also was inspired by her brother, an engineer.

The degree from Regis is in addition to those Niyoyita already holds from the University of Applied Sciences and Arts, known as HES-SO Valais-Wallis, in Sion, Switzerland. She earned a bachelor’s degree in industrial systems engineering in 2015 and a master’s in engineering with a concentration in mechatronics in 2017.

She chose to study industrial engineering, she says, because she finds it to be a “discipline that offers numerous pathways to various fields and career opportunities. I’m able to understand concept designs—which includes mechanical and electrical—programming, and automation. You have a wealth of career opportunities and a chance to make an impact.”

“IEEE Smart Village wants to empower its members so that we can implement projects in our local community knowing what the best practices are.”

At AIF, she analyzes the company’s processes to identify bottlenecks in the manufacturing line, and she proposes ways to fix them.

“We receive these cereals and clean and grind them,” she says. “We have a cooking section and fortify the cereals through mixing. Then we package and sell them.”

She evaluates the production flow and checks on the performance of the equipment. In addition, she provides technological support when new products are being developed.

AIF is benefiting from the training she’s receiving from the master’s degree program, she says, as she is learning to lead teams, provide innovative solutions, and collaborate with others.

Bringing a successful IEEE power conference to Rwanda

Niyoyita joined IEEE while a student at HES-SO Valais-Wallis because she needed access to its journals for her research papers. After she graduated, she continued her membership and started volunteering for IEEE Smart Village in 2019. She served as a secretary for its Africa Working Group team, which worked on humanitarian projects.

She also got involved in organizing conferences in Africa. Her first event was the 2019 PowerAfrica Conference, held in Abuja, Nigeria. It covered emerging power system technologies, applications, government policies, and regulatory frameworks. As a member of the conference’s technical program committee, she helped develop the program and reviewed article submissions. She also was a speaker on the IEEE Women in Engineering panel.

Based on that positive experience, she says, she vowed to bring the conference to Rwanda—which she did last year. As cochair, she oversaw the budget, conference logistics, and other arrangements to “ensure that local and foreign attendees had an excellent experience,” she says. More than 300 people from 43 countries attended.

Providing entrepreneurs with skills to succeed

One project that Niyoyita has put on the back burner because of her work and school commitments is providing her country’s technicians with the skills they need to become entrepreneurs.

Many recent graduates of vocational technical schools in rural Rwanda have told her they want to start their own company, she says, but she has noticed they lack the skills to do so.

“Even though they provide problem-solving products or ideas, they often lack the marketing skills and financial literacy to be able to sustain their project,” she says. “They also need to know how to pitch an idea and make a proposal so they can get funding.”

She would like to create an after-school incubation hub to provide the technicians with training, access to the Internet so they can flesh out their ideas, mentorship opportunities, and advisors who can tell them where to find financing.

“I was able to get some of the skills from the master’s degree program,” she says, “but most of them I got from my work and also from my involvement in IEEE.”

Reference: https://ift.tt/LXHfiuh

This Machine Could Keep Moore’s Law on Track




Over the last half-century, we’ve come to think of Moore’s Law—the roughly biennial doubling of the number of transistors in a given area of silicon, the gains that drive computing forward—as something that just happens, as though it were a natural, inevitable process, akin to evolution or aging. The reality, of course, is much different. Keeping pace with Moore’s Law requires almost unimaginable expenditures of time, energy, and human ingenuity—thousands of people on multiple continents and endless acres of some of the most complex machinery on the planet.

Perhaps the most essential of these machines performs extreme-ultraviolet (EUV) photolithography. EUV lithography, the product of decades of R&D, is now the driving technology behind the past two generations of cutting-edge chips, used in every top-end smartphone, tablet, laptop, and server in the last three years. Yet Moore’s Law must march on, and chipmakers continue to advance their road maps, meaning they’ll need to shrink device geometries even further.

So at ASML, my colleagues and I are developing the next generation of lithography. Called high-numerical-aperture EUV lithography, it involves a major overhaul of the system's internal optics. High-NA EUV should be ready for commercial use in 2025, and chipmakers are depending on its capabilities to keep their promised advances through the end of this decade.

The 3 factors of photolithography

Moore's Law relies on improving the resolution of photolithography so chipmakers can lay down finer and finer circuits. Over the last 35 years, engineers have achieved a resolution reduction of two orders of magnitude by working on a combination of three factors: the wavelength of the light; k1, a coefficient that encapsulates process-related factors; and numerical aperture (NA), a measure of the range of angles over which the system can emit light.

The critical dimension, the resolution of a photolithography system, is equal to the wavelength of light used divided by the numerical aperture and multiplied by a coefficient, k1, related to process improvements. Source: IEEE Spectrum

The critical dimension—that is, the smallest possible feature size you can print with a certain photolithography-exposure tool—is proportional to the wavelength of light divided by the numerical aperture of the optics. So you can achieve smaller critical dimensions by using either shorter light wavelengths or larger numerical apertures or a combination of the two. The k1 value can be pushed as close as possible to its physical lower limit of 0.25 by improving manufacturing-process control, for example.
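To make that relationship concrete, here is a back-of-the-envelope sketch in Python, using only figures cited elsewhere in this article: the 13.5-nm EUV wavelength, the practical k1 floor of 0.25, and numerical apertures of 0.33 and 0.55.

    # Critical dimension (resolution) of a photolithography system:
    #   CD = k1 * wavelength / NA
    def critical_dimension(wavelength_nm: float, na: float, k1: float = 0.25) -> float:
        return k1 * wavelength_nm / na

    print(round(critical_dimension(13.5, 0.33), 1))  # ~10.2 nm with today's 0.33-NA EUV optics
    print(round(critical_dimension(13.5, 0.55), 1))  # ~6.1 nm with the planned 0.55-NA optics

All else being equal, raising the NA from 0.33 to 0.55 shrinks the smallest printable feature by the ratio 0.33/0.55, roughly a 40 percent reduction.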

In general, the most economical ways to boost resolution are by increasing the numerical aperture and by improving tool and process control to allow for a smaller k 1. Only after chipmakers run out of options to further improve NA and k1 do they resort to reducing the wavelength of the light source.

Nevertheless, the industry has had to make that wavelength change a number of times. The historical progression of wavelengths went from 365 nanometers, generated using a mercury lamp, to 248 nm, via a krypton-fluoride laser, in the late 1990s, and then to 193 nm, from an argon-fluoride laser, at the beginning of this century. For each generation of wavelength, the numerical aperture of lithography systems was progressively increased before industry jumped to a shorter wavelength.

For example, as the use of 193 nm was coming to an end, a novel approach to increasing NA was introduced: immersion lithography. By placing water between the bottom of the lens and the wafer, the NA could be significantly enlarged from 0.93 to 1.35. From its introduction around 2006, 193-nm immersion lithography was the industry workhorse for leading-edge lithography.
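Continuing the same arithmetic hints at why immersion eventually ran out of headroom. The sketch below uses the NA of 1.35 and the k1 floor of 0.25 cited above:

    # Best single-exposure resolution of 193-nm immersion lithography,
    # using CD = k1 * wavelength / NA with k1 at its 0.25 floor and NA = 1.35.
    cd_193i_nm = 0.25 * 193 / 1.35
    print(round(cd_193i_nm, 1))  # ~35.7 nm -- short of the sub-30-nm features discussed next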

The resolution of photolithography has improved about 10,000-fold over the last four decades. That's due in part to using smaller and smaller wavelengths of light, but it has also required greater numerical aperture and improved processing techniques. Source: ASML

The dawn of EUV

But as the need to print features smaller than 30 nm increased, and because the NA of 193-nm lithography had been maxed out, keeping up with Moore’s Law grew more and more complex. To create features smaller than 30 nm requires either using multiple patterns to produce a single layer of chip features—a technologically and economically burdensome technique—or another change of wavelength. It took more than 20 years and an unparalleled development effort to bring the next new wavelength online: 13.5-nm EUV.

EUV necessitates an entirely new way to generate light. It’s a remarkably complex process that involves hitting molten tin droplets in midflight with a powerful CO2 laser. The laser vaporizes the tin into a plasma, emitting a spectrum of photonic energy. From this spectrum, the EUV optics harvest the required 13.5-nm wavelength and direct it through a series of mirrors before it is reflected off a patterned mask to project that pattern onto the wafer. And all of this must be done in an ultraclean vacuum, because the 13.5-nm wavelength is absorbed by air. (In previous generations of photolithography, light was directed through the mask to project a pattern onto the wafer. But EUV is so readily absorbed that the mask and other optics must be reflective instead.)

In a vacuum chamber, EUV light [purple] reflects off multiple mirrors before bouncing off the photomask [top center]. From there the light continues its journey until it is projected onto the wafer [bottom center], carrying the photomask's pattern. The illustration shows today's commercial system with a 0.33 numerical aperture. The optics in future systems, with an NA of 0.55, will be different. Source: ASML

The switch to EUV from 193-nanometer light did part of the job of decreasing the critical dimension. A process called "design for manufacturing," which involves setting the design rules of circuit blocks to take advantage of photolithography's limits, has done a lot to reduce k1. Now it's time to boost numerical aperture again, from today's 0.33 to 0.55.

Making high-NA EUV work

Increasing the NA from today's 0.33 to the target value of 0.55 inevitably entails a cascade of other adjustments. Projection systems like EUV lithography have an NA at the wafer and also at the mask. When you increase the NA at the wafer, it also increases the NA at the mask. Consequently, at the mask, the incoming and outgoing cones of light become larger and must be angled away from each other to avoid overlapping. Overlapping cones of light produce an asymmetric diffraction pattern, resulting in unwanted imaging effects.

But there’s a limit to this angle. Because the reflective masks needed for EUV lithography are actually made of multiple layers of material, you can’t ensure getting a proper reflection above a certain reflective angle. EUV masks have a maximum reflective angle of 11 degrees. There are other challenges as well, but reflective angle is the biggest.

If the EUV light strikes the photomask at too steep an angle, it will not reflect properly. Source: ASML

The angle of reflection at the mask in today's EUV is at its limit [left]. Increasing the numerical aperture of EUV would result in an angle of reflection that is too wide [center]. So high-NA EUV uses anamorphic optics, which allow the angle to increase in only one direction [right]. The field that can be imaged this way is half the size, so the pattern on the mask must be distorted in one direction, but that's good enough to maintain throughput through the machine. Source: ASML

The only way to overcome this challenge is to increase a property called demagnification. Demagnification is exactly what it sounds like—taking the reflected pattern from the mask and shrinking it. To compensate for the reflective-angle problem, my colleagues and I had to double the demagnification to 8x. As a consequence, the part of the mask imaged will be much smaller on the wafer. This smaller image field means it will take longer to produce the complete chip pattern. Indeed, this requirement would reduce the throughput of our high-NA scanner to under 100 wafers per hour—a productivity level that would make chip manufacturing uneconomical.
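A rough sketch of the angle arithmetic shows why the demagnification had to double. It assumes a baseline demagnification of 4x (implied by the doubling to 8x) and a chief-ray angle at the mask of about 6 degrees, a typical EUV figure that is not given in this article:

    import math

    # Largest angle of incidence at the mask, approximated as the chief-ray angle
    # plus the half-angle of the illumination cone, asin(NA_wafer / demagnification).
    def max_mask_angle_deg(na_wafer: float, demag: float, chief_ray_deg: float = 6.0) -> float:
        return chief_ray_deg + math.degrees(math.asin(na_wafer / demag))

    print(round(max_mask_angle_deg(0.33, 4), 1))  # ~10.7 deg: just under the 11-degree mask limit
    print(round(max_mask_angle_deg(0.55, 4), 1))  # ~13.9 deg: too steep for the multilayer mask
    print(round(max_mask_angle_deg(0.55, 8), 1))  # ~9.9 deg: workable again at 8x demagnification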

Thankfully, we found that it is necessary to increase the demagnification in only one direction—the one in which the largest reflective angles occur. The demagnification in the other direction can remain unchanged. This results in an acceptable field size on the wafer—about half the size used in today’s EUV systems, or 26 by 16.5 millimeters instead of 26 by 33 mm. This kind of direction-dependent, or anamorphic, demagnification forms the basis of our high-NA system. The optics manufacturer Carl Zeiss has made a herculean effort to design and manufacture an anamorphic lens with the specifications required for our new machine.
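The field-size figures follow directly from that anamorphic choice. As a sketch (the roughly 104-by-132-millimeter usable pattern area of a standard EUV reticle is an assumption, not a number given in this article):

    # Wafer field sizes quoted in the article, in millimeters,
    # and the mask pattern area each one requires.
    full_field_mm = (26, 33)     # today's 0.33-NA EUV: 4x demagnification in both directions
    half_field_mm = (26, 16.5)   # high-NA EUV: 4x in one direction, 8x in the other

    print(full_field_mm[0] * 4, full_field_mm[1] * 4)    # -> 104 132
    print(half_field_mm[0] * 4, half_field_mm[1] * 8)    # -> 104 132.0

The pattern area needed on the mask stays the same in both cases, which is what lets the anamorphic approach keep using standard-size reticles.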

To ensure the same productivity levels with the half-size field, we had to redevelop the system’s reticle and wafer stages—the platforms that hold the mask and wafer, respectively—and move them in sync with each other as the scanning process takes place. The redesign resulted in nanometer-precision stages with acceleration improved by a factor of four.
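To see why the stages needed such a redesign, here is a rough count of exposure fields per wafer. It assumes a 300-millimeter wafer and ignores edge effects, neither of which is spelled out in this article:

    import math

    wafer_area_mm2 = math.pi * 150**2          # 300-mm wafer
    full_field_mm2 = 26 * 33                   # today's EUV field
    half_field_mm2 = 26 * 16.5                 # high-NA EUV field

    print(round(wafer_area_mm2 / full_field_mm2))   # ~82 fields per wafer
    print(round(wafer_area_mm2 / half_field_mm2))   # ~165 fields per wafer

Roughly twice as many fields per wafer means the reticle and wafer stages must step, accelerate, and settle about twice as often to hold the same wafers-per-hour throughput.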

High-NA EUV in production in 2025

The first high-NA EUV system, the ASML EXE:5000, will be installed in a new lab that we’re opening jointly with the Belgium-based nanoelectronics research facility Imec, in early 2024. This lab will allow customers, mask makers, photoresist suppliers, and others to develop the infrastructure needed to make high-NA EUV a reality.

And it is essential that we do make it a reality, because high-NA EUV is a critical component in keeping Moore’s Law alive. Getting to 0.55 NA won’t be the final step, though. From there, ASML, Zeiss, and the entire semiconductor ecosystem will be stretching even further toward technologies that are better, faster, and innovative in ways we can hardly imagine yet.

Reference: https://ift.tt/B8gzNF4

Friday, July 28, 2023

Political Backlash Ramps Up Digital Privacy Laws




The wheels of justice may turn slowly, but tech ramifications sometimes turn around on a shorter timetable.

The U.S. Supreme Court's 2022 overruling of its landmark 1973 Roe v. Wade decision—alongside subsequent state-level prosecutions for abortions—provoked a pro-privacy backlash now wending its way through administrations and legislatures. At the same time, though, there may be a catch. Between industry lobbying and legislative mistakes, some of the proposed or recent rules may leave room for data brokers to keep profiting and for buyers to continue obtaining people's locations without explicit consent.

At the moment, unlike in the early 1970s when the previous Supreme Court precedent was set, broad-sweeping digital toolkits are widely available. In states that have tightened their abortion laws and aim to prosecute women who seek or obtain abortions in defiance of those laws, prosecutors have access to mobile phone location histories—currently available on the open market throughout the U.S.

“Even if you are a privacy-conscious person, just by going out in public, there are going to be digital breadcrumbs.”
—Alex Marthews, Restore the Fourth

“I think there is increased anxiety that is being spurred in part by the overruling of Roe vs. Wade,” says Alex Marthews, national chair of Restore The Fourth, a civil society organization in Boston. “There is anxiety about residents’ browser and location information being subject to information requests in states that have essentially outlawed abortion,” he says.

Political leaders in both parties are responding. The Republican-led U.S. House Judiciary Committee last week held a markup hearing for a bill that would prevent U.S. law enforcement and intelligence agencies from buying cellphone user data. And the Democrat-led U.S. Department of Health and Human Services is preparing an update to the Health Insurance Portability and Accountability Act (HIPAA) that would provide protection for abortion-related information.

At the state level, Washington, California, and Massachusetts state legislators have introduced bills that seek to limit abortion-related data sharing. Washington's, which passed in April, requires users to request the deletion of their health data but then obliges companies to comply. The so-called Location Shield Act under consideration in Massachusetts would go further, by preventing companies from selling location data, regardless of user consent. The act would also allow people to sue data brokers for misuse, something lobbyists managed to negotiate out of earlier drafts of both California's 2018 Consumer Privacy Act (CCPA) and the European Union's 2018 General Data Protection Regulation (GDPR). A more recent bill under consideration in California would have tighter protections.

The Massachusetts bill does not prevent re-identification from supposedly anonymized location data. The bill seeks to limit location data to a radius greater than 564 meters (1,850 feet, as specified in the statute). But that is not enough, according to David, a privacy engineering consultant who did not want to provide his last name, citing his own privacy concerns. At least one abortion clinic in western Massachusetts, for example, is more than 564 meters from any other facility, so a location blurred to that radius would still point unambiguously to the clinic. It is also easy to reconstruct a person's movements, even with intermittently sampled location data. "This is a major flaw," David says.
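David's first objection can be illustrated in a few lines of code. This is a minimal sketch with made-up coordinates, not data from the bill or from any real clinic: if only one sensitive facility sits within the 564-meter blur radius of a reported point, the blurring hides nothing.

    import math

    def distance_m(a, b):
        # Rough planar approximation of the distance between two (lat, lon) points,
        # good enough at city scale.
        dx = (a[0] - b[0]) * 111_320
        dy = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
        return math.hypot(dx, dy)

    # Hypothetical facilities and a location report blurred to within 564 m.
    facilities = {"clinic": (42.3700, -72.5300), "hospital": (42.3950, -72.5200)}
    blurred_report = (42.3710, -72.5310)

    candidates = [name for name, loc in facilities.items()
                  if distance_m(blurred_report, loc) <= 564]
    print(candidates)  # ['clinic'] -- only one plausible destination remains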

The office of the bill’s sponsor, Massachusetts state senator Cindy Creem, a Democrat, did not respond to IEEE Spectrum’s questions about the bill.

In California, tech companies have provided partial data to law enforcement, such as when agencies act on a so-called geo-fence warrant. Then, after law enforcement agents have analyzed the partial data and identified a smaller list of devices of interest, tech companies have provided fuller data on those devices. However, a California appeals court has ruled that broad geo-fence warrants violate the Fourth Amendment's protection against unreasonable searches.

Instead, as more and more jurisdictions curtail location sharing, tech companies may need to prepare to build data catalogs that track where they store personal location data and for what purposes they may use it. Companies will also need to set expiration dates for how long they can use data, as they already do under the EU's GDPR. They will need to monitor and report on their own handling of personal location data and build logic for deleting it in accordance with the appropriate rules.
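What such a catalog entry might look like is an open question; the record layout below is purely a hypothetical sketch of the idea, not any company's or regulator's schema:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class LocationDataRecord:
        user_id: str
        storage_system: str      # where the data lives, e.g. "analytics-warehouse" (made up)
        purpose: str             # declared purpose of use, e.g. "fraud-detection" (made up)
        collected_at: datetime
        retention: timedelta     # how long the data may be kept

        def expired(self, now: datetime) -> bool:
            return now >= self.collected_at + self.retention

    record = LocationDataRecord("user-123", "analytics-warehouse", "fraud-detection",
                                datetime(2023, 1, 15), timedelta(days=180))
    print(record.expired(datetime(2023, 7, 28)))  # True -- past retention, so schedule deletion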

Even with such safeguards in place, companies and law enforcement agencies intent on tracking people are likely to find a way to do it, warns Marthews. “Even if you are a privacy-conscious person, just by going out in public, there are going to be digital breadcrumbs that you leave.”

Reference: https://ift.tt/mA1evw5

The Sneaky Standard

A version of this post originally appeared on Tedium, Ernie Smith's newsletter, which hunts for the end of the long tail. Personal c...