Wednesday, November 30, 2022

John Bardeen’s Terrific Transistorized Music Box




On 16 December 1947, after months of work and refinement, the Bell Labs physicists John Bardeen and Walter Brattain completed their critical experiment proving the effectiveness of the point-contact transistor. Six months later, Bell Labs gave a demonstration to officials from the U.S. military, who chose not to classify the technology because of its potentially broad applications. The following week, news of the transistor was released to the press. The New York Herald Tribune predicted that it would cause a revolution in the electronics industry. It did.


How John Bardeen got his music box

In 1949 an engineer at Bell Labs built three music boxes to show off the new transistors. Each Transistor Oscillator-Amplifier Box contained an oscillator-amplifier circuit and two point-contact transistors powered by a B-type battery. It electronically produced five distinct tones, although the sounds were not exactly melodious delights to the ear. The box’s design was a simple LC circuit, consisting of a capacitor and an inductor. The capacitance was selectable using the switch bank, which Bardeen “played” when he demonstrated the box.
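The pitch of each tone follows from the LC resonance relation f = 1/(2π√(LC)): switching in a different capacitor changes the frequency. The Python sketch below illustrates the idea; the inductance and the five capacitor values are illustrative assumptions, not the actual components in Bardeen's box.

```python
# Hedged sketch: tone frequencies of a switch-selectable LC oscillator.
# The inductance and the five capacitor values are illustrative guesses,
# not the actual component values in Bardeen's music box.
import math

INDUCTANCE_H = 0.5  # henries (assumed)

# One switch position per tone; capacitances in farads (assumed).
CAPACITORS_F = [1.0e-6, 0.68e-6, 0.47e-6, 0.33e-6, 0.22e-6]

def lc_resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

for c in CAPACITORS_F:
    tone_hz = lc_resonant_frequency(INDUCTANCE_H, c)
    print(f"C = {c * 1e6:.2f} uF -> tone of roughly {tone_hz:.0f} Hz")
```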

John Bardeen, co-inventor of the point-contact transistor, liked to play the tune “How Dry I Am” on his music box. The Spurlock Museum/University of Illinois at Urbana-Champaign

Bell Labs used one of the boxes to demonstrate the transistor’s portability. In early demonstrations, the instantaneous response of the circuits wowed witnesses, who were accustomed to having to wait for vacuum tubes to warm up. The other two music boxes went to Bardeen and Brattain. Only Bardeen’s survives.

Bardeen brought his box to the University of Illinois at Urbana-Champaign when he joined the faculty in 1951. Despite his groundbreaking work at Bell Labs, he was relieved to move. Shortly after the invention of the transistor, Bardeen’s work environment began to deteriorate. William Shockley, Bardeen’s notoriously difficult boss, prevented him from further involvement in transistors, and Bell Labs refused to allow Bardeen to set up another research group that focused on theory.

Frederick Seitz recruited Bardeen to Illinois with a joint appointment in electrical engineering and physics, and he spent the rest of his career there. Although Bardeen earned a reputation as an unexceptional instructor—an opinion his student Nick Holonyak Jr. would argue was unwarranted—he often got a laugh from students when he used the music box to play the Prohibition-era song “How Dry I Am.” He had a key to the sequence of notes taped to the top of the box.

In 1956, Bardeen, Brattain, and Shockley shared the Nobel Prize in Physics for their “research on semiconductors and their discovery of the transistor effect.” That same year, Bardeen collaborated with postdoc Leon Cooper and grad student J. Robert Schrieffer on the work that led to their April 1957 publication in Physical Review of “Microscopic Theory of Superconductivity.” The trio won a Nobel Prize in 1972 for the development of the BCS model of superconductivity (named after their initials). Bardeen was the first person to win two Nobels in the same field and remains the only double laureate in physics. He died in 1991.

Overcoming the “inherent vice” of Bardeen’s music box

Curators at the Smithsonian Institution expressed interest in the box, but Bardeen instead offered it on a long-term loan to the World Heritage Museum (predecessor to the Spurlock Museum) at the University of Illinois. That way he could still occasionally borrow it for use in a demonstration.

In general, though, museums frown upon allowing donors—or really anyone—to operate objects in their collections. It’s a sensible policy. After all, the purpose of preserving objects in a museum is so that future generations have access to them, and any additional use can cause deterioration or damage. (Rest assured, once the music box became part of the accessioned collections after Bardeen’s death, few people were allowed to handle it other than for approved research.) But musical instruments, and by extension music boxes, are functional objects: Much of their value comes from the sound they produce. So curators have to strike a balance between use and preservation.

As it happens, Bardeen’s music box worked up until the 1990s. That’s when “inherent vice” set in. In the lexicon of museum practice, inherent vice refers to the natural tendency for certain materials to decay despite preservation specialists’ best attempts to store the items at the ideal temperature, humidity, and light levels. Nitrate film, highly acidic paper, and natural rubber are classic examples. Some objects decay quickly because the mixture of materials in them creates unstable chemical reactions. Inherent vice is a headache for any curator trying to keep electronics in working order.

The museum asked John Dallesasse, a professor of electrical engineering at Illinois, to take a look at the box, hoping that it just needed a new battery. Dallesasse’s mentor at Illinois was Holonyak, whose mentor was Bardeen. So Dallesasse considered himself Bardeen’s academic grandson.

It soon became clear that one of the original point-contact transistors had failed, and several of the wax capacitors had degraded, Dallesasse told me recently. But returning the music box to operable status was not as simple as replacing those parts. Most professional conservators abide by a code of ethics that limits their intervention; they make only changes that can be easily reversed.

In 2019, University of Illinois professor John Dallesasse carefully restored Bardeen’s music box. The Spurlock Museum/University of Illinois at Urbana-Champaign

The museum was lucky in one respect: The point-contact transistor had failed as an open circuit instead of a short. This allowed Dallesasse to jumper in replacement parts, running wires from the music box to an external breadboard to bypass the failed components, instead of undoing any of the original soldering. He made sure to use time-period appropriate parts, including a working point-contact transistor borrowed from John’s son Bill Bardeen, even though that technology had been superseded by bipolar junction transistors.

Despite Dallesasse’s best efforts, the rewired box emitted a slight hum at about 30 kilohertz that wasn’t present in the original. He concluded that it was likely due to the extra wiring. He adjusted some of the capacitor values to tune the tones closer to the box’s original sounds. Dallesasse and others recalled that the first tone had been lower. Unfortunately, the frequency could not be reduced any further because it was at the edge of performance for the oscillator.

“Restoring the Bardeen Music Box” www.youtube.com

From a preservation perspective, one of the most important things Dallesasse did was to document the restoration process. Bardeen had received the box as a gift without any documentation from the original designer, so Dallesasse mapped out the circuit, which helped him with the troubleshooting. Also, documentary filmmaker Amy Young and multimedia producer Jack Brighton recorded a short video of Dallesasse explaining his approach and technique. Now future historians have resources about the second life of the music box, and we can all hear a transistor-generated rendition of “How Dry I Am.”

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the December 2022 print issue as “John Bardeen’s Marvelous Music Box.”

Reference: https://ift.tt/8gDZ9BE

Physicists Create ‘the Smallest, Crummiest Wormhole You Can Imagine’


Scientists used a quantum computer to explore the ultimate escape route from a black hole.

Watch a Live Interview With Sam Bankman-Fried.


Register for a free live interview on Nov. 30 with Mr. Bankman-Fried, the former chief executive of the failed crypto exchange FTX, by Andrew Ross Sorkin.

Is Spreading Medical Misinformation a Doctor’s Free Speech Right?


Two lawsuits in California have pre-emptively challenged a new law that would punish doctors for misleading patients about Covid-19.

Amazon Wants to Review Your Sleep. (No, Thanks.)


The new Halo Rise studies your body and breathing and rates your restfulness, from “Poor” to “Great.” Who needs this?

Tuesday, November 29, 2022

Used thin client PCs are an unsexy, readily available Raspberry Pi alternative


This ThinkCentre M90n-1 Nano from 2019, passively cooled with a big heatsink, was $145 when the author last looked on eBay. It's not a Raspberry Pi, and it looks like Batman's reception desk system, but it can do the work. (credit: Andrew Cunningham)

"Raspberry Pi boards are hard to get, probably also next year," says Andreas Spiess, single-board enthusiast and YouTuber, in his distinctive Swiss accent. He's not wrong. Spiess says he and his fellow Pi devotees need "a strategy to survive" without new boards, so he suggests looking in one of the least captivating, most overlooked areas of computing: used, corporate-minded thin client PCs.

Andreas Spiess' suggestion to "survive" the Raspberry Pi shortage: cheap thin clients.

Spiess' Pi replacements, suggested and refined by many of his YouTube commenters and Patreon subscribers, are Fujitsu Futros, Lenovo ThinkCentres, and other small systems (some or all of which could be semantically considered "thick clients" or simply "mini PCs," depending on your tastes and retro-grouch sensibilities). They're the kind of systems you can easily find used on eBay, refurbished on Amazon Renewed, or through other enterprise and IT asset disposition sources. They're typically in good shape, given their use and environment. And compared to single-board enthusiast systems, many more are being made and replaced each year.

They've always been there, of course, but it makes more sense to take another look at them now. "Back to the future," as Spiess puts it (in an analogy we're not entirely sure works).


Reference: https://ift.tt/vhJwBzD

The Future of the Transistor Is Our Future




This is a guest post in recognition of the 75th anniversary of the invention of the transistor. It is adapted from an essay in the July 2022 IEEE Electron Device Society Newsletter. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

On the 75th anniversary of the invention of the transistor, a device to which I have devoted my entire career, I’d like to answer two questions: Does the world need better transistors? And if so, what will they be like?

I would argue that, yes, we are going to need new transistors, and I think we have some hints today of what they will be like. Whether we’ll have the will and economic ability to make them is the question.

I believe the transistor is and will remain key to grappling with the impacts of global warming. With its potential for societal, economic, and personal upheaval, climate change calls for tools that give us humans orders-of-magnitude more capability.

Semiconductors can raise the abilities of humanity like no other technology. Almost by definition, all technologies increase human abilities. But for most of them, natural-resource and energy constraints make order-of-magnitude improvements questionable. Transistor-enabled technology is a unique exception for the following reasons.

  1. As transistors improve, they enable new abilities such as computing and high-speed communication, the Internet, smartphones, memory and storage, robotics, artificial intelligence, and other things no one has thought of yet.
  2. These abilities have wide applications, and they transform all technologies, industries, and sciences.
  3. Semiconductor technology is not nearly as limited in growth by its material and energy usage as other technologies. ICs use relatively small amounts of materials, and as they are made smaller, they use even less material while becoming faster, more energy efficient, and more capable.
  4. Theoretically, the energy required for information processing can still be reduced to less than one-thousandth of what is required today. Although we do not yet know exactly how to approach such theoretical efficiency, we know that increasing energy efficiency a thousandfold would not violate physical laws. In contrast, the energy efficiencies of most other technologies, such as motors and lighting, are already at 30 to 80 percent of their theoretical limits.
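As a rough illustration of that last point, one can compare an assumed present-day switching energy with the thermodynamic floor of kT·ln 2 per bit (the Landauer limit). The 1-femtojoule "today" figure in the sketch below is a ballpark assumption, not a number from this essay.

```python
# Hedged illustration of the headroom left by physics for lowering switching
# energy. The 1-femtojoule "today" figure is an assumed ballpark for a modern
# logic operation, not a number taken from this essay.
import math

BOLTZMANN_J_PER_K = 1.380649e-23
TEMPERATURE_K = 300.0

# Thermodynamic (Landauer) floor for erasing one bit: k*T*ln(2).
landauer_floor_j = BOLTZMANN_J_PER_K * TEMPERATURE_K * math.log(2)  # ~2.9e-21 J

assumed_today_j = 1.0e-15  # ~1 fJ per switching event (assumption)

headroom = assumed_today_j / landauer_floor_j
print(f"Landauer floor at 300 K: {landauer_floor_j:.2e} J per bit")
print(f"Assumed energy today:    {assumed_today_j:.2e} J per operation")
print(f"Remaining headroom:      about {headroom:,.0f}x (far beyond 1,000x)")
```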

Transistors: past, present, and future

How we’ll continue to improve transistor technology is relatively clear in the short term, but it gets murkier the farther out you go from today. In the near term, you can glimpse the transistor’s future by looking at its recent past.

The basic planar (2D) MOSFET structure remained unchanged from 1960 until around 2010, when it became impossible to further increase transistor density and decrease the device’s power consumption. My lab at the University of California, Berkeley, saw that point coming more than a decade earlier. We reported the invention of the FinFET, the planar transistor’s successor, in 1999. FinFET, the first 3D MOSFET, changed the flat and wide transistor structure to a tall and narrow one. The benefit is better performance in a smaller footprint, much like the benefit of multistory buildings over single-story ones in a crowded city.

The FinFET is also what’s called a thin-body MOSFET, a concept that continues to guide the development of new devices. It arose from the insight that current will not leak through a transistor within several nanometers of the silicon surface, because the surface potential there is well controlled by the gate voltage. FinFETs take this thin-body concept to heart. The device’s body is the vertical silicon fin, which is covered by oxide insulator and gate metal, leaving no silicon outside the range of strong gate control. FinFETs reduced leakage current by orders of magnitude and lowered the transistor operating voltage. They also pointed toward the path for further improvement: reducing the body thickness even more.

The fin of the FinFET has become thinner and taller with each new technology node. But this progress has now become too difficult to maintain. So the industry is adopting a new 3D thin-body CMOS structure, called gate-all-around (GAA). Here, a stack of semiconductor ribbons makes up the thin body.

Each evolution of the MOSFET structure has been aimed at producing better control over charge in the silicon by the gate [pink]. Dielectric [yellow] prevents charge from moving from the gate into the silicon body [blue].

The 3D thin-body trend will continue from these 3D transistors to 3D-stacked transistors, 3D monolithic circuits, and multichip packaging. In some cases, this 3D trend has already reached great heights. For instance, the regularity of the charge-trap memory-transistor array allowed NAND flash memory to be the first IC to transition from 2D circuits to 3D circuits. Since the first report of 3D NAND by Toshiba in 2007, the number of stacked layers has grown from 4 to beyond 200.

Monolithic 3D logic ICs will likely start modestly, with stacking the two transistors of a CMOS inverter to reduce all logic gates’ footprints [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. But the number of stacks may grow. Other paths to 3D ICs may employ the transfer or deposition of additional layers of semiconductor films, such as silicon, silicon germanium, or indium gallium arsenide onto a silicon wafer.

The thin-body trend might meet its ultimate endpoint in 2D semiconductors, whose thickness is measured in atoms. Molybdenum disulfide molecules, for example, are both naturally thin and relatively large, forming a 2D semiconductor that may be no more than three atoms thick yet has very good semiconductor properties. In 2016, engineers in California and Texas used a film of the 2D semiconductor molybdenum disulfide and a carbon nanotube to demonstrate a MOSFET with a critical dimension: a gate length just 1 nanometer across. Even with a gate as short as 1 nm, the transistor leakage current was only 10 nanoamperes per millimeter, comparable with today’s best production transistors.

One can imagine that in the distant future, the entire transistor may be prefabricated as a single molecule. These prefabricated building blocks might be brought to their precise locations in an IC through a process called directed-self-assembly (DSA). To understand DSA, it may be helpful to recall that a COVID virus uses its spikes to find and chemically dock itself onto an exact spot at the surface of particular human cells. In DSA, the docking spots, the “spikes,” and the transistor cargo are all carefully designed and manufactured. The initial docking spots may be created with lithography on a substrate, but additional docking spots may be brought in as cargo in subsequent steps. Some of the cargo may be removed by heat or other means if they are needed only during the fabrication process but not in the final product.

Besides making transistors smaller, we’ll have to keep reducing their power consumption. Here we could see an order-of-magnitude reduction through the use of what are called negative-capacitance field-effect transistors (NCFET). These require the insertion of a nanometer-thin layer of ferroelectric material, such as hafnium zirconium oxide, in the MOSFET’s gate stack. Because the ferroelectric contains its own internal electric field, it takes less energy to switch the device on or off. An additional advantage of the thin ferroelectric is the possible use of the ferroelectric’s capacity to store a bit as the state of its electric field, thereby integrating memory and computing in the same device.

To some degree the devices I’ve described arose out of existing trends. But future transistors may have very different materials, structures, and operating mechanisms from those of today’s transistor. For example, the nanoelectromechanical switch is a return to the mechanical relays of decades past rather than an extension of the transistor. Rather than relying on the physics of semiconductors, it uses only metals, dielectrics, and the force between closely spaced conductors with different voltages applied to them.

All these examples were demonstrated experimentally years ago. However, bringing them to production will require much more time and effort than previous breakthroughs in semiconductor technology did.

Getting to the future

Will we be able to achieve these feats? Some lessons from the past indicate that we could.

The first lesson is that the progress of transistor technology has not been even or smooth. Around 1980, the rising power consumption per chip reached a painful level. The adoption of CMOS, replacing NMOS and bipolar technologies—and later, the gradual reduction of operation voltage from 5 volts to 1—gave the industry 30 years of more or less straightforward progress. But again, power became an issue. Between 2000 and 2010, the heat generated per square centimeter of IC was projected by thoughtful researchers to soon reach that of the nuclear-reactor core. The adoption of 3D thin-body FinFET and multicore processor architectures averted the crisis and ushered in another period of relatively smooth progress.

The history of transistor technology may be described as climbing one mountain after another. Only when we got to the top of one were we able to see the vista beyond and map a route to climb the next taller and steeper mountain.

The second lesson is that the core strength of the semiconductor industry—nanofabrication—is formidable. History proves that, given sufficient time and economic incentives, the industry has been able to turn any idea into reality, as long as that idea does not violate scientific laws.

But will the industry have sufficient time and economic incentives to continue climbing taller and steeper mountains and keep raising humanity’s abilities?

It’s a fair question. Even as the fab industry’s resources grow, the mountains of technology development grow even faster. A time may come when no one fab company can reach the top of the mountain to see the path ahead. What happens then?

The revenue of all semiconductor fabs (both independent and those, like Intel, that are integrated companies) is about one-third of the semiconductor industry revenue. But fabs make up just 2 percent of the combined revenues of the IT, telecommunications, and consumer-electronics industries that semiconductor technology enables. Yet the fab industry bears most of the growing burden of discovering, producing, and marketing new transistors and nanofabrication technologies. That needs to change.

For the industry to survive, the relatively meager resources of the fab industry must be prioritized in favor of fab building and shareholder needs over scientific exploration. While the fab industry is lengthening its research time horizon, it needs others to take on the burden too. Humanity’s long-term problem-solving abilities deserve targeted public support. The industry needs the help of very-long-term exploratory research, publicly funded, in a Bell Labs–like setting or by university researchers with career-long timelines and wider and deeper knowledge in physics, chemistry, biology, and algorithms than corporate research currently allows. This way, humanity will continue to find new transistors and gain the abilities it will need to face the challenges in the centuries ahead.

About the Author

Chenming Hu is the recipient of the 2020 IEEE Medal of Honor, the 2014 U.S. National Medal of Technology and Innovation, and other honors. An IEEE Life Fellow, Hu led the development of the FinFET, the device used by most advanced computer processors today. He has been a professor at the University of California, Berkeley, since 1976.

Reference: https://ift.tt/Grl9jD3

Waiting for Superbatteries




If grain must be dragged to market on an oxcart, how far can it go before the oxen eat up all the cargo? This, in brief, is the problem faced by any transportation system in which the vehicle must carry its own fuel. The key value is the density of energy, expressed with respect to either mass or volume.

The era of large steam-powered ocean liners began during the latter half of the 19th century, when wood was still the world’s dominant fuel. But no liners fired their boilers with wood: There would have been too little space left for passengers and cargo. Soft wood, such as spruce or pine, packs less than 10 megajoules per liter, whereas bituminous coal has 2.5 times as much energy by volume and at least twice as much by mass. By comparison, gasoline has 34 MJ/L and diesel about 38 MJ/L.


But in a world that aspires to leave behind all fuels (except hydrogen or maybe ammonia) and to electrify everything, the preferred measure of stored energy density is watt-hours per liter. By this metric, air-dried wood contains about 3,500 Wh/L, good steam coal around 6,500, gasoline 9,600, aviation kerosene 10,300, and natural gas (methane) merely 9.7—less than 1/1,000 the density of kerosene.
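The volumetric figures in the two preceding paragraphs are linked by a fixed unit conversion: 1 Wh is 3,600 J, so 1 MJ/L is roughly 278 Wh/L. A minimal Python check follows; small differences from the quoted numbers come from rounding and the exact fuel grade assumed.

```python
# Convert volumetric energy density from MJ/L to Wh/L (1 Wh = 3,600 J).
MJ_PER_L_TO_WH_PER_L = 1.0e6 / 3600.0  # about 277.8 Wh/L per MJ/L

fuels_mj_per_l = {
    "soft wood (spruce or pine)": 10.0,
    "gasoline": 34.0,
    "diesel": 38.0,
}

for fuel, mj_per_l in fuels_mj_per_l.items():
    wh_per_l = mj_per_l * MJ_PER_L_TO_WH_PER_L
    print(f"{fuel}: {mj_per_l} MJ/L is roughly {wh_per_l:,.0f} Wh/L")
```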


How do batteries compare with the fuels they are to displace? The first practical battery, Gaston Planté’s lead-acid cell introduced in 1859, has gradually improved from less than 60 Wh/L to about 90 Wh/L. The nickel-cadmium battery, invented by Waldemar Jungner in 1899, now frequently stores more than 150 Wh/L, and today’s best mass-manufactured performers are lithium-ion batteries, the first commercial versions of which came out in 1991. The best energy density now commercially available in very large quantities, in the lithium-ion batteries widely used in electric cars, is about 750 Wh/L. In 2020 Panasonic promised it would reach about 850 Wh/L by 2025 (and do so without the expensive cobalt). Eventually, the company aims to reach a 1,000-Wh/L product.


Claims of new energy-density records for lithium-ion batteries appear regularly. In March 2021, Sion Power announced an 810-Wh/L pouch cell; three months later NanoGraf announced a cylindrical cell with 800 Wh/L. Earlier claims spoke of even loftier energy densities—QuantumScape mentioned a 1,000-Wh/L cell in a December 2020 claim, and Sion Power of a 1,400-Wh/L cell as far back as 2018. But Sion’s cells came from a pilot production line, not from a routine mass-scale operation, and QuantumScape’s claim was based on laboratory tests of single-layer cells, not on any commercially available multilayer products.

The real-world leader seems to be Amprius Technologies of Fremont, Calif.: In February 2022, the company announced the first delivery of batteries rated as high as 1,150 Wh/L, to a maker of a new generation of high-altitude uncrewed aircraft, to be used to relay signals. This is obviously a niche market, orders of magnitude smaller than the potential market for electric vehicles, but it is a welcome confirmation of continuous density gains.

There is a long way to go before batteries rival the energy density of liquid fuels. Over the past 50 years, the highest energy density of mass-produced batteries has roughly quintupled, from less than 150 to more than 700 Wh/L. But even if that trend continues for the next 50 years, we would still see top densities of about 3,500 Wh/L, no more than a third that of kerosene. The wait for superbatteries ready to power intercontinental flight may not be over by even 2070.
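A quick sketch of that extrapolation, simply assuming the historical multiplier repeats over the next 50 years (with no claim that it will):

```python
# Hedged extrapolation: assume the past 50 years' multiplier simply repeats.
# Start and end figures follow the text; nothing guarantees the trend holds.
start_wh_per_l = 150.0      # best mass-produced density ~50 years ago
today_wh_per_l = 700.0      # best mass-produced density today
kerosene_wh_per_l = 10_300.0

growth_factor = today_wh_per_l / start_wh_per_l      # about 4.7x over 50 years
projected_2070 = today_wh_per_l * growth_factor      # about 3,300 Wh/L

print(f"Historical multiplier: {growth_factor:.1f}x over roughly 50 years")
print(f"Projected density around 2070: {projected_2070:,.0f} Wh/L")
print(f"Fraction of kerosene: {projected_2070 / kerosene_wh_per_l:.0%}")
```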

This article appears in the December 2022 print issue.

Reference: https://ift.tt/42PCNqB

The Transistor at 75




Seventy-five years is a long time. It’s so long that most of us don’t remember a time before the transistor, and long enough for many engineers to have devoted entire careers to its use and development. In honor of this most important of technological achievements, this issue’s package of articles explores the transistor’s historical journey and potential future.


In “The First Transistor and How it Worked,” Glenn Zorpette dives deep into how the point-contact transistor came to be. Then, in “The Ultimate Transistor Timeline,” Stephen Cass lays out the device’s evolution, from the flurry of successors to the point-contact transistor to the complex devices in today’s laboratories that might one day go commercial. The transistor would never have become so useful and so ubiquitous if the semiconductor industry had not succeeded in making it small and cheap. We try to give you a sense of that scale in “The State of the Transistor.”

So what’s next in transistor technology? In less than 10 years’ time, transistors could take to the third dimension, stacked atop each other, write Marko Radosavljevic and Jack Kavalieros in “Taking Moore’s Law to New Heights.” And we asked experts what the transistor will be like on the 100th anniversary of its invention in “The Transistor of 2047.”

Meanwhile, IEEE’s celebration of the transistor’s 75th anniversary continues. The Electron Devices Society has been at it all year, writes Joanna Goodrich in The Institute, and has events planned into 2023 that you can get involved in. So go out and celebrate the device that made the modern world possible.

Reference: https://ift.tt/lhgBoca

‘No Cooperation’: How Sam Bankman-Fried Tried to Cling to FTX


Emails and text messages show how lawyers and executives struggled to persuade the 30-year-old entrepreneur to give up control of his collapsing crypto exchange.

Monday, November 28, 2022

Sympathy, and Job Offers, for Twitter’s Misinformation Experts


Seeing false and toxic information as a potentially expensive liability, companies in and outside the tech industry are angling to hire people who can keep it in check.

The Excitement Around E-Sports Is Growing. But Where Are the Profits?


Traditional sports owners who invested in competitive video gaming say the money isn’t flowing in as quickly as they had expected.

Meta Fined $275 Million for Breaking E.U. Data Privacy Law


The penalty, imposed by Irish data regulators, brings European fines against Facebook’s parent company to more than $900 million since last year.

Sunday, November 27, 2022

The Ultimate Transistor Timeline




Even as the initial sales receipts for the first transistors to hit the market were being tallied up in 1948, the next generation of transistors had already been invented (see “The First Transistor and How it Worked.”) Since then, engineers have reinvented the transistor over and over again, raiding condensed-matter physics for anything that might offer even the possibility of turning a small signal into a larger one.


But physics is one thing; mass production is another. This timeline shows the time elapsed between the invention of several transistor types and the year they became commercially available. To be honest, finding the latter set of dates was often a murky business, and we welcome corrections. But it’s clear that the initial breakneck pace of innovation seems to have slowed from 1970 to 2000, likely because these were the golden years for Moore’s Law, when scaling down the dimensions of the existing metal-oxide-semiconductor field-effect transistors (MOSFETs) led to computers that doubled in speed every couple of years for the same money. Then, when the inevitable end of this exponential improvement loomed on the horizon, a renaissance in transistor invention seems to have begun and continues to this day.

This article appears in the December 2022 print issue.

Reference: https://ift.tt/90xfKRV

Saturday, November 26, 2022

The State of the Transistor in 3 Charts




The most obvious change in transistor technology in the last 75 years has been just how many we can make. Reducing the size of the device has been a titanic effort and a fantastically successful one, as these charts show. But size isn’t the only feature engineers have been improving.


In 1947, there was only one transistor. According to TechInsights’ forecast, the semiconductor industry is on track to produce almost 2 billion trillion (10²¹) devices this year. That’s more transistors than were cumulatively made in all the years prior to 2017. Behind that barely conceivable number is the continued reduction in the price of a transistor, as engineers have learned to integrate more and more of them into the same area of silicon.


Scaling down transistors in the 2D plane of the silicon has been a smashing success: Transistor density in logic circuits has increased more than 600,000-fold since 1971. Reducing transistor size requires using shorter wavelengths of light, such as extreme ultraviolet, and other lithography tricks to shrink the space between transistor gates and between metal interconnects. Going forward, it’s the third dimension, where transistors will be built atop one another, that counts. This trend is more than a decade old in flash memory, but it’s still in the future for logic (see “Taking Moore’s Law to New Heights.”)
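That 600,000-fold increase since 1971 implies a compound growth rate of roughly 30 percent per year, or a doubling about every two to three years, the cadence usually associated with Moore's Law. A quick arithmetic check in Python:

```python
# Implied compound growth from a 600,000-fold density increase, 1971-2022.
# Purely arithmetic on the figures quoted above.
import math

fold_increase = 600_000
years = 2022 - 1971  # 51 years

annual_growth = fold_increase ** (1 / years)           # about 1.30x per year
doubling_time = math.log(2) / math.log(annual_growth)  # about 2.6 years

print(f"Implied annual growth: {annual_growth:.2f}x per year")
print(f"Implied doubling time: {doubling_time:.1f} years")
```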


Perhaps the crowning achievement of all this effort is the ability to integrate millions, even billions, of transistors into some of the most complex systems on the planet: CPUs. Here’s a look at some of the high points along the way.

What Transistors Have Become


Besides making them tiny and numerous, engineers have devoted their efforts to enhancing the device’s other qualities. Here is a small sampling of what transistors have become in the last 75 years:



Ephemeral:

Researchers in Illinois developed circuits that dissolve in the body using a combination of ultrathin silicon membranes, magnesium conductors, and magnesium oxide insulators. Five minutes in water was enough to turn the first generation to mush. But recently researchers used a more durable version to make temporary cardiac pacemakers that release an anti-inflammatory drug as they disappear.



Fast:

The first transistor was made for radio frequencies, but there are now devices that operate at frequencies many orders of magnitude higher. Engineers in South Korea and Japan reported the invention of an indium gallium arsenide high-electron-mobility transistor, or HEMT, that reached a maximum frequency of 738 gigahertz. Seeking raw speed, engineers at Northrop Grumman made a HEMT that passed 1 terahertz.




Flat:

Today’s (and yesterday’s) transistors depend on the semiconducting properties of bulk (3D) materials. Tomorrow’s devices might rely on 2D semiconductors, such as molybdenum disulfide and tungsten disulfide. These transistors might be built in the interconnect layers above a processor’s silicon, researchers say. So 2D semiconductors could help lead to 3D processors.



Flexible:

The world is not flat, and neither are the places transistors need to operate. Using indium gallium arsenide, engineers in South Korea recently made high-performance logic transistors on plastic that hardly suffered when bent around a radius of just 4 millimeters. And engineers in Illinois and England have made microcontrollers that are both affordable and bendable.




Invisible:

When you need to hide your computing in plain sight, turn to transparent transistors. Researchers in Fuzhou, China, recently made a see-through analogue of flash memory using organic semiconductor thin-film transistors. And researchers in Japan and Malaysia produced transparent diamond devices capable of handling more than 1,000 volts.



Mnemonic:

NAND flash memory cells can store multiple bits in a single device. Those on the market today store either 3 or 4 bits each. Researchers at Kioxia Corp. built a modified NAND flash cell and dunked it in 77-kelvin liquid nitrogen. A single superchilled transistor could store up to 7 bits of data, or 128 different values.




Talented:

In 2018, engineers in Canada used an algorithm to generate all the possible unique and functional elementary circuits that can be made using just two metal-oxide-semiconductor field-effect transistors. The number of circuits totaled an astounding 582. Increasing the scope to three transistors netted 56,280 circuits, including several amplifiers previously unknown to engineering.



Tough:

Some transistors can take otherworldly punishment. NASA Glenn Research Center built 200-transistor silicon carbide ICs and operated them for 60 days in a chamber that simulates the environment on the surface of Venus—460 °C heat, a planetary-probe-crushing 9.3 megapascals of pressure, and the hellish planet’s corrosive atmosphere.

This article appears in the December 2022 print issue as “The State of the Transistor.”

Reference: https://ift.tt/05UNioI

Wednesday, November 23, 2022

Frederick P. Brooks Jr., Computer Design Innovator, Dies at 91


He was a lead designer of the computers that cemented IBM’s dominance for decades. He later wrote a book on software engineering that became a quirky classic.

Tuesday, November 22, 2022

Meta researchers create AI that masters Diplomacy, tricking human players


A screenshot of an online game of Diplomacy, including a running chat dialog, provided by a Cicero researcher. (credit: Meta AI)

On Tuesday, Meta AI announced the development of Cicero, which it claims is the first AI to achieve human-level performance in the strategic board game Diplomacy. It's a notable achievement because the game requires deep interpersonal negotiation skills, which implies that Cicero has attained a certain mastery of language necessary to win the game.

Even before Deep Blue beat Garry Kasparov at chess in 1997, board games were a useful measure of AI achievement. In 2016, another barrier fell when AlphaGo defeated Go master Lee Sedol. Both of those games follow a relatively clear set of analytical rules (although Go's rules are typically simplified for computer AI).

But with Diplomacy, a large portion of the gameplay involves social skills. Players must show empathy, use natural language, and build relationships to win—a difficult task for a computer player. With this in mind, Meta asked, "Can we build more effective and flexible agents that can use language to negotiate, persuade, and work with people to achieve strategic goals similar to the way humans do?"


Reference: https://ift.tt/8gIqi7w

Thinking about taking your computer to the repair shop? Be very afraid


(credit: Getty Images)

If you’ve ever worried about the privacy of your sensitive data when seeking a computer or phone repair, a new study suggests you have good reason. It found that privacy violations occurred at least 50 percent of the time, not surprisingly with female customers bearing the brunt.

Researchers at the University of Guelph in Ontario, Canada, recovered logs from laptops that had received overnight repairs at 12 commercial shops. The logs showed that technicians from six of the locations had accessed personal data and that two of those shops also copied data onto a personal device. Devices belonging to females were more likely to be snooped on, and that snooping tended to seek more sensitive data, including both sexually revealing and non-sexual pictures, documents, and financial information.

Blown away

“We were blown away by the results,” Hassan Khan, one of the researchers, said in an interview. Especially concerning, he said, was the copying of data, which happened during repairs of two devices, one belonging to a male customer and the other to a female customer. “We thought they would just look at [the data] at most.”


Reference: https://ift.tt/YVZy4m7

How Covid Myths Spread on Far-Right Social Media Platforms


The Biden administration has pushed social media giants like Facebook to curb Covid misinformation. But it is thriving on fringe platforms like Gab, a hub for extremist content.

Monday, November 21, 2022

Inside Gary Gensler’s SEC Campaign to Rein In the Crypto Industry


Gary Gensler, the chair of the S.E.C., is at the center of a reckoning over the future of cryptocurrency after the implosion of FTX.

Saturday, November 19, 2022

Elon Musk Reinstates Trump’s Twitter Account


Mr. Musk, who had run a poll on Twitter about whether to bring back the former president to the service, said, “The people have spoken.”

The EV Transition Explained: Battery Challenges




“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”

Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times those of an ICE vehicle, which can increase the average weight of an EV by 340 kg (750 pounds). For something like the Ford Lightning, the added weight can be more than twice that amount.

EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtually clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enablers of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE powertrain not only reduces the number of components a vehicle requires but also frees up space for increased passenger comfort and storage.

The effects of Simon’s “profound changes” are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW Chief Financial Officer Arno Antlitz.


These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending $5.6 billion to produce F-Series EVs and batteries in Stanton, Tennessee, for example, while GM is spending $2 billion to produce its new Cadillac LYRIQ EVs in Spring Hill, Tennessee. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.”

Furthermore, adds automotive industry analyst Kristin Dziczek, there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.

Batteries and the supply chain challenge

Another critical question is whether all the planned battery plant output will support expected EV production demands. For instance, the US will require 8 million EV batteries annually by 2030 if it is to meet its target of EVs making up half of all new-vehicle sales, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”
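The 8-million figure is straightforward arithmetic on U.S. new-vehicle volume; the annual-sales number in the sketch below is an assumed round figure of about 16 million light vehicles, not a value taken from this article.

```python
# Hedged arithmetic behind the 2030 battery-demand figure. The annual U.S.
# new-vehicle sales number is an assumed round figure, not from this article.
assumed_annual_us_sales = 16_000_000  # light vehicles per year (assumption)
ev_share_target_2030 = 0.50           # half of all new-vehicle sales

battery_packs_needed = assumed_annual_us_sales * ev_share_target_2030
print(f"EV battery packs needed per year by 2030: {battery_packs_needed:,.0f}")
```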

This mismatch worries automakers. GM, Ford, Tesla and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive Chief Executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90% to 95% of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”

The competition for securing raw materials, along with the increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning by $6,000 to $8,500, and Ford CEO Jim Farley bluntly states that, in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes, than an increase in prices.”

Stiff competition for engineering talent


One critical area of resource competition is the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they are not in control of the software, they are not in control of their product.

Volvo CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder, and will alter the automotive industry more, than switching from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyber-physical systems engineering,” states Clemson University’s Chris Paredis.

Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding that these suppliers absorb more cost cuts because EVs are proving so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, but they also want deeper internal expertise in critical EV supply-chain components, especially batteries.

Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterwards, major battery makers are already highlighting their expected skill shortages.


The underlying reason for the worry is that supplying sufficient raw materials to existing and planned battery plants, as well as to the manufacturers of other renewable-energy systems and military systems competing for the same materials, involves several complications. Among them is the need for more mines to provide the required metals, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment aimed at EVs and battery plants. It can take five or more years to get a lithium mine up and going, and operations can start only after it has secured the required permits, a process that can itself take years.

Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the US, which, according to a Biden Administration special supply chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states that the US “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the US, the Eagle mine in Michigan, is sent to Canada for smelting.


One possible solution is to move away from lithium-ion and nickel-metal hydride batteries to other battery chemistries, such as lithium-iron phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues.

Another solution may be recycling both EV batteries and the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25% for lithium, 35% for cobalt and nickel and 55% for copper,” according to a report (pdf) by the University of Technology Sydney’s Institute for Sustainable Futures.



While investments in EV-battery-recycling facilities have started, there is a looming question of whether there will be enough battery-factory scrap and other lithium-ion battery waste to keep them operational while they wait for sufficient numbers of end-of-life batteries to make them profitable. Lithium-ion battery pack recycling is very time-consuming and expensive, often making it cheaper to mine lithium than to recycle it. And the low- or no-cobalt lithium batteries that many automakers are moving toward may be unprofitable to recycle at all.

An additional concern is that EV batteries, once no longer useful for propelling the EV, still have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale, rather than recycle them, remains to be seen.

As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it: “There is no recycling, and no EV recycling industry, if there is no economic basis for one.”

In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.

Reference: https://ift.tt/sVjOnle

Friday, November 18, 2022

New Meta AI demo writes racist and inaccurate scientific literature, gets pulled


An AI-generated illustration of robots making science. (credit: Ars Technica)

On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to "store, combine and reason about scientific knowledge." While the model was intended to accelerate the writing of scientific literature, adversarial users running tests found it could also generate realistic nonsense. After several days of ethical criticism, Meta took the demo offline, reports MIT Technology Review.

Large language models (LLMs), such as OpenAI's GPT-3, learn to write text by studying millions of examples and understanding the statistical relationships between words. As a result, they can author convincing-sounding documents, but those works can also be riddled with falsehoods and potentially harmful stereotypes. Some critics call LLMs "stochastic parrots" for their ability to convincingly spit out text without understanding its meaning.

Enter Galactica, an LLM aimed at writing scientific literature. Its authors trained Galactica on "a large and curated corpus of humanity’s scientific knowledge," including over 48 million papers, textbooks and lecture notes, scientific websites, and encyclopedias. According to Galactica's paper, Meta AI researchers believed this purported high-quality data would lead to high-quality output.


Reference: https://ift.tt/ocJgI7i

NASA Blazes a Path Back to the Moon With Artemis I Rocket Launch


The uncrewed flight of the giant Space Launch System on Wednesday began a new era of spaceflight amid a debate over how to finance rocket development.

What Employees Does Twitter Need, Anyway?


The many departures have set off a wave of hand-wringing about whether the site can continue to operate well.

Elizabeth Holmes Is Sentenced to More Than 11 Years for Theranos Fraud


Ms. Holmes was convicted in January of four counts of wire fraud for deceiving investors with claims about her blood testing start-up Theranos.

Video Friday: Little Robot, Big Stairs




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs, and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower-limb version of the platform shows its capability of traversing uneven terrain without an active or passive ankle joint. This underactuated nature of the robot system is well addressed by our locomotion control framework, which also provides a new perspective on the leg design of bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup "dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment." Hot off the ANA Avatar XPRIZE competition, they're doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: it seems like fiction, and yet researchers from the Vrije Universiteit Brussel and imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in Action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, the system gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI) which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot learns highly dynamic skills (e.g. backflips) from only hand-held human demonstrations.

WASABI!

[ WASABI ]

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn't clear.

[ Sanctuary ]

I don't know what's going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the Moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the Moon, as well as to have the rover survive an entire night on the Moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning barcodes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and oddly shaped, small health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics' Spot® robot, the ExynPak™ powered by ExynAI™ and the Trimble® X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity, enhanced sensors including an improved color sensor, and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I'm glad Starship is taking this seriously, although it's hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.

[ Paper ]

Drone racing world champion Thomas Bitmatta designed an indoor drone racing track for ETH Zurich's autonomous high speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec's paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn has been tasked to explain this concept to 5 different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, Chief Science Officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans, to its manifestations and prospects in the tools we create and use.

[ UMich ]

Reference: https://ift.tt/hcExGt4

New "E-nose" Samples Odors 60 Times Per Second

Odors are all around us, and often disperse fast—in hazardous situations like wildfires, for example, wind conditions quickly carry any s...