Sunday, June 30, 2024

Persona AI Brings Calm Experience to a Hectic Industry




It may at times seem like there are as many humanoid robotics companies out there as the industry could possibly sustain, but the potential for useful and reliable and affordable humanoids is so huge that there’s plenty of room for any company that can actually get them to work. Joining the dozen or so companies already on this quest is Persona AI, founded last month by Nic Radford and Jerry Pratt, two people who know better than just about anyone what it takes to make a successful robotics company, although they also know enough to be wary of getting into commercial humanoids.


Persona AI may not be the first humanoid robotics startup, but its founders have some serious experience in the space:

Nic Radford led the team that developed NASA’s Valkyrie humanoid robot, before founding Houston Mechatronics (now Nauticus Robotics), which introduced a transforming underwater robot in 2019. He also founded Jacobi Motors, which is commercializing variable flux electric motors.

Jerry Pratt worked on walking robots for 20 years at the Institute for Human and Machine Cognition (IHMC) in Pensacola, Florida. He co-founded Boardwalk Robotics in 2017, and has spent the last two years as CTO of multi-billion-dollar humanoid startup Figure.

“It took me a long time to warm up to this idea,” Nic Radford tells us. “After I left Nauticus in January, I didn’t want anything to do with humanoids, especially underwater humanoids, and I didn’t even want to hear the word ‘robot.’ But things are changing so quickly, and I got excited and called Jerry and I’m like, this is actually very possible.” Jerry Pratt, who recently left Figure due primarily to the two-body problem, seems to be coming from a similar place: “There’s a lot of bashing your head against the wall in robotics, and persistence is so important. Nic and I have both gone through pessimism phases with our robots over the years. We’re a bit more optimistic about the commercial aspects now, but we want to be pragmatic and realistic about things too.”

Behind all of the recent humanoid hype lies the very, very difficult problem of making a highly technical piece of hardware and software compete effectively with humans in the labor market. But that’s also a very, very big opportunity—big enough that Persona doesn’t have to be the first company in this space, or the best funded, or the highest profile. They simply have to succeed, but of course sustainable commercial success with any robot (and bipedal robots in particular) is anything but simple. Step one will be building a founding team across two locations: Houston and Pensacola, Fla. But Radford says that the response so far to just a couple of LinkedIn posts about Persona has been “tremendous.” And with a substantial seed investment in the works, Persona will have more than just a vision to attract top talent.

For more details about Persona, we spoke with Persona AI co-founders Nic Radford and Jerry Pratt.

Why start this company, why now, and why you?

Nic Radford

Nic Radford: The idea for this started a long time ago. Jerry and I have been working together off and on for quite a while, being in this field and sharing a love for what the humanoid potential is while at the same time being frustrated by where humanoids are at. As far back as probably 2008, we were thinking about starting a humanoids company, but for one reason or another the viability just wasn’t there. We were both recently searching for our next venture and we couldn’t imagine sitting this out completely, so we’re finally going to explore it, although we know better than anyone that robots are really hard. They’re not that hard to build; but they’re hard to make useful and make money with, and the challenge for us is whether we can build a viable business with Persona: can we build a business that uses robots and makes money? That’s our singular focus. We’re pretty sure that this is likely the best time in history to execute on that potential.

Jerry Pratt: I’ve been interested in commercializing humanoids for quite a while—thinking about it, and giving it a go here and there, but until recently it has always been the wrong time from both a commercial point of view and a technological readiness point of view. You can think back to the DARPA Robotics Challenge days when we had to wait about 20 seconds to get a good lidar scan and process it, which made it really challenging to do things autonomously. But we’ve gotten much, much better at perception, and now, we can get a whole perception pipeline to run at the framerate of our sensors. That’s probably the main enabling technology that’s happened over the last 10 years.

From the commercial point of view, now that we’re showing that this stuff’s feasible, there’s been a lot more pull from the industry side. It’s like we’re at the next stage of the Industrial Revolution, where the harder problems that weren’t roboticized from the 60s until now can now be. And so, there’s really good opportunities in a lot of different use cases.

A bunch of companies have started within the last few years, and several were even earlier than that. Are you concerned that you’re too late?

Radford: The concern is that we’re still too early! There might only be one Figure out there that raises a billion dollars, but I don’t think that’s going to be the case. There’s going to be multiple winners here, and if the market is as large as people claim it is, you could see quite a diversification of classes of commercial humanoid robots.

Jerry Pratt

Pratt: We definitely have some catching up to do but we should be able to do that pretty quickly, and I’d say most people really aren’t that far from the starting line at this point. There’s still a lot to do, but all the technology is here now—we know what it takes to put together a really good team and to build robots. We’re also going to do what we can to increase speed, like by starting with a surrogate robot from someone else to get the autonomy team going while building our own robot in parallel.

Radford: I also believe that our capital structure is a big deal. We’re taking an anti-stealth approach, and we want to bring everyone along with us as our company grows and give out a significant chunk of the company to early joiners. It was an anxiety of ours that we would be perceived as a me-too and that nobody was going to care, but it’s been the exact opposite with a compelling response from both investors and early potential team members.

So your approach here is not to look at all of these other humanoid robotics companies and try and do something they’re not, but instead to pursue similar goals in a similar way in a market where there’s room for all?

Pratt: All robotics companies, and AI companies in general, are standing on the shoulders of giants. These are the thousands of robotics and AI researchers that have been collectively bashing their heads against the myriad problems for decades—some of the first humanoids were walking at Waseda University in the late 1960s. While there are some secret sauces that we might bring to the table, it is really the combined efforts of the research community that now enables commercialization.

So if you’re at a point where you need something new to be invented in order to get to applications, then you’re in trouble, because with invention you never know how long it’s going to take. What is available today and now, the technology that’s been developed by various communities over the last 50+ years—we all have what we need for the first three applications that are widely mentioned: warehousing, manufacturing, and logistics. The big question is, what’s the fourth application? And the fifth and the sixth? And if you can start detecting those and planning for them, you can get a leg up on everybody else.

The difficulty is in the execution and integration. It’s a ten thousand—no, that’s probably too small—it’s a hundred thousand piece puzzle where you gotta get each piece right, and occasionally you lose some pieces on the floor that you just can’t find. So you need a broad team that has expertise in like 30 different disciplines to try to solve the challenge of an end-to-end labor solution with humanoid robots.

Radford: The idea is like one percent of starting a company. The rest of it, and why companies fail, is in the execution. Things like, not understanding the market and the product-market fit, or not understanding how to run the company, the dimensions of the actual business. I believe we’re different because with our backgrounds and our experience we bring a very strong view on execution, and that is our focus on day one. There’s enough interest in the VC community that we can fund this company with a singular focus on commercializing humanoids for a couple different verticals.

But listen, we got some novel ideas in actuation and other tricks up our sleeve that might be very compelling for this, but we don’t want to emphasize that aspect. I don’t think Persona’s ultimate success comes just from the tech component. I think it comes mostly from ‘do we understand the customer, the market needs, the business model, and can we avoid the mistakes of the past?’

How is that going to change things about the way that you run Persona?

Radford: I started a company [Houston Mechatronics] with a bunch of research engineers. They don’t make the best product managers. More broadly, if you’re staffing all your disciplines with roboticists and engineers, you’ll learn that it may not be the most efficient way to bring something to market. Yes, we need those skills. They are essential. But there’s so many other aspects of a business that get overlooked when you’re fundamentally a research lab trying to commercialize a robot. I’ve been there, I’ve done that, and I’m not interested in making that mistake again.

Pratt: It’s important to get a really good product team that’s working with a customer from day one to have customer needs drive all the engineering. The other approach is ‘build it and they will come’ but then maybe you don’t build the right thing. Of course, we want to build multi-purpose robots, and we’re steering clear of saying ‘general purpose’ at this point. We don’t want to overfit to any one application, but if we can get to a dozen use cases, two or three per customer site, then we’ve got something.

There still seems to be a couple of unsolved technical challenges with humanoids, including hands, batteries, and safety. How will Persona tackle those things?

Pratt: Hands are such a hard thing—getting a hand that has the required degrees of freedom and is robust enough that if you accidentally hit it against your table, you’re not just going to break all your fingers. But we’ve seen robotic hand companies popping up now that are showing videos of hitting their hands with a hammer, so I’m hopeful.

Getting one to two hours of battery life is relatively achievable. Pushing up towards five hours is super hard. But batteries can now be charged in 20 minutes or so, as long as you’re going from 20 percent to 80 percent. So we’re going to need a cadence where robots are swapping in and out and charging as they go. And batteries will keep getting better.

Radford: We do have a focus on safety. It was paramount at NASA, and when we were working on Robonaut, it led to a lot of morphological considerations with padding. In fact, the first concepts and images we have of our robot illustrate extensive padding, but we have to do that carefully, because at the end of the day it’s mass and it’s inertia.

What does the near future look like for you?

Pratt: Building the team is really important—getting those first 10 to 20 people over the next few months. Then we’ll want to get some hardware and get going really quickly, maybe buying a couple of robot arms or something to get our behavior and learning pipelines going while in parallel starting our own robot design. From our experience, after getting a good team together and starting from a clean sheet, a new robot takes about a year to design and build. And then during that period we’ll be securing a customer or two or three.

Radford: We’re also working hard on some very high profile partnerships that could influence our early thinking dramatically. Like Jerry said earlier, it’s a massive 100,000 piece puzzle, and we’re working on the fundamentals: the people, the cash, and the customers.

Reference: https://ift.tt/qUwaVBA

Saturday, June 29, 2024

This Wearable Computer Made a Fashion Statement




In 1993, well before Google Glass debuted, the artist Lisa Krohn designed a prototype wearable computer that looked like no other. The Cyberdesk was an experiment in augmented reality. At a time when computers were mostly beige and boxy, Krohn envisioned a pliable, high-tech garment that fused fashion with function.

Krohn studied art and architectural history at Brown University and the Rhode Island School of Design (RISD) before completing an MFA at Cranbrook Academy of Art in Bloomfield Hills, Mich., in 1988. With the Cyberdesk, she tapped into a cultural moment in which artists, techies, writers, and others were celebrating the convergence of humans and machines and eagerly anticipating our cyborg future.

What is Lisa Krohn’s Cyberdesk?

Closeup photo of a yellow curved piece of plastic extending in front of a mannequin’s eye. Although a working prototype of the Cyberdesk was never built, the yellow eyepiece suggested a retinal display. Lisa Krohn and Christopher Myers

The Cyberdesk, made of resin, plastic, metal, and glass, was meant to be worn like a necklace. The four circles along the breastbone are a four-key keyboard with a large trackball at the top center; the user would use the keyboard and trackball to make selections from menus of options. A small microphone lies against the throat, and an earpiece hooks into the left ear. Krohn imagined the yellow tube in front of the right eye as a retinal scan display that would project a laser beam directly onto the back of the eye, creating a screen centered in the user’s field of vision. In the back, there is a port suggestive of some type of neural link. The Cyberdesk was intended to run on energy harvested from the body’s movement and the sun.

Photo of the back of a mannequin’s head showing a curving translucent neck ornament that extends along the top of the spine and over the ears. A port on the back of the Cyberdesk was intended as a neural link. Lisa Krohn and Christopher Myers

Krohn, along with Chris Myers, a student at the Art Center College of Design, made two models of the Cyberdesk, but it was never turned into a working prototype. The underlying technology wasn’t there yet, although there were engineers who were experimenting with similar ideas. For example, Krohn knew about work on virtual retinal displays at the University of Washington’s Human Interface Technology Laboratory, but she didn’t pursue a collaboration.

And so Krohn’s design existed as “strategic foresight, speculative technology, predictive design, or design fiction,” she told me in a recent email. Krohn imagined a possible future, one in which, as she notes on her company’s website, “person and machine merge into one seamless collaborative super-being!” In other words, a cyborg.

The Cyberdesk wasn’t the only piece of cyborg gear that Krohn designed. In 1988, before the age of smartphones and Web searches, she imagined a wrist computer that combined satellite navigation, a phone, a wristwatch, and a regional information guide. Made of a flexible plastic, it could be folded up and worn as a decorative cuff when not being used as a computer.

Two photos of a translucent wristband with embedded electronics. Lisa Krohn also designed a flexible wrist computer that could be folded up when not in use. Lisa Krohn

Krohn designed the wrist computer prototype before “wearable” became a common way to refer to a portable device that incorporates computer technology. Futurist Paul Saffo is credited with first using the term “wearable computer” in an article in InfoWorld in 1991. Saffo predicted the first wearables would be worn on the belts of maintenance workers and then be extended to deskless, information-intensive tasks, such as conducting store inventories. He also suggested a game console consisting of a tiny display integrated into sunglasses and paired with a power glove. Nowhere did he consider technology as a fashion accessory, and I suspect he wasn’t even considering women when he made his predictions.

Meanwhile, Steve Mann was working on ideas for mediated vision as a graduate student at MIT. Mann was first inspired to build a better welding mask that would protect the welder’s eyes from the bright electric arc while still allowing a clear view. This led him to think about how to use video cameras, displays, and computers to modify vision in real time. Both Krohn and Mann ran into similar real-world challenges: cellphones, the Internet, civilian GPS, and online databases were still in their infancy, and the hardware was heavy and clunky. While Mann built boxy functional prototypes that he demoed on himself, Krohn imagined more speculative technology.

Photo of an electronic device consisting of a landline phone handset connected to a booklike object with several hard plastic pages. Each “page” of Krohn’s phonebook represents a separate function—dial phone, answering machine, and printer. Lisa Krohn, Sigmar Willnauer, and Tony Guido

Krohn also worked on utilitarian business technologies. In 1987, she designed a prototype for the phonebook, an integrated phone with answering machine and printer. Each “page” of the phonebook had its own function, and an electric switch automatically changed to that function as the page was flipped, with instructions printed on the page. That intuitive design was in sharp contrast to most answering machines of the time, which were clunky and not particularly easy to use.

The phonebook was an example of “product semantics,” which holds that a product’s design should help the user understand the product’s function and meaning. At Cranbrook, Krohn studied under Michael and Katherine McCoy, who embraced that theory of design. Krohn and Michael McCoy wrote about that aspect of the phonebook in their 1989 essay “Beyond Beige: Interpretive Design for the Post-Industrial Age”: “The casting of [a] personal electronic device into the mold of [a] personal agenda is an attempt to make a product reach out to its users by informing them about how it operates, where it resides, and how it fits into their lives.”

Lisa Krohn championed cyberfeminism and cyborgs

Photo of a smiling white woman wearing a suit. Lisa Krohn designed the Cyberdesk in 1993, at a time when wearable computers existed mainly in science fiction. Dietmar Quistorf

The Cyberdesk as well as the wrist computer were early examples of designs influenced by cyberfeminism. This feminist movement emerged in the early 1990s as a counter to the dominance of men in computing, gaming, and various Internet spaces. It built on feminist science fiction, such as the writings of Octavia Butler, Vonda McIntyre, and Joanna Russ, as well as the work of hackers, coders, and media artists. Different threads of cyberfeminism developed around the world, especially in Australia, Germany, and the United States. While mainstream depictions of cyborgs continued to tilt masculine, cyberfeminists challenged the patriarchy by experimenting with genderless ideas of cyborgs and recombinants that melded machines, plants, humans, and animals.

The feminist theorist and historian of technology Donna Haraway kindled this cyborgian drift through her 1985 essay, “A Manifesto for Cyborgs,” published in the Socialist Review. She argued that as the end of the 20th century approached, we were all becoming cyborgs due to the breakdown of lines dividing humans and machines. Her cyborg theory hinged on communication, and she saw cyborgs as a potential solution that allowed for a fluidity of both language and identity. The essay is considered one of the foundational texts in cyberfeminism, and it was republished in Haraway’s 1990 book, Simians, Cyborgs, and Women: The Reinvention of Nature.

Krohn and McCoy’s 1989 essay also highlighted communication as a central problem in modern design. Mainstream consumer electronics, they argued, had reached a monotonous uniformity of design that favored manufacturing efficiency over conveying the product’s intended function.

Both Haraway and Krohn saw opportunities for technology, especially microelectronics, to challenge the restrictions of the past. By embracing the cyborg, both women found new ways to overcome the limits of language and communication and to forge new directions in feminism.

Cyberdesk 2.0

I had the privilege of meeting Lisa Krohn when she participated in a roundtable on the Cyberdesk at the 2023 annual meeting of the Society for the History of Technology. The assembled group, which included curators and conservators from the Cooper Hewitt, Smithsonian Design Museum and the San Francisco Museum of Modern Art (each of which has a Cyberdesk prototype in its collection), considered a possible Cyberdesk version 2.0. What would be different if Krohn were designing it today?

Photo of two women with shaved heads wearing sunglasses that have a retinal display and a neural link above one ear. In 2023, Krohn reimagined the Cyberdesk. It now incorporates technology that hadn’t been available 30 years earlier, such as sensors to monitor brainwaves, hydration, and stress levels. Duvit Mark Kakunegoda

The group focused their discussion around the idea of “design futuring,” a concept promoted by Tony Fry in his 2009 book of the same name. Design futuring is a way to actively shape the future, rather than passively trying to predict it and then reacting after the fact. Fry describes how design futuring could be used to promote sustainability.

In the case of the Cyberdesk 2.0, a focus on sustainability might lead to a different choice of materials. The original resin provided a malleable material that could mold to the contours of the body. But its long-term stability is terrible. Despite best practices in conservation, the Cyberdesk will likely turn into a goopy mess in the not-too-distant future. (In a previous column, I wrote about a transistorized music box owned by John Bardeen that suffers from the same basic problem of decaying materials, which in curatorial circles is known as “inherent vice.”)

The panelists considered alternatives like biomaterials, and they discussed the entire product life cycle, the challenges of electronic waste, and the mining of rare earth elements. They wondered how the design process and the global supply chain might change if such factors were considered from the start, rather than as problems to be solved later.

These are just a few of the ideas that percolated while historians, artists, curators, and conservators considered the Cyberdesk. Now imagine if a few engineers were also present. To me, that would have been a really worthwhile discussion. Not only can art unlock creative design and push innovations in new directions, it also allows us to reflect on technology in daily life. And artists can learn from engineers about new materials, technologies, and possibilities. Working together, technology and design no longer need the modifiers speculative and predictive. Engineers and artists can create the future reality.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the July 2024 print issue as “The Wearable Computer as Bling.”


References


I first learned about Lisa Krohn’s Cyberdesk and design theory at the Society for the History of Technology’s conference in Los Angeles in 2023, during the session “Revisiting Lisa Krohn’s Cyberdesk (1993), a cyberfeminist concept model.”

Both the Cooper Hewitt, Smithsonian Design Museum and the San Francisco Museum of Modern Art have featured their respective Cyberdesks in exhibits and online articles. Note that the difference in the colors—SFMOMA’s is white, while Cooper Hewitt’s is brown—is due to the instability of the plastics and resin, as well as variations in the materials.

As I considered Krohn’s cyborg designs, I couldn’t help but recall Donna Haraway’s classic essay “A Cyborg Manifesto,” a foundational text in cyberfeminism. Forty years on, we are more cyborgian than Haraway originally posited. Her challenges to traditional notions of identity still resonate with today’s nuanced discussions of gender. Addressing algorithmic bias and generative AI training may be a new frontier for cyberfeminism.

Reference: https://ift.tt/K6SqvOy

Inside a violent gang’s ruthless crypto-stealing home invasion spree


Photo illustration of cyber thieves stealing bitcoin on a laptop screen. (credit: Malte Mueller / Getty)

Cryptocurrency has always made a ripe target for theft—and not just hacking, but the old-fashioned, up-close-and-personal kind, too. Given that it can be irreversibly transferred in seconds with little more than a password, it's perhaps no surprise that thieves have occasionally sought to steal crypto in home-invasion burglaries and even kidnappings. But rarely do those thieves leave a trail of violence in their wake as disturbing as that of one recent, ruthless, and particularly prolific gang of crypto extortionists.

The United States Justice Department earlier this week announced the conviction of Remy Ra St. Felix, a 24-year-old Florida man who led a group of men behind a violent crime spree designed to compel victims to hand over access to their cryptocurrency savings. That announcement and the criminal complaint laying out charges against St. Felix focused largely on a single theft of cryptocurrency from an elderly North Carolina couple, whose home St. Felix and one of his accomplices broke into before physically assaulting the two victims—both in their seventies—and forcing them to transfer more than $150,000 in bitcoin and ether to the thieves' crypto wallets.

In fact, that six-figure sum appears to have been the gang’s only confirmed haul from its physical crypto thefts—although the burglars and their associates made millions in total, mostly through more traditional crypto hacking as well as stealing other assets. A deeper look into court documents from the St. Felix case, however, reveals that the relatively small profit St. Felix’s gang made from its burglaries doesn’t capture the full scope of the harm they inflicted: In total, those court filings and DOJ officials describe how more than a dozen convicted and alleged members of the crypto-focused gang broke into the homes of 11 victims, carrying out a brutal spree of armed robberies, death threats, beatings, torture sessions, and even one kidnapping in a campaign that spanned four US states.


Reference: https://ift.tt/JDtgGAM

Friday, June 28, 2024

Why Not Give Robots Foot-Eyes?




This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

One of the (many) great things about robots is that they don’t have to be constrained by how their biological counterparts do things. If you have a particular problem your robot needs to solve, you can get creative with extra sensors: many quadrupeds have side cameras and butt cameras for obstacle avoidance, and humanoids sometimes have chest cameras and knee cameras to help with navigation along with wrist cameras for manipulation. But how far can you take this? I have no idea, but it seems like we haven’t gotten to the end of things yet because now there’s a quadruped with cameras on the bottom of its feet.


Sensorized feet are not a new idea; it’s pretty common for quadrupedal robots to have some kind of foot-mounted force sensor to detect ground contact. Putting an actual camera down there is fairly novel, though, because it’s not at all obvious how you’d go about doing it. And the way that roboticists from the Southern University of Science and Technology in Shenzhen went about doing it is, indeed, not at all obvious.

Go1’s snazzy feetsies have soles made of transparent acrylic, with a slightly flexible plastic structure supporting a 60-millimeter gap up to each camera (640x480 resolution at 120 frames per second), plus a quartet of LEDs to provide illumination. While it’s complicated-looking, at 120 grams it doesn’t weigh all that much, and it costs only about $50 per foot ($42 of which is the camera). The whole thing is sealed to keep out dirt and water.

So why bother with all of this (presumably somewhat fragile) complexity? As we ask quadruped robots to do more useful things in more challenging environments, having more information about what exactly they’re stepping on and how their feet are interacting with the ground is going to be super helpful. Robots that rely only on proprioceptive sensing (sensing self-movement) are great and all, but when you start trying to move over complex surfaces like sand, it can be really helpful to have vision that explicitly shows how your robot is interacting with the surface that it’s stepping on. Preliminary results showed that Foot Vision enabled the Go1 using it to perceive the flow of sand or soil around its foot as it takes a step, which can be used to estimate slippage, the bane of ground-contacting robots.
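The paper is about hardware, but the slip-detection idea is easy to sketch in software: watch how ground texture moves across the foot camera during stance. Below is a minimal illustrative example using dense optical flow; the camera index, threshold, and units are assumptions made for the sketch, not details from the paper.

```python
# Illustrative sketch (not the authors' code): estimating foot slippage
# from a foot-mounted camera using dense optical flow.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # hypothetical device index for the foot camera
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera frames")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames; during stance,
    # coherent nonzero flow of the ground texture suggests the foot
    # (and the sand or soil beneath it) is sliding.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_flow = np.linalg.norm(flow, axis=2).mean()  # pixels per frame
    if mean_flow > 1.0:  # made-up tuning threshold
        print(f"possible slip: mean flow {mean_flow:.2f} px/frame")
    prev_gray = gray
```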

The researchers acknowledge that their hardware could use a bit of robustifying, and they also want to try adding some tread patterns around the circumference of the foot, since that plexiglass window is pretty slippery. The overall idea is to make Foot Vision as useful as the much more common gripper-integrated vision systems for robotic manipulation, helping legged robots make better decisions about how to get where they need to go.

Foot Vision: A Vision-Based Multi-Functional Sensorized Foot for Quadruped Robots, by Guowei Shi, Chen Yao, Xin Liu, Yuntian Zhao, Zheng Zhu, and Zhenzhong Jia from Southern University of Science and Technology in Shenzhen, is accepted to the July 2024 issue of IEEE Robotics and Automation Letters.

Reference: https://ift.tt/Mc4syxY

Scaling Compute To Satiate AI




Fifty years ago, DRAM inventor and IEEE Medal of Honor recipient Robert Dennard created what essentially became the semiconductor industry’s path to perpetually increasing transistor density and chip performance. That path became known as Dennard scaling, and it helped codify Gordon Moore’s postulate about device dimensions shrinking by half every 18 to 24 months. For decades it compelled engineers to push the physical limits of semiconductor devices.
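As a quick refresher (my summary of the textbook result, not something spelled out in this essay): under classic constant-field Dennard scaling, shrinking a transistor’s linear dimensions and supply voltage by a factor κ > 1 yields denser, faster devices while power density stays flat.

```latex
% Constant-field (Dennard) scaling: dimensions and voltage scale by 1/kappa
\begin{align*}
\text{device area} &\propto 1/\kappa^{2} \\
\text{gate capacitance } C &\propto 1/\kappa \\
\text{gate delay } \tau \sim CV/I &\propto 1/\kappa
  \quad\Rightarrow\quad f \propto \kappa \\
\text{power per device } P \sim CV^{2}f &\propto 1/\kappa^{2} \\
\text{power density } P/\text{area} &\propto \text{constant}
\end{align*}
```

When supply voltages could no longer be lowered with each generation (leakage current got in the way), that last relation broke down, and with it Dennard scaling.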

But in the mid-2000s, when Dennard scaling began running out of juice, chipmakers had to turn to exotic solutions like extreme ultraviolet (EUV) lithography systems to try to keep Moore’s Law on pace. On a visit to GlobalFoundries in Malta, N.Y., in 2017 to see the company install its first EUV system, senior editor Samuel K. Moore asked one expert what the fab would need to achieve even smaller device dimensions. “We’d probably have to build a particle accelerator under the parking lot,” the man joked. The idea seemed so fantastic that it stuck with Moore.

So when Tokyo-based tech journalist John Boyd recently pitched a story about an effort to harness a linear accelerator as an EUV light source, Moore was excited. Boyd’s visit to the High Energy Accelerator Research Organization, known as KEK, in Tsukuba, Japan, became the basis for “Is the Future of Moore’s Law in a Particle Accelerator?” As he reports, KEK’s system generates light by “boosting electrons to relativistic speeds and then deviating their motion in a particular way.”

So far, KEK researchers have managed to use a 17-megaelectron-volt electron beam to produce bursts of 20-micrometer infrared light, a long way from the current industry standard of 13.5 nanometers. But the KEK team is optimistic about its technology’s prospects.

While the industry’s ability to affordably make smaller devices has certainly slowed, Moore believes that scaling has a few tricks up its sleeve yet. In addition to brighter light sources like the one KEK is working on, future complementary field-effect transistors (CFETs) will build two transistors in the space of one.

In the shorter term, Moore says stacking chips is the most effective way to keep increasing the amount of logic and memory you can throw at a problem.

“There are always going to be functions in a CPU or GPU that don’t scale as well as core processor logic. Increasingly, it doesn’t make sense to try to keep building all these parts using the core logic’s bleeding-edge chip processes,” Moore says. “It makes more sense to build each part with its best, most economical process, and put them back together as a stack, or at least in the same package.”

To meet the demands of the booming AI sector, makers of GPUs will need to stack up. When former Taiwan Semiconductor Manufacturing Co. chairman Mark Liu and TSMC chief scientist H.-S. Philip Wong wanted to get their message out about the future of CMOS, they approached Moore. The result is “The Path to a 1-Trillion-Transistor GPU.” In addition to Wong’s corporate role, he’s also an academic. One of the worries he’s repeatedly expressed to Moore is that AI and software generally are pulling talent away from semiconductor engineering.

“I believe Wong and Liu want young, technically minded people to understand the importance of keeping semiconductor advances going and to make them want to be part of that effort,” Moore says. “They want to show that semiconductor engineering has a career-long future despite much talk of the death of Moore’s Law.”

Reference: https://ift.tt/5COdNuZ

Researchers craft smiling robot face from living human skin cells


A movable robotic face covered with living human skin cells. (credit: Takeuchi et al.)

In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.

The researchers note that using living skin tissue as a robot covering has benefits, as it's flexible enough to convey emotions and can potentially repair itself. "As the role of robots continues to evolve, the materials used to cover social robots need to exhibit lifelike functions, such as self-healing," wrote the researchers in the study.

Shoji Takeuchi, Michio Kawai, Minghao Nie, and Haruka Oda authored the study, titled "Perforation-type anchors inspired by skin ligament for robotic face covered with living skin," which is due for July publication in Cell Reports Physical Science. We learned of the study from a report published earlier this week by New Scientist.


Reference: https://ift.tt/FRaN1Ae

Thursday, June 27, 2024

Mac users served info-stealer malware through Google ads


(credit: Getty Images)

Mac malware that steals passwords, cryptocurrency wallets, and other sensitive data has been spotted circulating through Google ads, making it at least the second time in as many months the widely used ad platform has been abused to infect web surfers.

The latest ads, found by security firm Malwarebytes on Monday, promote Mac versions of Arc, an unconventional browser that became generally available for the macOS platform last July. The listing promises users a “calmer, more personal” experience that includes less clutter and distractions, a marketing message that mimics the one communicated by The Browser Company, the start-up maker of Arc.

When verified isn’t verified

According to Malwarebytes, clicking on the ads redirected Web surfers to arc-download[.]com, a completely fake Arc browser page that looks nearly identical to the real one.


Reference: https://ift.tt/rFEU4Yv

OpenAI Builds AI to Critique AI




One of the biggest problems with the large language models that power chatbots like ChatGPT is that you never know when you can trust them. They can generate clear and cogent prose in response to any question, and much of the information they provide is accurate and useful. But they also hallucinate—in less polite terms, they make stuff up—and those hallucinations are presented in the same clear and cogent prose, leaving it up to the human user to detect the errors. They’re also sycophantic, trying to tell users what they want to hear. You can test this by asking ChatGPT to describe things that never happened (for example: “describe the Sesame Street episode with Elon Musk,” or “tell me about the zebra in the novel Middlemarch”) and checking out its utterly plausible responses.

OpenAI’s latest small step toward addressing this issue comes in the form of an upstream tool that would help the humans training the model guide it toward truth and accuracy. Today, the company put out a blog post and a preprint paper describing the effort. This type of research falls into the category of “alignment” work, as researchers are trying to make the goals of AI systems align with those of humans.

The new work focuses on reinforcement learning from human feedback (RLHF), a technique that has become hugely important for taking a basic language model and fine-tuning it, making it suitable for public release. With RLHF, human trainers evaluate a variety of outputs from a language model, all generated in response to the same question, and indicate which response is best. When done at scale, this technique has helped create models that are more accurate, less racist, more polite, less inclined to dish out a recipe for a bioweapon, and so on.
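To make that concrete, here is a minimal sketch of RLHF’s core ingredient, a reward model trained on pairwise human preferences. All of the names and dimensions are illustrative; this is emphatically not OpenAI’s implementation.

```python
# Sketch: train a reward model so that the response human trainers
# preferred ("chosen") scores higher than the one they rejected.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, embed_dim=768):
        super().__init__()
        # Stand-in encoder; in practice the score comes from the
        # language model's representation of the prompt + response.
        self.encoder = nn.Linear(embed_dim, embed_dim)
        self.score_head = nn.Linear(embed_dim, 1)

    def forward(self, features):
        return self.score_head(torch.tanh(self.encoder(features))).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fake batch: feature vectors for the chosen and rejected responses
# to the same eight prompts.
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)

# Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected),
# minimized when the model's scores agree with the human ranking.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The chatbot is then fine-tuned against this learned reward, which is why the quality of the human judgments matters so much.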

Can an AI catch an AI in a lie?

The problem with RLHF, explains OpenAI researcher Nat McAleese, is that “as models get smarter and smarter, that job gets harder and harder.” As LLMs generate ever more sophisticated and complex responses on everything from literary theory to molecular biology, typical humans are becoming less capable of judging the best outputs. “So that means we need something which moves beyond RLHF to align more advanced systems,” McAleese tells IEEE Spectrum.

The solution OpenAI hit on was—surprise!—more AI.

Specifically, the OpenAI researchers trained a model called CriticGPT to evaluate the responses of ChatGPT. In these initial tests, they only had ChatGPT generating computer code, not text responses, because errors are easier to catch and less ambiguous. The goal was to make a model that could assist humans in their RLHF tasks. “We’re really excited about it,” says McAleese, “because if you have AI help to make these judgments, if you can make better judgments when you’re giving feedback, you can train a better model.” This approach is a type of “scalable oversight” that’s intended to allow humans to keep watch over AI systems even if they end up outpacing us intellectually.

Of course, before it could be used for these experiments, CriticGPT had to be trained itself using the usual techniques, including RLHF. In an interesting twist, the researchers had the human trainers deliberately insert bugs into ChatGPT-generated code before giving it to CriticGPT for evaluation. CriticGPT then offered up a variety of responses, and the humans were able to judge the best outputs because they knew which bugs the model should have caught.
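For illustration only, here is the kind of subtle, planted bug such a setup depends on; this example is hypothetical, not taken from the paper.

```python
# A trainer might take correct generated code and introduce a quiet
# off-by-one error for CriticGPT to catch:
def moving_average(values, window):
    """Return the moving average of `values` over `window` samples."""
    out = []
    # BUG (planted): the range should be len(values) - window + 1;
    # as written, the final window is silently dropped.
    for i in range(len(values) - window):
        out.append(sum(values[i:i + window]) / window)
    return out
```

Because the trainers know exactly which flaw they planted, they can judge with confidence whether CriticGPT’s critique actually found it.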

The results of OpenAI’s experiments with CriticGPT were encouraging. The researchers found that CriticGPT caught substantially more bugs than qualified humans paid for code review: CriticGPT caught about 85 percent of bugs, while the humans caught only 25 percent. They also found that pairing CriticGPT with a human trainer resulted in critiques that were more comprehensive than those written by humans alone, and contained fewer hallucinated bugs than critiques written by ChatGPT. McAleese says OpenAI is working toward deploying CriticGPT in its training pipelines, though it’s not clear how useful it would be on a broader set of tasks.

CriticGPT spots coding errors, but maybe not zebras

It’s important to note the limitations of the research, including its focus on short pieces of code. While the paper includes an offhand mention of a preliminary experiment using CriticGPT to catch errors in text responses, the researchers haven’t yet really waded into those murkier waters. It’s tricky because errors in text aren’t always as obvious as a zebra waltzing into a Victorian novel. What’s more, RLHF is often used to ensure that models don’t display harmful bias in their responses and do provide acceptable answers on controversial subjects. McAleese says CriticGPT isn’t likely to be helpful in such situations: “It’s not a strong enough approach.”

An AI researcher with no connection to OpenAI says that the work is not conceptually new, but it’s a useful methodological contribution. “Some of the main challenges with RLHF stem from limitations in human cognition speed, focus, and attention to detail,” says Stephen Casper, a Ph.D. student at MIT and one of the lead authors on a 2023 preprint paper about the limitations of RLHF. “From that perspective, using LLM-assisted human annotators is a natural way to improve the feedback process. I believe that this is a significant step forward toward more effectively training aligned models.”

But Casper also notes that combining the efforts of humans and AI systems “can create brand-new problems.” For example, he says, “this type of approach elevates the risk of perfunctory human involvement and may allow for the injection of subtle AI biases into the feedback process.”

The new alignment research is the first to come out of OpenAI since the company... reorganized its alignment team, to put it mildly. Following the splashy departures of OpenAI cofounder Ilya Sutskever and alignment leader Jan Leike in May, both reportedly spurred by concerns that the company wasn’t prioritizing AI risk, OpenAI confirmed that it had disbanded its alignment team and distributed remaining team members to other research groups. Everyone’s been waiting to see if the company would keep putting out credible and pathbreaking alignment research, and on what scale. (In July 2023, the company had announced that it was dedicating 20 percent of its compute resources to alignment research, but Leike said in a May 2024 tweet that his team had recently been “struggling for compute.”) The preprint released today indicates that at least the alignment researchers are still working the problem.

Reference: https://ift.tt/m3T9V5a

AI-generated Al Michaels to provide daily recaps during 2024 Summer Olympics


Al Michaels looks on prior to the game between the Minnesota Vikings and Philadelphia Eagles at Lincoln Financial Field on September 14, 2023, in Philadelphia, Pennsylvania. (credit: Getty Images)

On Wednesday, NBC announced plans to use an AI-generated clone of famous sports commentator Al Michaels' voice to narrate daily streaming video recaps of the 2024 Summer Olympics in Paris, which start on July 26. The AI-powered narration will feature in "Your Daily Olympic Recap on Peacock," NBC's streaming service. But this new, high-profile use of voice cloning worries critics, who say the technology may muscle out upcoming sports commentators by keeping old personas around forever.

NBC says it has created a "high-quality AI re-creation" of Michaels' voice, trained on Michaels' past NBC appearances to capture his distinctive delivery style.

The veteran broadcaster, revered in the sports commentator world for his iconic "Do you believe in miracles? Yes!" call during the 1980 Winter Olympics, has been covering sports on TV since 1971, including a high-profile run of play-by-play coverage of NFL football games for both ABC and NBC since the 1980s. NBC dropped him from NFL coverage in 2023, however, possibly due to his age.


Reference: https://ift.tt/xab9eqw

Wednesday, June 26, 2024

Critical MOVEit vulnerability puts huge swaths of the Internet at severe risk



A critical vulnerability recently discovered in a widely used piece of software is putting huge swaths of the Internet at risk of devastating hacks, and attackers have already begun actively trying to exploit it in real-world attacks, researchers warn.

The software, known as MOVEit and sold by Progress Software, allows enterprises to transfer and manage files using various specifications, including SFTP, SCP, and HTTP protocols and in ways that comply with regulations mandated under PCI and HIPAA. At the time this post went live, Internet scans indicated it was installed inside almost 1,800 networks around the world, with the biggest number in the US. A separate scan performed Tuesday by security firm Censys found 2,700 such instances.

Causing mayhem with a null string

Last year, a critical MOVEit vulnerability led to the compromise of more than 2,300 organizations, including Shell, British Airways, the US Department of Energy, and Ontario’s government birth registry, BORN Ontario, the latter of which led to the compromise of information for 3.4 million people.


Reference: https://ift.tt/MY0UHZf

Toys “R” Us riles critics with “first-ever” AI-generated commercial using Sora


A screen capture from the partially AI-generated Toys "R" Us brand film created using Sora. (credit: Toys R Us)

On Monday, Toys "R" Us announced that it had partnered with an ad agency called Native Foreign to create what it calls "the first-ever brand film using OpenAI's new text-to-video tool, Sora." OpenAI debuted Sora in February, but the video synthesis tool has not yet become available to the public. The brand film tells the story of Toys "R" Us founder Charles Lazarus using AI-generated video clips.

"We are thrilled to partner with Native Foreign to push the boundaries of Sora, a groundbreaking new technology from OpenAI that's gaining global attention," wrote Toys "R" Us on its website. "Sora can create up to one-minute-long videos featuring realistic scenes and multiple characters, all generated from text instruction. Imagine the excitement of creating a young Charles Lazarus, the founder of Toys "R" Us, and envisioning his dreams for our iconic brand and beloved mascot Geoffrey the Giraffe in the early 1930s."

The company says that The Origin of Toys "R" Us commercial was co-produced by Toys "R" Us Studios President Kim Miller Olko as executive producer and Native Foreign's Nik Kleverov as director. "Charles Lazarus was a visionary ahead of his time, and we wanted to honor his legacy with a spot using the most cutting-edge technology available," Miller Olko said in a statement.


Reference: https://ift.tt/XCZme9O

Tuesday, June 25, 2024

Researchers upend AI status quo by eliminating matrix multiplication in LLMs


Illustration of a brain inside of a light bulb. (credit: Getty Images)

Researchers claim to have developed a new way to run AI language models more efficiently by eliminating matrix multiplication from the process. This fundamentally redesigns neural network operations that are currently accelerated by GPU chips. The findings, detailed in a recent preprint paper from researchers at the University of California Santa Cruz, UC Davis, LuxiTech, and Soochow University, could have deep implications for the environmental impact and operational costs of AI systems.

Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. That ability momentarily made Nvidia the most valuable company in the world last week; the company currently holds an estimated 98 percent market share for data center GPUs, which are commonly used to power AI systems like ChatGPT and Google Gemini.

In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
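To see why dropping MatMul is even plausible, consider a toy version of the trick; this is my sketch based on the paper’s use of ternary weights, not the researchers’ code (their models use quantized, BitNet-style layers). When every weight is -1, 0, or +1, a dense layer’s matrix multiply collapses into additions and subtractions.

```python
# Toy demonstration: with ternary weights, a matrix multiply can be
# replaced by masked additions and subtractions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)            # input activations
W = rng.integers(-1, 2, size=(8, 16))  # ternary weights in {-1, 0, +1}

y_matmul = W @ x  # conventional path: a real matrix multiply

# MatMul-free path: for each output, add the inputs where w = +1 and
# subtract those where w = -1. No multiplications occur.
y_addsub = np.array([x[W[j] == 1].sum() - x[W[j] == -1].sum()
                     for j in range(W.shape[0])])

assert np.allclose(y_matmul, y_addsub)
```

Accumulate-only arithmetic of this sort is the kind of operation that low-power hardware such as the team’s FPGA can exploit efficiently.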


Reference: https://ift.tt/gLIFOiV

Intel’s Latest FinFET Is Key to Its Foundry Plans




Last week at VLSI Symposium, Intel detailed the manufacturing process that will form the foundation of its foundry service for high-performance data center customers. For the same power consumption, the Intel 3 process results in an 18 percent performance gain over the previous process, Intel 4. On the company’s roadmap, Intel 3 is the last to use the fin field-effect transistor (FinFET) structure, which the company pioneered in 2011. But it also includes Intel’s first use of a technology that is essential to its plans long after the FinFET is no longer cutting edge. What’s more, the technology is crucial to the company’s plans to become a foundry and make high-performance chips for other companies.

Called dipole work-function metal, it allows a chip designer to select transistors of several different threshold voltages. Threshold voltage is the level at which a device switches on or off. With the Intel 3 process, a single chip can include devices having any of four tightly controlled threshold voltages. That’s important because different functions operate best with different threshold voltages. Cache memory, for example, typically demands devices with a high threshold voltage to prevent current leakage that wastes power, while other circuits might need the fastest-switching devices, with the lowest threshold voltage.

Threshold voltage is set by the transistor’s gate stack, the layer of metal and insulation that controls the flow of current through the transistor. Historically, “the thickness of the metals determines the threshold voltage,” explains Walid Hafez, vice president of foundry technology development at Intel. “The thicker that work function metal is, the lower the threshold voltage is.” But this dependence on transistor geometry comes with some drawbacks as devices and circuits scale down.

Small deviations in the manufacturing process can alter the volume of the metal in the gate, leading to a somewhat broad range of threshold voltages. And that’s where the Intel 3 process exemplifies the change from Intel making chips only for itself to running as a foundry.

“The way an external foundry operates is very different” from an integrated device manufacturer like Intel was until recently, says Hafez. Foundry customers “need different things… One of those things they need is very tight variation of threshold voltage.”

Intel is different; even without the tight threshold-voltage tolerances, it can sell all its parts by steering the best-performing ones toward its datacenter business and the lower-performing ones toward other market segments.

“A lot of external customers don’t do that,” he says. If a chip doesn’t meet their constraints, they may have to chuck it. “So for Intel 3 to be successful in the foundry space, it has to have those very tight variations.”

Dipoles ever after

Dipole work function materials guarantee the needed control over threshold voltage without worrying about how much room you have in the gate. It’s a proprietary mix of metals and other materials that, despite being only angstroms thick, has a powerful effect on a transistor’s silicon channel.

Intel’s use of dipole work-function materials means the gate surrounding each fin in a FinFET is thinner. Intel

Like the old, thick metal gate, the new mix of materials electrostatically alters the silicon’s band structure to shift the threshold voltage. But it does so by inducing a dipole—a separation of charge—in the thin insulation between it and the silicon.

Because foundry customers were demanding tight control from Intel, it’s likely that competitors TSMC and Samsung already use dipoles in their latest FinFET processes. What exactly such structures are made of is a trade secret, but lanthanum was a component in earlier research, and it was the key ingredient in other research presented by the Belgium-based microelectronics research center, Imec. That research was concerned with how best to build the material around stacks of horizontal silicon ribbons instead of one or two vertical fins.

In these devices, called nanosheets or gate all-around transistors, there are mere nanometers between each ribbon of silicon, so dipoles are a necessity. Samsung has already introduced a nanosheet process, and Intel’s, called 20A, is scheduled for later this year. Introducing dipole work function at Intel 3 helps get 20A and its successor 18A into a more mature state, says Hafez.

Flavors of Intel 3

Dipole work-function metal was only one of the technologies behind the 18 percent boost Intel 3 delivers over its predecessor. Among the others are more perfectly formed fins, more sharply defined contacts to the transistor, and lower resistance and capacitance in the interconnects. (Hafez details all that here.)

Intel is using the process to build its Xeon 6 CPUs. And the company plans to offer customers three variations on the technology, including one, 3-PT, with 9-micrometer through-silicon-vias for use in 3D stacking. “We expect Intel 3-PT to be the backbone of our foundry processes for some time to come,” says Hafez.

Reference: https://ift.tt/9tQKmex

Monday, June 24, 2024

US on Verge of Clean Hydrogen Boom as Dollars Flow




Hi Reader,

With billions of government funding flowing into hydrogen hubs across the country, the US is navigating a pivotal year for hydrogen production. Incoming final guidelines from the IRS on the definition of clean hydrogen could significantly impact project viability, funding, and deployment.

Securing a competitive edge in this rapidly evolving market is more critical than ever. Our newly released whitepaper, US on the Verge of Clean Hydrogen Boom as Dollars Flow, is your essential resource for the latest hydrogen projects, federal initiatives, and prognosis for the year ahead.

Download your copy of the whitepaper here!

Access your copy of the report now, and receive:

  • Critical Policy and Regulatory Insights: Understand the implications of forthcoming IRS guidelines, qualifying for 45V tax credits, and the Three Pillar approach to green hydrogen, to help you futureproof your hydrogen investments, projects, and strategy for these changes
  • The Latest Market Forecasts: Review detailed projections for clean hydrogen growth trajectories, up-to-date production costs, and demand predictions, to help you forge a profitable hydrogen strategy with confidence
  • Timely Competitive Analysis: Explore the major players dominating the market today, and get the latest updates on the Regional Clean Hydrogen Hubs plan, to provide a window into your competitors’ strategies and secure your competitive advantage

Get critical hydrogen market insights here!

Don’t miss this opportunity to enhance your hydrogen portfolio and decision-making process through this comprehensive analysis.

If you have any questions, feedback, or insights to share about this whitepaper, please do get in touch with us.

All the best,

Reuters Events Renewables Team

Reference: https://ift.tt/47B0EzT

Backdoor slipped into multiple WordPress plugins in ongoing supply-chain attack


Stylized illustration of a door that opens onto a wall of computer code.

(credit: Getty Images)

WordPress plugins running on as many as 36,000 websites have been backdoored in a supply-chain attack with unknown origins, security researchers said on Monday.

So far, five plugins are known to be affected in the campaign, which was active as recently as Monday morning, researchers from security firm Wordfence reported. Over the past week, unknown threat actors have added malicious functions to updates available for the plugins on WordPress.org, the official site for the open source WordPress CMS software. When installed, the updates automatically create an attacker-controlled administrative account that provides full control over the compromised site. The updates also add content designed to goose search results.

Poisoning the well

“The injected malicious code is not very sophisticated or heavily obfuscated and contains comments throughout making it easy to follow,” the researchers wrote. “The earliest injection appears to date back to June 21st, 2024, and the threat actor was still actively making updates to plugins as recently as 5 hours ago.”


Reference: https://ift.tt/g5WJpH0

Music industry giants allege mass copyright violation by AI firms


Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson’s music. (credit: Getty Images)

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (i.e., "a dubstep song about Linus Torvalds").

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies' use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.


Reference: https://ift.tt/1FeZMvK

A Bosch Engineer Speeds Hybrid Race Cars to the Finish Line




When it comes to motorsports, the need for speed isn’t only on the racetrack. Engineers who support race teams also need to work at a breakneck pace to fix problems, and that’s something Aakhilesh Singhania relishes.

Singhania is a senior applications engineer at Bosch Engineering, in Novi, Mich. He develops and supports electronic control systems for hybrid race cars, which feature combustion engines and battery-powered electric motors.

Aakhilesh Singhania

Employer: Bosch Engineering

Occupation: Senior applications engineer

Education: Bachelor’s degree in mechanical engineering, Manipal Institute of Technology, India; master’s degree in automotive engineering, University of Michigan, Ann Arbor

His vehicles compete in two iconic endurance races: the Rolex 24 at Daytona in Daytona Beach, Fla., and the 24 Hours of Le Mans in France. He splits his time between refining the underlying technology and providing trackside support on competition day. Given the relentless pace of the racing calendar and the intense time pressure when cars are on the track, the job is high octane. But Singhania says he wouldn’t have it any other way.

“I’ve done jobs where the work gets repetitive and mundane,” he says. “Here, I’m constantly challenged. Every second counts, and you have to be very quick at making decisions.”

An Early Interest in Motorsports

Growing up in Kolkata, India, Singhania picked up a fascination with automobiles from his father, a car enthusiast.

In 2010, when Singhania began his mechanical engineering studies at India’s Manipal Institute of Technology, he got involved in the Formula Student program, an international engineering competition that challenges teams of university students to design, build, and drive a small race car. The cars typically weigh less than 250 kilograms and can have an engine no larger than 710 cubic centimeters.

“It really hooked me,” he says. “I devoted a lot of my spare time to the program, and the experience really motivated me to dive further into motorsports.”

One incident in particular shaped Singhania’s career trajectory. In 2013, he was leading Manipal’s Formula Student team and was one of the drivers for a competition in Germany. When he tried to start the vehicle, smoke poured out of the battery, and the team had to pull out of the race.

“I asked myself what I could have done differently,” he says. “It was my lack of knowledge of the electrical system of the car that was the problem.” So, he decided to get more experience and education.

Learning About Automotive Electronics

After graduating in 2014, Singhania began working on engine development for Indian car manufacturer Tata Motors in Pune. In 2016, determined to fill the gaps in his knowledge about automotive electronics, he left India to begin a master’s degree program in automotive engineering at the University of Michigan in Ann Arbor.

He took courses in battery management, hybrid controls, and control-system theory, parlaying this background into an internship with Bosch in 2017. After graduation in 2018, he joined Bosch full-time as a calibration engineer, developing technology for hybrid and electric vehicles.

Transitioning into motorsports required perseverance, Singhania says. He became friendly with the Bosch team that worked on electronics for race cars. Then in 2020 he got his big break.

That year, the U.S.-based International Motor Sports Association and the France-based Automobile Club de l’Ouest created standardized rules to allow the same hybrid race cars to compete in both the Sportscar Championship in North America, host of the famous Daytona race, and the global World Endurance Championship, host of Le Mans.

The Bosch motorsports team began preparing a proposal to provide the standardized hybrid system. Singhania, whose job already included creating simulations of how vehicles could be electrified, volunteered to help.

The competition organizers selected Bosch as lead developer of the hybrid system that would be provided to all teams. Bosch engineers would also be required to test the hardware they supplied to each team to ensure none had an advantage.

“The performance of all our parts in all the cars has to fall within 1 percent of each other,” Singhania says.

After Bosch won the contract, Singhania officially became a motorsports calibration engineer, responsible for tweaking the software to fit the idiosyncrasies of each vehicle.

In 2022 he stepped up to his current role: developing software for the hybrid control unit (HCU), which is essentially the brains of the vehicle. The HCU helps coordinate the different subsystems, such as the engine, battery, and electric motor, and is responsible for balancing power requirements among these components to maximize performance and lifetime.
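
To make that coordination concrete, here is a minimal sketch, in Python, of the kind of power-split policy an HCU might apply. Every name, limit, and rule below is a hypothetical illustration, not Bosch’s actual software.

```python
# Hypothetical sketch of hybrid power-split logic, loosely modeled on the
# article's description of an HCU. Names, limits, and policy are invented.

ENGINE_MAX_KW = 500.0   # assumed combustion-engine limit
MOTOR_MAX_KW = 200.0    # assumed electric-motor limit
SOC_FLOOR = 0.2         # don't discharge the battery below 20 percent

def split_power(request_kw: float, soc: float) -> tuple[float, float]:
    """Return (engine_kw, motor_kw) for a given driver power request.

    Policy: lean on the electric motor while the battery has charge,
    and let the engine cover the remainder, respecting both limits.
    """
    motor_kw = min(request_kw, MOTOR_MAX_KW) if soc > SOC_FLOOR else 0.0
    engine_kw = min(max(request_kw - motor_kw, 0.0), ENGINE_MAX_KW)
    return engine_kw, motor_kw

# Example: a 600 kW request with a healthy battery
print(split_power(600.0, soc=0.8))   # -> (400.0, 200.0)
```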

Bosch’s engineers also designed software known as an equity model, which runs on the HCU. It is based on historical data collected from the operation of the hybrid systems’ various components, and controls their performance in real time to ensure all the teams’ hardware operates at the same level.

In addition, Singhania creates simulations of the race cars, which are used to better understand how the different components interact and how altering their configuration would affect performance.

Troubleshooting Problems on Race Day

Technology development is only part of Singhania’s job. On race days, he works as a support engineer, helping troubleshoot problems with the hybrid system as they crop up. Singhania and his colleagues monitor each team’s hardware using computers on Bosch’s race-day trailer, a mobile nerve center hardwired to the organizers’ control center on the race track.

“We are continuously looking at all the telemetry data coming from the hybrid system and analyzing [the system’s] health and performance,” he says.

If the Bosch engineers spot an issue or a team notifies them of a problem, they rush to the pit stall to retrieve a USB stick from the vehicle, which contains detailed data to help them diagnose and fix the issue.

After the race, the Bosch engineers analyze the telemetry data to identify ways to boost the standardized hybrid system’s performance for all the teams. In motorsports, where the difference between winning and losing can come down to fractions of a second, that kind of continual improvement is crucial.

Customers “put lots of money into this program, and they are there to win,” Singhania says.

Breaking Into Motorsports Engineering

Many engineers dream about working in the fast-paced and exciting world of motorsports, but it’s not easy breaking in. The biggest lesson Singhania learned is that if you don’t ask, you don’t get invited.

“Keep pursuing them because nobody’s going to come to you with an offer,” he says. “You have to keep talking to people and be ready when the opportunity presents itself.”

Demonstrating that you have experience contributing to challenging projects is a big help. Many of the engineers Bosch hires have been involved in Formula Student or similar automotive-engineering programs, such as the EcoCAR EV Challenge, says Singhania.

The job isn’t for everyone, though, he says. It’s demanding and requires a lot of travel and working on weekends during race season. But if you thrive under pressure and have a knack for problem solving, there are few more exciting careers.

Reference: https://ift.tt/KZIEzNP

Powering Planes With Microwaves Is Not the Craziest Idea




Imagine it’s 2050 and you’re on a cross-country flight on a new type of airliner, one with no fuel on board. The plane takes off, and you rise above the airport. Instead of climbing to cruising altitude, though, your plane levels out and the engines quiet to a low hum. Is this normal? No one seems to know. Anxious passengers crane their necks to get a better view out their windows. They’re all looking for one thing.

Then it appears: a massive antenna array on the horizon. It’s sending out a powerful beam of electromagnetic radiation pointed at the underside of the plane. After soaking in that energy, the engines power up, and the aircraft continues its climb. Over several minutes, the beam will deliver just enough energy to get you to the next ground antenna located another couple hundred kilometers ahead.

The person next to you audibly exhales. You sit back in your seat and wait for your drink. Old-school EV-range anxiety is nothing next to this.

Electromagnetic waves on the fly

Beamed power for aviation is, I admit, an outrageous notion. If physics doesn’t forbid it, federal regulators or nervous passengers probably will. But compared with other proposals for decarbonizing aviation, is it that crazy?

Batteries, hydrogen, alternative carbon-based fuels—nothing developed so far can store energy as cheaply and densely as fossil fuels, or fully meet the needs of commercial air travel as we know it. So, what if we forgo storing all the energy on board and instead beam it from the ground? Let me sketch what it would take to make this idea fly.

[Embedded video: Beamed Power for Aviation. Fly by Microwave: Warm up to a new kind of air travel]

For the wireless-power source, engineers would likely choose microwaves because this type of electromagnetic radiation can pass unruffled through clouds and because receivers on planes could absorb it completely, with nearly zero risk to passengers.

To power a moving aircraft, microwave radiation would need to be sent in a tight, steerable beam. This can be done using technology known as a phased array, which is commonly used to direct radar beams. With enough elements spread out sufficiently and all working together, phased arrays can also be configured to focus power on a point a certain distance away, such as the receiving antenna on a plane.

Phased arrays work on the principle of constructive and destructive interference. The radiation from the antenna elements will, of course, overlap. In some directions the radiated waves will interfere destructively and cancel out one another, and in other directions the waves will fall perfectly in phase, adding together constructively. Where the waves overlap constructively, energy radiates in that direction, creating a beam of power that can be steered electronically.
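
As a toy illustration of that principle, the short Python sketch below computes the far-field pattern of a one-dimensional phased array and confirms that a linear phase ramp across the elements steers the beam. The element count, spacing, and steering angle are arbitrary values chosen for the demo.

```python
import numpy as np

# Toy 1-D phased array: N elements spaced half a wavelength apart.
# A linear phase ramp across the elements steers the main beam.
N = 64                      # number of elements (arbitrary)
d = 0.5                     # element spacing, in wavelengths
steer_deg = 20.0            # desired beam direction off broadside

angles = np.radians(np.linspace(-90, 90, 1801))
n = np.arange(N)

# Per-element phase that cancels the geometric path difference toward
# the steering angle, so the waves add constructively in that direction.
phase = -2 * np.pi * d * n * np.sin(np.radians(steer_deg))

# Array factor: coherent sum of element contributions in each direction.
af = np.abs(np.exp(1j * (2 * np.pi * d * np.outer(np.sin(angles), n)
                         + phase)).sum(axis=1)) / N

print(f"beam peaks at {np.degrees(angles[np.argmax(af)]):.1f} degrees")  # ~20.0
```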

How far we can send energy in a tight beam with a phased array is governed by physics—specifically, by something called the diffraction limit. There’s a simple way to calculate the optimal case for beamed power: D1 D2 > λ R. In this mathematical inequality, D1 and D2 are the diameters of the sending and receiving antennas, λ is the wavelength of the radiation, and R is the distance between those antennas.

Now, let me offer some ballpark numbers to figure out how big the transmitting antenna (D1) must be. The size of the receiving antenna on the aircraft is probably the biggest limiting factor. A medium-size airliner has a wing and body area of about 1,000 square meters, which should provide for the equivalent of a receiving antenna that’s 30 meters wide (D2) built into the underside of the plane.

Next, let’s guess how far we would need to beam the energy. The line of sight to the horizon for someone in an airliner at cruising altitude is about 360 kilometers long, assuming the terrain below is level. But mountains would interfere, plus nobody wants range anxiety, so let’s place our ground antennas every 200 km along the flight path, each beaming energy half of that distance. That is, set R to 100 km.

Finally, assume the microwave wavelength (λ) is 5 centimeters. This provides a happy medium between a wavelength that’s too small to penetrate clouds and one that’s too large to gather back together on a receiving dish. Plugging these numbers into the equation above shows that in this scenario the diameter of the ground antennas (D1) would need to be at least about 170 meters. That’s gigantic, but perhaps not unreasonable. Imagine a series of three or four of these antennas, each the size of a football stadium, spread along the route, say, between LAX and SFO or between AMS and BER.
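
Those ballpark figures are easy to check. Here is a quick Python verification using the standard horizon approximation d ≈ √(2Rh) and the diffraction inequality above; the cruising altitude and Earth radius are my own assumptions, not numbers from the article.

```python
import math

# Horizon distance from cruising altitude (assumes ~10.5 km altitude
# and a mean Earth radius of 6371 km; both are ballpark inputs).
R_earth_km = 6371.0
altitude_km = 10.5
horizon_km = math.sqrt(2 * R_earth_km * altitude_km)
print(f"line of sight to horizon: {horizon_km:.0f} km")      # ~366 km

# Diffraction limit D1*D2 > lambda*R, with the article's numbers.
wavelength_m = 0.05     # 5-cm microwaves
R_m = 100e3             # beam half of the 200-km station spacing
D2_m = 30.0             # receiving aperture on the plane
print(f"minimum ground antenna: {wavelength_m * R_m / D2_m:.0f} m")  # ~167 m
```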

Power beaming in the real world

While what I’ve described is theoretically possible, in practice engineers have beamed only a fraction of the amount of power needed for an airliner, and they’ve done that only over much shorter distances.

NASA holds the record from an experiment in 1975, when it beamed 30 kilowatts of power over 1.5 km with a dish the size of a house. To achieve this feat, the team used an analog device called a klystron. The geometry of a klystron causes electrons to oscillate in a way that amplifies microwaves of a particular frequency—kind of like how the geometry of a whistle causes air to oscillate and produce a particular pitch.

Klystrons and their cousins, cavity magnetrons (found in ordinary microwave ovens), are quite efficient because of their simplicity. But their properties depend on their precise geometry, so it’s challenging to coordinate many such devices to focus energy into a tight beam.

In more recent years, advances in semiconductor technology have allowed a single oscillator to drive a large number of solid-state amplifiers in near-perfect phase coordination. This has allowed microwaves to be focused much more tightly than was possible before, enabling more-precise energy transfer over longer distances.

In 2022, the Auckland-based startup Emrod showed just how promising this semiconductor-enabled approach could be. Inside a cavernous hangar in Germany owned by Airbus, the researchers beamed 550 watts across 36 meters and kept over 95 percent of the energy flowing in a tight beam—far better than could be achieved with analog systems. In 2021, the U.S. Naval Research Laboratory showed that these techniques could handle higher power levels when it sent more than a kilowatt between two ground antennas over a kilometer apart. Other researchers have energized drones in the air, and a few groups even intend to use phased arrays to beam solar power from satellites to Earth.

A rectenna for the ages

So beaming energy to airliners might not be entirely crazy. But please remain seated with your seat belts fastened; there’s some turbulence ahead for this idea. A Boeing 737 aircraft at takeoff requires about 30 megawatts—a thousand times as much power as any power-beaming experiment has demonstrated. Scaling up to this level while keeping our airplanes aerodynamic (and flyable) won’t be easy.

Consider the design of the antenna on the plane, which receives and converts the microwaves to an electric current to power the aircraft. This rectifying antenna, or rectenna, would need to be built onto the underside surfaces of the aircraft with aerodynamics in mind. Power transmission will be maximized when the plane is right above the ground station, but it would be far more limited the rest of the time, when ground stations are far ahead or behind the plane. At those angles, the beam would activate only either the front or rear surfaces of the aircraft, making it especially hard to receive enough power.

With 30 MW blasting onto that small an area, power density will be an issue. If the aircraft is the size of a Boeing 737, the rectenna would have to cram about 25 W into each square centimeter. Because the solid-state elements of the array would be spaced about a half-wavelength—or 2.5 cm—apart, this translates to about 150 W per element—perilously close to the maximum power density of any solid-state power-conversion device. The top mark in the 2016 IEEE/Google Little Box Challenge was about 150 W per cubic inch (less than 10 W per cubic centimeter).
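
That per-element figure follows directly from the stated power density and element spacing, as a quick check shows:

```python
# Per-element load implied by the article's numbers.
power_density_w_cm2 = 25.0   # watts per square centimeter on the rectenna
spacing_cm = 2.5             # half-wavelength element spacing
element_area_cm2 = spacing_cm ** 2
print(power_density_w_cm2 * element_area_cm2)   # ~156 W per element
```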

The rectenna will also have to weigh very little and minimize the disturbance to the airflow over the plane. Compromising the geometry of the rectenna for aerodynamic reasons might lower its efficiency. State-of-the-art power-transfer efficiencies are only about 30 percent, so the rectenna can’t afford to compromise too much.

And all of this equipment will have to work in an electric field of about 7,000 volts per meter—the strength of the power beam. The electric field inside a microwave oven, which is only about a third as strong, can create a corona discharge, or electric arc, between the tines of a metal fork, so just imagine what might happen inside the electronics of the rectenna.

And speaking of microwave ovens, I should mention that, to keep passengers from cooking in their seats, the windows on any beamed-power airplane would surely need the same wire mesh that’s on the doors of microwave ovens—to keep those sizzling fields outside the plane. Birds, however, won’t have that protection.

Fowl flying through our power beam near the ground might encounter a heating of more than 1,000 watts per square meter—stronger than the sun on a hot day. Up higher, the beam will narrow to a focal point with much more heat. But because that focal point would be moving awfully fast and located higher than birds typically fly, any roasted ducks falling from the sky would be rare in both senses of the word. Ray Simpkin, chief science officer at Emrod, told me it’d take “more than 10 minutes to cook a bird” with Emrod’s relatively low-power system.

Legal challenges would surely come, though, and not just from the National Audubon Society. Thirty megawatts beamed through the air would be about 10 billion times as strong as typical signals at 5-cm wavelengths (a band currently reserved for amateur radio and satellite communications). Even if the transmitter could successfully put 99 percent of the waves into a tight beam, the 1 percent that’s leaked would still be a hundred million times as strong as approved transmissions today.
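
Those two multipliers are mutually consistent, as a few lines of arithmetic confirm (the “typical signal” level here is simply what the article’s 10-billion ratio implies):

```python
beam_w = 30e6                 # 30 MW beam
typical_w = beam_w / 1e10     # implied typical signal: ~3 mW
leak_w = 0.01 * beam_w        # 1 percent leakage: 300 kW
print(leak_w / typical_w)     # 1e8, i.e. a hundred million times as strong
```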

And remember that aviation regulators make us turn off our cellphones during takeoff to quiet radio noise, so imagine what they’ll say about subjecting an entire plane to electromagnetic radiation that’s substantially stronger than that of a microwave oven. All these problems are surmountable, perhaps, but only with some very good engineers (and lawyers).

Compared with the legal obstacles and the engineering hurdles we’d need to overcome in the air, the challenges of building transmitting arrays on the ground, huge as they would have to be, seem modest. The rub is the staggering number of them that would have to be built. Many flights occur over mountainous terrain, producing a line of sight to the horizon that is less than 100 km. So in real-world terrain we’d need more closely spaced transmitters. And for the one-third of airline miles that occur over oceans, we would presumably have to build floating arrays. Clearly, building out the infrastructure would be an undertaking on the scale of the Eisenhower-era U.S. interstate highway system.

Decarbonizing with the world’s largest microwave

People might be able to find workarounds for many of these issues. If the rectenna is too hard to engineer, for example, perhaps designers will find that they don’t have to turn the microwaves back into electricity—there are precedents for using heat to propel airplanes. A sawtooth flight path—with the plane climbing up as it approaches each emitter station and gliding down after it passes by—could help with the power-density and field-of-view issues, as could flying-wing designs, which have much more room for large rectennas. Perhaps using existing municipal airports or putting ground antennas near solar farms could reduce some of the infrastructure cost. And perhaps researchers will find shortcuts to radically streamline phased-array transmitters. Perhaps, perhaps.

To be sure, beamed power for aviation faces many challenges. But less-fanciful options for decarbonizing aviation have their own problems. Battery-powered planes don’t even come close to meeting the needs of commercial airlines. The best rechargeable batteries have about 5 percent of the effective energy density of jet fuel. At that figure, an all-electric airliner would have to fill its entire fuselage with batteries—no room for passengers, sorry—and it’d still barely make it a tenth as far as an ordinary jet. Given that the best batteries have improved by only threefold in the past three decades, it’s safe to say that batteries won’t power commercial air travel as we know it anytime soon.
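
For a rough sense of where that 5 percent figure comes from, here is a back-of-the-envelope comparison; the energy densities and drivetrain efficiencies below are my own ballpark assumptions, not numbers from the article.

```python
# Ballpark effective energy densities (assumed values, not the article's).
jet_fuel_kwh_kg = 12.0      # ~43 MJ/kg for jet fuel
turbofan_eff = 0.35         # rough fuel-to-thrust efficiency
battery_kwh_kg = 0.3        # good current lithium-ion cells
electric_eff = 0.9          # rough battery-to-thrust efficiency

ratio = (battery_kwh_kg * electric_eff) / (jet_fuel_kwh_kg * turbofan_eff)
print(f"{ratio:.0%}")   # ~6 percent, in line with the article's ~5 percent
```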

Hydrogen isn’t much further along, despite early hydrogen-powered flights occurring nearly 40 years ago. And it’s potentially dangerous—enough that some designs for hydrogen planes have included two separate fuselages: one for fuel and one for people to give them more time to get away if the stuff gets explode-y. The same factors that have kept hydrogen cars off the road will probably keep hydrogen planes out of the sky.

Synthetic and biobased jet fuels are probably the most reasonable proposal. They’ll give us aviation just as we know it today, just at a higher cost—perhaps 20 to 50 percent more expensive per ticket. But fuels produced from food crops can be worse for the environment than the fossil fuels they replace, and fuels produced from CO2 and electricity are even less economical. Plus, all combustion fuels could still contribute to contrail formation, which makes up more than half of aviation’s climate impact.

The big problem with the “sane” approach for decarbonizing aviation is that it doesn’t present us with a vision of the future at all. At the very best, we’ll get a more expensive version of the same air travel experience the world has had since the 1970s.

True, beamed power is far less likely to work. But it’s good to examine crazy stuff like this from time to time. Airplanes themselves were a crazy idea when they were first proposed. If we want to clean up the environment and produce a future that actually looks like a future, we might have to take fliers on some unlikely sounding schemes.

Reference: https://ift.tt/pJgXDcO

The Top 10 Climate Tech Stories of 2024

In 2024, technologies to combat climate change soared above the clouds in electricity-generating kites, traveled the oceans sequestering...