Friday, June 30, 2023

Radar Imaging Could Be the Key to Monitoring Climate Change


As a child, Alberto Moreira discovered his passion for electronics from the kits for exploring science, technology, engineering, and mathematics that his father bought him every month. The kits taught him not only about electronics but also about chemistry and physics. As he got older, he and his brother began making their own electronic circuits. When Moreira was 15, the duo built high-fidelity amplifiers and control panels for neon signs, selling them to small companies that used such displays to advertise their business.

Those early experiences ultimately led to a successful career as director of the German Aerospace Center (DLR)’s Microwaves and Radar Institute, in Oberpfaffenhofen, Bavaria, where the IEEE Fellow developed a space-based interferometric synthetic-aperture radar (InSAR) system.

ALBERTO MOREIRA
Employer: German Aerospace Center’s Microwaves and Radar Institute, in Oberpfaffenhofen, Bavaria
Title: Director
Member grade: Fellow
Alma maters: Instituto Tecnológico de Aeronáutica, São José dos Campos, Brazil; Technical University of Munich

That InSAR system has generated digital elevation maps of the Earth’s surface with unparalleled accuracy and resolution. The models now serve as a standard for many geoscientific, remote sensing, topographical, and commercial applications. Moreira’s technology also helps to track the effects of climate change.

For his “leadership and innovative concepts in the design, deployment, and utilization of airborne and space-based radar systems,” Moreira is this year’s recipient of the IEEE Dennis J. Picard Medal for Radar Technologies and Applications. It is sponsored by Raytheon Technologies.

Moreira says he’s honored to receive the “most prestigious award in the radar technologies and applications field.” “It recognizes the 20 years of hard work my team and I put into our research,” he says.
“What makes the honor more special is that the award is from IEEE.”

Using radar to map the Earth’s surface

Before Moreira and his team developed their InSAR system in 2010, synthetic-aperture radar systems were the state of the art, he says. Unlike optical imaging systems, SAR systems can penetrate clouds and rain to take high-resolution images of the Earth from space, and they can operate at night.

An antenna on an orbiting satellite sends pulsed microwave signals to the Earth’s surface as it passes over the terrain being mapped. The signals are then reflected back to the antenna, allowing the system to measure the distance between the antenna and the point on the Earth’s surface where the signal is reflected. Using data-processing algorithms, the reflected signals are combined in such a way that a computationally generated, synthetic antenna acts as though it were a much larger one—which provides improved resolution. That’s why the approach is called synthetic-aperture radar.

“The system is documenting changes taking place on Earth and facilitating the early detection of irreversible damage.”

While leading a research team at the DLR in the early 1990s, Moreira saw the potential of using information gathered from such radar satellites to help address societal issues such as sustainable development and the climate crisis. But he wanted to take the technology a step further and use interferometric synthetic-aperture radar, InSAR, which, he realized, would be more powerful. SAR satellites provide 2D images, but InSAR allows for 3D imaging of the Earth’s surface, meaning that you can map topography, not just radar reflectivity.

It took Moreira and his team almost 10 years to develop their InSAR system, the first to use two satellites, each with its own antenna. Their approach allows elevation maps to be created.
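The ranging and resolution arithmetic behind synthetic-aperture imaging can be sketched in a few lines. The numbers below are illustrative assumptions (an X-band wavelength of about 3.1 centimeters, a 4.8-meter antenna, and a 600-kilometer slant range), not the parameters of any specific DLR system:

```python
C = 299_792_458.0  # speed of light, m/s

def slant_range(echo_delay_s):
    """A radar pulse travels to the ground and back, so range = c * t / 2."""
    return C * echo_delay_s / 2.0

def real_aperture_resolution(wavelength_m, range_m, antenna_len_m):
    """Diffraction-limited azimuth resolution of a real antenna: lambda * R / L."""
    return wavelength_m * range_m / antenna_len_m

def synthetic_aperture_resolution(antenna_len_m):
    """Classic SAR result: azimuth resolution ~ L / 2, independent of range."""
    return antenna_len_m / 2.0

# A 4 ms round-trip echo corresponds to roughly 600 km of slant range:
print(slant_range(0.004))                           # ~599,585 m
# A 4.8 m antenna used directly at that range resolves only kilometers:
print(real_aperture_resolution(0.031, 600e3, 4.8))  # ~3,875 m
# The same antenna flown as a synthetic aperture:
print(synthetic_aperture_resolution(4.8))           # 2.4 m
```

The point of the sketch is the contrast between the last two lines: a real antenna's azimuth resolution degrades with range, while the synthetic aperture's depends only on the physical antenna length.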
The two satellites, named TerraSAR-X and TanDEM-X, orbit the Earth in almost circular orbits, with the distance between the satellites varying from 150 to 500 meters at any given time. To avoid collisions, Moreira and his team developed a double helix orbit; the satellites travel along an ellipse and corkscrew around each other. The satellites communicate with each other and with ground stations, sending altitude and position data so that their separation can be fine-tuned.

Each satellite emits microwave pulses and each one receives the backscattered signals. Although the backscattered signals received by each satellite are almost identical, they differ slightly due to the different viewing geometries. And those differences in the received signals depend on the terrain height, allowing the surface elevation to be mapped.

By combining measurements of the same area obtained at different times to form interferograms, scientists can determine whether there were subtle changes in elevation in the area, such as rising sea levels or deforestation, during the intervening time period.

The InSAR system was used in the DLR’s 2010 TanDEM-X mission. Its goal was to create a topographical map of the Earth with a horizontal pixel spacing of 12 meters. After its launch, the system surveyed the Earth’s surface multiple times in five years and collected more than 3,000 terabytes of data. In September 2016 the first global digital elevation map with a 2-meter height accuracy was produced. It was 30 times more accurate than any previous effort, Moreira says.

The satellites are currently being used to monitor environmental effects, specifically deforestation and glacial melting. The hope, Moreira says, is that early detection of irreversible damage can help scientists pinpoint where intervention is needed. He and his team are developing a system that uses more satellites flying in close formation to improve the data available from radar imaging.
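The interferometric step, turning small phase differences between the two received signals into terrain height, can be sketched with the textbook single-pass (bistatic) relation. All numbers here are illustrative assumptions, not actual TanDEM-X parameters:

```python
import math

def height_from_phase(phi, wavelength, slant_range, incidence_deg, baseline_perp):
    """Textbook single-pass (bistatic) InSAR relation:
        dh ~= (lambda * R * sin(theta)) / (2 * pi * B_perp) * phi
    A small interferometric phase difference phi maps to a height change dh."""
    theta = math.radians(incidence_deg)
    return wavelength * slant_range * math.sin(theta) / (2 * math.pi * baseline_perp) * phi

# Illustrative X-band values: 3.1 cm wavelength, 600 km slant range,
# 35-degree incidence angle, 300 m perpendicular baseline.
# A quarter cycle of phase then corresponds to roughly 9 m of height:
print(height_from_phase(math.pi / 2, 0.031, 600e3, 35.0, 300.0))  # ~8.9 m
```

For repeat-pass (monostatic) interferometry the denominator would carry 4π rather than 2π; the bistatic form is shown because a single-pass, two-satellite configuration is what the article describes.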
“By collecting more detailed information, we can better understand, for example, how the forests are changing internally by imaging every layer,” he says, referring to the emergent layer and the canopy, understory, and forest floor.

He also is developing a space-based radar system that uses digital beamforming to produce images of the Earth’s surface with higher spatial resolution in less time. It currently takes radar systems about 12 days to produce a global map with a 20-meter resolution, Moreira says, but the new system will be able to do it in six days with a 5-meter resolution.

Digital beamforming represents a paradigm shift for spaceborne SAR systems. It uses an antenna divided into several parts, each of which has its own receiving channel and analog-to-digital converter. The channels are combined in such a way that different antenna beams can be computed a posteriori to increase the imaged swath and the length of the synthetic aperture—which allows for a higher spatial resolution, Moreira says. He says he expects three such systems to be launched within the next five years.

A lifelong career at the DLR

Moreira earned bachelor’s and master’s degrees in electrical engineering from the Instituto Tecnológico de Aeronáutica, in São José dos Campos, Brazil, in 1984 and 1986. He decided to pursue a doctorate outside the country after his master’s thesis advisor told him there were more research opportunities elsewhere. Moreira earned his Ph.D. in engineering at the Technical University of Munich. As a doctoral student, he conducted research at the DLR on real-time radar processing. For his dissertation, he created algorithms that generated high-resolution images from one of the DLR’s existing airborne radar systems.

“Having students and engineers work together on large-scale projects is a dream come true.”

After graduating in 1993, he planned to move back to Brazil, but instead he accepted an offer to become a DLR group leader.
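Returning to the digital-beamforming scheme described a few paragraphs back: because every antenna segment is digitized separately, one recorded snapshot can be re-weighted after the fact to form a beam in any direction. Here is a minimal delay-and-sum sketch; the array size and spacing are arbitrary illustrative values, not those of any planned system:

```python
import cmath
import math

N_ELEM = 8       # number of antenna segments (illustrative)
SPACING = 0.5    # element spacing, in wavelengths (illustrative)
K = 2 * math.pi  # wavenumber for a unit wavelength

def array_response(angle_deg):
    """Per-element phase of a plane wave arriving from angle_deg."""
    s = math.sin(math.radians(angle_deg))
    return [cmath.exp(1j * K * SPACING * m * s) for m in range(N_ELEM)]

def beamform(snapshot, angle_deg):
    """Delay-and-sum: phase-align each channel toward angle_deg, then average.
    The same recorded snapshot can be re-combined for any angle afterward,
    which is the 'a posteriori' beam computation described in the text."""
    weights = [z.conjugate() for z in array_response(angle_deg)]
    return sum(w * x for w, x in zip(weights, snapshot)) / N_ELEM

snapshot = array_response(20.0)        # a wavefront arriving from 20 degrees
print(abs(beamform(snapshot, 20.0)))   # 1.0: full coherent gain on target
print(abs(beamform(snapshot, -40.0)))  # well under 0.1: off-beam rejection
```

Steering toward the true arrival angle gives full coherent gain while other directions largely cancel, which is what lets many beams be computed from the same stored channel data.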
Moreira led a research team of 10 people working on airborne- and satellite-system design and data processing. In 1996 he was promoted to chief scientist and engineer in the organization’s SAR-technology department. He worked in that position until 2001, when he became director of the Microwaves and Radar Institute.

“I selected the right profession,” he says. “I couldn’t imagine doing anything other than research and electronics.”

He is also a professor of microwave remote sensing at the Karlsruhe Institute of Technology, in Germany, and has been the doctoral advisor for more than 50 students working on research at DLR facilities. One of his favorite parts about being a director and professor is working with his students, he says: “I spend about 20 percent of my time with them. Having students and engineers work together on large-scale projects is a dream come true. When I first began my career at DLR I was not aware that this collaboration would be so powerful.”

The importance of creating an IEEE network

It was during his time as a doctoral student that Moreira was introduced to IEEE. He presented his first research paper in 1989 at the International Geoscience and Remote Sensing Symposium, in Vancouver. While attending his second conference, he says, he realized that by not being a member he was “missing out on many important things” such as networking opportunities, so he joined.

He says IEEE has played an important role throughout his career. He has presented all his research at IEEE conferences, and he has published papers in the organization’s journals. He is a member of the IEEE Aerospace and Electronic Systems, IEEE Antennas and Propagation, IEEE Geoscience and Remote Sensing (GRSS), IEEE Information Technology, IEEE Microwave Theory and Techniques, and IEEE Signal Processing societies.
“I recommend that everyone join not only IEEE but also at least one of its societies,” he says, calling them “the home of your research.”

He founded the IEEE GRSS Germany Section in 2003 and served as the society’s 2010 president. An active volunteer, he was a member of the IEEE GRSS administrative committee and served as associate editor from 2003 to 2007 for IEEE Geoscience and Remote Sensing Letters. Since 2005 he has been associate editor for the IEEE Transactions on Geoscience and Remote Sensing.

Through his volunteer work and participation in IEEE events, he says, he has connected with other members in different fields including aerospace technology, geoscience, and remote sensing and collaborated with them on projects.

He received the IEEE Dennis J. Picard Medal for Radar Technologies and Applications on 5 May during the IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony, held in Atlanta. The event is available on IEEE.tv.

Reference: https://ift.tt/8SqtTHL

TSMC says some of its data was swept up in a hack on a hardware supplier


Chipmaker TSMC said on Friday that one of its hardware suppliers experienced a “security incident” that allowed the attackers to obtain configurations and settings for some of the servers the company uses in its corporate network. The disclosure came a day after the LockBit ransomware crime syndicate listed TSMC on its extortion site and threatened to publish the data unless it received a payment of $70 million.

The hardware supplier, Kinmax Technology, confirmed that one of its test environments had been attacked by an external group, which was then able to retrieve configuration files and other parameter information. The company said it learned of the breach on Thursday and immediately shut down the compromised systems and notified the affected customer. “Since the above information has nothing to do with the actual application of the customer, it is only the basic setting at the time of shipment,” Kinmax officials wrote. “At present, no damage has been caused to the customer, and the customer has not been hacked by it.”

Reference: https://ift.tt/6vsUzRJ

Kicking It With Robots


In July of 2010, I traveled to Singapore to take care of my then 6-year-old son Henry while his mother attended an academic conference. But I was really there for the robots.

IEEE Spectrum’s digital product manager, Erico Guizzo, was our robotics editor at the time. We had just combined forces with robot blogger par excellence and now Spectrum senior editor Evan “BotJunkie” Ackerman to supercharge our first and most successful blog, Automaton. When I told Guizzo I was going to be in Singapore, he told me that RoboCup, an international robot soccer competition, was going on at the same time. So of course we wrangled a press pass for me and my plus one. I brought Henry and a video camera to capture the bustling bots and their handlers.

Guizzo told me that videos of robots flailing at balls would do boffo Web traffic, so I was as excited as my first grader (okay, more excited) to be in a convention center filled with robots and teams of engineers toiling away on the sidelines to make adjustments and repairs and talk with each other and us about their creations. Even better than the large humanoid robots lurching around like zombies and the smaller, wheeled bots scurrying to and fro were the engineers who tended to them. They exuded the kind of joy that comes with working together to build cool stuff, and it was infectious.

On page 40 of this issue, Peter Stone—past president of the RoboCup Federation, professor in the computer science department of the University of Texas at Austin, and executive director of Sony AI America—captures some of that unbridled enthusiasm and gives us the history of the event. To go along with his story, we include action shots taken at various RoboCups throughout the 25 years of the event. You can check out this year’s RoboCup competitions going on 6–9 July at the University of Bordeaux, in Nouvelle-Aquitaine, France.
Earlier in 2010, the same year as my first RoboCup, Apple introduced what was in part pitched as the future of magazines: the iPad. Guizzo and photography director Randi Klett instantly grokked the possibilities of the format and the new sort of tactile interactivity (ah, the swipe!) to showcase the coolest robots they could find. Channeling the same spirit I experienced in Singapore, Guizzo, Klett, and app-maker Tendigi launched the Robots app in 2012. It was an instant hit, with more than 1.3 million downloads.

To reach new audiences on other devices beyond the iOS platform, we ported Robots from appworld to the Web. With the help of founding sponsors—including the IEEE Robotics and Automation Society and Walt Disney Imagineering—and the support of the IEEE Foundation, the Robots site launched in 2018 and quickly found a following among STEM educators, students, roboticists, and the general public.

By 2022 it was clear that the site, whose basic design had not changed in years, needed a reboot. We gave it a new name and URL to make it easy for more people to find: RobotsGuide.com. And with the help of Pentagram, the design consultancy that reimagined Spectrum’s print magazine and website in 2021, in collaboration with Standard, a design and technology studio, we built the site as a modern, fully responsive Web app.

Featuring almost 250 of the world’s most advanced and influential robots, hundreds of photos and videos, detailed specs, 360-degree interactives, games, user ratings, educational content, and robot news from around the world, the Robots Guide helps everyone learn more about robotics. So grab your phone, tablet, or computer and delve into the wondrous world of robots. It will be time—likely a lot of it—well spent.

Reference: https://ift.tt/LgwuesE

Video Friday: Training ARTEMIS


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.

Enjoy today’s videos!

Humanoid robot ARTEMIS training for RoboCup. Fully autonomous soccer playing outdoors. [ RoMeLa ]

Imperial College London and Empa researchers have built a drone that can withstand high enough temperatures to enter burning buildings. The prototype drone, called FireDrone, could be sent into burning buildings or woodland to assess hazards and provide crucial first-hand data from danger zones. The data would then be sent to first responders to help inform their emergency response. [ Imperial ]

We integrated Stable Diffusion to give Ameca the power to imagine drawings. One of the big challenges here was converting the image to vectors, (lines), that Ameca could draw. The focus was on making fast sketches that are fun to watch. Ameca always signs their artwork. I just don’t understand art. [ Engineered Arts ]

Oregon State Professor Heather Knight and Agility’s Head of Customer Experience Bambi Brewer get together to talk about human-robot interaction. [ Agility ]

Quadrupeds are great, but they have way more degrees of freedom than it’s comfortable to control. Maybe motion capture can fix that? [ Leeds ]

The only thing I know for sure about this video is that Skydio has no idea what’s going on here. [ Ugo ]

We are very sad to share the passing of Joanne Pransky. Robin Murphy shares a retrospective. [ Robotics Through Science Fiction ]

ICRA 2023 was kind of bonkers. This video doesn’t do it justice, of course, but there were a staggering 6,000 people in attendance. And next year is going to be even bigger! [ ICRA 2023 ]

India Flying Labs recently engaged more than 350 girls and boys in a two-day STEM workshop with locally made drones. [ WeRobotics ]

This paper proposes the application of a very low weight (3.2 kg) anthropomorphic dual-arm system capable of rolling along linear infrastructures such as power lines to perform dexterous and bimanual manipulation tasks like the installation of clip-type bird flight diverters or conduct contact-based inspection operations on pipelines to detect corrosion or leaks. [ GRVC ]

In collaboration with Trimble, we are announcing a proof-of-concept to enable robots and machines to follow humans and other machines in industrial applications. Together, we have integrated a patent-pending PFF follow™ smart-following module prototype developed by Piaggio Fast Forward onto a Boston Dynamics’ Spot® robot platform controlled by Trimble’s advanced positioning technology. [ PFF ]

X20 tunnel inspection quadruped robot can achieve accurate detection and real-time uploading of faults such as cable surface discharge, corona discharge, internal discharge, and temperature abnormality. It can also adapt to inspection tasks in rugged terrain. [ DeepRobotics ]

If you’re wondering why the heck anyone would try to build a robot arm out of stained glass, well, that’s an excellent thing to wonder. [ Simone Giertz ]

Reference: https://ift.tt/Ov3gcGP

Red Hat’s new source code policy and the intense pushback, explained


When CentOS announced in 2020 that it was shutting down its traditional "rebuild" of Red Hat Enterprise Linux (RHEL) to focus on its development build, Stream, CentOS suggested the strategy "removes confusion." Red Hat, which largely controlled CentOS by then, considered it "a natural, inevitable next step."

Last week, the IBM-owned Red Hat continued "furthering the evolution of CentOS Stream" by announcing that CentOS Stream would be "the sole repository for public RHEL-related source code releases," with RHEL's core code otherwise restricted to a customer portal. (RHEL access is free for individual developers and up to 16 servers, but that's largely not what is at issue here.) Red Hat's post was a rich example of burying the lede and a decisive moment for many who follow the tricky balance of Red Hat's open-source commitments and service contract business. Here's what followed.

Reference: https://ift.tt/mehdiyP

This 1920 Chess Automaton Was Wired to Win


The Mechanical Turk was a fraud. The chess-playing automaton, dressed in a turban and elaborate Ottoman robes, toured Europe in the closing decades of the 18th century accompanied by its inventor Wolfgang von Kempelen. The Turk wowed Austrian empress Maria Theresa, French emperor Napoleon Bonaparte, and Prussian king Frederick the Great as it defeated some of the great chess players of its day. In reality, though, the automaton was controlled by a human concealed within its cabinetry.

What was the first chess-playing automaton?

Torres Quevedo made his mark in a number of fields, including funiculars, dirigibles, and remote controls, before turning to “thinking” machines. (Credit: Alamy)

A century and a half after von Kempelen’s charade, Spanish engineer Leonardo Torres Quevedo debuted El Ajedrecista (The Chessplayer), a true chess-playing automaton. The machine played a modified endgame against a human opponent. It featured a vertical chessboard with pegs for the chess pieces; a mechanical arm moved the pegs. Torres Quevedo invented his electromechanical device in 1912 and publicly debuted it at the University of Paris two years later. Although clunky in appearance, the experimental model still managed to create a stir worldwide, including a brief write-up in 1915 in Scientific American.

In El Ajedrecista’s endgame, the machine (white) played a king and a rook against a human’s lone king (black). The program required a fixed starting position for the machine’s king and rook, but the opposing king could be placed on any square in the first six ranks (the horizontal rows, that is) that wouldn’t put the king in danger. The program assumed that the two kings would be on opposite sides of the rank controlled by the rook. Torres Quevedo’s algorithm allowed for 63 moves without capturing the king, well beyond the usual 50-move rule that results in a draw. With these restrictions in place, El Ajedrecista was guaranteed a win.
In 1920, Torres Quevedo upgraded the appearance and mechanics of his automaton [pictured at top], although not its programming. The new version moved its pieces by way of electromagnets concealed below an ordinary chessboard. A gramophone recording announced jaque al rey (Spanish for “check”) or mate (checkmate). If the human attempted an illegal move, a lightbulb gave a warning signal; after three illegal attempts, the game would shut down.

Building a machine that thinks

The first version of the chess automaton, from 1912, featured a vertical chessboard and a mechanical arm to move the pieces. (Credit: Leonardo Torres Quevedo Museum/Polytechnic University of Madrid)

Unlike Wolfgang von Kempelen, Torres Quevedo did not create his chess-playing automaton for the entertainment of the elite or to make money as a showman. The Spanish engineer was interested in building a machine that “thinks”—or at least makes choices from a relatively complex set of relational possibilities. Torres Quevedo wanted to reframe what we mean by thinking. As the 1915 Scientific American article about the chess automaton notes, “There is, of course, no claim that it will think or accomplish things where thought is necessary, but its inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are popularly classed with thought.”

In 1914, Torres Quevedo laid out his ideas in the article “Ensayos sobre automática. Su definición. Extensión teórica de sus aplicaciones” (“Essays on Automatics: Its Definition. Theoretical Extent of Its Applications”). In the article, he updated Charles Babbage’s ideas for the analytical engine with the currency of the day: electricity. He proposed machines doing arithmetic using switching circuits and relays, as well as automated machines equipped with sensors that would be able to adjust to their surroundings and carry out tasks.
Automatons with feelings were the future, in Torres Quevedo’s view. How far could human collaboration with machines go? Torres Quevedo built his chess player to find out, as he explained in his 1917 book Mis inventos y otras páginas de vulgarización (My Inventions and Other Popular Writings). By entrusting machines with tasks previously reserved for human intelligence, he believed that he was freeing humans from a type of servitude or bondage. He was also redefining what was categorized as thought.

Claude Shannon, the information-theory pioneer, later picked up this theme in a 1950 Scientific American article, “A Chess-Playing Machine,” on whether electronic computers could be said to think. From a behavioral perspective, Shannon argued, a chess-playing computer mimics the thinking process. On the other hand, the machine does only what it has been programmed to do, clearly not thinking outside its set parameters. Torres Quevedo hoped his chess player would shed some light on the matter, but I think he just opened a Pandora’s box of questions.

Why isn’t Leonardo Torres Quevedo known outside Spain?

Despite Torres Quevedo’s clear position in the early history of computing—picking up from Babbage and laying a foundation for artificial intelligence—his name has often been omitted from narratives of the development of the field (at least outside of Spain), much to the dismay of the historians and engineers familiar with his work. That’s not to say he wasn’t known and respected in his own time. Torres Quevedo was elected a member of the Spanish Royal Academy of Sciences in 1901 and became an associate member of the French Academy of Sciences in 1927. He was also a member of the Spanish Society of Physics and Chemists and the Spanish Royal Academy of Language and an honorary member of the Geneva Society of Physics and Natural History. Plus, El Ajedrecista has always had a fan base among chess enthusiasts.
Even after Torres Quevedo’s death in 1936, the machine continued to garner attention among the cybernetic set, such as when it defeated Norbert Wiener at an influential conference in Paris in 1951. (To be fair, it defeated everyone, and Wiener was known to be a terrible player.)

One reason Torres Quevedo’s efforts in computing aren’t more widely known might be because the experiments came later in his life, after a very successful career in other engineering fields. In a short biography for Proceedings of the IEEE, Antonio Pérez Yuste and Magdalena Salazar Palma outlined three areas that Torres Quevedo contributed to before his work on the automatons.

Torres Quevedo’s design for the Whirlpool Aero Car, which offers a thrilling ride over the Niagara River, debuted in 1916. (Credit: Wolfgang Kaehler/LightRocket/Getty Images)

First came his work, beginning in the 1880s, on funiculars, the most famous of which is the Whirlpool Aero Car. The cable car is suspended over a dramatic gorge on the Niagara River on six interlocking steel cables, connecting two points along the shore half a kilometer apart. It is still in operation today.

His second area of expertise was aeronautics, in which he held patents on a semirigid frame system for dirigible balloons based on an internal frame of flexible cables.

And finally, he invented the Telekine, an early remote control device, which he developed as a way to safely test his airships without risking human life. He started by controlling a simple tricycle using a wireless telegraph transmitter. He then successfully used his Telekine to control boats in the Bilbao estuary. But he abandoned these efforts after the Spanish government denied his request for funding. The Telekine was marked with an IEEE Milestone in 2007.
If you’d like to explore Torres Quevedo’s various inventions, including the second chess-playing automaton, consider visiting the Museo Torres Quevedo, located in the School of Civil Engineering at the Polytechnic University of Madrid. The museum has also developed online exhibits in both Spanish and English.

A more cynical view of why Torres Quevedo’s computing prowess is not widely known may be that he saw no need to commercialize his chess player. Nick Montfort, a professor of digital media at MIT, argues in his book Twisty Little Passages (MIT Press, 2005) that El Ajedrecista was the first computer game, although he concedes that people might not recognize it as such because it predated general-purpose digital computing by decades. Of course, for Torres Quevedo, the chess player existed as a physical manifestation of his ideas and techniques. And no matter how visionary he may have been, he did not foresee the multibillion-dollar computer gaming industry.

The upshot is that, for decades, the English-speaking world mostly overlooked Torres Quevedo, and his work had little direct effect on the development of the modern computer. We are left to imagine an alternate history of how things might have unfolded if his work had been considered more central. Fortunately, a number of scholars are working to tell a more international, and more complete, history of computing. Leonardo Torres Quevedo’s is a name worth inserting back into the historical narrative.

References

I first learned about El Ajedrecista while reading the article “Leonardo Torres Quevedo: Pioneer of Computing, Automatics, and Artificial Intelligence” by Francisco González de Posada, Francisco A. González Redondo, and Alfonso Hernando González (IEEE Annals of the History of Computing, July-September 2021).
In their introduction, the authors note the minimal English-language scholarship on Torres Quevedo, with the notable exception of Brian Randell’s article “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush” (IEEE Annals of the History of Computing, October-December 1982). Although I read Randell’s article after I had drafted my own, I began my research on the chess-playing automaton with the Museo Torres Quevedo’s excellent online exhibit. I then consulted contemporary accounts of the device, such as “Electric Automaton” (Scientific American, 16 May 1914) and “Torres and His Remarkable Automatic Devices” (Scientific American Supplement No. 2079, 6 November 1915). My reading comprehension of Spanish is not what it should be for true academic scholarship in the field, but I tracked down several of Torres Quevedo’s original books and articles and muddled through translating specific passages to confirm claims by other secondary sources. There is clearly an opportunity for someone with better language skills than mine to do justice to this pioneer in computer history.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the July 2023 print issue as “Computer Chess, Circa 1920.”

Reference: https://ift.tt/8V15ODf

Torrent of image-based phishing emails is harder to detect and more convincing


Phishing mongers have released a torrent of image-based junk emails that embed QR codes into their bodies to successfully bypass security protections and provide a level of customization to more easily fool recipients, researchers said. In many cases, the emails come from a compromised email address inside the organization the recipient works in, a tactic that provides a false sense of authenticity, researchers from security firm Inky said.

The emails Inky detected instruct the employee to resolve security issues such as a missing two-factor authentication enrollment or to change a password and warn of repercussions that may occur if the recipient fails to follow through. Those who take the bait and click on the QR code are led to a site masquerading as a legitimate one used by the company, but the site captures passwords and sends them to the attackers. Inky described the campaign’s approach as “spray and pray” because the threat actors behind it send the emails to as many people as possible to generate results.

Reference: https://ift.tt/SsUiV2b

Thursday, June 29, 2023

Welcome to Fusion City USA


The future of carbon-free energy smells like teriyaki and sounds like a low-flying 737. A sleepy strip mall beside Boeing’s sprawling campus in Everett, WA, isn’t necessarily where you’d expect to find technology promising to harness the power of the sun, release humanity from the grip of fossil fuels, and unlock an estimated US $40 trillion market. But here, and in an even more anonymous office park nearby, startup Zap Energy is trialing a prototype reactor that is already producing high-energy neutrons from nuclear fusion—if not yet enough to send power back into the grid. The unglamorous location is no accident, says Derek Sutherland, Zap’s senior research scientist. “If you squint hard enough, building a fusion system is not that different from building an airplane,” he tells Spectrum on a visit in June. “It requires a little bit of retooling and retraining but you can transfer a lot of those skills.” Zap isn’t the only fusion company fishing in aviation’s talent pool. Less than two miles away, Helion Energy has its own facility, purchased from a Boeing contractor and housing its own operational fusion prototype built in part by aerospace veterans. The two startups represent a unique concentration of fusion expertise and funding, and epitomize a new confidence that fusion power is now a solvable engineering challenge rather than an eternally elusive scientific puzzle. Zap Energy has already conducted tens of thousands of fusion pulse tests in its early prototype reactor. (credit: Mark Harris) Zap’s Fuze-Q prototype sits in an odor-free air-conditioned room and makes only a barely audible tick when it operates. Since going active last summer, the office-desk-size device has housed thousands of fusion reactions, each generating reams of data as Zap gradually ramps it up towards the temperatures, plasma densities, and reaction times necessary to generate more power than it consumes.
The entire fusion process is about as dramatic as flipping a light switch, and Sutherland walks us right up to the small reactor shortly after one such operation. This isn’t some scaled-down experimental toy. Zap’s commercial fusion reactor, intended to reliably produce enough power for 30,000 homes—day and night, year-round—will be exactly the same size as the prototype, with the addition of a liquid-metal “blanket,” heat exchangers, and steam turbines to turn its energetic neutrons into electricity. The core reactor will be shorter than a Mini Cooper. If this doesn’t match your mental image of fusion power, you’re probably picturing the city-block-size ITER megaproject currently taking shape in southern France. By the time that long-delayed publicly-funded reactor goes live, possibly not until 2029, it will be 30 meters tall and weigh more than 18,000 Mini Coopers. It will also have cost China, the European Union, the United States and other partners over US $22 billion. “The two main drivers of cost are complexity and size,” says Sutherland. “Zap excels at reducing both of those as much as possible because the system has no cryogenics, no superconducting coils, no auxiliary heating, and no magnets.” Zap Energy is developing an approach to fusion called a sheared-flow-stabilized Z pinch, which produces fusion reactions in small bursts rather than a continuous stream. (credit: Zap Energy) Zap and Helion are leading the charge for what is often called “alternative fusion”—the belief that gargantuan systems are neither necessary nor desirable in the search for practical fusion power. To understand why, it’s helpful to have a quick refresher on nuclear physics. Fusing together ions of some light elements in a gaseous plasma can release a bunch of energy if—and it’s a big if—you can overcome their mutual electrostatic repulsion. That means increasing the ions’ kinetic energy until they’re moving fast enough (i.e. they’re hot enough) to collide and fuse.
ITER’s reactor is a traditional tokamak design that aims to ignite a burning plasma ten times hotter than the Sun, in a giant hollow donut 20 meters wide. The larger the donut, the more power is produced; thus ITER’s gargantuan size. But the faster and hotter the ions, the harder they are to confine. Zap compares stabilizing plasma to holding jelly with rubber bands, and keeping ITER’s fusion reaction going will require an immense battery of cryogenically-cooled superconducting magnets. Zap and Helion’s bet is that instead of trying to coax a continuous fusion reaction to life, it will be easier to string together short pulses of fusion activity. Zap’s pulses start with a puff of deuterium (an isotope of hydrogen) plasma at one end of a meter-long vacuum tube, at the center of which is an electrode. The plasma is accelerated down the tube until it reaches the tapered end of the electrode, at which point magnetic forces pinch it into a tight column, with different layers flowing at different speeds. This sheared flow keeps the plasma stable and generating high-energy neutrons until it collapses. At the moment, that happens after about ten microseconds. In a commercial device, it will need to last closer to a hundred, and the fuel will include a short-lived, expensive, and hard-to-find isotope of hydrogen called tritium. “There will be a few more devices between Fuze-Q and a pilot plant,” says Sutherland. “We think five to 10 years is realistic. But we also think that maybe it doesn’t do the public any favors to hear us promise you a plant in five years.” Helion’s prototype reactor requires four distinct stages for fusion that ultimately lead to producing electricity through induced current changes. (credit: Helion Energy) Just up the road in Everett, Helion has gone one step further than promising a pilot reactor. It has already sold 50 megawatts of power to Microsoft, for delivery in 2028.
This confidence is reflected in Helion’s modern, securely-gated campus, home to three cavernous warehouses and an auxiliary site crawling with earth movers. Many of Helion’s 160 staff work in its largest warehouse, at 150,000 square feet, where components for its seventh and final prototype Polaris are now being assembled. My visit starts in Helion’s capacitor “kitchen,” so-called for the various processes involved in coating, testing, and baking the thousands of oil-filled capacitors Polaris will need. Huge banks of capacitors are the only way to quickly deliver the massive pulses of energy necessary to kick-start both Zap’s and Helion’s fusion reactions. Zap’s capacitor bank will store 1.5 megajoules of energy—about a third of the energy released from a kilo of TNT. Helion’s will store a staggering 50 MJ, requiring 150 shipping containers full of capacitors, synchronized with semiconductor switches to discharge in less than a millisecond. A fusion technician welds an in-house manufactured capacitor for Helion’s Polaris generator. The latest prototype will need thousands of completed capacitors like this. (credit: left, Helion Energy; right, Mark Harris) When complete, Helion’s reactor will be bigger than Zap’s, about two meters tall and 12 meters long. Its initial jolt powers a series of electromagnets at either end of Polaris that form and accelerate clouds of plasma towards their common center. It is at the reactor’s narrowest point—subject to the strongest magnetic field—that fusion briefly occurs. Like Zap’s design, Helion’s commercial reactor is intended to pulse about once per second and generate 50 MW. But there are some big differences. For a start, Helion will fuse deuterium with helium-3, an ultra-rare and extremely expensive isotope of helium, in a reaction that produces relatively few neutrons. That isn’t a problem for Helion because it doesn’t need neutrons to boil water but instead produces electricity directly from the fusion reaction.
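Those capacitor-bank figures are easy to sanity-check. A back-of-envelope sketch, assuming the standard convention of 4.184 megajoules per kilogram of TNT and reading "less than a millisecond" as exactly one millisecond to get a lower bound on power:

```python
# Back-of-envelope check on the capacitor-bank figures quoted above.
TNT_J_PER_KG = 4.184e6   # standard convention: 1 kg of TNT = 4.184 MJ

zap_bank_j = 1.5e6       # Zap's bank: 1.5 megajoules
helion_bank_j = 50e6     # Helion's bank: 50 megajoules

# 1.5 MJ really is roughly a third of the energy in a kilo of TNT.
tnt_fraction = zap_bank_j / TNT_J_PER_KG         # ~0.36

# Discharging 50 MJ in under a millisecond implies an average power
# above 50 gigawatts -- hence the semiconductor-switched banks.
min_avg_power_w = helion_bank_j / 1e-3           # 5e10 W, i.e. 50 GW
```

Fifty gigawatts is tens of times the output of a large power station, which is why the discharge has to come from capacitors rather than the grid.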
In Polaris, each fusion pulse should cause the plasma to expand, increasing its magnetic flux and inducing electric current in the magnetic coils that ultimately flows back to the capacitors. “The National Ignition Facility experiment last year proved key science in igniting a plasma for the first time,” says David Kirtley, Helion’s founder and CEO. “But in the process they threw away 99.9 percent of the input energy. We have proven our system can recover 95 percent, so we only lose about 5 percent of the energy that we put into the fuel. That means we have to do that much less fusion to reach net gain.” Relatively few fusion start-ups are planning to use helium-3 as a fuel, which is so scarce that some experts have even suggested mining it on the moon. Polaris, however, should be able to produce its own helium-3 from deuterium, and Helion claims that it has already generated (although not separated) a small amount. Workers construct banks of capacitors at Helion Energy. The startup’s prototype reactor will need enough capacitors to store 50 megajoules of energy. (credit: Helion Energy) With just 100 capacitors and one of the formation magnet coils built so far, Kirtley’s plan is to assemble Polaris by January 2024. Helion will then gradually increase power and compression through the year. “If all the scaling holds and everything works the way we expect, we should be able to recover enough electromagnetic energy from the fusion system to recharge those banks plus a little bit extra,” he says. “And that little bit extra is net electricity.” But even Polaris is unlikely to produce any leftover power once the energy demands of cooling and switching systems are factored in. That will fall to Polaris’s successor, a pilot fusion reactor aiming to fulfill Microsoft’s power contract sometime in 2028. While the location of that plant has yet to be determined, it’s likely to remain in the state. “Washington in particular has been very friendly to fusion,” says Kirtley.
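Kirtley's 95 percent figure is doing a lot of work in that quote. A hedged sketch of the arithmetic (reading "threw away 99.9 percent" as a recovery fraction of 0.1 percent, which is a simplification of how NIF's laser chain actually loses energy):

```python
def fusion_needed_for_breakeven(recovery_fraction):
    """Fusion energy a pulse must produce, as a fraction of the input
    energy, to break even when `recovery_fraction` of the unspent
    input is recovered back into the capacitor banks."""
    return 1.0 - recovery_fraction

nif_style = fusion_needed_for_breakeven(0.001)    # recover 0.1% -> need ~99.9% back as fusion
helion_claim = fusion_needed_for_breakeven(0.95)  # recover 95% -> need only 5% back as fusion

# Roughly a 20x smaller fusion yield per unit of input energy
# is needed to reach net gain under the recovery claim.
advantage = nif_style / helion_claim
```

This ignores cooling and switching overheads, which, as the article notes, are exactly why Polaris itself is unlikely to show leftover power.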
“You have the University of Washington that’s done fusion since the 1970s, you have the industrial expertise and a huge aerospace industry to draw on, and at the government level, they’ve been really thoughtful about new technologies.” That matters because fusion, with no risk of runaway chain reactions and producing vastly less radioactive waste, is being regulated under existing rules by individual states, rather than by the federal Nuclear Regulatory Commission. “We’re in a period of transition from science towards engineering, but we still have plasma physicists on staff and we will for quite some time,” says Zap’s Sutherland. “We’re trying to decarbonize the energy base load for the entire planet. If Zap works, it will change the world.” Reference: https://ift.tt/BWZOujq

Op-ed: Why the great #TwitterMigration didn't quite pan out


I've been using fediverse stuff (Mastodon and, most recently, Calckey—I'm just going to use "Mastodon" as shorthand here; purists can bite me) for over a year now and have been doing so full time for about six months, following Elon Musk's purchase of Twitter (since on principle, I decline to give Elon Musk money or attention). This latter part coincided with the "November 2022 influx," when lots of new people joined Mastodon for similar reasons. A lot of that influx has not stuck around. Everyone is very aware at this point that active user numbers of Mastodon have dropped off a cliff. I have evidence of this. I recently shut down my Mastodon instance that I started in November, mastodon.bloonface.com, and (as is proper) it sent out about 700,000 kill messages to inform the other instances it had federated with that it was going offline for good and to delete all record of it from their databases. Around 25 percent of these were returned undelivered because the instances had simply dropped offline. These are people and organizations who were engaged with Mastodon and the fediverse to the point of investing real time and resources into it but simply dropped out without a trace sometime between November 2022 and now. I know multiple people who tried it and then gave up due to a lack of engagement with what they were posting, a lack of people to follow, an inability to deal with the platform's technical foibles, or, worse, because they found the experience actively unpleasant. Something has gone badly wrong. There are some good reasons for this that really point to both shortcomings in the whole idea and also how Mastodon is and was sold to potential new users, some of which might be uncomfortable for existing Mastodon users to hear.
There are some conclusions to draw from it, some of which might also be uncomfortable, but some of which might actually be seen as reassuring to those who quite liked the place as it was pre-November and would prefer it to go back to that. Reference: https://ift.tt/iaLzAU9

This 1920 Chess Automaton Was Wired to Win


The Mechanical Turk was a fraud. The chess-playing automaton, dressed in a turban and elaborate Ottoman robes, toured Europe in the closing decades of the 18th century accompanied by its inventor Wolfgang von Kempelen. The Turk wowed Austrian empress Maria Theresa, French emperor Napoleon Bonaparte, and Prussian king Frederick the Great as it defeated some of the great chess players of its day. In reality, though, the automaton was controlled by a human concealed within its cabinetry. What was the first chess-playing automaton? Torres Quevedo made his mark in a number of fields, including funiculars, dirigibles, and remote controls, before turning to “thinking” machines. (credit: Alamy) A century and a half after von Kempelen’s charade, Spanish engineer Leonardo Torres Quevedo debuted El Ajedrecista (The Chessplayer), a true chess-playing automaton. The machine played a modified endgame against a human opponent. It featured a vertical chessboard with pegs for the chess pieces; a mechanical arm moved the pegs. Torres Quevedo invented his electromechanical device in 1912 and publicly debuted it at the University of Paris two years later. Although clunky in appearance, the experimental model still managed to create a stir worldwide, including a brief write-up in 1915 in Scientific American. In El Ajedrecista’s endgame, the machine (white) played a king and a rook against a human’s lone king (black). The program required a fixed starting position for the machine’s king and rook, but the opposing king could be placed on any square in the first six ranks (the horizontal rows, that is) that wouldn’t put the king in danger. The program assumed that the two kings would be on opposite sides of the rank controlled by the rook. Torres Quevedo’s algorithm allowed for 63 moves without capturing the king, well beyond the usual 50-move rule that results in a draw. With these restrictions in place, El Ajedrecista was guaranteed a win.
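Torres Quevedo's actual move rules, documented in his papers, formed a fixed table keyed to where the black king stood relative to the rook. The sketch below is not that table, just a toy rule-based policy in the same spirit, showing how this endgame reduces to a handful of mechanical if/then rules (the coordinates, function name, and specific rules are all illustrative):

```python
def choose_move(wk, wr, bk):
    """Toy rule-based policy for White (king wk, rook wr) against a lone
    Black king bk. Squares are (file, rank) pairs with 0-7 coordinates.
    Returns ("K" or "R", destination_square)."""
    rook_rank = wr[1]
    # Rule 1: if the black king attacks the rook, slide the rook to the
    # far side of the board, staying on its confining rank.
    if max(abs(bk[0] - wr[0]), abs(bk[1] - wr[1])) == 1:
        far_file = 0 if wr[0] > 3 else 7
        return ("R", (far_file, rook_rank))
    # Rule 2: with the kings in direct opposition, push the rook one rank
    # toward the black king to shrink its box (or give mate at the edge).
    if wk[0] == bk[0] and abs(wk[1] - bk[1]) == 2:
        step = 1 if bk[1] > rook_rank else -1
        return ("R", (wr[0], rook_rank + step))
    # Rule 3: otherwise, march the white king toward the black king's file.
    step = 1 if bk[0] > wk[0] else (-1 if bk[0] < wk[0] else 0)
    return ("K", (wk[0] + step, wk[1]))

# With the black king driven to the last rank and the kings in opposition,
# rule 2 lifts the rook to the back rank for checkmate.
mating_move = choose_move((4, 5), (0, 6), (4, 7))
```

A real implementation would also need legality checks (the rook slide in rule 1 must not pass through the black king, for instance); the point is only that the whole endgame reduces to rules simple enough for 1912-era relays and electromagnets.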
In 1920, Torres Quevedo upgraded the appearance and mechanics of his automaton [pictured at top], although not its programming. The new version moved its pieces by way of electromagnets concealed below an ordinary chessboard. A gramophone recording announced jaque al rey (Spanish for “check”) or mate (checkmate). If the human attempted an illegal move, a lightbulb gave a warning signal; after three illegal attempts, the game would shut down. Building a machine that thinks The first version of the chess automaton, from 1912, featured a vertical chessboard and a mechanical arm to move the pieces. (credit: Leonardo Torres Quevedo Museum/Polytechnic University of Madrid) Unlike Wolfgang von Kempelen, Torres Quevedo did not create his chess-playing automaton for the entertainment of the elite or to make money as a showman. The Spanish engineer was interested in building a machine that “thinks”—or at least makes choices from a relatively complex set of relational possibilities. Torres Quevedo wanted to reframe what we mean by thinking. As the 1915 Scientific American article about the chess automaton notes, “There is, of course, no claim that it will think or accomplish things where thought is necessary, but its inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are popularly classed with thought.” In 1914, Torres Quevedo laid out his ideas in the article “Ensayos sobre automática. Su definición. Extensión teórica de sus aplicaciones” (“Essays on Automatics. Its Definition. Theoretical Extent of Its Applications”). In the article, he updated Charles Babbage’s ideas for the analytical engine with the currency of the day: electricity. He proposed machines doing arithmetic using switching circuits and relays, as well as automated machines equipped with sensors that would be able to adjust to their surroundings and carry out tasks.
Automatons with feelings were the future, in Torres Quevedo’s view. How far could human collaboration with machines go? Torres Quevedo built his chess player to find out, as he explained in his 1917 book Mis inventos y otras páginas de vulgarización (My inventions and other popular writings). By entrusting machines with tasks previously reserved for human intelligence, he believed that he was freeing humans from a type of servitude or bondage. He was also redefining what was categorized as thought. Claude Shannon, the information-theory pioneer, later picked up this theme in a 1950 article, “A Chess-Playing Machine,” in Scientific American on whether electronic computers could be said to think. From a behavioral perspective, Shannon argued, a chess-playing computer mimics the thinking process. On the other hand, the machine does only what it has been programmed to do, clearly not thinking outside its set parameters. Torres Quevedo hoped his chess player would shed some light on the matter, but I think he just opened a Pandora’s box of questions. Why isn’t Leonardo Torres Quevedo known outside Spain? Despite Torres Quevedo’s clear position in the early history of computing—picking up from Babbage and laying a foundation for artificial intelligence —his name has often been omitted from narratives of the development of the field (at least outside of Spain), much to the dismay of the historians and engineers familiar with his work. That’s not to say he wasn’t known and respected in his own time. Torres Quevedo was elected a member of the Spanish Royal Academy of Sciences in 1901 and became an associate member of the French Academy of Sciences in 1927. He was also a member of the Spanish Society of Physics and Chemists and the Spanish Royal Academy of Language and an honorary member of the Geneva Society of Physics and Natural History. Plus El Ajedrecista has always had a fan base among chess enthusiasts. 
Even after Torres Quevedo’s death in 1936, the machine continued to garner attention among the cybernetic set, such as when it defeated Norbert Wiener at an influential conference in Paris in 1951. (To be fair, it defeated everyone, and Wiener was known to be a terrible player.) One reason Torres Quevedo’s efforts in computing aren’t more widely known might be that the experiments came later in his life, after a very successful career in other engineering fields. In a short biography for Proceedings of the IEEE, Antonio Pérez Yuste and Magdalena Salazar Palma outlined three areas that Torres Quevedo contributed to before his work on the automatons. Torres Quevedo’s design for the Whirlpool Aero Car, which offers a thrilling ride over the Niagara River, debuted in 1916. (credit: Wolfgang Kaehler/LightRocket/Getty Images) First came his work, beginning in the 1880s, on funiculars, the most famous of which is the Whirlpool Aero Car. The cable car is suspended over a dramatic gorge on the Niagara River on six interlocking steel cables, connecting two points along the shore half a kilometer apart. It is still in operation today. His second area of expertise was aeronautics, in which he held patents on a semirigid frame system for dirigible balloons based on an internal frame of flexible cables. And finally, he invented the Telekine, an early remote control device, which he developed as a way to safely test his airships without risking human life. He started by controlling a simple tricycle using a wireless telegraph transmitter. He then successfully used his Telekine to control boats in the Bilbao estuary. But he abandoned these efforts after the Spanish government denied his request for funding. The Telekine was marked with an IEEE Milestone in 2007.
If you’d like to explore Torres Quevedo’s various inventions, including the second chess-playing automaton, consider visiting the Museo Torres Quevedo, located in the School of Civil Engineering at the Polytechnic University of Madrid. The museum has also developed online exhibits in both Spanish and English. A more cynical view of why Torres Quevedo’s computer prowess is not widely known may be because he saw no need to commercialize his chess player. Nick Montfort, a professor of digital media at MIT, argues in his book Twisty Little Passages (MIT Press, 2005) that El Ajedrecista was the first computer game, although he concedes that people might not recognize it as such because it predated general-purpose digital computing by decades. Of course, for Torres Quevedo, the chess player existed as a physical manifestation of his ideas and techniques. And no matter how visionary he may have been, he did not foresee the multibillion-dollar computer gaming industry. The upshot is that, for decades, the English-speaking world mostly overlooked Torres Quevedo, and his work had little direct effect on the development of the modern computer. We are left to imagine an alternate history of how things might have unfolded if his work had been considered more central. Fortunately, a number of scholars are working to tell a more international, and more complete, history of computing. Leonardo Torres Quevedo’s is a name worth inserting back into the historical narrative. References I first learned about El Ajedrecista while reading the article “Leonardo Torres Quevedo: Pioneer of Computing, Automatics, and Artificial Intelligence” by Francisco González de Posada, Francisco A. González Redondo, and Alfonso Hernando González (IEEE Annals of the History of Computing, July-September 2021). 
In their introduction, the authors note the minimal English-language scholarship on Torres Quevedo, with the notable exception of Brian Randell’s article “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush” (IEEE Annals of the History of Computing, October-December 1982). Although I read Randell’s article after I had drafted my own, I began my research on the chess-playing automaton with the Museo Torres Quevedo’s excellent online exhibit. I then consulted contemporary accounts of the device, such as “Electric Automaton” (Scientific American, 16 May 1914) and “Torres and His Remarkable Automatic Devices” (Scientific American Supplement No. 2079, 6 November 1915). My reading comprehension of Spanish is not what it should be for true academic scholarship in the field, but I tracked down several of Torres Quevedo’s original books and articles and muddled through translating specific passages to confirm claims by other secondary sources. There is clearly an opportunity for someone with better language skills than I to do justice to this pioneer in computer history. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the July 2023 print issue as “Computer Chess, Circa 1920.” Reference: https://ift.tt/ldn7giF

Wednesday, June 28, 2023

Brave will soon control which sites can access your local network resources


The Brave browser will take action against websites that snoop on visitors by scanning their open Internet ports or accessing other network resources that can expose personal information. Starting in version 1.54, Brave will automatically block website port scanning, a practice that a surprisingly large number of sites were found engaging in a few years ago. According to this list compiled in 2021 by a researcher who goes by the handle G666g1e, 744 websites scanned visitors’ ports, most or all without providing notice or seeking permission in advance. eBay, Chick-fil-A, Best Buy, Kroger, and Macy's were among the offending websites. (credit: https://nullsweep.com/why-is-this-website-port-scanning-me/) Some sites use similar tactics in an attempt to fingerprint visitors so they can be re-identified each time they return, even if they delete browser cookies. By running scripts that access local resources on the visiting devices, the sites can detect unique patterns in a visiting browser. Sometimes there are benign reasons a site will access local resources, such as detecting insecurities or allowing developers to test their websites. Often, however, there are more abusive or malicious motives involved. Reference: https://ift.tt/r9EU4y8
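On the web, this kind of scan is typically done from JavaScript by timing requests to 127.0.0.1. The underlying probe is simple enough to sketch in a few lines of Python sockets (illustrative only; this is not Brave's code or the scripts those sites used):

```python
import socket

def probe_local_ports(ports, timeout=0.25):
    """Return the subset of local TCP ports that accept a connection.
    Which local services answer (dev servers, remote-support agents,
    and so on) is a signal that can help fingerprint a machine."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns an error code instead of raising;
            # 0 means the connection succeeded, i.e. the port is open.
            if sock.connect_ex(("127.0.0.1", port)) == 0:
                open_ports.append(port)
    return open_ports
```

A browser script can't open raw sockets, but it gets similar information by measuring how quickly fetch requests to local ports fail, which is the behavior Brave's new protections target.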

EV Interference Doesn't Have to Kill AM Radio


AM radio could very easily fit into the hallowed category of “perfected technology.” It’s been around for over a century, the physics and engineering are very well understood, and there have been no major technological advancements in its design for at least a couple decades. That’s perhaps why it’s understandable that so many car manufacturers are now opting to remove it from their EVs, citing poor performance caused by the electric drive system. After all, maybe it’s time to gracefully sunset a legacy technology that no longer serves much purpose. However, AM radio remains widely used in many parts of the world—including the United States—and still plays a significant role in both day-to-day life and emergency communications for hundreds of millions of people. “There is not a lot of measurement data regarding interference on the AM radio band from electric vehicles specifically.” —Zamir Ahmed, National Association of Broadcasters Despite the current clamor, AM radio phaseouts in EVs have been underway for a while now. BMW decided not to include the tech in the 2014 i3. But in recent years, the trend has accelerated. In March, Senator Markey of Massachusetts surveyed 20 car manufacturers and found that 8 of them—BMW, Ford, Mazda, Polestar, Rivian, Tesla, Volkswagen, and Volvo—had removed AM radios from their EVs. Ford had also intended to remove the radios from its gas cars in 2024, but recently reversed course on that decision. The problem stems not from the car’s battery itself, but from the current driving the motor. “The current system is the vehicle, a lot of them are pulse-width modulated,” says Ashruf El-Dinary, a senior vice president of digital platforms at Xperi. “So it has some high current going through, which creates inductance, which can transfer back into the antenna system.” This leads to unwanted noise in the entire AM band, especially in the lower portion (between 500 and 700 kilohertz), that comes across as a hum or a whine.
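El-Dinary's point about pulse-width-modulated drive currents can be made concrete: a hard-switched current waveform carries energy at every harmonic of its switching frequency, so many of those harmonics land inside the AM broadcast band. A small sketch (the 10-kilohertz inverter switching frequency is an assumed, illustrative figure, not a number from the article):

```python
def harmonics_in_band(switch_hz, band_hz=(530e3, 1700e3), n_max=1000):
    """Harmonic orders of a switching frequency that fall inside a band
    (default: the 530-1700 kHz US AM broadcast band)."""
    lo, hi = band_hz
    return [n for n in range(1, n_max + 1) if lo <= n * switch_hz <= hi]

# An inverter switching at 10 kHz puts its 53rd through 170th
# harmonics inside the AM band -- 118 potential interference lines.
am_hits = harmonics_in_band(10e3)
```

In practice the harmonic amplitudes fall off with order and depend on edge rates and shielding, which is why the audible symptom is broadband hum and whine rather than a few discrete tones.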
Part of the problem is that there’s also just not enough data to fully understand the problem. Zamir Ahmed, the vice president of communications for the National Association of Broadcasters (NAB), wrote in an email to IEEE Spectrum that “There is not a lot of measurement data regarding interference on the AM radio band from electric vehicles specifically. In general, some electronic devices create interference on discrete frequencies or clusters of frequencies, while interference from other devices may affect the entire band.” NAB launched the Depend on AM Radio campaign in April to bring more awareness to the issue. Why are carmakers phasing out AM radio? As it turns out, EV batteries and motors aren’t the only cause for concern for AM radios—they’re simply the most noticeable in a longer list of culprits. “There are some other noise effects as well,” says El-Dinary. “Even power windows or power mirrors in a standard car, if not designed correctly, or shielded correctly, that can affect analog reception.” Properly shielding an entire EV system is certainly a more complicated and expensive task than properly shielding a power window system. And the consequences are more noticeable if not done right. Therefore, most manufacturers have opted to remove AM radios entirely rather than go to the trouble of redesigning their vehicles further. In the U.S. alone, 47 million people still listen to AM radio. In rural areas, it’s often one of a few options—if not the only one. Volkswagen, for instance, has stated that incorporating the changes needed to shield the AM radio from the current will result in extra weight that drags down a vehicle’s range. One of the few manufacturers to tackle the problem, Stellantis—the parent of Chrysler and Jeep—is using shielded cables and physically moving the radio receivers farther from the motor in future cars. (Stellantis did not respond to requests for comment for more details about its mitigation strategies). 
Part of the reason that the AM radio die-off in EVs has been able to proceed under the hood until recently may be that it’s such an established technology. “If you’re talking about pure amplitude modulation, that has not changed in 100 years,” says El-Dinary. “The radio designs I think pretty much matured about 15 years ago, once they got into more of the [digital signal processing] arena, and demodulation or decoding was done at a software level.” Perhaps AM radio’s common conception as an old technology that’s run through its development cycle has led car manufacturers to write it off as not worth including, given that it doesn’t play nicely with the latest innovations. But that misses the fact that roughly 47 million Americans still listen to AM radio, and in rural areas, it’s often one of a few options—if not the only one. That’s not to say there aren’t some advancements still happening in the world of AM radio. One of Xperi’s products is HD Radio, which allows an AM station to simultaneously broadcast an analog radio transmission alongside a digital one carrying all of the data for a modern infotainment system: things like song titles, album covers, station logos, and more. El-Dinary, who has worked on HD Radio for 25 years, says he’s heard of some AM stations opting to turn off their analog transmissions entirely in favor of going fully digital. And that might be one way forward: “A lot of our testing has shown that digital broadcasting can survive that kind of noise environment,” he says. Even so, he stresses that HD Radio is not intended to be a complete replacement for AM radio, but more of a backward-compatible evolution. Spectrum allocation abhors a vacuum, and so the frequency bands will surely still find interested users for broadcasting—after all, it’s long been proven ideal for long-range transmissions.
Whether that’s by transitioning to something new like HD Radio or something else, or by being protected through legislation like the AM for Every Vehicle Act introduced by U.S. lawmakers, remains to be seen. If nothing else, car manufacturers may still want to consider finding a way to include AM radios even in their EVs, specifically because of the technology’s long-established tenure. During a hurricane, or a wildfire, or any other kind of emergency, simple, reliable technologies are still the best way to keep people informed. “Cell towers have a lot of vulnerabilities during a crisis like that, and even FM radio. TV holds a lot of promise but it’s not necessarily as portable,” says El-Dinary. “If people are trying to evacuate an area, or be mobile and find gas or food or whatever, they’re going to be in their car. And that’s where you want them to have the information.” Reference: https://ift.tt/2usowhA

Intel and Nvidia Square Off in GPT-3 Time Trials


For the first time, a large language model—a key driver of recent AI hype and hope—has been added to MLPerf, a set of neural network training benchmarks that have previously been called the Olympics of machine learning. Computers built around Nvidia’s H100 GPU and Intel’s Habana Gaudi2 chips were the first to be tested on how quickly they could perform a modified training of GPT-3, the large language model behind ChatGPT. A 3,584-GPU computer run as a collaboration between Nvidia and cloud provider CoreWeave performed this task in just under 11 minutes. The smallest entrant, a 256-Gaudi2 system, did it in a little over 7 hours. On a per-chip basis, H100 systems were 3.6 times faster at the task than Gaudi2. However, the Gaudi2 computers were operating “with one hand tied behind their back,” says Jordan Plawner, senior director of AI products at Intel, because a capability called mixed precision has not yet been enabled on the chips. Computer scientists have found that for GPT-3’s type of neural network, called a transformer network, training can be greatly accelerated by doing parts of the process using less-precise arithmetic. Versions of 8-bit floating point numbers (FP8) can be used in certain layers of the network, while more precise 16-bit or 32-bit numbers are needed in others. Figuring out which layers are which is the key. Both H100 and Gaudi2 were built with mixed-precision hardware, but it’s taken time for each company’s engineers to discover the right layers and enable it. Nvidia’s system in the H100 is called the transformer engine, and it was fully engaged for the GPT-3 results. Habana engineers will have Gaudi2’s FP8 capabilities ready for GPT-3 training in September, says Plawner. He says that at that point, Gaudi2 will be “competitive” with H100, and he expects it to beat the H100 on the combination of price and performance.
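The reason only certain layers tolerate FP8 comes down to dynamic range. The two FP8 variants used in this class of hardware, E4M3 and E5M2 (4 or 5 exponent bits, 3 or 2 mantissa bits), top out at 448 and 57,344 respectively, versus 65,504 for FP16, so any layer whose activations or gradients exceed that range needs the wider format. A small helper makes the ranges explicit (a sketch of standard float-format arithmetic, not vendor code):

```python
def max_finite(exp_bits, man_bits, ieee_inf=True):
    """Largest finite value of a binary floating-point format.
    With ieee_inf=True the top exponent code is reserved for inf/NaN
    (as in E5M2 and FP16); with ieee_inf=False only the all-ones
    mantissa at the top exponent is NaN (as in E4M3)."""
    bias = 2 ** (exp_bits - 1) - 1
    if ieee_inf:
        e_max = (2 ** exp_bits - 2) - bias   # top exponent code reserved
        frac = 2 - 2 ** -man_bits            # mantissa 1.11...1
    else:
        e_max = (2 ** exp_bits - 1) - bias   # top exponent code usable
        frac = 2 - 2 ** -(man_bits - 1)      # all-ones mantissa is NaN
    return frac * 2 ** e_max

fp8_e4m3 = max_finite(4, 3, ieee_inf=False)  # 448.0
fp8_e5m2 = max_finite(5, 2)                  # 57344.0
fp16 = max_finite(5, 10)                     # 65504.0
```

E4M3 trades range for an extra bit of precision, which is why FP8 training schemes typically pair the two formats and keep loss-scaling or per-tensor scaling factors alongside them.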
Gaudi2, for what it’s worth, is made using the same process technology—7 nanometers—as the H100’s predecessor, the A100.

Making GPT-3 work

“Large language models and generative AI have fundamentally changed how AI is used in the market,” says Dave Salvatore, Nvidia’s director of AI benchmarking and cloud computing. So finding a way to benchmark these behemoths was important.

But turning GPT-3 into a useful industry benchmark was no easy task. A complete training of the full 175-billion-parameter network with an entire training dataset could take weeks and cost millions of dollars. “We wanted to keep the runtime reasonable,” says David Kanter, executive director of MLPerf’s parent organization, MLCommons. “But this is still far and away the most computationally demanding of our benchmarks.” Most of the benchmark networks in MLPerf can be run on a single processor, but GPT-3 takes 64 at a minimum, he says.

Instead of training on an entire dataset, participants trained on a representative portion. And they did not train to completion, or convergence, in the industry parlance. Instead, the systems trained to a point that indicated that further training would lead to convergence.

[Image: Systems built using the Habana Gaudi2 were the only non-Nvidia-based systems that participated in MLPerf’s initial GPT-3 benchmark. Credit: Intel]

Figuring out that point, the right fraction of data, and other parameters so that the benchmark is representative of the full training task took “a lot of experiments,” says Ritika Borkar, senior deep-learning architect at Nvidia and chair of the MLPerf training working group. On Twitter, Abhi Vengalla, a research scientist at MosaicML, estimated that Nvidia and CoreWeave’s 11-minute record would scale up to about two days of full-scale training.
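The arithmetic behind an extrapolation like that is straightforward: if the benchmark covers only a small fraction of the full workload, the full training time scales inversely with that fraction. The 0.4 percent figure below is an assumption chosen for illustration, not a number published by MLPerf or MosaicML:

```python
# Back-of-envelope scaling from benchmark time to full training time.
# The assumed_fraction value is HYPOTHETICAL, picked only to illustrate
# how an ~11-minute benchmark run can imply ~2 days of full training.
benchmark_minutes = 10.94   # Nvidia/CoreWeave record, just under 11 min
assumed_fraction = 0.004    # assumed share of the full GPT-3 workload

full_minutes = benchmark_minutes / assumed_fraction
full_days = full_minutes / (60 * 24)
print(f"Estimated full training time: {full_days:.1f} days")
```

Under that assumed fraction, the estimate lands at roughly 1.9 days, consistent with the ballpark quoted in the article.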
H100 training records

This round of MLPerf wasn’t just about GPT-3, of course; the contest consists of seven other benchmark tests: image recognition, medical-imaging segmentation, two versions of object detection, speech recognition, natural-language processing, and recommendation. Each computer system is evaluated on the time it takes to train the neural network on a given dataset to a particular accuracy. Systems are placed into three categories: cloud computing systems; available on-premises systems; and preview systems, which are scheduled to become available within six months.

For these other benchmarks, Nvidia was largely involved in a proxy fight against itself. Most of the entrants were from system makers such as Dell, GIGABYTE, and the like, but they nearly all used Nvidia GPUs. Eighty of 88 entries were powered by them, and about half of those used the H100, a chip made using TSMC’s 5-nanometer process that went to customers in Q4 of 2022. Either Nvidia computers or those of CoreWeave set the records for each of the eight benchmarks.

In addition to adding GPT-3, MLPerf significantly upgraded its recommender-system test to a benchmark called DLRM DCNv2. “Recommendation is really a critical thing for the modern era, but it’s often an unsung hero,” says Kanter. Because of the risk surrounding identifiable personal information in the dataset, “recommendation is in some ways the hardest thing to make a benchmark for,” he says. The new DLRM DCNv2 is meant to better match what industry is using, he says. It requires five times the memory operations, and the network is similarly more computationally complex. The dataset it’s trained on is about four times as large as the 1 terabyte its predecessor used.

You can see all the results on the MLCommons website.
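The time-to-train scoring described above (time until a run first reaches a target accuracy) can be sketched in a few lines. The accuracy trace below is made-up data, not actual benchmark results:

```python
# MLPerf-style time-to-train scoring: a run is timed until its evaluation
# accuracy first reaches the target. The trace is FABRICATED example data.
def time_to_train(trace, target):
    """trace: list of (minutes_elapsed, accuracy) checkpoints.
    Returns minutes at the first checkpoint meeting the target, or None."""
    for minutes, accuracy in trace:
        if accuracy >= target:
            return minutes
    return None  # run never converged to the target

run = [(5, 0.61), (10, 0.70), (15, 0.749), (20, 0.757), (25, 0.760)]
print(time_to_train(run, target=0.757))  # 20
```

Lower is better, which is why the same network trained on a bigger machine (more chips working in parallel) posts a better score even though it does the same total work.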

How Remote Sensing Technologies Increase Food Production


You might be able to take the girl off the farm, but you can’t necessarily take the farm out of the girl, as the saying goes. That was the case for Melba Crawford, who as a teenager couldn’t wait to leave her family’s farm in Illinois to pursue an engineering career.

MELBA M. CRAWFORD EMPLOYER Purdue University, West Lafayette, Ind. TITLE Professor of civil engineering, agronomy, and electrical and computer engineering MEMBER GRADE Fellow ALMA MATERS University of Illinois, Urbana-Champaign; and Ohio State University, Columbus

But her engineering path eventually led her back to agriculture. She has developed remote sensing technologies that, among other uses, map crops more accurately, increase crop yields, and improve management practices while reducing the time it takes to select promising new hybrids. Crawford is a professor of civil engineering, agronomy, and electrical and computer engineering at Purdue University, in West Lafayette, Ind. In agronomy, she collaborates with researchers in plant genetics, plant physiology, agrometeorology, and soil science. In engineering, she focuses on developing methods to analyze remote sensing data.

For her “contributions to remote sensing technology and leadership in its application for the benefit of humanity,” the IEEE Fellow is the recipient of this year’s IEEE Mildred Dresselhaus Medal. The award is sponsored by Google.

An early environmental activist

As a teen in the 1960s, Crawford was drawn to the U.S. space program; she initially aimed for a career in the aerospace industry. “Space was really inspiring in those days,” she says. “I did not want to be an astronaut, because I’m claustrophobic. I was interested in the design and operation of airplanes and spacecraft.” But during her freshman year at the University of Illinois in Urbana-Champaign, the aerospace industry went through a period of mass layoffs. She realized that aerospace was a boom-and-bust industry, she says.
Meanwhile, she says, environmental issues were gaining attention as the impact of acid rain and contamination of lakes and rivers became better understood. “People were really interested in clean air and clean water, so I decided there would always be interest in the environment,” Crawford says. She switched her major to civil and environmental engineering. “You study many areas in civil engineering, including structures, soils, transportation, and the environment,” she says.

After graduating in 1970, she stayed on at the university to earn a master’s degree in civil engineering in 1973, focusing on environmental engineering. She then pursued a Ph.D. in systems engineering at Ohio State University, in Columbus. Her research there focused on model-based system approaches and mathematics. When it came time to find a topic for her dissertation, she learned that the U.S. Environmental Protection Agency, in implementing the 1970 Clean Air Act, was struggling to establish baselines for air-quality levels, she says. “To utilize these regulations, you first must establish where you currently are with air-quality levels,” she says. Her dissertation focused on developing methods to determine a baseline for concentrations of pollutants in the atmosphere based on historical spatial-temporal measurements. She earned her Ph.D. in 1981.

She had planned to become a consultant for an environmentally related organization, but during her last year at Ohio State a faculty member became ill, and the department asked her to teach one of his classes. “That’s how my career took a turn toward academia,” she says.

Remote sensing technologies to tackle Earth science problems

In 1980 Crawford joined the mechanical engineering department at the University of Texas at Austin, staying for 25 years.
As a member of the industrial engineering and operations research group, she developed advanced methods for image analysis and applications for mapping and monitoring land cover using satellite imagery. She also founded an interdisciplinary research and applications development program in space-based and airborne remote sensing at UT. As part of that effort, she installed and operated a receiving station to acquire and analyze data from the U.S. National Oceanic and Atmospheric Administration’s satellites over extended areas of North America and the Gulf of Mexico. Her team’s research program included projects in Australia and Africa. The team also developed new algorithms for analysis of hyperspectral imagery, which provides detailed information across the electromagnetic spectrum to detect chemistry-based changes in vegetation.

Agriculture didn’t come back on her radar until 2006, when she joined the Purdue faculty. The university already had a 30-year history of collaboration between the colleges of engineering and agriculture in using remote sensing to address problems in agriculture such as detecting and mapping disease in crops. Researchers originally used aerial photography, then satellite images. Through the university’s Laboratory for Applications of Remote Sensing, the colleges had developed international collaborations in advancing analysis of remote sensing data and developing open-source software for applications and education.

“When I told my family that Purdue had recruited me, my father asked whether the school understood that I didn’t know much about agriculture,” she says, laughing. “Purdue was interested in my work in remote sensing and data analytics.
These technologies provide capability to acquire data frequently over extended spatial areas—which is critical for agriculture.”

Her work is currently contributing to developing improved strategies for applications of nutrients and herbicides to crops. Through her contributions to plant breeding, she is helping improve the security and resilience of food production internationally. Crawford recently helped lead a project funded by the U.S. Department of Energy to develop unmanned aerial vehicle platforms and algorithms to support plant breeders in developing sorghum hybrids for biofuels—potential substitutes for corn-based ethanol.

Beyond academia, Crawford advised NASA as a member of the Earth Science Advisory Committee and the advisory board for the NASA Socioeconomic Data and Applications Center. She was a Jefferson Science Fellow at the U.S. State Department, where she focused on promoting geospatial technologies in developing countries.

In 2001 she realized her goal of working on a satellite mission. She was a member of NASA’s Earth Observing–1 science validation team, which conducted the first successful U.S. civilian hyperspectral mission in space. Although it was designed for a life of 18 months, the mission operated successfully for 16 years. Her research focused on developing new methods to analyze hyperspectral imagery to determine the response of vegetation to natural and man-made hazards.

The legacy of Mildred Dresselhaus

Crawford received the Dresselhaus Medal on 5 May during the IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony, held at the Hilton Atlanta. Mildred Dresselhaus worked at MIT, which she joined in 1960 as a researcher in its Lincoln Laboratory Solid State Division, in Lexington, Mass. She became a professor of electrical engineering in 1967 and joined the physics department in 1983. In 2015 Dresselhaus became the first female recipient of the IEEE Medal of Honor. She died in 2017.
Crawford never met Dresselhaus, but she did receive congratulatory messages from several people who knew her. “The most amazing thing to me has been the emails I received from so many people who knew her but didn’t know me,” Crawford says. “They were former students, colleagues, and friends. They reinforced the perception that Dresselhaus was a truly amazing person as well as a trailblazer.”

One was an MIT electrical engineering student to whom Dresselhaus had given one of her old spectrometers to start his research. Another, who at the time was a junior faculty member at the University of North Carolina in Charlotte, recalled the thoughtful, critical input and encouragement Dresselhaus provided about her research during a visit to the faculty member’s lab. “One of my colleagues at Purdue, who is a well-known researcher in nanotechnology, was emphatic in his appraisal,” she says. “He asked, ‘Do you realize she was the Queen of Carbon?’ It just made me feel so humble and appreciative.”

You can watch Crawford accept her award on IEEE.tv. She was also an Innovators Showcase panelist during the summit.

Crawford stresses that her research is interdisciplinary: “My contribution is really in developing algorithms to analyze data, but the purpose is always to address a problem. To contribute to the solution, it is necessary to invest in learning about the problem and to work collaboratively with others who are experts in that field. Anything that I have accomplished during my career is really the result of a team. I accepted the award on behalf of all of us.”

IEEE: Staying connected to a community

Crawford joined IEEE when she began her academic career and had to publish her research. More than 50 of her papers are in the IEEE Xplore Digital Library. “Publishing, for many people, is the entry into IEEE,” she says. “Then there is growth in terms of understanding the importance of going to conferences and not only hearing presentations of research but also engaging with people.
You start paying attention to what’s going on in a professional society, and you start volunteering. Then you are part of a community.”

Crawford served as 2013 president of the IEEE Geoscience and Remote Sensing Society and was an associate editor of the IEEE Transactions on Geoscience and Remote Sensing. She also has served on several of the society’s administrative committees. Her work was recognized with the GRSS 2020 Outstanding Service Award and its 2021 David Landgrebe Award.

Volunteering, she says, is “a two-way street because you contribute to the society but you also benefit in terms of your engagement with individuals and developing leadership skills.” Being involved provides opportunities, she adds: “People who are quite accomplished in their field attend the meetings, and they’re engaged with the society. You would never, as a junior person, typically have opportunities to meet these people.”

Pneumatic Actuators Give Robot Cheetah-Like Acceleration


Electric motors have helped bring legged robots into the mainstream, offering a straightforward and compact way of controlling robotic limbs with all the fancy control features that you need for safe and nimble motion. What you can’t get out of electric motors (more than once, anyway) is the kind of instantaneous power that you need to match the performance of biological muscles. This is why Atlas, arguably the most powerful and dynamic robot out there right now, uses hydraulic actuators—to do a backflip with a human-sized robot, that’s just about the only way of getting the kind of power that you need.

Inspired by the high-speed maneuvering of cheetahs, roboticists at the University of Cape Town in South Africa have started experimenting with the old-school sibling of hydraulic actuators: pneumatics. By using gas as a working fluid instead of a liquid, you can get a high force-to-weight ratio in a relatively simple and inexpensive form factor, with built-in compliance that hydraulics lack. Are pneumatics easy to control? Nope! But to make a robot run like a cheetah, it turns out that complicated control may not even be necessary.

First, let’s talk about what’s wrong with hydraulics: they’re complicated, expensive, and all kinds of messy if they ever explode, which they sometimes will. And while the non-compliant nature of hydraulics makes them easier to model and control, it also makes them less forgiving in real-world use. If you go back far enough, to the 1980s when Marc Raibert was developing dynamic legged robots at MIT, those running and jumping robots were relying on pneumatics rather than hydraulics, because pneumatics were much easier to implement.
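The built-in compliance of pneumatics comes straight from gas physics: a sealed cylinder of air acts like a spring, where hydraulic fluid does not. A minimal sketch, assuming an idealized adiabatic gas and made-up cylinder dimensions (these are not Kemba’s actual specifications):

```python
import math

# Idealized pneumatic piston as a gas spring. All numbers below are
# HYPOTHETICAL illustrative values, not the Kemba robot's actual specs.
GAMMA = 1.4              # adiabatic index of air
pressure = 600e3         # Pa (~6 bar supply pressure, assumed)
bore_diameter = 0.016    # m (16 mm cylinder bore, assumed)
trapped_volume = 40e-6   # m^3 of gas sealed in the cylinder, assumed

area = math.pi * (bore_diameter / 2) ** 2

# Static force the piston can exert: F = P * A
force = pressure * area

# Effective spring stiffness of the trapped gas for small displacements
# about equilibrium (adiabatic compression): k = gamma * P * A^2 / V
stiffness = GAMMA * pressure * area ** 2 / trapped_volume

print(f"force ~ {force:.0f} N, stiffness ~ {stiffness:.0f} N/m")
```

The finite stiffness is the compliance: the leg can absorb a landing impact like a spring. A hydraulic cylinder filled with (nearly incompressible) oil has effectively infinite stiffness by this measure, which is why it is easier to control but less forgiving.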
One big reason why everyone seems to be using hydraulics rather than pneumatics nowadays is that air is compressible, which is great for built-in compliance—but messes up most traditional control methods. “Fine force control is difficult with this actuator and most have avoided it,” explains Amir Patel, an associate professor at the University of Cape Town. “Hydraulics is not compressible and can do amazing things, but it’s quite a bit more expensive than pneumatics. And when looking at animals that require explosive motion from their limbs, we thought that pneumatics would be a good, and often overlooked, actuator.”

Patel has done an enormous amount of research on cheetah biomechanics. We’ve written about some of it in the past. (For instance, here’s why cheetahs have fluffy tails.) But recently, Patel has been trying to find ways of tracking cheetah dynamics in very high fidelity to figure out how they’re able to move the way that they do. This would be easy if the cheetahs would cooperate, but from the sound of things, trying to get them to run directly over a small force plate or do the maneuver you want while in ideal view of the cameras you’ve set up is kind of a nightmare.

Much of this work is ongoing, but Patel has already learned enough to suggest a new approach to cheetah-inspired locomotion. “From our years studying cheetahs here in South Africa, it appears as if they’re not really trying to do fine force control when accelerating from rest,” Patel says. “They’re just pushing off as hard as they can—which makes us think that an on/off [a.k.a. bang-bang] actuator like pneumatics could do that job.
We’re arguing that fine force control is maybe not needed for rapid maneuverability tasks.”

Patel (along with colleagues Christopher Mailer, Stacey Shield, and Reuben Govender) has built a legged robot (or half of a legged robot, anyway) called Kemba to explore the kind of rapid acceleration and maneuverability that pneumatics can offer. Kemba incorporates high-torque quasi-direct-drive electric motors at the hips for higher-fidelity positioning, with high-force pneumatic pistons attached to the knees. While the electric motors give the kind of precise control that we’ve come to expect from them, the pistons are controlled by simple (and cheap) binary valves that can either be on or off. The researchers did put a lot of effort into modeling the complex dynamics of pneumatic actuators, because you do after all need some understanding of what the pneumatics are doing. But again, the concept here is to use the pneumatics for explosive actuation and get finer control from the electric motors at the hips.

[Image: Kemba, the two-legged (and boom-stabilized) robot, uses electric motors for precision and pneumatics for fast movements. Credit: University of Cape Town]

With a boom for support, the 7-kilogram Kemba is able to repeatedly jump to 0.5 meters with a controlled landing, and it reaches a maximum jump height of 1 meter. While it’s tempting to focus on metrics like jump height and top speed here, that’s really not what the research is necessarily about, explains Patel. “With Kemba (and all the robots and animals we study in my lab) we focus on the transient phase of the locomotion—like rapid acceleration from a standstill, or coming to rest once you’re at a high-speed gait. Most papers don’t really concentrate on that phase of the motion.
I would love for more labs to be publishing their results in this area so that we can have some metrics (and data) to compare to.”

Patel would eventually like Kemba to become a platform that biologists could use to understand the biomechanics of animal locomotion, but it’s likely to remain tethered for the foreseeable future, says first author Chris Mailer. “A lot of people have asked when we will build the other half or if it is realistic for Kemba to carry around a compressor. While this would be awesome, that was never the intention for Kemba. The main objective was to execute and learn from bio-inspired motions rather than focus on onboard power or autonomy.”

This doesn’t mean that Kemba won’t be getting some upgrades. A spine could be in the works, along with a tail, both of which would provide additional degrees of freedom and enable more dynamic behaviors. There’s a long way to go before legged robots get anywhere close to what a real cheetah can do, but the pneumatic approach certainly seems to have some promise. And anything that has the potential to lower the cost of legged robots is fine by me, because I’m still waiting for one of my own.

“Getting Air: Modelling and Control of a Hybrid Pneumatic-Electric Legged Robot,” by Christopher Mailer, Stacey Shield, Reuben Govender, and Amir Patel of the University of Cape Town, was presented at ICRA 2023 in London.
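The binary-valve scheme described in the article amounts to bang-bang control: the knee valve is either fully open (pressurize and extend) or closed (vent). A minimal sketch of that logic, with phase names and the angle threshold invented for illustration (this is not Kemba’s actual controller):

```python
# Bang-bang knee control sketch: the binary valve is either fully open
# (pressurize, extend the leg) or vented. Thresholds and phase names are
# INVENTED for illustration -- not the Kemba robot's real controller.
def knee_valve_command(phase: str, knee_angle_deg: float) -> bool:
    """Return True to open the pressurize valve, False to vent."""
    if phase == "push_off":
        # Fire the piston as hard as possible until the leg is extended.
        return knee_angle_deg < 160.0
    if phase == "landing":
        # Vent, so the trapped air acts as a compliant cushion.
        return False
    return False  # default (e.g., flight phase): vented

print(knee_valve_command("push_off", 90.0))   # True: keep firing
print(knee_valve_command("push_off", 170.0))  # False: leg extended, vent
```

Note what is missing: there is no proportional force command anywhere, which is exactly the point of the "push off as hard as you can" observation from the cheetah studies. Precision is left to the electric hip motors.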

Roll Your Own All-Sky Raspberry Pi Camera


While driving home one night recently, I saw a spectacularly bright meteor flash across the sky in front of my car. A good-sized chunk of interplanetary detritus must have been on its way to a crash landing not too far away, I said to myself. My next thought was that if I had a bearing on that luminous streak, and if at least one other person in my region also had such information, we might be able to triangulate on it and narrow down where any landing zone might be.

I’m, of course, not the only one to ponder this possibility—and, I soon learned, people have indeed successfully found meteorites this way. One example occurred in 2012, when a fireball lit up the sky over Northern California. Images of the meteor were recorded by CAMS (Cameras for Allsky Meteor Surveillance), a project of NASA and the SETI Institute. These observations allowed the object’s trajectory and landing zone to be estimated, and coverage of the event in The San Francisco Chronicle soon led to the discovery of what became known as the Novato Meteorite.

CAMS is not the only such project looking for meteors. Another is the Global Meteor Network, whose mission is to observe the night sky with “a global science-grade instrument.” Organizers of this network even provide guidance for how anyone can build a suitable camera based on the Raspberry Pi and how to contribute observations that can help determine the orbits of the parent asteroids that spawned particular meteors. I was tempted to join the cause, but after reading more on the subject I discovered alternative strategies for building a camera to survey the night sky. Ultimately, I decided that I wanted to capture attractive color images more than I wanted to contribute data to the Global Meteor Network, which uses black-and-white cameras because of their greater sensitivity.
[Image: The required components include a Raspberry Pi microcomputer (case not shown), a Raspberry Pi High Quality camera, a lens, a dome-shaped transparent lens cover, a 5-volt power supply, and a waterproof bulkhead connector, allowing AC-mains power to pass through the wall of the waterproof enclosure (not shown) holding the camera. Credit: James Provost]

So I opted to build a different kind of all-sky camera, one that is also based on a Raspberry Pi but that uses the Raspberry Pi High Quality color camera, following the lead of a project called, reasonably enough, Allsky Camera. The hardware for this project consists of a Raspberry Pi and either the Raspberry Pi HQ camera or one of the purpose-built planetary cameras made by ZWO. To be truly “all sky,” the camera should be equipped with a fish-eye lens having a 180-degree field of view. Recognizing that my home is surrounded by trees, I opted for a lens with a narrower (120-degree) field of view. A modern Raspberry Pi 4 is recommended, but I used a several-year-old Raspberry Pi 3 Model B simply because I had it on hand. I chose the US $60 Raspberry Pi HQ camera over a ZWO camera because it offered higher resolution.

To protect this hardware from the elements, I housed the Pi, camera, and a suitable wall wart for powering the Pi inside a $25 waterproof plastic enclosure. The opening I cut for the camera lens is covered with a $16 clear acrylic dome. The first dome I purchased distorted things, but I ordered another one that worked out much better. I also purchased an $11 case for the Raspberry Pi (one that included a fan) and a long extension cord, which I cut and connected to a waterproof bulkhead connector. This means I can leave the unit outside even when it rains.

Following the guidance provided in a very nice tutorial video, I found it straightforward to set up the Allsky Camera software on my Pi, running it in a “headless” configuration—meaning without a monitor or keyboard.
I access it wirelessly from a laptop through my local area network using SSH.

[Image: A meteor’s trajectory through the atmosphere can hold clues to the location of any part of it that survives and also reveals the orbit of the parent body around the sun. With images of the meteor trail captured by two cameras, that trajectory can be ascertained: The position of a glowing trail relative to the background stars in an image defines a plane, and the intersection of two planes defines the trajectory. Credit: James Provost]

I fired everything up—but the camera didn’t work at all. So I turned to the appropriate troubleshooting section in the project’s ample documentation and tried what was advised there—to enable “Glamor” graphic acceleration on the Pi. Still no images, though. Eventually, I discovered some tweaks to a configuration file that are needed when using the HQ camera on a Pi 3B, which allowed me to obtain a hopelessly blurry image of the ceiling of my office. Through trial and error, I was able to get the manual focus of the camera dialed in properly.

And slowly I learned how to adjust the multitude of settings available in the Allsky Camera software, which is done either by editing a configuration file or, more conveniently, through a Web interface this software provides. For example, I learned that I should reduce the resolution of the images used to make time-lapse videos, lest images saved at the impressive native resolution of the HQ camera (4,056 by 3,040 pixels) overwhelm the processing and storage available on my Pi. While that required tweaking a configuration file, other settings can be adjusted using the Web interface, which also makes it very easy to view live images, browse images collected earlier, and view and save time-lapse videos.

[Video: This time-lapse video shows the night sky giving way to the rosy-fingered dawn, as captured by this all-sky camera.]
[Credit: Spectrum Staff]

One thing that puzzled me early on was how well such a camera would work to catch meteors flashing by, given that the camera takes still images, not many-frames-per-second videos. But my concerns diminished after capturing images of the night sky over my home, some of which caught the light of passing aircraft. The long trails of light in those images made it apparent that the exposure time must be at least some tens of seconds long. I knew these were aircraft, not meteor trails, because the streaks included parallel tracks (from wingtip lights) and obvious pulsations from strobes.

I hope yet to capture meteors some day with this gizmo. For that, I may go camping in the mountains in mid-August, when the Perseids are hitting their peak. My family and I had taken such a trip years ago, but I didn’t have an all-sky camera at the time. So I returned home with only some now-fading memories of the wondrous show nature had put on display above our heads. Next time, I’ll have something I can view over and over!
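The two-camera triangulation described in the image caption above reduces to simple vector geometry: each camera’s sighting of the trail against the stars defines a plane, and the meteor’s path lies along the line where the two planes intersect, whose direction is the cross product of the plane normals. A minimal sketch with made-up example normals:

```python
# Each camera's view of the meteor trail against the star background
# defines a plane through that camera; the trajectory is the line where
# the two planes intersect. Its direction is the cross product of the
# plane normals. The normals below are MADE-UP example values.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    mag = sum(x * x for x in v) ** 0.5
    return tuple(x / mag for x in v)

# Hypothetical unit normals of the two sighting planes (one per camera).
n1 = normalize((0.0, 0.6, 0.8))
n2 = normalize((0.5, 0.0, 0.866))

direction = normalize(cross(n1, n2))
print("trajectory direction:", direction)
```

A quick sanity check on the result: the computed direction must be perpendicular to both plane normals, since the trajectory lies in both planes. Pinning down a point on the line (not just its direction) additionally requires the two camera positions.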

Fears grow of deepfake ID scams following Progress hack


[Image: The number of deepfakes used in scams in just the first three months of 2023 outstripped all of 2022. Credit: FT Montage/Getty Images]

When Progress Corp, the Massachusetts-based maker of business software, revealed this month that its file-transfer system had been compromised, the issue quickly gathered global significance. A Russian-speaking gang dubbed Cl0p had used the vulnerability to steal sensitive information from hundreds of companies, including British Airways, Shell, and PwC. It had been expected that the hackers would then attempt to extort affected organizations, threatening to release their data unless a ransom was paid.

However, cybersecurity experts said that the nature of the data stolen in the attack—including the driving licenses and the health and pension information of millions of Americans—hints at another way the hackers could cash in: ID-theft scams, which, combined with the latest in so-called deepfake software, may prove even more lucrative than extorting companies.

Tuesday, June 27, 2023

Casualties keep growing in this month’s mass exploitation of MOVEit 0-day


[Image credit: Getty Images]

The dramatic fallout continues in the mass exploitation of a critical vulnerability in a widely used file-transfer program, with at least three new victims coming to light in the past few days. They include the New York City Department of Education and energy companies Schneider Electric and Siemens Energy.

To date, the hacking spree appears to have breached 122 organizations and obtained the data of roughly 15 million people, based on posts the crime group has published or victim disclosures, Brett Callow, a threat analyst at the antivirus company Emsisoft, said in an interview. Microsoft has tied the attacks to Clop, a Russian-speaking ransomware syndicate. The hacks are all the result of Clop exploiting what had been a zero-day vulnerability in MOVEit, a file-transfer service that’s available in both cloud and on-premises offerings.

What AMD Learned From Its Big Chiplet Push


Over the last five years, processors have gone from being single pieces of silicon to a collection of smaller chiplets that collectively act as if they’re one big chip. This approach means that the CPU’s functional pieces can be built using the technology that suits each piece best. Sam Naffziger, a product-technology architect at AMD, was an early proponent of this approach. Naffziger recently answered five chiplet-size questions from IEEE Spectrum on the topic.

What are the main challenges you’ve seen for chiplet-based processors?

Sam Naffziger: We started out five or six years ago with the EPYC and Ryzen CPU lines. And at the time, we cast a pretty broad net to find what package technologies would be best for connecting the die [small block of silicon]. It’s a complex equation of cost, capability, bandwidth densities, power consumption, and also manufacturing capacity. It’s relatively easy to come up with great package technologies, but it’s a completely different thing to actually manufacture them in high volume, cost effectively. So we’ve invested heavily in that.

How might chiplets change the semiconductor-manufacturing process?

Naffziger: That’s definitely something that the industry is working through. There’s where we’re at today, and then there’s where we might go in 5 to 10 years. I think today, pretty much, the technologies are general purpose. They can be aligned to monolithic die just fine, or they can function for chiplets. With chiplets, we have much more specialized intellectual property. So, in the future one could envision specializing the process technology and getting performance benefits, cost reductions, and other things. But that’s not where the industry is at today.

How will chiplets affect software?

Naffziger: One of the goals of our architecture is to have it be completely transparent to software, because software is hard to change.
For example, our second-generation EPYC CPU is made up of a centralized I/O [input/output] chiplet surrounded by compute dies. When we went to a centralized I/O die, it reduced memory latency, eliminating a software challenge from the first generation. “One of the goals of our architecture is to have it be completely transparent to software, because software is hard to change.” Now, with the [AMD Instinct] MI300—AMD’s upcoming high-performance computing accelerator—we’re integrating both CPU and GPU compute dies. The software implication of that sort of integration is that they can share one memory address space. Because the software doesn’t have to worry about managing memory, it’s easier to program. How much of the architecture can be separated out onto chiplets? Naffziger: We’re finding ways to scale logic, but SRAM is more of a challenge, and analog stuff is definitely not scaling. We’ve already taken the step of splitting off the analog with the central I/O chiplet. With 3D V-Cache—a high-density cache chiplet 3D-integrated with the compute die—we have split off the SRAM. And I would expect in the future there will be lots more of that kind of specialization. The physics will dictate how fine grained we can go, but I’m bullish about it. What has to happen for mixing and matching different companies’ chiplets into the same package to become a reality? Naffziger: First of all, we need an industry standard on the interface. UCIe, a chiplet interconnect standard introduced in 2022, is an important first step. I think we’ll see a gradual move towards this model because it really is going to be essential to deliver the next level of performance per watt and performance per dollar. Then, you will be able to put together a system-on-chip that is market or customer specific. Sam Naffziger is a senior vice president, corporate fellow, and product-technology architect at AMD and an IEEE Fellow. 
He is the recipient of the IEEE Solid-State Circuits Society’s 2023 Industry Impact Award. Reference: https://ift.tt/Nqk5aZu
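Naffziger's point about cost is often explained with a first-order die-yield argument: under the standard Poisson defect model, the fraction of defect-free dies falls exponentially with die area, so several small chiplets waste far less silicon than one large monolithic die, because bad chiplets can be discarded individually before packaging. The sketch below uses illustrative numbers (the defect density and die areas are assumptions, not AMD figures) and ignores packaging cost and edge losses:

```python
import math

def die_yield(defect_density: float, die_area_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defect_density * die_area_cm2)

# Illustrative numbers: 0.2 defects per cm^2,
# one 8 cm^2 monolithic die vs. four 2 cm^2 chiplets.
D = 0.2
mono_yield = die_yield(D, 8.0)     # ~0.20
chiplet_yield = die_yield(D, 2.0)  # ~0.67

# Silicon efficiency: sellable die area per wafer area.
# Monolithic products keep yield(8 cm^2) of the wafer; chiplet
# products keep yield(2 cm^2), since defective chiplets are
# screened out individually before assembly.
advantage = chiplet_yield / mono_yield
print(f"monolithic yield: {mono_yield:.2f}")
print(f"chiplet yield:    {chiplet_yield:.2f}")
print(f"silicon-efficiency advantage: {advantage:.1f}x")
```

With these assumed numbers the chiplet approach keeps roughly 3.3 times as much sellable silicon per wafer, which is one reason splitting a large design into smaller dies can pay for the extra packaging complexity.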

The Sneaky Standard

A version of this post originally appeared on Tedium, Ernie Smith's newsletter, which hunts for the end of the long tail. Personal c...