Tuesday, May 31, 2022

Broadcom plans a “rapid transition” to subscription revenue for VMware


A sign in front of a Broadcom office on June 3, 2021, in San Jose, California. (credit: Getty Images | Justin Sullivan)

Broadcom announced last week that it was seeking to drop $61 billion in cash and stock to acquire VMware. We still don't know exactly what changes Broadcom plans to make to VMware's products or business model once the acquisition closes. Still, Broadcom Software Group President Tom Krause made one priority clear on Broadcom's earnings call last week: an emphasis on software subscriptions.

As reported by The Register, Broadcom plans a "rapid transition from perpetual licenses to subscriptions" for VMware's products, replacing discrete buy-once-use-forever versions, though "rapid" in this case will still apparently take several years. Broadcom CEO Hock Tan said that the company wants to keep VMware's current customers happy and take advantage of VMware's existing sales team and relationships.

Subscription-based software has some benefits, including continual updates that patch security flaws and maintain compatibility with new operating system releases—virtualization software that requires low-level hardware access breaks with OS updates more often than most other apps do. But a move toward more subscription-based licensing could still be unwelcome news for individuals and businesses that prefer to pay for individual upgrades as they want or need them, rather than paying continuously for as long as they use the software.


Reference: https://ift.tt/Q819VMc

Supreme Court Blocks Texas Law Regulating Social Media Platforms


The law, prompted by conservative complaints about censorship, prohibits big technology companies like Facebook and Twitter from removing posts based on the views they express.

How to make critical infrastructure safer—there’s a long way to go


Making critical infrastructure safer at Ars Frontiers. (video)

In the run-up to Ars Frontiers, I had the opportunity to talk with Lesley Carhart, director of Incident Response at Dragos. Known on Twitter as @hacks4pancakes, Carhart is a veteran responder to cyber incidents affecting critical infrastructure and has been dealing with the challenges of securing industrial control systems and operational technology (OT) for years. So it seemed appropriate to get her take on what needs to be done to improve the security of critical infrastructure both in industry and government, particularly in the context of what’s going on in Ukraine.

Much of it is not new territory. “Something that we’ve noticed for years in the industrial cybersecurity space is that people from all different organizations, both military and terrorists around the world, have been pre-positioning to do things like sabotage and espionage via computers for years,” Carhart explained. But these sorts of things rarely make the news because they’re not flashy—and as a result, they don’t get attention from those holding the purse strings for investments that might correct them.


Reference: https://ift.tt/t1fy984

It’s Doom Times in Tech


Will this meltdown permanently damage the tech world, or is this one more temporary blip?

A First Small Step Towards a LEGO-Size Humanoid Robot




When we think of bipedal humanoid robots, we tend to think of robots that aren’t just human-shaped, but also human-sized. There are exceptions, of course—among them, a subcategory of smaller humanoids that includes research and hobby humanoids that aren’t really intended to do anything practical. But at the International Conference on Robotics and Automation (ICRA) last week, roboticists from Carnegie Mellon University (CMU) asked an interesting question: What happens if you try to scale down a bipedal robot? Like, way down? This line from their paper sums it up: “our goal with this project is to make miniature walking robots, as small as a LEGO Minifigure (1-centimeter leg) or smaller.”


The current robot, while small (its legs are 15 cm long), is obviously much bigger than a LEGO minifig. But that’s okay, because it’s not supposed to be quite as tiny as the group's ultimate ambition would have it. At least not yet. It’s a platform that the CMU researchers are using to figure out how to proceed. They're still assessing what it’s going to take to shrink bipedal walking robots to the point where they could ride in Matchbox cars. At very small scales, robots run into all kinds of issues, including space and actuation efficiency. These crop up mainly because it’s simply not possible to cram the same number of batteries and motors that go into bigger bots into something that tiny. So, in order to make a tiny robot that can usefully walk, designers have to get creative.

Bipedal walking is already a somewhat creative form of locomotion. Despite how complex bipedal robots tend to be, if the only criteria for a bipedal robot is that it walks, then it’s kind of crazy how simple roboticists can make them. Here’s a 1990-ish (!) video from Tad McGeer, the first roboticist to explore the concept of passive dynamic walking by completely unpowered robots placed on a gentle downwards slope:


The above video comes from the AMBER Lab, which has been working on efficient walking for large humanoids for a long time (you remember DURUS, right?). For small humanoids, the CMU researchers are trying to figure out how to leverage the principle of dynamic walking to make robots that can move efficiently and controllably while needing the absolute minimum of hardware, and in a way that can be scaled. With a small battery and just one actuator per leg, CMU’s robot is fully controllable, with the ability to turn and to start and stop on its own.
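To get a feel for how far passive dynamics alone can go, consider the rimless wheel, the textbook model behind McGeer's walkers: each step gains kinetic energy rolling down the slope and loses a fixed fraction of its angular velocity at foot impact. Below is a minimal sketch of that step-to-step return map—with illustrative parameters and a simplified model (it assumes the walker always carries over the apex), not the CMU robot's actual dynamics:

```python
import math

# Rimless-wheel model of passive dynamic walking: spokes separated by
# 2*alpha roll down a slope of angle gamma. Between footfalls the device
# is an inverted pendulum gaining energy from gravity; at each footfall
# the angular velocity drops by a factor cos(2*alpha) (inelastic impact).
# Parameters are illustrative, not taken from the CMU paper.
alpha = math.radians(15.0)   # half the angle between adjacent spokes
gamma = math.radians(4.0)    # slope angle
g, L = 9.81, 0.15            # gravity (m/s^2) and leg length (15 cm)

def step(omega):
    """Map the angular velocity at the start of one step to the next."""
    # Energy balance over one step: the center of mass drops by
    # L*(cos(gamma - alpha) - cos(gamma + alpha)) = 2*L*sin(alpha)*sin(gamma).
    w_sq = omega**2 + 2 * (g / L) * (math.cos(gamma - alpha) - math.cos(gamma + alpha))
    return math.cos(2 * alpha) * math.sqrt(w_sq)   # impact loss at footfall

omega = 0.5                  # a gentle initial push, in rad/s
for i in range(12):
    omega = step(omega)
    print(f"step {i + 1:2d}: omega = {omega:.3f} rad/s")
```

Run it and the sequence converges to a fixed point near 3.8 rad/s: a stable gait with zero actuation, sustained entirely by the slope. What the CMU design adds—one actuator per leg—is the minimum needed to make that kind of gait start, stop, and turn on command.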

“Building at a larger scale allows us to explore the parameter space of construction and control, so that we know how to scale down from there,” says Justin Yim, one of the authors of the ICRA paper. “If you want to get robots into small spaces for things like inspection or maintenance or exploration, walking could be a good option, and being able to build robots at that size scale is a first step.”

“Obviously [at that scale] we will not have a ton of space,” adds Aaron Johnson, who runs CMU’s Robomechanics Lab. “Minimally actuated designs that leverage passive dynamics will be key. We aren't there yet on the LEGO scale, but with this paper we wanted to understand the way this particular morphology walks before dealing with the smaller actuators and constraints.”


Scalable Minimally Actuated Leg Extension Bipedal Walker Based on 3D Passive Dynamics, by Sharfin Islam, Kamal Carter, Justin Yim, James Kyle, Sarah Bergbreiter, and Aaron M. Johnson from CMU, was presented at ICRA 2022. Reference: https://ift.tt/8n5d7zk

Cryptocurrency Firms Expand Physical Footprint in New York


Cryptocurrency firms have been setting up or expanding office space in New York, but the recent market turbulence could hamper their longevity.

Monday, May 30, 2022

U.S. Retakes Top Spot in Supercomputer Race


A massive machine in Tennessee has been deemed the world’s speediest. Experts say two supercomputers in China may be faster, but the country didn’t participate in the rankings.

Sunday, May 29, 2022

Pulse Detection in the Presence of Noise




Join Teledyne SP Devices for an introductory webinar providing technical insight and practical advice on how to improve triggered acquisition of weak signals in the presence of noise. Register now for this free webinar!

Topics covered in this webinar:

  • What pattern noise is and why it may cause false trigger events
  • Why baseline drift caused by temperature variations can result in false readings
  • How noise analysis can help optimize the acquisition of weak signals
  • An overview of digital signal processing solutions that address these challenges

Who should attend? Engineers who want to learn more about noise sources and how to better optimize the acquisition of weak signals.

What attendees will learn: the origin of pattern noise in time-interleaved analog-to-digital converters (ADCs), its impact on the idle-channel noise level, and the resulting risk of false trigger events and signal distortion; the characteristics of temperature-dependent baseline drift and the challenges associated with it; and how to determine the noise distribution in order to set an appropriate trigger level.
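That last point—deriving a trigger level from the measured noise distribution—comes down to basic statistics. The sketch below is a generic illustration with synthetic numbers, not Teledyne SP Devices' actual algorithm: capture idle-channel samples, estimate the baseline and noise spread, and place the threshold enough standard deviations out that noise alone almost never crosses it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one second of idle-channel ADC data (in ADC codes):
# a slow temperature-driven baseline drift plus Gaussian noise.
fs = 1_000_000                                   # 1 MS/s sample rate
t = np.arange(fs) / fs
baseline = 2.0 * np.sin(2 * np.pi * 0.5 * t)     # slow drift, +/-2 codes
idle = baseline + rng.normal(0.0, 3.0, fs)       # noise, sigma = 3 codes

# Estimate the noise distribution from the idle capture...
mu, sigma = idle.mean(), idle.std()

# ...and set the trigger k sigmas above it. For Gaussian noise, a single
# sample exceeds mu + 5*sigma with probability ~3e-7, so false triggers
# stay rare even at megahertz sample rates.
k = 5.0
trigger_level = mu + k * sigma
false_triggers = int((idle > trigger_level).sum())
print(f"baseline {mu:+.2f}, sigma {sigma:.2f}, "
      f"threshold {trigger_level:.2f} codes, false triggers: {false_triggers}")
```

Note how the drift both shifts the mean and inflates the estimated sigma—which is exactly why the webinar calls out temperature-dependent baseline drift: without tracking and removing the baseline, the threshold must sit farther out, at the cost of sensitivity to weak signals.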

Presenter: Thomas Elter, Senior Field Applications Engineer

Reference: https://ift.tt/JjQMw6U

The mystery of China’s sudden warnings about US hackers


Chinese flag with digital matrix—3D illustration. (credit: peterschreiber.media | Getty Images)

For the best part of a decade, US officials and cybersecurity companies have been naming and shaming hackers they believe work for the Chinese government. These hackers have stolen terabytes of data from companies like pharmaceutical and video game firms, compromised servers, stripped security protections, and hijacked hacking tools, according to security experts. And as China’s alleged hacking has grown more brazen, individual Chinese hackers have faced indictments. However, things may be changing.

Since the start of 2022, China’s Foreign Ministry and the country’s cybersecurity firms have increasingly been calling out alleged US cyberespionage. Previously, such allegations were a rarity. But the disclosures come with a catch: They appear to rely on years-old technical details that are already publicly known and don’t contain fresh information. The move may be a strategic change for China as the nation tussles to cement its position as a tech superpower.

“These are useful materials for China’s tit-for-tat propaganda campaigns when they faced US accusation and indictment of China’s cyberespionage activities,” says Che Chang, a cyber threat analyst at the Taiwan-based cybersecurity firm TeamT5.


Reference: https://ift.tt/YHcvNzt

They Did Their Own ‘Research.’ Now What?


In spheres as disparate as medicine and cryptocurrencies, “do your own research,” or DYOR, can quickly shift from rallying cry to scold.

Saturday, May 28, 2022

Eavesdropping on the Brain with 10,000 Electrodes




Imagine a portable computer built from a network of 86 billion switches, capable of general intelligence sophisticated enough to build a spacefaring civilization—but weighing just 1.2 to 1.3 kilograms, consuming just 20 watts of power, and jiggling like Jell-O as it moves. There’s one inside your skull right now. It is a breathtaking achievement of biological evolution. But there are no blueprints.

Now imagine trying to figure out how this wonder of bioelectronics works without a way to observe its microcircuitry in action. That’s like asking a microelectronics engineer to reverse engineer the architecture, microcode, and operating system running on a state-of-the-art processor without the use of a digital logic probe—a virtually impossible task.

So it’s easy to understand why many of the operational details of humans’ brains (and even the brains of mice and much simpler organisms) remain so mysterious, even to neuroscientists. People often think of technology as applied science, but the scientific study of brains is essentially applied sensor technology. Each invention of a new way to measure brain activity—including scalp electrodes, MRIs, and microchips pressed into the surface of the cortex—has unlocked major advances in our understanding of the most complex, and most human, of all our organs.

The brain is essentially an electrical organ, and that fact plus its gelatinous consistency pose a hard technological problem. In 2010, I met with leading neuroscientists at the Howard Hughes Medical Institute (HHMI) to explore how we might use advanced microelectronics to invent a new sensor. Our goal: to listen in on the electrical conversations taking place among thousands of neurons at once in any given thimbleful of brain tissue.

Timothy D. Harris, a senior scientist at HHMI, told me that “we need to record every spike from every neuron” in a localized neural circuit within a freely moving animal. That would mean building a digital probe long enough to reach any part of the thinking organ, but slim enough not to destroy fragile tissues on its way in. The probe would need to be durable enough to stay put and record reliably for weeks or even months as the brain guides the body through complex behaviors.

Different kinds of neural probes pick up activity from firing neurons: three tines of a Utah array with one electrode on each tine [left], a single slender tungsten wire electrode [center], and a Neuropixels shank that has electrodes all along its length [right]. (credit: Massachusetts General Hospital/Imec/Nature Neuroscience)

For an electrical engineer, those requirements add up to a very tall order. But more than a decade of R&D by a global, multidisciplinary team of engineers, neuroscientists, and software designers has at last met the challenge, producing a remarkable new tool that is now being put to use in hundreds of labs around the globe.

As chief scientist at Imec, a leading independent nanoelectronics R&D institute, in Belgium, I saw the opportunity to extend advanced semiconductor technology to serve broad new swaths of biomedicine and brain science. Envisioning and shepherding the technological aspects of this ambitious project has been one of the highlights of my career.

We named the system Neuropixels because it functions like an imaging device, but one that records electrical rather than photonic fields. Early experiments already underway—including some in humans—have helped explore age-old questions about the brain. How do physiological needs produce motivational drives, such as thirst and hunger? What regulates behaviors essential to survival? How does our neural system map the position of an individual within a physical environment?

Successes in these preliminary studies give us confidence that Neuropixels is shifting neuroscience into a higher gear that will deliver faster insights into a wide range of normal behaviors and potentially enable better treatments for brain disorders such as epilepsy and Parkinson’s disease.

Version 2.0 of the system, demonstrated last year, increases the sensor count by about an order of magnitude over that of the initial version produced just four years earlier. It paves the way for future brain-computer interfaces that may enable paralyzed people to communicate at speeds approaching those of normal conversation. With version 3.0 already in early development, we believe that Neuropixels is just at the beginning of a long road of exponential Moore’s Law–like growth in capabilities.

In the 1950s, researchers used a primitive electronic sensor to identify the misfiring neurons that give rise to Parkinson’s disease. During the 70 years since, the technology has come far, as the microelectronics revolution miniaturized all the components that go into a brain probe: from the electrodes that pick up the tiny voltage spikes that neurons emit when they fire, to the amplifiers and digitizers that boost signals and reduce noise, to the thin wires that transmit power into the probe and carry data out.

By the time I started working with HHMI neuroscientists in 2010, the best electrophysiology probes, made by NeuroNexus and Blackrock Neurotech, could record the activity of roughly 100 neurons at a time. But they were able to monitor only cells in the cortical areas near the brain’s surface. The shallow sensors were thus unable to access deep brain regions—such as the hypothalamus, thalamus, basal ganglia, and limbic system—that govern hunger, thirst, sleep, pain, memory, emotions, and other important perceptions and behaviors. Companies such as Plexon make probes that reach deeper into the brain, but they are limited to sampling 10 to 15 neurons simultaneously. We set for ourselves a bold goal of improving on that number by one or two orders of magnitude.


To understand how brain circuits work, we really need to record the individual, rapid-fire activity of hundreds of neurons as they exchange information in a living animal. External electrodes on the skull don’t have enough spatial resolution, and functional MRI technology lacks the speed necessary to record fast-changing signals. Eavesdropping on these conversations requires being in the room where it happens: We needed a way to place thousands of micrometer-size electrodes directly in contact with vertical columns of neurons, anywhere in the brain. (Fortuitously, neuroscientists have discovered that when a brain region is active, correlated signals pass through the region both vertically and horizontally.)

These functional goals drove our design toward long, slender silicon shanks packed with electrical sensors. We soon realized, however, that we faced a major materials issue. We would need to use Imec’s CMOS fab to mass-produce complex devices by the thousands to make them affordable to research labs. But CMOS-compatible electronics are rigid when packed at high density.

The team realized that they could mount two Neuropixels 2.0 probes on one headstage, the board that sits outside the skull, providing a total of eight shanks with 10,240 recording electrodes. (credit: Imec)

The brain, in contrast, has the same elasticity as Greek yogurt. Try inserting strands of angel-hair pasta into yogurt and then shaking them a few times, and you’ll see the problem. If the pasta is too wet, it will bend as it goes in or won’t go in at all. Too dry, and it breaks. How would we build shanks that could stay straight going in yet flex enough inside a jiggling brain to remain intact for months without damaging adjacent brain cells?

Experts in brain biology suggested that we use gold or platinum for the electrodes and an organometallic polymer for the shanks. But none of those are compatible with advanced CMOS fabrication. After some research and lots of engineering, my Imec colleague Silke Musa invented a form of titanium nitride—an extremely tough electroceramic—that is compatible with both CMOS fabs and animal brains. The material is also porous, which gives it a low impedance; that quality is very helpful in getting currents in and clean signals out without heating the nearby cells, creating noise, and spoiling the data.

Thanks to an enormous amount of materials-science research and some techniques borrowed from microelectromechanical systems (MEMS), we are now able to control the internal stresses created during the deposition and etching of the silicon shanks and the titanium nitride electrodes so that the shanks consistently come out almost perfectly straight, despite being only 23 micrometers (µm) thick. Each probe consists of four parallel shanks, and each shank is studded with 1,280 electrodes. At 1 centimeter in length, the probes are long enough to reach any spot in a mouse’s brain. Mouse studies published in 2021 showed that Neuropixels 2.0 devices can collect data from the same neurons continuously for over six months as the rodents go about their lives.

The thousandfold difference in elasticity between CMOS-compatible shanks and brain tissue presented us with another major problem during such long-term studies: how to keep track of individual neurons as the probes inevitably shift in position relative to the moving brain. Neurons are 20 to 100 µm in size; each square pixel (as we call the electrodes) is 15 µm across, small enough so that it can record the isolated activity of a single neuron. But over six months of jostling activity, the probe as a whole can move within the brain by up to 500 µm. Any particular pixel might see several neurons come and go during that time.

The most common neural recording device today is the Utah array [top image, at left], which has one electrode at the tip of each of its tines. In contrast, a Neuropixels probe [top image, at right] has hundreds of electrodes along each of its long shanks. An image taken by a scanning electron microscope [bottom] magnifies the tips of several Neuropixels shanks.

The 1,280 electrodes on each shank are individually addressable, and the four parallel shanks give us an effectively 2D readout, which is quite analogous to a CMOS camera image, and the inspiration for the name Neuropixels. That similarity made me realize that this problem of neurons shifting relative to pixels is directly analogous to image stabilization. Just like the subject filmed by a shaky camera, neurons in a chunk of brain are correlated in their electrical behavior. We were able to adapt knowledge and algorithms developed years ago for fixing camera shake to solve our problem of probe shake. With the stabilization software active, we are now able to apply automatic corrections when neural circuits move across any or all of the four shanks.
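The analogy carries over almost directly into code. Here is a hedged sketch of the core idea—an illustrative one-dimensional registration, not Imec's production pipeline: treat the activity profile along a shank's electrodes as one column of an image, find the shift that best re-aligns it with an earlier profile by cross-correlation, and re-map channels by that amount:

```python
import numpy as np

def estimate_drift(ref_profile, new_profile):
    """Estimate probe drift, in electrode pitches, by 1D cross-correlation—
    the same trick image stabilization uses to find frame-to-frame shake."""
    ref = ref_profile - ref_profile.mean()
    new = new_profile - new_profile.mean()
    xcorr = np.correlate(new, ref, mode="full")
    return np.argmax(xcorr) - (len(ref) - 1)    # lag of best alignment

rng = np.random.default_rng(1)

# Synthetic stand-in: mean firing-rate profile along a shank's 1,280 sites,
# with one conspicuously active cluster of neurons.
ref = rng.poisson(5.0, 1280).astype(float)
ref[300:320] += 40.0

true_shift = 13                                 # ~13 sites = ~200 um of drift
new = np.roll(ref, true_shift) + rng.normal(0.0, 1.0, 1280)

shift = estimate_drift(ref, new)
print(f"estimated drift: {shift} sites")        # -> 13
corrected = np.roll(new, -shift)                # re-map channels to compensate
```

Because every electrode is individually addressable, the correction can be applied by simply re-assigning which physical sites feed which logical channels—no mechanical adjustment needed.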

Version 2.0 shrank the headstage—the board that sits outside the skull, controls the implanted probes, and outputs digital data—to the size of a thumbnail. A single headstage and base can now support two probes, each extending four shanks, for a total of 10,240 recording electrodes. Control software and apps written by a fast-growing user base of Neuropixels researchers allow real-time, 30-kilohertz sampling of the firing activity of 768 distinct neurons at once, selected at will from the thousands of neurons touched by the probes. That high sampling rate, which is 500 times as fast as the 60 frames per second typically recorded by CMOS imaging chips, produces a flood of data, but the devices cannot yet capture activity from every neuron contacted. Continued advances in computing will help us ease those bandwidth limitations in future generations of the technology.
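Back-of-the-envelope arithmetic shows the scale of that flood. The channel count and sample rate come from the figures above; the sample width is an assumed value for illustration:

```python
# Rough data-rate estimate for a Neuropixels 2.0 recording session.
# Channel count and 30 kHz rate are from the article; the 10-bit sample
# width is an assumption for illustration, not a published spec.
sites = 2 * 4 * 1280              # 10,240 electrodes on a two-probe headstage
channels = 768                    # recorded simultaneously
rate_hz = 30_000                  # samples per second per channel
bits_per_sample = 10              # assumed ADC resolution

bps = channels * rate_hz * bits_per_sample
print(f"recording {channels} of {sites} sites: {bps / 1e6:.0f} Mb/s, "
      f"~{bps / 8 / 1e6:.0f} MB/s, ~{bps / 8 * 3600 / 1e9:.0f} GB per hour")
# -> 230 Mb/s, ~29 MB/s, ~104 GB per hour of recording
```

Even while sampling fewer than a tenth of the available sites at once, an hour-long session produces on the order of a hundred gigabytes—the bandwidth limitation the next generations will have to ease.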

In just four years, we have nearly doubled the pixel density, doubled the number of pixels we can record from simultaneously, and increased the overall pixel count more than tenfold, while shrinking the size of the external electronics by half. That Moore’s Law–like pace of progress has been driven in large part by the use of commercial-scale CMOS and MEMS fabrication processes, and we see it continuing.

A next-gen design, Neuropixels 3.0, is already under development and on track for release around 2025, maintaining a four-year cadence. In 3.0, we expect the pixel count to leap again, to allow eavesdropping on perhaps 50,000 to 100,000 neurons. We are also aiming to add probes and to triple or quadruple the output bandwidth, while slimming the base by another factor of two.


Just as was true of microchips in the early days of the semiconductor industry, it’s hard to predict all the applications Neuropixels technology will find. Adoption has skyrocketed since 2017. Researchers at more than 650 labs around the world now use Neuropixels devices, and a thriving open-source community has appeared to create apps for them. It has been fascinating to see the projects that have sprung up: For example, the Allen Institute for Brain Science in Seattle recently used Neuropixels to create a database of activity from 100,000-odd neurons involved in visual perception, while a group at Stanford University used the devices to map how the sensation of thirst manifests across 34 different parts of the mouse brain.

We have begun fabricating longer probes of up to 5 cm and have defined a path to probes of 15 cm—big enough to reach the center of a human brain. The first trials of Neuropixels in humans were a success, and soon we expect the devices will be used to better position the implanted stimulators that quiet the tremors caused by Parkinson’s disease, with 10-µm accuracy. Soon, the devices may also help identify which regions are causing seizures in the brains of people with epilepsy, so that corrective surgery eliminates the problematic bits and no more.

The first Neuropixels device [top] had one shank with 966 electrodes. Neuropixels 2.0 [bottom] has four shanks with 1,280 electrodes each. Two probes can be mounted on one headstage. (credit: Imec)

Future generations of the technology could play a key role as sensors that enable people who become “locked in” by neurodegenerative diseases or traumatic injury to communicate at speeds approaching those of typical conversation. Every year, some 64,000 people worldwide develop motor neuron disease, one of the more common causes of such entrapment. Though a great deal more work lies ahead to realize the potential of Neuropixels for this critical application, we believe that fast and practical brain-based communication will require precise monitoring of the activity of large numbers of neurons for long periods of time.

An electrical, analog-to-digital interface from wetware to hardware has been a long time coming. But thanks to a happy confluence of advances in neuroscience and microelectronics engineering, we finally have a tool that will let us begin to reverse engineer the wonders of the brain.

This article appears in the June 2022 print issue as “Eavesdropping on the Brain.”

A Fab and a Vision

Author Barun Dutta [above] of Imec brought his semiconductor expertise to the field of neuroscience. (credit: Fred Loosen/Imec)

An unusual union of big tech and big science created Neuropixels, a fundamentally new technology for observing brains in action. The alliance also created a shared global facility for collaborative neuroscience akin to the CERN particle accelerator for high-energy physics.

The partnership began with conversations between Barun Dutta and Timothy D. Harris in 2010. Dutta is chief scientist of Imec, a leading nonprofit nanoelectronics R&D institute in Belgium, and had use of state-of-the-art semiconductor manufacturing facilities. Harris, who directs the applied physics and instrumentation group at the Howard Hughes Medical Institute’s Janelia Research Campus, in Virginia, had connections with world-class neuroscientists who shared his vision of building a new kind of probe for neuron-level observation of living brains.

Dutta and Harris recruited allies from their institutions and beyond. They raised US $10 million from the HHMI, the Allen Institute for Brain Science in Seattle, the Gatsby Foundation, and the Wellcome Trust to fund the intensive R&D needed to produce a working prototype. Dutta led the effort at Imec to tap into semiconductor technology that had been inaccessible to the neuroscience community. Imec successfully delivered the first generation of Neuropixels probes to some 650 labs globally, and the second generation is due to be released in 2022.

In addition to a vibrant open-source community that sprang up to develop software to analyze the large data sets generated by these brain probes, the Allen Institute has created OpenScope, a shared brain observatory to which researchers around the world can propose experiments to test hypotheses about brain function. “Think of us as being the Intel of neuroscience,” Dutta says. “We’re providing the chips, and then labs and companies and open-source software groups around the world are building code and doing experiments with them.”

Reference: https://ift.tt/01UYuRb

What Is Wi-Fi 7?




New generations of Wi-Fi have sprung onto the scene at a rapid pace in recent years. After a storied five-year presence, Wi-Fi 5 was usurped in 2019 by Wi-Fi 6, only for the latter to be toppled a year later in 2020 by an intermediate generation, Wi-Fi 6E. And now, just a couple years later, we’re on the verge of Wi-Fi 7.

Wi-Fi 7 (the official IEEE standard is 802.11be) may give Wi-Fi 6 only a scant few years in the spotlight, but it’s not just an upgrade for the sake of an upgrade. Several new technologies—and some that debuted in Wi-Fi 6E but haven’t yet entirely come into their own—will allow Wi-Fi 7 routers and devices to make full use of an entirely new band of spectrum at 6 gigahertz. This spectrum—first tapped into with Wi-Fi 6E—adds a third wireless band alongside the more familiar 2.4-GHz and 5-GHz bands.

New technologies called automated frequency coordination, multi-link operations, and 4K QAM (all described below) will further increase wireless capacity, reduce latency, and generally make Wi-Fi networks more flexible and responsive for users.

Automated Frequency Coordination (AFC)

Automated frequency coordination (AFC) solves a thorny problem with the 6-GHz band in that, while Wi-Fi is the new kid in town, it’s moving into an otherwise well-staked-out portion of the spectrum. In the United States, for example, federal agencies like NASA and the Department of Defense often use the 6-GHz band to communicate with geostationary satellites. Weather radar systems and radio astronomers rely on this band a lot as well. And these incumbents really don’t appreciate errant Wi-Fi signals muscling in on their frequency turf. Fortunately, the preexisting uses of 6-GHz microwaves are largely predictable, localized, and stationary. So AFC allows Wi-Fi into the band by making it possible to coordinate with and work around existing use cases.

“We’re looking at where all of these fixed services are located,” says Chris Szymanski, a director of product marketing at Broadcom. “We’re looking at the antenna patterns of these fixed services, and we’re looking at the direction they’re pointing.” All of this information is added into cloud-based databases. The databases will also run interference calculations, so that when a Wi-Fi 7 access point checks the database, it will be alerted to any incumbent operators—and their particulars—in its vicinity.

AFC makes it possible for Wi-Fi 7 networks to operate around incumbents by preventing transmissions in bands that would interfere with nearby weather radar, radio telescopes, or other users. At the same time, it frees up Wi-Fi 7 networks to broadcast at a higher power when they know there’s no preexisting spectrum user nearby to worry about. Szymanski says that AFC will let Wi-Fi 7 networks transmit on the 6-GHz band with 63 times as much power when the coast is clear as they could use if they had to maintain a uniform low-level transmission power to avoid disturbing any incumbents. More power translates to better service over longer distances, more reliability, and greater throughput.
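At heart, the decision logic an access point runs is a database lookup followed by a power selection. The sketch below is hypothetical—the database fields, power figures, and coordination distance are invented for illustration, not a real AFC schema or regulatory limits—but it mirrors the flow Szymanski describes. (For scale, a 63-fold power increase is about 18 decibels.)

```python
import math

# Toy stand-in for the cloud-hosted AFC database of incumbent 6-GHz users.
# All entries, field names, and numbers are illustrative assumptions.
INCUMBENTS = [
    {"service": "fixed microwave link", "mhz": (6000, 6100), "km": 3.1},
    {"service": "weather radar",        "mhz": (6400, 6440), "km": 40.0},
]

LOW_POWER_MW = 250                  # cautious baseline power (assumed)
HIGH_POWER_MW = LOW_POWER_MW * 63   # "63 times as much power" ~ +18 dB

def allowed_power_mw(chan_mhz, protect_km=10.0):
    """Check the database for protected incumbents that overlap the channel
    and sit within the coordination distance; pick transmit power to suit."""
    lo, hi = chan_mhz
    for inc in INCUMBENTS:
        f_lo, f_hi = inc["mhz"]
        if lo < f_hi and hi > f_lo and inc["km"] < protect_km:
            return LOW_POWER_MW     # incumbent nearby: stay quiet
    return HIGH_POWER_MW            # coast is clear: broadcast at full power

for chan in [(6015, 6035), (6415, 6435), (6595, 6615)]:
    p = allowed_power_mw(chan)
    print(f"{chan} MHz -> {p} mW ({10 * math.log10(p):.0f} dBm)")
```

In the real system the heavy lifting—antenna patterns, locations, and interference calculations—happens on the database side, as Szymanski notes; the access point just asks and obeys.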

AFC is not new to Wi-Fi 7. It debuted with Wi-Fi 6E, the incremental half-step generation between Wi-Fi 6 and Wi-Fi 7 that emerged as a consequence of the 6-GHz band becoming available in many places. With Wi-Fi 7, however, more classes of wireless devices will receive AFC certification, expanding its usefulness and impact.

Multi-link Operations (MLO)

Multi-link operations (MLO) will take advantage of the fact that Wi-Fi’s existing 5-GHz band and new 6-GHz band are much closer to each other than the 2.4-GHz and 5-GHz bands are. Wi-Fi access points have long had the ability to support transmissions over multiple wireless channels at the same time. With Wi-Fi 7, client devices like cellphones and IoT gadgets will be able to access multiple channels at the same time, too. (Think about how you currently have to connect to either a 2.4-GHz network or a 5-GHz network when you’re joining a Wi-Fi network.)

MLO will allow a device to connect to both a 5-GHz channel and a 6-GHz channel at the same time and use both to send and receive data. This wasn’t really possible before the addition of the 6-GHz band, explains Andy Davidson, a senior director of product technology planning at Qualcomm. The 5-GHz and 6-GHz bands are close enough that they have functionally the same speeds. Trying the same trick with the 2.4-GHz and 5-GHz bands would drag down the effectiveness of the 5-GHz transmissions as they waited for the slower 2.4-GHz transmissions to catch up.

This is especially clear in alternating multi-link, a type of MLO in which, as the name implies, a device alternates between two channels, sending portions of its transmissions on each (as opposed to simultaneous multi-link, in which the two channels are simply used in tandem). Using alternating multi-link with the 2.4-GHz and 5-GHz bands is like trying to run two trains at different speeds on one track. “If one of those trains is slow, especially if they’re very slow, it means your fast train can’t even do anything because it’s waiting for the slow train to complete” its trip, says Davidson.
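A toy calculation makes the train analogy concrete. In this deliberately simplified model of alternating multi-link—illustrative link rates, with the transfer done only when the slower link delivers its half—pairing two similar-speed links nearly doubles throughput, while pairing a fast link with a slow one is worse than using the fast link alone:

```python
def alternating_mlo_mbps(rate_a, rate_b, total_megabits=1000.0):
    """Effective throughput when half the traffic rides each link and the
    transfer completes only when the slower half arrives. (Simplified toy
    model with an even split; rates in Mb/s are illustrative.)"""
    t_done = max((total_megabits / 2) / rate_a, (total_megabits / 2) / rate_b)
    return total_megabits / t_done

print(alternating_mlo_mbps(1200, 1150))  # 5 GHz + 6 GHz pair: ~2,300 Mb/s
print(alternating_mlo_mbps(1200, 150))   # 5 GHz + 2.4 GHz pair: ~300 Mb/s
```

Splitting evenly across 1,200 and 150 Mb/s links nets just 300 Mb/s—a quarter of what the fast link manages alone. A smarter scheduler could split traffic unevenly, but alternation between badly mismatched links still leaves the fast channel idling: Davidson's fast train waiting on the slow one.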

4K Quadrature Amplitude Modulation (4K QAM)

There’s also 4K QAM—short for quadrature amplitude modulation (more on the “4K” in a moment). At its core, QAM is a way of sending multiple bits of information in the same instant of a transmission by modulating both the amplitude and the phase of the signal. The “4K” in 4K QAM means that each transmitted symbol is selected from more than 4,000 possible amplitude-and-phase combinations—4,096, to be exact.
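Since 4,096 = 2^12, each 4K QAM symbol carries 12 bits, up from 10 bits for the 1024-QAM that Wi-Fi 6 topped out at—a 20 percent increase in raw bits per symbol. Here is a minimal sketch of a generic square-constellation mapping (a textbook illustration, not any chipset's actual encoder; real systems also Gray-code the bits, which is omitted here):

```python
import math

def qam_symbol(bits, m=4096):
    """Map log2(m) bits onto one point of a square m-QAM constellation:
    half the bits pick the in-phase (I) level, half the quadrature (Q)
    level, giving sqrt(m) amplitude steps on each axis."""
    k = int(math.log2(m))          # bits per symbol: 12 for 4K QAM
    side = int(math.sqrt(m))       # 64 amplitude levels per axis
    i_level = bits >> (k // 2)     # high half of the bits -> I
    q_level = bits & (side - 1)    # low half of the bits  -> Q
    # Center the levels around zero: ..., -3, -1, +1, +3, ...
    return (2 * i_level - (side - 1), 2 * q_level - (side - 1))

print(math.log2(4096), "bits per symbol")   # 12.0
print(qam_symbol(0b101011110000))           # one of 4,096 (I, Q) points
```

Quadrupling the constellation from 1,024 to 4,096 points buys only those two extra bits per symbol, and the points crowd closer together—which is why higher QAM orders demand a cleaner signal to decode reliably.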

4K QAM is also not new to Wi-Fi 7, but Davidson says the new generation will make 4K QAM standard. Like multi-link operations and automated frequency coordination, 4K QAM increases capacity and, by extension, reduces latency.

When Wi-Fi 7 becomes available, there will be differences between regions. The availability of spectrum varies between countries, depending on how their respective regulatory agencies have parceled out spectrum. For example, while multi-link operations in the United States will be able to use channels at both 5 GHz and 6 GHz, the 6-GHz band won’t be available for Wi-Fi use in China. Instead, Wi-Fi devices in China can use two different channels in the 5-GHz band.

Companies including Broadcom and Qualcomm have announced their Wi-Fi 7 components in recent weeks. That doesn’t mean Wi-Fi 7 routers and cellphones are right around the corner. Over the coming months, those devices will be built and certified using the components from Broadcom, Qualcomm, and others. But the wait won’t be too long—Wi-Fi 7 devices will likely be available by the end of the year.

Reference: https://ift.tt/xHs5CBV

Friday, May 27, 2022

Redefining “privacy” and “personal security” in a changing infosec world


Redefining privacy at Ars Frontiers. (video)

At the Ars Frontiers event in Washington, DC, I had the privilege of moderating two panels on two closely linked topics: digital privacy and information security. Despite significant attempts to improve things, conflicting priorities and inadequate policy have weakened both privacy and security. Some of the same fundamental issues underlie the weaknesses in both: Digital privacy and information security are still too demanding for average people to manage, let alone master.

Our privacy panel consisted of Electronic Frontier Foundation deputy executive director Kurt Opsahl, security researcher Runa Sandvik, and ACLU Senior Policy Analyst Jay Stanley. Individuals trying to protect their digital privacy face "a constant arms race between what the companies are trying to do, or doing because they can, versus then what people are saying that they either like or don't like," Sandvik explained.


Reference: https://ift.tt/cOWjpNF

Charles Babbage’s Difference Engine Turns 200




It was an idea born of frustration, or at least that’s how Charles Babbage would later recall the events of the summer of 1821. That fateful summer, Babbage and his friend and fellow mathematician John Herschel were in England editing astronomical tables. Both men were founding members of the Royal Astronomical Society, but editing astronomical tables is a tedious task, and they were frustrated by all of the errors they found. Exasperated, Babbage exclaimed, “I wish to God these calculations had been executed by steam.” To which Herschel replied, “It is quite possible.”

Babbage and Herschel were living in the midst of what we now call the Industrial Revolution, and steam-powered machinery was already upending all types of business. Why not astronomy too?


Babbage set to work on the concept for a Difference Engine, a machine that would use a clockwork mechanism to solve polynomial equations. He soon had a small working model (now known as Difference Engine 0), and on 14 June 1822, he presented a one-page “Note respecting the Application of Machinery to the Calculation of Astronomical Tables” to the Royal Astronomical Society. His note doesn’t go into much detail—it’s only one page, after all—but Babbage claimed to have “repeatedly constructed tables of squares and triangles of numbers” as well as of the very specific formula x² + x + 41. He ends his note with much optimism: “From the experiments I have already made, I feel great confidence in the complete success of the plans I have proposed.” That is, he wanted to build a full-scale Difference Engine.

Perhaps Babbage should have tempered his enthusiasm. His magnificent Difference Engine proved far more difficult to build than his note suggested.

It wasn’t for lack of trying, or lack of funds. For Babbage managed to do something else that was almost as unimaginable: He convinced the British government to fund his plan. The government saw the value in a machine that could calculate the many numerical tables used for navigation, construction, finance, and engineering, thereby reducing human labor (and error). With an initial investment of £1,700 in 1823 (about US $230,000 today), Babbage got to work.

The Difference Engine was a calculator with 25,000 parts

The 19th-century mathematician Charles Babbage’s visionary contributions to computing were rediscovered in the 20th century. (credit: The Picture Art Collection/Alamy)

Babbage based his machine on the mathematical method of finite differences, which allows you to tabulate polynomial functions through a series of iterative steps, each building on the differences between successive values. The method had the advantage of requiring only simple addition, which was far easier to implement with gear wheels than multiplication and division would have been. (The Computer History Museum has an excellent description of how the Difference Engine works.) Although Babbage had once dreamed of a machine powered by steam, his actual design called for a human to turn a crank to advance each iteration of calculations.
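To see why addition suffices, take the very formula Babbage said he had tabulated, x² + x + 41. For a quadratic, the second difference between successive values is constant, so once the first value and the initial differences are set up, every further entry follows from two additions—in effect, one turn of the crank per table row. A minimal sketch of the scheme (the by-hand seeding mirrors how the engine's wheels were initialized):

```python
def difference_engine_table(f, n_rows):
    """Tabulate a quadratic f by the method of finite differences:
    after seeding the first row, every new entry needs only addition."""
    value = f(0)                         # starting value
    d1 = f(1) - f(0)                     # first difference
    d2 = (f(2) - f(1)) - (f(1) - f(0))   # second difference (constant)
    table = []
    for _ in range(n_rows):
        table.append(value)
        value += d1                      # each "crank turn" is two additions:
        d1 += d2                         # no multiplication anywhere
    return table

f = lambda x: x * x + x + 41             # Babbage's example formula
print(difference_engine_table(f, 8))     # [41, 43, 47, 53, 61, 71, 83, 97]
```

Babbage's design scaled this idea up to 16-digit numbers and six orders of difference, enough to approximate logarithmic and trigonometric tables piecewise with polynomials.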

Difference Engine No. 1 was divided into two main parts: the calculator and the printing mechanism. Although Babbage considered using different numbering systems (binary, hexadecimal, and so on), he decided to stick with the familiarity of the base-10 decimal system. His design in 1830 had a capacity of 16 digits and six orders of difference. Each number value was represented by its own wheel/cam combination. The wheels represented only whole numbers; the machine was designed to jam if a result came out between whole numbers.

As the calculator cranked out the results, the printing mechanism did two things: It printed a table while simultaneously making a stereotype mold (imprinting the results in a soft material such as wax or plaster of paris). The mold could be used to make printing plates, and because it was made at the same time as the calculations, there would be no errors introduced by humans copying the results.

Difference Engine No. 1 contained more than 25,000 distinct parts, split roughly equally between the calculator and the printer. The concepts of interchangeable parts and standardization were still in their infancy. Babbage thus needed a skilled craftsman to manufacture the many pieces. Marc Isambard Brunel, part of the father-and-son team of engineers who had constructed the first tunnel under the Thames, recommended Joseph Clement. Clement was an award-winning machinist and draftsman whose work was valued for its precision.

Babbage and Clement were both brilliant at their respective professions, but they often locked horns. Clement knew his worth and demanded to be paid accordingly. Babbage grew concerned about costs and started checking on Clement’s work, which eroded trust. The two did produce a portion of the machine [shown at top] that was approximately one-seventh of the complete engine and featured about 2,000 moving parts. Babbage demonstrated the working model in the weekly soirees he held at his home in London.

The machine impressed many of the intellectual society set, including a teenage Ada Byron, who understood the mathematical implications of the machine. Byron was not allowed to attend university due to her sex, but her mother supported her academic interests. Babbage suggested several tutors in mathematics, and the two remained correspondents over their lifetimes. In 1835, Ada married William King. Three years later, when he became the first Earl of Lovelace, Ada became Countess of Lovelace. (More about Ada Lovelace shortly.)

Despite the favorable chatter in society circles about Babbage’s Difference Engine, trouble was brewing: cost overruns, political opposition to the project, and Babbage and Clement’s clashing personalities, all of which caused extreme delays. Eventually, the relationship between Babbage and Clement reached a breaking point. After yet another fight over finances, Clement abruptly quit in 1832.

The Analytical Engine was a general-purpose computer

A watercolor portrait of the British mathematician Ada Lovelace shows a young woman with dark curls in a white dress. Ada Lovelace championed Charles Babbage’s work by, among other things, writing the first computer algorithm for his unbuilt Analytical Engine.Interim Archives/Getty Images

Despite these setbacks, Babbage had already started developing a more ambitious machine: the Analytical Engine. Whereas the Difference Engine was designed to solve polynomials, this new machine was intended to be a general-purpose computer. It was composed of several smaller devices: one to list the instruction set (on punch cards popularized by the Jacquard loom); one (called the mill) to process the instructions; one (which Babbage called the store but we would consider the memory) to store the intermediary results; and one to print out the results.

In 1840 Babbage gave a series of lectures in Turin on his Analytical Engine, to much acclaim. Italian mathematician Luigi Federico Menabrea published a description of the engine in French in 1842, “Notions sur la machine analytique.” This is where Lady Lovelace returns to the story.

Lovelace translated Menabrea’s description into English, discreetly making a few corrections. The English scientist Charles Wheatstone, a friend of both Lovelace and Babbage, suggested that Lovelace augment the translation with explanations of the Analytical Engine to help advance Babbage’s cause. The resulting “Notes,” published in 1843 in Richard Taylor’s Scientific Memoirs, was three times the length of Menabrea’s original essay and contained what many historians consider the first algorithm or computer program. It is quite an accomplishment to write a program for an unbuilt computer whose design was still in flux. Filmmakers John Fuegi and Jo Francis captured Ada Lovelace’s contributions to computing in their 2003 documentary Ada Byron Lovelace: To Dream Tomorrow. They also wrote a companion article published in the IEEE Annals of the History of Computing, entitled “Lovelace & Babbage and the Creation of the 1843 ‘Notes’.”
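The centerpiece of those “Notes”—the program in question—was a step-by-step procedure for computing Bernoulli numbers on the unbuilt engine. Here is a compact modern rendering of that computation, using the standard recurrence for Bernoulli numbers (Lovelace's actual table of operations was laid out for the engine's mill and store and is not reproduced here):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n from the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
        B.append(-s / (m + 1))           # solve the recurrence for B_m
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

This modern version works in exact fractions; Lovelace's program, tailored to the engine's decimal wheels, traced an equivalent recurrence through the mill's operations step by step.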

Although Lovelace’s translation and “Notes” were hailed by leading scientists of the day, they did not win Babbage any additional funding. Prime Minister Robert Peel had never been a fan of Babbage’s; as a member of Parliament back in 1823, he had been a skeptic of Babbage’s early design. Now that Peel was in a position of power, he secretly solicited condemnations of the Difference Engine. In a stormy meeting on 11 November 1842, the two men argued past each other. In January 1843, Babbage was informed that Parliament was sending the finished portion of Difference Engine No. 1 to the King’s College Museum. Two months later, Parliament voted to withdraw support for the project. By then, the government had spent £17,500 (about US $3 million today), had waited 20 years, and still didn’t have a working machine. You can see why Peel thought it was a waste.

But Babbage, perhaps reinvigorated by his work on the Analytical Engine, decided to return to the Difference Engine in 1846. Difference Engine No. 2 required only 8,000 parts and had a much more elegant and efficient design. He estimated it would weigh 5 tons and measure 11 feet long and 7 feet high. He worked for another two years on the machine and left 20 detailed drawings, which were donated to the Science Museum after he died in 1871.

A modern team finally builds Babbage’s Difference Engine

In 1985, a team at the Science Museum in London set out to build the streamlined Difference Engine No. 2 based on Babbage’s drawings. The 8,000-part machine was finally completed in 2002. (credit: Science Museum Group)

Although Difference Engine No. 2, like all the other engines, was never completed during Babbage’s lifetime, a team at the Science Museum in London set out to build one. Beginning in 1985, under the leadership of Curator of Computing Doron Swade, the team created new drawings adapted to modern manufacturing techniques. In the process, they sought to answer a lingering question: Was 19th-century precision a limiting factor in Babbage’s design? The answer is no. The team concluded that if Babbage had been able to secure enough funding and if he had had a better relationship with his machinist, the Difference Engine would have been a success.

That said, some of the same headaches that plagued Babbage also affected the modern team. Despite leaving behind fairly detailed designs, Babbage left no introductory notes or explanations of how the pieces worked together. Much of the groundbreaking work interpreting the designs was done by Australian computer scientist and historian Allan G. Bromley, beginning in 1979. Even so, the plans had dimension inconsistencies, errors, and entire parts omitted (such as the driving mechanism for the inking), as described by Swade in a 2005 article for the IEEE Annals of the History of Computing.

The team had wanted to complete the Difference Engine by 1991, in time for the bicentenary of Babbage’s birth. They did finish the calculating section by then. But the printing and stereotyping section—the part that would have alleviated all of Babbage’s frustrations in editing those astronomical tables—took another nine years. The finished product is on display at the Science Museum.

A duplicate engine was built with funding from former Microsoft chief technology officer Nathan Myhrvold. The Computer History Museum displayed that machine from 2008 to 2016, and it now resides in the lobby of Myhrvold’s Intellectual Ventures in Bellevue, Wash.

The title of the textbook for the very first computer science class I ever took was The Analytical Engine. It opened with a historical introduction about Babbage, his machines, and his legacy. Babbage never saw his machines built, and after his death, the ideas passed into obscurity for a time. Over the course of the 20th century, though, his genius became more clear. His work foreshadowed many features of modern computing, including programming, iteration, looping, and conditional branching. These days, the Analytical Engine is often considered an invention 100 years ahead of its time. It would be anachronistic and ahistorical to apply today’s computer terminology to Babbage’s machines, but he was clearly one of the founding visionaries of modern computing.

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the June 2022 print issue as “The Clockwork Computer.”

Reference: https://ift.tt/QjUZ3Dp

The S.E.C. Sent a Letter to Musk About His Twitter Shares in April


The regulator questioned whether the Tesla chief executive had disclosed his stake at the right time.

How Influencers Hype Crypto, Without Disclosing Their Financial Ties


Logan Paul, Paul Pierce and other celebrities have promoted risky and obscure digital currencies, sometimes failing to mention their conflicts of interest.

Accused of Cheating by an Algorithm, and a Professor She Had Never Met


An unsettling glimpse at the digitization of education.

Thursday, May 26, 2022

Broadcom will pay $61 billion to become the latest company to acquire VMware


(credit: VMware)

Chipmaker Broadcom will be acquiring VMware for $61 billion in cash and stock, the companies announced today.

Broadcom is best known for designing and selling a wide range of chips for wired and wireless communication, including Wi-Fi and Bluetooth chips and the processors that power many routers and modems. But the company has spent billions in recent years to acquire an enterprise software portfolio—$18.9 billion for CA Technologies in 2018 and $10.7 billion for Symantec in 2019. The VMware buy is much larger than either of those purchases, but it fits the pattern of Broadcom's other software acquisitions.

Once the acquisition is completed, the Broadcom Software Group will adopt the VMware name. If the deal is approved, Broadcom expects the transaction to close at some point in 2023.


Reference: https://ift.tt/zuQlCBw

Broadcom to Acquire VMware in $61 Billion Enterprise Computing Deal


The resulting combination of chip company and software maker would be one of the most important suppliers of technology to the cloud computing market.

Wednesday, May 25, 2022

Twitter Fined in Privacy Settlement, as Musk Commits More Equity for Deal


Twitter did not do enough to tell its users that the personal data it had collected was used partly to help marketers target ads, the F.T.C. and Justice Department said.

Debunking 3 Viral Rumors About the Texas Shooting


False and unfounded claims started to spread online shortly after the killings.

‘Quantum Internet’ Inches Closer With Advance in Data Teleportation


Scientists have improved their ability to send quantum information across distant computers — and have taken another step toward the network of the future.

World Builders Put Happy Face On Superintelligent AI




One of the biggest challenges in a World Building competition that asked teams to imagine a positive future with superintelligent AI: Make it plausible.

The Future of Life Institute, a nonprofit that focuses on existential threats to humanity, organized the contest and is offering a hefty prize purse of up to $140,000, to be divided among multiple winners. Last week FLI announced the 20 finalists from 144 entries, and the group will declare the winners on June 15.


The contest aims to counter the common dystopian narrative of artificial intelligence that becomes smarter than humans, escapes our control, and makes the world go to hell in one way or another. The philosopher Nick Bostrom famously imagined a factory AI turning all the world’s matter into paperclips to fulfill its objective, and many respected voices in the field, such as computer scientist Stuart Russell, have argued that it’s essential to begin work on AI safety now, before superintelligence is achieved. Add in the sci-fi novels, TV shows, and movies that tell dark tales of AI taking over—the Blade Runners, the Westworlds, the Terminators, the Matrices (both original recipe and Resurrections)—and it’s no wonder the public feels wary of the technology.

Anna Yelizarova, who’s managing the contest and other projects at FLI, says she feels bombarded by images of dystopia in the media, and says it makes her wonder “what kind of effect that has on our worldview as a society.” She sees the contest partly as a way to provide hopeful visions of the future. “We’re not trying to push utopia,” she says, noting that the worlds built for the contest are not perfect places with zero conflicts or struggles. “We’re just trying to show futures that are not dystopian, so people have something to work toward,” she says.

The contest asked a lot from the teams that entered: They had to provide a timeline of events from now until 2045 that includes the invention of artificial general intelligence (AGI), two “day in the life” short stories, answers to a list of questions, and a media piece reflecting their imagined world.

Yelizarova says that another motivation for the contest was to see what sorts of ideas people would come up with. Imagining a hopeful future with AGI is inherently more difficult than imagining a dystopian one, she notes, because it requires coming up with solutions to some of the biggest challenges facing humanity. For example, how to ensure that world governments work together to deploy AGI responsibly and don't treat its development as an arms race? And how to create AGI agents whose goals are aligned with those of humans? “If people are suggesting new institutions or new ways of tackling problems,” Yelizarova says, “those can become actual policy efforts we can pursue in the real world.”


It's worth diving into the worlds created by the 20 finalists and browsing through the positive possible futures. IEEE Spectrum corresponded with two finalists that have very different visions.

The first, a solo effort by Rebecca Rapple of Portland, Ore., imagines a world in which an AGI agent named TAI has a direct connection with nearly every human on earth via brain-computer interfaces. The world's main currency is one of TAI’s devising called Contribucks, which are earned via positive social contributions and which lose value the longer they’re stored. People routinely plug into a virtual experience called COMMUNITAS, which Rapple’s entry describes as “a TAI-facilitated ecstatic group experience where sentience communes, sharing in each other’s experiences directly through TAI.” While TAI is not directly under humans’ control, she has stated that “she loves every soul” and people both trust her and think she’s helping them to live better lives.

Rapple, who describes herself as a pragmatic optimist, says that crafting her world was an uplifting process. “The assumption at the core of my world is that for a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better,” she tells IEEE Spectrum. “Better to ourselves, our neighbors, our planet. And the idea that such a world might be possible is a future that I want to fight for.”

The second team IEEE Spectrum corresponded with is a trio from Nairobi, Kenya: Conrad Whitaker, Dexter Findley, and Tracey Kamande. This team imagined AGI emerging from a “new non-von Neumann computing paradigm” in which memory is fully integrated into processing. As an AGI agent describes it in one of the team's short stories, AGI has resulted “from the digital replication of human brain structure, with all its separate biological components, neural networks and self-referential loops. Nurtured in a naturalistic setting with constant positive human interaction, just like a biological human infant.”

In this world there are over 1,000 AGIs, or digital humans, by the year 2045; the machine learning and neural networks that we know as AI today are widely used for optimization problems but aren’t considered true, general-purpose intelligence—those AIs, in other words, are not AGI. Many people in this imagined future live in AGI-organized “digital nations” that they can join regardless of their physical locations, and which bring many health and social benefits.

In an email, the Kenyan team says they aimed to paint a picture of a future that is “strong on freedoms and rights for both humans and AGIs—going so far as imagining that a caring and respectful environment that encouraged unbridled creativity and discourse (conjecture and criticism) was critical to bringing an ‘artificial person’ to maturity in the first place.” They imagine that such AGI agents wouldn’t see themselves as separate from humans as they would be “human-like” in both their experience of knowledge and their sense of self, and that the AGI agents would therefore have a “human-like” capacity for moral knowledge.

Meaning that these AGI agents would see the problem with turning all humans on earth into paperclips.

Reference: https://ift.tt/Y4PXpMZ

I Tried Apple’s Self-Repair Program With My iPhone. Disaster Ensued.


Apple’s do-it-yourself tools and instructions are far from ideal for most of us. I know this because I broke my phone trying to use them.


Tuesday, May 24, 2022

These Investors Are Putting $1 Billion Into Trump Media


A draft document contains the names of dozens of hedge funds and others behind the $1 billion private investment announced in December.

6 Podcasts About the Dark Side of the Internet


These shows tap into the dangers of our wired life, exploring cybercrime, cryptocurrency and the many flavors of horror that lurk on the dark web.

Digital driver’s license billed as harder than plastic to forge is easily forged


(credit: Service NSW)

In late 2019, the government of New South Wales in Australia rolled out digital driver's licenses. The new licenses allowed people to use their iPhone or Android device to show proof of identity and age during roadside police checks or at bars, stores, hotels, and other venues. ServiceNSW, as the government body is usually referred to, promised it would “provide additional levels of security and protection against identity fraud, compared to the plastic [driver's license]” citizens had used for decades.

Now, 30 months later, security researchers have shown that it’s trivial for just about anyone to forge fake identities using the digital driver's licenses, or DDLs. The technique allows people under the legal drinking age to change their date of birth and fraudsters to forge fake identities. The process takes well under an hour, doesn’t require any special hardware or expensive software, and generates fake IDs that pass inspection by the electronic verification system used by police and participating venues. All of this, despite assurances that security was a key priority for the newly created DDL system.

“To be clear, we do believe that if the Digital Driver's Licence was improved by implementing a more secure design, then the above statement made on behalf of ServiceNSW would indeed be true, and we would agree that the Digital Driver's Licence would provide additional levels of security against fraud compared to the plastic driver's licence,” Noah Farmer, the researcher who identified the flaws, wrote in a post published last week.


Reference: https://ift.tt/5U4HKIs

Interested in Solar Panels? Here Is Some Advice.


Buying a solar energy system can be expensive and confusing. Here are some things to think about if you are in the market for solar panels.

Why Has the CPI Inflation Calculation Changed Over Time?


As prices soar, some critics are raising doubts about the official inflation figures. But many economists say the figures are an accurate snapshot of rising prices.

Monday, May 23, 2022

Tridge Is Providing a Middleman Service for Global Food Trade


The Korean start-up Tridge is working to create a network for buyers and sellers, collecting valuable data in the process.
