Wednesday, July 31, 2024

Cloudflare once again comes under pressure for enabling abusive sites


(credit: Getty Images)

A familiar debate is once again surrounding Cloudflare, the content delivery network that provides a free service that protects websites from being taken down in denial-of-service attacks by masking their hosts: Is Cloudflare a bastion of free speech or an enabler of spam, malware delivery, harassment and the very DDoS attacks it claims to block?

The controversy isn't new for Cloudflare, a network operator that has often taken a hands-off approach to moderating the enormous amount of traffic flowing through its infrastructure. With Cloudflare helping deliver 16 percent of global Internet traffic, processing 57 million web requests per second, and serving anywhere from 7.6 million to 15.7 million active websites, the decision to serve just about any actor, regardless of their behavior, has been the subject of intense disagreement, with many advocates of free speech and Internet neutrality applauding the policy and many of those fighting crime and harassment online regarding the company as a pariah.

Content neutral or abuse enabling?

Spamhaus—a nonprofit organization that provides intelligence and blocklists to stem the spread of spam, phishing, malware, and botnets—has become the latest to criticize Cloudflare. On Tuesday, the project said Cloudflare provides services for 10 percent of the domains listed in its domain block list and, to date, serves sites that are the subject of more than 1,200 unresolved complaints regarding abuse.


Reference: https://ift.tt/mdEDOty

ChatGPT Advanced Voice Mode impresses testers with sound effects, catching its breath


A stock photo of a robot whispering to a man. (credit: AndreyPopov via Getty Images)

On Tuesday, OpenAI began rolling out an alpha version of its new Advanced Voice Mode to a small group of ChatGPT Plus subscribers. This feature, which OpenAI previewed in May with the launch of GPT-4o, aims to make conversations with the AI more natural and responsive. In May, the feature triggered criticism of its simulated emotional expressiveness and prompted a public dispute with actress Scarlett Johansson over accusations that OpenAI copied her voice. Even so, early tests of the new feature shared by users on social media have been largely enthusiastic.

In early tests reported by users with access, Advanced Voice Mode allows them to have real-time conversations with ChatGPT, including the ability to interrupt the AI mid-sentence almost instantly. It can sense and respond to a user's emotional cues through vocal tone and delivery, and provide sound effects while telling stories.

But what has initially caught many people off guard is how the voices simulate taking a breath while speaking.


Reference: https://ift.tt/kxmD3iv

Will This Flying Camera Finally Take Off?




Ten years. Two countries. Multiple redesigns. Some US $80 million invested. And, finally, Zero Zero Robotics has a product it says is ready for consumers, not just robotics hobbyists—the HoverAir X1. The company has sold several hundred thousand flying cameras since the HoverAir X1 started shipping last year. It hasn’t gotten the millions of units into consumer hands—or flying above them—that its founders would like to see, but it’s a start.

“It’s been like a 10-year-long Ph.D. project,” says Zero Zero founder and CEO Meng Qiu Wang. “The thesis topic hasn’t changed. In 2014 I looked at my cell phone and thought that if I could throw away the parts I don’t need—like the screen—and add some sensors, I could build a tiny robot.”

I first spoke to Wang in early 2016, when Zero Zero came out of stealth with its version of a flying camera—at $600. Wang had been working on the project for two years. He started the project in Silicon Valley, where he and cofounder Tony Zhang were finishing up Ph.D.s in computer science at Stanford University. Then the two decamped for China, where development costs are far less.

Flying cameras were a hot topic at the time; startup Lily Robotics demonstrated a $500 flying camera in mid-2015 (and was later charged with fraud for faking its demo video), and in March of 2016 drone-maker DJI introduced a drone with autonomous flying and tracking capabilities that turned it into much the same type of flying camera that Wang envisioned, albeit at the high price of $1400.

Wang aimed to make his flying camera cheaper and easier to use than these competitors by relying on image processing for navigation—no altimeter, no GPS. In this approach, which has changed little since the first design, one camera looks at the ground and algorithms follow the camera’s motion to navigate. Another camera looks out ahead, using facial and body recognition to track a single subject.
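What Wang describes is essentially visual odometry: estimate how the ground texture shifts between frames, and infer the drone's motion from that shift. A toy block-matching sketch in Python (illustrative only, not Zero Zero's production algorithm, which tracks many features at high frame rates):

```python
def estimate_shift(prev, curr, max_shift=2):
    """Estimate the (dy, dx) shift of the ground texture between two small
    grayscale frames by brute force: try every candidate shift and keep the
    one minimizing the sum of absolute differences (SAD)."""
    h, w = len(prev), len(prev[0])
    best_sad, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = 0
            # Compare an interior window so shifted indices stay in bounds.
            for y in range(max_shift, h - max_shift):
                for x in range(max_shift, w - max_shift):
                    sad += abs(prev[y][x] - curr[y + dy][x + dx])
            if best_sad is None or sad < best_sad:
                best_sad, best_shift = sad, (dy, dx)
    return best_shift

# A bright ground feature moves from row 3, column 3 to row 4, column 5:
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
prev[3][3] = 9
curr[4][5] = 9
shift = estimate_shift(prev, curr)  # the apparent motion of the ground
```

The drone's own motion is the opposite of the apparent ground motion; real pipelines do this with dense optical flow at subpixel precision, but the principle is the same: position comes from pixels rather than from GPS.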

The current version, at $349, does what Wang had envisioned, which is, he told me, “to turn the camera into a cameraman.” But, he points out, the hardware and software, and particularly the user interface, changed a lot. The size and weight have been cut in half; it’s just 125 grams. This version uses a different and more powerful chipset, and the controls are on board; while you can select modes from a smart phone app, you don’t have to.

I can verify that it is cute (about the size of a paperback book), lightweight, and extremely easy to use. I've never managed to fly a standard drone without help or without crashing, but I had no problem sending the HoverAir up to follow me down the street and then land on my hand.

It isn’t perfect. It can’t fly over water—the movement of the water confuses the algorithms that judge speed through video images of the ground. And it only tracks people; though many would like it to track their pets, Wang says animals behave erratically, diving into bushes or other places the camera can’t follow. Since the autonomous navigation algorithms rely on the person being filmed to avoid obstacles and simply follow that person’s path, such dives tend to cause the drone to crash.

Since we last spoke eight years ago, Wang has been through the highs and lows of the startup rollercoaster, turning to contract engineering for a while to keep his company alive. He’s become philosophical about much of the experience.

Here’s what he had to say.

We last spoke in 2016. Tell me how you’ve changed.

Meng Qiu Wang: When I got out of Stanford in 2014 and started the company with Tony [Zhang], I was eager and hungry and hasty and I thought I was ready. But retrospectively, I wasn’t ready to start a company. I was chasing fame and money, and excitement.

Now I’m 42, I have a daughter—everything seems more meaningful now. I’m not a Buddhist, but I have a lot of Zen in my philosophy now.

I was trying so hard to flip the page to see the next chapter of my life, but now I realize, there is no next chapter, flipping the page itself is life.

You were moving really fast in 2016 and 2017. What happened during that time?

Wang: After coming out of stealth, we ramped up from 60 to 140 people, planning to take this product into mass production. We got a crazy amount of media attention—covered by 2,200 media outlets. We went to CES, and it seemed like we collected every trophy there was.

And then Apple came to us, inviting us to retail at all the Apple stores. This was a big deal; I think we were the first third-party robotic product to do live demos in Apple stores. We produced about 50,000 units, bringing in about $15 million in revenue in six months.

Then a giant company made us a generous offer and we took it. But it didn’t work out. It was certainly a lesson learned for us. I can’t say more about that, but at this point if I walk down the street and I see a box of pizza, I would not try to open it; there really is no free lunch.

This early version of the Hover flying camera generated a lot of initial excitement, but never fully took off. Zero Zero Robotics

How did you survive after that deal fell apart?

Wang: We went from 150 to about 50 people and turned to contract engineering. We worked with toy drone companies, with some industrial product companies. We built computer vision systems for larger drones. We did almost four years of contract work.

But you kept working on flying cameras and launched a Kickstarter campaign in 2018. What happened to that product?

Wang: It didn’t go well. The technology wasn’t really there. We filled some orders and refunded ones that we couldn’t fill because we couldn’t get the remote controller to work.

We really didn’t have enough resources to create a new product for a new product category, a flying camera, to educate the market.

So we decided to build a more conventional drone—our V-Coptr, a V-shaped bi-copter with only two propellers—to compete against DJI. We didn’t know how hard it would be. We worked on it for four years. Key engineers left out of total dismay, they lost faith, they lost hope.

We came so close to going bankrupt so many times—at least six times in 10 years I thought I wasn’t going to be able to make payroll for the next month, but each time I got super lucky with something random happening. I never missed paying one dime—not because of my abilities, just because of luck.

We still have a relatively healthy chunk of the team, though. And this summer my first ever software engineer is coming back. The people are the biggest wealth that we’ve collected over the years. The people who are still with us are not here for money or for success. We just realized along the way that we enjoy working with each other on impossible problems.

When we talked in 2016, you envisioned the flying camera as the first in a long line of personal robotics products. Is that still your goal?

Wang: In terms of short-term strategy, we are focusing 100 percent on the flying camera. I think about other things, but I’m not going to say I have an AI hardware company, though we do use AI. After 10 years I’ve given up on talking about that.

Do you still think there’s a big market for a flying camera?

Wang: I think flying cameras have the potential to become the second home robot [the first being the robotic vacuum] that can enter tens of millions of homes.

Reference: https://ift.tt/h1vLwW3

Tuesday, July 30, 2024

Mysterious family of malware hid in Google Play for years


An image illustrating a phone infected with malware


A mysterious family of Android malware with a demonstrated history of effectively concealing its myriad spying activities has once again been found in Google Play after more than two years of hiding in plain sight.

The apps, disguised as file-sharing, astronomy, and cryptocurrency apps, hosted Mandrake, a family of highly intrusive malware that security firm Bitdefender called out in 2020. Bitdefender said the apps appeared in two waves, one in 2016 through 2017 and again in 2018 through 2020. Mandrake’s ability to go unnoticed then was the result of some unusually rigorous steps to fly under the radar. They included:

  • Not working in 90 countries, including those comprising the former Soviet Union
  • Delivering its final payload only to victims who were extremely narrowly targeted
  • Containing a kill switch the developers named seppuku (the Japanese form of ritual suicide) that fully wiped all traces of the malware
  • Fully functional decoy apps in categories including finance, Auto & Vehicles, Video Players & Editors, Art & Design, and Productivity
  • Quick fixes for bugs reported in comments
  • TLS certificate pinning to conceal communications with command-and-control servers
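The last item on that list, TLS certificate pinning, means the client hard-codes a fingerprint of the expected server certificate and refuses to talk through anything else, which blinds the interception proxies researchers use to inspect command-and-control traffic. A minimal Python sketch of the idea (the certificate bytes and fingerprint below are invented):

```python
import hashlib
import hmac

# Hypothetical pinned SHA-256 fingerprint of the C2 server's DER certificate.
PINNED_FINGERPRINT = hashlib.sha256(b"hypothetical-c2-cert-der").hexdigest()

def certificate_is_pinned(cert_der: bytes) -> bool:
    """Return True only if the presented certificate matches the pinned hash.
    In real code, cert_der would come from
    ssl.SSLSocket.getpeercert(binary_form=True) after the TLS handshake."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    # Constant-time comparison, so timing doesn't leak partial matches.
    return hmac.compare_digest(fingerprint, PINNED_FINGERPRINT)
```

An interception proxy necessarily presents its own certificate, so the check fails and the malware can simply go quiet rather than expose its traffic.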

Lurking in the shadows

Bitdefender estimated the number of victims in the tens of thousands for the 2018 to 2020 wave and “probably hundreds of thousands throughout the full 4-year period.”


Reference: https://ift.tt/754HslG

AI search engine accused of plagiarism announces publisher revenue-sharing plan


Robot caught in a flashlight, vector illustration. (credit: Moor Studio via Getty Images)

On Tuesday, AI-powered search engine Perplexity unveiled a new revenue-sharing program for publishers, marking a significant shift in its approach to third-party content use, reports CNBC. The move comes after plagiarism allegations from major media outlets, including Forbes, Wired, and Ars parent company Condé Nast. Perplexity, valued at over $1 billion, aims to compete with search giant Google.

"To further support the vital work of media organizations and online creators, we need to ensure publishers can thrive as Perplexity grows," writes the company in a blog post announcing the problem. "That’s why we’re excited to announce the Perplexity Publishers Program and our first batch of partners: TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress.com."

Under the program, Perplexity will share a percentage of ad revenue with publishers when their content is cited in AI-generated answers. The revenue share applies on a per-article basis and potentially multiplies if articles from a single publisher are used in one response. Some content providers, such as WordPress.com, plan to pass some of that revenue on to content creators.
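Perplexity hasn't published its rates, but the per-article arithmetic described above is easy to sketch (every number here is hypothetical):

```python
def publisher_payout(ad_revenue, articles_cited, share_per_article=0.05):
    """Ad revenue from one answer owed to a publisher, assuming a flat
    per-article share that multiplies with each of the publisher's articles
    cited in that answer. The 5 percent rate is invented for illustration."""
    return ad_revenue * share_per_article * articles_cited

# An answer earns $2.00 in ad revenue and cites three of a publisher's articles:
payout = publisher_payout(2.00, 3)
```

Under this toy model, a publisher whose content appears three times in one answer earns three times the single-citation share of that answer's revenue.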


Reference: https://ift.tt/cC8o14B

A Robot Dentist Might Be a Good Idea, Actually




I’ll be honest: when I first got this pitch for an autonomous robot dentist, I was like: “Okay, I’m going to talk to these folks and then write an article, because there’s no possible way for this thing to be anything but horrific.” Then they sent me some video that was, in fact, horrific, in the way that only watching a high speed drill remove most of a tooth can be.

But fundamentally this has very little to do with robotics, because getting your teeth drilled just sucks no matter what. So the real question we should be asking is this: How can we make a dental procedure as quick and safe as possible, to minimize that inherent horrific-ness? And the answer, surprisingly, may be this robot from a startup called Perceptive.

Perceptive is today announcing two new technologies that I very much hope will make future dental experiences better for everyone. While it’s easy to focus on the robot here (because, well, it’s a robot), the reason the robot can do what it does (which we’ll get to in a minute) is because of a new imaging system. The handheld imager, which is designed to operate inside of your mouth, uses optical coherence tomography (OCT) to generate a 3D image of the inside of your teeth, and even all the way down below the gum line and into the bone. This is vastly better than the 2D or 3D x-rays that dentists typically use, both in resolution and positional accuracy.

Perceptive’s handheld optical coherence tomography imager scans for tooth decay. Perceptive

X-rays, it turns out, are actually really bad at detecting cavities; Perceptive CEO Chris Ciriello tells us that their accuracy at figuring out the location and extent of tooth decay is on the order of 30 percent. In practice, this isn’t as much of a problem as it seems like it should be, because the dentist will just start drilling into your tooth and keep going until they find everything. But obviously this won’t work for a robot, where you need all of the data beforehand. That’s where the OCT comes in. You can think of OCT as similar to an ultrasound, in that it uses reflected energy to build up an image, but OCT uses light instead of sound for much higher resolution.

Perceptive’s imager can create detailed 3D maps of the insides of teeth. Perceptive

The reason OCT has not been used for teeth before is because with conventional OCT, the exposure time required to get a detailed image is several seconds, and if you move during the exposure, the image will blur. Perceptive is instead using a structure from motion approach (which will be familiar to many robotics folks), where they’re relying on a much shorter exposure time resulting in far fewer data points, but then moving the scanner and collecting more data to gradually build up a complete 3D image. According to Ciriello, this approach can localize pathology within about 20 micrometers with over 90 percent accuracy, and it’s easy for a dentist to do since they just have to move the tool around your tooth in different orientations until the scan completes.
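The trade Perceptive is making, many short noisy exposures fused into one clean image instead of a single long blur-prone one, can be illustrated with a toy model (plain Python, not their reconstruction pipeline). Averaging n noisy readings of the same point shrinks the error roughly as 1/sqrt(n):

```python
import random

def fuse_scans(true_depth, n_scans, noise, seed=0):
    """Average n_scans noisy short-exposure depth readings of one point.
    Each reading is the true value plus uniform noise; the mean converges
    toward the true value as more scans are fused."""
    rng = random.Random(seed)
    readings = [true_depth + rng.uniform(-noise, noise) for _ in range(n_scans)]
    return sum(readings) / n_scans

# Error of a single noisy reading versus 200 fused readings (micrometers):
single_error = abs(fuse_scans(100.0, 1, 5.0) - 100.0)
fused_error = abs(fuse_scans(100.0, 200, 5.0) - 100.0)
```

The real system must also register each short exposure against the others as the scanner moves, which is where the structure-from-motion machinery comes in; this sketch only shows why fusing many weak measurements recovers precision.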

Again, this is not just about collecting data so that a robot can get to work on your tooth. It’s about better imaging technology that helps your dentist identify and treat issues you might be having. “We think this is a fundamental step change,” Ciriello says. “We’re giving dentists the tools to find problems better.”

The robot is mechanically coupled to your mouth for movement compensation. Perceptive

Ciriello was a practicing dentist in a small mountain town in British Columbia, Canada. People in such communities can have a difficult time getting access to care. “There aren’t too many dentists who want to work in rural communities,” he says. “Sometimes it can take months to get treatment, and if you’re in pain, that’s really not good. I realized that what I had to do was build a piece of technology that could increase the productivity of dentists.”

Perceptive’s robot is designed to take a dental procedure that typically requires several hours and multiple visits, and complete it in minutes in a single visit. The entry point for the robot is crown installation, where the top part of a tooth is replaced with an artificial cap (the crown). This is an incredibly common procedure, and it usually happens in two phases. First, the dentist will remove the top of the tooth with a drill. Next, they take a mold of the tooth so that a crown can be custom fit to it. Then they put a temporary crown on and send you home while they mail the mold off to get your crown made. A couple weeks later, the permanent crown arrives, you go back to the dentist, and they remove the temporary one and cement the permanent one on.

With Perceptive’s system, it instead goes like this: on a previous visit where the dentist has identified that you need a crown in the first place, you’d have gotten a scan of your tooth with the OCT imager. Based on that data, the robot will have planned a drilling path, and the crown could be made before you even arrive for the drilling to start, which is only possible because the precise geometry is known in advance. You arrive for the procedure, the robot does the actual drilling in maybe five minutes or so, and the perfectly fitting permanent crown is cemented into place and you’re done.

The robot is still in the prototype phase but could be available within a few years. Perceptive

Obviously, safety is a huge concern here, because you’ve got a robot arm with a high-speed drill literally working inside of your skull. Perceptive is well aware of this.

The most important thing to understand about the Perceptive robot is that it’s physically attached to you as it works. You put something called a bite block in your mouth and bite down on it, which both keeps your mouth open and keeps your jaw from getting tired. The robot’s end effector is physically attached to that block through a series of actuated linkages, such that any motions of your head are instantaneously replicated by the end of the drill, even if the drill is moving. Essentially, your skull is serving as the robot’s base, and your tooth and the drill are in the same reference frame. Purely mechanical coupling means there’s no vision system or encoders or software required: it’s a direct physical connection so that motion compensation is instantaneous. As a patient, you’re free to relax and move your head somewhat during the procedure, because it makes no difference to the robot.
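The geometry behind that claim is simple: because the drill is linked to the bite block, both the drill tip and the tooth live in the skull's own coordinate frame, and any head movement applies the same rigid transform to both. A toy 2D illustration in Python (not Perceptive's control code):

```python
import math

def rigid_move(point, angle, shift):
    """Rotate a 2D point by angle (radians), then translate it by shift:
    the same rigid motion a turning, shifting head applies to everything
    mechanically attached to it."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y + shift[0], s * x + c * y + shift[1])

tooth = (10.0, 2.0)   # target point in the skull frame (units of mm)
drill = (10.0, 2.5)   # drill tip, mechanically locked to the same frame

# The patient turns and shifts their head mid-procedure:
head_motion = (0.3, (4.0, -1.0))
tooth_after = rigid_move(tooth, *head_motion)
drill_after = rigid_move(drill, *head_motion)

# The drill-to-tooth offset is preserved exactly, with no sensing needed:
before = math.dist(tooth, drill)
after = math.dist(tooth_after, drill_after)
```

A vision- or encoder-based compensator would have to measure the head motion and chase it; the mechanical linkage makes the compensation an identity by construction.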

Human dentists do have some strategies for not stabbing you with a drill if you move during a procedure, like putting their fingers on your teeth and then supporting the drill on them. But this robot should be safer and more accurate than that method, because of the rigid connection leading to only a few tens of micrometers of error, even on a moving patient. It’ll move a little bit slower than a dentist would, but because it’s only drilling exactly where it needs to, it can complete the procedure faster overall, says Ciriello.

There’s also a physical counterbalance system within the arm, a nice touch that makes the arm effectively weightless. (It’s somewhat similar to the PR2 arm, for you OG robotics folks.) And the final safety measure is the dentist-in-the-loop via a foot pedal that must remain pressed or the robot will stop moving and turn off the drill.

Ciriello claims that not only is the robot able to work faster, it also will produce better results. Most restorations like fillings or crowns last about five years, because the dentist either removed too much material from the tooth and weakened it, or removed too little material and didn’t completely solve the underlying problem. Perceptive’s robot is able to be far more exact. Ciriello says that the robot can cut geometry that’s “not humanly possible,” fitting restorations on to teeth with the precision of custom-machined parts, which is pretty much exactly what they are.

Perceptive has successfully used its robot on real human patients, as shown in this sped-up footage. In reality the robot moves slightly slower than a human dentist. Perceptive

While it’s easy to focus on the technical advantages of Perceptive’s system, dentist Ed Zuckerberg (who’s an investor in Perceptive) points out that it’s not just about speed or accuracy, it’s also about making patients feel better. “Patients think about the precision of the robot, versus the human nature of their dentist,” Zuckerberg says. It gives them confidence to see that their dentist is using technology in their work, especially in ways that can address common phobias. “If it can enhance the patient experience or make the experience more comfortable for phobic patients, that automatically checks the box for me.”

There is currently one other dental robot on the market. Called Yomi, it offers assistive autonomy for one very specific procedure for dental implants. Yomi is not autonomous, but instead provides guidance for a dentist to make sure that they drill to the correct depth and angle.

While Perceptive has successfully tested its first-generation system on humans, it’s not yet ready for commercialization. The next step will likely be what’s called a pivotal clinical trial with the FDA, and if that goes well, Ciriello estimates that it could be available to the public in “several years.” Perceptive has raised US $30 million in funding so far, and here’s hoping that’s enough to get them across the finish line.

Reference: https://ift.tt/Thl08dU

Your Gateway to a Vibrant Career in the Expanding Semiconductor Industry




This sponsored article is brought to you by Purdue University.

The CHIPS America Act was a response to a worsening shortfall in engineers equipped to meet the growing demand for advanced electronic devices. That need persists. In its 2023 policy report, Chipping Away: Assessing and Addressing the Labor Market Gap Facing the U.S. Semiconductor Industry, the Semiconductor Industry Association forecast a demand for 69,000 microelectronic and semiconductor engineers between 2023 and 2030—including 28,900 new positions created by industry expansion and 40,100 openings to replace engineers who retire or leave the field.

This number does not include another 34,500 computer scientists (13,200 new jobs, 21,300 replacements), nor does it count jobs in other industries that require advanced or custom-designed semiconductors for controls, automation, communication, product design, and the emerging systems-of-systems technology ecosystem.

Purdue University is taking charge, leading semiconductor technology and workforce development in the U.S. As early as Spring 2022, Purdue University became the first top engineering school to offer an online Master’s Degree in Microelectronics and Semiconductors.

U.S. News & World Report has ranked the university’s graduate engineering program among America’s 10 best every year since 2012 (and among the top 4 since 2022)

“The degree was developed as part of Purdue’s overall semiconductor degrees program,” says Purdue Prof. Vijay Raghunathan, one of the architects of the semiconductor program. “It was what I would describe as the nation’s most ambitious semiconductor workforce development effort.”

Prof. Vijay Raghunathan, one of the architects of the online Master’s Degree in Microelectronics and Semiconductors at Purdue. Purdue University

Purdue built and announced its bold high-technology online program while the U.S. Congress was still debating the $53 billion “Creating Helpful Incentives to Produce Semiconductors for America Act” (CHIPS America Act), which would be passed in July 2022 and signed into law in August.

Today, the online Master’s in Microelectronics and Semiconductors is well underway. Students train on leading-edge equipment and software and prepare to meet the challenges they will face in a rejuvenated, and critical, U.S. semiconductor industry.

Is the drive for semiconductor education succeeding?

“I think we have conclusively established that the answer is a resounding ‘Yes,’” says Raghunathan. Like understanding big data, or being able to program, “the ability to understand how semiconductors and semiconductor-based systems work, even at a rudimentary level, is something that everybody should know. Virtually any product you design or make is going to have chips inside it. You need to understand how they work, what the significance is, and what the risks are.”

Earning a Master’s in Microelectronics and Semiconductors

Students pursuing the Master’s Degree in Microelectronics and Semiconductors will take courses in circuit design, devices and engineering, systems design, and supply chain management offered by several schools in the university, such as Purdue’s Mitch Daniels School of Business, the Purdue Polytechnic Institute, the Elmore Family School of Electrical and Computer Engineering, and the School of Materials Engineering, among others.

Professionals can also take one-credit-hour courses, which are intended to help students build “breadth at the edges,” a notion that grew out of feedback from employers: Tomorrow’s engineering leaders will need broad knowledge to connect with other specialties in the increasingly interdisciplinary world of artificial intelligence, robotics, and the Internet of Things.

“This was something that we embarked on as an experiment 5 or 6 years ago,” says Raghunathan of the one-credit courses. “I think, in hindsight, that it’s turned out spectacularly.”

A researcher adjusts imaging equipment in a lab in Birck Nanotechnology Center, home to Purdue’s advanced research and development on semiconductors and other technology at the atomic scale. Rebecca Robiños/Purdue University

The Semiconductor Engineering Education Leader

Purdue, which opened its first classes in 1874, is today an acknowledged leader in engineering education. U.S. News & World Report has ranked the university’s graduate engineering program among America’s 10 best every year since 2012 (and among the top 4 since 2022). And Purdue’s online graduate engineering program has ranked in the country’s top three since the publication started evaluating online grad programs in 2020. (Purdue has offered distance Master’s degrees since the 1980s. Back then, of course, course lectures were videotaped and mailed to students. With the growth of the web, “distance” became “online,” and the program has swelled.)

Thus, Microelectronics and Semiconductors Master’s Degree candidates can study online or on-campus. Both tracks take the same courses from the same instructors and earn the same degree. There are no footnotes, asterisks, or parentheses on the diploma to denote online or in-person study.

“If you look at our program, it will become clear why Purdue is increasingly considered America’s leading semiconductors university” —Prof. Vijay Raghunathan, Purdue University

Students take classes at their own pace, using an integrated suite of proven online-learning applications for attending lectures, submitting homework, taking tests, and communicating with faculty and one another. Texts may be purchased or downloaded from the school library. And there is frequent use of modeling and analytical tools like MATLAB. In addition, Purdue is the home of the national design-computing resource nanoHUB.org (with hundreds of modeling, simulation, teaching, and software-development tools) and its offspring, chipshub.org (specializing in tools for chip design and fabrication).

From R&D to Workforce and Economic Development

“If you look at our program, it will become clear why Purdue is increasingly considered America’s leading semiconductors university, because this is such a strategic priority for the entire university, from our President all the way down,” Prof. Raghunathan sums up. “We have a task force that reports directly to the President, a task force focused only on semiconductors and microelectronics. On all aspects—R&D, the innovation pipeline, workforce development, economic development to bring companies to the state. We’re all in as far as chips are concerned.”

Reference: https://ift.tt/diyHZWs

Monday, July 29, 2024

Hackers exploit VMware vulnerability that gives them hypervisor admin


(credit: Getty Images)

Microsoft is urging users of VMware’s ESXi hypervisor to take immediate action to ward off ongoing attacks by ransomware groups that give them full administrative control of the servers the product runs on.

The vulnerability, tracked as CVE-2024-37085, allows attackers who have already gained limited system rights on a targeted server to gain full administrative control of the ESXi hypervisor. Attackers affiliated with multiple ransomware syndicates—including Storm-0506, Storm-1175, Octo Tempest, and Manatee Tempest—have been exploiting the flaw for months in numerous post-compromise attacks, meaning after the limited access has already been gained through other means.

Admin rights assigned by default

Full administrative control of the hypervisor gives attackers various capabilities, including encrypting the file system and taking down the servers they host. The hypervisor control can also allow attackers to access hosted virtual machines to either exfiltrate data or expand their foothold inside a network. Microsoft discovered the vulnerability under exploit in the normal course of investigating the attacks and reported it to VMware. VMware parent company Broadcom patched the vulnerability on Thursday.


Reference: https://ift.tt/InSaPwJ

From sci-fi to state law: California’s plan to prevent AI catastrophe


The California State Capitol Building in Sacramento. (credit: Getty Images)

California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today.

SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly.

Read 16 remaining paragraphs | Comments

Reference : https://ift.tt/v3rAzXp

How LG and Samsung Are Making TV Screens Disappear




A transparent television might seem like magic, but both LG and Samsung demonstrated such displays this past January in Las Vegas at CES 2024. And those large transparent TVs, which drew crowds of spectators peering through the video images dancing across their screens, were showstoppers.

Although they are indeed impressive, transparent TVs are not likely to appear—or disappear—in your living room any time soon. Samsung and LG have taken two very different approaches to achieve a similar end—LG is betting on OLED displays, while Samsung is pursuing microLED screens—and neither technology is quite ready for prime time. Understanding the hurdles that still need to be overcome, though, requires a deeper dive into each of these display technologies.

How does LG’s see-through OLED work?

OLED stands for organic light-emitting diode, and that pretty much describes how it works. OLED materials are carbon-based compounds that emit light when energized with an electrical current. Different compounds produce different colors, which can be combined to create full-color images.

To construct a display from these materials, manufacturers deposit them as thin films on some sort of substrate. The most common approach arranges red-, green-, and blue-emitting (RGB) materials in patterns to create a dense array of full-color pixels. A display with what is known as 4K resolution contains a matrix of 3,840 by 2,160 pixels—8.3 million pixels in all, formed from nearly 25 million red, green, and blue subpixels.
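The subpixel arithmetic checks out, as a quick back-of-envelope calculation (illustrative only, not from the article) shows:

```python
# Pixel and subpixel counts for a 4K (UHD) display.
width, height = 3840, 2160
pixels = width * height    # total full-color pixels
subpixels = pixels * 3     # one red, one green, one blue subpixel each

print(pixels)     # 8,294,400 -> "8.3 million pixels"
print(subpixels)  # 24,883,200 -> "nearly 25 million subpixels"
```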


The timing and amount of electrical current sent to each subpixel determines how much light it emits. So by controlling these currents properly, you can create the desired image on the screen. To accomplish this, each subpixel must be electrically connected to two or more transistors, which act as switches. Traditional wires wouldn’t do for this, though: They’d block the light. You need to use transparent (or largely transparent) conductive traces.

An image of an array of 15 transparent TVs, shot with a fish-eye lens and displaying white trees with pink and green swaths of color above them. LG’s demonstration of transparent OLED displays at CES 2024 seemed almost magical. Ethan Miller/Getty Images

A display has thousands of such traces arranged in a series of rows and columns to provide the necessary electrical connections to each subpixel. The transistor switches are also fabricated on the same substrate. That all adds up to a lot of materials that must be part of each display. And those materials must be carefully chosen for the OLED display to appear transparent.

The conductive traces are the easy part. The display industry has long used indium tin oxide as a thin-film conductor. A typical layer of this material is only 135 nanometers thick but allows about 80 percent of the light impinging on it to pass through.

The transistors are more of a problem, because the materials used to fabricate them are inherently opaque. The solution is to make the transistors as small as you can, so that they block the least amount of light. The amorphous silicon layer used for transistors in most LCD displays is inexpensive, but its low electron mobility means that transistors composed of this material can only be made so small. This silicon layer can be annealed with lasers to create low-temperature polysilicon, a crystallized form of silicon, which improves electron mobility, reducing the size of each transistor. But this process works only for small sheets of glass substrate.

Faced with this challenge, designers of transparent OLED displays have turned to indium gallium zinc oxide (IGZO). This material has high enough electron mobility to allow for smaller transistors than is possible with amorphous silicon, meaning that IGZO transistors block less light.

These tactics help solve the transparency problem, but OLEDs have some other challenges. For one, exposure to oxygen or water vapor destroys the light-emissive materials. So these displays need an encapsulating layer, something to cover their surfaces and edges. Because this layer creates a visible gap when two panels are placed edge to edge, you can’t tile a set of smaller displays to create a larger one. If you want a big OLED display, you need to fabricate a single large panel.

The result of even the best engineering here is a “transparent” display that still blocks some light. You won’t mistake LG’s transparent TV for window glass: People and objects behind the screen appear noticeably darker than when viewed directly. According to one informed observer, the LG prototype appears to have 45 percent transparency.

How does Samsung’s magical MicroLED work?

For its transparent displays, Samsung is using inorganic LEDs. These devices, which are very efficient at converting electricity into light, are commonplace today: in household lightbulbs, in automobile headlights and taillights, and in electronic gear, where they often show that the unit is turned on.

In LED displays, each pixel contains three LEDs, one red, one green, and one blue. This works great for the giant digital displays used in highway billboards or in sports-stadium jumbotrons, whose images are meant to be viewed from a good distance. But up close, these LED pixel arrays are noticeable.

TV displays, on the other hand, are meant to be viewed from modest distances and thus require far smaller LEDs than the chips used in, say, power-indicator lights. Two years ago, these “microLED” displays used chips that were just 30 by 50 micrometers. (A typical sheet of paper is 100 micrometers thick.) Today, such displays use chips less than half that size: 12 by 27 micrometers.
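Measured by area rather than edge length, the shrink is even steeper than "half" suggests (a quick illustrative calculation):

```python
# Footprint of older vs. newer microLED chips, in micrometers.
old_area = 30 * 50   # 1,500 square micrometers
new_area = 12 * 27   # 324 square micrometers

ratio = new_area / old_area
print(ratio)  # 0.216 -> today's chips cover under a quarter of the old area
```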

A wooden frame surrounds a transparent display featuring an advertisement for a Black Friday Sale and a large image of a smartwatch. While transparent displays are stunning, they might not be practical for home use as televisions. Expect to see them adopted first as signage in retail settings. AUO

These tiny LED chips block very little light, making the display more transparent. The Taiwanese display maker AUO recently demonstrated a microLED display with more than 60 percent transparency.

Oxygen and moisture don’t affect microLEDs, so they don’t need to be encapsulated. This makes it possible to tile smaller panels to create a seamless larger display. And the silicon coating on such small panels can be annealed to create polysilicon, which performs better than IGZO, so the transistors can be even smaller and block less light.

But the microLED approach has its own problems. Indeed, the technology is still in its infancy: it costs a great deal to manufacture, and it requires some contortions to achieve uniform brightness and color across the entire display.

For example, individual OLED materials emit a well-defined color, but that’s not the case for LEDs. Minute variations in the physical characteristics of an LED chip can alter the wavelength of light it emits by a measurable—and noticeable—amount. Manufacturers have typically addressed this challenge by using a binning process: They test thousands of chips and then group them into bins of similar wavelengths, discarding those that don’t fit the desired ranges. This explains in part why those large digital LED screens are so expensive: Many LEDs created for their construction must be discarded.
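The binning step can be sketched as a simple group-and-discard pass (a toy illustration; the wavelength range and bin width below are invented for the example, not manufacturer figures):

```python
# Toy model of LED binning: group chips by measured peak wavelength,
# discarding any that fall outside the acceptable range.
def bin_leds(wavelengths_nm, lo=520.0, hi=535.0, bin_width=5.0):
    bins = {}
    discarded = []
    for w in wavelengths_nm:
        if lo <= w < hi:
            # Snap each chip to the lower edge of its wavelength bin.
            key = lo + bin_width * int((w - lo) // bin_width)
            bins.setdefault(key, []).append(w)
        else:
            discarded.append(w)
    return bins, discarded

measured = [518.2, 521.0, 524.9, 527.3, 533.8, 536.1]  # hypothetical green LEDs
bins, rejects = bin_leds(measured)
# 518.2 and 536.1 fall outside [520, 535) and are discarded;
# the rest land in the 520 nm, 525 nm, and 530 nm bins.
```

Every discarded chip is sunk cost, which is the economic problem the article describes.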

But binning doesn’t really work when dealing with microLEDs. The tiny chips are difficult to test and are so expensive that costs would be astronomical if too many had to be rejected.

A person wearing a white shirt with red text and a name badge is placing his hand behind a transparent display screen. The screen shows an image of splashing liquid and fire. Though you can see through today’s transparent displays, they do block a noticeable amount of light, making the background darker than when viewed directly. Tekla S. Perry

Instead, manufacturers test microLED displays for uniformity after they’re assembled, then calibrate them to adjust the current applied to each subpixel so that color and brightness are uniform across the display. This calibration process, which involves scanning an image on the panel and then reprogramming the control circuitry, can sometimes require thousands of iterations.
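That iterative calibration can be sketched roughly as follows (a heavily simplified toy: real calibration images the entire panel optically and reprograms the control circuitry, and the linear brightness model, step size, and iteration count here are all assumptions):

```python
# Toy calibration loop: nudge each subpixel's drive current until its
# measured brightness converges on a uniform target. Brightness is modeled
# as current * efficiency, where efficiency varies chip to chip.
def calibrate(efficiencies, target=100.0, iterations=50, step=0.5):
    currents = [1.0] * len(efficiencies)   # initial drive currents
    for _ in range(iterations):
        for i, eff in enumerate(efficiencies):
            measured = currents[i] * eff
            currents[i] += step * (target - measured) / eff  # nudge toward target
    return currents

effs = [95.0, 100.0, 108.0]    # hypothetical chip-to-chip variation
cal = calibrate(effs)
# After calibration, each subpixel outputs ~100 regardless of its efficiency.
outputs = [c * e for c, e in zip(cal, effs)]
```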

Then there’s the problem of assembling the panels. Remember those 25 million microLED chips that make up a 4K display? Each must be positioned precisely, and each must be connected to the correct electrical contacts.

The LED chips are initially fabricated on sapphire wafers, each of which contains chips of only one color. These chips must be transferred from the wafer to a carrier to hold them temporarily before applying them to the panel backplane. The Taiwanese microLED company PlayNitride has developed a process for creating large tiles with chips spaced less than 2 micrometers apart. Its process for positioning these tiny chips has better than 99.9 percent yields. But even at a 99.9 percent yield, you can expect about 25,000 defective subpixels in a 4K display. They might be positioned incorrectly so that no electrical contact is made, or the wrong color chip is placed in the pattern, or a subpixel chip might be defective. While correcting these defects is sometimes possible, doing so just adds to the already high cost.
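The defect estimate follows directly from the subpixel total (a back-of-envelope check):

```python
# Expected defective subpixels at a 99.9 percent placement yield.
subpixels = 3840 * 2160 * 3    # ~24.9 million subpixels in a 4K panel
defect_rate = 1 - 0.999        # 0.1 percent of placements fail
expected_defects = subpixels * defect_rate
print(round(expected_defects))  # 24,883 -> "about 25,000 defective subpixels"
```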

A person looks at a transparent micro led screen displaying splashes of liquid in red, yellow, and green. Samsung’s microLED technology allows the image to extend right up to the edge of the glass panel, making it possible to create larger displays by tiling smaller panels together. Brendan Smialowski/AFP/Getty Images

Could MicroLEDs still be the future of flat-panel displays? “Every display analyst I know believes that microLEDs should be the ‘next big thing’ because of their brightness, efficiency, color, viewing angles, response times, and lifetime,” says Bob Raikes, editor of the 8K Monitor newsletter. “However, the practical hurdles of bringing them to market remain huge. That Apple, which has the deepest pockets of all, has abandoned microLEDs, at least for now, and after billions of dollars in investment, suggests that mass production for consumer markets is still a long way off.”

At this juncture, even though microLED technology offers some clear advantages, OLED is more cost-effective and holds the early lead for practical applications of transparent displays.

But what is a transparent display good for?

Samsung and LG aren’t the only companies to have demonstrated transparent panels recently.

AUO’s 60-inch transparent display, made of tiled panels, won the People’s Choice Award for Best MicroLED-Based Technology at the Society for Information Display’s Display Week, held in May in San Jose, Calif. And the Chinese company BOE Technology Group demonstrated a 49-inch transparent OLED display at CES 2024.

These transparent displays all have one feature in common: They will be insanely expensive. Only LG’s transparent OLED display has been announced as a commercial product. It has no price or ship date yet, but it’s not hard to guess how costly it will be, given that nontransparent versions are expensive enough. For example, LG prices its top-end 77-inch OLED TV at US $4,500.

A diagram of the structure of a display pixel represented as a grey rectangle, which frames an open area labeled transmissive space, and three rectangular blocks labeled R, G, and B. Displays using both microLED technology [above] and OLED technology have some components in each pixel that block light coming from the background. These include the red, green, and blue emissive materials along with the transistors required to switch them on and off. Smaller components mean that you can have a larger transmissive space that will provide greater transparency. Illustration: Mark Montgomery; Source: Samsung

Thanks to seamless tiling, transparent microLED displays can be larger than their OLED counterparts. But their production costs are larger as well. Much larger. And that is reflected in prices. For example, Samsung’s nontransparent 114-inch microLED TV sells for $150,000. We can reasonably expect transparent models to cost even more.

Seeing these prices, you really have to ask: What are the practical applications of transparent displays?

Don’t expect these displays to show up in many living rooms as televisions. And high price is not the only reason. After all, who wants to see their bookshelves showing through in the background while they’re watching Dune? That’s why the transparent OLED TV LG demonstrated at CES 2024 included a “contrast layer”—basically, a black cloth—that unrolls and covers the back of the display on demand.

Transparent displays could have a place on the desktop—not so you can see through them, but so that a camera can sit behind the display, capturing your image while you’re looking directly at the screen. This would help you maintain eye contact during a Zoom call. One company—Veeo—demonstrated a prototype of such a product at CES 2024, and it plans to release a 30-inch model for about $3,000 and a 55-inch model for about $8,500 later this year. Veeo’s products use LG’s transparent OLED technology.

Transparent screens are already showing up as signage and other public-information displays. LG has installed transparent 55-inch OLED panels in the windows of Seoul’s new high-speed underground rail cars, which are part of a system known as the Great Train eXpress. Riders can browse maps and other information on these displays, which can be made clear when needed for passengers to see what’s outside.

LG transparent panels have also been featured in an E35e excavator prototype by Doosan Bobcat. This touchscreen display can act as the operator’s front or side window, showing important machine data or displaying real-time images from cameras mounted on the vehicle. Such transparent displays can serve a function similar to that of the head-up displays in some aircraft windshields.

And so, while the large transparent displays are striking, you’ll be more likely to see them initially as displays for machinery operators, public entertainment, retail signage, and even car windshields. The early adopters might cover the costs of developing mass-production processes, which in turn could drive prices down. But even if costs eventually reach reasonable levels, whether the average consumer really wants a transparent TV in their home remains to be seen—unlike the device itself, whose whole point is not to be.

Reference: https://ift.tt/0Jq7r1f

Friday, July 26, 2024

Hang out with Ars in San Jose and DC this fall for two infrastructure events


Photograph of servers and racks

Enlarge / Infrastructure!

Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!

This year, we're back partnering with IBM again and we're looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we're going to the coasts—both east and west. Read on for details!

September: San Jose, California

Our first event will be in San Jose on September 18, and it's titled "Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next." The idea will be to explore what generative AI means for the future of data management. The topics we'll be discussing include:

Read 6 remaining paragraphs | Comments

Reference : https://ift.tt/HnsSgBc

Thursday, July 25, 2024

Google AI earns silver medal equivalent at International Mathematical Olympiad


An illustration provided by Google.

Enlarge / An illustration provided by Google. (credit: Google)

On Thursday, Google DeepMind announced that AI systems called AlphaProof and AlphaGeometry 2 reportedly solved four out of six problems from this year's International Mathematical Olympiad (IMO), achieving a score equivalent to a silver medal. The tech giant claims this marks the first time an AI has reached this level of performance in the prestigious math competition—but as usual in AI, the claims aren't as clear-cut as they seem.

Google says AlphaProof uses reinforcement learning to prove mathematical statements in the formal language called Lean. The system trains itself by generating and verifying millions of proofs, progressively tackling more difficult problems. Meanwhile, AlphaGeometry 2 is described as an upgraded version of Google's previous geometry-solving AI model, now powered by a Gemini-based language model trained on significantly more data.

According to Google, prominent mathematicians Sir Timothy Gowers and Dr. Joseph Myers scored the AI model's solutions using official IMO rules. The company reports its combined system earned 28 out of 42 possible points, just shy of the 29-point gold medal threshold. This included a perfect score on the competition's hardest problem, which Google claims only five human contestants solved this year.

Read 9 remaining paragraphs | Comments

Reference : https://ift.tt/FLZRmn0

Chrome will now prompt some users to send passwords for suspicious files


Chrome will now prompt some users to send passwords for suspicious files

(credit: Chrome)

Google is redesigning Chrome malware detections to include password-protected executable files that users can upload for deep scanning, a change the browser maker says will allow it to detect more malicious threats.

Google has long allowed users to switch on the Enhanced Mode of its Safe Browsing, a Chrome feature that warns users when they’re downloading a file that’s believed to be unsafe, either because of suspicious characteristics or because it’s in a list of known malware. With Enhanced Mode turned on, Google will prompt users to upload suspicious files that aren’t allowed or blocked by its detection engine. Under the new changes, Google will prompt these users to provide any password needed to open the file.

Beware of password-protected archives

In a post published Wednesday, Jasika Bawa, Lily Chen, and Daniel Rubery of the Chrome Security team wrote:

Read 6 remaining paragraphs | Comments

Reference : https://ift.tt/knvMust

Secure Boot is completely broken on 200+ models from 5 big device makers


Secure Boot is completely broken on 200+ models from 5 big device makers

Enlarge (credit: sasha85ru | Getty Images)

In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did.

The threat of such BIOS-dwelling malware was largely theoretical and fueled in large part by the creation of ICLord Bioskit by a Chinese researcher in 2007. ICLord was a rootkit, a class of malware that gains and maintains stealthy root access by subverting key protections built into the operating system. The proof of concept demonstrated that such BIOS rootkits weren't only feasible; they were also powerful. In 2011, the threat became a reality with the discovery of Mebromi, the first-known BIOS rootkit to be used in the wild.

Keenly aware of Mebromi and its potential for a devastating new class of attack, the Secure Boot architects hashed out a complex new way to shore up security in the pre-boot environment. Built into UEFI—the Unified Extensible Firmware Interface that would become the successor to BIOS—Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature. To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.
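Conceptually, the pre-boot check behaves like an allow list / deny list lookup (a deliberately simplified toy: real Secure Boot verifies X.509 signatures against UEFI's db and dbx variables, and every name below is illustrative, not from the article):

```python
import hashlib

def may_load(image: bytes, db: set, dbx: set) -> bool:
    """Allow an image only if its hash is approved (db) and not revoked (dbx)."""
    digest = hashlib.sha256(image).hexdigest()
    return digest in db and digest not in dbx

# Hypothetical firmware state: one approved bootloader, nothing revoked yet.
approved = {hashlib.sha256(b"trusted-bootloader").hexdigest()}
revoked = set()
```

An unsigned or tampered image simply never matches the allow list, which is the pre-boot guarantee the Secure Boot architects were after.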

Read 36 remaining paragraphs | Comments

Reference : https://ift.tt/cNI587h

Why a Technical Master’s Degree Can Accelerate Your Engineering Career




This sponsored article is brought to you by Purdue University.

Companies large and small are seeking engineers with up-to-date, subject-specific knowledge in disciplines like computer engineering, automation, artificial intelligence, and circuit design. Mid-level engineers need to advance their skillsets to apply and integrate these technologies and be competitive.


As applications for new technologies continue to grow, demand for knowledgeable electrical and computer engineers is also on the rise. According to the Bureau of Labor Statistics, the job outlook for electrical and electronics engineers—as well as computer hardware engineers—is set to grow 5 percent through 2032. Electrical and computer engineers work in almost every industry. They design systems, work on power transmission and power supplies, run computers and communication systems, innovate chips for embedded systems, and much more.

To take advantage of this job growth and get more return on investment, engineers are advancing their knowledge by going back to school. The 2023 IEEE-USA Salary and Benefits Survey Report shows that engineers with focused master’s degrees (e.g., electrical and computer engineering, electrical engineering, or computer engineering) earned median salaries almost US $27,000 per year higher than those of their colleagues with bachelor’s degrees alone.


Purdue’s online MSECE program has been ranked in the top 3 of U.S. News and World Report’s Best Online Electrical Engineering Master’s Programs for five years running


Universities like Purdue University work with companies and professionals to provide upskilling opportunities via distance and online education. Purdue has offered a distance Master of Science in Electrical and Computer Engineering (MSECE) since the 1980s. In its early years, the program’s course lectures were videotaped and mailed to students. Now, “distance” has transformed into “online,” and the program has grown with the web, expanding its size and scope. Today, the online MSECE has awarded master’s degrees to 190+ online students since the Fall 2021 semester.



“Purdue has a long-standing reputation of engineering excellence and Purdue engineers work worldwide in every company, including General Motors, Northrop Grumman, Raytheon, Texas Instruments, Apple, and Sandia National Laboratories among scores of others,” said Lynn Hegewald, the senior program manager for Purdue’s online MSECE. “Employers everywhere are very aware of Purdue graduates’ capabilities and the quality of the education they bring to the job.”


Today, the online MSECE program continues to select from among the world’s best professionals and gives them an affordable, award-winning education. The program has been ranked in the top 3 of U.S. News and World Report’s Best Online Electrical Engineering Master’s Programs for five years running (2020, 2021, 2022, 2023, and 2024).


The online MSECE develops high-quality research and technical skills, high-level analytical thinking and problem-solving skills, and the fresh ideas that fuel innovation—all highly sought after, according to one of the few studies to systematically inventory what engineering employers want (information corroborated on occupational guidance websites like O*NET and the Bureau of Labor Statistics).

Remote students get the same education as on-campus students and become part of the same alumni network.

“Our online MSECE program offers the same exceptional quality as our on-campus offerings to students around the country and the globe,” says Prof. Milind Kulkarni, Michael and Katherine Birck Head of the Elmore Family School of Electrical and Computer Engineering. “Online students take the same classes, with the same professors, as on-campus students; they work on the same assignments and even collaborate on group projects.


“Our online MSECE program offers the same exceptional quality as our on-campus offerings to students around the country and the globe” —Prof. Milind Kulkarni, Purdue University


“We’re very proud,” he adds, “that we’re able to make a ‘full-strength’ Purdue ECE degree available to so many people, whether they’re working full-time across the country, live abroad, or serve in the military. And the results bear this out: graduates of our program land jobs at top global companies, move on to new roles and responsibilities at their current organizations, or even continue to pursue graduate education at top PhD programs.”



Variety and Quality in Purdue’s MSECE

As they study for their MSECE degrees, online students can select from among a hundred graduate-level courses in their primary areas of interest, including innovative one-credit-hour courses that extend the students’ knowledge. New courses and new areas of interest are always in the pipeline.

Purdue MSECE Area of Interest and Course Options


  • Automatic Control
  • Communications, Networking, Signal and Image Processing
  • Computer Engineering
  • Fields and Optics
  • Microelectronics and Nanotechnology
  • Power and Energy Systems
  • VLSI and Circuit Design
  • Semiconductors
  • Data Mining
  • Quantum Computing
  • IoT
  • Big Data

Heather Woods, a process engineer at Texas Instruments, was one of the first students to enroll and chose the microelectronics and nanotechnology focus area. She offers this advice: “Take advantage of the one credit-hour classes! They let you finish your degree faster while not taking six credit hours every semester.”


Completing an online MSECE from Purdue University also teaches students professional skills that employers value, like motivation, efficient time management, high-level analysis and problem-solving, and the ability to learn quickly and write effectively.

“Having an MSECE shows I have the dedication and knowledge to be able to solve problems in engineering,” said program alumnus Benjamin Francis, now an engineering manager at AkzoNobel. “As I continue in my career, this gives me an advantage over other engineers both in terms of professional advancement opportunity and a technical base to pull information from to face new challenges.”


Finding Tuition Assistance

Working engineers contemplating graduate school should contact their human resources departments and find out what their tuition-assistance options are. Does your company offer tuition assistance? What courses of study do they cover? Do they cap reimbursements by course, semester, etc.? Does your employer pay tuition directly, or will you pay out-of-pocket and apply for reimbursement?

Prospective U.S. students who are veterans or children of veterans should also check with the U.S. Department of Veterans Affairs to see if they qualify for tuition or other assistance.


The MSECE Advantage

In sum, the online Master’s degree in Electrical and Computer Engineering from Purdue University does an extraordinary job of giving students the tools they need to succeed in school and then in the workplace: developing the technical knowledge, the confidence, and the often-overlooked professional skills that will help them excel in their careers.


Reference: https://ift.tt/7rd9Kj0

Wednesday, July 24, 2024

We made a cat drink a beer with Runway’s AI video generator, and it sprouted hands


A screen capture of an AI-generated video of a cat drinking a can of beer, created by Runway Gen-3 Alpha.

Enlarge

In June, Runway debuted a new text-to-video synthesis model called Gen-3 Alpha. It converts written descriptions called "prompts" into HD video clips without sound. We've since had a chance to use it and wanted to share our results. Our tests show that careful prompting isn't as important as matching concepts likely found in the training data, and that achieving amusing results likely requires many generations and selective cherry-picking.

An enduring theme of all generative AI models we've seen since 2022 is that they can be excellent at mixing concepts found in training data but are typically very poor at generalizing (applying learned "knowledge" to new situations the model has not explicitly been trained on). That means they can excel at stylistic and thematic novelty but struggle at fundamental structural novelty that goes beyond the training data.

What does all that mean? In the case of Runway Gen-3, lack of generalization means you might ask for a sailing ship in a swirling cup of coffee, and provided that Gen-3's training data includes video examples of sailing ships and swirling coffee, that's an "easy" novel combination for the model to make fairly convincingly. But if you ask for a cat drinking a can of beer (in a beer commercial), it will generally fail because there aren't likely many videos of photorealistic cats drinking human beverages in the training data. Instead, the model will pull from what it has learned about videos of cats and videos of beer commercials and combine them. The result is a cat with human hands pounding back a brewsky.

Read 26 remaining paragraphs | Comments

Reference : https://ift.tt/B4ik5Wh

How Russia-linked malware cut heat to 600 Ukrainian buildings in deep winter


The cityscape from the tower of the Lviv Town Hall in winter.

Enlarge / The cityscape from the tower of the Lviv Town Hall in winter. (credit: Anastasiia Smolienko / Ukrinform/Future Publishing via Getty Images)

As Russia has tested every form of attack on Ukraine's civilians over the past decade, both digital and physical, it's often used winter as one of its weapons—launching cyberattacks on electric utilities to trigger December blackouts and ruthlessly bombing heating infrastructure. Now it appears Russia-based hackers last January tried yet another approach to leave Ukrainians in the cold: a specimen of malicious software that, for the first time, allowed hackers to reach directly into a Ukrainian heating utility, switching off heat and hot water to hundreds of buildings in the midst of a winter freeze.

Industrial cybersecurity firm Dragos on Tuesday revealed a newly discovered sample of Russia-linked malware that it believes was used in a cyberattack in late January to target a heating utility in Lviv, Ukraine, disabling service to 600 buildings for around 48 hours. The attack, in which the malware altered temperature readings to trick control systems into cooling the hot water running through buildings' pipes, marks the first confirmed case in which hackers have directly sabotaged a heating utility.

Dragos' report on the malware notes that the attack occurred at a moment when Lviv was experiencing its typical January freeze, close to the coldest time of the year in the region, and that “the civilian population had to endure sub-zero [Celsius] temperatures.” As Dragos analyst Kyle O'Meara puts it more bluntly: “It's a shitty thing for someone to turn off your heat in the middle of winter.”

Read 12 remaining paragraphs | Comments

Reference : https://ift.tt/jvS1lwH

The Top 10 Climate Tech Stories of 2024

In 2024, technologies to combat climate change soared above the clouds in electricity-generating kites, traveled the oceans sequestering...