Wednesday, May 31, 2023

Researchers tell owners to “assume compromise” of unpatched Zyxel firewalls



(credit: Getty Images)

Firewalls made by Zyxel are being wrangled into a destructive botnet, which is taking control of them by exploiting a recently patched vulnerability with a severity rating of 9.8 out of a possible 10.

“At this stage if you have a vulnerable device exposed, assume compromise,” officials from Shadowserver, an organization that monitors Internet threats in real time, warned four days ago. The officials said the exploits are coming from a botnet that’s similar to Mirai, which harnesses the collective bandwidth of thousands of compromised Internet devices to knock sites offline with distributed denial-of-service attacks.

According to data from Shadowserver collected over the past 10 days, 25 of the top 62 Internet-connected devices waging “downstream attacks”—meaning attempting to hack other Internet-connected devices—were made by Zyxel as measured by IP addresses.


Reference: https://ift.tt/NFzeOHb

AI-expanded album cover artworks go viral thanks to Photoshop’s Generative Fill



An AI-expanded version of a famous album cover involving four lads and a certain road, created using Adobe Generative Fill. (credit: Capitol Records / Adobe / Dobrokotov)

Over the weekend, AI-powered makeovers of famous music album covers went viral on Twitter thanks to Adobe Photoshop's Generative Fill, an image synthesis tool that debuted in a beta version of the image editor last week. Using Generative Fill, people have been expanding the size of famous works of art, revealing larger imaginary artworks beyond the borders of the original images.

This image-expanding feat, often called "outpainting" in AI circles (and introduced with OpenAI's DALL-E 2 last year), is possible due to an image synthesis model called Adobe Firefly, which has been trained on millions of works of art from Adobe's stock photo catalog. When given an existing image to work with, Firefly uses what it knows about other artworks to synthesize plausible continuations of the original artwork. And when guided with text prompts that describe a specific scenario, the synthesized results can go in wild directions.

For example, an expansion of Michael Jackson's famous Thriller album rendered the rest of Jackson's body lying on a piano. That seems reasonable, based on the context. But depending on user guidance, Generative Fill can also create more fantastic interpretations: An expansion of Katy Perry's Teenage Dream cover art (likely guided by a text suggestion from the user) revealed Perry lying on a gigantic fluffy pink cat.


Reference: https://ift.tt/Vrvhn7F

AI Everywhere, All at Once




It’s been a frenetic six months since OpenAI introduced its large language model ChatGPT to the world at the end of last year. Every day since then, I’ve had at least one conversation about the consequences of the global AI experiment we find ourselves conducting. We aren’t ready for this, and by we, I mean everyone–individuals, institutions, governments, and even the corporations deploying the technology today.

The sentiment that we’re moving too fast for our own good is reflected in an open letter calling for a pause in AI research, which was posted by the Future of Life Institute and signed by many AI luminaries, including some prominent IEEE members. As News Manager Margo Anderson reports online in The Institute, signatories include Senior Member and IEEE’s AI Ethics Maestro Eleanor “Nell” Watson and IEEE Fellow and chief scientist of software engineering at IBM, Grady Booch. He told Anderson, “These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or the policies for handling data from users. My experience and my professional ethics tell me I must take a stand….”

Explore IEEE AI ethics and governance programs


IEEE CAI 2023 Conference on Artificial Intelligence, June 5-6, Santa Clara, Calif.

AI GET Program for AI Ethics and Governance Standards

IEEE P2863 Organizational Governance of Artificial Intelligence Working Group

IEEE Awareness Module on AI Ethics

IEEE CertifAIEd

Recent Advances in the Assessment and Certification of AI Ethics

But research and deployment haven’t paused, and AI is becoming essential across a range of domains. For instance, Google has applied deep reinforcement learning to optimize placement of logic and memory on chips, as Senior Editor Samuel K. Moore reports in the June issue’s lead news story “Ending an Ugly Chapter in Chip Design.” Deep in the June feature well, the cofounders of KoBold Metals explain how they use machine-learning models to search for minerals needed for electric-vehicle batteries in “This AI Hunts for Hidden Hoards of Battery Minerals.”

Somewhere between the proposed pause and headlong adoption of AI lie the social, economic, and political challenges of creating the regulations that tech CEOs like OpenAI’s Sam Altman and Google’s Sundar Pichai have asked governments to create.

“These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or the policies for handling data from users.”

To help make sense of the current AI moment, I talked with IEEE Spectrum senior editor Eliza Strickland, who recently won a Jesse H. Neal Award for best range of work by an author for her biomedical, geoengineering, and AI coverage. Trustworthiness, we agreed, is probably the most pressing near-term concern. Addressing the provenance of information and its traceability is key. Otherwise people may be swamped by so much bad information that the fragile consensus among humans about what is and isn’t real totally breaks down.

The European Union is ahead of the rest of the world with its proposed Artificial Intelligence Act. It assigns AI applications to three risk categories: Those that create unacceptable risk would be banned, high-risk applications would be tightly regulated, and applications deemed to pose few if any risks would be left unregulated.

The EU’s draft AI Act touches on traceability and deepfakes, but it doesn’t specifically address generative AI–deep-learning models that can produce high-quality text, images, or other content based on their training data. However, a recent article in The New Yorker by the computer scientist Jaron Lanier directly takes on provenance and traceability in generative AI systems.

Lanier views generative AI as a social collaboration that mashes up work done by humans. He has helped develop a concept dubbed “data dignity,” which loosely translates to labeling these systems’ products as machine generated based on data sources that can be traced back to humans, who should be credited with their contributions. “In some versions of the idea,” Lanier writes, “people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do.”

That’s an idea worth exploring right now. Unfortunately, we can’t prompt ChatGPT to spit out a global regulatory regime to guide how we should integrate AI into our lives. Regulations ultimately apply to the humans currently in charge, and only we can ensure a safe and prosperous future for people and our machines.

Reference: https://ift.tt/rpLi5bA

A Snap-based, containerized Ubuntu desktop could be offered in 2024



Some of the many Snap apps available in Ubuntu's Snap Store, the place where users can find apps and Linux enthusiasts can find deep-seated disagreement. (credit: Canonical)

Ubuntu Core has existed since 2014, providing a fully containerized, immutable Linux distribution aimed at Internet of Things (IoT) and edge computing applications.

That kind of system, based on Ubuntu distributor Canonical's own Snap package format, could be available for desktop users with the next Ubuntu Long Term Support release, according to an Ubuntu mobile engineer. Pointing to a comment in one of his prior posts, Ubuntu blogger Joey Sneddon suggests that an optional "All-Snap Ubuntu Desktop" will be available with Ubuntu 24.04 in April 2024.

It's important to note that a Snap-based Ubuntu would seemingly be an alternate option, not the primary desktop offered. DEB-based Ubuntu would almost certainly remain the mainstream release.


Reference: https://ift.tt/PaDEgzl

Tuesday, May 30, 2023

Critical Barracuda 0-day was used to backdoor networks for 8 months



(credit: Getty Images)

A critical vulnerability patched 10 days ago in widely used email software from IT security company Barracuda Networks has been under active exploitation since October. The vulnerability has been used to install multiple pieces of malware inside large organization networks and steal data, Barracuda said Tuesday.

The software bug, tracked as CVE-2023-2868, is a remote command injection vulnerability that stems from incomplete input validation of user-supplied .tar files, which are used to pack or archive multiple files. When file names are formatted in a particular way, an attacker can execute system commands through Perl's qx operator, a quote-like operator that runs a string as a shell command. The vulnerability is present in the Barracuda Email Security Gateway versions 5.1.3.001 through 9.2.0.006; Barracuda issued a patch 10 days ago.
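The injection pattern here is a generic one: when an attacker-controlled file name reaches a shell unescaped, metacharacters embedded in the name become commands. The Python sketch below is illustrative only; it is not Barracuda's code, and Python's subprocess module stands in for Perl's qx.

```python
import subprocess

# A file name carrying a shell metacharacter and an injected command.
malicious_name = "archive.tar; echo INJECTED"

# Unsafe: the whole string is handed to the shell, so the "; echo INJECTED"
# after the file name runs as a second command.
unsafe = subprocess.run(f"tar -tf {malicious_name}", shell=True,
                        capture_output=True, text=True)
print(unsafe.stdout)  # contains "INJECTED"

# Safe: passing arguments as a list keeps the name as literal data, so tar
# merely fails to find a file with that odd name.
safe = subprocess.run(["tar", "-tf", malicious_name],
                      capture_output=True, text=True)
print(safe.stdout)    # empty; nothing was injected
```

The fix in either language is the same idea: never interpolate untrusted strings into a shell command line.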

On Tuesday, Barracuda notified customers that CVE-2023-2868 has been under active exploitation since October in attacks that allowed threat actors to install multiple pieces of malware for use in exfiltrating sensitive data out of infected networks.


Reference: https://ift.tt/FyasUAd

Wakka Wakka! This Turing Machine Plays Pac-Man




As I read the newest papers about DNA-based computing, I had to confront a rather unpleasant truth. Despite being a geneticist who also majored in computer science, I was struggling to bridge two concepts—the universal Turing machine, the very essence of computing, and the von Neumann architecture, the basis of most modern CPUs. I had written C++ code to emulate the machine described in Turing’s 1936 paper, and could use it to decide, say, if a word was a palindrome. But I couldn’t see how such a machine—with its one-dimensional tape memory and ability to look at only one symbol on that tape at a time—could behave like a billion-transistor processor with hardware features such as an arithmetic logic unit (ALU), program counter, and instruction register.

I scoured old textbooks and watched online lectures about theoretical computer science, but my knowledge didn’t advance. I decided I would build a physical Turing machine that could execute code written for a real processor.

Rather than a billion-transistor behemoth, I thought I’d target the humble 8-bit 6502 microprocessor. This legendary chip powered the computers I used in my youth. And as a final proof, my simulated processor would have to run Pac-Man, specifically the version of the game written for the Apple II computer.

In Turing’s paper, his eponymous machine is an abstract concept with infinite memory. Infinite memory isn’t possible in reality, but physical Turing machines can be built with enough memory for the task at hand. The hardware implementation of a Turing machine can be organized around a rule book and a notepad. Indeed, when we do basic arithmetic, we use a rule book in our head (such as knowing when to carry a 1). We manipulate numbers and other symbols using these rules, stepping through the process for, say, long division. There are key differences, though. We can move all over a two-dimensional notepad, doing a scratch calculation in the margin before returning to the main problem. With a Turing machine we can only move left or right on a one-dimensional notepad, reading or writing one symbol at a time.
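The rule-book-and-notepad model can be made concrete with a small simulator. The sketch below is my own Python illustration, not the author's C++ code: it decides the palindrome problem mentioned above using rules of the form (state, symbol) → (write, move, next state).

```python
# A minimal Turing machine: a sparse one-dimensional tape, a head, and a
# rule book mapping (state, symbol) -> (symbol to write, move, next state).
def run(tape_str, rules, start="start"):
    tape = dict(enumerate(tape_str))   # blanks are "_"
    head, state = 0, start
    while state not in ("accept", "reject"):
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == "accept"

# Palindromes over {a, b}: erase the leftmost symbol, walk right, check that
# the rightmost symbol matches, erase it too, rewind, and repeat.
rules = {}
for c in "ab":
    rules[("start", c)] = ("_", "R", f"seek_{c}")
    rules[(f"seek_{c}", "a")] = ("a", "R", f"seek_{c}")
    rules[(f"seek_{c}", "b")] = ("b", "R", f"seek_{c}")
    rules[(f"seek_{c}", "_")] = ("_", "L", f"check_{c}")
    rules[(f"check_{c}", c)] = ("_", "L", "rewind")
    other = "b" if c == "a" else "a"
    rules[(f"check_{c}", other)] = (other, "R", "reject")  # mismatch
    rules[(f"check_{c}", "_")] = ("_", "R", "accept")      # odd-length middle
    rules[("rewind", c)] = (c, "L", "rewind")
rules[("rewind", "_")] = ("_", "R", "start")
rules[("start", "_")] = ("_", "R", "accept")               # everything matched

print(run("abba", rules))  # True
print(run("abab", rules))  # False
```

Emulating a 6502 is the same loop with a vastly larger rule book; only the rules change, never the machine.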

A key revelation for me was that the internal registers of the 6502 could be duplicated sequentially on the one-dimensional notepad using four symbols—0, 1, _ (or space), and $. The symbols 0 and 1 are used to store the actual binary data that would sit in a 6502’s register. The $ symbol is used to delineate different registers, and the _ symbol acts as a marker, making it easy to return to a spot in memory we’re working with. The main memory of the Apple II is emulated in a similar fashion.
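As a rough illustration of that layout (the exact tape format of PureTuring is not spelled out here, so the details below are assumed), registers can be laid end to end as bit strings with $ delimiters:

```python
# Hypothetical encoding of 6502 registers on a one-dimensional tape:
# "$" separates registers; each register is stored as its eight bits.
def registers_to_tape(regs):
    # regs: ordered dict of name -> 8-bit value, e.g. {"A": 0x0F, "X": 0x01}
    fields = [format(value, "08b") for value in regs.values()]
    return "$" + "$".join(fields) + "$"

def tape_to_registers(tape, names):
    fields = [f for f in tape.split("$") if f]
    return {name: int(bits, 2) for name, bits in zip(names, fields)}

tape = registers_to_tape({"A": 0x0F, "X": 0x01})
print(tape)                                 # $00001111$00000001$
print(tape_to_registers(tape, ["A", "X"]))  # {'A': 15, 'X': 1}
```

The head can then find any register by counting $ symbols, and drop a _ marker before wandering off so it can find its way back.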

Apart from some flip-flops, a couple of NOT gates, and an up-down counter, the PureTuring machine uses only RAM and ROM chips—there are no logic chips. An Arduino board [bottom] monitors the RAM to extract display data. James Provost

Programming a CPU is all about manipulating the registers and transferring their contents to and from main memory using an instruction set. I could emulate the 6502’s instructions as chains of rules that acted on the registers, symbol by symbol. The rules are stored in a programmable ROM, with the output of one rule dictating the next rule to be used, what should be written on the notepad (implemented as a RAM chip), and whether we should read the next symbol or the previous one.

I dubbed my machine PureTuring. The ROM’s data outputs are connected to a set of flip-flops. Some of the flip-flops are connected to the RAM, to allow the next or previous symbol to be fetched. Others are connected to the ROM’s own address lines in a feedback loop that selects the next rule.

It turned out to be more efficient to interleave the bits of some registers rather than leaving them as separate 8-bit chunks. Creating the rule book to implement the 6502’s instruction set required 9,000 rules. Of these, 2,500 were created using an old-school method of writing them on index cards, and the rest were generated by a script. Putting this together took about six months.

Only some of the 6502 registers are exposed to programmers [green]; its internal, hidden registers [purple] are used to execute instructions. Below each register is shown how it is arranged, and sometimes interleaved, on the PureTuring’s “tape.” James Provost

To fetch a software instruction, PureTuring steps through the notepad using $ symbols as landmarks until it gets to the memory location pointed to by the program counter. The 6502 opcodes are one byte long, so by the time the eighth bit is read, PureTuring is in one of 256 states. Then PureTuring returns to the instruction register and writes the opcode there, before moving on to perform the instruction. A single instruction can take up to 3 million PureTuring clock cycles to fetch, versus one to six cycles for the actual 6502!
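The bit-by-bit opcode read amounts to walking a binary tree of states: each bit read doubles the number of possible states, so after eight bits the machine has landed in one of 256 leaves, one per opcode. A tiny sketch:

```python
# Reading an opcode one bit at a time, as PureTuring does. Each bit selects
# one of two successor states, so 8 bits fan out into 256 distinct states.
state = 0
for bit in "10101001":        # 0xA9, the 6502's LDA-immediate opcode
    state = state * 2 + int(bit)
print(state)  # 169 == 0xA9: the state whose rule chain performs LDA
```

In hardware, this fan-out is just the ROM feedback loop: the bit read from the tape joins the current rule's output on the ROM address lines.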

The 6502 uses a memory-mapped input/output system. This means that devices such as displays are represented as locations somewhere within main memory. By using an Arduino to monitor the part of the notepad that corresponds to the Apple II’s graphics memory, I could extract pixels and show them on an attached terminal or screen. This required writing a “dewozzing” function for the Arduino as the Apple II’s pixel data is laid out in a complex scheme. (Steve Wozniak created this scheme to enable the Apple II to fake an analog color TV signal with digital chips and keep the dynamic RAM refreshed.)

I could have inserted input from a keyboard into the notepad in a similar fashion, but I didn’t bother because actually playing Pac-Man on the PureTuring would require extraordinary patience: It took about 60 hours just to draw one frame’s worth of movement for the Pac-Man character and the pursuing enemy ghosts. A modification that moved the machine along the continuum toward a von Neumann architecture added circuitry to permit random access to a notepad symbol, making it unnecessary to step through all prior symbols. This adjustment cut the time to draw the game characters to a mere 20 seconds per frame!

Looking forward, features can be added one by one, moving piecemeal from a Turing machine to a von Neumann architecture: Widen the bus to read eight symbols at a time instead of one, replace the registers in the notepad with hardware registers, add an ALU, and so on.

Now when I read papers and articles on DNA-based computing, I can trace each element back to something in a Turing machine or forward to a conventional architecture, running my own little mental machine along a conceptual tape!

Reference: https://ift.tt/vVUBQtd

OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter



An AI-generated image of "AI taking over the world." (credit: Stable Diffusion)

On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life's work could potentially extinguish all of humanity.

The brief statement, which CAIS says is meant to open up discussion on the topic of "a broad spectrum of important and urgent risks from AI," reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.


Reference: https://ift.tt/UTVIQNK

Sunday, May 28, 2023

Robot Passes Turing Test for Polyculture Gardening




I love plants. I am not great with plants. I have accepted this fact and have therefore entrusted the lives of all of the plants in my care to robots. These aren’t fancy robots: they’re automated hydroponic systems that take care of water and nutrients and (fake) sunlight, and they do an amazing job. My plants are almost certainly happier this way, and therefore I don’t have to feel guilty about my hands-off approach. This is especially true now that there is data from roboticists at UC Berkeley to back up the assertion that robotic gardeners can do just as good a job as even the best human gardeners can. In fact, by some metrics, the robots can do even better.


In 1950, Alan Turing considered the question “Can Machines Think?” and proposed a test based on comparing human vs. machine ability to answer questions. In this paper, we consider the question “Can Machines Garden?” based on comparing human vs. machine ability to tend a real polyculture garden.

UC Berkeley has a long history of robotic gardens, stretching back to at least the early 90s. And (as I have experienced) you can totally tend a garden with a robot. But the real question is this: Can you usefully tend a garden with a robot in a way that is as effective as a human tending that same garden? Time for some SCIENCE!

AlphaGarden is a combination of a commercial gantry robot farming system and UC Berkeley’s AlphaGardenSim, which tells the robot what to do to maximize plant health and growth. The system includes a high-resolution camera and soil moisture sensors for monitoring plant growth, and everything is (mostly) completely automated, from seed planting to drip irrigation to pruning. The garden itself is somewhat complicated, since it’s a polyculture garden (meaning a mix of different plants grown together). Polyculture farming mimics how plants grow in nature; its benefits include pest resilience, decreased fertilization needs, and improved soil health. But since different plants have different needs and grow in different ways at different rates, polyculture farming is more labor-intensive than monoculture, which is how most large-scale farming happens.

To test AlphaGarden’s performance, the UC Berkeley researchers planted two side-by-side farming plots with the same seeds at the same time. There were 32 plants in total, including kale, borage, swiss chard, mustard greens, turnips, arugula, green lettuce, cilantro, and red lettuce. Over the course of two months, AlphaGarden tended its plot full time, while professional horticulturalists tended the plot next door. Then, the experiment was repeated, except that AlphaGarden was allowed to stagger the seed planting to give slower-growing plants a head start. A human did have to help the robot out with pruning from time to time, but just to follow the robot’s directions when the pruning tool couldn’t quite do what it wanted to do.

The robot and the professional human both achieved similar results in their garden plots. UC Berkeley

The results of these tests showed that the robot was able to keep up with the professional human in terms of both overall plant diversity and coverage. In other words, stuff grew just as well when tended by the robot as it did when tended by a professional human. The biggest difference is that the robot managed to keep up while using 44 percent less water: several hundred liters less over two months.

“AlphaGarden has thus passed the Turing Test for gardening,” the researchers say. They also say that “much remains to be done,” mostly by improving the AlphaGardenSim plant growth simulator to further optimize water use, although there are other variables to explore like artificial light sources. The future here is a little uncertain, though—the hardware is pretty expensive, and human labor is (relatively) cheap. Expert human knowledge is not cheap, of course. But for those of us who are very much non-experts, I could easily imagine mounting some cameras above my garden and installing some sensors and then just following the orders of the simulator about where and when and how much to water and prune. I’m always happy to donate my labor to a robot that knows what it’s doing better than I do.

“Can Machines Garden? Systematically Comparing the AlphaGarden vs. Professional Horticulturalists,” by Simeon Adebola, Rishi Parikh, Mark Presten, Satvik Sharma, Shrey Aeron, Ananth Rao, Sandeep Mukherjee, Tomson Qu, Christina Wistrom, Eugen Solowjow, and Ken Goldberg from UC Berkeley, will be presented at ICRA 2023 in London.

Reference: https://ift.tt/H3os2Ox

Saturday, May 27, 2023

The Relay That Changed the Power Industry




For more than a century, utility companies have used electromechanical relays to protect power systems against damage that might occur during severe weather, accidents, and other abnormal conditions. But the relays could neither locate the faults nor accurately record what happened.

Then, in 1977, Edmund O. Schweitzer III invented the digital microprocessor-based relay as part of his doctoral thesis. Schweitzer’s relay, which could locate a fault within a radius of 1 kilometer, set new standards for utility reliability, safety, and efficiency.

Edmund O. Schweitzer III


Employer:

Schweitzer Engineering Laboratories

Title:

President and CTO

Member grade:

Life Fellow

Alma maters:

Purdue University, West Lafayette, Ind.; Washington State University, Pullman

To develop and manufacture his relay, he launched Schweitzer Engineering Laboratories in 1982 from his basement in Pullman, Wash. Today SEL manufactures hundreds of products that protect, monitor, control, and automate electric power systems in more than 165 countries.

Schweitzer, an IEEE Life Fellow, is his company’s president and chief technology officer. He started SEL with seven workers; it now has more than 6,000.

The 40-year-old employee-owned company continues to grow. It has four manufacturing facilities in the United States. Its newest one, which opened in March in Moscow, Idaho, fabricates printed circuit boards.

Schweitzer has received many accolades for his work, including the 2012 IEEE Medal in Power Engineering. In 2019 he was inducted into the U.S. National Inventors Hall of Fame.

Advances in power electronics

Power system faults can happen when a tree or vehicle hits a power line, a grid operator makes a mistake, or equipment fails. The fault shunts extra current to some parts of the circuit, shorting it out.

If no scheme or device is in place to protect the equipment and ensure continuity of the power supply, an outage or blackout could propagate throughout the grid.

Overcurrent is not the only damage that can occur, though. Faults also can change voltages, frequencies, and the direction of current.

A protection scheme should quickly isolate the fault from the rest of the grid, thus limiting damage on the spot and preventing the fault from spreading to the rest of the system. To do that, protection devices must be installed.

That’s where Schweitzer’s digital microprocessor-based relay comes in. He perfected it in 1982. It later was commercialized and sold as the SEL-21 digital distance relay/fault locator.

Inspired by a blackout and a protective relays book

Schweitzer says his relay was, in part, inspired by an event that took place during his first year of college.

“Back in 1965, when I was a freshman at Purdue University, a major blackout left millions without power for hours in the U.S. Northeast and Ontario, Canada,” he recalls. “It was quite an event, and I remember it well. I learned many lessons from it. One was how difficult it was to restore power.”

He says he also was inspired by the book Protective Relays: Their Theory and Practice. He read it while an engineering graduate student at Washington State University, in Pullman.

“I bought the book on the Thursday before classes began and read it over the weekend,” he says. “I couldn’t put it down. I was hooked.

“I realized that these solid-state devices were special-purpose signal processors. They read the voltage and current from the power systems and decided whether the power systems’ apparatuses were operating correctly. I started thinking about how I could take what I knew about digital signal processing and put it to work inside a microprocessor to protect an electric power system.”

The 4-bit and 8-bit microprocessors were new at the time.

“I think this is how most inventions start: taking one technology and putting it together with another to make new things,” he says. “The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”

He says he was introduced to signal processing, signal analysis, and how to use digital techniques in 1968 while at his first job, working for the U.S. Department of Defense at Fort Meade, in Maryland.

Faster ways to clear faults and improve cybersecurity

Schweitzer continues to invent ways of protecting and controlling electric power systems. In 2016 his company released the SEL-T400L, which samples a power system every microsecond to detect the time between traveling waves moving at the speed of light. The idea is to quickly detect and locate transmission line faults.

The relay decides whether to trip a circuit or take other actions in 1 to 2 milliseconds. Previously, it would take a protective relay on the order of 16 ms. A typical circuit breaker takes 30 to 40 ms in high-voltage AC circuits to trip.

“The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”

“I like to talk about the need for speed,” Schweitzer says. “In this day and age, there’s no reason to wait to clear a fault. Faster tripping is a tremendous opportunity from a point of view of voltage and angle stability, safety, reducing fire risk, and damage to electrical equipment.

“We are also going to be able to get a lot more out of the existing infrastructure by tripping faster. For every millisecond in clearing time saved, the transmission system stability limits go up by 15 megawatts. That’s about one feeder per millisecond. So, if we save 12 ms, all of the sudden we are able to serve 12 more distribution feeders from one part of one transmission system.”

The time-domain technology also will find applications in transformer and distribution protection schemes, he says, as well as have a significant impact on DC transmission.

What excites Schweitzer today, he says, is the concept of energy packets, which he and SEL have been working on. The packets measure energy exchange for all signals including distorted AC systems or DC networks.

“Energy packets precisely measure energy transfer, independent of frequency or phase angle, and update at a fixed rate with a common time reference such as every millisecond,” he says. “Time-domain energy packets provide an opportunity to speed up control systems and accurately measure energy on distorted systems—which challenges traditional frequency-domain calculation methods.”

He also is focusing on improving the reliability of critical infrastructure networks by improving cybersecurity, situational awareness, and performance. Plug-and-play and best-effort networking aren’t safe enough for critical infrastructure, he says.

“SEL OT SDN technology solves some significant cybersecurity problems,” he says, “and frankly, it makes me feel comfortable for the first time with using Ethernet in a substation.”

From engineering professor to inventor

Schweitzer didn’t start off planning to launch his own company. He began a successful career in academia in 1977 after joining the electrical engineering faculty at Ohio University, in Athens. Two years later, he moved to Pullman, Wash., where he taught at Washington State’s Voiland College of Engineering and Architecture for the next six years. It was only after sales of the SEL-21 took off that he decided to devote himself to his startup full time.

It’s little surprise that Schweitzer became an inventor and started his own company, as his father and grandfather were inventors and entrepreneurs.

His grandfather, Edmund O. Schweitzer, who held 87 patents, invented the first reliable high-voltage fuse in collaboration with Nicholas J. Conrad in 1911, the year the two founded Schweitzer and Conrad—today known as S&C Electric Co.—in Chicago.

Schweitzer’s father, Edmund O. Schweitzer Jr., had 208 patents. He invented several line-powered fault-indicating devices, and he founded the E.O. Schweitzer Manufacturing Co. in 1949. It is now part of SEL.

Schweitzer says a friend gave him the best financial advice he ever got about starting a business: Save your money.

“I am so proud that our 6,000-plus-person company is 100 percent employee-owned,” Schweitzer says. “We want to invest in the future, so we reinvest our savings into growth.”

He advises those who are planning to start a business to focus on their customers and create value for them.

“Unleash your creativity,” he says, “and get engaged with customers. Also, figure out how to contribute to society and make the world a better place.”

Reference: https://ift.tt/cYUkMsB

Wakka Wakka! This Turing Machine Plays Pac-Man




As I read the newest papers about DNA-based computing, I had to confront a rather unpleasant truth. Despite being a geneticist who also majored in computer science, I was struggling to bridge two concepts—the universal Turing machine, the very essence of computing, and the von Neumann architecture, the basis of most modern CPUs. I had written C++ code to emulate the machine described in Turing’s 1936 paper, and could use it to decide, say, if a word was a palindrome. But I couldn’t see how such a machine—with its one-dimensional tape memory and ability to look at only one symbol on that tape at a time—could behave like a billion-transistor processor with hardware features such as an arithmetic logic unit (ALU), program counter, and instruction register.

I scoured old textbooks and watched online lectures about theoretical computer science, but my knowledge didn’t advance. I decided I would build a physical Turing machine that could execute code written for a real processor.

Rather than a billion-transistor behemoth, I thought I’d target the humble 8-bit 6502 microprocessor. This legendary chip powered the computers I used in my youth. And as a final proof, my simulated processor would have to run Pac-Man, specifically the version of the game written for the Apple II computer.

In Turing’s paper, his eponymous machine is an abstract concept with infinite memory. Infinite memory isn’t possible in reality, but physical Turing machines can be built with enough memory for the task at hand. The hardware implementation of a Turing machine can be organized around a rule book and a notepad. Indeed, when we do basic arithmetic, we use a rule book in our head (such as knowing when to carry a 1). We manipulate numbers and other symbols using these rules, stepping through the process for, say, long division. There are key differences, though. We can move all over a two-dimensional notepad, doing a scratch calculation in the margin before returning to the main problem. With a Turing machine we can only move left or right on a one-dimensional notepad, reading or writing one symbol at a time.
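The palindrome checker mentioned earlier is small enough to write out in full. Below is a minimal sketch of such a rule book, with a tiny interpreter standing in for the machine; the states and rules are my own illustration, not the author's C++ code.

```python
# A minimal Turing machine that decides whether a word over {a, b} is a
# palindrome, in the spirit of the emulator described above. The rule
# book and state names are my own illustration.

RULES = {
    # (state, symbol): (symbol to write, head move, next state)
    ("start",  "a"): ("_", +1, "seek_a"),   # erase first symbol, remember it
    ("start",  "b"): ("_", +1, "seek_b"),
    ("start",  "_"): ("_",  0, "accept"),   # nothing left: palindrome
    ("seek_a", "a"): ("a", +1, "seek_a"),   # run right to the end
    ("seek_a", "b"): ("b", +1, "seek_a"),
    ("seek_a", "_"): ("_", -1, "check_a"),
    ("check_a", "a"): ("_", -1, "rewind"),  # last symbol matches: erase it
    ("check_a", "b"): ("b",  0, "reject"),
    ("check_a", "_"): ("_",  0, "accept"),  # a single symbol was left
    ("seek_b", "a"): ("a", +1, "seek_b"),
    ("seek_b", "b"): ("b", +1, "seek_b"),
    ("seek_b", "_"): ("_", -1, "check_b"),
    ("check_b", "b"): ("_", -1, "rewind"),
    ("check_b", "a"): ("a",  0, "reject"),
    ("check_b", "_"): ("_",  0, "accept"),
    ("rewind", "a"): ("a", -1, "rewind"),   # run left, back to the start
    ("rewind", "b"): ("b", -1, "rewind"),
    ("rewind", "_"): ("_", +1, "start"),
}

def is_palindrome(word: str) -> bool:
    tape = dict(enumerate(word))            # sparse tape; blanks read "_"
    state, head = "start", 0
    while state not in ("accept", "reject"):
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return state == "accept"
```

Note how the machine never sees the word as a whole: it shuttles back and forth, one cell at a time, exactly the constraint described above.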

A key revelation for me was that the internal registers of the 6502 could be duplicated sequentially on the one-dimensional notepad using four symbols—0, 1, _ (or space), and $. The symbols 0 and 1 are used to store the actual binary data that would sit in a 6502’s register. The $ symbol is used to delineate different registers, and the _ symbol acts as a marker, making it easy to return to a spot in memory we’re working with. The main memory of the Apple II is emulated in a similar fashion.
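As a sketch of that encoding (PureTuring's actual register order and interleaving differ; this shows only the idea), here is how a few 8-bit registers could be laid out as a $-delimited string of bits:

```python
# Sketch of the four-symbol register encoding described above: 8-bit
# registers laid out sequentially on a one-dimensional tape, MSB first,
# with $ delimiting the registers.

def encode_registers(regs: dict) -> str:
    return "$" + "$".join(format(v & 0xFF, "08b") for v in regs.values()) + "$"

tape = encode_registers({"A": 0x42, "X": 0x01, "Y": 0xFF})
# "$01000010$00000001$11111111$"
```

A rule chain can then find, say, the X register by counting $ symbols from the left, with no need for addresses at all.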

A printed circuit board and the chips and capacitors used to populate the board. Apart from some flip-flops, a couple of NOT gates, and an up-down counter, the PureTuring machine uses only RAM and ROM chips—there are no logic chips. An Arduino board [bottom] monitors the RAM to extract display data. James Provost

Programming a CPU is all about manipulating the registers and transferring their contents to and from main memory using an instruction set. I could emulate the 6502’s instructions as chains of rules that acted on the registers, symbol by symbol. The rules are stored in a programmable ROM, with the output of one rule dictating the next rule to be used, what should be written on the notepad (implemented as a RAM chip), and whether we should read the next symbol or the previous one.
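One way to picture those ROM contents: each rule packs the symbol to write, the head direction, and the address of the next rule into a single word, with the next-rule field providing the feedback loop described above. The field widths below are invented for illustration (14 address bits comfortably cover a rule book of several thousand rules).

```python
# Hypothetical packing of one PureTuring-style rule into a ROM word:
# 2-bit symbol to write | 1-bit head direction | 14-bit next-rule address.
# The real ROM layout is not described in detail; this is a sketch.

def pack_rule(write_sym: int, direction: int, next_addr: int) -> int:
    return ((write_sym & 0b11) << 15) | ((direction & 1) << 14) | (next_addr & 0x3FFF)

def unpack_rule(word: int):
    return (word >> 15) & 0b11, (word >> 14) & 1, word & 0x3FFF
```

In hardware, "unpacking" is free: each field is simply a group of ROM output pins, wired to the RAM, the direction logic, or back to the ROM's own address lines.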

I dubbed my machine PureTuring. The ROM’s data outputs are connected to a set of flip-flops. Some of the flip-flops are connected to the RAM, to allow the next or previous symbol to be fetched. Others are connected to the ROM’s own address lines in a feedback loop that selects the next rule.

It turned out to be more efficient to interleave the bits of some registers rather than leaving them as separate 8-bit chunks. Creating the rule book to implement the 6502’s instruction set required 9,000 rules. Of these, 2,500 were created using an old-school method of writing them on index cards, and the rest were generated by a script. Putting this together took about six months.
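The interleaving trick in miniature: alternating the bits of two 8-bit registers lets one short left-to-right scan visit corresponding bits of both. (Illustrative only; PureTuring's actual interleaving pattern may differ.)

```python
# Interleave the bits of two 8-bit registers, MSB first, so that a rule
# chain can update both registers in a single pass over a short stretch
# of tape. A sketch of the technique, not PureTuring's exact layout.

def interleave(a: int, b: int) -> str:
    bits_a = format(a & 0xFF, "08b")
    bits_b = format(b & 0xFF, "08b")
    return "".join(x + y for x, y in zip(bits_a, bits_b))
```

This matters for operations like 16-bit address increments, where a carry must ripple between bits of two registers: with interleaving, the relevant bits are adjacent rather than eight cells apart.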

A diagram showing processor registers interleaved along a 1-D “tape.” Only some of the 6502 registers are exposed to programmers [green]; its internal, hidden registers [purple] are used to execute instructions. Below each register is shown how the registers are arranged, and sometimes interleaved, on the PureTuring’s “tape.” James Provost

To fetch a software instruction, PureTuring steps through the notepad using $ symbols as landmarks until it gets to the memory location pointed to by the program counter. The 6502 opcodes are one byte long, so by the time the eighth bit is read, PureTuring is in one of 256 states. Then PureTuring returns to the instruction register and writes the opcode there, before moving on to perform the instruction. A single instruction can take up to 3 million PureTuring clock cycles to fetch, versus one to six cycles for the actual 6502!
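Reading an opcode one bit at a time is just a chain of rules branching on 0 or 1, so after eight branches the machine has walked to one of 256 leaves. A sketch of that state walk (not PureTuring's actual rule layout):

```python
# Each bit read narrows the machine's state, exactly as a rule chain
# branching left/right on 0 and 1 would; after eight bits the state
# number is the opcode itself.

def fetch_opcode(bits: str) -> int:
    state = 0
    for bit in bits:                      # one rule application per tape cell
        state = (state << 1) | int(bit)
    return state                          # 0..255 after eight bits
```

For example, `fetch_opcode("10101001")` yields 0xA9, the 6502's LDA-immediate opcode.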

The 6502 uses a memory-mapped input/output system. This means that devices such as displays are represented as locations somewhere within main memory. By using an Arduino to monitor the part of the notepad that corresponds to the Apple II’s graphics memory, I could extract pixels and show them on an attached terminal or screen. This required writing a “dewozzing” function for the Arduino, as the Apple II’s pixel data is laid out in a complex scheme. (Steve Wozniak created this scheme to enable the Apple II to fake an analog color TV signal with digital chips and keep the dynamic RAM refreshed.)
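For the curious, this is the kind of address arithmetic a dewozzing routine has to undo. The row layout below is the well-documented Apple II hi-res scheme; the function name is mine.

```python
# The Apple II hi-res screen interleaves rows in memory. Row y (0-191)
# of the primary hi-res page starts at the address computed here: rows
# within a group of 8 are 0x400 apart, groups of 8 rows are 0x80 apart,
# and thirds of the screen are 0x28 apart.

def hires_row_addr(y: int) -> int:
    return 0x2000 + (y & 7) * 0x400 + ((y >> 3) & 7) * 0x80 + (y >> 6) * 0x28
```

Consecutive screen rows are thus far apart in memory: row 1 sits at 0x2400 while row 8 is back down at 0x2080. It is this scatter, plus the color-encoding bits within each byte, that the Arduino has to unscramble.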

I could have inserted input from a keyboard into the notepad in a similar fashion, but I didn’t bother because actually playing Pac-Man on the PureTuring would require extraordinary patience: It took about 60 hours just to draw one frame’s worth of movement for the Pac-Man character and the pursuing enemy ghosts. A modification that moved the machine along the continuum toward a von Neumann architecture added circuitry to permit random access to a notepad symbol, making it unnecessary to step through all prior symbols. This adjustment cut the time to draw the game characters to a mere 20 seconds per frame!

Looking forward, features can be added one by one, moving piecemeal from a Turing machine to a von Neumann architecture: Widen the bus to read eight symbols at a time instead of one, replace the registers in the notepad with hardware registers, add an ALU, and so on.

Now when I read papers and articles on DNA-based computing, I can trace each element back to something in a Turing machine or forward to a conventional architecture, running my own little mental machine along a conceptual tape!

Reference: https://ift.tt/sEXHmT2

Friday, May 26, 2023

Inner workings revealed for “Predator,” the Android malware that exploited 5 0-days



Smartphone malware sold to governments around the world can surreptitiously record voice calls and nearby audio, collect data from apps such as Signal and WhatsApp, and hide apps or prevent them from running upon device reboots, researchers from Cisco’s Talos security team have found.

An analysis Talos published on Thursday provides the most detailed look yet at Predator, a piece of advanced spyware that can be used against Android and iOS mobile devices. Predator is developed by Cytrox, a company that Citizen Lab has said is part of an alliance called Intellexa, “a marketing label for a range of mercenary surveillance vendors that emerged in 2019.” Other companies belonging to the consortium include Nexa Technologies (formerly Amesys), WiSpear/Passitora Ltd., and Senpai.

Last year, researchers with Google’s Threat Analysis Group, which tracks cyberattacks carried out or funded by nation-states, reported that Predator had bundled five separate zero-day exploits in a single package and sold it to various government-backed actors. These buyers went on to use the package in three distinct campaigns. The researchers said Predator worked closely with a component known as Alien, which “lives inside multiple privileged processes and receives commands from Predator.” The commands included recording audio, adding digital certificates, and hiding apps.

Read 10 remaining paragraphs | Comments

Reference : https://ift.tt/WCwxvAj

Video Friday: The Coolest Robots




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON, TEXAS, USA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT, MICHIGAN, USA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

We’ve just relaunched the IEEE Robots Site over at RobotsGuide.com, featuring new robots, new interactives, and a complete redesign from the ground up. Tell your friends, tell your family, and explore 250 robots in pictures and videos and detailed facts and specs, with lots more on the way!

[Robots Guide]

The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team from Carnegie Mellon University’s Robotics Institute, is a machine-knitted textile “skin” that can sense contact and pressure.

RobotSweater’s knitted fabric consists of two layers of conductive yarn made with metallic fibers to conduct electricity. Sandwiched between the two is a net-like, lace-patterned layer. When pressure is applied to the fabric—say, from someone touching it—the conductive yarn closes a circuit and is read by the sensors. In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head. When used on a robot arm, RobotSweater allowed a push from a person’s hand to guide the arm’s movement, while grabbing the arm told it to open or close its gripper. In future research, the team wants to explore how to program reactions from the swipe or pinching motions used on a touchscreen.

[CMU]

DEEP Robotics Co. yesterday announced that it has launched the latest version of its Lite3 robotic dog in Europe. The system combines advanced mobility and an open modular structure to serve the education, research, and entertainment markets, said the Hangzhou, China-based company.

Lite3’s announced price is US $2,900. It ships in September.

[Deep Robotics]

Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains. We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback in a self-supervised manner. We validate our method in multiple short and large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV) on challenging off-road terrains, and demonstrate ease of integration on a separate large ground robot.

This work will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2023) in London next week.

[Mateo Guaman Castro]

Thanks, Mateo!

Sheet Metal Workers’ Local Union 104 has introduced a training course on automating and innovating field layout with the Dusty Robotics FieldPrinter system.

[Dusty Robotics]

Apptronik has half of its general purpose robot ready to go!

The other half is still a work in progress, but here’s progress:

[Apptronik]

A spotted lanternfly-murdering robot is my kind of murdering robot.

[FRC]

ANYmal is rated IP67 for water resistance, but this still terrifies me.

[ANYbotics]

Check out the impressive ankle action on this humanoid walking over squishy terrain.

[CNRS-AIST JRL]

Wing’s progress can be charted along the increasingly dense environments in which we’ve been able to operate: from rural farms to lightly populated suburbs to more dense suburbs to large metropolitan areas like Brisbane, Australia, Helsinki, Finland, and the Dallas Fort Worth metro area in Texas. Earlier this month, we did a demonstration delivery at Coors Field, home of the Colorado Rockies, delivering beer (Coors of course) and peanuts to the field. Admittedly, it wasn’t on a game day, but there were 1,000 people in the stands enjoying the kickoff party for AUVSI’s annual autonomous systems conference.

[Wing]

Pollen Robotics’s team will be going to ICRA 2023 in London! Come and meet us there to try teleoperating Reachy by yourself and give us your feedback!

[Pollen Robotics]

The most efficient drone engine is no engine at all.

[MAVLab]

Is your robot spineless? Should it be? Let’s find out.

[UPenn]

Looks like we’re getting closer to that robot butler.

[Prisma Lab]

This episode of the Robot Brains podcast features Raff D’Andrea, from Kiva and Verity and ETH Zurich.

[Robot Brains]

Reference: https://ift.tt/jBaoQGX

Green hills forever: Windows XP activation algorithm cracked after 21 years


Enlarge / With this background, potentially the most viewed photograph in human history, Windows XP always signaled that it was prepared for a peaceful retirement. Yet some would have us disturb it. (credit: Charles O'Rear/Microsoft)

It has never been too hard for someone with the right amount of time, desperation, or flexible scruples to get around Windows XP's activation scheme. And yet XP activation, the actual encrypted algorithm, loathed since before it started, has never been truly broken, at least entirely offline. Now, far past the logical end of all things XP, the solution exists, floating around the web's forum-based backchannels for months now.

On the blog of tinyapps.org (first spotted by The Register), which provides micro-scale, minimalist utilities for constrained Windows installations, a blog post appropriately titled "Windows XP Activation: GAME OVER" runs down the semi-recent history of folks looking to activate Windows XP more than 20 years after it debuted, nine years after its end of life, and, crucially, some years after Microsoft turned off its online activation servers (or maybe they just swapped certificates).

xp_activate32.exe, an 18,432-byte program (hash listed on tinyapps' blog post), takes the code generated by Windows XP's phone activation option and processes it into a proper activation key (Confirmation ID), entirely offline. It's persistent across system wipes and re-installs. It is, seemingly, the same key Microsoft would provide for your computer.

Read 2 remaining paragraphs | Comments

Reference : https://ift.tt/Q8tPfzl

Thursday, May 25, 2023

Who’s the Coolest Robot of All?




Calling all robot fanatics! We are the creators of the Robots Guide, IEEE’s interactive site about robotics, and we need your help.

Today, we’re expanding our massive catalog to nearly 250 robots, and we want your opinion to decide which are the coolest, most wanted, and also creepiest robots out there.

To submit your votes, find robots on the site that are interesting to you and rate them based on their design and capabilities. Every Friday, we’ll crunch the votes to update our Robot Rankings.

Screenshot of Robots Guide site showing the robot ratings module, with overall rating, want this robot rating, and appearance rating. Rate this robot: For each robot on the site, you can submit your overall rating, answer if you’d want to have this robot, and rate its appearance. IEEE Spectrum

May the coolest (or creepiest) robot win!

Our collection currently features 242 robots, including humanoids, drones, social robots, underwater vehicles, exoskeletons, self-driving cars, and more.

Screenshot of Robots Guide showing the Robot Rankings page with three rankings, Top Rated, Most Wanted, and Creepiest. The Robots Guide features three rankings: Top Rated, Most Wanted, and Creepiest. IEEE Spectrum

You can explore the collection by filtering robots by category, capability, and country, or sorting them by name, year, or size. And you can also search robots by keywords.

In particular, check out some of the new additions, which could use more votes. These include some really cool robots like LOVOT, Ingenuity, GITAI G1, Tertill, Salto, Proteus, and SlothBot.

Each robot profile includes detailed tech specs, photos, videos, history, and some also have interactives that let you move and spin robots 360° on the screen.

And note that these are all real-world robots. If you’re looking for sci-fi robots, check out our new Face-Off: Sci-Fi Robots game.

Robots Redesign

Today, we’re also relaunching the Robots Guide site with a fast and sleek new design, more sections and games, and thousands of photos and videos.

The new site was designed by Pentagram, the prestigious design consultancy, in collaboration with Standard, a design and technology studio.


The site is built as a modern, fully responsive web app. It’s powered by Remix.run, a React-based web framework, with structured content by Sanity.io and site search by Algolia.

More highlights:

  • Explore nearly 250 robots
  • Make robots move and spin 360°
  • View over 1,000 amazing photos
  • Watch 900 videos of robots in action
  • Play the Sci-Fi Robots Face-Off game
  • Keep up to date with daily robot news
  • Read detailed tech specs about each robot
  • Robot Rankings: Top Rated, Most Wanted, Creepiest

The Robots Guide was designed for anyone interested in learning more about robotics, including robot enthusiasts, both experts and beginners, researchers, entrepreneurs, STEM educators, teachers, and students.

The foundation for the Robots Guide is IEEE’s Robots App, which was downloaded 1.3 million times and is used in classrooms and STEM programs all over the world.

The Robots Guide is an editorial product of IEEE Spectrum, the world’s leading technology and engineering magazine and the flagship publication of the IEEE.

Reference: https://ift.tt/WB0rkTh

Unearthed: CosmicEnergy, malware for causing Kremlin-style power disruptions


Enlarge (credit: Getty Images)

Researchers have uncovered malware designed to disrupt electric power transmission that may have been used by the Russian government in training exercises for creating or responding to cyberattacks on electric grids.

Known as CosmicEnergy, the malware has capabilities that are comparable to those found in malware known as Industroyer and Industroyer2, both of which have been widely attributed by researchers to Sandworm, the name of one of the Kremlin’s most skilled and cutthroat hacking groups. Sandworm deployed Industroyer in December 2016 to trigger a power outage in Kyiv, Ukraine, that left a large swath of the city without power for an hour. The attack occurred almost a year after an earlier one disrupted power for 225,000 Ukrainians for six hours. Industroyer2 came to light last year and is believed to have been used in a third attack on Ukraine’s power grids, but it was detected and stopped before it could succeed.

The attacks illustrated the vulnerability of electric power infrastructure and Russia’s growing skill at exploiting it. The attack in 2015 used repurposed malware known as BlackEnergy. While the resulting BlackEnergy3 allowed Sandworm to successfully break into the corporate networks of Ukrainian power companies and further encroach on their supervisory control and data acquisition systems, the malware had no means to interface with operational technology gear directly.

Read 6 remaining paragraphs | Comments

Reference : https://ift.tt/o2wc6gk

OpenAI CEO raises $115M for crypto company that scans people’s eyeballs


Enlarge / Worldcoin's "Orb," a device that scans your eyeballs to verify that you're a real human.

A company co-founded by OpenAI CEO Sam Altman has raised $115 million for Worldcoin, a crypto coin project that scans users' eyeballs in order "to establish an individual's unique personhood." In addition to leading the maker of ChatGPT and GPT-4, Altman is co-founder and chairman of Tools for Humanity, a company that builds technology for the Worldcoin project.

Tools for Humanity today announced $115 million in Series C funding from Blockchain Capital, Andreessen Horowitz's crypto fund, Bain Capital Crypto, and Distributed Global. Blockchain Capital said that Worldcoin's "World ID" system that involves eyeball-scanning will make it easier for applications to distinguish between bots and humans.

The Orb's components. (credit: Worldcoin)

"Worldcoin strives to become the world's largest and most inclusive identity and financial network, built around World ID and the Worldcoin token—a public utility that will be owned by everyone regardless of their background or economic status," the crypto firm's funding press release said.

Read 26 remaining paragraphs | Comments

Reference : https://ift.tt/yuPEVKi

Meet the Forksheet: Imec’s In-Between Transistor




The most advanced manufacturers of computer processors are in the middle of the first big change in device architecture in a decade—the shift from finFETs to nanosheets. Another ten years should bring about another fundamental change, where nanosheet devices are stacked atop each other to form complementary FETs (CFETs), capable of cutting the size of some circuits in half. But the latter move is likely to be a heavy lift, say experts. An in-between transistor called the forksheet might keep circuits shrinking without quite as much work.

The idea for the forksheet came from exploring the limits of the nanosheet architecture, says Julien Ryckaert, the vice president for logic technologies at Imec. The nanosheet’s main feature is its horizontal stacks of silicon ribbons surrounded by its current-controlling gate. Although nanosheets only recently entered production, experts were already looking for their limits years ago. Imec was tasked with figuring out “at what point nanosheet will start tanking,” he says.

Ryckaert’s team found that one of the main limitations to shrinking nanosheet-based logic is keeping the separation between the two types of transistor that make up CMOS logic. The two types—NMOS and PMOS—must maintain a certain distance to limit capacitance that saps the devices’ performance and power consumption. “The forksheet is a way to break that limitation,” Ryckaert says.

Instead of individual nanosheet devices, the forksheet scheme builds them as pairs on either side of a dielectric wall. (No, it doesn’t really resemble a fork much.) The wall allows the devices to be placed closer together without causing a capacitance problem, says Naoto Horiguchi, the director of CMOS technology at Imec. Designers could use the extra space to shrink logic cells, or they could use the extra room to build transistors with wider sheets leading to better performance, he says.

Four multicolored blocks with arrows between them indicating a progression. Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec

“CFET is probably the ultimate CMOS architecture,” says Horiguchi of the device that Imec expects to reach production readiness around 2032. But he adds that CFET “integration is very complex.” Forksheet reuses most of the nanosheet production steps, potentially making it an easier job, he says. Imec predicts it could be ready around 2028.

There are still many hurdles to leap over, however. “It’s more complex than initially thought,” Horiguchi says. From a manufacturing perspective, the dielectric wall is a bit of a headache. There are several types of dielectric used in advanced CMOS and several steps that involve etching it away. Making forksheets means etching those others without accidentally attacking the wall. And it’s still an open question which types of transistor should go on either side of the wall, Horiguchi says. The initial idea was to put PMOS on one side and NMOS on the other, but there may be advantages to putting the same type on both sides instead.

Reference: https://ift.tt/py6SD0N

Minnesota enacts right-to-repair law that covers more devices than any other state


Enlarge / Minnesota's right-to-repair bill is the first to pass in the US that demands broad access to most electronics' repair manuals, tools, and diagnostic software. Game consoles, medical devices, and other specific gear, however, are exempted. (credit: Getty Images)

It doesn't cover video game consoles, medical gear, farm or construction equipment, digital security tools, or cars. But in demanding that manuals, tools, and parts be made available for most electronics and appliances, Minnesota's recently passed right-to-repair bill covers the most ground of any US state yet.

The Digital Right to Repair bill, passed as part of an omnibus legislation and signed by Gov. Tim Walz on Wednesday, "fills in many of the loopholes that watered down the New York Right to Repair legislation," said Nathan Proctor, senior director for the Public Interest Research Group's right-to-repair campaign, in a post.

New York's bill, beset by lobbyists, was signed in modified form by Gov. Kathy Hochul. It also exempted motor vehicles and medical devices, as well as devices sold before July 1, 2023, and all "business-to-business" and "business-to-government" devices. The modified bill also allowed manufacturers to sell "assemblies" of parts—like a whole motherboard instead of an individual component, or the entire top case Apple typically provides instead of a replacement battery or keyboard—if an improper individual part installation "heightens the risk of injury."

Read 8 remaining paragraphs | Comments

Reference : https://ift.tt/X4gIBKA

Wednesday, May 24, 2023

Grimes Reviews A.I. Grimes Songs


The producer and pop singer, long a proponent of technological experimentation, has “open-sourced” her voice using new A.I. tools. She’s been impressed by the results.

Brain and Spine Implants Allow Paralyzed Man to Walk Naturally Again


In a new study, researchers describe a device that connects the intentions of a paralyzed patient to his physical movements.

AI Needs an International Watchdog, OpenAI Leaders Say


To manage its risks, “superintelligent” artificial intelligence should be governed by a body similar to the International Atomic Energy Agency, the lab’s leadership said in note on its website.

What to Know About Limiting Your Child’s Screen Time


Concerned parents have many tools, including free software from Apple and Google, to actively oversee how children use their tech.


The Sneaky Standard

A version of this post originally appeared on Tedium , Ernie Smith’s newsletter, which hunts for the end of the long tail. Personal c...