Sunday, August 31, 2025

Fiber to the Home




The crew has come: caravan of trucks

stationed on the street, unstacking cones

and digging ditches, deft and efficient.

Here in our yards, long years of earth

have been hefted by hand and heaped up on tarps.

A pneumatic mole emerges from the trailer,

and a heavy hose is hauled into place.

With a pop, the pumping compressor wakes

with startling strength. The strata are threaded,

pierced by the pounding power that forges

a buried boulevard. This burrow will convey

packets with payloads, pulses of light

modulated with meaning in marks and spaces,

carrying commerce and conversation.

The uproar ebbs by afternoon.

Machines are shut down and shovels return,

covering conduits with clods of soil.

The sod is reset and soaked thoroughly.

It’s late now. They load the last of the gear.

The dirt-girded duct is dark and untapped.

The glass-road will run to reach the houses

after fees are paid, when the final strands

will mate with modems and make connections.

Reference: https://ift.tt/Ec6rmi4

Tech Founders Must Prioritize the Problem Before Their Solution




When I left Los Alamos National Laboratory to start a company 11 years ago, I thought my team was ready. We had developed a new class of quantum dots—nanoscale particles of light-emitting semiconductor material that can be used in displays, solar cells, and more. Our technology was safer, more stable, and less expensive than existing quantum-dot materials. The technical advantages were real, but I quickly learned that no amount of scientific merit guarantees market success.

For many tech-startup founders, this is an uncomfortable but necessary realization. You can build an elegant solution, but if it doesn’t solve a meaningful problem in the market, it won’t go anywhere commercially. The earlier you embrace that lesson, the better your odds of success.

Your Invention Isn’t the Business

My background is in research. I have a Ph.D. in materials science from the University of Illinois–Urbana-Champaign, and I did a postdoc at Los Alamos, in New Mexico, working on nanomaterials in the chemistry division. My focus was always on advancing scientific knowledge, publishing papers, and in some cases filing patents. Like many researchers, I eventually grew tired of chasing citations and wanted to apply that work to the real world.

That’s why I started UbiQD. We had a material that solved the technical shortcomings of conventional quantum dots, which require toxic heavy metals like cadmium or lead and involve expensive manufacturing processes. However, when we first introduced it to the market, the conversations we had were eye-opening. People didn’t care about the material for its own sake. They cared about whether it solved their problem, and the severity of the problem defined their urgency.

My advice: If you can’t clearly explain how your technology makes someone’s life easier, safer, more sustainable, or more profitable, you’re not ready to sell it.

“Throwing It Over the Fence” Doesn’t Work

Our early thinking was overly simplistic: Create better quantum dots, scale production, and let customers apply the technology to the industries that benefit from quantum dots’ ability to manipulate light. We figured, if we make it, the customers will come.

To speed things up, we offered research-grade samples for testing. A number of early adopters asked for samples, but that “throw it over the fence” approach typically doesn’t work with a novel enabling technology. Whether it’s advanced materials, hardware, or software, you can’t expect customers to figure out what solution works best; that’s your job.

So before scaling your tech, spend time with potential customers. Listen more than you talk and identify their true pain points. Ask them what keeps them up at night. That’s where the real opportunities lie: in the chances to provide a must-have painkiller, rather than a nice-to-have daily vitamin.

Shelve the Ideas That Don’t Fit

One of the hardest lessons tech founders must learn is how to recognize when a beloved idea doesn’t align with market needs.

Take solar windows for greenhouses, one of the ideas we thought would be a hit but then had to shelve. Greenhouses spend a lot on electricity, so it seemed logical that they’d want to generate electricity directly in the facade of the greenhouse. However, growers told us their biggest concern was crop yield, not operational costs. Light-absorbing windows could potentially cause a slight reduction in yield, and any such reduction—even with energy savings—would likely hurt their bottom line.


So we paused the solar-window idea in 2018 and focused instead on a simpler, higher-impact product: greenhouse films that shift the color of light to help plants grow faster. The growers cared about yield, and that’s what we addressed using our technology. This agricultural application is now one of UbiQD’s main focus areas.

Don’t get emotionally attached to one application or use case. If your company is built on a platform technology, stay flexible. The market will tell you where your technology fits—and where it doesn’t.

Competition Means You’re on the Right Track

Many founders dread competition. I see it differently. When we entered the agriculture space, we saw other startups and a few large companies exploring similar ideas. That wasn’t discouraging. It was validating. If no one else is working on the problem, it might not be a worthwhile opportunity.

Some startups also try to stay under the radar to gain an edge. But if potential partners or early customers don’t know you’re working on a problem, they can’t contribute to, challenge, or help accelerate your solution. That said, differentiation matters. Our edge comes from robust intellectual property, technical depth, and years of hard-earned data. We’ve had lots of help and input from outside the company, and some of our best customers found us first.

Expect and embrace competition, and don’t be shy about it. Just make sure you have a defendable advantage—whether through technology, partnerships, data, or expertise.

Earn the Right to Expand

As you begin to succeed in one market, you’ll be tempted to expand quickly into others. But be cautious about moving too fast. We’ve turned down plenty of tempting market opportunities over the years because we hadn’t earned the right to go after them yet.

For example, applying our materials to cosmetics or paints is exciting, but until we achieved sufficient production scale and cost reduction, it didn’t make sense economically. Now that we’ve lowered costs and built better infrastructure, those markets are back on the table, but we understood this only after studying potential customers and their needs.

Build scale and generate revenue in your first market, and only then explore adjacent opportunities.

Advice I Wish I Had Heard Earlier

If I could give my younger self just one piece of advice, it would be this: Fall in love with the problem first, then the solution.

As a tech-company founder, I spent years perfecting our technology and publishing papers about how great the science behind our solutions was. Building a company, though, also means understanding your customers and the economics of solving their problems.

Science and engineering are critical, but so are customer discovery, product management, and market research. Those skills are essential, and you’ll probably need them sooner than you think.

So get out of the lab. Talk to potential customers as early as possible. Be ready to adapt as you listen. And remember, the value of your technology comes from the problem it solves.

Reference: https://ift.tt/AUnMCm8

Saturday, August 30, 2025

This DIY Test Equipment Could Save Your Radio




Recently I noticed an irresistible offer on Craigslist: a Majestic 3C70 AM/shortwave radio for just US $50. This model dates from the 1930s, when such radios came in gorgeous wooden cabinets. The specimen I stumbled on was still in the possession of the original owner, who used to listen to it with her family when she was a little girl. The wood and speaker fabric were nicely preserved, probably looking much as they did when Japan attacked Pearl Harbor. I snatched it up.

I knew at the very least I’d need to replace a bunch of capacitors. But after scrutinizing the underside of the chassis, I realized I’d be doing a lot more, as much of the original wire insulation had disintegrated. Thus began a journey that eventually led me to build my own version of a critical piece of restoration technology: a dim-bulb tester.

My journey started with online searching that turned up a circuit diagram for my radio, along with plenty of advice from vintage-electronics restoration experts. The chief piece of wisdom was “Be careful.” Even when new, electronics of the vacuum-tube era could be dangerous. Being the cautious type, I wanted to take all appropriate safety measures.

In particular, when working with tube-era electronics, you should resist the urge to just plug it in to see if it works. Decades-old paper and electrolytic capacitors are almost guaranteed to be bad. And much else could be amiss as well. Instead, make the repairs and upgrades you determine are needed first. Even then, don’t just plug in your relic and flip the power switch. Better to start it up gently to look for signs of trouble.

How Does a Dim-Bulb Tester Work?

But how do you turn on old equipment gently? That concept was foreign to me, having grown up in the transistor era. And this is when I learned about dim-bulb testers. They take advantage of the fact that the resistance of an ordinary incandescent light bulb increases markedly as the filament heats up. The tester sits between your device and the wall plug. The bulb is wired in series to the power line and acts as a current limiter: Even if a component or wire in your device fails and causes a short, the current flowing into the device won’t exceed the current that would normally flow through the bulb. You can control the maximum current by using bulbs of different wattages.
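
To get a feel for the numbers, here is a minimal sketch of that current-limiting arithmetic in Python. It assumes a 120-volt line and treats the filament at its rated (hot) resistance, a simplification since a filament’s resistance climbs steeply as it warms; the figures are rough estimates for illustration, not measurements from my build.

```python
# Rough numbers for a dim-bulb tester's current limit, assuming a 120 V line
# and treating the filament at its rated (hot) resistance. This is a
# simplification: a cold filament has far lower resistance, and resistance
# rises steeply as it heats, which is exactly what makes the tester useful.

LINE_VOLTAGE = 120.0  # volts, US mains (an assumption for this sketch)

def bulb_hot_resistance(watts, volts=LINE_VOLTAGE):
    """Approximate filament resistance at full brightness: R = V^2 / P."""
    return volts ** 2 / watts

def worst_case_current(watts, volts=LINE_VOLTAGE):
    """Current if the device under test is a dead short: the whole line
    voltage then drops across the bulb, so I = V / R = P / V."""
    return watts / volts

if __name__ == "__main__":
    for watts in (25, 40, 60, 100):
        print(f"{watts:3d} W bulb: ~{bulb_hot_resistance(watts):.0f} ohms hot, "
              f"limits a dead short to ~{worst_case_current(watts):.2f} A")
```

Even a 100-watt bulb caps a dead short at less than an ampere, which is the whole point of starting an old radio up gently.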

Because the dim-bulb tester relies on an incandescent bulb [top middle], a certain retro look is guaranteed. I leaned into this aesthetic by using vintage analog meters [top left and right] and having a metal front panel custom-made by a sign maker [bottom]. James Provost

Sure, you can cobble together such a tester using just an outlet box, a lamp base, and a switch. But I decided to go all out on the safety front and build a more fully featured dim-bulb tester, something akin to a design that I saw online that includes a variable transformer along with panel meters to monitor voltage and current. And for fun, I decided to give my tester a vintage look.

I hunted on eBay for vintage bits and pieces (or ones that could pass as vintage). While the effort to make my tester look old increased the cost and slowed construction, I was beginning to like the idea of restoring old electronics as a new hobby, so I figured: Why not?

The end result was a unit that included two Triplett analog panel meters that, best I can figure out, date from shortly after the Second World War. It also includes three indicator lights that must be from the 1950s. They adorn a front panel that I fabricated by ordering a custom aluminum sign and cutting the openings using hole saws.

The dim-bulb tester allows me to ramp up the voltage applied to old equipment. The resistance of the bulb prevents damaging currents from flowing into the equipment while I look for any signs of trouble. James Provost

Choosing the proper enclosure for my ersatz test instrument was one of the bigger challenges. Large enclosures tend to be expensive, and I also struggled to find something that wouldn’t have looked out of place in the TV repair shops of my youth. The solution was to purchase a damaged vintage test instrument (a tube-equipped signal generator), pull the chassis out, and use its painted steel enclosure. I bought it for less than I would’ve paid for a new enclosure. I also bought a small collection of incandescent light bulbs of different wattages. Assembling my tester was straightforward.

I wasn’t quite done, though. In my investigations into how to repair vintage electronics safely, I learned about using an isolation transformer to help protect against shocks. I toyed with the idea of building one into my dim-bulb tester’s enclosure, but I decided it was more practical to purchase a stand-alone unit. I got a used one for a good price, but it took some work to fix and modify it so that it truly isolated the input from the output. (Oddly enough, commercial units don’t typically offer full isolation—you have to mod them for this.) I figure that I can just plug my device into my dim-bulb tester, plug the tester into the isolation transformer, then plug the transformer into the wall.

With my completed tester ready to go, I carefully examined the wiring and components of my Majestic radio and ordered what I think I’ll need to fix it. I’ve just received the box of components from Mouser, so repair and live testing will begin shortly. I should add that while working on my dim-bulb tester, I couldn’t resist making another $50 antique-radio purchase: a Zenith AM/FM tabletop radio from the late 1950s. The person I bought it from said that it works, but I now know there’s a right way and a wrong way to verify that assertion. So I’ve got plenty to keep me busy in my newfound hobby—along with the gear I need to pursue it safely.

Reference: https://ift.tt/o71zFYL

Friday, August 29, 2025

Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganisation of Meta’s senior leadership in the group’s 20-year history.

Read full article

Comments

Reference : https://ift.tt/UbSzfLv

Video Friday: Spot’s Got Talent




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Boston Dynamics is back and their dancing robot dogs are bigger, better, and bolder than ever! Watch as they bring a “dead” robot to life and unleash a never before seen synchronized dance routine to “Good Vibrations.”

And much more interestingly, here’s a discussion of how they made it work:

[ Boston Dynamics ]

I don’t especially care whether a robot falls over. I care whether it gets itself back up again.

[ LimX Dynamics ]

The robot autonomously connects multiple wires to the environment using small flying anchors—drones equipped with anchoring mechanisms at the wire tips. Guided by an onboard RGB-D camera for control and environmental recognition, the system enables wire attachment in unprepared environments and supports simultaneous multi-wire connections, expanding the operational range of wire-driven robots.

[ JSK Robotics Laboratory ] at [ University of Tokyo ]

Thanks, Shintaro!

For a robot that barely has a face, this is some pretty good emoting.

[ Pollen ]

Learning skills from human motions offers a promising path toward generalizable policies for whole-body humanoid control, yet two key cornerstones are missing: (1) a scalable, high-quality motion tracking framework that faithfully transforms kinematic references into robust, extremely dynamic motions on real hardware, and (2) a distillation approach that can effectively learn these motion primitives and compose them to solve downstream tasks. We address these gaps with BeyondMimic, a real-world framework to learn from human motions for versatile and naturalistic humanoid control via guided diffusion.

[ Hybrid Robotics ]

Introducing our open-source metal-made bipedal robot MEVITA. All components can be procured through e-commerce, and the robot is built with a minimal number of parts. All hardware, software, and learning environments are released as open source.

[ MEVITA ]

Thanks, Kento!

I’ve always thought that being able to rent robots (or exoskeletons) to help you move furniture or otherwise carry stuff would be very useful.

[ DEEP Robotics ]

A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot. The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.

[ Georgia Tech ]

Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes to learn an attention-based map encoding conditioned on robot proprioception, which is trained as part of the end-to-end controller using reinforcement learning. We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains.

[ Paper ] from [ ETH Zurich ]

In the fifth installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Google DeepMind’s Chief Scientist Jeff Dean for a conversation about the origin of Jeff’s pioneering work scaling neural networks. They discuss the first time AI captured Jeff’s imagination, the earliest Google Brain framework, the team’s stratospheric advancements in image recognition and speech-to-text, how AI is evolving, and more.

[ Moonshot Podcast ]

Reference: https://ift.tt/qtp4mib

6G Wireless Networks to Use Satellites as Base Stations




The future of wireless communication is today being sketched out in the skies and in space. A new generation of intelligent aerospace platforms—drones, airships, and satellites—will be part of tomorrow’s 6G networks, acting as, in effect, base stations in the sky. They’re expected to roll out in the early 2030s.

Researchers at the King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia, are in the vanguard of innovators now imagining next-gen telecom networks in the atmosphere, the stratosphere, and orbit.

The sky won't be the limit for next-gen wireless platforms


Diagram of satellites, airships, and drones aiding network communication from space to Earth.


Reference: https://ift.tt/nlp8D7H

Google warns that mass data theft hitting Salesloft AI agent has grown bigger


Google is advising users of the Salesloft Drift AI chat agent to consider all security tokens connected to the platform compromised following the discovery that unknown attackers used some of the credentials to access email from Google Workspace accounts.

In response, Google has revoked the tokens that were used in the breaches and disabled integration between the Salesloft Drift agent and all Workspace accounts as it investigates further. The company has also notified all affected account holders of the compromise.

Scope expanded

The discovery, reported Thursday in an advisory update, indicates that a Salesloft Drift breach it reported on Tuesday is broader than previously known. Prior to the update, members of the Google Threat Intelligence Group said the compromised tokens were limited to Salesloft Drift integrations with Salesforce. The compromise of the Workspace accounts prompted Google to change that assessment.

Read full article

Comments

Reference : https://ift.tt/MRNlct3

Thursday, August 28, 2025

The First Inkjet Printer Was a Medical Device




Millions of people worldwide have reason to be thankful that Swedish engineer Rune Elmqvist decided not to practice medicine. Although qualified as a doctor, he chose to invent medical equipment instead. In 1949, while working at Elema-Schonander (later Siemens-Elema), in Stockholm, he applied for a patent for the Mingograph, the first inkjet printer. Its movable nozzle deposited an electrostatically controlled jet of ink droplets on a spool of paper.

Rune Elmqvist qualified to be a physician, but he devoted his career to developing medical equipment, like this galvanometer. Håkan Elmqvist/Wikipedia

Elmqvist demonstrated the Mingograph at the First International Congress of Cardiology in Paris in 1950. It could record physiological signals from a patient’s electrocardiogram or electroencephalogram in real time, aiding doctors in diagnosing heart and brain conditions. Eight years later, he worked with cardiac surgeon Åke Senning to develop the first fully implantable pacemaker. So whether you’re running documents through an inkjet printer or living your best life due to a pacemaker, give a nod of appreciation to the inventive Dr. Elmqvist.

The world’s first inkjet printer

Rune Elmqvist was an inquisitive person. While still a student, he invented a specialized potentiometer to measure pH and a portable multichannel electrocardiograph. In 1940, he became head of development at the Swedish medical electronics company Elema-Schonander.

Before the Mingograph, electrocardiograph machines relied on a writing stylus to trace the waveform on a moving roll of paper. But friction between the stylus and the paper prevented small changes in the electrical signal from being accurately recorded. Elmqvist’s initial design was a modified oscillograph. Traditionally, an oscillograph used a mirror to reflect a beam of light (converted from the electrical signal) onto photographic film or paper. Elmqvist swapped out the mirror for a small, moveable glass nozzle that continuously sprayed a thin stream of liquid onto a spool of paper. The electrical signal electrostatically controlled the jet.

The Mingograph was originally used to record electrocardiograms of heart patients. It soon found use in many other fields. Siemens Healthineers Historical Institute

By eliminating the friction of a stylus, the Mingograph (which the company marketed as the Mingograf) was able to record more detailed changes of the heartbeat. The machine had three paper-feed speeds: 10, 25, and 50 millimeters per second. The speed could be preset or changed while in operation.

An analog input jack on the Mingograph could be used to take measurements from other instruments. Researchers in disciplines far afield from medicine took advantage of this input to record pressure or sound. Phoneticians used it to examine the acoustic aspects of speech, and zoologists used it to record birdsongs. Throughout the second half of the 20th century, scientists cited the Mingograph in their research papers as an instrument for their experiments.

Today, the Mingograph isn’t that widely known, but the underlying technology, inkjet printing, is ubiquitous. Inkjets dominate the home printer market, and specialized printers print DNA microarrays in labs for genomics research, create electrical traces for printed circuit boards, and much more, as Phillip W. Barth and Leslie A. Field describe in their 2024 IEEE Spectrum article “Inkjets Are for More Than Just Printing.”

The world’s first implantable pacemaker

Despite the influence of the Mingograph on the evolution of printing, it is arguably not Elmqvist’s most important innovation. The Mingograph helped doctors diagnose heart conditions, but it couldn’t save a patient’s life by itself. One of Elmqvist’s other inventions could and did: the first fully implantable, rechargeable pacemaker.

The first implantable pacemaker [left] from 1958 had batteries that needed to be recharged once a week. The 1983 pacemaker [right] was programmable, and its batteries lasted several years. Siemens Healthineers Historical Institute

Like many stories in the history of technology, this one was pushed into fruition at the urging of a woman, in this case Else-Marie Larsson. Else-Marie’s 43-year-old husband, Arne, suffered from scarring of his heart tissue due to a viral infection. His heart beat so slowly that he constantly lost consciousness, a condition known as Stokes-Adams syndrome. Else-Marie refused to accept his death sentence and searched for an alternative. After reading a newspaper article about an experimental implantable pacemaker being developed by Elmqvist and Senning at the Karolinska Hospital in Stockholm, she decided that her husband would be the perfect candidate to test it out, even though it had been tried only on animals up until that point.

External pacemakers—that is, devices outside the body that regulated the heart beat by applying electricity—already existed, but they were heavy, bulky, and uncomfortable. One early model plugged directly into a wall socket, so the user risked electric shock.

By comparison, Elmqvist’s pacemaker was small enough to be implanted in the body and posed no shock risk. Fully encased in an epoxy resin, the disk-shaped device had a diameter of 55 mm and a thickness of 16 mm—the dimensions of the Kiwi Shoe Polish tin in which Elmqvist molded the first prototypes. It used silicon transistors to pace a pulse with an amplitude of 2 volts and duration of 1.5 milliseconds, at a rate of 70 to 80 beats per minute (the average adult heart rate).

The pacemaker ran on two rechargeable 60-milliampere-hour nickel-cadmium batteries arranged in series. A silicon diode connected the batteries to a coil antenna. A 150-kilohertz radio loop antenna outside the body charged the batteries inductively through the skin. The charge lasted about a week, but it took 12 hours to recharge. Imagine having to stay put that long.
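
As a rough sanity check on those figures, the sketch below works through the battery budget they imply. One number is not in the article and is assumed purely for illustration: a tissue-and-electrode load of about 500 ohms.

```python
# Back-of-the-envelope check on the 1958 pacemaker's battery budget, using
# the figures in the article plus one assumption: a tissue/electrode load
# of roughly 500 ohms (not stated in the source).

CAPACITY_MAH = 60.0        # two 60 mAh NiCd cells in series: voltage doubles,
                           # capacity stays about 60 mAh
HOURS_PER_WEEK = 7 * 24

PULSE_VOLTS = 2.0          # pulse amplitude, from the article
PULSE_SECONDS = 1.5e-3     # pulse duration, from the article
RATE_PER_MIN = 75          # midpoint of the 70-80 beats-per-minute range
LOAD_OHMS = 500.0          # assumed tissue/electrode impedance

# Average current the batteries must have supplied if a charge lasted a week
avg_drain_ma = CAPACITY_MAH / HOURS_PER_WEEK
print(f"Average drain for a one-week charge: ~{avg_drain_ma * 1000:.0f} microamps")

# Average current consumed by the stimulus pulses alone
pulse_current_ma = PULSE_VOLTS / LOAD_OHMS * 1000.0
duty_cycle = PULSE_SECONDS * (RATE_PER_MIN / 60.0)
print(f"Average current of the pulses themselves: "
      f"~{pulse_current_ma * duty_cycle * 1000:.1f} microamps")

# The gap between the two suggests most of the drain went to the oscillator
# and output circuitry rather than to the heart itself -- a rough estimate,
# not a definitive analysis of Elmqvist's design.
```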

In 1958, over 30 years before this photo, Arne Larsson [right] received the first implantable pacemaker, developed by Rune Elmqvist [left] at Siemens-Elema. Åke Senning [center] performed the surgery. Sjöberg Bildbyrå/ullstein bild/Getty Images

Else-Marie’s persuasion and persistence pushed Elmqvist and Senning to move from animal tests to human trials, with Arne as their first case study. During a secret operation on 8 October 1958, Senning placed the pacemaker in Arne’s abdomen wall with two leads implanted in the myocardium, a layer of muscle in the wall of the heart. The device lasted only a few hours. But its replacement, which happened to be the only spare at the time, worked perfectly for six weeks and then off and on for several more years.

Arne Larsson lived another 43 years after his first pacemaker was implanted. Shown here are five of the pacemakers he received. Sjöberg Bildbyrå/ullstein bild/Getty Images

Arne Larsson clearly was happy with the improvement the pacemaker made to his quality of life because he endured 25 more operations over his lifetime to replace each failing pacemaker with a new, improved iteration. He managed to outlive both Elmqvist and Senning, finally dying at the age of 86 on 28 December 2001. Thanks to the technological intervention of his numerous pacemakers, his heart never gave out. His cause of death was skin cancer.

Today, more than a million people worldwide have pacemakers implanted each year, and an implanted device can last up to 15 years before needing to be replaced. (Some pacemakers in the 1980s used nuclear batteries, which could last even longer, but the radioactive material was problematic. See “The Unlikely Revival of Nuclear Batteries.”) Additionally, some pacemakers also incorporate a defibrillator to shock the heart back to a normal rhythm when it gets too far out of sync. This lifesaving device certainly has come a long way from its humble start in a shoe polish tin.

Rune Elmqvist’s legacy

Whenever I start researching the object of the month for Past Forward, I never know where the story will take me or how it might hit home. My dad lived with congestive heart failure for more than two decades and absolutely loved his pacemaker. He had a great relationship with his technician, Francois, and they worked together to fine-tune the device and maximize its benefits. And just like Arne Larsson, my dad died from an unrelated cause.

An engineer to the core, he would have delighted in learning about the history of this fantastic invention. And he probably would have been tickled by the fact that the same person also invented the inkjet printer. My dad was not a fan of inkjets, but I’m sure he would have greatly admired Rune Elmqvist, who saw problems that needed solving and came up with elegantly engineered solutions.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the September 2025 print issue.

References


There is frustratingly little documented information about the Mingograph’s origin story or functionality other than its patent. I pieced together how it worked by reading the methodology sections of various scientific papers, such as Alf Nachemson’s 1960 article in Acta Orthopaedica Scandinavica, “Lumbar Intradiscal Pressure: Experimental Studies on Post-mortem Material”; Ingemar Hjorth’s 1970 article in the Journal of Theoretical Biology, “A Comment on Graphic Displays of Bird Sounds and Analyses With a New Device, the Melograph Mona”; and Paroo Nihalani’s 1975 article in Phonetica, “Velopharyngeal Opening in the Formation of Voiced Stops in Sindhi.” Such sources reveal how this early inkjet printer moved from cardiology into other fields.

Descriptions of Elmqvist’s pacemaker were much easier to find, with Mark Nicholls’s 2007 profile “Pioneers of Cardiology: Rune Elmqvist, M.D.,” in Circulation: Journal of the American Heart Association, being the main source. Siemens also pays tribute to the pacemaker on its website; see, for example, “A Lifesaver in a Plastic Cup.”

Reference: https://ift.tt/oB5aHCT

Unpacking Passkeys Pwned: Possibly the most specious research in decades


Don’t believe everything you read—especially when it’s part of a marketing pitch designed to sell security services.

The latest example of the runaway hype that can come from such pitches is research published today by SquareX, a startup selling services for securing browsers and other client-side applications. It claims, without basis, to have found a “major passkey vulnerability” that undermines the lofty security promises made by Apple, Google, Microsoft, and thousands of other companies that have enthusiastically embraced passkeys.

Ahoy, face-palm ahead

“Passkeys Pwned,” the attack described in the research, was demonstrated earlier this month in a Defcon presentation. It relies on a malicious browser extension, installed in an earlier social engineering attack, that hijacks the process for creating a passkey for use on Gmail, Microsoft 365, or any of the other thousands of sites that now use the alternative form of authentication.

Read full article

Comments

Reference : https://ift.tt/xRaebGU

Data Centers May House AI—But Operators Don’t Trust AI (Yet)




AI is starting to be trusted with high-stakes tasks, including running automated factories and guiding military drones through hostile airspace. But when it comes to managing the data centers that power this AI revolution, human operators are far more cautious.

According to a new survey of over 600 data center operators worldwide by Uptime Institute, a data center inspection and rating firm, only 14 percent say they would trust AI systems to change equipment configurations, even if it’s trained on years of historical data. In the same survey, just 1 in 3 operators say they would trust AI systems to control data center equipment.

Their skepticism may be justified: Despite pouring tens of billions of US dollars into AI systems, 95 percent of organizations thus far lack a clear return on investment, according to a recent MIT report of generative AI usage. Advanced industries, which include factories and data centers, ranked near the bottom of the list of sectors transformed by AI, if at all.

Operator Trust in AI Systems

Even before the AI-driven push to expand data centers, data center operators themselves are known to be a relatively change-averse crowd who have been disappointed by buzzy technologies of the past, says Rose Weinschenk, a research associate at Uptime Institute. Operators often have electrical engineering or technical mechanical backgrounds, with training in the running of critical facilities; others work on the IT or network system side and are also considered operators.

Operator trust in AI declined every year for the three years following OpenAI’s release of ChatGPT in 2022. When asked by Uptime if they trusted a trained AI system to run data center operations, 24 percent of respondents said no in 2022 and 42 percent said no in 2024. While the public has marveled at the seemingly all-knowing nature of new large language models, operators seem to feel this type of AI is too limited and unpredictable for use in data centers.

But now, operators appear to have entered a “period of careful testing and validation” of different types of AI systems in certain data center operations, said Uptime research analyst Max Smolaks in a public webinar on the latest survey results. To capture changing sentiments, Uptime asked operators in 2025 in which applications AI might serve as a trustworthy decision-maker, assuming adequate past training. Over 70 percent of operators say they would trust AI to analyze sensor data or predict maintenance tasks for equipment, the survey shows.

“Data center operators are very, very happy to do certain things using AI, and they will never, never trust AI to do certain other things,” Smolaks said in the webinar.

AI’s Unpredictability in Data Centers

One reason why trust in AI is low for critical control of equipment is the technology’s unpredictability. Data centers are run on “good, old-fashioned” engineering, such as programmed if/then logic, says Robert Wright, the chief data center officer at Ilkari Data Centers, a data center startup company with two centers in Colombia and Iceland. “We say that we can’t run on luck, we have to run on certainty.”

Data centers are a complex series of systems that feed into each other. Mere seconds can pass before catastrophic failures occur that result in damaged chips, wasted money, angry customers, or fatal fires. In the high-stakes environment of data centers, anonymous posters on the r/datacenter Reddit forum who replied to an IEEE Spectrum query generally failed to see a reason to justify the risk that AI could bring.

Distrust may also mask an underlying job insecurity. Workers across many industries are concerned that AI will take their jobs. But the 2025 Uptime survey found that only one in five operators view AI as a way of reducing average staffing levels.

“Operators believe that today’s AI is not going to replace the staff required to run their facilities,” Smolaks said in the Uptime webinar. “It might be coming for office workers, but data center jobs appear to be safe from AI for now.”

But it’s understandable for early-career operators to still feel like this technology is coming for their jobs, says electrical engineer Jackson Fahrney, who has worked in data centers for over eight years. Someone just six months into the job may see an AI system as being told, “Here, train your replacement,” he says. In reality, he does not think AI will replace him or others inside data centers. Yet AI carries a more “ominous” presence in the workplace than machine learning tools, which have long been part of an operator’s toolkit and are meant to assist operators when making decisions.

It could be that AI is the cherry on top of an industry-wide trend to reduce the number of operators within data centers, says Chris McLean, a data center design and construction consultant.

Whereas 60 engineers might have run a data center in the past, now only six are needed, McLean says. Less is required from those six, as well, as more and more critical maintenance is being outsourced to specialists outside of the data center. “Now you offset all of your risk with a low-cost human and a high-cost AI,” McLean said. “And I’ve got to imagine that that’s scary for operators.”

That said, there are more data center jobs than qualified applicants, as previously reported by Spectrum. Two-thirds of operators struggle with staff retention or recruitment, according to Uptime’s 2025 survey, similar to the responses from surveys for the previous two years.

Efficient AI Algorithms for Data Centers

Still, there are useful algorithms built on decades of machine learning research that could make data center operation more efficient. The most established AI system for data centers is predictive maintenance, says Ilkari’s Wright. If the readings of a particular HVAC unit are rising faster than those from other units, for instance, the system can predict when that unit needs to be serviced.
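
A minimal sketch of that kind of check, with made-up hourly readings and an illustrative threshold standing in for a real monitoring system (none of the names or numbers below come from Ilkari or any vendor), might look like this:

```python
# Flag an HVAC unit whose readings are trending upward noticeably faster
# than its peers -- a toy version of the predictive-maintenance idea above.
from statistics import mean

def trend_per_hour(readings):
    """Crude linear trend: least-squares slope of readings taken hourly."""
    n = len(readings)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(readings)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, readings))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def flag_outliers(unit_readings, factor=2.0):
    """Return units whose warming trend exceeds `factor` times the fleet's
    typical trend (middle value used as a robust baseline)."""
    slopes = {unit: trend_per_hour(r) for unit, r in unit_readings.items()}
    baseline = sorted(slopes.values())[len(slopes) // 2]
    return [u for u, s in slopes.items() if baseline > 0 and s > factor * baseline]

# Hourly supply-air temperatures (degrees C) for three units -- made-up data
units = {
    "HVAC-1": [18.0, 18.1, 18.0, 18.2, 18.1, 18.2],
    "HVAC-2": [18.1, 18.3, 18.6, 18.9, 19.3, 19.8],   # warming fast
    "HVAC-3": [18.0, 18.0, 18.1, 18.1, 18.2, 18.2],
}
print(flag_outliers(units))  # expected: ['HVAC-2']
```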

Other AI systems focus on optimizing chiller plants, which are, in effect, the refrigerator systems that keep the data center cool by circulating chilled water and air. Chillers account for much of the energy consumed by data centers. Data about weather patterns, load on the grid, and equipment degradation over time all feed into a single AI system run on hardware within the facility to optimize the total energy consumption, says Michael Berger, who runs research and development at the Australia-based energy software company Conserve IT.

But Berger is quick to note that his AI optimization software does not control equipment. It runs on top of the basic control loop and refines parameters to use less energy while achieving the same outcome, he says. Berger prefers to call this system machine learning instead of AI because of how specialized it is to the needs of a data center.
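
In code, that supervisory pattern might look something like the sketch below: an optimization layer suggests a clamped chilled-water setpoint and hands it to the plant’s existing control loop, never touching the equipment directly. The heuristic, bounds, and interface names are illustrative assumptions, not Conserve IT’s software.

```python
# A minimal sketch of a supervisory optimizer that only refines a setpoint.
# The plant's existing control loop still does all the actuation. Everything
# here (names, bounds, the toy heuristic) is an illustrative assumption.

SETPOINT_MIN_C = 6.0    # never suggest anything outside these safe bounds
SETPOINT_MAX_C = 12.0

def suggest_chilled_water_setpoint(outdoor_temp_c, it_load_fraction):
    """Toy heuristic: warmer chilled water generally saves chiller energy,
    so nudge the setpoint up when the weather is mild and the IT load is light."""
    setpoint = 7.0 + 0.1 * (25.0 - outdoor_temp_c) + 3.0 * (1.0 - it_load_fraction)
    return max(SETPOINT_MIN_C, min(SETPOINT_MAX_C, setpoint))

def supervisory_step(plant, outdoor_temp_c, it_load_fraction):
    """Hand the suggestion to the plant's own control loop; never actuate
    chillers, pumps, or valves directly from this layer."""
    plant.update_setpoint(
        suggest_chilled_water_setpoint(outdoor_temp_c, it_load_fraction))

if __name__ == "__main__":
    # Mild evening, 60 percent IT load: suggest a slightly warmer setpoint
    print(suggest_chilled_water_setpoint(outdoor_temp_c=18.0, it_load_fraction=0.6))
```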

Others fully embrace AI, both the name and the technology, like Joe Minarik, the chief operating officer at DataBank, a Dallas-based data center company with 73 data centers across the U.S. and United Kingdom. He attributes his admittedly bullish attitude towards AI to his 17 years working for Amazon Web Services, where software is king. Currently, DataBank uses AI to write software, and there are plans to roll out AI systems for automated ticket generation and monitoring, as well as network configuration monitoring and adjustments, by the end of the year. AI for bigger tasks, such as cooling, is tentatively scheduled for late 2026, subject to the time it takes to train the AI on enough data, he said.

AI does hallucinate: Minarik has watched it give the wrong information and send his team down the wrong path. “We do, we see it happen today. But we also see it getting better and better once we give it more time,” he says.

The key is “tremendous amounts of data points” in order for AI to understand the system, Minarik says. It’s not unlike training a human data center engineer about every possible scenario that could happen within the halls of a data center.

Hyperscalers and enterprise data centers, whose single customer is the company that owns the data center, are deploying AI at a faster pace than commercial companies like DataBank. Minarik is hearing of AI systems that run entire networks for in-house data centers.

When DataBank rolls out AI for more significant data center operations, it will be kept on a tight leash, Minarik says. Operators will still make final executions.

While AI will undoubtedly change how data centers run, Minarik sees operators as a core part of that new future. Data centers are physical places with on-site activity. “AI can’t walk out there and change a spark plug,” he says, or hear an odd rattle from a server rack. Although Minarik says that one day there could be sensors for some of these issues, they’ll still need physical human techs to fix the equipment that keeps data centers running.

“If you want a safe job that can protect you from AI,” Minarik says, “Go to data centers.”

Reference: https://ift.tt/W853pBI

The personhood trap: How AI fakes human personality


Recently, a woman slowed down a line at the post office, waving her phone at the clerk. ChatGPT told her there's a "price match promise" on the USPS website. No such promise exists. But she trusted what the AI "knows" more than the postal worker—as if she'd consulted an oracle rather than a statistical text generator accommodating her wishes.

This scene reveals a fundamental misunderstanding about AI chatbots. There is nothing inherently special, authoritative, or accurate about AI-generated outputs. Given a reasonably trained AI model, the accuracy of any large language model (LLM) response depends on how you guide the conversation. They are prediction machines that will produce whatever pattern best fits your question, regardless of whether that output corresponds to reality.

Despite these issues, millions of daily users engage with AI chatbots as if they were talking to a consistent person—confiding secrets, seeking advice, and attributing fixed beliefs to what is actually a fluid idea-connection machine with no persistent self. This personhood illusion isn't just philosophically troublesome—it can actively harm vulnerable individuals while obscuring a sense of accountability when a company's chatbot "goes off the rails."

Read full article

Comments

Reference : https://ift.tt/cF1aSBR

Wednesday, August 27, 2025

Anthropic’s auto-clicking AI Chrome extension raises browser-hijacking concerns


As AI assistants become capable of controlling web browsers, a new security challenge has emerged: users must now trust that every website they visit won't try to hijack their AI agent with hidden malicious instructions. Experts voiced concerns about this emerging threat this week after testing from a leading AI chatbot vendor revealed that AI browser agents can be successfully tricked into harmful actions nearly a quarter of the time.

On Tuesday, Anthropic announced the launch of Claude for Chrome, a web browser-based AI agent that can take actions on behalf of users. Due to security concerns, the extension is only rolling out as a research preview to 1,000 subscribers on Anthropic's Max plan, which costs between $100 and $200 per month, with a waitlist available for other users.

The Claude for Chrome extension allows users to chat with the Claude AI model in a sidebar window that maintains the context of everything happening in their browser. Users can grant Claude permission to perform tasks like managing calendars, scheduling meetings, drafting email responses, handling expense reports, and testing website features.

Read full article

Comments

Reference : https://ift.tt/7BsQWIp

Trump Seeks to Cancel CHIPS Act R&D Organization’s Funds




The U.S. Commerce Department says it will not abide by an agreement to fund the U.S. CHIPS and Science Act’s R&D through the nonprofit set up to administer the program, called Natcast. Instead, it handed operational control to the National Institute of Standards and Technology (NIST).

Natcast was created in 2023 to oversee the National Semiconductor Technology Center (NSTC), which the law established to conduct “research and prototyping of advanced semiconductor technology and grow the domestic semiconductor workforce to strengthen the economic competitiveness and security of the domestic supply chain.”

The nonprofit was contracted to receive a total of US $7.4 billion, paid annually and as the organization reaches milestones. But Commerce Secretary Howard Lutnick claimed that Natcast doesn’t meet certain legal requirements, and therefore the contract, inked less than a week before Donald J. Trump took office for the second time, is illegal.

Several NSTC proponents whom IEEE Spectrum spoke to are concerned that the move could squander U.S. semiconductor leadership in the long term. The goal of the NSTC, those involved say, is to make gains in semiconductors from the CHIPS Act durable through continued advances.

Since its establishment, Natcast has been working to stand up three key centers to execute those functions. In Silicon Valley, it’s established a workforce development and design enablement center. In New York, it opened a center for extreme-ultraviolet lithography for cutting-edge chipmaking. And in Arizona, it plans to build a prototyping and packaging facility. The centers are intended to help startups and other companies more easily bridge the lab-to-fab gap that currently prevents new technologies from making it into commercial products.

“There were people from day one…who viewed [Natcast] as very much a political entity and wanted to undo it”

The CHIPS Act requires that the NSTC be operated as a “public private-sector consortium with participation from the private sector” instead of by a government agency. During the Biden administration, the Commerce Department created Natcast to fill that role, deliberately setting it up in a way to help maintain its independence from political interference.

In a public letter to Natcast CEO Deirdre Hanford, Lutnick cast the actions of Hanford, her staff, and the volunteer advisors involved in the organization’s creation as giving “the appearance of impropriety” and flouting “federal law.” “From the very beginning Natcast served as a semiconductor slush fund that did nothing but line the pockets of Biden loyalists with American tax dollars,” he said in a press release.

(IEEE Spectrum sought additional comment from the Commerce Department and from Natcast but did not receive a reply by press time.)

Very little funding has actually been delivered, sources say, in part because Commerce has held up its disbursement. (Despite this, NSTC does have a list of accomplishments and is planning a symposium in September at which it will unveil its research agenda.) Lutnick’s legal argument for refusing payment now is that Natcast wasn’t established in accordance with the Government Corporation Control Act, which lays out how government agencies establish or purchase corporations.

One person familiar with the situation who asked not to be named says that the structure of Natcast is typical of public-private partnerships and that its underpinnings were thoroughly reviewed by the Commerce Department before its establishment. What’s really at issue, this person says, is Natcast’s independence.

“What was set up… was always designed with a long-term strategy in mind. I don’t think they’ll get that back…. I think all of that has gone away with this decision”

“There were people from day one…who viewed [Natcast] as very much a political entity and wanted to undo it,” says this person.

In the letter, Lutnick takes aim at Hanford, formerly a top executive at electronic design automation giant Synopsys, as well as at Natcast staffers who came over from government during the Biden administration or from a volunteer industrial advisory committee that included IEEE Fellows and other chip industry leaders. Targeting such people is concerning, says one expert who preferred not to be named, because chip experts who choose to work in government or at Natcast are usually giving up more lucrative work to serve their country. It has the effect of “punishing patriotic behavior,” the expert said.

Delaying the work of the NSTC by attacking Natcast is counterproductive for the U.S. chip industry, the expert added. “We are in a race, and these delays make it all the more urgent.”

Commerce will likely find some way to spend the money on semiconductor R&D eventually, sources agreed. One expert told Spectrum they have faith in NIST’s ability to administer the research funding. Mark Granahan, an early proponent of the CHIPS Act and CEO of Ideal Semiconductor, in Bethlehem, Penn., went further. “If the administration has a different tactic but the same goal… not just independence in semiconductors but leadership… then NIST and other existing infrastructure is capable of handling things,” he said.

But other sources were skeptical it would have the same impact as Natcast. “What was set up… was always designed with a long-term strategy in mind,” said one person. “I don’t think they’ll get that back…. I think all of that has gone away with this decision.”

Reference: https://ift.tt/cai9NVA

It’s the End of the Line for AOL’s Dial-Up Service




The last time I used a dial-up modem came sometime around 2001. Within just a few years, dial-up had exited my life, never to return. I haven’t even had a telephone line in my house for most of my adult life.

This post originally appeared on Tedium.

But I still feel a strong tinge of sadness to know that AOL is finally retiring the ol’ hobbyhorse. At the end of September, it’s gone. The timeline is almost on-the-nose fitting: The widespread access to the Internet AOL’s service brought in the 1990s is associated with a digital phenomenon called the Eternal September. Before AOL allowed broad access to Usenet—a precursor to today’s online discussion forums—most new users appeared each September, when new college students frequently joined the platform. Thanks to AOL, they began showing up daily starting around September 1993.

The fact that AOL’s dial-up is still active in the first place highlights a truism of technology: Sometimes, the important stuff sticks around well after it’s obsolete.

Why AOL is ditching dial-up now

It’s no surprise that dial-up has lingered for close to a quarter-century. Despite not having needed a dial-up modem myself since the summer of 2001, I was once so passionate about dial-up that I begged to get a modem for my 13th birthday. Modems are hard to shake, and not just because we fondly remember waiting so long for them to do their thing.

Originally, the telephone modem was a hack. It was pushed into public consciousness partly by Deaf users who worked around the phone industry’s monopolistic regulations to develop the teletypewriter, a system to communicate over phone lines via text. Along the way, the community invented technologies like the acoustic coupler.

To make that hack function, modems had to do multiple conversions in real time—from data to audio and back again, in two directions. As I put it in a piece that compared the modem to the telegraph:

The modem, at least in its telephone-based forms, represents a dance between sound and data. By translating information into an aural signal, then into current, then back into an aural signal, then back into data once again, the modulation and demodulation going on is very similar to the process used with the original telegraph, albeit done manually.

Modems like this U.S. Robotics Sportster 33.6 faxmodem work by converting data to audio and back again. Jphill19/Wikimedia Commons

With telegraphs, the information was input by a person, translated into electric pulses, and received by another person. Modems work the same way, just without human translators.

The result of all this back and forth was that modems had to give up a hell of a lot of speed to make this all work. The need to connect over a medium built for audio meant that data was at risk of getting lost over the line. (This is why error correction was an essential part of the modem’s evolution; often data needed to be shared more than once to ensure it got through. Without error correction, dial-up modems would be even slower.)
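
For the curious, here is a toy version of that data-to-audio round trip in Python: frequency-shift keying in the style of an early 300-bit-per-second modem, where a binary 1 (a “mark”) and a 0 (a “space”) are sent as two different tones. Real modems layered compression, equalization, and error correction on top of this; the sketch below does not, and it is not how AOL’s software actually worked.

```python
# Toy frequency-shift keying: map bits to two audio tones and recover them.
import math

SAMPLE_RATE = 8000      # audio samples per second
BAUD = 300              # bits per second, early-modem territory
MARK_HZ, SPACE_HZ = 1270.0, 1070.0   # Bell 103-style originate-side tones

def modulate(bits):
    """Turn a bit string like '1011' into a list of audio samples."""
    samples, phase = [], 0.0
    per_bit = SAMPLE_RATE // BAUD
    for bit in bits:
        freq = MARK_HZ if bit == "1" else SPACE_HZ
        for _ in range(per_bit):
            phase += 2 * math.pi * freq / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples

def demodulate(samples):
    """Recover bits by asking which tone correlates better with each chunk."""
    per_bit = SAMPLE_RATE // BAUD
    bits = []
    for i in range(0, len(samples) - per_bit + 1, per_bit):
        chunk = samples[i:i + per_bit]
        def energy(freq):
            c = sum(s * math.cos(2 * math.pi * freq * n / SAMPLE_RATE)
                    for n, s in enumerate(chunk))
            q = sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                    for n, s in enumerate(chunk))
            return c * c + q * q
        bits.append("1" if energy(MARK_HZ) > energy(SPACE_HZ) else "0")
    return "".join(bits)

if __name__ == "__main__":
    print(demodulate(modulate("10110010")))  # expected: 10110010
```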

Remember that sound? It marked many users’ first experience getting online. AdventuresinHD/YouTube

Telephone lines were a hugely inefficient system for data because they were built for voice and heavily compressed audio. Voices are still clear and recognizable after being compressed, but audio compression can wreak havoc on data connections.

Plus, there was the problem of line access. With a call, you could not easily share a connection. That meant you couldn’t make phone calls while using dial-up, leading to some homes getting a second line. And at the Internet Service Provider level, having multiple lines got very complex, very fast.

The phone industry knew this, but its initial solution, ISDN, did not take off among mainstream consumers. (A later one, DSL, had better uptake, and is likely one of the few Internet options rural users currently have.)

In some areas of the United States, dial-up remains the best option—the result of decades of poor investment in Internet infrastructure.

So the industry moved to other solutions to get consumers Internet—coaxial cable, which was already widespread because of cable TV, and fiber, which wasn’t. The problem is, coax never reached quite as far as telephone wires did, in part because cable television wasn’t technically a utility in the way electricity or water were.

In recent years, many attempts have been made to classify Internet access as a public utility, though the most recent one was struck down by an appeals court earlier this year. The public utility regulation is important. The telephone had struggled to reach rural communities in the 1930s, and only did so after a series of regulations, including one that led to the creation of the Federal Communications Commission, were put into effect. So too did electricity, which needed a dedicated law to expand its reach.

But the reach of broadband is frustratingly incomplete, as highlighted by the fact that many areas of the country are not properly covered by cellular signals. And getting new wires hung can be an immensely difficult task, in part because companies that sell fiber, like Verizon and Google, often stop investing due to the high costs. (Though, to Google’s credit, it started expanding again in 2022 after a six-year rollback.)

So, in some areas of the United States, dial-up remains the best option—the result of decades of poor investment in Internet infrastructure. This, for years, has propped up companies like AOL, which has evolved numerous times since it foolishly merged with Time Warner a quarter-century ago.

The first PC-based client called America Online appeared on the graphical operating system GeoWorks. This screenshot shows the 1994 DOS AOL client that was distributed with GeoWorks 2.01. Ernie Smith

But AOL is not the company it was. After multiple acquisitions and spin-outs, it is now a mere subsidiary of Yahoo, and it long ago transitioned into a Web-first property. Oh, it still has subscriptions, but they’re effectively fancy analogues for unnecessary security software. And its email client, though long since eclipsed by the likes of Gmail, still has its fans.

When I posted the AOL news on social media, about 90 percent of the responses were jokes or genuine notes of respect. But there was a small contingent, maybe 5 percent, that talked about how much this was going to screw over far-flung communities. I don’t think it’s AOL’s responsibility to keep this model going forever.

Instead, it looks like the job is going to fall to two companies: Microsoft, whose MSN Dial-Up Internet Access costs US $179.95 per year, and the company United Online, which still operates the longtime dial-up players Juno and NetZero. Satellite Internet is also an option, with older services like HughesNet and newer ones like Starlink picking up the slack.

It’s not AOL’s fault. But AOL is the face of this failing.

AOL dropping dial-up is part of a long fade-out

As technologies go, the dial-up modem has not lasted quite as long as the telegram, which has been active in one form or another for 181 years. But the modem, which was first used in 1958 as part of an air-defense system, has stuck around for a good 67 years. That makes it one of the oldest pieces of computer-related technology still in modern use.

To give you an idea of how old that is: 1958 is also the year that the integrated circuit, an essential building block of any modern computer, was invented. The disk platter, the basis of the modern hard drive, arrived a year earlier. The floppy disk came a decade later.

(It should be noted that the modem itself is not dying—your smartphone has one—but the kind that screeches over a landline phone connection has seen better days.)

The news that AOL is dropping its service might be seen as the end of the line for dial-up, but the story of the telegram hints that this may not be the case. In 2006, much hay was made about Western Union sending its final telegram. But Western Union was never the only company sending telegrams, and another company picked up the business. You can still send a telegram via International Telegram in 2025. (It’s not cheap: A single message, sent the same day, is $34, plus 75 cents per word.)

In many ways, AOL dropping the service is a sign that this already niche use case is going to get more niche. But niche use cases have a way of staying relevant, given the right audience. It’s sort of like why doctors continue to use pagers. As a Planet Money episode from two years ago noted, the additional friction of using pagers worked well with the way doctors functioned, because it ensured that they knew the messages they were getting didn’t compete with anything else.

Dial-up is likely never going to totally die, unless the landline phone system itself gets knocked offline, which AT&T has admittedly been itching to do. It remains one of the cheapest options for getting online, outside of drinking a single coffee at a Panera and logging onto the Wi-Fi.

But AOL? While dial-up may have been the company’s primary business earlier in its life, it hasn’t really been its focus in quite a long time. AOL is now a highly diversified company, whose primary focus over the past 15 years has been advertising. It still sells subscriptions, but those subscriptions are about to lose their most important legacy feature.

AOL is simply too weak to support the next generation of Internet service itself. Its inroad to broadband was supposed to be Time Warner Cable; that didn’t work out, so it pivoted to something else but kept the legacy business around while it was still profitable. It’s likely that emerging technologies, like Microsoft’s Airband Initiative, which relies on distributing broadband over unused “white spaces” on the television dial, stand a better shot. 5G connectivity will also likely improve over time (T-Mobile already promotes its 5G home Internet as a rural option), and perhaps more satellite-based options will emerge.

Technologies don’t die. They just slowly become so irrelevant that they might as well be dead.

The monoculture of the AOL login experience

When I posted the announcement, which a colleague had spotted buried in an obscure link on the AOL website, it immediately went viral on Bluesky and Mastodon.

That meant I got to see a lot of people react to this news in real time. Most had the same comment: I didn’t even know it was still around. Others made modem jokes, or talked about AOL’s famously terrible customer service. What was interesting was that most people said roughly the same thing about the service.

That is not the case with most online experiences, which usually reflect myriad points of view. I think it speaks to the fact that while the Internet was the ultimate monoculture killer, the experience of getting online for the first time was largely monocultural. Usually, it started with a modem connecting to a phone number and dropping us into a single familiar place.

We have lost a lot of Internet Service Providers over the years. Few spark the passion and memories of America Online, a network that somehow beat out more innovative and more established players to become the onramp to the Information Superhighway, for all the good and bad that represents.

AOL must be embarrassed by that history. It barely even announced the shutdown.

Reference: https://ift.tt/oHLCIpc

Tuesday, August 26, 2025

After teen suicide, OpenAI claims it is “helping people when they need it most”


OpenAI published a blog post on Tuesday titled "Helping people when they need it most" that addresses how its ChatGPT AI assistant handles mental health crises, following what the company calls "recent heartbreaking cases of people using ChatGPT in the midst of acute crises."

The post arrives after The New York Times reported on a lawsuit filed by Matt and Maria Raine, whose 16-year-old son Adam died by suicide in April after extensive interactions with ChatGPT; Ars covered the case in detail in a previous post. According to the lawsuit, ChatGPT provided detailed instructions, romanticized suicide methods, and discouraged the teen from seeking help from his family, while OpenAI’s system tracked 377 messages flagged for self-harm content without intervening.

ChatGPT is a system of multiple models interacting as an application. In addition to a main AI model like GPT-4o or GPT-5 providing the bulk of the outputs, the application includes components that are typically invisible to the user, including a moderation layer (another AI model) or classifier that reads the text of the ongoing chat sessions. That layer detects potentially harmful outputs and can cut off the conversation if it veers into unhelpful territory.
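To make that division of labor concrete, here is a minimal, hypothetical sketch of how an application might layer a moderation classifier on top of a chat model. It uses OpenAI’s public Moderations and Chat Completions APIs purely for illustration; the model name, the flag-handling logic, and the fallback message are all assumptions, and this is not a description of how ChatGPT is actually built.

```python
# Illustrative only: a chat helper that runs every message through a separate
# moderation model before and after the main model responds. This mirrors the
# general "classifier alongside the chat model" pattern described above,
# not OpenAI's actual ChatGPT internals.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical fallback text returned when a message or reply is flagged.
SAFE_FALLBACK = (
    "I can't help with that, but you are not alone. Please consider reaching "
    "out to a crisis line or to someone you trust."
)


def is_flagged(text: str) -> bool:
    """Ask the moderation endpoint whether the text trips any harm category."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged


def guarded_reply(user_msg: str) -> str:
    # Screen the user's message before it ever reaches the main model.
    if is_flagged(user_msg):
        return SAFE_FALLBACK

    completion = client.chat.completions.create(
        model="gpt-4o",  # stand-in for whichever main model the app uses
        messages=[{"role": "user", "content": user_msg}],
    )
    reply = completion.choices[0].message.content or ""

    # Screen the model's own output as well, and cut it off if it is flagged.
    if is_flagged(reply):
        return SAFE_FALLBACK
    return reply


if __name__ == "__main__":
    print(guarded_reply("Tell me about atmospheric science."))
```

In a real deployment the moderation check would typically run on the full conversation history rather than a single message, but the basic shape, a classifier sitting beside the main model and able to override it, is the same.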

Read full article

Comments

Reference : https://ift.tt/2Z3f6Ya

Fancy Flying Trick Could Bring Sensors to Earth’s “Ignorosphere”




Earth’s atmosphere is large, extending out to around 10,000 kilometers from the surface of the planet. It’s so large, in fact, that scientists break it into five separate sections. One particular section hasn’t gotten much attention, because of the difficulty of keeping any craft there.

Planes and balloons can visit the troposphere and stratosphere, the two sections closest to the ground, while satellites can sit in orbit in the thermosphere and exosphere, providing platforms for consistent observations. But the mesosphere, the section in the middle, is too low for a stable orbit yet too thin for traditional airplanes or balloons to work.

As a result, we don’t have a lot of data on it, even though it affects climate and weather forecasting, so scientists have simply had to make a lot of assumptions about what it’s like up there. Now, a new study from researchers at Harvard and the University of Chicago might have found a way to put stable sensing platforms into the mesosphere, using a novel flight mechanism known as photophoresis.

This post originally appeared on Universe Today.

The mesosphere itself is located between 50 and 85 kilometers up, and while it isn’t technically considered “space,” it is very different from the lower levels of the atmosphere we are more accustomed to. It’s affected by weather from both below and above, reacting to solar storms as readily as to hurricanes. And because it serves as that kind of interface, it plays a critical role in how the layers above and below it behave as well.

But we haven’t been able to place any stable monitoring equipment there, because neither of the two types of continuous monitoring systems we have, balloons and satellites, can operate in it. This has earned it the moniker “ignorosphere,” because scientists have been forced to essentially ignore this layer of the atmosphere for lack of data.

Photophoresis Powers A New Atmospheric Sensor

Enter the new paper, published on 13 August in Nature, about long-term sensors in the mesosphere. Photophoresis is a process in which gas molecules rebound from the “warm” side of an object with more momentum than from its “cool” side, producing a small net force. In this case, the warm side is the side of the object facing the sun, while the cool side is the underside facing Earth. The effect is only noticeable in low-pressure environments, which is exactly what the mesosphere offers.

Admittedly, the force from photophoresis is minuscule, so the researchers had to develop extremely small, lightweight parts to have any chance of taking advantage of it. They recruited experts in nanofabrication to make centimeter-scale structures as a proof of concept and tested them in a vacuum chamber designed to match the pressure of the mesosphere.

The prototypes behaved as expected, levitating a structure with just 55 percent of the intensity of natural sunlight at a pressure comparable to that of the mesosphere. It was the first time anyone had demonstrated a functional prototype of photophoresis-powered flight, a feat made possible mainly by how light the structure itself was.
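As a rough way to see why that lightness matters, consider a simple force balance (an illustrative back-of-the-envelope condition, not a calculation from the paper): a structure of mass m, sunlit area A, and areal mass density σ = m/A levitates only if its photophoretic lift at least matches its weight.

```latex
% Illustrative levitation condition; F_ph, A, and sigma are generic symbols,
% not quantities reported in the Nature paper.
\[
  F_{\mathrm{ph}} \;\ge\; m g \;=\; \sigma A g
  \quad\Longrightarrow\quad
  \frac{F_{\mathrm{ph}}}{A} \;\ge\; \sigma g
\]
```

Because the lift available per unit area in rarefied mesospheric air is tiny, the areal density σ must be tiny as well, which is why only an extremely light, payload-free structure could fly.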

Devices powered by this technique could be sent to monitor the mesosphere, but they could also be useful farther afield. Mars is an obvious candidate: its thin, low-pressure atmosphere is a hallmark of the planet, yet it remains largely unexplored at different altitudes. Other planets and moons are potential targets as well—anything with an atmosphere sparse enough to support a levitating craft could be served by one of these fliers.

Unfortunately, there’s still some advanced engineering left to do. The structure built with the nanofabrication technique didn’t include any functional hardware, such as sensors or wireless communication equipment. A structure that simply floats without transmitting data isn’t scientifically useful, so for these devices to make the kind of scientific impact the researchers hope for, the fabrication techniques will need to advance enough to carry a functional payload.

The researchers have no doubt that it’s possible, though, and have already created a start-up company, Rarefied Technologies, which was accepted into the Breakthrough Energy Fellows program last year. With that support and ongoing research in nanofabrication, it will hopefully be only a matter of time before we see centimeter-scale sensors scattered throughout the “ignorosphere” and beyond.

Reference: https://ift.tt/al9wxBM

Video Friday: Biorobotics Turns Lobster Tails Into Gripper

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a w...