Wednesday, November 20, 2024

New "E-nose" Samples Odors 60 Times Per Second




Odors are all around us, and often disperse fast—in hazardous situations like wildfires, for example, wind conditions quickly carry any smoke (and the smell of smoke) away from its origin. Sending people to check out disaster zones is always a risk, so what if a robot equipped with an electronic nose, or e-nose, could track down a hazard by “smelling” for it?

This concept motivated a recent study in Science Advances, in which researchers built an e-nose that can not only detect odors at the same speed as a mouse’s olfactory system, but also distinguish between odors by the specific patterns they produce over time when interacting with the e-nose’s sensor.

“When odorants are carried away by turbulent airflow, they get chopped into smaller packets,” says Michael Schmuker, a professor at the University of Hertfordshire in the United Kingdom. Schmuker says that these odor packets can rapidly change, which means that an effective odor-sensing system needs to be fast to detect them. And the way in which packets change—and how frequently that happens—can give clues about how far away the odor’s source is.

How the E-nose Works

The e-nose uses metal oxide gas sensors whose sensing surface is cycled between 150 °C and 400 °C up to 20 times per second. Redox reactions take place on the sensing surface when it comes into direct contact with an odorant.

The electronic nose's circuitry and a microscopy image of the sensor with its housing removed. The new electronic nose is smaller than a credit card, and includes several sensors such as the one on the right. Nik Dennler et al.

The e-nose is smaller than a credit card, with a power consumption of only 1.2 to 1.5 watts (including the microprocessor and USB readout). The researchers built the system with off-the-shelf components and custom-designed digital interfaces that allow odor dynamics to be probed more precisely as odorants encounter the heated electrodes making up the sensing surface. “Odorants flow around us in the air and some of them react with that hot surface,” says Schmuker. “How they react with it depends on their own chemical composition—they might oxidize or reduce the surface—but a chemical reaction takes place.”

As a result, the resistance of the metal oxide electrodes changes, which can be measured. The amount and dynamics of this change are different for different combinations of odorants and sensor materials. The e-nose uses two pairs of four distinct sensors to build a pattern of resistance response curves. Resistance response curves illustrate how a sensor’s resistance changes over time in response to a stimulus, such as an odor. These curves capture the sensor’s conversion of a physical interaction—like an odor molecule binding to its surface—into an electrical signal. Because each odor generates a distinct response pattern, analyzing how the electrical signal evolves over time enables the identification of specific odors.


“We discovered that rapidly switching the temperature back and forth between 150°C and 400°C about 20 times per second produced distinctive data patterns that made it easier to identify specific odors,” says Nik Dennler, a dual Ph.D. student at the University of Hertfordshire and Western Sydney University. By building up a picture of how the odorant reacts at these different temperatures, the response curves can be plugged into a machine learning algorithm to spot the patterns that relate to a specific odor.
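The classification idea can be sketched with synthetic data. In this toy example, each odor is modeled as a characteristic response curve plus sensor noise, and a simple nearest-centroid rule stands in for the machine learning algorithm; the curve shapes, odor names, and classifier here are illustrative assumptions, not the study's actual model or data.

```python
import numpy as np

# Two hypothetical odors, each producing a distinctive resistance-response
# curve over one temperature cycle (shapes are invented for illustration).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
templates = {"odor_A": np.sin(2 * np.pi * t), "odor_B": np.exp(-3 * t)}

def make_samples(name, n=20, noise=0.1):
    # Simulate n noisy sensor readings of one odor's response curve.
    return templates[name] + rng.normal(0, noise, (n, t.size))

# "Training": store the mean response curve per odor.
train = {k: make_samples(k).mean(axis=0) for k in templates}

def classify(curve):
    # Nearest-centroid: pick the odor whose mean curve is closest.
    return min(train, key=lambda k: np.linalg.norm(curve - train[k]))

test_curve = templates["odor_A"] + rng.normal(0, 0.1, t.size)
print(classify(test_curve))  # odor_A
```

A real pipeline would use many sensors, richer temporal features, and a trained model, but the principle is the same: the time course of the resistance response is the fingerprint.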

While the e-nose does not “sniff” like a regular nose, the periodic heating cycle for detecting odors is reminiscent of the periodic sniffing that mammals perform.

Using the E-nose in Disaster Management

A discovery in 2021 by researchers at the Francis Crick Institute in London and University College London showed that mice can discriminate odor fluctuations up to 40 times per second—contrary to a long-held belief that mammals require one or several sniffs to obtain any meaningful odor information.

In the new work—conducted in part by the same researchers behind the 2021 discovery—the researchers found that the e-nose can detect odors as quickly as a mouse can, resolving and decoding odor fluctuations up to 60 times per second. The e-nose can currently differentiate among five odors, presented individually or in a mixture of two, and could be trained to detect additional odors.

“We found it could accurately identify odors in just 50 milliseconds and decode patterns between odors switching up to 40 times per second,” says Dennler. For comparison, recent research in humans suggests the threshold for distinguishing between two odors binding to the same olfactory receptors is about 60 ms.

The small scale and moderate power requirements could enable the e-nose to be deployed in robots used to pinpoint an odor’s source. “Other fast technologies exist, but are usually very bulky and you would need a large battery to power them,” says Schmuker. “We can put our device on a small robot and evaluate its use in applications that you use a sniffer dog for today.”

“As soon as you’re driving, walking, or flying around, you need to be really fast at sensing,” says Dennler. “With our e-nose, we can capture odor information at high speeds. Primary applications could involve odor-guided navigation tasks, or, more generally, collecting odor information while on the move.”

The researchers are looking at using these small e-nose robots in disaster management applications, including locating wildfires and gas leaks, and finding people buried in rubble after an earthquake.

Reference: https://ift.tt/iXzf2cp

Packaging and Robots




This is a sponsored article brought to you by Amazon.

The journey of a package from the moment a customer clicks “buy” to the moment it arrives at their doorstep is one of the most complex and finely tuned processes in the world of e-commerce. At Amazon, this journey is constantly being optimized, not only for speed and efficiency, but also for sustainability. This optimization is driven by the integration of cutting-edge technologies like artificial intelligence (AI), machine learning (ML), and robotics, which allow Amazon to streamline its operations while working towards minimizing unnecessary packaging.

The use of AI and ML in logistics and packaging is playing an increasingly vital role in transforming the way packages are handled across Amazon’s vast global network. In two interviews — one with Clay Flannigan, who leads manipulation robotics programs at Amazon, and another with Callahan Jacobs, an owner of the Sustainable Packaging team’s technology products — we gain insights into how Amazon is using AI, ML, and automation to push the boundaries of what’s possible in the world of logistics, while also making significant strides in sustainability-focused packaging.

The Power of AI and Machine Learning in Robotics

One of the cornerstones of Amazon’s transformation is the integration of AI and ML into its robotics systems. Flannigan’s role within the Fulfillment Technologies Robotics (FTR) team, Amazon Robotics, centers around manipulation robotics — machines that handle the individual items customers order on amazon.com. These robots, in collaboration with human employees, are responsible for picking, sorting, and packing millions of products every day. It’s an enormously complex task, given the vast diversity of items in Amazon’s inventory.

“Amazon is uniquely positioned to lead in AI and ML because of our vast data,” Flannigan explained. “We use this data to train models that enable our robots to perform highly complex tasks, like picking and packing an incredibly diverse range of products. These systems help Amazon solve logistics challenges that simply wouldn’t be possible at this scale without the deep integration of AI.”

At the core of Amazon’s robotic systems is machine learning, which allows the machines to “learn” from their environment and improve their performance over time. For example, AI-powered computer vision systems enable robots to “see” the products they are handling, allowing them to distinguish between fragile items and sturdier ones, or between products of different sizes and shapes. These systems are trained using expansive amounts of data, which Amazon can leverage due to its immense scale.

One particularly important application of machine learning is in the manipulation of unstructured environments. Traditional robotics has been used in industries where the environment is highly structured and predictable. But Amazon’s warehouses are anything but predictable. “In other industries, you’re often building the same product over and over. At Amazon, we have to handle an almost infinite variety of products — everything from books to coffee makers to fragile collectibles,” Flannigan said.

“There are so many opportunities to push the boundaries of what AI and robotics can do, and Amazon is at the forefront of that change.” —Clay Flannigan, Amazon

In these unstructured environments, robots need to be adaptable. They rely on AI and ML models to understand their surroundings and make decisions in real-time. For example, if a robot is tasked with picking a coffee mug from a bin full of diverse items, it needs to use computer vision to identify the mug, understand how to grip it without breaking it, and move it to the correct packaging station. These tasks may seem simple, but they require advanced ML algorithms and extensive data to perform them reliably at Amazon’s scale.

Sustainability and Packaging: A Technology-Driven Approach

While robotics and automation are central to improving efficiency in Amazon’s fulfillment centers, the company’s commitment to sustainability is equally important. Callahan Jacobs, product manager on FTR’s Mechatronics & Sustainable Packaging (MSP) team, is focused on preventing waste and aims to help reduce the negative impacts of packaging materials. The company has made significant strides in this area, leveraging technology to improve the entire packaging experience.

A photo of a packaging machine. Amazon

“When I started, our packaging processes were predominantly manual,” Jacobs explained. “But we’ve moved toward a much more automated system, and now we use machines that custom-fit packaging to items. This has drastically reduced the amount of excess material we use, especially in terms of minimizing the cube size for each package, and frees up our teams to focus on harder problems like how to make packaging out of more conscientious materials without sacrificing quality.”

Since 2015, Amazon has decreased its average per-shipment packaging weight by 43 percent, which represents more than 3 million metric tons of packaging materials avoided. This “size-to-fit” packaging technology is one of Amazon’s most significant innovations in packaging. By using automated machines that cut and fold boxes to fit the dimensions of the items being shipped, Amazon is able to reduce the amount of air and unused space inside packages. This not only reduces the amount of material used but also optimizes the use of space in trucks, planes, and delivery vehicles.

“By fitting packages as closely as possible to the items they contain, we’re helping to reduce both waste and shipping inefficiencies,” Jacobs explained.

Advanced Packaging Technology: The Role of Machine Learning

AI and ML play a critical role in Amazon’s efforts to optimize packaging. Amazon’s packaging technology doesn’t just aim to prevent waste but also ensures that items are properly protected during their journey through the fulfillment network. To achieve this balance, the company relies on advanced machine learning models that evaluate each item and determine the optimal packaging solution based on various factors, including the item’s fragility, size, and the route it needs to travel.

“We’ve moved beyond simply asking whether an item can go in a bag or a box,” said Jacobs. “Now, our AI and ML models look at each item and say, ‘What are the attributes of this product? Is it fragile? Is it a liquid? Does it have its own packaging, or does it need extra protection?’ By gathering this information, we can make smarter decisions about packaging, resulting in less waste and better protection for the items.”

“By fitting packages as closely as possible to the items they contain, we’re helping to reduce both waste and shipping inefficiencies.” —Callahan Jacobs, Amazon

This process begins as soon as a product enters Amazon’s inventory. Machine learning models analyze each product’s data to determine key attributes. These models may use computer vision to assess the item’s packaging or natural language processing to analyze product descriptions and customer feedback. Once the product’s attributes have been determined, the system decides which type of packaging is most suitable, helping to prevent waste while ensuring the item’s safe arrival.

“Machine learning allows us to make these decisions dynamically,” Jacobs added. “For example, an item like a t-shirt doesn’t need to be packed in a box—it can go in a paper bag. But a fragile glass item might need additional protection. By using AI and ML, we can make these decisions at scale, ensuring that we’re always prioritizing the option that aims to benefit the customer and the planet.”
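As an illustration only, the attribute-to-packaging mapping Jacobs describes might be sketched as a simple decision function. The attribute names and rules below are hypothetical; Amazon's production system uses learned models rather than fixed rules like these.

```python
# Hypothetical sketch: map item attributes to a packaging recommendation.
# In practice this decision is made by ML models, not hand-written rules.
def choose_packaging(item):
    if item.get("is_liquid") or item.get("is_fragile"):
        return "padded box"          # needs extra protection
    if item.get("has_own_packaging"):
        return "no added packaging"  # ships in its own container
    if item.get("is_soft_good"):     # e.g., a t-shirt
        return "paper bag"
    return "right-sized box"         # default: size-to-fit box

print(choose_packaging({"is_soft_good": True}))  # paper bag
print(choose_packaging({"is_fragile": True}))    # padded box
```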

Dynamic Decision-Making With Real-Time Data

Amazon’s use of real-time data is a game-changer in its packaging operations. By continuously collecting and analyzing data from its fulfillment centers, Amazon can rapidly adjust its packaging strategies, optimizing for efficiency at scale. This dynamic approach allows Amazon to respond to changing conditions, such as new packaging materials, changes in shipping routes, or feedback from customers.

“A huge part of what we do is continuously improving the process based on what we learn,” Jacobs explained. “For example, if we find that a certain type of packaging isn’t satisfactory, we can quickly adjust our criteria and implement changes across our delivery network. This real-time feedback loop is critical in making our system more resilient and keeping it aligned with our team’s sustainability goals.”

This continuous learning process is key to Amazon’s success. The company’s AI and ML models are constantly being updated with new data, allowing them to become more accurate and effective over time. For example, if a new type of packaging material is introduced, the models can quickly assess its effectiveness and make adjustments as needed.

Jacobs also emphasized the role of feedback in this process. “We’re always monitoring the performance of our packaging,” she said. “If we receive feedback from customers that an item arrived damaged or that there was too much packaging, we can use that information to improve model outputs, which ultimately helps us continually reduce waste.”

Robotics in Action: The Role of Gripping Technology and Automation

One of the key innovations in Amazon’s robotic systems is the development of advanced gripping technology. As Flannigan explained, the “secret sauce” of Amazon’s robotic systems is not just in the machines themselves but in the gripping tools they use. These tools are designed to handle the immense variety of products Amazon processes every day, from small, delicate items to large, bulky packages.

A photo of a robot. Amazon

“Our robots use a combination of sensors, AI, and custom-built grippers to handle different types of products,” Flannigan said. “For example, we’ve developed specialized grippers that can handle fragile items like glassware without damaging them. These grippers are powered by AI and machine learning, which allow them to plan their movements based on the item they’re picking up.”

The robotic arms in Amazon’s fulfillment centers are equipped with a range of sensors that allow them to “see” and “feel” the items they’re handling. These sensors provide real-time data to the machine learning models, which then make decisions about how to handle the item. For example, if a robot is picking up a fragile item, it will use a gentler strategy, whereas it might optimize for speed when handling a sturdier item.

Flannigan also noted that the use of robotics has significantly improved the safety and efficiency of Amazon’s operations. By automating many of the repetitive and physically demanding tasks in fulfillment centers, Amazon has been able to reduce the risk of injuries among its employees while also increasing the speed and accuracy of its operations. It also provides the opportunity to focus on upskilling. “There’s always something new to learn,” Flannigan said. “There’s no shortage of training and advancement options.”

Continuous Learning and Innovation: Amazon’s Culture of Growth

Both Flannigan and Jacobs emphasized that Amazon’s success in implementing these technologies is not just due to the tools themselves but also the culture of innovation that drives the company. Amazon’s engineers and technologists are encouraged to constantly push the boundaries of what’s possible, experimenting with new solutions and improving existing systems.

“Amazon is a place where engineers thrive because we’re always encouraged to innovate,” Flannigan said. “The problems we’re solving here are incredibly complex, and Amazon gives us the resources and freedom to tackle them in creative ways. That’s what makes Amazon such an exciting place to work.”

Jacobs echoed this sentiment, adding that the company’s commitment to sustainability is one of the things that makes it an attractive place for engineers. “Every day, I learn something new, and I get to work on solutions that have a real impact at a global scale. That’s what keeps me excited about my work. That’s hard to find anywhere else.”

The Future of AI, Robotics, and Innovation at Amazon

Looking ahead, Amazon’s vision for the future is clear: to continue innovating in the fields of AI, ML, and robotics for maximum customer satisfaction. The company is investing heavily in new technologies that are helping to progress its sustainability initiatives while improving the efficiency of its operations.

“We’re just getting started,” Flannigan said. “There are so many opportunities to push the boundaries of what AI and robotics can do, and Amazon is at the forefront of that change. The work we do here will have implications not just for e-commerce but for the broader world of automation and AI.”

Jacobs is equally optimistic about the future of the Sustainable Packaging team. “We’re constantly working on new materials and new ways to reduce waste,” she said. “The next few years are going to be incredibly exciting as we continue to refine our packaging innovations, making them more scalable without sacrificing quality.”

As Amazon continues to evolve, the integration of AI, ML, and robotics will be key to achieving its ambitious goals. By combining cutting-edge technology with a deep commitment to sustainability, Amazon is setting a new standard for how e-commerce companies can operate in the 21st century. For engineers, technologists, and environmental advocates, Amazon offers an unparalleled opportunity to work on some of the most challenging and impactful problems of our time.

Learn more about becoming part of Amazon’s Team.

Reference: https://ift.tt/6KDWIge

Tuesday, November 19, 2024

Niantic uses Pokémon Go player data to build AI navigation system


Last week, Niantic announced plans to create an AI model for navigating the physical world using scans collected from players of its mobile games, such as Pokémon Go, and from users of its Scaniverse app, reports 404 Media.

All AI models require training data. So far, companies have collected data from websites, YouTube videos, books, audio sources, and more, but this is perhaps the first we've heard of AI training data collected through a mobile gaming app.

"Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse," Niantic wrote in a company blog post.


Reference: https://ift.tt/9Ccp1wy

Analog AI Startup Aims to Lower Gen AI's Power Needs




Machine learning chips that use analog circuits instead of digital ones have long promised huge energy savings. But in practice they’ve mostly delivered modest savings, and only for modest-sized neural networks. Silicon Valley startup Sageance says it has the technology to bring the promised power savings to tasks suited for massive generative AI models. The startup claims that its systems will be able to run the large language model Llama 2-70B at one-tenth the power of an Nvidia H100 GPU-based system, at one-twentieth the cost and in one-twentieth the space.

“My vision was to create a technology that was very differentiated from what was being done for AI,” says Sageance CEO and founder Vishal Sarin. Even back when the company was founded in 2018, he “realized power consumption would be a key impediment to the mass adoption of AI…. The problem has become many, many orders of magnitude worse as generative AI has caused the models to balloon in size.”

The core power-savings prowess for analog AI comes from two fundamental advantages: It doesn’t have to move data around and it uses some basic physics to do machine learning’s most important math.

That math problem is multiplying vectors and then adding up the result, called multiply and accumulate. Early on, engineers realized that two foundational rules of electrical engineering do the same thing, more or less instantly. Ohm’s Law—voltage multiplied by conductance equals current—does the multiplication if you use the neural network’s “weight” parameters as the conductances. Kirchhoff’s Current Law—the sum of the currents entering and exiting a point is zero—means you can easily add up all those multiplications just by connecting them to the same wire. And finally, in analog AI, the neural network parameters don’t need to be moved from memory to the computing circuits—usually a bigger energy cost than computing itself—because they are already embedded within the computing circuits.
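The two laws can be sketched numerically. This toy example (not Sageance's design) treats the weights as conductances and the inputs as voltages, and checks that the "analog" result matches an ordinary dot product:

```python
import numpy as np

def analog_mac(voltages, conductances):
    # Ohm's law: each cell's current = voltage * conductance (the multiply)...
    currents = voltages * conductances
    # ...Kirchhoff's current law: currents on a shared wire add (the accumulate).
    return currents.sum()

# One neuron's weights stored as conductances; real hardware would handle
# negative weights with paired cells, ignored here for simplicity.
weights = np.array([0.2, -0.5, 0.8])  # conductances
inputs = np.array([1.0, 0.3, -0.7])   # input voltages
print(analog_mac(inputs, weights))    # same value as np.dot(inputs, weights)
```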

Sageance uses flash memory cells as the conductance values. The kind of flash cell typically used in data storage is a single transistor that can hold 3 or 4 bits, but Sageance has developed algorithms that let cells embedded in their chips hold 8 bits, which is the key level of precision for LLMs and other so-called transformer models. Storing an 8-bit number in a single transistor instead of the 48 transistors it would take in a typical digital memory cell (an 8-bit word of six-transistor SRAM cells) is an important cost, area, and energy savings, says Sarin, who has been working on storing multiple bits in flash for 30 years.

Digital data is converted to analog voltages [left]. These are effectively multiplied by flash memory cells [blue], summed, and converted back to digital data [bottom]. Analog Inference

Adding to the power savings is that the flash cells are operated in a state called “deep subthreshold.” That is, they are working in a state where they are barely on at all, producing very little current. That wouldn’t do in a digital circuit, because it would slow computation to a crawl. But because the analog computation is done all at once, it doesn’t hinder the speed.

Analog AI Issues

If all this sounds vaguely familiar, it should. Back in 2018 a trio of startups went after a version of flash-based analog AI. Syntiant eventually abandoned the analog approach for a digital scheme that’s put six chips in mass production so far. Mythic struggled but stuck with it, as has Anaflash. Others, particularly IBM Research, have developed chips that rely on nonvolatile memories other than flash, such as phase-change memory or resistive RAM.

Generally, analog AI has struggled to meet its potential, particularly when scaled up to a size that might be useful in datacenters. Among its main difficulties is the natural variation in the conductance cells: the same number stored in two different cells can result in two different conductances. Worse still, these conductances can drift over time and shift with temperature. This noise drowns out the signal representing the result, and it can be compounded stage after stage through the many layers of a deep neural network.

Sageance’s solution, Sarin explains, is a set of reference cells on the chip and a proprietary algorithm that uses them to calibrate the other cells and track temperature-related changes.

Another source of frustration for those developing analog AI has been the need to digitize the result of the multiply and accumulate process in order to deliver it to the next layer of the neural network where it must then be turned back into an analog voltage signal. Each of those steps requires analog-to-digital and digital-to-analog converters, which take up area on the chip and soak up power.

According to Sarin, Sageance has developed low-power versions of both circuits. The power demands of the digital-to-analog converter are helped by the fact that the circuit needs to deliver a very narrow range of voltages in order to operate the flash memory in deep subthreshold mode.

Systems and What’s Next

Sageance’s first product, to launch in 2025, will be geared toward vision systems, which are a considerably lighter lift than server-based LLMs. “That is a leapfrog product for us, to be followed very quickly [by] generative AI,” says Sarin.

Future systems from Sageance will be made up of 3D-stacked analog chips linked to a processor and memory through an interposer that follows the Universal Chiplet Interconnect Express (UCIe) standard. Analog Inference

The generative AI product would be scaled up from the vision chip mainly by vertically stacking analog AI chiplets atop a communications die. These stacks would be linked to a CPU die and to high-bandwidth memory DRAM in a single package called Delphi.

In simulations, a system made up of Delphis would run Llama 2-70B at 666,000 tokens per second while consuming 59 kilowatts, versus 624 kilowatts for an Nvidia H100-based system, Sageance claims.
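A quick sanity check on those figures; the per-token energies below are derived from the article's numbers, not stated by Sageance:

```python
# Numbers from the article (simulated Llama 2-70B workload).
tokens_per_s = 666_000
sageance_w = 59_000   # 59 kW claimed for the Delphi-based system
h100_w = 624_000      # 624 kW for the H100-based comparison system

# Derived energy per generated token, in joules.
sageance_j_per_token = sageance_w / tokens_per_s  # ~0.089 J/token
h100_j_per_token = h100_w / tokens_per_s          # ~0.94 J/token

# Ratio is ~10.6x, consistent with the "one-tenth the power" claim.
print(round(h100_w / sageance_w, 1))
```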

Reference: https://ift.tt/Xq0hLWa

Monday, November 18, 2024

AI-generated shows could replace lost DVD revenue, Ben Affleck says


Last week, actor and director Ben Affleck shared his views on AI's role in filmmaking during the 2024 CNBC Delivering Alpha investor summit, arguing that AI models will transform visual effects but won't replace creative filmmaking anytime soon. A video clip of Affleck's opinion began circulating widely on social media not long after.

"Didn’t expect Ben Affleck to have the most articulate and realistic explanation where video models and Hollywood is going," wrote one X user.

In the clip, Affleck spoke of current AI models' abilities as imitators and conceptual translators—mimics that are typically better at translating one style into another instead of originating deeply creative material.


Reference: https://ift.tt/Kw7XEMS

Shaping Africa’s Future With Microelectronics




Timothy Ayelagbe dreams of using technology to advance health care and make other improvements across Africa.

Ayelagbe calls microelectronics his “joy and passion” and says he wants to use the expertise he’s gaining in the field to help others.

“My ultimate goal,” he says, “is to uplift my fellow Africans.”

Timothy Ayelagbe


Volunteer Roles:

IEEE Youth Endeavors for Social Innovation Using Sustainable Technology ambassador, 2025 vice president of the IEEE Robotics and Automation Society student branch chapter

University:

Obafemi Awolowo University in Ile-Ife, Nigeria

Major:

Electronics and electrical engineering

Minor:

Microelectronics

He is pursuing an electronics and electrical engineering degree, specializing in microelectronics, at Obafemi Awolowo University (OAU), in Ile-Ife, Nigeria. He says he believes learning how to employ field-programmable gate arrays (FPGAs) is the path to mastering the hardware description languages that will let him develop affordable, sustainable medical electronics.

He says he hopes to apply his growing technical expertise and leadership abilities to address the continent’s challenges in health care, infrastructure, and natural resources management.

Ayelagbe is passionate about mentoring aspiring African engineers as well. Early this year, he became an IEEE Youth Endeavors for Social Innovation Using Sustainable Technology (YESIST) ambassador. The YESIST 12 program provides students and young professionals with a platform to showcase ideas for addressing humanitarian and social issues affecting their communities.

As an ambassador, Ayelagbe has organized online webinars for his student branch while also mentoring pre-university students through activities that encourage service-oriented engineering practice.

A technologist right out of the gate

Born in Lagos, Nigeria, Ayelagbe was curious about technology from a young age. As a child, he would dismantle and reassemble his toys to learn how they worked.

His mother, a trader, and his father, then a quality control officer in the metal processing industry, nurtured his curiosity. While the conventional path to upward mobility in Nigeria might have led him to becoming a doctor or nurse, his parents supported his pursuit of technology.

As it turns out, he is poised to advance the state of health care in Nigeria and around the globe.

For now, he is focused on his undergraduate studies and on gaining practical experience. He recently completed a six-week student work experience program as part of his university’s engineering curriculum. He and fellow OAU students developed an angular speed measurement system using Hall effect sensors, which calculate rotational speed from the motion of a Hall element relative to a magnetic field. Changes in the voltage and current running through the Hall element can be used to calculate the strength of the magnetic field at different locations or to track changes in its position. One common use of Hall effect sensors is to monitor wheel speed in a vehicle’s antilock braking system.

“I want to apply the things I’m learning to make Africa great.”

Like commercialized versions, the students’ device was designed to withstand harsh weather and unfavorable road conditions. But theirs is certain to have a significantly lower price point than the magnetic devices it emulates, while producing more accurate readings than traditional mechanical versions, Ayelagbe says.

“We did some data processing and manipulation via Arduino programming using an ATmega microcontroller and a liquid crystal display to show the angular speed and frequency of rotation,” he says.
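The underlying calculation can be sketched in a few lines. This is an illustrative reconstruction, not the students' Arduino code, and it assumes one Hall pulse per revolution (a single magnet passing the sensor):

```python
import math

def angular_speed(pulse_times_s, pulses_per_rev=1):
    # Time between consecutive Hall pulses, averaged for a steadier reading.
    periods = [b - a for a, b in zip(pulse_times_s, pulse_times_s[1:])]
    mean_period = sum(periods) / len(periods)
    rev_per_s = 1.0 / (mean_period * pulses_per_rev)
    # Return angular speed in rad/s and the equivalent rpm.
    return 2 * math.pi * rev_per_s, rev_per_s * 60

# Pulses every 0.25 s -> 4 revolutions per second -> 240 rpm.
rad_s, rpm = angular_speed([0.0, 0.25, 0.5, 0.75])
print(rpm)  # 240.0
```

On a microcontroller the pulse times would come from interrupt timestamps rather than a list, but the arithmetic is the same.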

Because the measurement system has potential applications in automotive and other industries, Ayelagbe’s OAU team is seeking partnerships with other researchers to further develop and commercialize it. The team also hopes to publish its findings in an IEEE journal.

“In the future, I hope to work with semiconductor giant industries like TSMC, Nvidia, Intel, and Qualcomm,” he says.

Volunteering provides valuable experience

Despite Ayelagbe’s academic success, he has faced challenges in finding semiconductor internships, citing some companies’ geographical inaccessibility to African students. Instead, he says, he has been gaining valuable experience through volunteering.

He serves as a social media manager for the Paris-based Human Development Research Initiative (HDRI), an organization that works to inspire young people to help achieve the 17 U.N. Sustainable Development Goals known collectively as Agenda 2030. He has been promoting environmental and climate action through LinkedIn posts.

Ayelagbe is an active IEEE volunteer and is involved in his student branch. He is the incoming vice president of the branch’s IEEE Robotics and Automation Society chapter and says he would love to take on more roles in the course of his leadership journey. He organizes webinars, meetings, and other initiatives, including connecting fellow student members with engineering professionals for mentorship.

Through his work with HDRI and IEEE, he has the opportunity to network with students, professionals, and industry experts. The connections, he hopes, can help him achieve his ambitions.

African nations “need engineers in the leadership sector,” he says, “and I want to apply the things I’m learning to make Africa great.”

Reference: https://ift.tt/vZdUa8r

Saturday, November 16, 2024

Predictions From IEEE’s 2024 Technology Megatrends Report




It’s time to start preparing your organization and employees for the effects of artificial general intelligence, sustainability, and digital transformation. According to IEEE’s 2024 Technology Megatrends report, these three megatrends will change how companies, governments, and universities operate and will affect which new skills employees need.

A megatrend, which integrates multiple tendencies that evolve over two decades or so, is expected to have a substantial effect on society, technology, ecology, economics, and more.

More than 50 experts from Asia, Australia, Europe, Latin America, the Middle East, and the United States provided their perspectives for the report. They represent all 47 of IEEE’s fields of interest and come from academia, the public sector, and the private sector. The report includes insights and opportunities about each megatrend and how industries could benefit.

The experts compared their insights to technology predictions from Google Trends; the IEEE Computer Society and the IEEE Xplore Digital Library; and the U.S. Patent and Trademark Office.

“We made predictions about technology and megatrends and correlated them with other general megatrends such as economical, ecological, and sociopolitical. They’re all intertwined,” says IEEE Fellow Dejan Milojicic, a member of the IEEE Future Directions Committee and vice president at Hewlett Packard Labs in Milpitas, Calif. He is also a Hewlett Packard Enterprise Fellow.

The benefits and drawbacks of artificial general intelligence

Artificial general intelligence (AGI) includes ChatGPT, autonomous robots, wearable and implantable technologies, and digital twins.

Education, health care, and manufacturing are some of the sectors that can benefit most from AGI, the report says.

For academia, the technology can help expand remote learning, potentially replacing physical classrooms and leading to more personalized education for students.

In health care, the technology could lead to personalized medicine, tailored patient treatment plans, and faster drug discovery. AGI also could help reduce costs and increase efficiencies, the report says.

Manufacturing can use the technology to improve quality control, reduce downtime, and increase production. The time to market could be significantly shortened, the report says.

Today’s AI systems are specialized and narrow, so reaping AGI’s benefits, the experts say, will require the widespread adoption of curated datasets, advances in AI hardware, and new algorithms. It will also require interdisciplinary collaboration across computer science, engineering, ethics, and philosophy, the report says.

The report points out drawbacks with AGI, including a lack of data privacy, ethical challenges, and misuse of content.

Another concern is job displacement and the need for employees to be retrained. AGI requires more AI programmers and data scientists but fewer support staff and system administrators, the report notes.

Adopting digital technologies

Digital transformation tech includes autonomous technologies, ubiquitous connectivity, and smart environments.

The areas that would benefit most from expanding their use of computers and other electronic devices, the experts say, are construction, education, health care, and manufacturing.

The construction industry could use building information modeling (BIM), which generates digital versions of office buildings, bridges, and other structures to improve safety and efficiency.

Educational institutions already use electronics such as digital whiteboards, laptops, tablets, and smartphones to enhance the learning experience. But the experts point out that schools aren’t yet applying those tools to the continuing education programs needed to train workers on new technology.

“Most education processes are the same now as they were in the last century, at a time when we need to change to lifelong learning,” the experts say.

“We made predictions about technology and megatrends, but we correlated them with other general megatrends such as economical, ecological, and sociopolitical. They’re all intertwined.” —Dejan Milojicic

The report says the digital transformation will need more employees to supervise automation, as well as those with experience in analytics, but fewer operators and workers responsible for maintaining old systems.

The health field has started converting to electronic records, but more could be done, the report says, such as using computer-aided design to develop drugs and prosthetics and using BIM tools to design hospitals.

Manufacturing could benefit by using computer-aided-design data to create digital representations of product prototypes.

There are some concerns with digital transformation, the experts acknowledge. There aren’t enough chips and batteries to build all the devices and systems needed, for example, and not every organization or government can afford the digital tools. Also, people in underdeveloped areas who lack connectivity would not have access to them, leading to a widening of the digital divide. Other people might resist because of privacy, religious, or lifestyle concerns, the experts note.

Addressing the climate crisis

Technology can help engineer social and environmental change. Sustainability applications include clean renewable energy, decarbonization, and energy storage.

Nearly half of organizations around the world have a company-wide sustainability strategy, but only 18 percent have well-defined goals and a timetable for how to implement them, the report says. About half of companies lack the tools or expertise to deploy sustainable solutions. Meanwhile, information and communication technologies’ energy consumption is growing, using about 10 percent of worldwide electricity.

The experts predict that transitioning to more sustainable information and communication technologies will lead to entirely new businesses. Blockchain technology could be used to optimize surplus energy produced by microgrids, for example, ultimately leading to more jobs, less-expensive energy, and energy security. Early leaders in sustainability are already applying digital technologies such as AI, big data, blockchain, computer vision, and the Internet of Things to help operationalize sustainability.

Employees familiar with those technologies will be needed, the report predicts, adding that engineers who can design systems that are more energy efficient and environmentally friendly will be in demand.

Some of the challenges that could hinder such efforts include a lack of regulations, an absence of incentives to encourage people to become eco-friendly, and the high cost of sustainable technologies.

How organizations can work together

All three megatrends should be considered synergistically, the experts say. For example, AGI techniques can be applied to sustainable and digitally transformed technologies. Sustainability is a key aspect of technology, including AGI. And digital transformation needs to be continually updated with AGI and sustainability features, the report says.

The report included several recommendations for how academia, governments, industries, and professional organizations can work together to advance the three technologies.

To address the need to retrain employees, for example, industry should work with colleges and universities to educate the workforce and train instructors on the technologies.

To advance the science that supports the megatrend technologies, academia needs to work more closely with industry on research projects, the experts suggest. In turn, governments should foster research by academia and not-for-profit organizations.

Companies should advise government officials on how to best regulate the technologies. To gain widespread acceptance of the technologies, the risks and the benefits should be explained to the public to avoid misinformation, the experts say. In addition, processes, practices, and educational materials need to be created to address ethical issues surrounding the technologies.

“As a whole, these megatrends should focus on helping industry,” Milojicic says. “Government and academia are important in their own ways, but if we can make industry successful, everything else will come from that. Industry will fund academia, and governments will help industry.”

Professional organizations including IEEE will need to develop technical standards and road maps on the three areas, he says. A road map is a strategic look at the long-term landscape of a technology, what the trends are, and what the possibilities are.

The megatrends influence which initiatives IEEE is going to explore, Milojicic says, “which could potentially lead to future road maps and standards. In a way, we are doing the prework to prepare what they could eventually standardize.”

Dejan Milojicic discusses findings from IEEE’s 2024 Technology Megatrends report.

Dissemination and education are critical

The group encourages a broad dissemination of the three megatrends to avoid widening the digital divide.

“The speed of change could be faster than most people can adapt to—which could lead to fear and aggression toward technology,” the experts say. “Broad education is critical for technology adoption.”

Reference: https://ift.tt/a146Rhc
