Wednesday, January 31, 2024

Chinese malware removed from SOHO routers after FBI issues covert commands


A Wi-Fi router. (credit: Getty Images | deepblue4you)

The US Justice Department said Wednesday that the FBI surreptitiously sent commands to hundreds of infected small office and home office routers to remove malware China state-sponsored hackers were using to wage attacks on critical infrastructure.

The routers—mainly Cisco and Netgear devices that had reached their end of life—were infected with what’s known as KV Botnet malware, Justice Department officials said. Chinese hackers from a group tracked as Volt Typhoon used the malware to wrangle the routers into a network they could control. Traffic passing between the hackers and the compromised devices was encrypted using a VPN module KV Botnet installed. From there, the campaign operators connected to the networks of US critical infrastructure organizations to establish posts that could be used in future cyberattacks. The arrangement caused traffic to appear as originating from US IP addresses with trustworthy reputations rather than suspicious regions in China.

Seizing infected devices

Before the takedown could be conducted legally, FBI agents had to receive authority—technically for what’s called a seizure of infected routers or "target devices"—from a federal judge. An initial affidavit seeking authority was filed in US federal court in Houston in December. Subsequent requests have been filed since then.


Reference: https://ift.tt/efQ5lwy

ChatGPT’s new @-mentions bring multiple personalities into your AI convo


With so many choices, selecting the perfect GPT can be confusing. (credit: Getty Images)

On Tuesday, OpenAI announced a new feature in ChatGPT that allows users to pull custom personalities called "GPTs" into any ChatGPT conversation with the @ symbol. It allows a level of quasi-teamwork within ChatGPT among expert roles that was previously impractical, making collaborating with a team of AI agents within OpenAI's platform one step closer to reality.

"You can now bring GPTs into any conversation in ChatGPT - simply type @ and select the GPT," wrote OpenAI on the social media network X. "This allows you to add relevant GPTs with the full context of the conversation."

OpenAI introduced GPTs in November as a way to create custom personalities or roles for ChatGPT to play. For example, users can build their own GPTs to focus on certain topics or certain skills. Paid ChatGPT subscribers can also freely download a host of GPTs developed by other ChatGPT users through the GPT Store.


Reference: https://ift.tt/n1emTtP

Tuesday, January 30, 2024

Nanostructures Bring Gains for Phase Change Memory




Engineers in the United States and Taiwan say they have demonstrated a promising new twist on nonvolatile memory that is small enough, frugal enough with energy, and able to operate at a low enough voltage that it could boost the abilities of future processors.

The device is a type of phase change memory, a class of memory that holds information in the form of resistance and changes that resistance by melting and reforming its own crystal structure. The crystal in question, called a nanocomposite superlattice, leads to an order of magnitude improvement in the amount of power needed to write a bit, according to research reported last week in Nature Communications. The engineers say this form of phase-change memory (PCRAM) would be particularly useful in future compute-in-memory schemes, which save energy in machine learning by moving less data between memory and processor.
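
The compute-in-memory scheme mentioned above can be sketched numerically: a crossbar of phase-change cells stores a weight matrix as conductances, and applying input voltages to the rows yields column currents equal to the vector-matrix product (Ohm's and Kirchhoff's laws), so no weights shuttle between memory and processor. A minimal sketch, with made-up values and no connection to the paper's actual devices:

```python
import numpy as np

# Illustrative only: each cell stores a weight as a conductance G, and
# column current I = G.T @ V falls out of the physics, computing a
# vector-matrix multiply in place.
rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=(4, 3))  # conductances, arbitrary units
inputs = np.array([0.2, 0.5, 0.7, 0.1])       # input voltages on the rows

column_currents = inputs @ weights            # the "analog" result per column
print(column_currents)
```

Each multiply-accumulate happens where the weight is stored, which is why moving less data is the headline energy win for machine-learning workloads.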

“With switching that low, logic and memory integration are possible.” —Asir Intisar Khan, Stanford

PCRAM has already been commercialized, but it’s not a big segment of the market. It’s thought of as an in-between technology: It’s nonvolatile like flash memory but faster. Yet it’s slower than DRAM, a computer’s main memory, which is volatile. However, an individual phase-change device has the potential to store more data than an individual device of either of the others.

Among the problems holding PCRAM back is that it takes too much current to flip between states. But efforts to fix this have come with trade-offs, such as drifting resistance values. In earlier research, the Stanford University-based part of the team managed to both reduce the current and stabilize resistance. Their answer was a structure called a superlattice: repeating nanometer-scale layers of two different crystal materials. In such a structure, atomic-scale gaps between the layers restrict the flow of heat, so less current is needed to heat the structure and change its phase.

However, those early superlattice devices were too slow to switch and much too large for use in logic chips—about 600 nanometers across. And even though they showed improved energy efficiency, the device’s operating voltage was too high to be driven by CMOS logic, says Stanford post-doctoral researcher Asir Intisar Khan. The team wanted to see if the superlattice concept would work if it was shrunk down to the size and other requirements for use in CMOS ICs and whether doing so would mean the kind of difficult tradeoffs improving PCRAM usually demands.

The goal was a fast-switching, low-voltage, low-power device that was just tens of nanometers wide. “We had to scale it down to 40 nanometers but at the same time optimize all these different components,” says Khan. “If not, industry is not going to take it seriously.”

Getting there required a new material for the lattice, GST467, a compound with a 4:6:7 ratio of germanium, antimony, and tellurium. GST467 was discovered by researchers at the University of Maryland, who later collaborated with those at Stanford and TSMC to use it in superlattice PCRAM. The new material is considered a nanocomposite because it has nanometer-scale crystal facets. “These can act as a crystallization template,” explains Xiangjin Wu, a doctoral researcher in the laboratory of Eric Pop at Stanford. Those templates make it easier for the device to regain its crystal structure when a new bit is written.

With a superlattice alternating between layers of GST467 and antimony telluride, Khan, Wu, and their team achieved 40-nanometer devices that work at 0.7 volts and switch in about 40 nanoseconds while consuming less than 1.5 picojoules. Additionally, the degree of resistance drift was low, the devices endured about 200 million switching cycles, and they could store data as 8 different resistance states for multi-bit storage per device or for use in analog machine learning circuits.
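
As a rough sanity check of those numbers (illustrative arithmetic, not taken from the paper): 8 distinguishable resistance states encode log2(8) = 3 bits per cell, and a sub-1.5-picojoule write delivered over about 40 nanoseconds corresponds to an average switching power of roughly 37.5 microwatts.

```python
import math

# Back-of-envelope arithmetic on the reported figures (illustrative only,
# not the paper's methodology).
states = 8
bits_per_cell = math.log2(states)       # 8 levels encode 3 bits per cell
energy_per_write_j = 1.5e-12            # < 1.5 picojoules per write
switch_time_s = 40e-9                   # about 40 nanoseconds to switch
avg_power_w = energy_per_write_j / switch_time_s  # ~3.75e-05 W, i.e. ~37.5 microwatts

print(bits_per_cell, avg_power_w)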

“With switching that low, logic and memory integration are possible,” says Khan. The memory cells can be controlled using ordinary logic transistors instead of larger devices meant for I/O, as they are now.

Khan says in addition to further improving the device’s endurance at higher temperatures, the researchers are going to explore what kind of system-level advantages integrating the new PCRAM into logic chips could bring. In particular, it could be useful in experimental 3D chips that are built from the bottom up, rather than from carefully connected stacks of already-constructed silicon ICs, as is done in some advanced CPUs and GPUs today. The new PCRAM could be a good fit for integration on top of silicon or other layers, because the device’s formation does not require high temperatures that would damage layers beneath it.

Reference: https://ift.tt/pOQrvNi

Raspberry Pi is preparing for an IPO in London for likely more than $500M


Is it not a strange fate that we should suffer so much fear and doubt for so small a thing? So small a thing! (credit: Andrew Cunningham)

The business arm of Raspberry Pi is preparing to make an initial public offering (IPO) in London, bringing new capital into the company and likely changing the nature of the privately held, charity-controlled company.

CEO Eben Upton confirmed in an interview with Bloomberg News that Raspberry Pi had appointed bankers at London firms Peel Hunt and Jefferies to prepare for "when the IPO market reopens."

Raspberry Pi previously raised money from Sony and semiconductor and software design firm ARM, and it sought public investment. Upton denied or didn't quite deny IPO rumors in 2021, and Bloomberg reported Raspberry Pi was considering an IPO in early 2022. After ARM took a minority stake in the company in November 2023, Raspberry Pi was valued at roughly 400 million pounds, or just over $500 million.


Reference: https://ift.tt/AvrDKeS

Monday, January 29, 2024

ChatGPT is leaking passwords from private conversations of its users, Ars reader says


The OpenAI logo displayed on a phone screen and the ChatGPT website displayed on a laptop screen. (credit: Getty Images)

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

“Horrible, horrible, horrible”

“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”


Reference: https://ift.tt/cWLMZfT

Amazon’s Acquisition of iRobot Falls Through




Citing “no path to regulatory approval in the European Union,” Amazon and iRobot have announced the termination of an acquisition deal first announced in August of 2022 that would have made iRobot a part of Amazon and valued the robotics company at US $1.4 billion.


The European Commission released a statement today that explained some of its concerns, which, to be fair, seem like reasonable things to be concerned about:

Our in-depth investigation preliminarily showed that the acquisition of iRobot would have enabled Amazon to foreclose iRobot’s rivals by restricting or degrading access to the Amazon Stores.… We also preliminarily found that Amazon would have had the incentive to foreclose iRobot’s rivals because it would have been economically profitable to do so. All such foreclosure strategies could have restricted competition in the market for robot vacuum cleaners, leading to higher prices, lower quality, and less innovation for consumers.

Amazon, for its part, characterizes this as “undue and disproportionate regulatory hurdles.” Whoever you believe is correct, the protracted strangulation of this acquisition deal has not been great for iRobot, and its termination is potentially disastrous. Amazon will have to pay iRobot a $94 million termination fee, which is basically nothing for a company of Amazon's size, and meanwhile iRobot is already laying off 350 people, or 31 percent of its head count.

From one of iRobot’s press releases:

“iRobot is an innovation pioneer with a clear vision to make consumer robots a reality,” said Colin Angle, Founder of iRobot. “The termination of the agreement with Amazon is disappointing, but iRobot now turns toward the future with a focus and commitment to continue building thoughtful robots and intelligent home innovations that make life better, and that our customers around the world love.”

The reason that I don’t feel much better after reading that statement is that Colin Angle has already stepped down as chairman and CEO of iRobot. Angle was one of the founders of iRobot (along with Rod Brooks and Helen Greiner) and has stuck with the company for its entire 30+ year existence, until just now. So, that’s not great. Also, I’m honestly not sure how iRobot is going to create much in the way of home innovations since the press release states that the company is “pausing all work related to non-floor care innovations, including air purification, robotic lawn mowing and education,” while also “reducing R&D expense by approximately $20 million year-over-year.”

iRobot’s lawn mower has been paused for a while now, so it’s not a huge surprise that nothing will move forward there, but a pause on the education robots like Create and Root is a real blow to the robotics community. And even if iRobot is focusing on floor-care innovations, I’m not sure how much innovation will be possible with a slashed R&D budget amidst huge layoffs.

Sigh.

On LinkedIn, Colin Angle wrote a little bit about what he called “the magic of iRobot”:

iRobot built the first micro rovers and changed space exploration forever. iRobot built the first practical robots that left the research lab and went on combat missions to defuse bombs, saving 1000’s of lives. iRobot’s robots crucially enabled the cold shutdown of the reactors at Fukushima, found the underwater pools of oil in the aftermath of the deep horizon oil rig disaster in the Gulf of Mexico. And pioneered an industry with Roomba, fulfilling the unfulfilled promise of over 50 years for practical robots in the home.

Why?
As I think about all the events surrounding those actions, there is a common thread. We believed we could. And we decided to try with a spirit of pragmatic optimism. Building robots means knowing failure. It does not treat blind hope kindly. Robots are too complex. Robots are too expensive. Robots are too challenging for hope alone to have the slightest chance of success. But combining the belief that a problem can be solved with a commitment to the work to solve it enabled us to change the world.

And that’s what I personally find so worrying about all of this. iRobot has a treasured history of innovation which is full of successes and failures and really weird stuff, and it’s hard to see how that will be able to effectively continue. Here are a couple of my favorite weird iRobot things, including a PackBot that flies (for a little bit) and a morphing blobular robot:

I suppose it’s worth pointing out that the weirdest stuff (like in the videos above) is all over a decade old, and you can reasonably ask whether iRobot was that kind of company anymore even before this whole Amazon thing happened. The answer is probably not, since the company has chosen to focus almost exclusively on floor-care robots. But even there we’ve seen consistent innovation in hardware and software that pretty much every floor-care robot company seems to then pick up on about a year later. This is not to say that other floor-care robots can’t innovate, but it’s undeniable that iRobot has been a driving force behind that industry. Will that continue? I really hope so.

Reference: https://ift.tt/meipnA7

OpenAI and Common Sense Media partner to protect teens from AI harms and misuse


A boy in a living room wearing a robot mask. (credit: Getty Images)

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI's GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

"AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly," Common Sense Media wrote on X. "That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI."


Reference: https://ift.tt/XTHeAKy

This IEEE Service-Learning Program Is More Popular Than Ever




Since its founding in 1995 at Purdue University, the Engineering Projects in Community Service (EPICS) in IEEE program has been providing nonprofit organizations with technology to improve and deliver services to their community while broadening undergraduate EE students’ hands-on experiences.

In 2009 the EPICS program was brought to IEEE by Moshe Kam, an IEEE Fellow and the 2005–2007 vice president of IEEE Educational Activities; Senior Member Kapil Dandekar; and Fellow Saurabh Sinha. Together they founded EPICS in IEEE as an IEEE Educational Activities program. Funding for the program came from a seed grant through the IEEE New Initiatives Committee.

This year the program marks its 15th anniversary.

“When we created EPICS in IEEE,” Dandekar says, “we were very eager to align the perspective of service-learning from the EPICS program at Purdue with IEEE’s mission to foster technological innovation for the benefit of humanity.

“I am particularly proud of the continuing stakeholder engagement by engineers with humanitarian organizations in shaping projects. I firmly believe that this leads to a better learning experience for the engineering students and a more useful outcome for the humanitarian partner organization.”

During the past 15 years, more than 219 projects in 34 countries have been completed, involving more than 11,000 students in service-learning projects. Of those students, 47 percent identified as female.

“EPICS in IEEE has played a key role in expanding the global reach of projects in which engineering students bring their learning and skills to bear in addressing challenges faced by their local communities,” says Leah Jamieson, 2007 IEEE president and a cofounder of EPICS at Purdue, which is in West Lafayette, Ind. “By tackling community needs in the areas of access and abilities, education and outreach, human services, and the environment, students participating in EPICS in IEEE are gaining firsthand experience in marrying engineering and community. Project by project, they are contributing to IEEE’s goal of advancing technology for the benefit of humanity.”

A focus on learning outcomes

The program differs from other humanitarian efforts within IEEE because of its focus on engineering-student learning outcomes as well as the benefits to the local communities.

“EPICS in IEEE is a perfect way to merge engineering education and engagement,” Sinha says. “It provides an opportunity for universities to connect their students’ educational experiences to support the U.N. sustainable development goals.

“I’ve had the privilege of seeing EPICS in IEEE in many countries,” he adds, “and enjoyed the globalizing benefit that the program brought to all parties involved.”

The projects include a recycling center to reduce plastic waste at Ankole Institute, in southwestern Uganda. The center was built by students from Kyambogo University in Kampala, Uganda.

A team of students from the University of Florida, in Gainesville, designed a computer mouse for those whose hands or arms have an abnormality, so they could more easily play games.

Arizona State University students created a solar-powered air filtration system for nomadic people in Mongolia.

“By tackling community needs in the areas of access and abilities, education and outreach, human services, and the environment, students participating in EPICS in IEEE are gaining firsthand experience in marrying engineering and community.” —Leah Jamieson, 2007 IEEE president and cofounder of EPICS at Purdue

In Panama, engineering students from Universidad Tecnológica de Panama’s electrical engineering and computer science departments used their tech know-how to make the university’s campus more accessible for people with disabilities. They employed a 3D printer to make signs in braille, they built a wheelchair, and they automated the school’s doors to improve access.

In follow-up surveys about their EPICS in IEEE participation, students have shared that it was unlike anything they did in the classroom.

“This experience has been a profound learning opportunity,” says Leonardo Vergara, a systems engineering student at Universidad del Norte, Barranquilla, in Colombia.

“My collaboration and communication skills have also been greatly enhanced,” Vergara adds. “It has reaffirmed my belief in technology’s power to create positive social impact and ignited a sense of social responsibility.”

Continued growth

To take the program to the next level, EPICS in IEEE in 2014 created a fund through the IEEE Foundation to help donors support the program. The Foundation is IEEE’s philanthropic partner.

EPICS in IEEE continues to partner with EPICS Purdue and participates in its University Consortium, a network of institutions that strives to share ideas and support development for service-learning programs.

“Over the past 15 years, the term service learning has been evolving and is often now referred to as community-engaged learning,” says IEEE Member Stephanie Gillespie, chair of EPICS in IEEE. “This updated terminology better reflects the significant role the community has in the learning process. EPICS in IEEE requires significant community-partner engagement alongside the IEEE and student involvement because this partnership with a community organization is more likely to lead to long-term project support and maintenance once deployed into a community.

“As the field of service learning and community-engaged learning evolves, so does EPICS in IEEE. We are excited to celebrate how far EPICS in IEEE has come in 15 years.”

In the past two years, the committee has noticed increased interest in service learning, reviewing 190 proposals in 2023, compared with 77 the year before. The committee approved 39 projects last year, up from 23 in 2022.

The program has streamlined its processes and increased its marketing efforts. It now provides more resources to help ensure the success of the projects. In addition, EPICS volunteers have strengthened partnerships with IEEE affinity groups, technical societies, and regions and sections to raise awareness of the program among students and IEEE members.

Celebratory events

EPICS in IEEE is commemorating its 15th anniversary with a number of events. The celebration kicked off during the IEEE Rising Stars conference, held from 5 to 7 January in Las Vegas. At the conference, program volunteers and staff members gave presentations about its progress, and student teams showcased their prototypes.

Virtual events are planned for this year, as well as stories posted on the EPICS in IEEE website, highlighting past and current projects.

Reference: https://ift.tt/kXLdi9v

Sunday, January 28, 2024

Why Rip Off Creatives, If Generative AI Can Play Fair?




In recent years, AI ethicists have had a tough job. The engineers developing generative AI tools have been racing ahead, competing with each other to create models of even more breathtaking abilities, leaving both regulators and ethicists to comment on what’s already been done.

One of the people working to shift this paradigm is Alice Xiang, global head of AI ethics at Sony. Xiang has worked to create an ethics-first process in AI development within Sony and in the larger AI community. She spoke to Spectrum about starting with the data and whether Sony, with half its business in content creation, could play a role in building a new kind of generative AI.

Alice Xiang on...

  1. Responsible data collection
  2. Her work at Sony
  3. The impact of new AI regulations
  4. Creator-centric generative AI

Responsible data collection

IEEE Spectrum: What’s the origin of your work on responsible data collection? And in that work, why have you focused specifically on computer vision?

Alice Xiang: In recent years, there has been a growing awareness of the importance of looking at AI development in terms of the entire life cycle, and not just thinking about AI ethics issues at the endpoint. And that’s something we see in practice as well, when we’re doing AI ethics evaluations within our company: Many AI ethics issues are really hard to address if you’re just looking at things at the end. A lot of issues are rooted in the data collection process—issues like consent, privacy, fairness, intellectual property. And a lot of AI researchers are not well equipped to think about these issues. It’s not something that was necessarily in their curricula when they were in school.

In terms of generative AI, there is growing awareness of the importance of training data being not just something you can take off the shelf without thinking carefully about where the data came from. And we really wanted to explore what practitioners should be doing and what are best practices for data curation. Human-centric computer vision is an area that is arguably one of the most sensitive for this because you have biometric information.

Spectrum: The term “human-centric computer vision”: Does that mean computer vision systems that recognize human faces or human bodies?

Xiang: Since we’re focusing on the data layer, the way we typically define it is any sort of [computer vision] data that involves humans. So this ends up including a much wider range of AI. If you wanted to create a model that recognizes objects, for example—objects exist in a world that has humans, so you might want to have humans in your data even if that’s not the main focus. This kind of technology is very ubiquitous in both high- and low-risk contexts.

“A lot of AI researchers are not well equipped to think about these issues. It’s not something that was necessarily in their curricula when they were in school.” —Alice Xiang, Sony

Spectrum: What were some of your findings about best practices in terms of privacy and fairness?

Xiang: The current baseline in the human-centric computer vision space is not great. This is definitely a field where researchers have been accustomed to using large web-scraped datasets that do not have any consideration of these ethical dimensions. So when we talk about, for example, privacy, we’re focused on: Do people have any concept of their data being collected for this sort of use case? Are they informed of how the data sets are collected and used? And this work starts by asking: Are the researchers really thinking about the purpose of this data collection? This sounds very trivial, but it’s something that usually doesn’t happen. People often use datasets as available, rather than really trying to go out and source data in a thoughtful manner.

This also connects with issues of fairness. How broad is this data collection? When we look at this field, most of the major datasets are extremely U.S.-centric, and a lot of biases we see are a result of that. For example, researchers have found that object-detection models tend to work far worse in lower-income countries versus higher-income countries, because most of the images are sourced from higher-income countries. Then on a human layer, that becomes even more problematic if the datasets are predominantly of Caucasian individuals and predominantly male individuals. A lot of these problems become very hard to fix once you’re already using these [datasets].

So we start there, and then we go into much more detail as well: If you were to collect a data set from scratch, what are some of the best practices? [Including] these purpose statements, the types of consent and best practices around human-subject research, considerations for vulnerable individuals, and thinking very carefully about the attributes and metadata that are collected.
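
The best practices Xiang lists (purpose statements, consent, and carefully chosen attributes and metadata) can be captured in a simple provenance record, in the spirit of "datasheets for datasets." The sketch below is hypothetical; the field names and values are illustrative, not Sony's actual schema or process.

```python
from dataclasses import dataclass, field

# A minimal provenance record of the kind the interview argues for.
# All names here are hypothetical, for illustration only.
@dataclass
class DatasetDatasheet:
    name: str
    purpose_statement: str      # why the data is being collected
    consent_obtained: bool      # were subjects informed, and did they agree?
    consent_revocable: bool     # can subjects withdraw consent later?
    regions_covered: list = field(default_factory=list)       # check geographic bias
    sensitive_attributes: list = field(default_factory=list)  # e.g. biometric labels

sheet = DatasetDatasheet(
    name="example-faces-v1",
    purpose_statement="Benchmarking face detection under varied lighting",
    consent_obtained=True,
    consent_revocable=True,
    regions_covered=["US", "KE", "IN", "BR"],
    sensitive_attributes=["skin tone annotation"],
)
print(sheet.name, sheet.consent_obtained)
```

Writing such a record before collection forces exactly the questions Xiang raises: what the data is for, who is in it, and on what terms.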

Spectrum: I recently read Joy Buolamwini’s book Unmasking AI, in which she documents her painstaking process to put together a dataset that felt ethical. It was really impressive. Did you try to build a dataset that felt ethical in all the dimensions?

Xiang: Ethical data collection is an important area of focus for our research, and we have additional recent work on some of the challenges and opportunities for building more ethical datasets, such as the need for improved skin tone annotations and diversity in computer vision. As our own ethical data collection continues, we will have more to say on this subject in the coming months.


Her work at Sony

Spectrum: How does this work manifest within Sony? Are you working with internal teams who have been using these kinds of datasets? Are you saying they should stop using them?

Xiang: An important part of our ethics assessment process is asking folks about the datasets they use. The governance team that I lead spends a lot of time with the business units to talk through specific use cases. For particular datasets, we ask: What are the risks? How do we mitigate those risks? This is especially important for bespoke data collection. In the research and academic space, there’s a primary corpus of data sets that people tend to draw from, but in industry, people are often creating their own bespoke datasets.

“I think with everything AI ethics related, it’s going to be impossible to be purists.” —Alice Xiang, Sony

Spectrum: I know you’ve spoken about AI ethics by design. Is that something that’s in place already inside Sony? Are AI ethics talked about from the beginning stages of a product or a use case?

Xiang: Definitely. There are a bunch of different processes, but the one that’s probably the most concrete is our process for all our different electronics products. For that one, we have several checkpoints as part of the standard quality management system. This starts in the design and planning stage, and then goes to the development stage, and then the actual release of the product. As a result, we are talking about AI ethics issues from the very beginning, even before any sort of code has been written, when it’s just about the idea for the product.


The impact of new AI regulations

Spectrum: There’s been a lot of action recently on AI regulations and governance initiatives around the world. China already has AI regulations, the EU passed its AI Act, and here in the U.S. we had President Biden’s executive order. Have those changed either your practices or your thinking about product design cycles?

Xiang: Overall, it’s been very helpful in terms of increasing the relevance and visibility of AI ethics across the company. Sony’s a unique company in that we are simultaneously a major technology company, but also a major content company. A lot of our business is entertainment, including films, music, video games, and so forth. We’ve always been working very heavily with folks on the technology development side. Increasingly we’re spending time talking with folks on the content side, because now there’s a huge interest in AI in terms of the artists they represent, the content they’re disseminating, and how to protect rights.

“When people say ‘go get consent,’ we don’t have that debate or negotiation of what is reasonable.” —Alice Xiang, Sony

Generative AI has also dramatically impacted that landscape. We’ve seen, for example, one of our executives at Sony Music making statements about the importance of consent, compensation, and credit for artists whose data is being used to train AI models. So [our work] has expanded beyond just thinking of AI ethics for specific products, but also the broader landscapes of rights, and how do we protect our artists? How do we move AI in a direction that is more creator-centric? That’s something that is quite unique about Sony, because most of the other companies that are very active in this AI space don’t have much of an incentive in terms of protecting data rights.


Creator-centric generative AI

Spectrum: I’d love to see what more creator-centric AI would look like. Can you imagine it being one in which the people who make generative AI models get consent or compensate artists if they train on their material?

Xiang: It’s a very challenging question. I think this is one area where our work on ethical data curation can hopefully be a starting point, because we see the same problems in generative AI that we see for more classical AI models. Except they’re even more important, because it’s not only a matter of whether my image is being used to train a model, now [the model] might be able to generate new images of people who look like me, or if I’m the copyright holder, it might be able to generate new images in my style. So a lot of these things that we’re trying to push on—consent, fairness, IP and such—they become a lot more important when we’re thinking about [generative AI]. I hope that both our past research and future research projects will be able to really help.

Spectrum: Are you able to say whether Sony is developing generative AI models?

“I don’t think we can just say, ‘Well, it’s way too hard for us to solve today, so we’re just going to try to filter the output at the end.’” —Alice Xiang, Sony

Xiang: I can’t speak for all of Sony, but certainly we believe that AI technology, including generative AI, has the potential to augment human creativity. In the context of my work, we think a lot about the need to respect the rights of stakeholders, including creators, through the building of AI systems that creators can use with peace of mind.

Spectrum: I’ve been thinking a lot lately about generative AI’s problems with copyright and IP. Do you think it’s something that can be patched with the Gen AI systems we have now, or do you think we really need to start over with how we train these things? And this can be totally your opinion, not Sony’s opinion.

Xiang: In my personal opinion, I think with everything AI ethics related, it’s going to be impossible to be purists. Even though we are pushing very strongly for these best practices, we also acknowledge in all our research papers just how insanely difficult this is. If you were to, for example, uphold the highest practices for obtaining consent, it’s difficult to imagine that you could have datasets of the magnitude that a lot of the models nowadays require. You’d have to maintain relationships with billions of people around the world in terms of informing them of how their data is being used and letting them revoke consent.

Part of the problem right now is when people say “go get consent,” we don’t have that debate or negotiation of what is reasonable. The tendency becomes either to throw the baby out with the bathwater and ignore this issue, or go to the other extreme, and not have the technology at all. I think the reality will always have to be somewhere in between.

So when it comes to these issues of reproduction of IP-infringing content, I think it’s great that there’s a lot of research now being done on this specific topic. There are a lot of patches and filters that people are proposing. That said, I think we also will need to think more carefully about the data layer as well. I don’t think we can just say, “Well, it’s way too hard for us to solve today, so we’re just going to try to filter the output at the end.”

We’ll ultimately see what shakes out in terms of the courts in terms of whether this is going to be okay from a legal perspective. But from an ethics perspective, I think we’re at a point where there needs to be deep conversations on what is reasonable in terms of the relationships between companies that benefit from AI technologies and the people whose works were used to create it. My hope is that Sony can play a role in those conversations.


Reference: https://ift.tt/0jIt3ua

Friday, January 26, 2024

In major gaffe, hacked Microsoft test account was assigned admin privileges


In major gaffe, hacked Microsoft test account was assigned admin privileges

Enlarge

The hackers who recently broke into Microsoft’s network and monitored top executives’ email for two months did so by gaining access to an aging test account with administrative privileges, a major gaffe on the company's part, a researcher said.

The new detail was provided in vaguely worded language included in a post Microsoft published on Thursday. It expanded on a disclosure Microsoft published late last Friday. Russian state hackers, Microsoft said, used a technique known as password spraying to exploit a weak credential for logging into a “legacy non-production test tenant account” that wasn’t protected by multifactor authentication. From there, they somehow acquired the ability to access email accounts that belonged to senior executives and employees working in security and legal teams.

A “pretty big config error”

In Thursday’s post updating customers on findings from its ongoing investigation, Microsoft provided more details on how the hackers achieved this monumental escalation of access. The hackers, part of a group Microsoft tracks as Midnight Blizzard, gained persistent access to the privileged email accounts by abusing the OAuth authorization protocol, which is used industry-wide to allow an array of apps to access resources on a network. After compromising the test tenant, Midnight Blizzard used it to create a malicious app and assign it rights to access every email address on Microsoft’s Office 365 email service.
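The mechanics behind this are ordinary OAuth. Under the client-credentials grant (RFC 6749, section 4.4), an app authenticates as itself and receives a token carrying whatever rights it was assigned, so an app granted tenant-wide mail access can read any mailbox. The sketch below builds such a token request with placeholder endpoint and credential values; it illustrates the protocol only, not Midnight Blizzard's tooling or Microsoft's actual endpoints.

```python
from urllib.parse import urlencode

def client_credentials_request(tenant, client_id, client_secret, scope):
    """Build a standard OAuth 2.0 client-credentials token request
    (RFC 6749, section 4.4). The token the app receives inherits every
    right assigned to the app, which is why a malicious app granted
    broad mail rights can read any mailbox in the tenant."""
    # Placeholder token endpoint; real identity providers publish their own.
    token_url = f"https://login.example.com/{tenant}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return token_url, body

url, body = client_credentials_request(
    "contoso", "app-1234", "s3cret", "https://mail.example.com/.default")
print(body)
```

The point of the flow is that no user ever logs in: possession of the app's credentials and the app's assigned rights is sufficient.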

Read 11 remaining paragraphs | Comments

Reference : https://ift.tt/p9cVCvX

Video Friday: Medusai




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Made from beautifully fabricated steel and eight mobile arms, medusai can play percussion and strings with human musicians, dance with human dancers, and move in time to multiple human observers. It uses AI-driven computer vision to know what human observers are doing and responds accordingly through snake gestures, music, and light.

If this seems a little bit unsettling, that’s intentional! The project was designed to explore the concepts of trust and risk in the context of robots, and of using technology to influence emotion.

[ medusai ] via [ Georgia Tech ]

Thanks, Gil!

On April 19, 2021, NASA’s Ingenuity Mars Helicopter made history when it completed the first powered, controlled flight on the Red Planet. It flew for the last time on January 18, 2024.

[ NASA JPL ]

Teleoperation plays a crucial role in enabling robot operations in challenging environments, yet existing limitations in effectiveness and accuracy necessitate the development of innovative strategies for improving teleoperated tasks. The work illustrated in this video introduces a novel approach that utilizes mixed reality and assistive autonomy to enhance the efficiency and precision of humanoid robot teleoperation.

Sometimes all it takes is one good punch, and then you can just collapse.

[ Paper ] via [ IHMC ]

Thanks, Robert!

The new Dusty Robotics FieldPrinter 2 enhances on-site performance and productivity through its compact design and extended capabilities. Building upon the success of the first-generation FieldPrinter, which has printed over 91 million square feet of layout, the FieldPrint Platform incorporates lessons learned from years of experience in the field to deliver an optimized experience for all trades on site.

[ Dusty Robotics ]

Quadrupedal robots have emerged as a cutting-edge platform for assisting humans, finding applications in tasks related to inspection and exploration in remote areas. Nevertheless, their floating base structure renders them susceptible to failure in cluttered environments, where manual recovery by a human operator may not always be feasible. In this study, we propose a robust all-terrain recovery policy to facilitate rapid and secure recovery in cluttered environments.

[ DreamRiser ]

The work that Henry Evans is doing with Stretch (along with Hello Robot and Maya Cakmak’s lab at UW) will be presented at Humanoids this spring.

[ UW HCRL ]

Thanks, Stefan!

I like to imagine that these are just excerpts from one very long walk that Digit took around San Francisco.

[ Hybrid Robotics Lab ]

Boxing, drumming, stacking boxes, and various other practices... Those are the daily teleoperation testing of our humanoid robot. Collaborating with engineers, our humanoid robots collect real-world data from teleoperation for learning to iterate control algorithms.

[ LimX Dynamics ]

The OpenDR project aims to develop a versatile and open toolkit for fundamental robot functions, using deep learning to enhance their understanding and decision-making abilities. The primary objective is to make robots more intelligent, particularly in critical areas like healthcare, agriculture, and production. In the healthcare setting, the TIAGo robot is deployed to offer assistance and support within a healthcare facility.

[ OpenDR ] via [ PAL Robotics ]

[ ARCHES ]

Christoph Bartneck gives a talk entitled, “Social Robots—The end of the beginning or the beginning of the end?”

[ Christoph Bartneck ]

Prof. Michael Jordan offers his provocative thoughts on the blending of AI and economics and takes us on a tour of Trieste, a beautiful and grand city in northern Italy.

[ Berkeley ]

Reference: https://ift.tt/rzRHCVw

OpenAI announces ChatGPT-4 Turbo and ChatGPT 3.5 Turbo model updates


A lazy robot (a man with a box on his head) sits on the floor beside a couch.

Enlarge (credit: Getty Images)

On Thursday, OpenAI announced updates to the AI models that power its ChatGPT assistant. Amid less noteworthy updates, OpenAI tucked in a mention of a potential fix to a widely reported "laziness" problem seen in GPT-4 Turbo since its release in November. The company also announced a new GPT-3.5 Turbo model (with lower pricing), a new embedding model, an updated moderation model, and a new way to manage API usage.

"Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of 'laziness' where the model doesn’t complete a task," writes OpenAI in its blog post.

Since the launch of GPT-4 Turbo, a large number of ChatGPT users have reported that the ChatGPT-4 version of its AI assistant has been declining to do tasks (especially coding tasks) with the same exhaustive depth as it did in earlier versions of GPT-4. We've seen this behavior ourselves while experimenting with ChatGPT over time.

Read 8 remaining paragraphs | Comments

Reference : https://ift.tt/YdOhnR1

The life and times of Cozy Bear, the Russian hackers who just hit Microsoft and HPE


The life and times of Cozy Bear, the Russian hackers who just hit Microsoft and HPE

Enlarge (credit: Getty Images)

Hewlett Packard Enterprise (HPE) said Wednesday that Kremlin-backed actors hacked into the email accounts of its security personnel and other employees last May—and maintained surreptitious access until December. The disclosure was the second revelation of a major corporate network breach by the hacking group in five days.

The hacking group that hit HPE is the same one that Microsoft said Friday broke into its corporate network in November and monitored email accounts of senior executives and security team members until being driven out earlier this month. Microsoft tracks the group as Midnight Blizzard. (Under the company’s recently retired threat actor naming convention, which was based on chemical elements, the group was known as Nobelium.) But it is perhaps better known by the name Cozy Bear—though researchers have also dubbed it APT29, the Dukes, Cloaked Ursa, and Dark Halo.

“On December 12, 2023, Hewlett Packard Enterprise was notified that a suspected nation-state actor, believed to be the threat actor Midnight Blizzard, the state-sponsored actor also known as Cozy Bear, had gained unauthorized access to HPE’s cloud-based email environment,” company lawyers wrote in a filing with the Securities and Exchange Commission. “The Company, with assistance from external cybersecurity experts, immediately activated our response process to investigate, contain, and remediate the incident, eradicating the activity. Based on our investigation, we now believe that the threat actor accessed and exfiltrated data beginning in May 2023 from a small percentage of HPE mailboxes belonging to individuals in our cybersecurity, go-to-market, business segments, and other functions.”

Read 15 remaining paragraphs | Comments

Reference : https://ift.tt/La46r03

Thursday, January 25, 2024

Blade Strike on Landing Ends Mars Helicopter’s Epic Journey




The Ingenuity Mars Helicopter made its 72nd and final flight on 18 January. “While the helicopter remains upright and in communication with ground controllers,” NASA’s Jet Propulsion Lab said in a press release this afternoon, “imagery of its Jan. 18 flight sent to Earth this week indicates one or more of its rotor blades sustained damage during landing, and it is no longer capable of flight.” That’s what you’re seeing in the picture above: the shadow of a broken tip of one of the helicopter’s four two-foot long carbon fiber rotor blades. NASA is assuming that at least one blade struck the Martian surface during a “rough landing,” and this is not the kind of damage that will allow the helicopter to get back into the air. Ingenuity’s mission is over.



The Perseverance rover took this picture of Ingenuity on Aug. 2, 2023, just before flight 54.NASA/JPL-Caltech/ASU/MSSS

NASA held a press conference earlier this evening to give as much information as they can about exactly what happened to Ingenuity, and what comes next. First, here’s a summary from the press release:

Ingenuity’s team planned for the helicopter to make a short vertical flight on Jan. 18 to determine its location after executing an emergency landing on its previous flight. Data shows that, as planned, the helicopter achieved a maximum altitude of 40 feet (12 meters) and hovered for 4.5 seconds before starting its descent at a velocity of 3.3 feet per second (1 meter per second).

However, about 3 feet (1 meter) above the surface, Ingenuity lost contact with the rover, which serves as a communications relay for the rotorcraft. The following day, communications were reestablished and more information about the flight was relayed to ground controllers at NASA JPL. Imagery revealing damage to the rotor blade arrived several days later. The cause of the communications dropout and the helicopter’s orientation at time of touchdown are still being investigated.

While NASA doesn’t know for sure what happened, they do have some ideas based on the cause of the emergency landing during the previous flight, Flight 71. “[This location] is some of the hardest terrain we’ve ever had to navigate over,” said Teddy Tzanetos, Ingenuity Project Manager at NASA JPL, during the NASA press conference. “It’s very featureless—bland, sandy terrain. And that’s why we believe that during Flight 71, we had an emergency landing. She was flying over the surface and was realizing that there weren’t too many rocks to look at or features to navigate from, and that’s why Ingenuity called an emergency landing on her own.”

Ingenuity uses a downward-pointing VGA camera running at 30 Hz for monocular feature tracking, and compares the apparent motion of distinct features between frames to determine its motion over the ground. This optical flow technique is used for drones (and other robots) on Earth too, and it’s very reliable, as long as you have enough features to track. Where it starts to go wrong is when your camera is looking at things that are featureless, which is why consumer drones will sometimes warn you about unexpected behavior when flying over water, and why robotics labs often have bizarre carpets and wallpaper: the more features, the better. On Mars, Ingenuity has been reliably navigating by looking for distinctive features like rocks, but flying over a featureless expanse of sand caused serious problems, as Ingenuity’s Chief Pilot Emeritus Håvard Grip explained to us during today’s press conference:

The way a system like this works is by looking at the consensus of [the features] it sees, and then throwing out the things that don’t really agree with the consensus. The danger is when you run out of features, when you don’t have very many features to navigate on, and you’re not really able to establish what that consensus is and you end up tracking the wrong kinds of features, and that’s when things can get off track.
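The consensus scheme Grip describes can be sketched in a few lines of Python. This is a hypothetical illustration, not flight software: it takes per-feature displacement estimates between frames, uses the median as the consensus motion, rejects features that disagree, and gives up when too few features remain, which is exactly the failure mode featureless sand provokes.

```python
import numpy as np

def consensus_motion(displacements, tol=2.0, min_features=8):
    """Estimate camera motion from per-feature displacements (N x 2 pixels).

    Returns the refined consensus displacement, or None when too few
    features survive outlier rejection to trust the estimate.
    """
    d = np.asarray(displacements, dtype=float)
    if len(d) < min_features:
        return None  # featureless terrain: not enough to form a consensus
    consensus = np.median(d, axis=0)                    # robust initial estimate
    inliers = np.linalg.norm(d - consensus, axis=1) < tol
    if inliers.sum() < min_features:
        return None  # features disagree too much; tracking is unreliable
    return d[inliers].mean(axis=0)                      # refined motion estimate

# Twenty features agreeing on a (1.0, 0.5) px shift, plus two outliers:
rng = np.random.default_rng(0)
good = np.tile([1.0, 0.5], (20, 1)) + rng.normal(0, 0.2, (20, 2))
bad = np.array([[8.0, -6.0], [-5.0, 7.0]])
print(consensus_motion(np.vstack([good, bad])))  # close to [1.0, 0.5]
print(consensus_motion(good[:3]))                # None: too few features
```

With only a handful of trackable features, the function refuses to answer rather than track the wrong ones; Ingenuity's response to the same situation was to call an emergency landing.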

This view from Ingenuity’s navigation camera during flight 70 (on December 22) shows areas of nearly featureless terrain that would cause problems during flights 71 and 72.NASA/JPL-Caltech

After the Flight 71 emergency landing, the team decided to try a “pop-up” flight next: it was supposed to be about 30 seconds in the air, just straight up to 12 meters and then straight down as a check-out of the helicopter’s systems. As Ingenuity was descending, just before landing, there was a loss of communications with the helicopter. “We have reason to believe that it was facing the same featureless sandy terrain challenges [as in the previous flight],” said Tzanetos. “And because of the navigation challenges, we had a rotor strike with the surface that would have resulted in a power brownout which caused the communications loss.” Grip describes what he thinks happened in more detail:

Some of this is speculation because of the sparse telemetry that we have, but what we see in the telemetry is that coming down towards the last part of the flight, on the sand, when we’re closing in on the ground, the helicopter relatively quickly starts to think that it’s moving horizontally away from the landing target. It’s likely that it made an aggressive maneuver to try to correct that right upon landing. And that would have accounted for a sideways motion and tilt of the helicopter that could have led to either striking the blade to the ground and then losing power, or making a maneuver that was aggressive enough to lose power before touching down and striking the blade, we don’t know those details yet. We may never know. But we’re trying as hard as we can with the data that we have to figure out those details.

When the Ingenuity team tried reestablishing contact with the helicopter the next sol, “she was right there where we expected her to be,” Tzanetos said. “Solar panel currents were looking good, which indicated that she was upright.” In fact, everything was “green across the board.” That is, until the team started looking through the images from Ingenuity’s navigation camera, and spotted the shadow of the damaged lower blade. Even if that’s the only damage to Ingenuity, the whole rotor system is now both unbalanced and producing substantially less lift, and further flights will be impossible.

A closeup of the shadow of the damaged blade tip.NASA/JPL-Caltech

There’s always that piece in the back of your head that’s getting ready every downlink—today could be the last day, today could be the last day. So there was an initial moment, obviously, of sadness, seeing that photo come down and pop on screen, which gives us certainty of what occurred. But that’s very quickly replaced with happiness and pride and a feeling of celebration for what we’ve pulled off. Um, it’s really remarkable the journey that she’s been on and worth celebrating every single one of those sols. Around 9pm tonight Pacific time will mark 1000 sols that Ingenuity has been on the surface since her deployment from the Perseverance rover. So she picked a very fitting time to come to the end of her mission. —Teddy Tzanetos

The Ingenuity team is guessing that there’s damage to more than one of the helicopter’s blades; the blades spin fast enough that if one hit the surface, others likely did too. The plan is to attempt to slowly spin the blades to bring others into view to try to collect more information. It sounds unlikely that NASA will divert the Perseverance rover to give Ingenuity a closer look; while continuing on its science mission the rover will come within 200 to 300 meters of Ingenuity and will try to take some pictures, but that’s likely too far away for a good quality image.

Perseverance watches Ingenuity take off on flight 47 on March 14, 2023.NASA/JPL-Caltech/ASU/MSSS

As a tech demo, Ingenuity’s entire reason for existence was to push the boundaries of what’s possible. And as Grip explains, even in its last flight, the little helicopter was doing exactly that, going above and beyond and trying newer and riskier things until it got as far as it possibly could:

Overall, the way that Ingenuity has navigated using features of terrain has been incredibly successful. We didn’t design this system to handle this kind of terrain, but nonetheless it’s sort of been invincible until this moment where we flew in this completely bland terrain where you just have nothing to really hold on to. So there are some lessons in that for us: we now know that that particular kind of terrain can be a trap for a system like this. Backing up when encountering this featureless terrain is a functionality that a future helicopter could be equipped with. And then there are solutions like having a higher resolution camera, which would have likely helped mitigate this situation. But it’s all part of this tech demo, where we equipped this helicopter to do at most five flights in a pre-scouted area and it’s gone on to do so much more than that. And we just worked it all the way up to the line, and then just tipped it right over the line to where it couldn’t handle it anymore.

Arguably, Ingenuity’s most important contribution has been showing that it’s not just possible, but practical and valuable to have rotorcraft on Mars. “I don’t think we’d be talking about sample recovery helicopters if Ingenuity didn’t fly, period, and if it hadn’t survived for as long as it has,” Teddy Tzanetos told us after Ingenuity’s 50th flight. And it’s not just the sample return mission: JPL is also developing a much larger Mars Science Helicopter, which will owe its existence to Ingenuity’s success.

Nearly three years on Mars. 128 minutes and 11 miles of flight in the Martian skies. “I look forward to the day that one of our astronauts brings home Ingenuity and we can all visit it in the Smithsonian,” said Director of JPL Laurie Leshin at the end of today’s press conference.

I’ll be first in line.

We’ve written extensively about Ingenuity, including in-depth interviews with both helicopter and rover team members, and they’re well worth re-reading today. Thanks, Ingenuity. You did well.


What Flight 50 Means for the Ingenuity Mars Helicopter

Team lead Teddy Tzanetos on the helicopter’s milestone aerial mission


Mars Helicopter Is Much More Than a Tech Demo

A Mars rover driver explains just how much of a difference the little helicopter scout is making to Mars exploration


Ingenuity’s Chief Pilot Explains How to Fly a Helicopter on Mars

Simulation is the secret to flying a helicopter on Mars


How NASA Designed a Helicopter That Could Fly Autonomously on Mars

The Perseverance rover’s Mars Helicopter (Ingenuity) will take off, navigate, and land on Mars without human intervention

Reference: https://ift.tt/7RpkoJ1

Design Exploration of RF GaN Amplifier, 3D Packaging, and Thermal Analysis




Join us for this webinar on RF GaN amplifier design using electromagnetic/thermal 3D solvers. We will discuss the step-by-step process of building a GaN amplifier, beginning with the transistor model in the circuit simulator.

The webinar will outline the steps required to convert this to a physical layout for electromagnetic simulation and verification while integrating packaging and thermal effects co-simulation to analyze a complete packaged system. Discover how this comprehensive approach yields innovative solutions, important design insights, and their potential impact on packaged performance.

Register now for this free webinar!

What You Will Learn

  • Design and layout of a typical GaN amplifier using circuit and 3D EM tools
  • Incorporation of 3D component models (connectors, packages, etc.) for complete system analysis
  • Co-simulation of packaged devices with thermal solvers

Who Should Attend

  • Electromagnetic and circuit design engineers: Delve into the co-simulation nuances between circuit models and full 3D designs
  • System engineers: Develop an understanding of the packaged PA design and tradeoffs
  • Professionals in wireless communication: Gain comprehensive insights into the modern circuit/package co-simulation workflows
Reference: https://ift.tt/4wZBTHA

Microsoft cancels Blizzard survival game, lays off 1,900


Activision Blizzard survival game

Enlarge / Blizzard shared this image teasing a now-cancelled game in 2022. (credit: Blizzard Entertainment/Twitter)

The survival game that Blizzard announced it was working on in January 2022 has reportedly been cancelled. The cut comes as Microsoft is slashing jobs a little over three months after closing its $69 billion Activision Blizzard acquisition.

Blizzard's game didn't have a title yet, but Blizzard said it would be for PC and console and introduce new stories and characters. In January 2022, Blizzard put out a call for workers to help build the game.

The game's axing was revealed today in an internal memo from Microsoft Gaming CEO Phil Spencer seen by publications including The Verge and CNBC that said:

Read 12 remaining paragraphs | Comments

Reference : https://ift.tt/RB1iXco

AI will increase the number and impact of cyber attacks, intel officers say


AI will increase the number and impact of cyber attacks, intel officers say

Enlarge (credit: Getty Images)

Threats from malicious cyber activity are likely to increase as nation-states, financially motivated criminals, and novices increasingly incorporate artificial intelligence into their routines, the UK’s top intelligence agency said.

The assessment, from the UK’s Government Communications Headquarters, predicted ransomware will be the biggest threat to get a boost from AI over the next two years. AI will lower barriers to entry, a change that will bring a surge of new entrants into the criminal enterprise. More experienced threat actors—such as nation-states, the commercial firms that serve them, and financially motivated crime groups—will likely also benefit, as AI allows them to identify vulnerabilities and bypass security defenses more efficiently.

“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” Lindy Cameron, CEO of the GCHQ’s National Cyber Security Centre, said. Cameron and other UK intelligence officials said that their country must ramp up defenses to counter the growing threat.

Read 11 remaining paragraphs | Comments

Reference : https://ift.tt/NVSsbgi

Wednesday, January 24, 2024

Google’s latest AI video generator renders implausible situations for cute animals


Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

Enlarge / Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model. (credit: Google)

On Tuesday, Google announced Lumiere, an AI video generator that it calls "a space-time diffusion model for realistic video generation" in the accompanying preprint paper. But let's not kid ourselves: It does a great job at creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.

According to Google, Lumiere utilizes unique architecture to generate a video's entire temporal duration in one go. Or, as the company put it, "We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve."

In layperson terms, Google's tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of making a video by putting together many small parts or frames, it can create the entire video, from start to finish, in one smooth process.
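Lumiere's code isn't public, but the contrast Google draws, frame-by-frame processing versus a single pass over the full space-time volume, can be illustrated with plain filtering. The kernels below are simple averaging filters chosen for illustration, not the model's learned weights:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv_valid(x, k):
    """Minimal n-D 'valid' correlation via sliding windows."""
    windows = sliding_window_view(x, k.shape)
    if k.ndim == 3:
        return np.einsum('...ijk,ijk->...', windows, k)
    return np.einsum('...ij,ij->...', windows, k)

rng = np.random.default_rng(1)
video = rng.random((16, 8, 8))          # toy clip: (time, height, width)

# Frame-wise 2D filtering: each output frame depends on one input frame
# only, so temporal consistency must be imposed in a separate stage.
k2d = np.ones((3, 3)) / 9
framewise = np.stack([conv_valid(f, k2d) for f in video])   # (16, 6, 6)

# Space-time 3D filtering: one pass over the whole clip; every output
# value mixes neighboring frames as well as neighboring pixels.
k3d = np.ones((3, 3, 3)) / 27
spacetime = conv_valid(video, k3d)                          # (14, 6, 6)

print(framewise.shape, spacetime.shape)
```

The 3D pass is the shape of the idea: because the kernel spans time, the whole temporal duration is coupled in a single operation rather than stitched together from independently generated keyframes.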

Read 8 remaining paragraphs | Comments

Reference : https://ift.tt/26LP5Tj

Tuesday, January 23, 2024

Mass exploitation of Ivanti VPNs is infecting networks around the globe


Cybercriminals or anonymous hackers use malware on mobile phones to hack personal and business passwords online.

Enlarge / Cybercriminals or anonymous hackers use malware on mobile phones to hack personal and business passwords online. (credit: Getty Images)

Hackers suspected of working for the Chinese government are mass exploiting a pair of critical vulnerabilities that give them complete control of virtual private network appliances sold by Ivanti, researchers said.

As of Tuesday morning, security company Censys detected 492 Ivanti VPNs that remained infected out of 26,000 devices exposed to the Internet. Nearly a quarter of the compromised VPNs—121—resided in the US. The three countries with the next biggest concentrations were Germany, with 26, South Korea, with 24, and China, with 21.

(credit: Censys)

Microsoft’s customer cloud service hosted the most infected devices with 13, followed by cloud environments from Amazon with 12, and Comcast at 10.

Read 9 remaining paragraphs | Comments

Reference : https://ift.tt/LYxmfXT

A “robot” should be chemical, not steel, argues man who coined the word


Enlarge (credit: Getty Images)

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word "robot" in a sci-fi play called R.U.R. (short for Rossum's Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn't happy about how the term's meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called "The Author of the Robots Defends Himself," published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third-person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek's play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek's original vision. She explores "chemical robots"—microparticles resembling living cells—which she calls "liquid robots."

Read 4 remaining paragraphs | Comments

Reference : https://ift.tt/S4rD6Nz

The Top 10 Climate Tech Stories of 2024

In 2024, technologies to combat climate change soared above the clouds in electricity-generating kites, traveled the oceans sequestering...