Thursday, February 12, 2026

OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips


On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model delivers code at more than 1,000 tokens (chunks of data) per second, reportedly about 15 times faster than its predecessor. For comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although it is a larger and more capable model than Spark.
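For a rough sense of what those figures imply, here is the back-of-the-envelope arithmetic in Python; the predecessor's speed and the cross-model comparison are inferred from the numbers quoted above rather than independently reported.

# Illustrative arithmetic only, using the figures quoted in the paragraph above.
spark_tps = 1000                    # GPT-5.3-Codex-Spark, reported tokens per second (floor)
speedup_vs_predecessor = 15         # "roughly 15 times faster"

predecessor_tps = spark_tps / speedup_vs_predecessor
print(f"Implied predecessor speed: ~{predecessor_tps:.0f} tokens/s")      # ~67 tokens/s

claude_standard_tps = 68.2          # Claude Opus 4.6 standard speed
claude_fast_tps = claude_standard_tps * 2.5                               # new fast mode
print(f"Claude Opus 4.6 fast mode: ~{claude_fast_tps:.0f} tokens/s")      # ~170 tokens/s
print(f"Spark vs. Claude fast mode: ~{spark_tps / claude_fast_tps:.1f}x") # ~5.9x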

"Cerebras has been a great engineering partner, and we're excited about adding fast inference as a new platform capability," Sachin Katti, head of compute at OpenAI, said in a statement.

Codex-Spark is a research preview available to ChatGPT Pro subscribers ($200/month) through the Codex app, command-line interface, and VS Code extension. OpenAI is rolling out API access to select design partners. The model ships with a 128,000-token context window and handles text only at launch.


Reference: https://ift.tt/uJpN0K6

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says


On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products that frames the company as the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity "model extraction" and considers it intellectual property theft, which is a somewhat loaded position, given that Google's LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.


Reference: https://ift.tt/3pQZ7Dw

LEDs Enter the Nanoscale




MicroLEDs, with pixels just micrometers across, have long been a byword in the display world. Now, microLED-makers have begun shrinking their creations into the uncharted nano realm. In January, a startup named Polar Light Technologies unveiled prototype blue LEDs less than 500 nanometers across. This raises a tempting question: How far can LEDs shrink?

We know the answer is, at least, considerably smaller. In the past year, two different research groups have demonstrated LED pixels at sizes of 100 nm or less.

These are some of the smallest LEDs ever created. They leave much to be desired in their efficiency—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics. And the key to making even tinier LEDs, if these early attempts are any precedent, may be to make more unusual LEDs.

New Approaches to LEDs

Take Polar Light’s example. Like many LEDs, the Sweden-based startup’s diodes are fashioned from III-V semiconductors like gallium nitride (GaN) and indium gallium nitride (InGaN). Unlike many LEDs, which are etched into their semiconductor from the top down, Polar Light’s are instead fabricated by building peculiarly shaped hexagonal pyramids from the bottom up.

Polar Light designed its pyramids for the larger microLED market and plans to start commercial production in late 2026. But the company also wanted to test how small its pyramids could shrink. So far, it has made pyramids 300 nm across. “We haven’t reached the limit, yet,” says Oskar Fajerson, Polar Light’s CEO. “Do we know the limit? No, we don’t, but we can [make] them smaller.”

Elsewhere, researchers have already done that. Some of the world’s tiniest LEDs come from groups who have foregone the standard III-V semiconductors in favor of other types of LEDs—like OLEDs.

“We are thinking of a different pathway for organic semiconductors,” says Chih-Jen Shih, a chemical engineer at ETH Zurich in Switzerland. Shih and his colleagues were interested in finding a way to fabricate small OLEDs at scale. Using an electron-beam lithography-based technique, they crafted arrays of green OLEDs with pixels as small as 100 nm across.

Where today’s best displays have 14,000 pixels per inch, these nanoLEDs—presented in an October 2025 Nature Photonics paper—can reach 100,000 pixels per inch.
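As a rough sense of scale, pixel density is set by the center-to-center pixel pitch rather than by the size of the emitter itself. A quick conversion, illustrative arithmetic only and assuming a square grid, shows the pitch each of those pixels-per-inch figures implies:

# Convert a pixels-per-inch (PPI) figure into the implied center-to-center pixel pitch.
NM_PER_INCH = 25.4e6  # 1 inch = 25.4 mm = 25,400,000 nm

def pitch_from_ppi(ppi: float) -> float:
    """Pixel pitch in nanometers implied by a given pixels-per-inch density."""
    return NM_PER_INCH / ppi

print(f"14,000 PPI  -> pitch of ~{pitch_from_ppi(14_000):,.0f} nm")   # ~1,814 nm, about 1.8 um
print(f"100,000 PPI -> pitch of ~{pitch_from_ppi(100_000):,.0f} nm")  # ~254 nm

In other words, reaching the quoted 100,000 pixels per inch means packing the 100-nm emitters on a pitch of roughly a quarter of a micrometer.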

Another group tried its hand with perovskites, cage-shaped materials best known for their prowess in high-efficiency solar panels. Perovskites have recently gained traction in LEDs too. “We wanted to see what would happen if we make perovskite LEDs smaller, all the way down to the micrometer and nanometer length-scale,” says Dawei Di, an engineer at Zhejiang University in Hangzhou, China.

Di’s group started with comparatively colossal perovskite LED pixels, measuring hundreds of micrometers. Then, they fabricated sequences of smaller and smaller pixels, each tinier than the last. Even after the 1 μm mark, they did not stop: 890 nm, then 440 nm, only bottoming out at 90 nm. These 90 nm red and green pixels, presented in a March 2025 Nature paper, likely represent the smallest LEDs reported to date.

Efficiency Challenges

Unfortunately, small size comes at a cost: Shrinking LEDs also shrinks their efficiency. Di’s group’s perovskite nanoLEDs have external quantum efficiencies—a measure of how many injected electrons are converted into photons—around 5 to 10 percent; Shih’s group’s nano-OLED arrays performed slightly better, topping 13 percent. For comparison, a typical millimeter-sized III-V LED can reach 50 to 70 percent, depending on its color.
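External quantum efficiency can be estimated from a device’s emitted optical power and drive current. The sketch below shows that standard calculation; the wavelength, power, and current in the example are hypothetical placeholders, since only the quoted efficiency ranges come from the researchers.

# External quantum efficiency (EQE): photons emitted externally per electron injected.
H = 6.62607015e-34          # Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def external_quantum_efficiency(optical_power_w: float, wavelength_m: float, current_a: float) -> float:
    photon_energy_j = H * C / wavelength_m           # energy carried by each emitted photon
    photons_per_second = optical_power_w / photon_energy_j
    electrons_per_second = current_a / E_CHARGE
    return photons_per_second / electrons_per_second

# Hypothetical green (530 nm) nanoLED emitting 0.25 microwatts at 1 microamp of drive current:
print(f"EQE ~ {external_quantum_efficiency(0.25e-6, 530e-9, 1e-6):.1%}")  # ~10.7%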

Shih, however, is optimistic that modifying how nano-OLEDs are made can boost their efficiency. “In principle, you can achieve 30 percent, 40 percent external quantum efficiency with OLEDs, even with a smaller pixel, but it takes time to optimize the process,” Shih says.

Di thinks that researchers could take perovskite nanoLEDs to less dire efficiencies by tinkering with the material. Although his group is now focusing on the larger perovskite microLEDs, Di expects researchers will eventually reckon with nanoLEDs’ efficiency gap. If applications of smaller LEDs become appealing, “this issue could become increasingly important,” Di says.

What Can NanoLEDs Be Used For?

What can you actually do with LEDs this small? Today, the push for tinier pixels largely comes from devices like smart glasses and virtual reality headsets. Makers of these displays are hungry for smaller and smaller pixels in a chase for bleeding-edge picture quality with low power consumption (one reason that efficiency is important). Polar Light’s Fajerson says that smart-glass manufacturers today are already seeking 3 μm pixels.

But researchers are skeptical that VR displays will ever need pixels smaller than around 1 μm. Shrink pixels much beyond that, and they fall below the diffraction limit of the light they emit, meaning the extra resolution can no longer be turned into a sharper image. Shih’s and Di’s groups have already crossed that limit with their 100-nm and 90-nm pixels.
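A rough check of that diffraction argument, using an illustrative wavelength and an ideal numerical aperture rather than figures from either group:

# Smallest resolvable feature for an optical system: roughly wavelength / (2 * NA).
def diffraction_limit_nm(wavelength_nm: float, numerical_aperture: float = 1.0) -> float:
    return wavelength_nm / (2 * numerical_aperture)

limit_nm = diffraction_limit_nm(530)  # green light, ideal NA of 1.0 in air -> ~265 nm
for pixel_nm in (300, 100, 90):
    side = "below" if pixel_nm < limit_nm else "above"
    print(f"{pixel_nm} nm pixel is {side} the ~{limit_nm:.0f} nm diffraction limit")

By this estimate, the 100-nm and 90-nm pixels sit well below what visible light can resolve.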

Very tiny LEDs may instead find use in on-chip photonics systems, allowing the likes of AI data centers to communicate with greater bandwidths than they can today. Chip manufacturing giant TSMC is already trying out microLED interconnects, and it’s easy to imagine chipmakers turning to even smaller LEDs in the future.

But the tiniest nanoLEDs may have even more exotic applications, because they’re smaller than the wavelengths of their light. “From a process point of view, you are making a new component that was not possible in the past,” Shih says.

For example, Shih’s group showed their nano-OLEDs could form a metasurface—a structure that uses its pixels’ nano-sizes to control how each pixel interacts with its neighbors. One day, similar devices could focus nanoLED light into laser-like beams or create holographic 3D nanoLED displays.

Reference: https://ift.tt/aZz4Xvs

What the FDA’s 2026 Update Means for Wearables




As new consumer hardware and software capabilities have bumped up against medicine over the last few years, consumers and manufacturers alike have struggled to identify the line between “wellness” products, such as earbuds that can also amplify and clarify surrounding speakers’ voices, and regulated medical devices, such as conventional hearing aids. On January 6, 2026, the U.S. Food and Drug Administration issued new guidance documents clarifying how it interprets existing law for the review of wearable and AI-assisted devices.

The first document, covering general wellness, specifies that the FDA will interpret noninvasive sensors such as sleep trackers or heart rate monitors as low-risk wellness devices while treating invasive devices under conventional regulations. The other document defines how the FDA will exempt clinical decision support tools from medical device regulations, limiting such software to analyzing existing data rather than extracting data from sensors and requiring it to enable independent review of its recommendations. The documents do not rewrite any statutes, but they refine the interpretation of existing law compared to the 2019 and 2022 documents they replace. They offer a fresh lens on how regulators see technology that sits at the intersection of consumer electronics, software, and medicine—a category many other countries are choosing to regulate more strictly rather than less.

What the 2026 update changed

The 2026 FDA update clarifies how the agency distinguishes between “medical information” and systems that measure physiological “signals” or “patterns.” Earlier guidance discussed these concepts more generally, but the new version defines signal-measuring systems as those that collect continuous, near-continuous, or streaming data from the body for medical purposes, such as home devices transmitting blood pressure, oxygen saturation, or heart rate to clinicians. It gives more concrete examples, such as a blood glucose lab result counting as medical information versus continuous glucose monitor readings counting as signals or patterns.

The updated guidance also sharpens examples of what counts as medical information that software may display, analyze, or print. These include radiology reports or summaries from legally marketed software, ECG reports annotated by clinicians, blood pressure results from cleared devices, and lab results stored in electronic health records.

In addition, the 2026 update softens FDA’s earlier stance on clinical decision tools that offer only one recommendation. While prior guidance suggested tools needed to present multiple options to avoid regulation, FDA now indicates that a single recommendation may be acceptable if only one option is clinically appropriate, though it does not define how that determination will be made.

Separately, updates to the general wellness guidance clarify that some non-invasive wearables—such as optical sensors estimating blood glucose for wellness or nutrition awareness—may qualify as general wellness products, while more invasive technologies would not.

Wellness still requires accuracy

For designers of wearable health devices, the practical implications go well beyond what label you choose. “Calling something ‘wellness’ doesn’t reduce the need for rigorous validation,” says Omer Inan, a medical device technology researcher at the Georgia Tech School of Electrical and Computer Engineering. A wearable that reports blood pressure inaccurately could lead a user to conclude that their values are normal when they are not—potentially influencing decisions about seeking clinical care.

“In my opinion, engineers designing devices to deliver health and wellness information to consumers should not change their approach based on this new guidance,” says Inan. Certain measurements—such as blood pressure or glucose—carry real medical consequences regardless of how they’re branded, Inan notes.

Unless engineers follow robust validation protocols for technology delivering health and wellness information, Inan says, consumers and clinicians alike face the risk of faulty information.

To address that, Inan advocates for transparency: companies should publish their validation results in peer-reviewed journals, and independent third parties without financial ties to the manufacturer should evaluate these systems. That approach, he says, helps the engineering community and the broader public assess the accuracy and reliability of wearable devices.

When wellness meets medicine

The societal and clinical impacts of wearables are already visible, regardless of regulatory labels, says Sharona Hoffman, JD, a law and bioethics professor at Case Western Reserve University.

Medical metrics from devices like the Apple Watch or Fitbit may be framed as “wellness,” but in practice many users treat them like medical data, influencing their behavior or decisions about care, Hoffman points out.

“It could cause anxiety for patients who constantly check their metrics,” she notes. Alternatively, “A person may enter a doctor’s office confident that their wearable has diagnosed their condition, complicating clinical conversations and decision-making.”

Moreover, privacy issues remain unresolved and go unmentioned in both the previous and the updated guidance documents. Many companies that design wellness devices fall outside protections like the Health Insurance Portability and Accountability Act (HIPAA), meaning data about health metrics could be collected, shared, or sold without the same constraints as traditional medical data. “We don’t know what they’re collecting information about or whether marketers will get hold of it,” Hoffman says.

International approaches

The European Union’s Artificial Intelligence Act designates systems that process health-related data or influence clinical decisions as “high risk,” subjecting them to stringent requirements around data governance, transparency, and human oversight. China and South Korea have also implemented rules that tighten controls on algorithmic systems that intersect with healthcare or public-facing use cases. South Korea gives technology makers especially specific regulatory categories, including standards for labeling and describing medical devices and for good manufacturing practices.

Across these regions, regulators are not only classifying technology by its intended use but also by its potential impact on individuals and society at large.

“Other countries that emphasize technology are still worrying about data privacy and patients,” Hoffman says. “We’re going in the opposite direction.”

Post-market oversight

“Regardless of whether something is FDA approved, these technologies will need to be monitored in the sites where they’re used,” says Todd R. Johnson, a professor of biomedical informatics at McWilliams School of Biomedical Informatics at UTHealth Houston, who has worked on FDA-regulated products and informatics in clinical settings. “There’s no way the makers can ensure ahead of time that all of the recommendations will be sound.”

Large health systems may have the capacity to audit and monitor tools, but smaller clinics often do not. Monitoring and auditing are not emphasized in the current guidance, raising questions about how reliability and safety will be maintained once devices and software are deployed widely.

Balancing innovation and safety

For engineers and developers, the FDA’s 2026 guidance presents both opportunities and responsibilities. By clarifying what counts as a regulated device, the agency may reduce upfront barriers for some categories of technology. But that shift also places greater weight on design rigor, validation transparency, and post-market scrutiny.

“Device makers do care about safety,” Johnson says. “But regulation can increase barriers to entry while also increasing safety and accuracy. There’s a trade-off.”

Reference: https://ift.tt/PAOhzct

Wednesday, February 11, 2026

Once-hobbled Lumma Stealer is back with lures that are hard to resist


Last May, law enforcement authorities around the world scored a key win when they hobbled the infrastructure of Lumma, an infostealer that infected nearly 395,000 Windows computers over just a two-month span leading up to the international operation. Researchers said Wednesday that Lumma is once again “back at scale” in hard-to-detect attacks that pilfer credentials and sensitive files.

Lumma, also known as Lumma Stealer, first appeared in Russian-speaking cybercrime forums in 2022. Its cloud-based malware-as-a-service model provided a sprawling infrastructure of domains for hosting lure sites offering free cracked software, games, and pirated movies, as well as command-and-control channels and everything else a threat actor needed to run their infostealing enterprise. Within a year, Lumma was selling for as much as $2,500 for premium versions. By the spring of 2024, the FBI counted more than 21,000 listings on crime forums. Last year, Microsoft said Lumma had become the “go-to tool” for multiple crime groups, including Scattered Spider, one of the most prolific groups.

Takedowns are hard

The FBI and an international coalition of its counterparts took action early last year. In May, they said they seized 2,300 domains, command-and-control infrastructure, and crime marketplaces that had enabled the infostealer to thrive. Recently, however, the malware has made a comeback and is once again infecting a significant number of machines.


Reference: https://ift.tt/tsvpjSP

Tips for Using AI Tools in Technical Interviews




This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

We’d like to introduce Brian Jenney, a senior software engineer and owner of Parsity, an online education platform that helps people break into AI and modern software roles through hands-on training. Brian will be sharing his advice on engineering careers with you in the coming weeks of Career Alert.

Here’s a note from Brian:

“12 years ago, I learned to code at the age of 30. Since then I’ve led engineering teams, worked at organizations ranging from five-person startups to Fortune 500 companies, and taught hundreds of others who want to break into tech. I write for engineers who want practical ways to get better at what they do and advance in their careers. I hope you find what I write helpful.”

Technical Interviews in the Age of AI Tools

Last year, I was conducting interviews for an AI startup position. We allowed unlimited AI usage during the technical challenge round. Candidates could use Cursor, Claude Code, ChatGPT, or any assistant they normally worked with. We wanted to see how they used modern tools.

During one interview, we asked a candidate a simple question: “Can you explain what the first line of your solution is doing?”

Silence.

After a long pause, he admitted he had no idea. His solution was correct. The code worked. But he couldn’t explain how or why. This wasn’t an isolated incident. Around 20 percent of the candidates we interviewed were unable to explain how their solutions worked, only that they did.

When AI Makes Interviews Harder

A few months earlier, I was on the other side of the table at this same company. During a live interview, I instinctively switched from my AI-enabled code editor to my regular one. The CTO stopped me.

“Just use whatever you normally would. We want to see how you work with AI.”

I thought the interview would be easy. But I was wrong.

Instead of only evaluating correctness, the interviewer focused on my decision-making process:

  • Why did I accept certain suggestions?
  • Why did I reject others?
  • How did I decide when AI helped versus when it created more work?

I wasn’t just solving a problem in front of strangers. I was explaining my judgment and defending my decisions in real time, and AI created more surface area for judgment. Counterintuitively, the interview was harder.

The Shift in Interview Evaluation

Most engineers now use AI tools in some form, whether they write code, analyze data, design systems, or automate workflows. AI can generate output quickly, but it can’t explain intent, constraints, or tradeoffs.

More importantly, it can’t take responsibility when something breaks.

As a result, major companies and startups alike are now adapting to this reality by shifting to interviews with AI. Meta, Rippling, and Google, for instance, have all begun allowing candidates to use AI assistants in technical sessions. And the goal has evolved: interviewers want to understand how you evaluate, modify, and trust AI-generated answers.

So, how can you succeed in these interviews?

What Actually Matters in AI-Enabled Interviews

Refusing to use AI out of principle doesn’t help. Some candidates avoid AI to prove they can think independently. This can backfire. If the organization uses AI internally—and most do—then refusing to use it signals rigidity, not strength.

Silence is a red flag. Interviews aren’t natural working environments. We don’t usually think aloud when deep in a complex problem, but silence can raise concerns. If you’re using AI, explain what you’re doing and why:

  • “I’m using AI to sketch an approach, then validating assumptions.”
  • “This suggestion works, but it ignores a constraint we care about.”
  • “I’ll accept this part, but I want to simplify it.”

Your decision-making process is what separates effective engineers from prompt jockeys.

Treat AI output as a first draft. Blind acceptance is the fastest way to fail. Strong candidates immediately evaluate the output: Does this meet the requirements? Is it unnecessarily complex? Would I stand behind this in production?

Small changes like renaming variables, removing abstractions, or tightening logic signal ownership and critical thinking.

Optimize for trust, not completion. Most AI tools can complete a coding challenge faster than any human. Interviews that allow AI are testing something different. They’re answering: “Would I trust this person to make good decisions when things get messy?”

Adapting to a Shifting Landscape

Interviews are changing faster than most candidates realize. Here’s how to prepare:

Start using AI tools daily. If you’re not already working with Cursor, Claude Code, ChatGPT, or Copilot, start now. Build muscle memory for prompting, evaluating output, and catching errors.

Develop your rejection instincts. The skill isn’t using AI. It’s knowing when AI output is wrong, incomplete, or unnecessarily complex. Practice spotting these issues and learning known pitfalls.

Your next interview might test these skills. The candidates who’ve been practicing will have a clear advantage.

—Brian

Was 2025 Really the Year of AI Agents?

Around this time last year, CEOs like Sam Altman promised that 2025 would be the year AI agents would join the workforce as your own personal assistant. But in hindsight, did that really happen? It depends on who you ask. Some programmers and software engineers have embraced agents like Cursor and Claude Code in their daily work. But others are still wary of the risks these tools bring, such as a lack of accountability.

Read more here.

Class of 2026 Salary Projections Are Promising

In the United States, starting salaries for students graduating this spring are expected to increase, according to the latest data from the National Association of Colleges and Employers. Computer science and engineering majors are expected to be the highest-paid graduates, with salary increases of 6.9 percent and 3.1 percent from last year, respectively. The full report breaks down salary projections by academic major, degree level, industry, and geographic region.

Read more here.

Go Global to Make Your Career Go Further

If given the opportunity, are international projects worth taking on? As part of a career advice series by IEEE Spectrum’s sister publication, The Institute, the chief engineer for Honeywell lays out the advantages of working with teams from around the world. Participating in global product development, the author says, could lead to both personal and professional enrichment. Read more here.

Reference: https://ift.tt/wxobd5V

How Do You Define an AI Companion?




For a different perspective on AI companions, see our Q&A with Brad Knox: How Can AI Companions Be Helpful, not Harmful?

AI models intended to provide companionship for humans are on the rise. People are already frequently developing relationships with chatbots, seeking not just a personal assistant but a source of emotional support.

In response, apps dedicated to providing companionship (such as Character.ai or Replika) have recently grown to host millions of users. Some companies are now putting AI into toys and desktop devices as well, bringing digital companions into the physical world. Many of these devices were on display at CES last month, including products designed specifically for children, seniors, and even your pets.

AI companions are designed to simulate human relationships by interacting with users like a friend would. But human-AI relationships are not well understood, and companies are facing concerns about whether the benefits outweigh the risks and potential harms of these relationships, especially for young people. In addition to questions about users’ mental health and emotional well-being, sharing intimate personal information with a chatbot poses data privacy issues.

Nevertheless, more and more users are finding value in sharing their lives with AI. So how can we understand the bonds that form between humans and chatbots?

Jaime Banks is a professor at the Syracuse University School of Information Studies who researches the interactions between people and technology—in particular, robots and AI. Banks spoke with IEEE Spectrum about how people perceive and relate to machines, and the emerging relationships between humans and their machine companions.

Defining AI Companionship

How do you define AI companionship?

Jaime Banks: My definition is evolving as we learn more about these relationships. For now, I define it as a connection between a human and a machine that is dyadic, so there’s an exchange between them. It is also sustained over time; a one-off interaction doesn’t count as a relationship. It’s positively valenced—we like being in it. And it is autotelic, meaning we do it for its own sake. So there’s not some extrinsic motivation, it’s not defined by an ability to help us do our jobs or make us money.

I have recently been challenged by that definition, though, when I was developing an instrument to measure machine companionship. After developing the scale and working to initially validate it, I saw an interesting situation where some people do move toward this autotelic relationship pattern. “I appreciate my AI for what it is and I love it and I don’t want to change it.” It fit all those parts of the definition. But then there seems to be this other relational template that can actually be both appreciating the AI for its own sake, but also engaging it for utilitarian purposes.

That makes sense when we think about how people come to be in relationships with AI companions. They often don’t go into it purposefully seeking companionship. A lot of people go into using, for instance, ChatGPT for some other purpose and end up finding companionship through the course of those conversations. And we have these AI companion apps like Replika and Nomi and Paradot that are designed for social interaction. But that’s not to say that they couldn’t help you with practical topics.

Jaime Banks customizes the software for an embodied AI social humanoid robot. Photo: Angela Ryan/Syracuse University

Different models are also programmed to have different “personalities.” How does that contribute to the relationship between humans and AI companions?

Banks: One of our Ph.D. students just finished a project about what happened when OpenAI demoted GPT-4o and the problems that people encountered, in terms of companionship experiences when the personality of their AI just completely changed. It didn’t have the same depth. It couldn’t remember things in the same way.

That echoes what we saw a couple of years ago with Replika. Because of legal problems, Replika disabled the erotic roleplay module for a period of time, and people described their companions as though they had been lobotomized: they had this relationship, and then one day they didn’t anymore. With my project on the tanking of the Soulmate app, many people, in their reflections, said things like, “I’m never trusting AI companies again. I’m only going to have an AI companion if I can run it from my computer so I know that it will always be there.”

Benefits and Risks of AI Relationships

What are the benefits and risks of these relationships?

Banks: There’s a lot of talk about the risks and a little talk about benefits. But frankly, we are only just on the precipice of starting to have longitudinal data that might allow people to make causal claims. The headlines would have you believe that these are the end of mankind, that they’re going to make you commit suicide or abandon other humans. But many of those claims are based on unfortunate but uncommon situations.

Most scholars gave up technological determinism as a perspective a long time ago. In the communication sciences at least, we don’t generally assume that machines make us do something because we have some degree of agency in our interactions with technologies. Yet much of the fretting around potential risks is deterministic—AI companions make people delusional, make them suicidal, make them reject other relationships. A large number of people get real benefits from AI companions. They narrate experiences that are deeply meaningful to them. I think it’s irresponsible of us to discount those lived experiences.

When we think about concerns linking AI companions to loneliness, we don’t have much data that can support causal claims. Some studies suggest AI companions lead to loneliness, but other work suggests it reduces loneliness, and other work suggests that loneliness is what comes first. Social relatedness is one of our three intrinsic psychological needs, and if we don’t have that we will seek it out, whether it’s from a volleyball for a castaway, my dog, or an AI that will allow me to feel connected to something in my world.

Some people, and governments for that matter, may move toward a protective stance. For instance, there are problems around what gets done with your intimate data that you hand over to an agent owned and maintained by a company—that’s a very reasonable concern. There’s also the potential for children to interact with these systems, when children don’t always navigate the boundaries between fiction and actuality. There are real, valid concerns. However, we need some balance in also thinking about what people are getting from it that’s positive, productive, healthy. Scholars need to make sure we’re being cautious about our claims based on our data. And human interactants need to educate themselves.

Jaime Banks holds a mechanical hand. Photo: Angela Ryan/Syracuse University

Why do you think that AI companions are becoming more popular now?

Banks: I feel like we had this perfect storm, if you will, of the maturation of large language models and coming out of COVID, where people had been physically and sometimes socially isolated for quite some time. When those conditions converged, we had on our hands a believable social agent at a time when people were seeking social connection. Outside of that, we are increasingly just not nice to one another. So, it’s not entirely surprising that if I just don’t like the people around me, or I feel disconnected, that I would try to find some other outlet for feeling connected.

More recently there’s been a shift to embodied companions, in desktop devices or other formats beyond chatbots. How does that change the relationship, if it does?

Banks: I’m part of a Facebook group about robotic companions and I watch how people talk, and it almost seems like it crosses this boundary between toy and companion. When you have a companion with a physical body, you are in some ways limited by the abilities of that body, whereas with digital-only AI, you have the ability to explore fantastic things—places that you would never be able to go with another physical entity, fantasy scenarios.

But in robotics, once we get into a space where there are bodies that are sophisticated, they become very expensive and that means that they are not accessible to a lot of people. That’s what I’m observing in many of these online groups. These toylike bodies are still accessible, but they are also quite limiting.

Do you have any favorite examples from popular culture to help explain AI companionship, either how it is now or how it could be?

Banks: I really enjoy a lot of the short fiction in Clarkesworld magazine, because the stories push me to think about what questions we might need to answer now to be prepared for a future hybrid society. Top of mind are the stories “Wanting Things,” “Seven Sexy Cowboy Robots,” and “Today I am Paul.” Outside of that, I’ll point to the game Cyberpunk 2077, because the character Johnny Silverhand complicates the norms for what counts as a machine and what counts as companionship.

Reference: https://ift.tt/tEw4smP
