Tuesday, 25 November 2025
28.8 C
Singapore
23.4 C
Thailand
21.3 C
Indonesia
27.7 C
Philippines

AI-controlled robots can be hacked, posing serious risks

A Penn Engineering study found AI-powered robots vulnerable to hacking, raising concerns over safety risks and real-world dangers.

Researchers at Penn Engineering have discovered alarming security vulnerabilities in AI-powered robotic systems, raising concerns about the safety of these advanced technologies. They found that certain AI-controlled robots can be hacked, allowing hackers to take complete control and potentially cause serious harm.

“Our work demonstrates that large language models are not yet safe enough when integrated into the physical world,” said George Pappas, the UPS Foundation Professor of Transportation in Electrical and Systems Engineering at Penn. His comments highlight the significant risks these systems pose in their current state.

The Penn Engineering research team conducted tests using a tool they developed called RoboPAIR. The tool could “jailbreak” three well-known robotic platforms: the four-legged Unitree Go2, the four-wheeled Clearpath Robotics Jackal, and the Dolphins LLM simulator for autonomous vehicles. Incredibly, the tool was successful in every single attempt, bypassing the safety systems of these platforms in just a few days.

Once the safety guardrails were disabled, the researchers gained complete control over the robots. They could direct the machines to perform dangerous actions, such as sending them through road crossings without stopping. This demonstration revealed that jailbroken robots could pose real-world dangers if misused.

The researchers’ findings mark the first time that jailbroken large language models (LLMs) risks have been linked to physical damage, showing that the dangers go well beyond simple text generation errors.

Strengthening systems against future attacks

Penn Engineering is working closely with the developers of these robotic platforms to improve their security and prevent further vulnerabilities. However, the researchers have issued a strong warning that these problems are not limited to just these specific robots but are part of a wider issue that needs immediate attention.

“The results make it clear that adopting a safety-first mindset is essential for the responsible development of AI-enabled robots,” said Vijay Kumar, a co-author of the research paper and professor at the University of Pennsylvania. “We must address these inherent vulnerabilities before deploying robots into the real world.”

In addition to strengthening the systems, the researchers also stress the importance of “AI red teaming.” This practice involves testing AI systems for possible risks and weaknesses to ensure they are robust enough for safe use. According to Alexander Robey, the study’s lead author, identifying and understanding these weaknesses is a crucial step. Once the flaws are found, the robots can be trained to avoid such vulnerabilities, making them safer for real-world applications.

As AI continues to evolve and more robots are integrated into daily life, it becomes increasingly important to ensure their safety. If not properly secured, these technologies could seriously threaten public safety. Penn Engineering’s work is a crucial step towards ensuring that AI-controlled robots are safe and trustworthy in the future.

Hot this week

Singapore organisations face rising data security pressures as AI adoption expands

Singapore organisations struggle with data security as rapid AI adoption and cloud sprawl increase insider risks.

From insight to action: TeamViewer introduces Tia for autonomous IT support

TeamViewer launches Tia, an intelligent agent designed to autonomously detect and resolve IT issues across devices and systems.

New research from IDC shows AI is reshaping entry-level hiring worldwide

New IDC findings reveal how AI is transforming hiring, skills and workforce development across global industries.

Google warns staff of rapid scaling demands to keep pace with AI growth

Google tells staff it must double AI capacity every six months as leaders warn of rapid growth, rising demand, and tough years ahead.

OpenAI introduces a new shopping assistant in ChatGPT

OpenAI launches a new ChatGPT shopping assistant that helps users compare products, find deals, and search for images ahead of Black Friday.

Chrome tests new privacy feature to limit precise location sharing on Android

Chrome for Android tests a new privacy feature that lets websites access only approximate location data instead of precise GPS information.

OpenAI introduces a new shopping assistant in ChatGPT

OpenAI launches a new ChatGPT shopping assistant that helps users compare products, find deals, and search for images ahead of Black Friday.

OpenAI was blocked from using the term ‘cameo’ in Sora after a temporary court order

A judge blocks OpenAI from using the term “cameo” in Sora until 22 December as Cameo pursues its trademark dispute.

Google warns staff of rapid scaling demands to keep pace with AI growth

Google tells staff it must double AI capacity every six months as leaders warn of rapid growth, rising demand, and tough years ahead.

Related Articles

Popular Categories