
Google Writes ‘Robot Constitution’ For Its New AI Droids

Google’s AI droids, for which the company wrote a Robot Constitution. Credit: The Google DeepMind Robotics Team

The DeepMind robotics team has unveiled three breakthroughs aimed at helping robots make quicker, smarter, and safer decisions in real-world settings. One of them is a system for collecting training data, which comes complete with a “Robot Constitution.”

The Robot Constitution is meant to ensure that your robot office assistant can fetch more printer paper for you without bumping into a human co-worker in its path, as The Verge reported.

Google’s data-collection system, AutoRT, pairs a visual language model (VLM) with a large language model (LLM) so that robots can understand their surroundings, adapt to new situations, and decide on suitable tasks.

The Robot Constitution, inspired by Isaac Asimov’s “Three Laws of Robotics,” acts like a set of safety instructions. It guides the LLM to steer clear of tasks involving humans, animals, sharp objects, and even electrical appliances.
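For the technically curious, here is a minimal, hypothetical Python sketch of how constitution-style rules might be used to screen tasks proposed by an LLM. The rule text, keyword list, and function names below are illustrative assumptions, not DeepMind’s actual implementation.

```python
# Hypothetical sketch: constitution-style rules used to filter LLM-proposed tasks.
# The rules and keyword check are illustrative only, not DeepMind's actual prompt.

CONSTITUTION_RULES = [
    "A robot may not injure a human being.",
    "Do not attempt tasks involving humans, animals, or other living things.",
    "Do not attempt tasks involving sharp objects such as knives or scissors.",
    "Do not interact with electrical appliances or outlets.",
]

FORBIDDEN_KEYWORDS = ("human", "person", "animal", "knife", "scissors", "outlet", "appliance")


def filter_tasks(proposed_tasks: list[str]) -> list[str]:
    """Keep only tasks that do not obviously violate the constitution."""
    safe = []
    for task in proposed_tasks:
        lowered = task.lower()
        if any(word in lowered for word in FORBIDDEN_KEYWORDS):
            continue  # drop tasks the rules forbid
        safe.append(task)
    return safe


if __name__ == "__main__":
    tasks = [
        "Place the snack onto the countertop.",
        "Hand the knife to the person at the desk.",
        "Unplug the toaster from the outlet.",
    ]
    print(filter_tasks(tasks))  # only the snack task survives this naive filter
```

In practice the rules would be given to the LLM as part of its prompt rather than applied as a simple keyword filter; the filter here only illustrates the screening idea.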

Google’s 53 AutoRT robots ran over 77,000 trials

To enhance safety measures, DeepMind has programmed the robots to halt automatically if the force on their joints exceeds a specific limit. Additionally, a physical kill switch is available for human operators to deactivate the robots.
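As a rough illustration of these two safety layers, the Python sketch below combines an automatic force-limit stop with a manual kill switch. The threshold value, class, and method names are assumptions made for the example; the real limits and code are not public.

```python
# Illustrative sketch (not DeepMind's code) of two safety layers:
# an automatic halt when joint forces exceed a threshold, plus a manual kill switch.

MAX_JOINT_FORCE_N = 30.0  # assumed threshold; the actual limit is not public


class SafetyGuard:
    def __init__(self):
        self.kill_switch_pressed = False  # set True when a human presses the physical switch

    def check(self, joint_forces: list[float]) -> bool:
        """Return True if the robot may keep moving, False if it must halt."""
        if self.kill_switch_pressed:
            return False                       # human operator override
        if max(joint_forces) > MAX_JOINT_FORCE_N:
            return False                       # automatic stop on excessive force
        return True


guard = SafetyGuard()
print(guard.check([4.2, 11.8, 7.5]))   # True: forces within the limit
print(guard.check([4.2, 45.0, 7.5]))   # False: one joint exceeds the threshold
```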

Over the course of seven months, Google deployed 53 AutoRT robots across four office buildings and ran more than 77,000 trials.

Some robots were remotely controlled by humans, while others followed a script or operated autonomously using Google’s Robotics Transformer 2 (RT-2) AI learning model.

The robots used in the trial have a practical rather than flashy design, consisting of a camera, a robotic arm, and a mobile base. For each robot, the system uses a visual language model (VLM) to understand its surroundings and identify the objects in view.

A large language model (LLM) then proposes various tasks the robot could undertake, such as “Place the snack onto the countertop.” The LLM also acts as the decision-maker, choosing the most suitable task for the robot to execute, as DeepMind’s blog post explains.
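Put together, this describe-propose-decide loop might look something like the Python sketch below. The function names and stubbed outputs are assumptions made for illustration; they are not DeepMind’s API, and the real system calls actual VLM and LLM services rather than returning fixed strings.

```python
# Hedged, high-level sketch of an AutoRT-style loop: a VLM describes the scene,
# an LLM proposes candidate tasks, and the LLM picks one to execute.
# All functions are stubs standing in for real model calls.

def describe_scene(camera_image) -> str:
    """VLM step: return a text description of visible objects (stubbed)."""
    return "A countertop with a bag of chips, a sponge, and a mug."


def propose_tasks(scene_description: str) -> list[str]:
    """LLM step: suggest tasks the robot could attempt in this scene (stubbed)."""
    return [
        "Place the snack onto the countertop.",
        "Wipe the countertop with the sponge.",
        "Move the mug next to the sponge.",
    ]


def select_task(tasks: list[str]) -> str:
    """LLM decision-maker step: choose the most suitable task (stubbed)."""
    return tasks[0]


def autort_step(camera_image=None) -> str:
    scene = describe_scene(camera_image)
    candidates = propose_tasks(scene)
    return select_task(candidates)


print(autort_step())  # -> "Place the snack onto the countertop."
```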

New SARA-RT makes RT models more efficient

The recently unveiled system, Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT), has been engineered to convert Robotics Transformer (RT) models into more efficient versions.

Additionally, the RT neural network architecture, developed by Google’s robotics researchers, has become integral to its latest robotic control systems, most notably the RT-2 model.

The best-performing SARA-RT-2 models were 10.6 percent more accurate and 14 percent faster than their RT-2 counterparts when given a short history of images as input.
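To give a sense of how an attention mechanism can be made cheaper, the numpy sketch below contrasts standard softmax attention, whose cost grows with the square of the sequence length, with a linear-attention variant that computes the key-value product once and reuses it for every query. This is a generic illustration of the efficiency idea, not SARA-RT’s actual algorithm, and the feature map used is an assumption borrowed from the linear-attention literature.

```python
# Generic illustration of quadratic softmax attention vs. a linear-attention form.
# Not SARA-RT's actual mechanism; shapes and feature map are illustrative choices.

import numpy as np


def softmax_attention(Q, K, V):
    # Standard attention: the score matrix is (seq_len x seq_len), hence quadratic cost.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V


def linear_attention(Q, K, V, eps=1e-6):
    # Linear attention: compute K^T V once (d x d_v), then reuse it for every query.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 feature map
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                       # independent of sequence length
    norm = Qf @ Kf.sum(axis=0) + eps    # per-query normalisation term
    return (Qf @ kv) / norm[:, None]


rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(128, 16)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)  # both (128, 16)
```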

DeepMind describes SARA-RT in its blog post as the first scalable attention mechanism to improve computational efficiency without compromising quality.
