Posture-Controlled Robotic Arm

Dec 10th, 2022
Robotics, AI, OpenCV, Python, Embedded Systems
Offline
Completed
Arm Functionality Poster Page
Design Lab Live Demo Screenshot
Arm Architecture Diagram

TL;DR

The goal of this project was to create a robotic arm controlled by one's posture, allowing it to interact with its environment while being intuitive to learn, thereby replacing complex user interfaces. Additional features included velocity control and remote control over WiFi.

Tools & Technologies

The only language employed was Python. A Raspberry Pi was used to run our HTTP server and control the robotic arm. We used and modified several open-source projects to speed up development, including the Poppy Project for robotic arm control, MediaPipe for its pose-recognition models, and OpenCV for general computer vision algorithms and camera-feed extraction.

Python · REST API · OpenCV · MediaPipe · Raspberry Pi

Challenge

Requirements

The requirements of the project were simply that our robot had to be a physical robot that could interact with its environment in some way, be semi-autonomous, and incorporate some of the methods and topics introduced in class.

Project Description

The goal of our project was quite simple: we wanted to create a simpler interface for moving robotic arms than the complex tablet interfaces currently available. Our technology should be easy to use once set up and able to control robots either within or out of sight (i.e., remote control regardless of distance). Our interface consisted of two cameras and a box on the ground you had to stay inside to remain in control of the arm. This booth comprised the entire interface; once inside it, your arm movements were mimicked by the robotic arm. To accomplish this, we redesigned the robotic arm to have a joint configuration identical to a human arm's. The arm was also equipped with simple velocity control to combat sensor noise and increase precision.
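The velocity control described above can be sketched as a per-joint step limiter: each control tick, the commanded angle moves toward the pose-derived target by at most a fixed amount, so a noisy sensor spike cannot jerk the arm. This is a minimal illustration; the step limit and loop shown here are assumptions, not the project's exact parameters.

```python
# Hedged sketch: per-joint velocity limiting to smooth noisy pose targets.
# The 5-degree-per-tick limit is illustrative, not taken from the project.

def limit_velocity(current, target, max_step):
    """Move `current` toward `target` by at most `max_step` degrees."""
    delta = target - current
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return current + delta

# Example control-loop ticks: the pose model suddenly reports 90 degrees
# while the joint sits at 0; the limiter spreads the motion over many ticks.
angle = 0.0
for _ in range(3):
    angle = limit_velocity(angle, 90.0, max_step=5.0)
print(angle)  # 15.0 after three ticks
```

Clamping the per-tick step rather than filtering the raw landmarks keeps the arm responsive to sustained motion while rejecting single-frame outliers.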

Architecture

The Poppy Project was our interface for sending remote commands to the robotic arm. We had to change its configuration files so the software knew which motors to give us control of. The posture recognition was developed using OpenCV and MediaPipe. The most difficult part was merging the inputs from both cameras to get a better estimate of the user's pose. Once joint positions were obtained, motor updates were calculated and streamed to the Raspberry Pi so the robotic arm would move accordingly. In addition to pose recognition, we used another model to detect whether the user's fist was closed; the fist controlled a binary (on/off) end effector, in our case a gripper/light.
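One way the two-camera merge could work is a confidence-weighted average of each landmark: MediaPipe reports a per-landmark visibility score, and the camera that sees a joint more clearly gets more say. This is a sketch under that assumption; the actual fusion method used in the project may have differed.

```python
# Hedged sketch: fuse one joint's (x, y) estimate from two cameras by
# weighting each camera's reading with its landmark confidence score.
# The weighting scheme is an assumption, not the project's exact method.

def fuse_landmark(p_a, conf_a, p_b, conf_b):
    """Confidence-weighted average of two (x, y) landmark estimates."""
    total = conf_a + conf_b
    if total == 0:
        return None  # neither camera saw the joint this frame
    x = (p_a[0] * conf_a + p_b[0] * conf_b) / total
    y = (p_a[1] * conf_a + p_b[1] * conf_b) / total
    return (x, y)

# Camera A sees the elbow clearly (0.9), camera B only partially (0.3),
# so the fused estimate lands closer to camera A's reading.
fused = fuse_landmark((0.40, 0.50), 0.9, (0.50, 0.60), 0.3)
print(fused)
```

A natural extension is to fall back to a single camera whenever the other's confidence drops below a threshold, which keeps the arm controllable even when the user briefly occludes one view.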
