
Robot 6 DOF pick object with camera

I have a 6 DOF robotic arm (Niryo Ned 2) and an RGB-D camera (DepthAI OAK-D).

My aim is to detect an object using a YOLO AI and pick it with the robotic arm.

So far, I've been able to:

  • Detect the object and its 3D location in space (relative to the camera)
  • Convert the coordinates to the robot base frame
  • Move the arm to that point

Here is my code:

def convert_to_arm_coordinates(self, object_pos_3d) -> np.ndarray:
    # The camera reports distances in millimeters while Niryo works in meters
    object_pos_3d = object_pos_3d / 1000.0  # copy, so the caller's array is not mutated

    # Rotation matrix from the camera frame to the robot frame
    rotation_matrix = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, -1]])

    # Translation vector from the camera frame to the robot frame (meters)
    translation_vector = np.array([+0.004, -0.04, -0.085])

    # Translate into the robot origin, then rotate into the robot frame
    object_pos_robot = np.matmul(rotation_matrix, object_pos_3d + translation_vector)

    return object_pos_robot
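As a quick sanity check of the rotation above, the matrix can be applied to the camera's unit axes to confirm which robot axis each camera axis maps onto. This is a minimal sketch using the same matrix as the code; the axis interpretations in the comments are my reading of the matrix, not something stated by the camera or robot documentation:

```python
import numpy as np

rotation_matrix = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, -1]])

# Each camera unit axis lands on one robot axis:
cam_x, cam_y, cam_z = np.eye(3)
robot_from_cam_x = rotation_matrix @ cam_x  # -> [0, -1, 0]: camera +X becomes robot -Y
robot_from_cam_y = rotation_matrix @ cam_y  # -> [1, 0, 0]: camera +Y becomes robot +X
robot_from_cam_z = rotation_matrix @ cam_z  # -> [0, 0, -1]: camera depth becomes robot -Z
```

If the camera were mounted perfectly vertical, this mapping alone would be enough; any tilt of the camera breaks it, which is the problem described below.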

def loop_detections(detections):
    for detection in detections:
        label, x_ia, y_ia, z_ia = get_position(detection)
        if int(x_ia) != 0 and int(y_ia) != 0 and int(z_ia) != 0:
            if label == "label_to_grab":
                print("[CAMERA] Grabbing label {} ..".format(label), flush=True)

                # Read detected position of the object from RGB-D camera
                relative_pos = np.array([x_ia, y_ia, z_ia])

                # Convert to the robot coordinate system
                world_pos = convert_to_arm_coordinates(relative_pos)

                # Shift position
                x, y, z, roll, pitch, yaw = robot.arm.get_pose().to_list()
                print("[Niryo] Current pose ({}, {}, {})".format(x, y, z), flush=True)
                x += world_pos[0]
                y += world_pos[1]
                z += world_pos[2]
                x = max(-0.5, min(0.35, x))
                y = max(-0.5, min(0.5, y))
                z = max(0.17, min(0.6, z))
                print("[DepthAI] {}: Detection ({}, {}, {}), World ({}, {}, {}), Shifted ({}, {}, {})".format(label, x_ia, y_ia, z_ia, world_pos[0], world_pos[1], world_pos[2], x, y, z), flush=True)

                # Reach the coordinates and pick the object
                robot.pick_place.pick_from_pose([x, y, z, roll, pitch, yaw])

                # Move back to stand-by position
                robot.arm.move_pose(robot.stand_by)

The robot moves correctly along the Y and Z axes but not along the X axis: it falls 4 to 8 centimeters short on that axis. Since the camera is not mounted perfectly vertical, I am almost certain the X computed by the AI is incorrect.

Is there a way to compute X based on the end-effector pitch?

The solution was to use the sine of the pitch:

x = -world_pos[2] * math.sin(pitch)
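A minimal sketch of that compensation, assuming `pitch` is in radians (as `get_pose()` returns it) and that `world_pos[2]` holds the depth component after the frame conversion above. The helper name is mine, not part of the original code:

```python
import math
import numpy as np

def compensate_x_for_pitch(world_pos: np.ndarray, pitch: float) -> float:
    # Project the camera depth component onto the robot X axis.
    # With a tilted camera, part of the measured depth is actually
    # horizontal reach, recovered here via sin(pitch).
    return -world_pos[2] * math.sin(pitch)

# Example: a 0.10 m depth component seen at a 30-degree pitch
x_shift = compensate_x_for_pitch(np.array([0.0, 0.0, 0.10]), math.radians(30))
# x_shift is approximately -0.05 m
```

This is a first-order fix for one tilt angle; a full hand-eye calibration would handle arbitrary camera orientations.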

This answer was posted as an edit to the question Robot 6 DOF pick object with camera by the OP Manitoba under CC BY-SA 4.0.
