PICABOT
(PICk and place, Autonomous driving, Bot)

1Modulab with-robot, 2Nextchip, 3Hanyang University, 4Yonsei University, 5with-RL



Abstract

This project focuses on advancing personal robotics by developing a robot capable of autonomous navigation and robotic arm control. As part of Season 3 of the Companion Robot Lab, the goal is to integrate SLAM-based map creation, autonomous driving for task execution, and robotic arm manipulation. In a simulated home environment, the robot is tasked with moving from a starting point (Zone 1) to a target (Zone 2): it navigates autonomously using SLAM, locates a cube with its camera, picks it up, and delivers it to the target. The project demonstrates how personal robots can assist with daily tasks using SLAM and computer vision directly, without relying on a framework such as ROS. The results are intended to lay the groundwork for robots that integrate seamlessly into home environments.

Video

Operation Steps

  1. Stand By

    The robot waits for user commands in the format: (pickup location, drop-off location, target object). Upon receiving the command, the robot executes the following procedures.

  2. Move to Pick

    - Remembers its "original position" for later return
    - Calculates path from current position to pickup location
    - Moves to pickup location following the calculated path

  3. Find Target

    - Scans surroundings using arm-mounted camera
    - Adjusts camera position and angle when target is detected
    - Centers target object in camera view

  4. Approach Target

    - Controls wheels to approach the target
    - Maintains target at camera center through continuous adjustment
    - Positions robot at optimal distance for pickup

  5. Pick Target

    - Uses Visual Servoing for precise arm control
    - Grasps target object
    - Loads target into storage compartment

  6. Move to Place

    - Calculates path to drop-off location
    - Navigates to destination following planned path

  7. Place Target

    - Controls robotic arm to unload target
    - Places object at designated location

  8. Move to Base

    - Calculates return path to original position
    - Returns to starting position
    - Prepares for next command
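
The eight steps above amount to a fixed sequence of states triggered by a single user command. A minimal sketch of that sequencing in Python is shown below; the state names and handler callbacks are illustrative placeholders for the navigation and arm-control routines described under Technical Details.

```python
from enum import Enum, auto

class State(Enum):
    STAND_BY = auto()
    MOVE_TO_PICK = auto()
    FIND_TARGET = auto()
    APPROACH_TARGET = auto()
    PICK_TARGET = auto()
    MOVE_TO_PLACE = auto()
    PLACE_TARGET = auto()
    MOVE_TO_BASE = auto()

# Execution order once a (pickup, drop-off, target object) command arrives.
SEQUENCE = [
    State.MOVE_TO_PICK, State.FIND_TARGET, State.APPROACH_TARGET,
    State.PICK_TARGET, State.MOVE_TO_PLACE, State.PLACE_TARGET,
    State.MOVE_TO_BASE,
]

def run_mission(command, handlers):
    """Run one pick-and-place mission.

    command:  (pickup_location, dropoff_location, target_object)
    handlers: dict mapping each State to a callable taking the command;
              placeholders for the real navigation / arm routines.
    """
    for state in SEQUENCE:
        handlers[state](command)
```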

Technical Details

Navigation System

1. Path Planning (Using simOMPL)

[Figure: Path planning visualization]

The find_path method leverages the simOMPL module in CoppeliaSim to plan the robot's path. The key steps are as follows:

  1. Goal Location Setup:
    • The method sets the robot's current position and goal location.
    • The goal location is identified based on the mission or pre-configured coordinates.
  2. Path Exploration:
    • It uses the BiTRRT algorithm to explore and determine a feasible path in a 2D plane while avoiding obstacles.
    • Obstacles and the robot's collision box are defined to ensure a valid path is generated.
  3. Path Simplification:
    • Once a path is found, it is simplified (simplifyPath) to create an optimal trajectory for the robot.
  4. Storing Path Data:
    • The planned path is stored as 3D coordinates (path_3d) for subsequent use during motion control and path tracking.
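
The steps above can be condensed into a short routine. The sketch below is an illustration of how a find_path-style function could look with the CoppeliaSim ZMQ Remote API and the simOMPL plugin; the scene object names ('/youBot', '/goalDummy'), workspace bounds, and time budgets are assumptions, not the project's actual values.

```python
from coppeliasim_zmqremoteapi_client import RemoteAPIClient

client = RemoteAPIClient()
sim = client.require('sim')
simOMPL = client.require('simOMPL')

robot = sim.getObject('/youBot')      # assumed scene object names
goal = sim.getObject('/goalDummy')

def find_path():
    start = sim.getObjectPosition(robot, -1)
    target = sim.getObjectPosition(goal, -1)

    task = simOMPL.createTask('navigation')
    simOMPL.setAlgorithm(task, simOMPL.Algorithm.BiTRRT)

    # Plan in a 2D (x, y) plane; bounds correspond to the room size (assumed).
    ss = simOMPL.createStateSpace('plane', simOMPL.StateSpaceType.position2d,
                                  robot, [-5.0, -5.0], [5.0, 5.0], 1)
    simOMPL.setStateSpace(task, [ss])
    # Check the robot against everything else in the scene.
    simOMPL.setCollisionPairs(task, [robot, sim.handle_all])

    simOMPL.setStartState(task, start[:2])
    simOMPL.setGoalState(task, target[:2])
    simOMPL.setup(task)

    path_3d = []
    if simOMPL.solve(task, 4.0):          # search budget in seconds (assumed)
        simOMPL.simplifyPath(task, 2.0)   # shorten/smooth the raw solution
        path = simOMPL.getPath(task)      # flat [x1, y1, x2, y2, ...] list
        path_3d = [(path[i], path[i + 1], 0.0)
                   for i in range(0, len(path), 2)]
    simOMPL.destroyTask(task)
    return path_3d
```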

2. Path Tracking (Including Pure Pursuit)

[Figure: Path tracking with Pure Pursuit]
a. Basic Path Tracking:
  • The robot continuously compares its current position with the planned target point along the path.
  • It calculates the nearest point on the path and guides the robot toward it.
  • The robot minimizes its positional error as it moves along the path.
b. Pure Pursuit Algorithm:

The Pure Pursuit algorithm enhances path tracking by ensuring smooth and natural motion along curved paths:

  1. Lookahead Distance:
    • The algorithm identifies a Lookahead Point on the path, located at a specific distance ahead of the robot's current position.
    • This distance can be dynamically adjusted based on the robot's speed (longer distance for higher speeds).
  2. Target Direction Calculation:
    • The direction vector between the robot's position and the Lookahead Point is computed.
    • The angular difference between the robot's heading and the target direction determines the rotational velocity.
  3. Control Signal Generation:
    • Linear velocity is determined based on the straight-line distance to the Lookahead Point.
    • Rotational velocity is calculated to align the robot toward the Lookahead Point.
    • The Mecanum wheels' independent velocities are adjusted accordingly to ensure smooth motion.
c. Benefits of Pure Pursuit:
  • Produces smoother and more natural curved trajectories compared to traditional nearest-point tracking.
  • Reduces abrupt direction changes and oscillations during path following.
  • Adaptive Lookahead Distance enables stable control across different speeds and environments.
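
The update described in steps 1-3 can be sketched without any simulator dependency; the lookahead distance, speed limit, and turn gain below are illustrative defaults rather than the project's tuned parameters.

```python
import math

def pure_pursuit_step(pose, path_2d, lookahead=0.4, v_max=0.3, k_w=2.0):
    """One Pure Pursuit control update.

    pose:    (x, y, heading) of the robot in world coordinates
    path_2d: list of (x, y) waypoints from the planner
    Returns (v, w): forward velocity and rotational velocity.
    """
    x, y, heading = pose

    # 1. Lookahead point: first waypoint at least `lookahead` metres away
    #    (fall back to the final waypoint near the end of the path).
    target = path_2d[-1]
    for px, py in path_2d:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break

    # 2. Angular difference between the heading and the target direction.
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]

    # 3. Control signals: linear velocity from the distance to the lookahead
    #    point, rotational velocity proportional to the angular error.
    dist = math.hypot(target[0] - x, target[1] - y)
    v = min(v_max, dist) * math.cos(alpha)
    w = k_w * alpha
    return v, w
```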

Robotic Arm Control

Object Detection


- Uses arm-mounted camera for target detection
- Currently implements OpenCV-based red cube detection
- Future versions planned to include DNN-based object detection
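
A minimal version of the current red-cube detector can be written with standard OpenCV calls, as sketched below; the HSV thresholds and minimum contour area are assumptions that would need tuning for the simulated camera.

```python
import cv2

def detect_red_cube(bgr_image, min_area=200):
    """Return the (cx, cy) pixel centre of the largest red blob, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so threshold two ranges and combine.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area:
        return None

    m = cv2.moments(largest)
    return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
```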

Target Approach

[Figure: Approach to target]

- Calculates distance using camera position and angle
- Rotates to face target directly
- Maintains target centering during approach
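
Assuming a flat floor and a known camera mounting height and tilt, the ground distance to the detected object can be estimated from its pixel row, while a simple proportional turn keeps it centred. The geometry below is a sketch; the camera parameters and gain are placeholders.

```python
import math

def estimate_ground_distance(pixel_v, image_h, fov_v, cam_height, cam_tilt):
    """Horizontal distance to a point lying on the floor.

    pixel_v:    vertical pixel coordinate of the target centre
    image_h:    image height in pixels
    fov_v:      vertical field of view (radians)
    cam_height: camera height above the floor (m)
    cam_tilt:   downward tilt of the optical axis (radians)
    """
    # Angle of the pixel ray below the optical axis, then below horizontal.
    offset = (pixel_v - image_h / 2) / image_h * fov_v
    depression = cam_tilt + offset
    return cam_height / math.tan(depression)

def centering_turn_rate(pixel_u, image_w, k=1.5):
    """Rotational velocity that pushes the target toward the image centre."""
    error = (pixel_u - image_w / 2) / (image_w / 2)   # normalised to [-1, 1]
    return -k * error
```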

Visual Servoing

[Figure: Visual servoing process]

Image-Based Visual Servoing (IBVS)

[Figure: IBVS equation and ArUco marker feature points]

The image Jacobian for a point feature is given by:

\[ J_p(u,v,z) = \begin{pmatrix} -f/z & 0 & u/z & uv/f & -(f+u^2/f) & v\\ 0 & -f/z & v/z & f+v^2/f & -uv/f & -u \end{pmatrix} \]

The complete visual servoing control law:

\[ \begin{pmatrix} v_x\\v_y\\v_z\\w_x\\w_y\\w_z \end{pmatrix} = \begin{pmatrix} J_p(u_1,v_1,z)\\ \vdots\\ J_p(u_n,v_n,z) \end{pmatrix}^{+} \begin{pmatrix} \dot{u}_1\\ \dot{v}_1\\ \vdots\\ \dot{u}_n\\ \dot{v}_n \end{pmatrix} \]

Where:

  • \((u,v)\): Image coordinates of feature points
  • \(z\): Depth of the feature point
  • \(f\): Focal length of the camera
  • \((v_x, v_y, v_z)\): Linear velocity components
  • \((w_x, w_y, w_z)\): Angular velocity components
  • \((\dot{u}, \dot{v})\): Desired feature point velocities
  • \((\cdot)^{+}\): Moore-Penrose pseudo-inverse of the stacked Jacobian

The image Jacobian is built from feature-point coordinates extracted from ArUco markers attached to the object. The camera velocity command is obtained by multiplying the pseudo-inverse of the stacked image Jacobian with the pixel error, i.e., the difference between the target pixel coordinates and the current feature coordinates. Repeating this update until the pixel error falls below a threshold drives the camera to a pose from which the object can be grasped stably.
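
The control law translates directly into a few lines of NumPy: build one 2x6 Jacobian block per marker corner, stack them, and multiply the pseudo-inverse by the pixel error scaled by a gain. The sketch below is an illustration; the single shared depth estimate z and the gain value are simplifying assumptions.

```python
import numpy as np

def image_jacobian(u, v, z, f):
    """2x6 image Jacobian for one point feature (u, v) at depth z."""
    return np.array([
        [-f / z,    0.0, u / z,     u * v / f, -(f + u**2 / f),   v],
        [   0.0, -f / z, v / z, f + v**2 / f,      -u * v / f,   -u],
    ])

def ibvs_camera_twist(features, desired, z, f, gain=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) from current/desired features.

    features, desired: (n, 2) arrays of current and target pixel coordinates
    z: estimated depth of the feature points (single value assumed here)
    """
    J = np.vstack([image_jacobian(u, v, z, f) for u, v in features])
    error = (desired - features).reshape(-1)   # stacked pixel errors
    # Desired feature velocities drive the error to zero proportionally.
    return np.linalg.pinv(J) @ (gain * error)
```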

YouBot Platform

[Figure: YouBot description]

The KUKA YouBot features:

  • 5-DOF robotic arm with linear gripper
  • Omnidirectional base with four Mecanum wheels
  • Simulated in CoppeliaSim environment
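
The body-frame velocity commands produced by the path tracker and visual-servoing stages are mapped to the four wheel speeds with the standard Mecanum inverse kinematics; the wheel radius and base half-dimensions below are placeholder values, not the exact YouBot constants.

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.235, ly=0.15):
    """Wheel angular velocities (fl, fr, rl, rr) for a Mecanum base.

    vx, vy: body-frame linear velocity (m/s); wz: yaw rate (rad/s)
    r: wheel radius; lx, ly: half wheelbase / half track width (assumed)
    """
    k = lx + ly
    fl = (vx - vy - k * wz) / r   # front left
    fr = (vx + vy + k * wz) / r   # front right
    rl = (vx + vy - k * wz) / r   # rear left
    rr = (vx - vy + k * wz) / r   # rear right
    return fl, fr, rl, rr
```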