Grid: 8×8 | Plan Time: 60 s | Arm: 6-DOF myCobot600
Overview
An end-to-end robotic maze-solving system using the myCobot600 6-DOF collaborative arm. A camera-based perception pipeline converts a physical maze into an 8×8 occupancy grid using ArUco markers for homography correction. A* finds the optimal path, and inverse kinematics translates each grid waypoint to joint angles executed by the arm.
Problem Statement
Robotic manipulation in structured environments (such as pick-and-place in factory layouts) requires accurate workspace mapping and collision-free path planning. This project treats a physical maze as an abstraction of such environments: the robot must autonomously perceive the layout, plan a route, and execute it without colliding with the walls, all in a single uninterrupted run.
Perception — Maze to Occupancy Grid
Two ArUco markers (IDs 0 and 1) are placed at the top-left and top-right corners of the maze. OpenCV's ArUco detector localizes them in the camera frame. A homography transform is computed to warp the maze image to a top-down orthographic view. Adaptive thresholding then binarizes the warped image, and the 8×8 cell grid is overlaid to classify each cell as free or occupied.
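In the actual pipeline the warped binary image comes from OpenCV (ArUco marker detection, homography estimation with cv2.findHomography, cv2.warpPerspective, and adaptive thresholding). The final step, classifying the 8×8 cells as free or occupied, can be sketched with plain NumPy; the function name and the 50% wall-pixel threshold below are illustrative assumptions, not the project's own values:

```python
import numpy as np

def classify_cells(warped, n=8, occupied_thresh=0.5):
    """Split a top-down binary maze image (0 = wall pixel, 255 = free)
    into an n x n grid and mark a cell occupied when the fraction of
    wall pixels exceeds occupied_thresh.

    Returns an n x n integer array: 0 = free, 1 = occupied."""
    h, w = warped.shape
    ch, cw = h // n, w // n  # pixel size of each cell
    grid = np.zeros((n, n), dtype=int)
    for r in range(n):
        for c in range(n):
            cell = warped[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            wall_fraction = np.mean(cell == 0)
            grid[r, c] = 1 if wall_fraction > occupied_thresh else 0
    return grid
```

Thresholding on the wall-pixel fraction rather than a single sample pixel makes the classification tolerant of small warp misalignments at cell borders.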
Path Planning — A* Search
A* with a Manhattan-distance heuristic searches the 8×8 grid from the start cell to the goal cell, finding the optimal path in under 5 ms for every 64-cell configuration tested. The path is represented as a sequence of (row, col) waypoints, which are then mapped to physical coordinates in the maze frame using the inverse homography.
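The planner described above can be sketched as a textbook A* over the occupancy grid (0 = free, 1 = occupied) with 4-connected moves; the function and variable names are illustrative, not the project's own:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid with a Manhattan-distance heuristic.
    Returns the optimal list of (row, col) waypoints from start to goal,
    or None if the goal is unreachable."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {}
    g_score = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Reconstruct the path by walking parents back to the start.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > g_score.get(cell, float("inf")):
            continue  # stale heap entry superseded by a cheaper one
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Because the Manhattan heuristic never overestimates the remaining cost on a 4-connected grid, the first time the goal is popped the path is guaranteed optimal.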
Inverse Kinematics & Arm Execution
Each physical waypoint is transformed to the robot's base frame. The myCobot600 Python SDK's built-in IK solver computes the joint angles for each waypoint. The arm moves sequentially through the path, pausing at each waypoint to confirm position before advancing. The end effector traces a horizontal plane approximately 2 cm above the maze surface throughout the trajectory.
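The waypoint-to-base-frame step above can be sketched as a planar rigid transform (rotation plus translation in the maze plane) followed by a fixed hover height. All calibration constants here are illustrative assumptions, and the commented-out SDK call is hypothetical; the source does not name the actual myCobot600 API method:

```python
import math

# Assumed calibration values (illustrative, not from the project):
CELL_SIZE = 0.03              # maze cell pitch in metres
MAZE_ORIGIN = (0.20, -0.12)   # maze top-left corner in the base frame (m)
MAZE_YAW = math.radians(5.0)  # maze rotation about the base z-axis
HOVER_Z = 0.02                # end-effector height above the maze (m)

def waypoint_to_base(row, col):
    """Map a grid waypoint (row, col) to an (x, y, z) point in the robot
    base frame: cell centre in the maze frame, then a 2-D rotation and
    translation into the base frame, at a constant hover height."""
    # Cell centre in the maze frame (x along columns, y along rows).
    mx = (col + 0.5) * CELL_SIZE
    my = (row + 0.5) * CELL_SIZE
    # Planar rigid transform maze -> base.
    bx = MAZE_ORIGIN[0] + mx * math.cos(MAZE_YAW) - my * math.sin(MAZE_YAW)
    by = MAZE_ORIGIN[1] + mx * math.sin(MAZE_YAW) + my * math.cos(MAZE_YAW)
    return (bx, by, HOVER_Z)

# Hypothetical execution loop (the real SDK call may differ):
# for row, col in path:
#     x, y, z = waypoint_to_base(row, col)
#     arm.move_to_coords(x, y, z)  # SDK runs IK and blocks until reached
```

Keeping z constant across all waypoints is what produces the horizontal tracing plane about 2 cm above the maze surface.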
Results & Applicability
The system solves the maze from perception to execution in approximately 60 s total (5 s perception, <1 s planning, ~54 s arm motion). The solution is 100% correct for all maze configurations tested. The perception-to-plan-to-execute pipeline generalizes directly to structured pick-and-place tasks where item layout can be represented on a grid.
grid = occupancy_grid(frame, aruco_markers)
path = astar(grid, start, goal)
for waypoint in path:
    arm.move_to_waypoint(waypoint)
Tools & Stack
Python, OpenCV (ArUco module), A* search, myCobot600 Python SDK
Key Outcomes
100% correct maze solutions across all tested configurations
Full pipeline from camera frame to arm execution in 60 s
Robust homography correction using 2 ArUco reference markers
Generalizable to structured pick-and-place manufacturing tasks