
Overview
This project explores human-aware navigation for autonomous robots operating in environments shared with humans. By fusing RGB-D and 2D LIDAR data with a YOLO-based object detection model, the system identifies humans in real time, generates safety zones around them, and adjusts its path planning accordingly. See the final IEEE report and presentation for more detailed information about the project.
Methodology
This project addresses the challenge of enabling real-time human detection, creating adaptable safety zones, and optimizing robot behavior in dynamic environments. The robot navigation system was developed using Robot Operating System (ROS) and implemented on a Clearpath Jackal UGV equipped with an Intel RealSense Depth Camera D455 and a SICK LMS1xx LIDAR sensor. The framework incorporates SLAM for environment mapping, AMCL for localization, and A* for path planning. Object detection was performed using YOLOv3, which processed RGB-D data to identify humans and generate virtual safety zones that were integrated into the robot’s navigation stack.
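
As a rough illustration of how these pieces fit together, the sketch below shows one way such a node could be structured in rospy: synchronized RGB and depth frames from the D455 feed the detector, while the SICK scan is republished with the safety zones added for move_base to consume. The topic names and node layout are assumptions for illustration rather than the exact implementation used in the project.

```python
#!/usr/bin/env python
# Minimal node skeleton (topic names are assumptions based on typical
# RealSense D455 and SICK LMS1xx driver defaults).
import rospy
import message_filters
from sensor_msgs.msg import Image, LaserScan


class HumanAwareNode(object):
    def __init__(self):
        # Publisher for the LIDAR scan augmented with Human Safety Areas;
        # move_base's costmaps would be configured to read this topic
        # instead of the raw /scan.
        self.scan_pub = rospy.Publisher("/scan_hsa", LaserScan, queue_size=1)

        # Latest raw LIDAR scan from the SICK LMS1xx.
        self.last_scan = None
        rospy.Subscriber("/scan", LaserScan, self.scan_cb, queue_size=1)

        # Synchronize RGB and depth frames from the RealSense so each
        # detection can be paired with a depth value.
        rgb = message_filters.Subscriber("/camera/color/image_raw", Image)
        depth = message_filters.Subscriber(
            "/camera/aligned_depth_to_color/image_raw", Image)
        sync = message_filters.ApproximateTimeSynchronizer(
            [rgb, depth], queue_size=5, slop=0.1)
        sync.registerCallback(self.rgbd_cb)

    def scan_cb(self, scan):
        self.last_scan = scan

    def rgbd_cb(self, rgb_msg, depth_msg):
        # 1. Run YOLOv3 on rgb_msg, 2. look up depth for each detection,
        # 3. transform the 3D point into the laser frame, 4. write the
        # Human Safety Area into a copy of self.last_scan and publish it.
        # (Each step is sketched separately in the Implementation section.)
        pass


if __name__ == "__main__":
    rospy.init_node("human_aware_navigation")
    HumanAwareNode()
    rospy.spin()
```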


Implementation
- Object Detection with YOLOv3 – The algorithm detects humans in RGB images and estimates their 3D positions using depth data (see the detection sketch after this list).
- Transformation & Data Fusion – Detected positions are transformed into the robot’s coordinate frame using intrinsic and extrinsic camera parameters (second sketch below).
- Human Safety Area (HSA) Generation – Safety zones are created around detected humans and incorporated into the LIDAR scans, ensuring they are treated as obstacles in the navigation framework (third sketch below).
- Navigation & Path Planning – Using ROS move_base, the robot dynamically adjusts its trajectory to maintain safe distances while optimizing movement efficiency (final sketch below).
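
A minimal sketch of the detection step, assuming YOLOv3 is loaded through OpenCV's DNN module and that the depth image is aligned to the color image in millimetres; the function, intrinsic values, and file paths shown are placeholders rather than the project's actual code.

```python
# Detect people with YOLOv3 and back-project each detection to a 3D
# camera-frame position using the depth image and pinhole intrinsics.
import cv2
import numpy as np


def detect_humans_3d(bgr, depth_mm, fx, fy, cx, cy, net, conf_thresh=0.5):
    """Return a list of (X, Y, Z) camera-frame positions of detected people."""
    blob = cv2.dnn.blobFromImage(bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    h, w = bgr.shape[:2]
    positions = []
    for out in outputs:
        for det in out:                  # det = [x, y, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if class_id != 0:            # class 0 = "person" in COCO
                continue
            if scores[class_id] * det[4] < conf_thresh:
                continue
            # Bounding-box centre in pixel coordinates (clamped to the image).
            u = min(max(int(det[0] * w), 0), w - 1)
            v = min(max(int(det[1] * h), 0), h - 1)
            z = depth_mm[v, u] / 1000.0  # depth at the centre pixel, metres
            if z <= 0:
                continue                 # invalid depth reading
            # Pinhole back-projection using the camera intrinsics.
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            positions.append((x, y, z))
    return positions


# Example usage (file paths and intrinsics are illustrative placeholders):
# net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
# people = detect_humans_3d(rgb, depth, fx=382.0, fy=382.0, cx=320.0, cy=240.0, net=net)
```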
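The transformation step could be handled with tf2, which applies the extrinsic calibration published on /tf (for example by the Jackal's robot_state_publisher); the frame names below are assumptions based on common RealSense and Jackal conventions.

```python
# Transform a camera-frame detection into the robot's base frame with tf2.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped support for tf2
from geometry_msgs.msg import PointStamped


def camera_point_to_base(tf_buffer, x, y, z, stamp,
                         camera_frame="camera_color_optical_frame",
                         base_frame="base_link"):
    """Transform one detected human position into the robot frame."""
    pt = PointStamped()
    pt.header.frame_id = camera_frame
    pt.header.stamp = stamp
    pt.point.x, pt.point.y, pt.point.z = x, y, z
    # tf2 looks up the camera-to-base extrinsics from the /tf tree and
    # applies them to the point.
    return tf_buffer.transform(pt, base_frame, timeout=rospy.Duration(0.1))


# Typical setup inside a node:
# tf_buffer = tf2_ros.Buffer()
# tf2_ros.TransformListener(tf_buffer)
# base_pt = camera_point_to_base(tf_buffer, x, y, z, rospy.Time(0))
```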
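For the Human Safety Area step, one way to inject the zone into the scan is to clamp the LIDAR ranges around the detected person so the costmap sees a virtual obstacle; the 0.6 m radius here is illustrative, not the value used in the project.

```python
# Write a circular Human Safety Area into a copy of the raw LaserScan.
import math
import copy
from sensor_msgs.msg import LaserScan


def add_human_safety_area(scan, human_x, human_y, radius=0.6):
    """Return a new LaserScan with ranges clamped around a detected human.

    human_x, human_y are the person's position in the laser frame (metres).
    """
    out = copy.deepcopy(scan)
    out.ranges = list(out.ranges)        # ranges deserialize as a tuple
    dist = math.hypot(human_x, human_y)
    if dist <= radius:
        return out                       # person is on top of the sensor; skip
    bearing = math.atan2(human_y, human_x)
    # Angular half-width subtended by the safety circle at this distance.
    half_width = math.asin(radius / dist)
    obstacle_range = max(dist - radius, scan.range_min)

    for i in range(len(out.ranges)):
        angle = scan.angle_min + i * scan.angle_increment
        # Angle wrap-around is ignored here for brevity (the LMS1xx field of
        # view does not cross the +/-180 degree boundary).
        if abs(angle - bearing) <= half_width:
            # Report the edge of the safety zone if it is closer than the
            # real return on this beam.
            out.ranges[i] = min(out.ranges[i], obstacle_range)
    return out
```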
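Finally, a minimal sketch of commanding the robot through move_base with actionlib; once the augmented scan feeds the local and global costmaps, replanning around the safety zones is handled by move_base itself. The goal coordinates and the map frame are placeholders.

```python
# Send a navigation goal to move_base and wait for the result.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal


def send_goal(x, y, frame="map"):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face along +x of the goal frame

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()


if __name__ == "__main__":
    rospy.init_node("send_nav_goal")
    send_goal(2.0, 0.0)
```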
Results
While the system successfully detected humans and adjusted the robot's navigation trajectory accordingly, performance was hindered by computational constraints. Because YOLOv3 ran on the CPU inside a virtual machine, its low inference rate significantly limited the system's real-time responsiveness; running YOLOv3 on a dedicated GPU to improve frame rates would likely produce better results. Regardless, this project demonstrates the potential of human-aware navigation in autonomous systems, providing insight into both its capabilities and limitations. By integrating object detection, sensor fusion, and real-time path planning, the system offers a foundation for future improvements in human-robot interaction.
