
Lynceus: advanced vision and smart safety for autonomous robots

What is Lynceus and its primary objective

The name Lynceus originates from Greek mythology: the Argonaut prince renowned for extraordinary sight that reportedly allowed him to see through solid objects. Our advanced vision system for autonomous vehicles takes its inspiration from this exceptional capability.

Our Lynceus system is an autonomous driving assistance solution designed to complement existing safety systems. Engineered for integration with the SICK Visionary-B Two stereo camera, it can detect the presence of personnel within a defined perimeter.

Lynceus has a dual objective: enhancing operator safety and optimizing machine usability, particularly in environments characterized by high human activity.

While we designed Lynceus for installation on autonomous vehicles, its applications extend to other platforms. It enables machines to operate safely by identifying and signaling the presence of individuals within various designated work zones.

How it Works: Cameras, Sensors, and Detection Algorithms

The system operates with a stereo camera that provides simultaneous RGB imagery and depth measurements. We utilize the RGB feed to identify individuals, while the depth data allows us to map their precise spatial coordinates.
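The pairing of RGB detections with depth can be sketched with a standard pinhole back-projection: a pixel plus its depth value yields a 3D position in camera coordinates. The intrinsics below (`fx`, `fy`, `cx`, `cy`) are illustrative values, not the actual Visionary-B Two calibration.

```python
# Back-project a pixel (u, v) at a measured depth into 3D camera coordinates
# using a pinhole camera model. Intrinsics here are illustrative only.

def pixel_to_3d(u, v, depth_m, fx=700.0, fy=700.0, cx=640.0, cy=360.0):
    """Return (x, y, z) in metres for pixel (u, v) at depth `depth_m`."""
    x = (u - cx) * depth_m / fx  # lateral offset from the optical axis
    y = (v - cy) * depth_m / fy  # vertical offset from the optical axis
    z = depth_m                  # distance along the optical axis
    return (x, y, z)

# A person detected at the image centre, 4 m away, lies on the optical axis.
print(pixel_to_3d(640, 360, 4.0))  # (0.0, 0.0, 4.0)
```

In practice the pixel would be a point on the detection's bounding box (e.g. the bottom-centre, approximating the person's feet) and the depth the stereo disparity measurement at that pixel.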

To power these detections, we run a YOLO (You Only Look Once) neural network directly on an NVIDIA Jetson Orin module. By leveraging the module’s GPU, we ensure high-speed processing and low-latency inference.

Once we reconstruct the spatial positions, we determine if a detected person is within a specific monitoring zone and assign the appropriate hazard level based on their location.
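The zone test itself can be very simple. As a minimal sketch (the radii and circular zone shapes below are assumptions for illustration, not Lynceus's actual configuration), a person's ground-plane position is checked against a set of zones ordered from most to least critical:

```python
# Assign a hazard level to a detected person from their ground-plane
# position (x, z) in metres. Zone shapes and radii are illustrative.

ZONES = [            # ordered from most to least critical
    ("red", 2.0),    # within 2 m of the vehicle
    ("orange", 4.0), # within 4 m
    ("yellow", 6.0), # within 6 m
]

def hazard_level(x, z):
    """Return the hazard level of the innermost zone containing the point,
    or None if the point lies outside all monitoring zones."""
    dist = (x * x + z * z) ** 0.5
    for level, radius in ZONES:
        if dist <= radius:
            return level
    return None

print(hazard_level(0.5, 1.0))  # red
print(hazard_level(0.0, 5.0))  # yellow
```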

Through a graphical user interface (GUI), users can:

  • Define the geometry and dimensions of the monitoring areas.
  • Assign specific risk levels to each zone.
  • Visualize real-time data, including the count of detected individuals and their 3D reconstructions.

The interface is customizable, allowing for the display of additional telemetry and diagnostic information tailored to the specific operational environment.

The Graphical User Interface (GUI) is structured into three main panels

Panel 1 (RGB Image and Detection Results): Displays the RGB feed captured by the camera overlaid with inference results. Monitoring areas are clearly delineated, and the color assigned to detected individuals is determined by the hazard level of the zone they occupy. The workspace can be partitioned into escalating safety tiers:

  • Red: Maximum safety level (e.g., action: “immediate stop”).
  • Orange: Medium safety level (e.g., action: “reduce speed”).
  • Yellow: Low safety level (e.g., action: “proceed with caution”).
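With several people detected at once, the vehicle should respond to the most severe zone occupied. A sketch of that escalation logic, reusing the example actions above (the "no restriction" fallback is an assumption for illustration):

```python
# Pick the vehicle action for the most severe hazard level among all
# detected people. Action strings mirror the example tiers in the text.

SEVERITY = {"yellow": 1, "orange": 2, "red": 3}
ACTIONS = {
    "red": "immediate stop",
    "orange": "reduce speed",
    "yellow": "proceed with caution",
}

def vehicle_action(levels):
    """Return the action for the worst hazard level in `levels`;
    an empty or all-None list means no restriction applies."""
    active = [lvl for lvl in levels if lvl in SEVERITY]
    if not active:
        return "no restriction"
    worst = max(active, key=SEVERITY.get)
    return ACTIONS[worst]

print(vehicle_action(["yellow", "orange"]))  # reduce speed
print(vehicle_action([]))                    # no restriction
```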

Panel 2 (3D Rendering): Provides a three-dimensional visualization derived from the camera’s point cloud data. This view includes detected individuals, established monitoring zones, and a 3D model of the vehicle.

Panel 3 (Control Panel): Facilitates the management of monitoring zones, allowing users to activate or deactivate specific areas and assign their corresponding hazard levels.

Key challenges and technical hurdles encountered during development

The most significant challenge was making the system as intuitive as possible to configure and operate, balancing technical sophistication with ease of use. We prioritized usability throughout the design process so the system remains fully functional for users without specialized training in monitoring systems.

Currently, the monitoring area can be defined via a single click on the main interface (Panel 1). This direct approach eliminates the need for manual coordinate entry or calculations, streamlining the setup process and minimizing the potential for configuration errors.
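One way single-click zone definition can work (a sketch under assumed behaviour, not the actual Lynceus implementation): read the depth at the clicked pixel, back-project it to a ground point, and build a zone of default size around that point. The intrinsics and default radius are illustrative values.

```python
# Turn a click at pixel (u, v), with the stereo depth sampled at that pixel,
# into a circular monitoring zone on the ground plane. A hypothetical sketch;
# intrinsics and the 2 m default radius are illustrative assumptions.

def zone_from_click(u, v, depth_m, radius_m=2.0, fx=700.0, cx=640.0):
    """Return a circular zone as {'centre': (x, z), 'radius': r} in metres."""
    x = (u - cx) * depth_m / fx  # lateral offset from the optical axis
    z = depth_m                  # distance along the optical axis
    return {"centre": (x, z), "radius": radius_m}

print(zone_from_click(640, 360, 5.0))  # {'centre': (0.0, 5.0), 'radius': 2.0}
```

The user never types a coordinate; the geometry is derived entirely from where they click, which is what removes manual entry as a source of configuration errors.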

Potential Application Scenarios for Lynceus

Although the system is versatile, its primary application lies in the agricultural sector and in heavy-machinery environments characterized by significant blind spots or limited operator visibility. Integrated into both autonomous and manually operated tractors, the device enhances safety standards by continuously monitoring the workspace. It ensures that personnel and unauthorized individuals remain outside hazardous zones and the vehicle’s path, preventing serious incidents during reversing or when working near active implements.

Furthermore, the system serves as a driver assistant. By providing an extended and detailed perspective of the environment, it compensates for the vehicle’s physical constraints and eliminates blind spots. This capability is essential for complex maneuvers, navigating narrow passages, or operating near field boundaries and obstacles—increasing efficiency while reducing the risk of structural or mechanical damage.

The primary fields of application for this technology include:

  • Agriculture: Autonomous and traditional tractors, harvesters.
  • Construction: Excavators, mechanical loaders, and earthmoving equipment.
  • Mining: Large-scale dump trucks and specialized extraction machinery.

Broadly, Lynceus is a valuable asset in any industrial context where operators must maneuver large-scale equipment in proximity to personnel or infrastructure.  
