Mobileye 560/660

driver_name: mobileye_560_660
msgs_name: mobileye_560_660_msgs

This driver reads and parses CAN data from a Mobileye 560/630/660 running the Extended Logging Firmware (provided by AutonomouStuff). The following data are available:

  • Several forms of Lane Model information
  • Additional Lanes (road edge, next lane, etc.)
  • Detected and Classified Objects (pedestrian, bicyclist, motorcycle, car, truck)
  • Traffic Signs (speed limit, construction zone, school zone, etc.)

Supported Hardware

  • Mobileye 560 w/Extended Firmware
  • Mobileye 630 w/Extended Firmware (without EyeWatch)
  • Mobileye 660 w/Extended Firmware (with EyeWatch)

Published Topics

can_rx (can_msgs/Frame)
    All data published on this topic is intended to be sent to the sensor via a CAN interface.

parsed_tx/aftermarket_lane (mobileye_560_660_msgs/AftermarketLane)
    Lane data for detected lanes, including confidence value, offset distances, and lane type.

parsed_tx/ahbc (mobileye_560_660_msgs/Ahbc)
    Information about the Automatic High-Beam Control decision.

parsed_tx/ahbc_gradual (mobileye_560_660_msgs/AhbcGradual)
    Detailed information about the Automatic High-Beam Control decision model, including number of cars, glare, and target tracking.

parsed_tx/aws_display (mobileye_560_660_msgs/AwsDisplay)
    Data sent to the EyeWatch display.

parsed_tx/fixed_foe (mobileye_560_660_msgs/FixedFoe)
    Information about the inferred horizon and vehicle yaw.

parsed_tx/lane (mobileye_560_660_msgs/Lane)
    Basic information about detected lanes, including curvature, heading, pitch angle, and yaw angle.

parsed_tx/left_lka_lane (mobileye_560_660_msgs/LkaLane)
    Extended information about the left lane, including type, quality, marker width, and the parameters used to generate a 3rd-degree polynomial model.

parsed_tx/right_lka_lane (mobileye_560_660_msgs/LkaLane)
    Extended information about the right lane, including type, quality, marker width, and the parameters used to generate a 3rd-degree polynomial model.

parsed_tx/next_lka_lanes (mobileye_560_660_msgs/LkaLane)
    Extended information about additional lanes, including type, quality, marker width, and the parameters used to generate a 3rd-degree polynomial model.

parsed_tx/lka_num_of_next_lane_markers_reported (mobileye_560_660_msgs/LkaNumOfNextLaneMarkersReported)
    The number of "next lane" markers reported.

parsed_tx/lka_reference_points (mobileye_560_660_msgs/LkaReferencePoints)
    Position information about the reference points used to generate the polynomial model.

parsed_tx/obstacle_data (mobileye_560_660_msgs/ObstacleData)
    Information about each detected obstacle, including classification, position, width, age, CIPV status, and relative velocity.

parsed_tx/obstacle_status (mobileye_560_660_msgs/ObstacleStatus)
    Information about the number of obstacles detected and how they might affect the behavior of the driver.

parsed_tx/tsr (mobileye_560_660_msgs/Tsr)
    Extended information about each detected traffic sign.

parsed_tx/tsr_vision_only (mobileye_560_660_msgs/TsrVisionOnly)
    Information about the currently detected signs.

as_tx/lane_markers (visualization_msgs/Marker)
    Visualization information about the detected lanes (intended for use in RViz).

as_tx/lane_models (perception_msgs/LaneModels)
    Combined model data from multiple types of lane messages pertaining to the closest left and right lanes.

as_tx/object_markers (visualization_msgs/Marker)
    Visualization information about the detected objects (intended for use in RViz).

as_tx/objects (perception_msgs/ObjectWithCovariance)
    Combined, detailed physical information about detected objects.
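As a minimal sketch of consuming these topics, the rospy subscriber below watches the parsed obstacle topic. The field names used (obstacle_id, obstacle_type, obstacle_pos_x, obstacle_pos_y) are assumptions about the ObstacleData definition; check `rosmsg show mobileye_560_660_msgs/ObstacleData` for the actual layout.

```python
#!/usr/bin/env python
# Sketch: log each obstacle reported by the driver.
# Field names below are assumed; verify against the ObstacleData message definition.
import rospy
from mobileye_560_660_msgs.msg import ObstacleData


def on_obstacle(msg):
    rospy.loginfo("obstacle %d: type=%d at (%.1f, %.1f) m",
                  msg.obstacle_id, msg.obstacle_type,
                  msg.obstacle_pos_x, msg.obstacle_pos_y)


if __name__ == '__main__':
    rospy.init_node('mobileye_obstacle_listener')
    rospy.Subscriber('parsed_tx/obstacle_data', ObstacleData, on_obstacle)
    rospy.spin()
```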

Subscribed Topics

can_tx (can_msgs/Frame)
    All data published to this topic will be parsed by the driver. This should be connected to a CAN interface.
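In a typical setup, can_tx and can_rx are connected to a CAN hardware driver (for example by running both nodes in the same namespace or remapping the topics). The sketch below instead hand-publishes a raw can_msgs/Frame on can_tx, e.g. when replaying logged data; the arbitration ID and payload are placeholders, not real Mobileye message IDs.

```python
#!/usr/bin/env python
# Sketch: inject a raw CAN frame on the driver's subscribed topic.
# The ID and payload are placeholders for illustration only.
import rospy
from can_msgs.msg import Frame

if __name__ == '__main__':
    rospy.init_node('can_frame_injector')
    pub = rospy.Publisher('can_tx', Frame, queue_size=10)
    rospy.sleep(0.5)  # give the subscriber time to connect

    frame = Frame()
    frame.header.stamp = rospy.Time.now()
    frame.id = 0x700            # placeholder arbitration ID
    frame.is_extended = False
    frame.dlc = 8
    frame.data = b'\x00' * 8    # 8-byte payload (placeholder)
    pub.publish(frame)
```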

Parameters

~sensor_frame_id
    The id of the frame of reference of the sensor. This will be attached to all published messages except the lane and object visualization markers.

~viz_frame_id
    The id of the frame of reference of visualized objects and lanes. This will be attached to messages which are intended for visualization in RViz. The intent is that this frame is the same as the frame of reference of the Mobileye but with an inverted Z value. This causes the visualized objects to be level with the ground (Z=0) rather than floating at the height of the Mobileye above the ground.
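As an illustration of the ~viz_frame_id convention, the sketch below publishes a static transform that places an assumed visualization frame at ground level directly below the sensor frame. The frame names and the 1.2 m mounting height are assumptions for illustration, not values supplied by the driver.

```python
#!/usr/bin/env python
# Sketch: publish a static transform so the visualization frame sits at ground
# level below the sensor, matching the intent described for ~viz_frame_id.
# Frame names and mounting height are assumptions.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == '__main__':
    rospy.init_node('mobileye_viz_frame_broadcaster')

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'mobileye'      # assumed ~sensor_frame_id
    t.child_frame_id = 'mobileye_viz'   # assumed ~viz_frame_id
    t.transform.translation.z = -1.2    # sensor mounted ~1.2 m above ground (assumption)
    t.transform.rotation.w = 1.0        # identity rotation

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform(t)
    rospy.spin()
```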