Frequently Asked Questions
Why can I not connect to my MultiSense?
Ensure your camera is powered on and connected to a network interface on your computer. If your network interface has link lights, check whether the lights indicate network activity
Ensure your network interface is configured with a valid IP address on the MultiSense subnet
If you are still unable to connect to the MultiSense, try factory resetting the MultiSense IP address to 10.66.171.21
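As a sketch of the host-side network configuration, assuming a Linux host and a wired interface named eth0 (substitute your actual interface name, which `ip link` will list):

```shell
# Assign a free host address on the MultiSense factory subnet.
# 10.66.171.20 is an example; any unused address on 10.66.171.0/24
# other than the camera's own 10.66.171.21 will work.
sudo ip addr add 10.66.171.20/24 dev eth0

# Verify the camera responds at its factory-default address.
ping -c 3 10.66.171.21
```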
How do I keep the lenses clean?
Lenses are relatively easy to clean. Washing the camera with water is effective for dust and dirt. Gentle solvents like isopropyl alcohol are good for removing greases and other sticky substances. If there is hard dust or road salt on the lenses, it is better to rinse first and wipe afterward. Although just about any soft cloth will do, lint-free cloths such as Kimtech Kimwipes have the advantage of not leaving fibers behind after cleaning.
Why are most of the pixels in my disparity image zero?
There are several issues that may result in invalid disparity images. These include:
Poor scene texture. Scenes that contain uniform color and texture (e.g., flat white walls) do not provide enough visual information to match image patches between the left rectified and right rectified images
One possible mitigation is to add visual texture to the scene using a lighting source like the infrared pattern projector included on the KS21i
A physical occlusion of either the left or right lens. Any covering or foreign object debris (FOD) which exists on either the left or right camera lens will prevent the MultiSense from computing valid visual matches
To verify that neither lens is occluded, ensure that both the left rectified and right rectified images contain mostly the same scene information
A physical shift in the camera intrinsics or extrinsics resulting from a physical impact or continual operation in extreme environments
Send an inquiry to the support portal for an in-field calibration solution
Why are there black regions at the top of Rectified Images?
The black region is an artifact of the stereo rectification process which removes the barrel distortion from the lens. In general this does not impact stereo performance, and most likely will not adversely impact downstream algorithms.
There are a few options to handle this if it is causing issues for your system:
Use LibMultiSense’s RectifiedFocalLengthUtility to increase the rectified focal length of your camera. This will effectively zoom in your camera, eliminating the black region by decreasing the effective vertical and horizontal FOV.
Use the calibration to create a mask to apply to the image. You can do this by attempting to rectify a white image using the associated camera calibration. Any non-white pixel in the output image should be considered invalid and used as a mask for feature detection.
You can manually adjust the cy parameter to shift the image center in the stereo pair. To do this, query the calibration using the LibMultiSense ImageCalUtility, modify the cy value in the 3 extrinsic P matrices (cy is the 6th element of the serialized P matrix), and re-upload the modified calibration using the ImageCalUtility. Take great care if you choose to do this! We recommend saving a backup of the calibration before attempting this operation. Additionally, if you try this, you must change cy to the same value in all 3 P matrices. The camera calibration is a critical component of the camera's operation, and manual modifications to it are risky.
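The cy edit itself can be sketched as follows. This assumes the three P matrices have been exported as row-major 3x4 arrays serialized to 12 values, with cy at zero-based index 6; the numeric values below are hypothetical, and the actual query/upload steps still go through ImageCalUtility:

```python
# Each P matrix is a row-major 3x4 projection matrix serialized as:
# [fx, 0, cx, tx,  0, fy, cy, ty,  0, 0, 1, 0]
# so cy sits at zero-based index 6.
CY_INDEX = 6

def set_cy(p_matrices, new_cy):
    """Set cy to the same value in every serialized P matrix."""
    for p in p_matrices:
        p[CY_INDEX] = new_cy
    return p_matrices

# Hypothetical left/right/aux P matrices (values are illustrative only).
left  = [600.0, 0.0, 512.0,   0.0, 0.0, 600.0, 272.0, 0.0, 0.0, 0.0, 1.0, 0.0]
right = [600.0, 0.0, 512.0, -42.0, 0.0, 600.0, 272.0, 0.0, 0.0, 0.0, 1.0, 0.0]
aux   = [600.0, 0.0, 512.0,  12.0, 0.0, 600.0, 272.0, 0.0, 0.0, 0.0, 1.0, 0.0]

# All 3 matrices must receive the same cy value.
set_cy([left, right, aux], 260.0)
print(left[CY_INDEX], right[CY_INDEX], aux[CY_INDEX])  # -> 260.0 260.0 260.0
```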
Why are there green regions at the top of Aux Rectified Color Images?
The green regions at the top of rectified color images are a result of the same stereo rectification process outlined in the black regions section. The regions are green because of the conversion between planar YCbCr420 and RGB images: when both the luma and chroma values are 0, the converted R and B pixel values clamp to 0 while the G value remains positive.
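The effect can be reproduced with the standard BT.601 full-range conversion. The exact coefficients the camera pipeline uses may differ slightly, but the clamping behavior is the same:

```python
def ycbcr_to_rgb(y, cb, cr):
    """BT.601 full-range YCbCr -> RGB, clamped to [0, 255]."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

# Zero-filled pixels decode as Y=0, Cb=0, Cr=0: R and B clamp to 0
# while G stays positive, so the region renders green.
print(ycbcr_to_rgb(0, 0, 0))  # -> (0, 135, 0)
```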
The same mitigation steps outlined in the black regions section apply to eliminating this issue.
LibMultiSense
Why am I getting the exception “No free RX buffers”?
Internally, LibMultiSense has 16 large buffers which it uses to stage incoming camera data before it is dispatched to user-defined isolated callbacks. This exception is thrown when all of those buffers are reserved by the application or in use by active user-defined callbacks. It generally occurs under the following conditions:
You have a large number of callbacks attached to the Channel instance, and each callback is doing a substantial amount of work, or is blocked waiting on an external condition.
Your machine cannot keep up with processing data at the rate the MultiSense sends it.
There is a bug in your image reservation/release logic which is causing some buffers to be reserved even when they are no longer needed.
This error is not fatal, and an occasional occurrence is fine. If you are seeing large numbers of these errors being printed, it probably means there is an issue with your software.
If this error occurs only occasionally, there are two ways to mitigate it by supplying more large buffers to LibMultiSense:
You can recompile LibMultiSense with a larger static buffer pool
You can supply a custom number of large buffers to LibMultiSense via the setLargeBuffers interface.
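The failure mode can be illustrated with a toy pool. This is a conceptual sketch only and reflects nothing about LibMultiSense internals beyond the fixed pool size:

```python
class BufferPool:
    """Fixed-size pool; reserving from an empty pool raises an error."""
    def __init__(self, count):
        self.free = list(range(count))
        self.reserved = []

    def reserve(self):
        if not self.free:
            raise RuntimeError("No free RX buffers")
        buf = self.free.pop()
        self.reserved.append(buf)
        return buf

    def release(self, buf):
        self.reserved.remove(buf)
        self.free.append(buf)

pool = BufferPool(16)
# Slow or blocked callbacks hold on to every buffer...
held = [pool.reserve() for _ in range(16)]
try:
    pool.reserve()  # ...so the next incoming image has nowhere to go.
except RuntimeError as e:
    print(e)  # -> No free RX buffers

# Releasing a buffer (callback returns / user releases) recovers the pool.
pool.release(held.pop())
pool.reserve()
```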
ROS1
Why am I seeing the error “Error: MultiSense firmware version does not support the updated aux camera controls”?
This error indicates the MultiSense is running camera firmware older than v5.32. Please see the firmware upgrade instructions to update your camera to the latest MultiSense firmware version
Why is the right camera info topic not published?
The multisense_ros driver leverages an on-demand publisher model, and only publishes topics when an active subscription is established.
Camera info topics are published alongside their corresponding image topics. To enable publishing of a camera_info topic, an active subscription to the corresponding image topic must be established.
Why is the camera reporting “failed to set sensor MTU”?
This is most likely the result of an incorrect network configuration on the host machine connected to the MultiSense. For more detail, please reference the documentation on MTU configuration and host-specific network configuration.
If your network interface does not support larger MTU values, consider explicitly launching the ROS driver with an MTU of 1500.
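For example, assuming the stock multisense_ros launch file and its mtu argument:

```shell
# Launch the ROS1 driver with a 1500-byte MTU (assumes the standard
# multisense_ros launch file exposes an mtu argument).
roslaunch multisense_ros multisense.launch mtu:=1500
```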
Why does the multisense_ros driver fail to build on NVIDIA Jetson platforms?
This build failure is related to having both OpenCV 4.5.4-8 and OpenCV 4.2.0 installed on the Jetson’s JetPack operating system. ROS pulls in the headers for OpenCV 4.5, which define the Mat constructors inline, while linking against the OpenCV 4.2 libraries, whose inline constructor definitions differ. The best path forward is to uninstall one of the conflicting OpenCV versions.
ROS2
Why are my topics not being published at the FPS I set?
This is typically an issue with the default fastrtps RMW used by ROS2. Consider trying another RMW, such as cyclone_dds, for better results.
Also verify that the connection between the host machine and the MultiSense is not damaged, the host machine's network interface is configured properly, and the network interface is negotiating 1 Gbps of available bandwidth.