LIPSedge™ AE Series – General #
How can I change the IP address of the sensor? Is there a link or a guide I can follow to set everything up? #
Kindly refer to p. 57, C. Camera Web Interface, of the LIPSedge™ AE400 / AE450 Ruggedized 3D Stereo Camera User's Manual.
Visit: https://dev.lips-hci.com/documents-installation-and-setup/lipsedge-ae400/user-manual
What is the recommended network speed for LIPSedge™ AE Series? #
For maximum performance, the recommended network speed for the LIPSedge™ AE Series is 1000 Mbps.
My NIViewer shows the "Cannot find the camera" message even though I can ping the camera and the IP address setting in network.json is correct. #
In that scenario, check the following items:
1. Make sure no other equipment in the same network segment is using the same IP address as the camera.
2. Connect the camera directly to the computer. Do NOT connect through a hub or switch.
3. Configure the camera's IP address through the camera's web interface and reboot the camera.
How to set up the camera's IP address? #
Kindly refer to p. 58, C. Camera Web Interface, of the LIPSedge™ AE400 / AE450 Ruggedized 3D Stereo Camera User's Manual.
Visit: https://dev.lips-hci.com/documents-installation-and-setup/lipsedge-ae400/user-manual
When I activate the camera with NIViewer, the camera streams initially, but after 10 seconds the streaming stops and a "The camera is disconnected" message appears. #
In that scenario, kindly check the following hardware settings:
1. PoE supplier (it must support at least IEEE 802.3at/802.3af and 1000 Mbps Ethernet)
2. Ethernet cable (We recommend using CAT-6 cable, or cable with equivalent quality.)
3. Make sure all cables are plugged properly.
4. Replace Ethernet cables and plugs with poor connections.
What is the camera's power consumption, and what spec/protocol must the PoE supplier follow? #
The power consumption of a LIPSedge™ AE series camera is about 10 W. As for the protocol, both IEEE 802.3at and 802.3af are supported. For your reference, see the PoE image here: https://fbox.lips-hci.com/s/LXeNWzdCjxeeXXs.
What do the “Off / Laser / Laser Auto” under the “Emitter Enabled” option mean? #
This option adjusts the depth calculation method of the structured-light camera, which projects a laser pattern onto the measured object to compute depth.
1. Off: the laser projection is turned off and depth is calculated using natural light; because no patterns are projected onto the surface of the measured objects, some depth information in the image is lost.
2. Laser: patterns are projected onto the surface of the measured objects, which increases the accuracy of the depth information in the image.
3. Laser Auto: the laser intensity is automatically adjusted according to the intensity of external light sources.
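For AE-series cameras accessed through the librealsense-compatible SDK (see the ROS / ROS2 topics below), the emitter mode can also be set programmatically. Below is a minimal sketch with pyrealsense2, assuming the librealsense option convention (0 = off, 1 = laser, 2 = laser auto):
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# librealsense convention (assumed): 0 = off, 1 = laser, 2 = laser auto
if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 2)

pipeline.stop()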
LIPSedge™ AE400 #
Can I customize the camera resolution in Python? #
The LIPSedge™ AE400's depth stream only supports up to 1280×720 @ 30 fps and 640×480 @ 60 fps.
If a special resolution is needed, you may capture the image and then resize it accordingly.
Python example:
depth_colormap = cv2.resize(depth_colormap, (1920, 680), interpolation=cv2.INTER_LINEAR)
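For context, below is a fuller sketch of the capture-then-resize flow. It assumes the openni (primesense) Python wrapper and OpenCV are installed; the target size (1920, 680) is taken from the example above:
import cv2
import numpy as np
from openni import openni2

openni2.initialize()  # pass the SDK's Redist folder here if OpenNI2 is not found automatically
dev = openni2.Device.open_any()
depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16).reshape(frame.height, frame.width)

# Resize after capture; a non-proportional target size will distort the aspect ratio
resized = cv2.resize(depth, (1920, 680), interpolation=cv2.INTER_LINEAR)

depth_stream.stop()
openni2.unload()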
The latest version of LIPSedge™ SDK doesn’t work on camera firmware version 5.12 #
In that case, upgrade the camera firmware using AE400-Toolkit. For details, refer to LIPSedge™ AE400 / AE450 Ruggedized 3D Stereo Camera User’s Manual.
LIPSedge™ AE430 #
Python code for setting up multiple cameras simultaneously is needed. #
Kindly refer to: https://fbox.lips-hci.com/s/oY54xcnkqgzPRoa
How to adjust camera parameters through LIPSedge™ SDK? #
The feature is supported in versions later than v2.4.4.3_1.3.6, which allow developers to directly change the parameters in {SDK PATH}/OpenNI2/Driver/LIPSedge-AE430_AE470.json.
LIPSedge™ AE450 #
Is it possible to synchronize multiple cameras simultaneously? #
Refer to the image and connect each camera via pin 7 and pin 8. LIPS Corp. also provides waterproof wires for customers if needed. Image: https://fbox.lips-hci.com/s/tR8RyrCF8rmH7Bt
LIPSedge™ AE470 #
How to maintain a 2% accuracy at a working distance of 20 cm for LIPSedge™ AE470? #
To achieve this, adjust the camera resolution or the disparity shift parameter. For details, refer to LIPSedge™ AE430 / AE470 Ruggedized Stereo Camera User’s Manual.
LIPSedge™ AT #
How to overcome the fps constraint caused by Ethernet bandwidth for LIPSedge™ AT? #
The Ethernet module in LIPSedge™ AT only supports up to 100 Mbps. For LIPSedge™ AT, streaming beyond 30 fps is therefore only available for the RGB camera or the depth camera alone.
A "no calibration data" message appears when Ni2TCPClient.exe is terminated. #
Ignore the message, as the calibration data has already been flashed into the camera.
Where can I get Ni2TCPClient.exe? #
You may download Ni2TCPClient.exe at https://fbox.lips-hci.com/s/wjsfewbnJEP7qWy, or visit LIPS Corp.'s GitHub for its source code.
How to modify the IP Address of LIPSedge™ AT? #
To achieve this, connect the USB cable, then use the Tera Term application to modify the IP address of LIPSedge™ AT.
LIPSedge™ DL #
How can I modify certain depth module configurations, for instance the infrared rate? #
To achieve this, modify and save the settings in OpenNI2/Driver/ModuleConfig.json.
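If the change needs to be scripted, below is a minimal sketch. The "ir_rate" key is hypothetical; check the shipped ModuleConfig.json for the actual key names:
import json

path = "OpenNI2/Driver/ModuleConfig.json"
with open(path) as f:
    cfg = json.load(f)

cfg["ir_rate"] = 30  # hypothetical key name; use the keys found in the shipped file

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)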
How can I adjust the integral time? #
To adjust the integral time, refer to OpenNI2/Driver/ModuleConfig.json.
Note: Calibration is required after the adjustment.
General Topics #
Is there a way to confirm the depth map with a 3D display? #
LIPSedge™ SDK provides a point cloud viewer in the “Samples” folder, which displays 3D images.
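If the sample is not at hand, a depth frame can also be back-projected into a 3D point cloud with the standard pinhole model. In the sketch below, the intrinsics and the depth frame are placeholders; the actual intrinsics can be read with CameraMatrix.exe (see below):
import numpy as np

# Placeholder intrinsics; substitute the values reported by CameraMatrix.exe
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
depth = np.zeros((480, 640), dtype=np.uint16)  # stand-in for a captured depth frame (mm)

u, v = np.meshgrid(np.arange(depth.shape[1]), np.arange(depth.shape[0]))
z = depth.astype(np.float32)
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # N x 3 points in camera coordinates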
Is it possible to configure the color range according to the distance on OpenNI in Python? #
To achieve this, configure the color range and apply it using cv2.applyColorMap, as in the example below.
Python example:
import cv2
import numpy as np

# Trim depth_array to the configured distance range
out_of_range = depth_array > max_distance
too_close_range = depth_array < min_distance
depth_array[out_of_range] = max_distance
depth_array[too_close_range] = min_distance

# Scale the depth array to 0-255
depth_scale_factor = 255.0 / (max_distance - min_distance)
depth_scale_offset = -(min_distance * depth_scale_factor)
depth_array_norm = depth_array * depth_scale_factor + depth_scale_offset

# Apply a colormap to the depth image (the image must be converted to 8-bit per pixel first)
depth_colormap = cv2.applyColorMap(depth_array_norm.astype(np.uint8), cv2.COLORMAP_JET)
How to access point cloud images by using OpenNI2 SDK or API? #
To achieve this, refer to https://lips-hci.gitbook.io/lips-developer-documentation/lipsedge-tm-sdk-languages-and-libraries
A captured RGB / depth image is saved as a .raw file in NIViewer. How to access the .raw file? #
To achieve this, use the ImageJ application or RawViewer.exe in the LIPSedge™ SDK.
Where is the point cloud exported by the Ni2PointCloud-gl.exe saved? #
By default, the exported .ply file is saved alongside the application (.exe).
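To inspect the exported file programmatically, one option is the third-party Open3D package (an assumption; it is not part of the LIPSedge™ SDK):
import open3d as o3d

# Replace "out.ply" with the name of the file exported next to the .exe
pcd = o3d.io.read_point_cloud("out.ply")
o3d.visualization.draw_geometries([pcd])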
How to access RGB / depth image with C++? #
The link below covers the sample code with simple instructions (see the README):
https://github.com/lips-hci/LIPSedge-sdk-samples/tree/main
These sample codes are also attached in the LIPSedge™ SDK (in the LIPSedgeSamples folder).
To access the RGB / depth image simultaneously, refer to “Ni2SimpleViewer-gl”.
How to utilize LIPSedge™ camera with Python? #
To achieve this, follow LIPS Corp.'s GitHub README to set up the environment:
https://github.com/lips-hci/LIPSedge-sdk-wrappers/tree/main/python3
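Once the environment is set up, a minimal smoke test might look like the sketch below (assuming the openni wrapper from the README). It reads one depth frame and prints the center-pixel depth, similar to Ni2CenterRead:
import numpy as np
from openni import openni2

openni2.initialize()  # pass the SDK's Redist folder here if OpenNI2 is not found automatically
dev = openni2.Device.open_any()
stream = dev.create_depth_stream()
stream.start()

frame = stream.read_frame()
depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16).reshape(frame.height, frame.width)
print("center depth (mm):", depth[frame.height // 2, frame.width // 2])

stream.stop()
openni2.unload()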
How to access the camera intrinsics? #
To achieve this, refer to CameraMatrix.exe.
How to switch between the RGB / depth registration function in NIViewer? #
First, activate the Registration function in NIViewer:
1. Move the cursor over the video feed
2. Right-click the console window
3. Select Device > Registration > Depth → Image
To use the Registration function in C++:
Refer to the "Registration" section of the point cloud sample code.
https://github.com/lips-hci/LIPSedge-sdk-samples/tree/main/Ni2PointCloud-gl
How to turn off the "FPS" display when using Ni2SimpleViewer-cv.exe? #
In Ni2SimpleViewer-cv, pressing 'i' toggles the FPS display.
How does the color / depth superposition work? Is it a Depth to Color or Color to Depth transformation, or both? #
The registration method LIPS Corp.'s products utilize is depth to color. Kindly refer to the point cloud example program in the LIPSedge™ SDK.
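If registration needs to be enabled from code rather than from NIViewer, the OpenNI2 Python wrapper exposes it as in the sketch below (assuming the openni wrapper; the constant follows the OpenNI2 API):
from openni import openni2

openni2.initialize()
dev = openni2.Device.open_any()
# Map the depth stream into the color camera's frame of reference (depth to color)
dev.set_image_registration_mode(openni2.IMAGE_REGISTRATION_DEPTH_TO_COLOR)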
Does LIPS Corp. provide any type of camera SDK for C#? #
LIPSedge™ SDK currently supports C#. Kindly refer to the C# example: https://github.com/lips-hci/LIPSedge-sdk-wrappers/tree/main/csharp
Which of the LIPS Corp. example programs run on LIPS Corp.’s cameras? #
The following examples run on LIPS Corp.’s cameras:
Ni2CenterRead: To get the depth data of the center pixel.
Ni2PointCloud-gl: To visualize the point cloud.
Ni2RawViewer-gl: To view raw images previously captured from NIViewer.
Ni2SimpleViewer-cv: To visualize both the depth image and the color image using the OpenCV library.
Ni2SimpleViewer-gl: To visualize both the depth image and the color image.
Halcon Related Topics #
How to distinguish between the intensity and infrared frames in Halcon? #
To achieve this, refer to TLRemoteDeviceNodemapExtLIPS.xml and select each object.
I have problems detecting and connecting to LIPSedge™ AE400 through GenTL. #
In that case, make sure that "C:\Program Files\OpenNI2\Redist" has been added to the system variable $PATH.
What kind of camera parameters can I use in Halcon? #
Refer to the Nodemap (.xml) in LIPS Corp.’s GenTL driver package.
We can access LIPS Corp.'s camera functions in Halcon as intended. But what is the recommended way to utilize C# via halcondotnet.dll when building a .NET Framework WinForms application? #
Here is a demo solution file (https://fbox.lips-hci.com/s/LygKe2jE9GJ7wFi) for your reference. LIPS Corp. built a demo application with two buttons: the first button activates the camera and starts RGB streaming, while the second button releases the camera resource and closes the picture window.
Open HalconWinForms.zip and update the following information for your device:
1. .NET SDK version
2. MVTec.HalconDotNet version
3. Camera serial number
I have an issue measuring the snap-image time with LIPSedge™ AE430 in Halcon; the image acquisition time becomes unstable (ranging from 0.3 to 2 s). #
LIPS Corp. tested the script and discovered the problem was caused by the use of "grab_data" when the recommended function is "grab_data_async". The "grab_data" function is typically used when a quick, one-off image capture is expected and no parallel processing is needed during image acquisition. If performance is crucial and a real-time application is desired, "grab_data_async" is the better choice.
The camera freezes when the image resolution is modified to 1080 × 720. After the freeze, a timeout error appears which prevents us from accessing the camera image. #
In that case, refer to AE400_show_rgb_depth.hdev.
Is there a way to modify the exposure time parameter and post-processing related parameters in applications such as the rs-depth-quality.exe in Halcon? #
To achieve this, set the following parameters in the script:
set_framegrabber_param (AcqHandle, 'auto_exposure', 0)
set_framegrabber_param (AcqHandle, 'exposure', 10.0)
We can detect / access the LIPS Corp.’s camera in the NIViewer but not in Halcon with GenTL interface. #
To fix this issue, make sure that:
1. The LIPS3DGenTL.cti file has been placed in the folder referenced by the variable $GENICAM_GENTL64_PATH.
2. The folder "OpenNI2" from the SDK path has been copied into C:\Program Files.
3. "C:\Program Files\OpenNI2\Redist" has been added to the system variable $PATH.
Clicking the snap / live function in MVTec Halcon causes an unusually long waiting time, after which a timeout error appears. #
To fix this issue, make sure that no other application is occupying the camera resource.
LIPSedge™ L210 / L215 #
Clicking the snap / live function in MVTec Halcon causes an unusually long waiting time, after which a timeout error appears. #
To fix this issue, make sure that no other application is occupying the camera resource.
Is there a way to utilize two LIPSedge™ L215u cameras simultaneously, without the cameras interfering with each other? #
Currently, this function is not supported by the standard product. However, it is customizable. To do so, contact LIPS Corp.'s sales representatives or e-mail info@lips-hci.com.
Is it normal for the camera image to become recognizable only when the camera is tilted at a 90-degree angle? #
Yes! The tripod screw / micro-USB port should be on the right side of the camera so that developers can see the image from the proper angle.
Does LIPSedge™ L215 support the adjustment for exposure, gain, auto exposure and auto white balance? #
Adjustments of these parameters are generally NOT recommended, as the calibration of LIPS Corp.'s cameras is optimized upon release. For customization needs, contact the sales representatives or e-mail LIPS Corp. at info@lips-hci.com.
LIPSense™ 3D Scan APP #
I have problems accessing the camera as the error message "Load dynamic camera library: 000000000014CD10" keeps appearing. #
To verify camera functionality, use the camera’s official SDK to open the Stereo Module. Once confirmed, reinstall LIPScan 3D and update the camera.json file by setting “camera”: “{your camera module name}”. Retry the application afterward.
Why is the stitching misaligned when scanning an object? #
If the target is placed on a turntable, avoid including the turntable in the frame when scanning.
LIPSense™ Body Pose SDK #
Would the LIPSense™ 3D Body Pose SDK work well with, for example, an Orbbec or a Kinect camera? #
The LIPSense™ 3D Body Pose SDK currently supports ONLY the LIPSedge™ AE400 / DL Series and the Intel® RealSense™ D400 Series. Pairing the LIPSense™ 3D Body Pose SDK with a third-party camera such as an Orbbec or Kinect would require a customized version and extra payment. In that case, contact our sales representatives or e-mail info@lips-hci.com.
What are the x, y, z values calculated by void RenderPose in render3D.cpp? #
In render3D.cpp, the x, y, z values represent world coordinates, with (0, 0, 0) being the origin of the camera.
Can the skeleton data be exported in the FBX or BVH format? #
Currently, the LIPSense™ 3D Body Pose SDK ONLY supports the RAW format. To export the skeleton data in another format, such as FBX or BVH, a customized version of the SDK and an extra payment are required. In that case, contact our sales representatives or e-mail info@lips-hci.com.
LIPFace™ HW120 / HW125 #
What is the resolution of LIPFace™ HW125? #
The resolution of LIPFace™ HW125 is 1280 × 800.
During the execution of Ni2-Recognition.exe, an error message "…… get usb dongle fail" keeps appearing. #
In that case, check the device manager and verify if the camera is detectable.
Can the face ID data be stored on the host PC instead of the device? #
Currently, the "ID" and "Face recognition data" are stored in the camera by default. To store these data on other devices, customization and extra payment are required. In that case, contact our sales representatives or e-mail info@lips-hci.com.
Does LIPFace™ HW120 work in poor lighting conditions? #
Yes. LIPFace™ HW120 utilizes IR and depth for recognition, so it works even under poor lighting conditions.
Can both USB connectors of the LIPFace™ HW120 / HW125 be connected to a single USB hub? It seems that applications like NIViewer or Ni2facerecognition-gl may become unresponsive when this setup is used. #
It is highly recommended to connect the camera directly to the PC instead of establishing the connection via a USB hub, to avoid power supply issues or other unexpected situations.
What is the maximum number of profiles for face registration? #
LIPFace™ HW120 / 125 supports a maximum of 1,000 profiles.
Is there documentation clarifying the meaning of the error messages, particularly the hat-shielding and mask failures? #
For descriptions of the error messages, refer to Error_code_description.xlsx (https://fbox.lips-hci.com/s/NjnJX8jN86jMndx).
LIPSMetrics™ Static #
LIPSMetrics™ often returns incorrect results when measuring round objects: the height measurement is correct, but the length and width measurements are not. #
These errors are likely influenced by environmental differences. To fix this, hold the camera horizontal (level to the ground), clear any obstacles from the calibration area, and then do the calibration again. To further increase accuracy, place the target at the center of the detection zone.
Measure Master Related Topics #
Is it recommended to utilize LIPSedge™ L215u with the Metric MeasureMaster SDK? #
Such an operation is possible, but it is not best practice. Contact our sales representatives or e-mail info@lips-hci.com to get a compatible MeasureMaster SDK.
What are the 8 points mentioned in Measure Master's user manual? #
These points are the endpoints of the target, as shown in the image. (https://fbox.lips-hci.com/s/BaeFoNEwZxL8DDw)
Which tool is used to send PUT and POST commands? #
Postman is the recommended tool for sending the PUT and POST commands.
HTTP GET requests work as expected, but the POST request to localhost:8080/v1/peripheral/print_label fails to execute. #
The HTTP method POST localhost:8080/v1/peripheral/print_label is specifically designed for LIPS-provided label machines. To use this method, a compatible label machine supplied by LIPS is required.
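Any HTTP client can issue the same request; for example, from Python with the requests library. The empty JSON body below is purely illustrative; consult the Measure Master user's manual for the expected payload:
import requests

# Endpoint from the question above; the JSON body is a placeholder
resp = requests.post("http://localhost:8080/v1/peripheral/print_label", json={})
print(resp.status_code, resp.text)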
ROS / ROS2 Related Topics #
When using LIPSedge™ AE430 with ROS2 (openni2-ros), an error message "DeviceOpen: Couldn't open device 'INVALID'" appears. #
To fix this, follow the steps below:
1. Download the file AE430_470_install_ros.sh (https://fbox.lips-hci.com/s/fNoMszrXbQBAMif) and put it inside {SDK Folder}.
2. Run AE430_470_install_ros.sh.
$ ./AE430_470_install_ros.sh
3. Build the ROS2 packages again.
$ colcon build
4. Run the example.
$ source ./install/setup.bash
$ ros2 launch openni2_camera camera_only.launch.py
How can it be confirmed that a camera with a specific IP address is associated with the correct camera name when multiple cameras are running in ROS 2? #
In this case, follow the steps below for the setup:
1. tar zxvf multi_lipsedge.tar.gz (https://fbox.lips-hci.com/s/xb8Lcb4HzJ7dkKH)
2. cp * {your ros2 workspace}/isaac_ros-dev/src/realsense-ros/realsense2_camera/launch
3. colcon build --symlink-install --packages-select realsense2_camera
4. source install/setup.bash
And the launch commands would be:
One camera: ros2 launch realsense2_camera lipsedge.launch.py
Multiple cameras: ros2 launch realsense2_camera multi_lipsedge.launch.py camera_count:=2
How to utilize LIPSedge™ AE400 with ROS2? #
To achieve this, download the LIPSedge™ AE400 SDK (librealsense2-lipsedge_2.43.0-focal~20240722.c6721071_amd64.deb, https://fbox.lips-hci.com/s/7YqxQNGFiBZ9az9) and follow Quick_Start_ros2_AE450.pdf (https://fbox.lips-hci.com/s/7dgM8nQfdW3Mj26).
How to integrate the LIPSedge™ AE430 into a ROS Noetic environment? #
1. Download install_ros.sh and put it in the SDK folder.
Install link: https://fbox.lips-hci.com/s/wyJXg8kR3m64xiF
2. Run the install_ros.sh file and install the required packages.
$ sudo ./install_ros.sh
$ sudo apt-get install libopenni2-0 libopenni2-dev
3. Build a ROS working environment.
$ mkdir -p ~/LIPSToF_ws/src
$ cd ~/LIPSToF_ws/src
$ catkin_init_workspace
4. Download LIPSedge-ros.tar.gz, extract it, and put it in ~/LIPSToF_ws/src.
$ tar zxvf LIPSedge-ros.tar.gz
5. Build the ROS package.
$ cd ~/LIPSToF_ws
$ catkin_make
$ source ./devel/setup.bash
In one terminal:
$ roscore
In another terminal:
$ roslaunch openni2_launch openni2.launch
In another terminal:
$ rqt
How to utilize LIPSedge™ AE430 with ROS2? #
For Ubuntu 22.04 on an x86_64 device, follow the steps below:
1. Follow the User's Manual to install the LIPSedge™ AE430's SDK.
https://dev.lips-hci.com/documents-installation-and-setup/lipsedge-tm-ae430-ae470/user-guide
2. Follow the README file on GitHub to install LIPSedge-ROS2.
https://github.com/lips-hci/LIPSedge-ros2
3. Copy all files from the drivers folder in the AE430's SDK to /usr/lib/x86_64-linux-gnu/OpenNI2/Drivers/.
$ cd {SDK PATH}/Redist/OpenNI2/Drivers/
$ sudo cp * /usr/lib/x86_64-linux-gnu/OpenNI2/Drivers/
4. Go to /usr/lib/x86_64-linux-gnu/OpenNI2/Drivers/, then create a symbolic link for libLIPSedge-AE430_AE470.so.
$ cd /usr/lib/x86_64-linux-gnu/OpenNI2/Drivers
$ sudo ln -s libLIPSedge-AE430_AE470.so libLIPSedge-AE430_AE470.so.0
5. If PointCloud2 data are not accessible and the error shown in the image appears, install the following ROS2 packages.
$ sudo apt-get install ros-humble-depth*
$ sudo apt-get update
$ sudo apt-get upgrade
6. Execute the commands specified in the README file.
$ ros2 launch openni2_camera camera_only.launch.py
$ ros2 run rqt_image_view rqt_image_view
Is it possible to utilize LIPSedge™ AE470 and LIPSedge™ M3 simultaneously in ROS Noetic? #
LIPSedge™ M3 uses the OpenNI2 SDK, while LIPSedge™ AE470 uses the Intel® RealSense™ SDK. To avoid conflicts when both devices are used simultaneously, change the topic name of one of the devices.