LIPS® Developer Documentation

Flutter


Last updated 10 months ago

GitHub repo: FlutterVision3D

A framework for 2D & 3D image processing with AI (TensorFlow Lite)

  • 3D camera RGB, depth, and IR frames and point cloud

  • Capture images from a camera and process them with OpenCV and a TensorFlow Lite hand-recognition AI model

  • Process images with OpenCV functions


Prerequisites

Ubuntu 20.04 / Ubuntu 18.04

  • libglew-dev

  • libopencv-dev (4.0.0+)

  • libglm-dev

  • freeglut3-dev

  • TensorFlow Lite 2.7.0+
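Assuming the package names above map directly to Ubuntu's apt repositories, the four system libraries can be installed in one step (TensorFlow Lite is not packaged by apt and must be built separately):

```shell
# Install the system dependencies listed above via apt-get.
# TensorFlow Lite is NOT available from apt and must be built from source.
sudo apt-get update
sudo apt-get install -y libglew-dev libopencv-dev libglm-dev freeglut3-dev
```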

Windows 10

  1. Download the dependency files from here.

  2. Extract the downloaded archive to a location of your choice.

  3. Add the system environment variable FLUTTER_VISION3D_DEP.

  4. Execute and install LIPSedge-SDK-v2.4.1.1.
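As a sketch, the environment variable from the steps above can be set persistently from PowerShell or a Command Prompt with the built-in `setx` command (the path below is a placeholder for wherever you extracted the dependency files):

```shell
# Set FLUTTER_VISION3D_DEP persistently for the current user.
# "C:\flutter_vision3d_dep" is a placeholder -- substitute your own extraction folder.
setx FLUTTER_VISION3D_DEP "C:\flutter_vision3d_dep"
```

Open a new terminal afterwards so the variable is visible to the Flutter build.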

Example App

The example app defines the paths of all TensorFlow Lite models it uses in example/lib/define.dart. To run the TensorFlow Lite pipeline examples, download the models and update this define file with the correct paths.

class Define {
  static const HAND_DETECTOR_MODEL = '/path/to/model';
  static const FACE_DETECTOR_MODEL = '/path/to/model';
  static const EFFICIENT_NET_MODEL = '/path/to/model';
}
| Example | Description |
| --- | --- |
| [Camera] UVC Camera | Display 2D USB camera video frames |
| [Camera] Realsense | Display 3D camera RGB, depth, and IR frames and point cloud using the RealSense SDK |
| [Camera] OpenNI | Display 3D camera RGB, depth, and IR frames and point cloud using the OpenNI2 SDK |
| [Pipeline] OpenCV | Load an image file and process it with different pipeline functions |
| [Pipeline] Custom Handler | Load an image file and process it with a native pipeline handler written in C++ |
| [Pipeline] Hand Detection | Load video from a UVC camera and use TensorFlow Lite pipeline functions to detect hands in each frame; hand-detection model originally from MediaPipe |
| [Pipeline] Object Detection | Load video from a UVC camera and use TensorFlow Lite pipeline functions to recognize objects in each frame; object-detection model from TensorFlow Lite |
| [Pipeline] Facial Recognition | Load video from a UVC camera and use TensorFlow Lite pipeline functions to detect faces in each frame; facial-recognition model from LIPS Corp. |

Supported 3D Camera

| Camera | Supported | Tested |
| --- | --- | --- |
| Intel Realsense D415 | ✅ | ✅ |
| Intel Realsense D435 | ✅ | ✅ |
| Intel Realsense D435i | ✅ | |
| Intel Realsense D455 | ✅ | |
| Intel Realsense T265 | ✅ | |
| Intel Realsense L515 | ✅ | |
| LIPSedge AE400 | ✅ | ✅ |
| LIPSedge AE430 | ✅ | ✅ |
| LIPSedge AE450 | ✅ | ✅ |
| LIPSedge AE470 | ✅ | ✅ |
| LIPSedge DL | ✅ | ✅ |
| LIPSedge M3 | ✅ | ✅ |
| LIPSedge L Series | ✅ | ✅ |

Usage


1. Create and Connect to Camera

  • UVC Camera

UvcCamera? cam = await FvCamera.create(ctl.text, CameraType.UVC) as UvcCamera?;
  • OpenNI2 Camera

OpenniCamera? cam = await FvCamera.create(ctl.text, CameraType.OPENNI) as OpenniCamera?;
  • Realsense, LIPSedge AE400 and LIPSedge AE450

RealsenseCamera? cam = await FvCamera.create(ctl.text, CameraType.REALSENSE) as RealsenseCamera?;
  • Virtual Camera (Load frame from file system without video stream)

DummyCamera? cam = await FvCamera.create(ctl.text, CameraType.DUMMY) as DummyCamera?;

2. Enable/Disable camera video stream

await cam.enableStream();
await cam.disableStream();

3. Bind texture widget

Texture(textureId: cam.rgbTextureId);

4. Pipeline (Set how to process frames captured from camera)

  • Display UVC video stream

FvPipeline uvcPipeline = cam.rgbPipeline;
await uvcPipeline.cvtColor(OpenCV.COLOR_BGR2RGBA);
await uvcPipeline.show();
  • Display 3D camera depth and IR frame with color map applied

FvPipeline depthPipeline = cam.depthPipeline;
await depthPipeline.convertTo(0, 255.0 / 1024.0);
await depthPipeline.applyColorMap(OpenCV.COLORMAP_JET);
await depthPipeline.cvtColor(OpenCV.COLOR_RGB2RGBA);
await depthPipeline.show();
  • Object detection with Efficient Net (Tensorflow Lite)

// Create Tensorflow Lite Model
TFLiteModel model = await TFLiteModel.create('/path/to/model.tflite');

// Use pipeline to set input for model
FvPipeline rgbPipeline = cam!.rgbPipeline;
await rgbPipeline.setInputTensorData(model!.index, 0, FvPipeline.DATATYPE_UINT8);
await rgbPipeline.inference(model!.index);

// Set the callback function. Called when inference is done.
FlutterVision3d.listen((MethodCall call) async {
    if (call.method == 'onInference') {
        ...
    }
});

APIs

// Enumeration
enum CameraType { OPENNI, REALSENSE, DUMMY, UVC }


// FlutterVision3d Functions
class FvCamera {
    static Future<FvCamera?> create(String serial, CameraType type)
    Future<void> close()
    Future<bool> enableStream()
    Future<bool> disableStream()
    Future<bool> pauseStream(bool pause)
    Future<void> enablePointCloud()
    Future<void> disablePointCloud()
    Future<bool> isConnected()
    Future<void> configure(int prop, double value)
    Future<bool> screenshot(int index, String path, {int? cvtCode})
    Future<int> getOpenCVMat(int index)
    Future<Map<String, double>> getIntrinsic(int index)
    Future<bool> enableRegistration(bool enable)
    Future<List<String>> getVideoModes(int index)
    Future<bool> setVideMode(int index, int mode)
    Future<String> getCurrentVideoMode(int index)
    Future<String> getSerialNumber()
    Future<void> loadPresetParameter(String path)
    Future<List<int>> getDepthData(DepthType depthType, {int? x, int? y, int? width, int? height})
}

class OpenniCamera extends FvCamera {}
class RealsenseCamera extends FvCamera {}
class UvcCamera extends FvCamera {}

class FlutterVision3d {
    static listen(Future<dynamic> Function(MethodCall) callback)
    static Future<int> niInitialize()
    static Future<List<OpenNi2Device>> enumerateDevices()
    static Future<List<String>> rsEnumerateDevices()

    static Future<int> getOpenglTextureId()
    static Future<void> openglRender()
}

class FvPipeline {
    Future<void> clear()
    Future<void> cvtColor(int mode, {int? at, int? interval, bool? append})
    Future<void> imwrite(String path, {int? at, int? interval, bool? append})
    Future<void> imread(String path, {int? at, int? interval, bool? append})
    Future<void> show({int? at, int? interval, bool? append})
    Future<void> convertTo(int mode, double scale, {int? at, double? shift, int? interval, bool? append})
    Future<void> applyColorMap(int colorMap, {int? at, int? interval, bool? append})
    Future<void> resize(int width, int height, {int? at, int? mode, int? interval, bool? append})
    Future<void> crop(int xStart, int xEnd, int yStart, int yEnd, {int? at, int? interval, bool? append})
    Future<void> rotate(int rotateCode, {int? at, int? interval, bool? append})
    Future<void> cvRectangle(double x1, double y1, double x2, double y2, int r, int g, int b, {int? at, int? thickness, int? lineType, int? shift, int? alpha, int? interval, bool? append})
    Future<void> threshold(double threshold, double max, {int? type, int? at, int? interval, bool? append, bool? runOnce})
    Future<void> zeroDepthFilter(int threshold, int range, {int? at, int? interval, bool? append, bool? runOnce})
    Future<void> copyTo(OpencvMat mat, {int? at, int? interval, bool? append, bool? runOnce})
    Future<void> setInputTensorData(int modelIndex, int tensorIndex, int dataType, {int? at, int? interval, bool? append})
    Future<void> inference(int modelIndex, {int? at, int? interval, bool? append})
    Future<void> customHandler(int size, {int? at, int? interval, bool? append})
    Future<int> run({int? from, int? to})
}

class TFLiteModel{
    static Future<TFLiteModel> create(modelPath)

    Future<Float32List> getTensorOutput(int tensorIndex, List<int> size)
}

Known Issues

  • The TensorFlow Lite model cannot be loaded in debug mode on Windows. To use TensorFlow Lite functions on Windows, run the Flutter app in release mode.
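For example, the sample app can be launched in release mode with the standard Flutter CLI:

```shell
# Run the Flutter app in release mode so the TensorFlow Lite model
# loads correctly on Windows (debug-mode loading is the known issue above).
flutter run --release
```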

All the dependency packages can be installed using apt-get except TensorFlow Lite. Please follow the TensorFlow Lite documentation to build libtensorflowlite.so and place it where the compiler can find it (e.g. /usr/lib/).
