Sunday, April 19, 2020

Raspberry Pi TensorFlow.js Facemesh

The facemesh package infers approximate 3D facial surface geometry from an image or video stream, requiring only a single camera input without the need for a depth sensor. This geometry locates features such as the eyes, nose, and lips within the face, including details such as lip contours and the facial silhouette. This information can be used for downstream tasks such as expression classification (but not for identification). Refer to our model card for details on how the model performs across different datasets. This package is also available through MediaPipe.
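Before setting anything up, here is a minimal usage sketch of the package's browser API, based on the package README (the video element is a placeholder you supply from your own page):

import '@tensorflow/tfjs';                          // peer dependency: runtime + backends
import * as facemesh from '@tensorflow-models/facemesh';

async function detect(video: HTMLVideoElement) {
  // Load the model (downloads the ~3MB of weights mentioned below).
  const model = await facemesh.load({ maxFaces: 1 });
  // Run inference on the current video frame.
  const predictions = await model.estimateFaces(video);
  for (const face of predictions) {
    // scaledMesh: 468 [x, y, z] keypoints in image coordinates.
    console.log(face.faceInViewConfidence, face.scaledMesh);
  }
}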

Performance characteristics

Facemesh is a lightweight package containing only ~3MB of weights, making it ideally suited for real-time inference on a variety of mobile devices. When testing, note that TensorFlow.js provides several backends to choose from, including WebGL and WebAssembly (WASM) with XNNPACK for devices with lower-end GPUs. The table below shows how the package performs across a few different devices and TensorFlow.js backends:

The table shows how the package performs across different devices and TensorFlow.js backends
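Switching backends is done in application code. A small sketch for trying the WASM backend (this assumes the separate @tensorflow/tfjs-backend-wasm package is installed):

import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';  // registers the 'wasm' backend

async function selectBackend() {
  await tf.setBackend('wasm');           // or 'webgl' on devices with a capable GPU
  await tf.ready();                      // wait for the backend to initialize
  console.log('active backend:', tf.getBackend());
}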


Demo Source Code
https://github.com/tensorflow/tfjs-models/tree/master/facemesh




Installation on Raspberry Pi (see the video on YouTube)

Install Node.js and npm

curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -

sudo apt-get install nodejs


Install Yarn

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -

echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

sudo apt-get update && sudo apt-get install yarn


Check Version
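Confirm the installed versions with:

node -v
npm -v
yarn --version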



Run the demo code (requires a USB webcam)

Clone the repository:
git clone https://github.com/tensorflow/tfjs-models.git


Go into the facemesh folder:
cd tfjs-models/facemesh
Install dependencies:
yarn
Publish facemesh locally:
yarn build && yarn yalc publish
cd into the demo folder and install dependencies:
cd demo
yarn
Link the local facemesh to the demos:
yarn yalc link @tensorflow-models/facemesh
Start the dev demo server:
yarn watch
Then open a web browser and go to http://localhost:1234

A USB webcam is required.







Test on Android Web Browser




Reference

Face and hand tracking in the browser with MediaPipe and TensorFlow.js
https://blog.tensorflow.org/2020/03/face-and-hand-tracking-in-browser-with-mediapipe-and-tensorflowjs.html

MediaPipe
https://github.com/google/mediapipe

Online Demo

Friday, April 10, 2020

Raspberry Pi Person Tracking with Intel AI

This demo demonstrates multi-person detection and tracking with Intel AI on a Raspberry Pi, using OpenVINO's Multi Camera Multi Person demo.


What is OpenVINO?

https://raspberrypi4u.blogspot.com/2019/04/raspberry-pi-openvino-intel-movidius.html

Pre-trained Models

• person-detection-retail-0013
• person-reidentification-retail-0076
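Both models come from the Intel Open Model Zoo. A sketch of fetching them with the Model Downloader (downloader.py ships with the Open Model Zoo tools; its path depends on your install):

python3 downloader.py --name person-detection-retail-0013
python3 downloader.py --name person-reidentification-retail-0076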


Hardware

• Raspberry Pi Board (4B, 1 GB)
• Intel Neural Compute Stick 2


Software

• Raspbian OS 10 (Buster)
• Python 3.7.3
• OpenVINO Toolkit 2020.1
• OpenCV 4.0.0

Python Source code

https://docs.openvinotoolkit.org/latest/_demos_python_demos_multi_camera_multi_person_tracking_README.html
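A sketch of a typical launch on the Pi with the NCS2 (file paths are placeholders; argument names vary slightly between OpenVINO releases, so double-check them against the README above):

python3 multi_camera_multi_person_tracking.py \
    -i video_1.avi video_2.avi \
    --m_detector person-detection-retail-0013.xml \
    --m_reid person-reidentification-retail-0076.xml \
    --config config.py \
    -d MYRIAD

The -d MYRIAD flag targets the Neural Compute Stick; per the demo README, -i takes camera indexes or paths to video files.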




Detection Output

The demo displays the bounding boxes and unique IDs of tracked objects. To save the output video, use the --output_video option; to change configuration parameters, open and edit the config.py file. The demo can also dump the resulting tracks to a JSON file; to specify the file, use the --history_file argument.

Reference

OpenVINO Toolkit
https://docs.openvinotoolkit.org/latest/index.html

Install OpenVINO™ toolkit for Raspbian* OS
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html

Multi Camera Multi Person Python* Demo 
https://docs.openvinotoolkit.org/latest/_demos_python_demos_multi_camera_multi_person_tracking_README.html

Monday, April 6, 2020

Raspberry Pi Object Detection with Intel AI Stick

This project showcases object detection with SSD and the new Async API. Using the Async API can improve the overall frame rate of the application: rather than waiting for inference to complete, the app can continue working on the host while the accelerator is busy.

Specifically, this demo keeps two parallel infer requests: while the current frame is being processed, the input frame for the next one is captured. This essentially hides the latency of capturing, so the overall frame rate is determined by the MAXIMUM(detection time, input capturing time) and not the SUM(detection time, input capturing time).

The technique can be generalized to any available parallel slack, for example, doing inference while simultaneously encoding the resulting (previous) frames or running further inference. This and other performance implications and tips for the Async API are covered in the Optimization Guide.

Other demo objectives are:
  • Video as input support via OpenCV
  • Visualization of the resulting bounding boxes and text labels (from the .labels file) or class numbers (if no file is provided)
  • OpenCV is used to draw the resulting bounding boxes and labels, so you can copy-paste this code without needing to pull the Open Model Zoo demo helpers into your app
  • Demonstration of the Async API in action, so the demo features two modes (toggled by the Tab key):
    • Old-style "Sync" way, where the frame capturing with OpenCV executes back to back with the detection
    • "Truly Async" way, where the detection is performed on the current frame while OpenCV captures the next one

System Requirements

Hardware

·     Raspberry Pi Board (4B, 3B+)
·     Intel Neural Compute Stick (V1 or V2)
·     Pi Camera (V1 or V2) or USB cam
·     SD Card 32GB
·     5V DC 2A power supply

Software

·     Raspbian OS 10 (Buster)
·     Python 3.7.3
·     OpenVINO Toolkit 2019.R3
·     OpenCV 4.0.0

Machine Learning Object Detection Model: MobileNet SSD V2
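With the requirements above in place, a sketch of a typical launch (the .xml file name is a placeholder for your converted MobileNet SSD IR model; 'cam' reads from the first webcam, and a video file path also works):

python3 object_detection_demo_ssd_async.py -i cam -m mobilenet-ssd.xml -d MYRIAD

-d MYRIAD targets the Intel Neural Compute Stick; passing a .labels file via --labels shows text labels instead of class numbers.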


Output Detection

The demo uses OpenCV to display the resulting frame with detections (rendered as bounding boxes and labels, if provided). In the default mode the demo reports:
  • OpenCV time: frame decoding plus the time to render the bounding boxes and labels and display the results.
  • Detection time: inference time for the (object detection) network. It is reported in the "SYNC" mode only.
  • Wallclock time, which is the combined (application-level) performance.


Comparison of Raspberry Pi 3B+/4 with Intel AI Stick 1/2

Raspberry Pi 4 + Intel AI Stick 2: frame rate 25 fps

Raspberry Pi 4 + Intel AI Stick 1: frame rate 8 fps

Raspberry Pi 3B+ + Intel AI Stick 2: frame rate 14 fps

Raspberry Pi 3B+ + Intel AI Stick 1: frame rate 25 fps


Summary



Retrain the model with your own dataset:


https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

Reference

Install OpenVINO™ toolkit for Raspbian* OS
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html

Object Detection SSD Python* Demo, Async API performance showcase
https://docs.openvinotoolkit.org/latest/_demos_python_demos_object_detection_demo_ssd_async_README.html



