Flow
1. The Human Pose Estimator model is converted to the TensorFlow.js web format using the TensorFlow.js converter.
2. The user launches the web application.
3. The web application loads the TensorFlow.js model.
4. The user stands in front of the webcam and moves their arms.
5. The web application captures a video frame and sends it to the TensorFlow.js model. The model returns a prediction of the estimated poses in the frame.
6. The web application processes the prediction and overlays the skeleton of the estimated pose on the web UI.
7. The web application converts the position of the user's wrists from the estimated pose to a MIDI message, and the message is sent to a connected MIDI device, or a sound is played in the browser.
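The last step above, turning a wrist position into MIDI pitch and velocity, can be sketched as a pure mapping function. The `wristToMidi` helper and its value ranges below are illustrative assumptions, not the exact mapping the project uses:

```javascript
// Map a normalized wrist position (x, y in [0, 1], origin at top-left)
// to a MIDI pitch and velocity. The ranges are illustrative:
// x selects a pitch within two octaves above middle C (60..84),
// y selects a velocity, with a higher hand producing a louder note.
function wristToMidi(x, y) {
  const clamp = (v) => Math.min(1, Math.max(0, v));
  const pitch = 60 + Math.round(clamp(x) * 24);      // 60..84
  const velocity = Math.round((1 - clamp(y)) * 127); // top of frame = loudest
  return { pitch, velocity };
}
```

With this mapping, a hand at the top-right of the frame plays the highest note at full velocity, and a hand at the bottom-left plays the lowest note silently.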
Summary
This developer code pattern demonstrates how you can create your own music based on your arm movements in front of a webcam. It uses the Model Asset eXchange (MAX) Human Pose Estimator model and TensorFlow.js.
Description
This code pattern is based on Veremin, but modified to use the Human Pose Estimator model from the Model Asset eXchange, which is hosted on the Machine Learning eXchange. The Human Pose Estimator model is converted to the TensorFlow.js web-friendly format. It is a deep learning model trained to detect humans and their poses in a given image.
The web application streams video from your webcam, and the Human Pose Estimator model predicts the location of your wrists within the video. The application takes the predictions and converts them to tones in the browser or to MIDI values, which are sent to a connected MIDI device.
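A MIDI note-on message is three bytes: a status byte (0x90 plus the channel number), a pitch, and a velocity. A minimal builder for such a message might look like the following; the `noteOn` helper name is ours, not from the project:

```javascript
// Build a raw MIDI note-on message: [status, pitch, velocity].
// Status 0x90 is note-on for channel 0; the channel number is
// OR-ed into the low nibble, and data bytes are masked to 7 bits.
function noteOn(channel, pitch, velocity) {
  return [0x90 | (channel & 0x0f), pitch & 0x7f, velocity & 0x7f];
}

// In the browser, the message could be delivered with the Web MIDI API:
//   const access = await navigator.requestMIDIAccess();
//   for (const output of access.outputs.values()) {
//     output.send(noteOn(0, 60, 100)); // middle C, moderately loud
//   }
```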
Instructions
Find detailed instructions on using this pattern in the README.