Summary
The introduction of the IBM Model Asset eXchange (MAX), hosted on the Machine Learning eXchange, has given application developers without data science experience easy access to prebuilt machine learning models. This code pattern demonstrates how simple it can be to create a web app that uses a MAX model. The web app uses the Image Caption Generator from MAX and provides a simple web UI that lets you filter images based on the descriptions given by the model.
Description
According to an IBM study, 2.5 quintillion bytes of data are created every day. Much of that data is unstructured, such as large bodies of text, audio recordings, and images. To do something useful with the data, you must first convert it into structured data.
This code pattern uses one of the models from the Model Asset eXchange, an exchange where developers can find and experiment with open source deep learning models. Specifically, it uses the Image Caption Generator to build a web application that captions images and lets you filter through images based on their content. The web application provides an interactive interface that is backed by a lightweight Python server using Tornado. The server takes in images through the UI, sends them to a REST endpoint for the model, and displays the generated captions in the UI. The model's REST endpoint is set up using the Docker image provided on MAX. The web UI displays the generated captions for each image as well as an interactive word cloud to filter images based on their caption.
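From the server's side, getting a caption is a single HTTP request per image. A minimal sketch of that call, assuming the MAX container is running locally on its default port 5000 and exposes the usual `POST /model/predict` route with an `image` form field (both assumptions based on the common MAX model API layout):

```python
import requests

# Assumption: the Image Caption Generator container (e.g. started with
#   docker run -p 5000:5000 codait/max-image-caption-generator
# ) is listening locally on port 5000.
MODEL_URL = "http://localhost:5000/model/predict"

def extract_captions(prediction_json):
    """Pull (caption, probability) pairs out of the model's JSON response.

    MAX models return results under a "predictions" key; each entry is
    assumed to carry a generated "caption" and its "probability".
    """
    return [(p["caption"], p["probability"])
            for p in prediction_json.get("predictions", [])]

def caption_image(path):
    """Upload one image file to the model endpoint and return its captions."""
    with open(path, "rb") as f:
        response = requests.post(MODEL_URL, files={"image": f})
    response.raise_for_status()
    return extract_captions(response.json())
```

The exact image name and response schema may differ between MAX model versions, so check the Swagger documentation the container serves before relying on this shape.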
When you have completed this code pattern, you understand how to:
Deploy a deep learning model with a REST endpoint
Generate captions for an image using the model's REST API
Run a web application that uses the model's REST API
Flow
The server sends default images to the Model API and receives caption data.
The user interacts with the web UI, which contains the default content, and uploads images.
The web UI requests caption data for the images from the server and updates the content when the data is returned.
The server sends the images to the Model API and receives caption data to return to the web UI.
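The caption-driven word cloud boils down to counting words across the generated captions and selecting the images whose caption contains a chosen word. A rough sketch of that logic, using hypothetical helper names and a toy caption-per-image mapping (not the actual functions from the code pattern):

```python
from collections import Counter

def word_frequencies(captions_by_image):
    """Count word occurrences across all captions; this is what would
    size the words in the word cloud."""
    counts = Counter()
    for caption in captions_by_image.values():
        counts.update(caption.lower().split())
    return counts

def filter_by_word(captions_by_image, word):
    """Return the images whose caption contains the selected word."""
    word = word.lower()
    return [image for image, caption in captions_by_image.items()
            if word in caption.lower().split()]

# Toy data standing in for the captions returned by the model
captions = {
    "beach.jpg": "a dog running on a beach",
    "horse.jpg": "a man riding a horse",
}
```

Clicking a word in the cloud would then trigger something like `filter_by_word(captions, "dog")` to narrow the gallery to the matching images.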
Instructions
Ready to put this code pattern to use? Complete details on how to get started with and run this application are in the README.