# Using a Custom TensorFlow Model with Java

<p align="center">[[/images/Using-a-Custom-TensorFlow-Model-with-Blocks/upload.png]]<br/>Press the "Upload" button to upload the file to the Robot Controller.<p>

### Creating the Op Mode

Note: the process for creating the op mode is identical (except for the name) to the process described in [this tutorial](Java-Sample-TensorFlow-Object-Detection-Op-Mode). The steps are repeated here for convenience.

You can use the sample "ConceptTensorFlowObjectDetection" as a template to create your own Java op mode that uses the TensorFlow technology to "look for" any game elements and determine the relative location of any identified elements. We will then modify this sample to use the inference model we uploaded to the Robot Controller.

* If you are using a REV Control Hub with an externally connected webcam as your Robot Controller, select "ConceptTensorFlowObjectDetectionWebcam" as the sample op mode from the dropdown list in the New File dialog box.
* If you are using an Android smartphone as your Robot Controller, select "ConceptTensorFlowObjectDetection" as the sample op mode from the dropdown list in the New File dialog box.

Specify the name as "MyOBJCustomModel" (where "OBJ" stands for "OnBot Java"). Press "OK" to create the new op mode.

<p align="center">[[/images/Using-a-Custom-TensorFlow-Model-with-Blocks/myObjCustomModel.png]]<br/>Create an Op Mode with ConceptTensorFlowObjectDetection as its template.<p>

Your new op mode should appear in the editing pane of the OnBot Java screen.

<p align="center">[[/images/Using-a-Custom-TensorFlow-Model-with-Blocks/newlyCreated.png]]<br/>Your newly created op mode should be available for editing through OnBot Java.<p>

### Initializing the System

Before you can run your op mode, you must first make sure you have a valid Vuforia developer license key to initialize the Vuforia software. You can obtain a key for free from [https://developer.vuforia.com/license-manager](https://developer.vuforia.com/license-manager). Once you obtain your key, replace the VUFORIA_KEY static String with your actual license key so the Vuforia software will be able to initialize properly.

```
private static final String VUFORIA_KEY =
        " -- YOUR NEW VUFORIA KEY GOES HERE --- ";
```

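
For context, the sample consumes this key when it creates the Vuforia engine in its initVuforia() helper method. The following is a rough sketch of that helper, assuming the phone-camera variant of the sample; the webcam variant assigns a configured webcam (for example, one named "Webcam 1") instead of a camera direction.

```
private void initVuforia() {
    // Pass the license key to the Vuforia engine parameters.
    VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
    parameters.vuforiaLicenseKey = VUFORIA_KEY;

    // Phone-camera variant: use the back camera of the Robot Controller phone.
    parameters.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;

    // Webcam variant would instead use something like:
    // parameters.cameraName = hardwareMap.get(WebcamName.class, "Webcam 1");

    // Create the Vuforia engine instance that TensorFlow will use for camera frames.
    vuforia = ClassFactory.getInstance().createVuforia(parameters);
}
```
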
Also, by default the op mode is disabled. Comment out the "@Disabled" annotation to enable your newly created op mode.

```
@TeleOp(name = "Concept: TensorFlow Object Detection", group = "Concept")
//@Disabled
public class MyOBJCustomModel extends LinearOpMode {
```

### Specifying the Filepath & Changing the Labels

When you upload a custom inference model, the file is stored in a specific directory ("/sdcard/FIRST/tflitemodels/") on your Robot Controller. You will need to specify the path to your uploaded file.

You will also need to specify a list of labels that describe the known objects that are included in the model. For the "skystone.tflite" model, there are two known objects. The first object has a label of "Stone" and the second object has a label of "Skystone".

To make these changes, look towards the top of your op mode's class definition and find the static String declarations for the variables TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, and LABEL_SECOND_ELEMENT. Change these declarations so they look like the following:

```
private static final String TFOD_MODEL_ASSET = "/sdcard/FIRST/tflitemodels/Skystone.tflite";
private static final String LABEL_FIRST_ELEMENT = "Stone";
private static final String LABEL_SECOND_ELEMENT = "Skystone";
```

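If you want to confirm that this path actually matches the file you uploaded, you can add an optional sanity check early in runOpMode(). This check is not part of the sample; it only uses the standard java.io.File class and the SDK's telemetry.

```
// Optional sanity check (not part of the sample): verify that the uploaded
// model file is present at the expected path before initializing TensorFlow.
// Requires: import java.io.File;
File modelFile = new File(TFOD_MODEL_ASSET);
if (!modelFile.exists()) {
    telemetry.addData("Warning", "Model file not found: " + TFOD_MODEL_ASSET);
    telemetry.update();
}
```
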
### Loading the Custom Model

The sample op mode loads the default (Ultimate Goal) inference model as an asset. However, we want to change this and use our uploaded file instead.

Look in the op mode's initTfod() method for the line that calls the loadModelFromAsset() method. Comment out this line and replace it with a line that uses the loadModelFromFile() method instead:

```
// tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, LABEL_SECOND_ELEMENT);
tfod.loadModelFromFile(TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, LABEL_SECOND_ELEMENT);
```

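For orientation, after this change the initTfod() method should look roughly like the sketch below. The monitor-view lookup and the minimum-confidence setting shown here are taken from the sample used as the template, so the exact values in your file may differ.

```
private void initTfod() {
    // Show TensorFlow's annotated camera preview on the Robot Controller screen.
    int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
            "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
    TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
    tfodParameters.minResultConfidence = 0.8f;

    // Create the TensorFlow Object Detector, sharing the Vuforia engine created earlier.
    tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);

    // Load the custom model from the uploaded file instead of from an app asset.
    // tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, LABEL_SECOND_ELEMENT);
    tfod.loadModelFromFile(TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, LABEL_SECOND_ELEMENT);
}
```
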
### Adjusting the Zoom Factor

If the object that you are trying to detect will be at a distance of 24" (61cm) or greater, you might want to set the digital zoom factor to a value greater than 1. This tells TensorFlow to use an artificially magnified portion of the image, which can result in more accurate detections at greater distances.

```
/**
 * Activate TensorFlow Object Detection before we wait for the start command.
 * Do it here so that the Camera Stream window will have the TensorFlow annotations visible.
 **/
if (tfod != null) {
    tfod.activate();

    // The TensorFlow software will scale the input images from the camera to a lower resolution.
    // This can result in lower detection accuracy at longer distances (> 55cm or 22").
    // If your target is at distance greater than 50 cm (20") you can adjust the magnification value
    // to artificially zoom in to the center of image. For best results, the "aspectRatio" argument
    // should be set to the value of the images used to create the TensorFlow Object Detection model
    // (typically 16/9).
    tfod.setZoom(2.5, 16.0/9.0);
}
```

### Building and Running the Op Mode

Build the OnBot Java op mode and run it. The robot controller should use the new Skystone inference model to identify and track the Stone and Skystone elements from the Skystone challenge.

<p align="center">[[/images/Using-a-Custom-TensorFlow-Model-with-Blocks/skystoneDetected.png]]<br/>The op mode should detect the game elements from the Skystone challenge.<p>

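While the op mode is running, the sample's main loop (already present in the op mode you created from the template) repeatedly asks TensorFlow for updated recognitions and reports each label and bounding box to telemetry, along the lines of the sketch below, which uses java.util.List and the SDK's Recognition class.

```
while (opModeIsActive()) {
    if (tfod != null) {
        // getUpdatedRecognitions() returns null if no new information is
        // available since the last time it was called.
        List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
        if (updatedRecognitions != null) {
            telemetry.addData("# Objects Detected", updatedRecognitions.size());
            for (Recognition recognition : updatedRecognitions) {
                // Each recognition carries its label ("Stone" or "Skystone")
                // and its bounding box location within the camera image.
                telemetry.addData("label", recognition.getLabel());
                telemetry.addData("left, top", "%.0f , %.0f",
                        recognition.getLeft(), recognition.getTop());
                telemetry.addData("right, bottom", "%.0f , %.0f",
                        recognition.getRight(), recognition.getBottom());
            }
            telemetry.update();
        }
    }
}
```
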
You can use the following images (and point the camera at your computer's screen) to test the op mode:

* [Stone Image](https://raw.githubusercontent.com/wiki/ftctechnh/FtcRobotController/images/Using-a-Custom-TensorFlow-Model-with-Blocks/stone.jpg)
* [Skystone Image](https://raw.githubusercontent.com/wiki/ftctechnh/FtcRobotController/images/Using-a-Custom-TensorFlow-Model-with-Blocks/skystone.jpg)