Use Windows Explorer to browse the internal shared storage of your Android device.

[[/images/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects/tfliteModelsFolder.png]]
Navigate to FIRST->tflitemodels and paste the two files in this directory.

Now the files are where we want them to be for this example.

### Modifying a Sample Op Mode
Use the OnBot Java editor to create a new op mode that is called "TFODEverydayObjects" and that is based on the "ConceptTensorFlowObjectDetection" sample op mode. Note that a copy of the full op mode that was used for this example (except for the Vuforia key) is included at the end of this tutorial.

#### Modify the Name and Enable the Op Mode
Modify the annotations to change the name so that it does not "collide" with any other op modes on your robot controller that are based on the same sample op mode. Also comment out the @Disabled annotation to enable this op mode.

```
@TeleOp(name = "TFOD Everyday Objects", group = "Concept")
//@Disabled
public class TFODEverydayObjects extends LinearOpMode {
```

#### Specify Your Vuforia License Key
Before you can run your op mode, you must make sure you have a valid Vuforia developer license key to initialize the Vuforia software. You can obtain a key for free from [https://developer.vuforia.com/license-manager](https://developer.vuforia.com/license-manager). Once you obtain your key, replace the VUFORIA_KEY static String with the actual license key so the Vuforia software will be able to initialize properly.

```
    private static final String VUFORIA_KEY =
            " -- YOUR NEW VUFORIA KEY GOES HERE --- ";
```

#### Specify the Paths to the Model and to the Label Map
Modify the sample op mode so that you specify the paths to your model file ("detect.tflite") and to your label map file ("labelmap.txt").

```
public class TFODEverydayObjects extends LinearOpMode {
    private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/detect.tflite";
    private static final String TFOD_MODEL_LABELS = "/sdcard/FIRST/tflitemodels/labelmap.txt";
    private String[] labels;
```

#### Add import Statements
Your op mode will need the following additional import statements:

```
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import com.qualcomm.robotcore.util.RobotLog;  // used by readLabels() below for logging
```

#### Create readLabels() and getStringArray() Methods
Create a method called readLabels() that will be used to read the label map file from the tflitemodels subdirectory:

```
    /**
     * Read the labels for the object detection model from a file.
     */
    private void readLabels() {
        ArrayList labelList = new ArrayList<>();

        // try to read in the labels.
        try (BufferedReader br = new BufferedReader(new FileReader(TFOD_MODEL_LABELS))) {
            int index = 0;
            while (br.ready()) {
                // skip the first row of the labelmap.txt file.
                // if you look at the TFOD Android example project (https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android)
                // you will see that the labels for the inference model are actually extracted (as metadata) from the .tflite model file
                // instead of from the labelmap.txt file. if you build and run that example project, you'll see that
                // the label list begins with the label "person" and does not include the first line of the labelmap.txt file ("???").
                // I suspect that the first line of the labelmap.txt file might be reserved for some future metadata schema
                // (or that the generated label map file is incorrect).
                // for now, skip the first line of the label map text file so that your label list is in sync with the embedded label list in the .tflite model.
                if (index == 0) {
                    // skip first line.
                    br.readLine();
                } else {
                    labelList.add(br.readLine());
                }
                index++;
            }
        } catch (Exception e) {
            telemetry.addData("Exception", e.getLocalizedMessage());
            telemetry.update();
        }

        if (labelList.size() > 0) {
            labels = getStringArray(labelList);
            RobotLog.vv("readLabels()", "%d labels read.", labels.length);
            for (String label : labels) {
                RobotLog.vv("readLabels()", " " + label);
            }
        } else {
            RobotLog.vv("readLabels()", "No labels read!");
        }
    }
```

Important note: The readLabels() method skips the first line of the "labelmap.txt" file. If you review Google's [example TensorFlow Object Detection Android app](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android) carefully, you will notice that the app actually extracts the label map as metadata from the .tflite file. If you build and run the app, you will see that when the app extracts the labels from the .tflite file's metadata, the first label is "person". In order to ensure that your labels are in sync with the known objects of the sample .tflite model, the readLabels() method skips the first line of the label map file and starts with the second label ("person"). I suspect that the first line of the label map file might be reserved for future use (or it might be an error in the file).

You will also need to define the getStringArray() method, which the readLabels() method uses to convert the ArrayList to a String array.

```
    // Function to convert ArrayList to String[]
    private String[] getStringArray(ArrayList arr)
    {
        // declare and initialize the String array
        String[] str = new String[arr.size()];

        // convert the ArrayList to an Object array
        Object[] objArr = arr.toArray();

        // iterate and convert each element to a String
        int i = 0;
        for (Object obj : objArr) {
            str[i++] = (String) obj;
        }

        return str;
    }
```
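
As a side note, the getStringArray() helper follows a common textbook pattern. If you declare the list with a type parameter instead, Java's built-in toArray(T[]) overload performs the same conversion in one line; a minimal sketch (reusing the labelList and labels names from readLabels()) is shown below.

```
        // Alternative to getStringArray(): declare the list with a type parameter,
        // then the built-in toArray(T[]) overload returns a String[] directly.
        ArrayList<String> labelList = new ArrayList<>();
        // ... fill labelList exactly as readLabels() does ...
        labels = labelList.toArray(new String[0]);
```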
#### Adjusting the Zoom Factor
When I tested this example op mode, I left the digital zoom disabled because the phone was relatively close to the target objects. If you would like to test the op mode using small targets at larger distances, you can use a zoom factor greater than one to magnify the target object and increase the detection reliability.

```
        // Uncomment the following line if you want to adjust the magnification and/or the aspect ratio of the input images.
        //tfod.setZoom(2.5, 1.78);
```

#### Adjusting the Minimum Confidence Level
You can set the minimum result confidence level to a relatively low value so that TensorFlow will identify a greater number of objects when you test your op mode. I tested my op mode with a value of 0.6.

```
        tfodParameters.minResultConfidence = 0.6f;
```

#### Calling the readLabels() Method
Call the readLabels() method to read the label map and generate the labels list. This list will be needed when TensorFlow attempts to load the custom model file.

```
    public void runOpMode() {
        // read the label map text file.
        readLabels();

        // The TFObjectDetector uses the camera frames from the VuforiaLocalizer, so we create that
        // first.
        initVuforia();
        initTfod();
```
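
Finally, note that the op mode's initTfod() method must load the custom model from the file on the phone's storage and hand it the labels array produced by readLabels(), instead of loading a model from an app asset the way the original sample does. The sketch below shows the general idea; it assumes your FTC SDK version provides TFObjectDetector.loadModelFromFile() alongside loadModelFromAsset(), and it reuses the TFOD_MODEL_FILE and labels fields defined earlier.

```
    private void initTfod() {
        int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
        tfodParameters.minResultConfidence = 0.6f;
        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);

        // Load the custom model from the file on the phone's storage (instead of from an asset),
        // using the labels that readLabels() extracted from labelmap.txt.
        tfod.loadModelFromFile(TFOD_MODEL_FILE, labels);
    }
```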