<!--
This text does not appear at the published wiki.

v01 9-25-23 posted first draft of tutorial at FTC wiki, pending updates to the Sample OpMode (ref. PR 3087).

Per request 9-22-23 from Danny Diaz (working in parallel to correct this Sample), this new article is based on:

- SDK 9.0 VisionPortal, namely using Builder for Custom TFOD model, to be used for CENTERSTAGE

The original article still exists at the FTC wiki:
https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/Using-a-Custom-TensorFlow-Model-with-Java

That topic (if not the same content) appears in two versions at ftc-docs (in RST format), "Written by Uday Vidyadharan, Team 7350":

https://ftc-docs.firstinspires.org/en/latest/ftc_ml/implement/index.html

https://ftc-docs.firstinspires.org/en/latest/ftc_ml/implement/android_studios/android-studios.html
https://ftc-docs.firstinspires.org/en/latest/ftc_ml/implement/obj/obj.html

-->

## Introduction

This tutorial describes the regular, or **Builder**, version of the FTC Java Sample OpMode for TensorFlow Object Detection (TFOD).

<i>This is **not the same** as the "Easy" version, which uses only default settings and official/built-in TFOD model(s), described [**here**](https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/Java-Sample-OpMode-for-TFOD).</i>

This Sample, called "ConceptTensorFlowObjectDetection.java", can recognize **official or custom** FTC game elements and provide their visible size and position. It uses the Java **Builder pattern** to customize standard/default TFOD settings.

For the 2023-2024 game CENTERSTAGE, the official game element is a hexagonal white **Pixel**. The FTC SDK software contains a TFOD model of this object, ready for recognition. That default model was created with a Machine Learning process described [**here**](https://ftc-docs.firstinspires.org/en/latest/ftc_ml/index.html).

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/010-TFOD-recognition.png]]</p>

For extra points, FTC teams may instead use their own custom TFOD models of game elements, called **Team Props** in CENTERSTAGE. That option is covered in this tutorial, along with showing how to use the default model. Custom TFOD models are created by teams using the same [**FTC Machine Learning process**](https://ftc-docs.firstinspires.org/en/latest/ftc_ml/index.html).

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/020-team-props.png]]</p>

This tutorial shows **OnBot Java** screens. Users of **Android Studio** can follow along, since the Sample OpMode is exactly the same.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Downloading the Model

The Robot Controller allows you to load a trained inference model in the form of a TensorFlow Lite (`.tflite`) file.

Here we use the standard FTC `.tflite` file from CENTERSTAGE (2023-2024), available on GitHub at the following link:

[CENTERSTAGE TFLite File](https://github.com/FIRST-Tech-Challenge/WikiSupport/blob/master/tensorflow/CenterStage.tflite)

<!--

<i>For competition, teams can use the FTC [**Machine Learning toolchain**](https://ftc-docs.firstinspires.org/en/latest/ftc_ml/index.html) to train their own custom models of Team Props. This uses the same process as the default model described here; simply specify your custom model filename and labels.</i>

<i>Very advanced teams could use [Google's TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to create their own custom inference model.</i>

-->

Click the "Download Raw File" button to download the `CenterStage.tflite` file from GitHub to your local device (e.g. laptop). See the green arrow.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/030-Centerstage-public-repo.png]]</p>

Now the default TFOD model is stored on your laptop.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Uploading to the Robot Controller

Next, you need to upload the TFOD model to the Robot Controller. Connect your laptop to your Robot Controller's wireless network and navigate to the FTC "Manage" page:

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/040-Manage-page.png]]</p>

Scroll down and click on "Manage TensorFlow Lite Models".

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/050-Manage-TFLite-Models.png]]</p>

Now click the "Upload Models" button.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/060-Upload-Models.png]]</p>

Click "Choose Files", and use the dialog box to find and select the downloaded `CenterStage.tflite` file.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/070-Choose-Files.png]]</p>

Now the file will upload to the Robot Controller. The file will appear in the list of TensorFlow models available for use in OpModes.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/080-Centerstage-tflite.png]]</p>

<i>**Android Studio** users should instead store the TFOD model in the project **Assets** folder. Look for `FtcRobotController -> assets`. Left-click on the Assets folder, choose `Open In` and a file explorer, then copy in your `.tflite` file.</i>

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Creating the OpMode

At the FTC **OnBot Java** browser interface, click on the large black **plus-sign icon** "Add File", to open the New File dialog box.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/100-New-File.png]]</p>

Specify a name for your new OpMode. Select "ConceptTensorFlowObjectDetection" as the Sample OpMode that will be the template for your new OpMode.

This Sample has optional gamepad inputs, so it could be designated as a **TeleOp** OpMode (see green oval above).

Click "OK" to create your new OpMode.

<i>Android Studio users should follow the commented instructions to copy this class from the Samples folder to the Teamcode folder, with a new name. Also remove the `@Disabled` annotation, to make the OpMode visible in the Driver Station list.</i>

The new OpMode should appear in edit mode in your browser.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/110-Sample-Open.png]]</p>

By default, the Sample OpMode assumes you are using a webcam, configured as "Webcam 1". If you are using the built-in camera on your Android RC phone, change the USE_WEBCAM Boolean from `true` to `false` (orange oval above).

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Basic OpMode Settings

This Sample OpMode is **almost** ready to use, for detecting the default/built-in model (white Pixel for CENTERSTAGE).

First, change the filename here:

```java
private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/myCustomModel.tflite";
```

to this:

```java
private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/CenterStage.tflite";
```

Later, you can change this filename to the actual name of your custom TFOD model, such as `myCustomModel.tflite`. Here we are using the default (white Pixel) model just downloaded.

=========

**Android Studio** users should instead store the TFOD model in the project **Assets** folder, and use:

```java
private static final String TFOD_MODEL_ASSET = "CenterStage.tflite";
```

OR (for example)

```java
private static final String TFOD_MODEL_ASSET = "MyModelStoredAsAsset.tflite";
```

=========

You **don't** need to change this line:

```java
// Define the labels recognized in the model for TFOD (must be in training order!)
private static final String[] LABELS = {
    "Pixel",
};
```

... because "Pixel" is the correct and only TFOD Label in that model file.

Later, you might have custom Labels like "myRedProp" and "myBlueProp" (for CENTERSTAGE). The list should be in alphabetical order and contain the labels in the dataset(s) used to make the model.
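
For instance, a custom Team Prop model's labels array might look like this. This is only a sketch: `myBlueProp` and `myRedProp` are hypothetical label names, shown in alphabetical order.

```java
// Hypothetical labels for a custom Team Prop model.
// Must match the labels in your training dataset, in alphabetical order.
private static final String[] LABELS = {
    "myBlueProp",
    "myRedProp",
};
```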

==========

Next, scroll down to the Java method `initTfod()`.

Here is the Java **Builder pattern**, used to specify various settings for the TFOD Processor.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/140-Builder-settings.png]]</p>

The **yellow ovals** indicate its distinctive features: creating the Processor with the `new Builder()` object, and closing the process with the `.build()` method.

<i>This is the streamlined version of the Builder pattern. Notice all the `.set` methods are "chained" to form a single Java expression, ending with a semicolon after `.build()`.</i>

Uncomment two Builder lines, circled above in green:

```java
.setModelFileName(TFOD_MODEL_FILE)
.setModelLabels(LABELS)
```

These Builder settings tell the TFOD Processor which model and labels to use for evaluating camera frames.
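
After uncommenting, the chained Builder looks roughly like this (a sketch; your copy of the Sample may list additional commented-out settings):

```java
// Create the TensorFlow processor with a chained Builder.
// TFOD_MODEL_FILE and LABELS are the constants defined earlier.
tfod = new TfodProcessor.Builder()

    // Load a custom model file and its labels.
    // With these lines commented out, the default CENTERSTAGE model is used.
    .setModelFileName(TFOD_MODEL_FILE)
    .setModelLabels(LABELS)

    .build();
```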

<i>**That's it!**</i> You are ready to test this Sample OpMode.

<i>**Android Studio** users should instead uncomment the line `.setModelAssetName(TFOD_MODEL_ASSET)`.</i>

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Preliminary Testing

Click the "Build Everything" button (wrench icon at lower right), and wait for confirmation "BUILD SUCCESSFUL".

If the Build is prevented by some other OpMode having errors, those must be fixed before your new OpMode can run. For a quick fix, you could right-click on that filename and choose "Disable/Comment". This "comments out" all lines of code, effectively removing that file from the Build. The file can be re-activated later with "Enable/Uncomment".

<i>In Android Studio (or OnBot Java), you can open a problem class/OpMode and type **CTRL-A** and **CTRL-/** to select and "comment out" all lines of code. This is reversible with **CTRL-A** and **CTRL-/** again.</i>

Now run your new OpMode from the Driver Station (on the TeleOp list, if so designated). The OpMode should recognize any CENTERSTAGE white Pixel within the camera's view, based on the trained TFOD model in the SDK.

For a **preview** during the INIT phase, touch the Driver Station's 3-dot menu and select **Camera Stream**.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/200-Sample-DS-Camera-Stream.png]]</p>

Camera Stream is not live video; tap to refresh the image. Use the small white arrows at lower right to expand or revert the preview size. To close the preview, choose 3-dots and Camera Stream again.

After the DS START button is touched, the OpMode displays Telemetry for any recognized Pixel(s):

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/210-Sample-DS-Telemetry.png]]</p>

The above Telemetry shows the Label name and TFOD recognition confidence level. It also gives the **center location** and **size** (in pixels) of the Bounding Box, which is the colored rectangle surrounding the recognized object.

<i>The pixel origin (0, 0) is at the top left corner of the image.</i>

Before and after DS START is touched, the Robot Controller provides a video preview called **LiveView**.

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/240-Sample-RC-LiveView.png]]</p>

For a Control Hub (with no built-in screen), plug in an HDMI monitor or learn about [**`scrcpy`**](https://github.com/Genymobile/scrcpy). The above image is a LiveView screenshot via `scrcpy`.

If you don't have a physical Pixel on hand, try pointing the camera at this image:

<p align="center">[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/300-Sample-Pixel.png]]</p>

For a larger view, right-click the image to open it in a new browser tab.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Program Logic and Initialization

During the INIT stage (before DS START is touched), this OpMode calls a **method to initialize** the TFOD Processor and the FTC VisionPortal. After DS START is touched, the OpMode runs a continuous loop, calling a **method to display telemetry** about any TFOD recognitions. The OpMode also contains three optional features to remind teams about **CPU resource management**, useful in vision processing.

You've already seen the first part of the method `initTfod()`, which uses a streamlined, or "chained", sequence of Builder commands to create the TFOD Processor.

The second part of that method uses regular, non-chained, Builder commands to create the VisionPortal.

```java
// Create the vision portal by using a builder.
VisionPortal.Builder builder = new VisionPortal.Builder();

// Set the camera (webcam vs. built-in RC phone camera).
if (USE_WEBCAM) {
    builder.setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"));
} else {
    builder.setCamera(BuiltinCameraDirection.BACK);
}

// Choose a camera resolution. Not all cameras support all resolutions.
builder.setCameraResolution(new Size(640, 480));

// Enable the RC preview (LiveView). Set "false" to omit camera monitoring.
builder.enableLiveView(true);

// Set the stream format; MJPEG uses less bandwidth than default YUY2.
builder.setStreamFormat(VisionPortal.StreamFormat.YUY2);

// Choose whether or not LiveView stops if no processors are enabled.
// If set "true", monitor shows solid orange screen if no processors enabled.
// If set "false", monitor shows camera view without annotations.
builder.setAutoStopLiveView(false);

// Set and enable the processor.
builder.addProcessor(tfod);

// Build the Vision Portal, using the above settings.
visionPortal = builder.build();
```

All settings have been uncommented here, to see them more easily.

<i>Here the `new Builder()` creates a separate `VisionPortal.Builder` object called `builder`, allowing traditional/individual Java method calls for each setting. For the streamlined "chained" TFOD process, the `new Builder()` operated directly on the TFOD Processor called `tfod`, without creating a `TfodProcessor.Builder` object. Both approaches are valid.</i>

Notice the process again **closes** with a call to the `.build()` method.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Telemetry Method

After DS START is touched, the OpMode continuously calls this method to display telemetry about any TFOD recognitions:

```java
/**
 * Add telemetry about TensorFlow Object Detection (TFOD) recognitions.
 */
private void telemetryTfod() {

    List<Recognition> currentRecognitions = tfod.getRecognitions();
    telemetry.addData("# Objects Detected", currentRecognitions.size());

    // Step through the list of recognitions and display info for each one.
    for (Recognition recognition : currentRecognitions) {
        double x = (recognition.getLeft() + recognition.getRight()) / 2;
        double y = (recognition.getTop() + recognition.getBottom()) / 2;

        telemetry.addData("", " ");
        telemetry.addData("Image", "%s (%.0f %% Conf.)", recognition.getLabel(), recognition.getConfidence() * 100);
        telemetry.addData("- Position", "%.0f / %.0f", x, y);
        telemetry.addData("- Size", "%.0f x %.0f", recognition.getWidth(), recognition.getHeight());
    } // end for() loop

} // end method telemetryTfod()
```

In the first line of code, **all TFOD recognitions** are collected and stored in a List variable. The camera might "see" more than one game element in its field of view, even if not intended (e.g. in CENTERSTAGE, where only one game element is expected).

The `for() loop` then iterates through that List, handling each item, one at a time. Here the "handling" is simply processing certain TFOD fields for DS Telemetry.

The `for() loop` calculates the pixel coordinates of the **center** of each Bounding Box (the preview's colored rectangle around a recognized object).

Telemetry is created for the Driver Station, with the object's name (Label), recognition confidence level (percentage), and the Bounding Box's location and size (in pixels).

For competition, you want to do more than display Telemetry, and you want to exit the main OpMode loop at some point. These code modifications are discussed in another section below.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>


## Resource Management

Vision processing is "expensive", consuming significant **CPU capacity and USB bandwidth** to process the millions of pixels streaming in from the camera.

This Sample OpMode contains three optional features to remind teams about resource management. Overall, the SDK provides [**over 10 tools**](https://ftc-docs.firstinspires.org/en/latest/apriltag/vision_portal/visionportal_cpu_and_bandwidth/visionportal-cpu-and-bandwidth.html) to manage these resources, allowing your OpMode to run effectively.

As the first example, streaming images from the camera can be paused and resumed. This is a very fast transition, freeing CPU resources (and potentially USB bandwidth).

```java
// Save CPU resources; can resume streaming when needed.
if (gamepad1.dpad_down) {
    visionPortal.stopStreaming();
} else if (gamepad1.dpad_up) {
    visionPortal.resumeStreaming();
}
```

By pressing the Dpad buttons, you can observe the off-and-on actions in the RC preview (LiveView), described above. In your competition OpMode, these streaming actions would be programmed, not manually controlled.

===========

The second example, commented out, similarly allows a vision processor (TFOD and/or AprilTag) to be disabled and re-enabled:

```java
// Disable or re-enable the TFOD processor at any time.
visionPortal.setProcessorEnabled(tfod, true);
```

Simply set the Boolean to `false` (to disable), or `true` (to re-enable).
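
For testing, this could also be tied to gamepad buttons, in the same style as the streaming example above (the button choices here are illustrative, not part of the Sample):

```java
// Manually disable or re-enable the TFOD processor, for testing.
if (gamepad1.dpad_left) {
    visionPortal.setProcessorEnabled(tfod, false);
} else if (gamepad1.dpad_right) {
    visionPortal.setProcessorEnabled(tfod, true);
}
```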

===========

The third example: after exiting the main loop, the VisionPortal is closed.

```java
// Save more CPU resources when camera is no longer needed.
visionPortal.close();
```

Teams may consider this at any point when the VisionPortal is no longer needed by the OpMode, freeing valuable CPU resources for other tasks.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Adjusting the Zoom Factor

If the object to be recognized will be more than roughly 2 feet (61 cm) from the camera, you might want to set the digital Zoom factor to a value greater than 1. This tells TensorFlow to use an artificially magnified portion of the image, which may offer more accurate recognitions at greater distances.

```java
// Indicate that only the zoomed center area of each
// image will be passed to the TensorFlow object
// detector. For no zooming, set magnification to 1.0.
tfod.setZoom(2.0);
```

This `setZoom()` method can be placed in the INIT section of your OpMode,

- immediately after the call to the `initTfod()` method, or

- as the very last command inside the `initTfod()` method.

This method is **not** part of the TFOD Processor Builder pattern, so the Zoom factor can be set to other values during the OpMode, if desired.
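
For example, the Zoom factor could be changed at runtime (the gamepad mapping here is illustrative only, not part of the Sample):

```java
// Adjust zoom during the OpMode; 1.0 means no magnification.
if (gamepad1.y) {
    tfod.setZoom(2.0);   // magnify the center region, for distant objects
} else if (gamepad1.a) {
    tfod.setZoom(1.0);   // evaluate the full image
}
```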

The "zoomed" region can be observed in the DS preview (Camera Stream) and the RC preview (LiveView), surrounded by a greyed-out area that is **not evaluated** by the TFOD Processor.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

## Other Adjustments

This Sample OpMode contains another adjustment, commented out:

```java
// Set confidence threshold for TFOD recognitions, at any time.
tfod.setMinResultConfidence(0.75f);
```

The SDK uses a default **minimum confidence** level of 75%. This means the TensorFlow Processor needs a confidence level of 75% or higher, to consider an object as "recognized" in its field of view.

You can see the object name and actual confidence (as a **decimal**, e.g. 0.96) near the Bounding Box, in the Driver Station preview (Camera Stream) and Robot Controller preview (LiveView).

Adjust this parameter to a higher value if you would like the processor to be more selective in identifying an object.

===========

Another option is to define, or clip, a **custom area for TFOD evaluation**, unlike `setZoom` which is always centered.

```java
// Set the number of pixels to obscure on the left, top,
// right, and bottom edges of each image passed to the
// TensorFlow object detector. The size of the images are not
// changed, but the pixels in the margins are colored black.
tfod.setClippingMargins(0, 200, 0, 0);
```

Adjust the four margins as desired, in units of pixels.

These method calls can be placed in the INIT section of your OpMode,

- immediately after the call to the `initTfod()` method, or

- as the very last commands inside the `initTfod()` method.

As with `setProcessorEnabled()` and `setZoom()`, these methods are **not** part of the Processor or VisionPortal Builder patterns, so they can be set to other values during the OpMode, if desired.

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>


## Modifying the Sample

In this Sample OpMode, the main loop ends only when the DS STOP button is touched. For CENTERSTAGE competition, teams should **modify this code** in at least two ways:

- for a significant recognition, take action or store key information -- inside the `for() loop`

- end the main loop based on your criteria, to continue the OpMode

As an example, you might set a Boolean variable `isPixelDetected` (or `isPropDetected`) to `true`, if a significant recognition has occurred.

You might also evaluate and store which randomized Spike Mark (red or blue tape stripe) holds the white Pixel or Team Prop.

Regarding the main loop, it could end after the camera views all three Spike Marks, or after your code provides a high-confidence result. If the camera's view includes more than one Spike Mark position, perhaps the Pixel/Prop's **Bounding Box** size and location could be useful. Teams should consider how long to seek an acceptable recognition, and what to do otherwise.
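
As one way to use the Bounding Box location, a small helper could map the box's center x-coordinate to a Spike Mark position. This is a hypothetical sketch, not part of the Sample: it assumes a 640-pixel-wide image with all three Spike Marks in view, and the thresholds must be tuned for your own camera placement.

```java
// Hypothetical helper: map a recognition's bounding-box center x
// (in pixels, for a 640-wide image) to a Spike Mark position.
// The thresholds split the image into rough thirds; tune as needed.
static String spikeMarkFromCenterX(double centerX) {
    if (centerX < 213) {
        return "LEFT";
    } else if (centerX < 427) {
        return "CENTER";
    } else {
        return "RIGHT";
    }
}
```

Inside the `for() loop`, you would call this with the center already computed there, namely `(recognition.getLeft() + recognition.getRight()) / 2`, and store the result for use after the loop exits.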

In any case, the OpMode should exit the main loop and continue running, using any stored information.

Best of luck this season!

<p align="right"><i>[[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]</i></p>

============

<i>Questions, comments and corrections to westsiderobotics@verizon.net</i>