diff --git a/TFOD-Sample-OpMode-with-Java-Builder.md b/TFOD-Sample-OpMode-with-Java-Builder.md
index 55fa858..c84874c 100644
--- a/TFOD-Sample-OpMode-with-Java-Builder.md
+++ b/TFOD-Sample-OpMode-with-Java-Builder.md
@@ -20,12 +20,12 @@ https://ftc-docs.firstinspires.org/en/latest/ftc_ml/implement/obj/obj.html
 
 ## Introduction
 
-This tutorial describes the regular, or **Builder**, version of the FTC Java Sample OpMode for TensorFlow Object Detection (TFOD).
+This tutorial describes the regular, or **Builder**, version of the FTC Java **Sample OpMode** for TensorFlow Object Detection (TFOD).
+
+This Sample, called **"ConceptTensorFlowObjectDetection.java"**, can recognize **official or custom** FTC game elements and provide their visible size and position. It uses the Java **Builder pattern** to customize standard/default TFOD settings. This is **not the same** as the "Easy" version, which uses only default settings and official/built-in TFOD model(s), described [**here**](https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/Java-Sample-OpMode-for-TFOD).
 
-This Sample, called "ConceptTensorFlowObjectDetection.java", can recognize **official or custom** FTC game elements and provide their visible size and position. It uses the Java **Builder pattern** to customize standard/default TFOD settings.
-
 For the 2023-2024 game CENTERSTAGE, the official game element is a hexagonal white **Pixel**. The FTC SDK software contains a TFOD model of this object, ready for recognition. That default model was created with a Machine Learning process described [**here**](https://ftc-docs.firstinspires.org/en/latest/ftc_ml/index.html).
[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/010-TFOD-recognition.png]]
@@ -34,7 +34,7 @@ For extra points, FTC teams may instead use their own custom TFOD models of game
 
 [[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/020-team-props.png]]
-This tutorial shows **OnBot Java** screens. Users of **Android Studio** can follow along, since the Sample OpMode is exactly the same.
+This tutorial shows **OnBot Java** screens. Users of **Android Studio** can follow along with a few noted exceptions, since the Sample OpMode is exactly the same.
 
 [[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]
@@ -64,7 +64,7 @@ Now the default TFOD model is stored on your laptop.
 
 ## Uploading to the Robot Controller
 
-Next, you need to upload the TFOD model to the Robot Controller. Connect your laptop to your Robot Controller's wireless network and navigate to the FTC "Manage" page:
+Next, you need to upload the TFOD model to the Robot Controller. Connect your laptop to your Robot Controller's wireless network, open the Chrome browser, and navigate to the FTC "Manage" page:
[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/040-Manage-page.png]]
@@ -84,7 +84,7 @@ Now the file will upload to the Robot Controller. The file will appear in the l
 
 [[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/080-Centerstage-tflite.png]]
-**Android Studio** users should instead store the TFOD model in the project **Assets** folder. Look for `FtcRobotController -> assets`. Left-click on the Assets folder, choose `Open In` and a file explorer, then copy in your `.tflite` file.
+**Android Studio** users should instead store the TFOD model in the project **assets** folder. Look for `FtcRobotController -> assets`. Left-click on the assets folder, choose `Open In` and a file explorer, then copy/paste your `.tflite` file.
 
 [[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]
@@ -94,7 +94,7 @@ At the FTC **OnBot Java** browser interface, click on the large black **plus-sig
[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/100-New-File.png]]
-Specify a name for your new OpMode. Select "ConceptTensorFlowObjectDetection" as the Sample OpMode that will be the template for your new OpMode.
+Specify a name for your new OpMode. Select **"ConceptTensorFlowObjectDetection"** as the Sample OpMode to be the template for your new OpMode.
 
 This Sample has optional gamepad inputs, so it could be designated as a **TeleOp** OpMode (see green oval above).
 
@@ -102,11 +102,11 @@ Click "OK" to create your new OpMode.
 
 Android Studio users should follow the commented instructions to copy this class from the Samples folder to the Teamcode folder, with a new name. Also remove the `@Disabled` annotation, to make the OpMode visible in the Driver Station list.
 
-The new OpMode should appear in edit mode in your browser.
+The new OpMode should appear in the editing window of OnBot Java.
 
 [[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/110-Sample-Open.png]]
-By default, the Sample OpMode assumes you are using a webcam, configured as "Webcam 1". If you are using the built-in camera on your Android RC phone, change the USE_WEBCAM Boolean from `true` to `false` (orange oval above).
+By default, the Sample OpMode assumes you are using a webcam, configured as "Webcam 1". If instead you are using the built-in camera on your Android RC phone, change the USE_WEBCAM Boolean from `true` to `false` (orange oval above).
 
 [[Return to Top|https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/TFOD-Sample-OpMode-with-Java-Builder]]
@@ -126,11 +126,11 @@ to this:
 
 private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/CenterStage.tflite";
 ```
 
-Later, you can change this filename to the actual name of your custom TFOD model. Here we are using the default (white Pixel) model just downloaded.
+Later, you can change this filename back to the actual name of your custom TFOD model. Here we are using the default (white Pixel) model just downloaded.
 
 =========
 
-**Android Studio** users should instead store the TFOD model in the project **Assets** folder, and use:
+**Android Studio** users should instead verify or store the TFOD model in the project **assets** folder, and use:
 
 ```java
 private static final String TFOD_MODEL_ASSET = "CenterStage.tflite";
@@ -144,7 +144,7 @@ private static final String TFOD_MODEL_ASSET = "MyModelStoredAsAsset.tflite";
 
 =========
 
-You **don't** need to change this line:
+This line **does not** need to be changed:
 
 ```java
 // Define the labels recognized in the model for TFOD (must be in training order!)
@@ -155,7 +155,7 @@ private static final String[] LABELS = {
 
 ... because "Pixel" is the correct and only TFOD Label in that model file.
 
-Later, you might have custom Labels like "myRedProp" and "myBlueProp" (for CENTERSTAGE). The list should be in alphabetical order and contain the labels in the dataset(s) used to make the model.
+Later, you might have custom Labels like "myRedProp" and "myBlueProp" (for CENTERSTAGE). The list should be in alphabetical order and contain the labels in the dataset(s) used to make the TFOD model.
 
 ==========
 
@@ -165,7 +165,7 @@ Here is the Java **Builder pattern**, used to specify various settings for the T
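(Editor's aside on the alphabetical-order requirement for custom labels: it can be sanity-checked with a few lines of plain Java. This is ordinary Java, not FTC SDK code, and the label values are just the hypothetical examples mentioned above.)

```java
import java.util.Arrays;

public class LabelCheck {
    // Returns true if the labels are already in alphabetical order,
    // as required for the LABELS array passed to the TFOD processor.
    static boolean isAlphabetical(String[] labels) {
        String[] sorted = labels.clone();
        Arrays.sort(sorted);
        return Arrays.equals(labels, sorted);
    }

    public static void main(String[] args) {
        String[] labels = { "myBlueProp", "myRedProp" };   // hypothetical custom labels
        System.out.println(isAlphabetical(labels));        // prints "true"
    }
}
```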
[[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/140-Builder-settings.png]]
-The **yellow ovals** indicate its distinctive features: creating the Processor with the `new Builder()` object, and closing the process with the `.build()` method.
+The **yellow ovals** indicate its distinctive features: create the Processor with the `new Builder()` object, and close/finalize with the `.build()` method.
 
 This is the streamlined version of the Builder pattern. Notice all the `.set` methods are "chained" to form a single Java expression, ending with a semicolon after `.build()`.
 
@@ -192,13 +192,13 @@ If Build is prevented by some other OpMode having errors/issues, they must be fi
 
 In Android Studio (or OnBot Java), you can open a problem class/OpMode and type **CTRL-A** and **CTRL-/** to select and "comment out" all lines of code. This is reversible with **CTRL-A** and **CTRL-/** again.
 
-Now run your new OpMode from the Driver Station (on the TeleOp list, if so designated). The OpMode should recognize any CENTERSTAGE white Pixel within the camera's view, based on the trained TFOD model in the SDK.
+Now run your new OpMode from the Driver Station (in the TeleOp list, if so designated). The OpMode should recognize any CENTERSTAGE white Pixel within the camera's view, based on the trained TFOD model.
 
 For a **preview** during the INIT phase, touch the Driver Station's 3-dot menu and select **Camera Stream**.
 
 [[https://github.com/FIRST-Tech-Challenge/FtcRobotController/wiki/images/TFOD-Sample-OpMode-with-Java-Builder/200-Sample-DS-Camera-Stream.png]]
-Camera Stream is not live video; tap to refresh the image. Use the small white arrows at lower right to expand or revert the preview size. To close the preview, choose 3-dots and Camera Stream again.
+Camera Stream is not live video; tap to refresh the image. Use the small white arrows at bottom right to expand or revert the preview size. To close the preview, choose 3-dots and Camera Stream again.
 
 After the DS START button is touched, the OpMode displays Telemetry for any recognized Pixel(s):
 
@@ -224,7 +224,7 @@ For a larger view, right-click the image to open in a new browser tab.
 
 ## Program Logic and Initialization
 
-During the INIT stage (before DS START is touched), this OpMode calls a **method to initialize** the TFOD Processor and the FTC VisionPortal. After DS START is touched, the OpMode runs a continuous loop, calling a **method to display telemetry** about any TFOD recognitions. The OpMode also contains three optional features to remind teams about **CPU resource management**, useful in vision processing.
+During the INIT stage (before DS START is touched), this OpMode calls a **method to initialize** the TFOD Processor and the FTC VisionPortal. After DS START is touched, the OpMode runs a continuous loop, calling a **method to display telemetry** about any TFOD recognitions. The OpMode also contains optional features to remind teams about **CPU resource management**, useful in vision processing.
 
 You've already seen the first part of the method `initTfod()` which uses a streamlined, or "chained", sequence of Builder commands to create the TFOD Processor.
 
@@ -264,7 +264,7 @@ visionPortal = builder.build();
 
 All settings have been uncommented here, to see them more easily.
 
-Here the `new Builder()` creates a separate `VisionPortal.Builder` object called `builder`, allowing traditional/individual Java method calls for each setting. For the streamlined "chained" TFOD process, the `new Builder()` operated directly on the TFOD Processor called `tfod`, without creating a `TfodProcesssor.Builder` object. Both approaches are valid.
+Here the `new Builder()` creates a separate `VisionPortal.Builder` object called `builder`, allowing traditional/individual Java method calls for each setting. For the streamlined "chained" TFOD process, the `new Builder()` operated directly on the TFOD Processor called `tfod`, without creating a `TfodProcessor.Builder` object. Both approaches are valid.
 
 Notice the process again **closes** with a call to the `.build()` method.
 
@@ -315,16 +315,16 @@ Vision processing is "expensive", using much **CPU capacity and USB bandwidth**
 
 This Sample OpMode contains three optional features to remind teams about resource management. Overall, the SDK provides [**over 10 tools**](https://ftc-docs.firstinspires.org/en/latest/apriltag/vision_portal/visionportal_cpu_and_bandwidth/visionportal-cpu-and-bandwidth.html) to manage these resources, allowing your OpMode to run effectively.
 
-As the first example, streaming images from the camera can be paused and resumed. This is a very fast transition, freeing CPU resources (and potentially USB bandwidth).
+As the first example, **streaming images** from the camera can be paused and resumed. This is a very fast transition, freeing CPU resources (and potentially USB bandwidth).
 
 ```java
-    // Save CPU resources; can resume streaming when needed.
-    if (gamepad1.dpad_down) {
-        visionPortal.stopStreaming();
-    } else if (gamepad1.dpad_up) {
-        visionPortal.resumeStreaming();
-    }
+// Save CPU resources; can resume streaming when needed.
+if (gamepad1.dpad_down) {
+    visionPortal.stopStreaming();
+} else if (gamepad1.dpad_up) {
+    visionPortal.resumeStreaming();
+}
 ```
 
 Pressing the Dpad buttons, you can observe the off-and-on actions in the RC preview (LiveView), described above.
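(Editor's aside on the two Builder styles discussed above, chained vs. a separate builder object: both can be mimicked in a self-contained toy sketch. The `Processor` class and its setter names below are invented purely for illustration; this is not the FTC SDK's `TfodProcessor` or `VisionPortal`.)

```java
public class BuilderStylesDemo {
    // Toy class with a nested Builder, mimicking the SDK's pattern.
    static class Processor {
        final String model;
        final float minConfidence;

        Processor(String model, float minConfidence) {
            this.model = model;
            this.minConfidence = minConfidence;
        }

        static class Builder {
            private String model = "CenterStage.tflite"; // default model name
            private float minConfidence = 0.75f;         // SDK-style 75% default

            // Each setter returns `this`, which is what makes chaining possible.
            Builder setModelFileName(String name) { model = name; return this; }
            Builder setMinResultConfidence(float c) { minConfidence = c; return this; }
            Processor build() { return new Processor(model, minConfidence); }
        }
    }

    public static void main(String[] args) {
        // Style 1: streamlined "chained" form, one expression ending in .build().
        Processor chained = new Processor.Builder()
                .setModelFileName("myCustomModel.tflite")
                .setMinResultConfidence(0.80f)
                .build();

        // Style 2: separate Builder object, one call per setting, then .build().
        Processor.Builder builder = new Processor.Builder();
        builder.setModelFileName("myCustomModel.tflite");
        builder.setMinResultConfidence(0.80f);
        Processor traditional = builder.build();

        // Both styles produce the same configured object.
        System.out.println(chained.model.equals(traditional.model)); // prints "true"
    }
}
```

Because each setter returns the builder itself, the chained form is simply the traditional form collapsed into one Java expression.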
 In your competition OpMode, these streaming actions would be programmed, not manually controlled.
 
@@ -345,8 +345,8 @@ Simply set the Boolean to `false` (to disable), or `true` (to re-enable).
 
 The third example: after exiting the main loop, the VisionPortal is closed.
 
 ```java
-    // Save more CPU resources when camera is no longer needed.
-    visionPortal.close();
+// Save more CPU resources when camera is no longer needed.
+visionPortal.close();
 ```
 
 Teams may consider this at any point when the VisionPortal is no longer needed by the OpMode, freeing valuable CPU resources for other tasks.
 
@@ -358,10 +358,10 @@ Teams may consider this at any point when the VisionPortal is no longer needed b
 
 If the object to be recognized will be more than roughly 2 feet (61 cm) from the camera, you might want to set the digital Zoom factor to a value greater than 1. This tells TensorFlow to use an artificially magnified portion of the image, which may offer more accurate recognitions at greater distances.
 
 ```java
-    // Indicate that only the zoomed center area of each
-    // image will be passed to the TensorFlow object
-    // detector. For no zooming, set magnification to 1.0.
-    tfod.setZoom(2.0);
+// Indicate that only the zoomed center area of each
+// image will be passed to the TensorFlow object
+// detector. For no zooming, set magnification to 1.0.
+tfod.setZoom(2.0);
 ```
 
 This `setZoom()` method can be placed in the INIT section of your OpMode,
@@ -389,18 +389,18 @@ The SDK uses a default **minimum confidence** level of 75%. This means the Tenso
 
 You can see the object name and actual confidence (as a **decimal**, e.g. 0.96) near the Bounding Box, in the Driver Station preview (Camera Stream) and Robot Controller preview (Liveview).
 
-Adjust this parameter to a higher value if you would like the processor to be more selective in identifying an object.
+Adjust this parameter to a higher value if you want the processor to be more selective in identifying an object.
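(Editor's aside: the minimum-confidence behavior described above can be pictured with a small plain-Java filter. The recognition labels and confidence values here are made up for illustration; the SDK applies this threshold internally.)

```java
import java.util.ArrayList;
import java.util.List;

public class ConfidenceFilterDemo {
    // Keep only recognitions at or above the minimum confidence,
    // mirroring how low-confidence results are discarded.
    static List<String> filter(String[] labels, double[] confidences, double minConfidence) {
        List<String> kept = new ArrayList<>();
        for (int i = 0; i < labels.length; i++) {
            if (confidences[i] >= minConfidence) {
                kept.add(labels[i]);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        String[] labels = { "Pixel", "Pixel" };          // made-up example recognitions
        double[] conf   = { 0.96, 0.60 };
        // With the default 0.75 threshold, only the 0.96 recognition survives.
        System.out.println(filter(labels, conf, 0.75).size()); // prints "1"
    }
}
```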
 ===========
 
 Another option is to define, or clip, a **custom area for TFOD evaluation**, unlike `setZoom` which is always centered.
 
 ```java
-    // Set the number of pixels to obscure on the left, top,
-    // right, and bottom edges of each image passed to the
-    // TensorFlow object detector. The size of the images are not
-    // changed, but the pixels in the margins are colored black.
-    tfod.setClippingMargins(0, 200, 0, 0);
+// Set the number of pixels to obscure on the left, top,
+// right, and bottom edges of each image passed to the
+// TensorFlow object detector. The size of the images are not
+// changed, but the pixels in the margins are colored black.
+tfod.setClippingMargins(0, 200, 0, 0);
 ```
 
 Adjust the four margins as desired, in units of pixels.
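(Editor's aside: the effect of the four clipping margins can be reasoned about with simple arithmetic. This plain-Java sketch assumes a hypothetical 640×480 camera image; it is not SDK code.)

```java
public class ClippingDemo {
    // Compute the width and height of the region left un-obscured
    // after margins (left, top, right, bottom) are blacked out.
    static int[] visibleSize(int width, int height,
                             int left, int top, int right, int bottom) {
        return new int[] { width - left - right, height - top - bottom };
    }

    public static void main(String[] args) {
        // Margins from the sample call: 0 left, 200 top, 0 right, 0 bottom.
        int[] size = visibleSize(640, 480, 0, 200, 0, 0);
        System.out.println(size[0] + "x" + size[1]);   // prints "640x280"
    }
}
```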