From 4dcf9cb6f3d34e9333037ca40f4b9713f3634246 Mon Sep 17 00:00:00 2001
From: FTC Engineering
Date: Tue, 27 Oct 2020 14:51:28 -0400
Subject: [PATCH] Updated Blocks Sample TensorFlow Object Detection Op Mode
 (markdown)

---
 Blocks-Sample-TensorFlow-Object-Detection-Op-Mode.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode.md b/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode.md
index 737499d..8a63631 100644
--- a/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode.md
+++ b/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode.md
@@ -17,15 +17,13 @@ Let's take a look at the initial blocks in the op mode. The first block in the

[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/blocksInit.png]]
Initialize the Vuforia and TensorFlow libraries.

-In the screenshot shown above, the sample op mode disables the camera monitoring window on the Robot Controller. If you are using a REV Robotics Control Hub as your Robot Controller (which lacks a touch screen) you normally want to disable the camera monitoring window and use the Camera Stream function instead to view the output of the webcam. If you are using an Android phone as your Robot Controller, however, you can enable the camera monitoring window so you can see the camera output on the Robot Controller's touch screen.
-
You can initialize both the Vuforia and the TensorFlow libraries in the same op mode. This is useful, for example, if you would like to use the TensorFlow library to determine the ring stack and then use the Vuforia library to help the robot navigate autonomously from its starting position to the appropriate target zone on the game field. Note that in this example the ObjectTracker parameter is set to true for this block, so an _object tracker_ will be used, in addition to the TensorFlow interpreter, to keep track of the locations of detected objects. The object tracker _interpolates_ object recognitions, so the results are smoother than they would be if the system relied solely on the TensorFlow interpreter. Also note that the Minimum Confidence level is set to 70%. This means the TensorFlow library must be at least 70% confident before it considers an object detected in its field of view. You can raise this value if you would like the system to be more selective in identifying an object.

-If a camera monitor window is enabled for the TensorFlow library, then the confidence level for a detected target will be displayed near the bounding box of the identified object (when the object tracker is enabled). For example, a value of "0.92" indicates a 92% confidence that the object has been identified correctly.
+The confidence level for a detected target will be displayed on the Robot Controller near the bounding box of the identified object (when the object tracker is enabled). For example, a value of "0.92" indicates a 92% confidence that the object has been identified correctly.

When an object is identified by the TensorFlow library, the op mode can read the "Left", "Right", "Top" and "Bottom" values associated with the detected object. These values correspond to the locations of the left, right, top, and bottom boundaries of the detection box for that object, in pixel coordinates of the image from the camera.
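For readers working in Java rather than Blocks, a minimal sketch of the same initialization is below. It is not part of the patch above; it assumes the 2020-21 (Ultimate Goal) FTC SDK, a webcam configured as "Webcam 1", and a placeholder Vuforia license key, and the class and op mode names are made up for illustration. The `useObjectTracker` and `minResultConfidence` fields mirror the ObjectTracker and Minimum Confidence settings described above.

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

@Autonomous(name = "TFOD Init Sketch")
public class TfodInitSketch extends LinearOpMode {
    private static final String TFOD_MODEL_ASSET = "UltimateGoal.tflite";
    private static final String LABEL_FIRST_ELEMENT = "Quad";
    private static final String LABEL_SECOND_ELEMENT = "Single";
    private static final String VUFORIA_KEY = "-- YOUR KEY HERE --"; // placeholder

    private VuforiaLocalizer vuforia;
    private TFObjectDetector tfod;

    @Override
    public void runOpMode() {
        // Vuforia is initialized first; TensorFlow uses it as its frame source.
        VuforiaLocalizer.Parameters vuforiaParams = new VuforiaLocalizer.Parameters();
        vuforiaParams.vuforiaLicenseKey = VUFORIA_KEY;
        vuforiaParams.cameraName = hardwareMap.get(WebcamName.class, "Webcam 1");
        vuforia = ClassFactory.getInstance().createVuforia(vuforiaParams);

        // TensorFlow settings matching the Blocks shown above: object tracker
        // enabled, 70% minimum confidence.
        TFObjectDetector.Parameters tfodParams = new TFObjectDetector.Parameters();
        tfodParams.useObjectTracker = true;
        tfodParams.minResultConfidence = 0.7f;
        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParams, vuforia);
        tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, LABEL_SECOND_ELEMENT);

        tfod.activate();
        waitForStart();
        // ... detection loop would go here ...
        tfod.shutdown();
    }
}
```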
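Similarly, a sketch of the detection loop that reads the confidence and the "Left", "Right", "Top" and "Bottom" values described above; it could replace the placeholder comment inside `runOpMode()` in the previous sketch. `getUpdatedRecognitions()` and the `Recognition` getters are standard FTC SDK calls; everything else follows the assumptions of the sketch above.

```java
// Additional imports at the top of the file:
// import java.util.List;
// import org.firstinspires.ftc.robotcore.external.tfod.Recognition;

while (opModeIsActive()) {
    // getUpdatedRecognitions() returns null when no new frame has been
    // processed since the last call; getRecognitions() would instead
    // return the most recent results.
    List<Recognition> recognitions = tfod.getUpdatedRecognitions();
    if (recognitions != null) {
        for (Recognition recognition : recognitions) {
            telemetry.addData("label", recognition.getLabel());
            // Confidence as described above: 0.92 means 92% confident.
            telemetry.addData("confidence", "%.2f", recognition.getConfidence());
            // Boundaries of the detection box, in pixel coordinates
            // of the camera image.
            telemetry.addData("left/right", "%.0f / %.0f",
                    recognition.getLeft(), recognition.getRight());
            telemetry.addData("top/bottom", "%.0f / %.0f",
                    recognition.getTop(), recognition.getBottom());
        }
        telemetry.update();
    }
}
```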