From 54e0f61eab65a0e828201bf3a537f30b294098e7 Mon Sep 17 00:00:00 2001
From: Westside Robotics
Date: Thu, 21 Oct 2021 08:38:35 -0700
Subject: [PATCH] Updated Blocks Sample Op Mode for TensorFlow Object Detection (markdown)

---
 Blocks-Sample-Op-Mode-for-TensorFlow-Object-Detection.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Blocks-Sample-Op-Mode-for-TensorFlow-Object-Detection.md b/Blocks-Sample-Op-Mode-for-TensorFlow-Object-Detection.md
index 16d517a..bc66c73 100644
--- a/Blocks-Sample-Op-Mode-for-TensorFlow-Object-Detection.md
+++ b/Blocks-Sample-Op-Mode-for-TensorFlow-Object-Detection.md
@@ -18,7 +18,7 @@ Let's take a look at the initial blocks in the op mode. The first block in the

[[https://raw.githubusercontent.com/wiki/WestsideRobotics/FTC-training/Images/010_Blocks_TFOD_webcam_init.png]]
Initialize the Vuforia and TensorFlow libraries.

-You can initialize both the Vuforia and the TensorFlow libraries in the same op mode. This is useful, for example, if you would like to use the TensorFlow library to recognize the Duck cargo and then use the Vuforia library to help the robot autonomously navigate on the game field.
+You can initialize both the Vuforia and the TensorFlow libraries in the same op mode. This is useful, for example, if you would like to use the TensorFlow library to recognize the Duck and then use the Vuforia library to help the robot autonomously navigate on the game field.
 
 Note that in this example the ObjectTracker parameter is set to true for this block, so an _object tracker_ will be used, in addition to the TensorFlow interpreter, to keep track of the locations of detected objects. The object tracker _interpolates_ object recognitions so that results are smoother than they would be if the system were to solely rely on the TensorFlow interpreter.
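For reference, below is a minimal Java sketch of the equivalent SDK-side initialization that the Blocks in this sample represent, assuming the SDK-bundled Freight Frenzy model asset and a webcam configured as "Webcam 1" in the robot configuration. The class name, op mode name, and the `VUFORIA_KEY` placeholder are hypothetical; the `useObjectTracker` field mirrors the ObjectTracker parameter described above.

```java
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

@Autonomous(name = "TFOD Vuforia Init Sketch")
public class TfodVuforiaInitSketch extends LinearOpMode {
    // Hypothetical placeholder -- substitute your own Vuforia developer key.
    private static final String VUFORIA_KEY = "YOUR_VUFORIA_KEY";

    @Override
    public void runOpMode() {
        // Initialize Vuforia first; TensorFlow gets its camera frames from Vuforia.
        VuforiaLocalizer.Parameters vuParams = new VuforiaLocalizer.Parameters();
        vuParams.vuforiaLicenseKey = VUFORIA_KEY;
        vuParams.cameraName = hardwareMap.get(WebcamName.class, "Webcam 1"); // assumed config name
        VuforiaLocalizer vuforia = ClassFactory.getInstance().createVuforia(vuParams);

        // Initialize TensorFlow Object Detection with the object tracker enabled,
        // matching ObjectTracker = true in the Blocks sample.
        int monitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfParams = new TFObjectDetector.Parameters(monitorViewId);
        tfParams.useObjectTracker = true; // interpolate recognitions between frames for smoother results
        TFObjectDetector tfod = ClassFactory.getInstance().createTFObjectDetector(tfParams, vuforia);

        // Freight Frenzy model bundled with the SDK; the Duck is one of its labels.
        tfod.loadModelFromAsset("FreightFrenzy_BCDM.tflite", "Ball", "Cube", "Duck", "Marker");
        tfod.activate();

        waitForStart();

        // ... in the main loop, use tfod.getUpdatedRecognitions() to detect the Duck,
        // and Vuforia image targets to navigate on the game field ...

        tfod.shutdown();
    }
}
```

Because both libraries share one `VuforiaLocalizer` instance, the camera stream is initialized once and can serve object detection and navigation in the same op mode, just as the combined Blocks initialization does.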