diff --git a/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects.md b/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects.md
index 6714c6e..d009d5d 100644
--- a/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects.md
+++ b/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects.md
@@ -209,10 +209,12 @@ Modify the initTfod() method to load the inference model from a file (rather tha
```
### Running the Op Mode
-Once you have made the changes to the sample op mode, rebuild the OnBot Java op modes and run the op mode to test it. The robot controller should now be able to detect everyday objects such as a cell phone, a teddy bear, a clock, a computer mouse, and a keyboard and will draw boundary boxes around recognized objects on the robot controller and display label information on the driver station (using telemetry).
+Once you have made the changes to the sample op mode, rebuild the OnBot Java op modes and run the op mode to test it. The robot controller should now be able to detect everyday objects such as a cell phone, a teddy bear, a clock, a computer mouse, and a keyboard, and it will draw bounding boxes around recognized objects on the robot controller.
[[/images/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects/tfodRC.png]]
TensorFlow will recognize everyday objects like a cell phone.
+The op mode will also display label information on the driver station (using telemetry) for the objects that it recognizes in its field of view.
+
[[/images/Using-a-TensorFlow-Pretrained-Model-to-Detect-Everyday-Objects/tfodDS.png]]
The op mode will display label information for recognized objects on the driver station using telemetry.
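
For context, that telemetry output typically comes from a short reporting loop in the op mode. The sketch below is only illustrative and is based on the SDK's Vuforia-era `TFObjectDetector` API; the `tfod` and `telemetry` parameter names are assumptions, and the actual code appears in the full op mode below.

```java
import java.util.List;

import org.firstinspires.ftc.robotcore.external.Telemetry;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

// Illustrative sketch only: "tfod" and "telemetry" are assumed to be the
// TFObjectDetector and Telemetry objects already initialized by the op mode.
void reportRecognitions(TFObjectDetector tfod, Telemetry telemetry) {
    List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
    if (updatedRecognitions == null) {
        return;  // No new detections since the last call; nothing to report.
    }
    telemetry.addData("# Objects Detected", updatedRecognitions.size());
    for (Recognition recognition : updatedRecognitions) {
        // Label and confidence for each recognized object.
        telemetry.addData("Label", "%s (%.0f%% confidence)",
                recognition.getLabel(), recognition.getConfidence() * 100);
        // Bounding box coordinates, in pixels, within the camera image.
        telemetry.addData("Bounding box", "left %.0f, top %.0f, right %.0f, bottom %.0f",
                recognition.getLeft(), recognition.getTop(),
                recognition.getRight(), recognition.getBottom());
    }
    telemetry.update();
}
```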
The example op mode (with the Vuforia license key omitted) is included below for reference: