Updated Blocks Sample TensorFlow Object Detection Op Mode (markdown)

FTC Engineering
2020-10-27 08:59:26 -04:00
parent 07244f9186
commit 3c426bdc6a

@@ -67,13 +67,13 @@ Let's modify the sample Blocks Op Mode so it will indicate which target zone the
Also, use the Blocks editor to modify the function "displayInfo" to check the labels of the recognized object. If the label reads "Single" then send a telemetry message to indicate target zone B. If the label reads "Quad" then send a telemetry message to indicate target zone C. If the label is neither "Single" nor "Quad", send a telemetry message indicating that the target zone is unknown.
Note that in this example, since the op mode iterates through the list of recognized objects, the target zone will be displayed for each recognized object in the list.
<p align="center">[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/otherTargetZones.png]]<br/>Check the recognized object's label to see which target zone to go after.<p> <p align="center">[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/otherTargetZones.png]]<br/>Check the recognized object's label to see which target zone to go after.<p>
Save the op mode and re-run it. The op mode should display the target zone for each object in its list of recognized objects. Note that if you test this op mode with multiple ring stacks, the order of the detected objects can change with each iteration of your op mode.
<p align="center">[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/modifiedBlocksExample.png]]<br/>The modified op mode should indicate target zone based on the label of last recognized object in its list.<p> <p align="center">[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/modifiedBlocksExample.png]]<br/>The modified op mode should indicate target zone for each recognized object in its list.<p>
### Important Note Regarding Image Orientation
If you are using a webcam with your Robot Controller, then the camera orientation is fixed in landscape mode. However, if you are using a smartphone camera, the system will interpret images based on the phone's orientation (Portrait or Landscape) at the time that the TensorFlow object detector is created and initialized.
@@ -106,7 +106,3 @@ When the example Op Mode is no longer active (i.e., when the user has pressed th
<p align="center">[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/blocksTensorFlowDeactivate.png]]<br/>Deactivate TensorFlow.<p> <p align="center">[[/images/Blocks-Sample-TensorFlow-Object-Detection-Op-Mode/blocksTensorFlowDeactivate.png]]<br/>Deactivate TensorFlow.<p>
### Using a Custom Inference Model
Users with advanced programming knowledge can use Google's [TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to create their own custom inference model. If you have a custom inference model, you can import that model into a Blocks op mode, and use it to look for and track custom targets.
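
In a Java op mode, a custom .tflite model can be loaded with the TFObjectDetector's loadModelFromFile (or loadModelFromAsset) method. The sketch below is illustrative only; the file path and label names are placeholders for your own model, and the detector is assumed to have been created in the usual way from a TFObjectDetector.Parameters object.

```java
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

// Minimal sketch: load a custom inference model into an existing detector.
// The path and label names are placeholders; substitute your own model's values.
void initCustomModel(TFObjectDetector tfod) {
    // Load the custom .tflite file from the Robot Controller's storage,
    // listing the labels the model was trained to recognize.
    tfod.loadModelFromFile("/sdcard/FIRST/tflitemodels/MyCustomModel.tflite",
            "LabelA", "LabelB");
    tfod.activate();  // start processing camera frames with the custom model
}
```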