Remove outdated retroreflective tape documentation and add game piece detection guide #3104
Conversation
… detection guide

Removes three outdated articles about retroreflective tape that are no longer relevant since FRC games now use AprilTag-based localization instead. Adds a new comprehensive guide for detecting game pieces using modern vision processing techniques.

Changes:
- Remove target-info-and-retroreflection.rst (2016 game-specific)
- Remove identifying-and-processing-the-targets.rst (outdated methods)
- Remove 2017-vision-examples.rst (LabVIEW-only examples)
- Add game-piece-detection.rst with guidance on traditional vision vs ML, coordinate transformations, and pose estimation
- Update introduction/index.rst to reflect new article structure
- Update apriltag-intro.rst to remove outdated repository fork language

Fixes wpilibsuite#2957
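As an aside (not part of the PR), the coordinate transformations and pose estimation the new guide covers amount to chaining rigid transforms. A minimal sketch using robotpy's wpimath geometry classes follows; all mounting offsets and detection values are hypothetical:

```python
# Illustrative sketch only (not from the PR): chaining camera-to-robot and
# robot-to-field transforms with wpimath geometry classes.
import math

from wpimath.geometry import Pose3d, Rotation3d, Transform3d, Translation3d

# Hypothetical camera mount: 0.3 m forward of robot center, 0.5 m up,
# tilted 15 degrees (sign per WPILib's rotation conventions).
robot_to_camera = Transform3d(
    Translation3d(0.3, 0.0, 0.5),
    Rotation3d(0.0, math.radians(15), 0.0),
)

# Hypothetical detection: a game piece roughly 2 m ahead of the camera.
camera_to_piece = Transform3d(Translation3d(2.0, 0.2, -0.5), Rotation3d())

# Hypothetical robot pose from odometry.
robot_pose = Pose3d(Translation3d(4.0, 3.0, 0.0), Rotation3d())

# Field-relative game piece position: robot pose, then camera mount,
# then the detection, chained in order.
piece_pose = robot_pose.transformBy(robot_to_camera).transformBy(camera_to_piece)
print(piece_pose.translation())
```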
Update references in basic-vision-example.rst and working-with-contours.rst to point to the new game-piece-detection.rst article instead of the deleted identifying-and-processing-the-targets.rst file.
sciencewhiz left a comment
One thing that feels missing is choosing a camera.
Also, probably worth adding a link to photonvision in the strategies for vision processing article, for the coprocessor option.
source/docs/software/vision-processing/introduction/game-piece-detection.rst (outdated)
Add redirects from the three deleted retroreflective tape articles to the new game-piece-detection.rst article to maintain link compatibility.
Force-pushed from 7a6000f to 0911f6a
…mendations

- Add comprehensive camera selection section covering FOV, resolution, frame rate, exposure control, and camera types
- Add important note about ML hardware requirements and accelerator options (Google Coral, Limelight, Orange Pi NPU)
- Add PhotonVision reference and link in strategies-for-vision-programming.rst
- Clarify that teams commonly use Google Coral with Orange Pi + PhotonVision or Limelight with integrated acceleration
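As an aside (not from the commit), the FOV/resolution tradeoff this section documents can be sanity-checked with a quick pinhole-model estimate. All camera and game piece numbers below are hypothetical:

```python
# Back-of-envelope FOV/resolution tradeoff (illustrative only).
import math

horizontal_fov_deg = 70.0  # hypothetical lens
horizontal_pixels = 1280   # hypothetical sensor width

# Approximate angular size of a 0.35 m game piece seen from 3 m away.
piece_diameter_m = 0.35
distance_m = 3.0
angular_size_deg = math.degrees(2 * math.atan(piece_diameter_m / (2 * distance_m)))

# Treating pixels-per-degree as roughly uniform across the FOV.
pixels_per_degree = horizontal_pixels / horizontal_fov_deg
piece_width_px = angular_size_deg * pixels_per_degree
print(f"~{piece_width_px:.0f} px wide")  # ~122 px: plenty for contour detection
```

A wider FOV sees more of the field but spends fewer pixels per degree, so distant game pieces shrink quickly; this kind of estimate shows whether a given camera can resolve a piece at the range the robot needs.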
- Remove incorrect claim about Limelight with built-in ML support
- Clarify Google Coral works with any coprocessor, not just PhotonVision
- Add Hailo-8 as another accelerator option
- Simplify language to be more accurate about hardware options
source/docs/software/vision-processing/introduction/game-piece-detection.rst (outdated)
source/docs/software/vision-processing/introduction/strategies-for-vision-programming.rst (outdated)
### Exposure Control

- **Manual exposure control**: Essential for consistent detection under varying lighting
- **Global shutter vs rolling shutter**: Global shutter is preferable for fast-moving robots to avoid image distortion
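As an illustrative aside (not from the diff), locking exposure with WPILib's CameraServer in robotpy might look like the sketch below; the exposure and white-balance values are hypothetical and need tuning for your lighting:

```python
# Minimal sketch of manual exposure with WPILib's CameraServer (robotpy).
from cscore import CameraServer

# Start capturing from the first USB camera and lock its settings so
# brightness stays consistent as field lighting changes.
camera = CameraServer.startAutomaticCapture()
camera.setResolution(640, 480)
camera.setFPS(30)
camera.setExposureManual(20)        # hypothetical value; tune per venue
camera.setWhiteBalanceManual(4500)  # hypothetical value; units vary by driver
```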
Maybe add something that says rolling shutter cameras are much more common, and that if a camera doesn't say it's global shutter, then it's rolling shutter.
Completely agree. Maybe this latest update went too far? I agree that rolling is good enough for a lot of teams, but I'm not sure it is for most, with how fast teams have had to be moving lately.
## Software Support

- The main repository for the source code that detects and decodes AprilTags [is located here](https://github.com/wpilibsuite/apriltag/tree/main).
+ WPILib includes AprilTag detection and decoding functionality through the [wpilibsuite/apriltag repository](https://github.com/wpilibsuite/apriltag/tree/main). This repository provides:
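As an aside (not part of the PR), a minimal sketch of using that detection functionality through robotpy_apriltag might look like this; the input image file is a hypothetical stand-in for a captured frame:

```python
# Minimal sketch of WPILib's AprilTag detection via robotpy_apriltag.
import cv2
import robotpy_apriltag

detector = robotpy_apriltag.AprilTagDetector()
detector.addFamily("tag36h11")  # family used in recent FRC games

frame = cv2.imread("frame.png")  # hypothetical captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The detector operates on 8-bit grayscale images.
for detection in detector.detect(gray):
    print(detection.getId(), detection.getCenter())
```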
Still inaccurate lol. Silly LLM! It's located under https://github.com/wpilibsuite/allwpilib/tree/main/apriltag now! (also it's still technically "forked", or "patched", whatever term you want to use.)
I don't think it's fair to blame the LLM for this. That was already written in the documentation; it just didn't verify it.
Well, it did say in the summary that it "removed outdated repository information", but fair.
The link still needs to be addressed
- Add note about lens distortion in wide FOV cameras
- Remove ethernet switch mention (VH-109 radio makes it unnecessary)
- PhotonVision link already present in strategies document
- Expanded explanation of global vs rolling shutter
- Noted that rolling shutter cameras are significantly more common
- Clarified that if not explicitly marked global shutter, assume rolling
- Added context that rolling shutter works adequately for most FRC use
.. important:: Machine learning inference requires significant computational resources. While possible on devices like the Raspberry Pi, performance may be limited. For better ML performance, consider using hardware accelerators such as:

- **Google Coral USB/M.2 Accelerator** - Works with coprocessors running PhotonVision or custom code
Should this stay considering Coral isn't being sold anymore? Also, PhotonVision doesn't support Coral.
There's already a discussion of NPU hardware over in the strategies page. How about replacing everything after line 35 (.. important) with:

.. important:: Machine learning inference requires significant computational resources. :ref:`docs/software/vision-processing/wpilibpi/strategies-for-vision-programming:Strategies for Vision Programming` discusses NPU coprocessors in more detail.
- **Orange Pi 5/5B with NPU** - Single-board computer with neural processing unit, compatible with PhotonVision
- **Hailo-8 Accelerator** - High-performance ML accelerator that can be added to compatible vision solutions

Many teams successfully use Google Coral accelerators with coprocessors like the Raspberry Pi or Orange Pi.
Suggested change:
- Many teams successfully use Google Coral accelerators with coprocessors like the Raspberry Pi or Orange Pi.
+ Many teams successfully use the Orange Pi 5 NPU to detect objects with machine learning.
Or if you want one with LL (although I cannot personally attest to this myself):
Suggested change:
- Many teams successfully use Google Coral accelerators with coprocessors like the Raspberry Pi or Orange Pi.
+ Many teams successfully use the Orange Pi 5 NPU or Hailo-8 Accelerator to detect objects with machine learning.
- There is **visual variability** in the objects themselves (wear, damage, manufacturing differences)
- Background conditions vary significantly

Examples: Detecting cargo with logos, irregularly shaped objects, pieces that look very different when viewed from different angles
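As an illustrative counterpoint (not from the PR), the traditional non-ML approach referenced in working-with-contours.rst boils down to HSV thresholding plus contour filtering. A minimal sketch follows; the HSV range is a hypothetical guess for an orange game piece and must be calibrated for your lighting:

```python
# Minimal sketch of traditional (non-ML) game piece detection:
# HSV threshold, then contour filtering.
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # hypothetical captured frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Hypothetical HSV range for an orange game piece; calibrate per venue.
mask = cv2.inRange(hsv, np.array([5, 120, 120]), np.array([20, 255, 255]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    if cv2.contourArea(contour) < 200:  # reject small noise blobs
        continue
    x, y, w, h = cv2.boundingRect(contour)
    print(f"candidate at ({x + w // 2}, {y + h // 2}), {w}x{h} px")
```

This approach struggles with exactly the variability listed above, which is when ML-based detection starts to pay for its extra hardware cost.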
If you'd like some examples of game piece variability, here is a gallery of 2020 FRC game pieces from week 1 that you are welcome to use.
https://1drv.ms/f/c/35d706fb0c681610/IgB8STg-07OnT694TUAod4gZASijj9FazJTobId7FUEC6f4?e=wnJG4V
Summary

Changes

Removed Files
- target-info-and-retroreflection.rst - 2016 game-specific retroreflective tape documentation
- identifying-and-processing-the-targets.rst - Outdated vision processing methods for retroreflective targets
- 2017-vision-examples.rst - LabVIEW-only vision examples

Added Files
- game-piece-detection.rst - New comprehensive guide covering traditional vision vs ML, coordinate transformations, and pose estimation

Modified Files
- introduction/index.rst - Updated to reflect new article structure
- apriltag/apriltag-intro.rst - Removed outdated "forked repository" language and TODO note
- wpilibpi/basic-vision-example.rst - Updated reference to point to new article
- wpilibpi/working-with-contours.rst - Updated reference to point to new article
- redirects.txt - Added redirects for deleted files

Fixes #2957