
Conversation

@jasondaming
Member

Summary

  • Removes three outdated articles about retroreflective tape that are no longer relevant to modern FRC
  • Adds comprehensive game piece detection guide using current vision processing techniques
  • Updates AprilTag documentation to remove outdated repository information

Changes

Removed Files

  • target-info-and-retroreflection.rst - 2016 game-specific retroreflective tape documentation
  • identifying-and-processing-the-targets.rst - Outdated vision processing methods for retroreflective targets
  • 2017-vision-examples.rst - LabVIEW-only vision examples

Added Files

  • game-piece-detection.rst - New comprehensive guide covering:
    • Traditional vision vs machine learning approaches
    • Color-based masking and particle analysis
    • Coordinate system transformations
    • Field of view calculations
    • Distance and angle estimation using trigonometry
    • Pose estimation techniques
    • Best practices for camera setup and calibration
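
The distance-and-angle bullets above can be illustrated with a little trigonometry. Below is a minimal, hypothetical Python sketch (function names, FOV values, and heights are illustrative, not taken from the new guide): it converts a target's pixel location to yaw/pitch angles using a simple pinhole camera model, then estimates horizontal distance to a target of known height.

```python
import math

def pixel_to_yaw_pitch(px, py, width, height, hfov_deg, vfov_deg):
    """Convert a target's pixel coordinates to yaw/pitch angles (degrees)
    relative to the camera's optical axis, using a pinhole camera model."""
    # Focal lengths in pixels, derived from the horizontal/vertical FOV
    fx = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    fy = (height / 2) / math.tan(math.radians(vfov_deg) / 2)
    yaw = math.degrees(math.atan2(px - width / 2, fx))     # + = target right of center
    pitch = math.degrees(math.atan2(height / 2 - py, fy))  # + = target above center
    return yaw, pitch

def ground_distance_m(camera_height_m, target_height_m,
                      camera_pitch_deg, target_pitch_deg):
    """Estimate horizontal distance to a target of known height, from the
    camera's fixed mounting pitch plus the pitch measured in the image."""
    total_pitch = math.radians(camera_pitch_deg + target_pitch_deg)
    return (target_height_m - camera_height_m) / math.tan(total_pitch)
```

For example, with an assumed 640x480 sensor and a 60° by 45° FOV, a target centered in the image has zero yaw and pitch, and a target 0.5 m above a level camera seen at a 45° pitch angle sits 0.5 m away horizontally.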

Modified Files

  • introduction/index.rst - Updated to reflect new article structure
  • apriltag/apriltag-intro.rst - Removed outdated "forked repository" language and TODO note
  • wpilibpi/basic-vision-example.rst - Updated reference to point to new article
  • wpilibpi/working-with-contours.rst - Updated reference to point to new article
  • redirects.txt - Added redirects for deleted files

Fixes #2957

… detection guide

Removes three outdated articles about retroreflective tape that are no longer relevant since FRC games now use AprilTag-based localization instead. Adds a new comprehensive guide for detecting game pieces using modern vision processing techniques.

Changes:
- Remove target-info-and-retroreflection.rst (2016 game-specific)
- Remove identifying-and-processing-the-targets.rst (outdated methods)
- Remove 2017-vision-examples.rst (LabVIEW-only examples)
- Add game-piece-detection.rst with guidance on traditional vision vs ML, coordinate transformations, and pose estimation
- Update introduction/index.rst to reflect new article structure
- Update apriltag-intro.rst to remove outdated repository fork language

Fixes wpilibsuite#2957

Update references in basic-vision-example.rst and working-with-contours.rst to point to the new game-piece-detection.rst article instead of the deleted identifying-and-processing-the-targets.rst file.
Collaborator

@sciencewhiz left a comment


One thing that feels missing is choosing a camera.
Also, it's probably worth adding a link to PhotonVision in the strategies for vision processing article, under the coprocessor option.

Add redirects from the three deleted retroreflective tape articles to the new game-piece-detection.rst article to maintain link compatibility.
@jasondaming force-pushed the issue-2957-update-vision-docs-main branch from 7a6000f to 0911f6a on October 7, 2025 01:15
…mendations

- Add comprehensive camera selection section covering FOV, resolution, frame rate, exposure control, and camera types
- Add important note about ML hardware requirements and accelerator options (Google Coral, Limelight, Orange Pi NPU)
- Add PhotonVision reference and link in strategies-for-vision-programming.rst
- Clarify that teams commonly use Google Coral with Orange Pi + PhotonVision or Limelight with integrated acceleration
- Remove incorrect claim about Limelight with built-in ML support
- Clarify Google Coral works with any coprocessor, not just PhotonVision
- Add Hailo-8 as another accelerator option
- Simplify language to be more accurate about hardware options
### Exposure Control

- **Manual exposure control**: Essential for consistent detection under varying lighting
- **Global shutter vs rolling shutter**: Global shutter is preferable for fast-moving robots to avoid image distortion
Collaborator


Maybe add something saying that rolling shutter cameras are much more common, and that if a camera doesn't say it's global shutter, it's rolling shutter.

Member Author


Completely agree. Maybe this latest update went too far? I agree that rolling shutter is good enough for a lot of teams, but I'm not sure it's most, given how fast teams have had to move lately.

## Software Support

The main repository for the source code that detects and decodes AprilTags [is located here](https://github.com/wpilibsuite/apriltag/tree/main).
WPILib includes AprilTag detection and decoding functionality through the [wpilibsuite/apriltag repository](https://github.com/wpilibsuite/apriltag/tree/main). This repository provides:
Contributor


Still inaccurate lol. Silly LLM! It's located under https://github.com/wpilibsuite/allwpilib/tree/main/apriltag now! (also it's still technically "forked", or "patched", whatever term you want to use.)

Member Author


I don't think it's fair to blame the LLM for this. That was already written in the documentation; it just didn't verify it.

Contributor


Well, it did say in the summary that it "removed outdated repository information", but fair.

Collaborator


The link still needs to be addressed

- Add note about lens distortion in wide FOV cameras
- Remove ethernet switch mention (VH-109 radio makes it unnecessary)
- PhotonVision link already present in strategies document
- Expanded explanation of global vs rolling shutter
- Noted that rolling shutter cameras are significantly more common
- Clarified that if not explicitly marked global shutter, assume rolling
- Added context that rolling shutter works adequately for most FRC use
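
The global-vs-rolling shutter tradeoff in the bullets above can be checked with a back-of-envelope calculation. This is a hypothetical sketch (the function name and the speed/readout numbers are illustrative, not from the guide): the apparent shear a rolling shutter introduces scales with robot speed times sensor readout time.

```python
def rolling_shutter_skew_m(robot_speed_mps, readout_time_s):
    """Approximate apparent displacement between the first and last rows of a
    rolling-shutter frame while the robot translates sideways: the rows are
    sampled readout_time_s apart, so features shear by roughly this distance."""
    return robot_speed_mps * readout_time_s

# A robot strafing at 4 m/s with an assumed 20 ms readout sees roughly 8 cm
# of shear across the frame, enough to visibly distort a nearby game piece.
```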

.. important:: Machine learning inference requires significant computational resources. While possible on devices like the Raspberry Pi, performance may be limited. For better ML performance, consider using hardware accelerators such as:

- **Google Coral USB/M.2 Accelerator** - Works with coprocessors running PhotonVision or custom code
Contributor


Should this stay considering Coral isn't being sold anymore? Also, PhotonVision doesn't support Coral.

Contributor


There's already a discussion of NPU hardware over in the strategies page. How about replacing everything after line 35 (.. important) with:

.. important:: Machine learning inference requires significant computational resources. :ref:`docs/software/vision-processing/wpilibpi/strategies-for-vision-programming:Strategies for Vision Programming` discusses NPU coprocessors in more detail.

- **Orange Pi 5/5B with NPU** - Single-board computer with neural processing unit, compatible with PhotonVision
- **Hailo-8 Accelerator** - High-performance ML accelerator that can be added to compatible vision solutions

Many teams successfully use Google Coral accelerators with coprocessors like the Raspberry Pi or Orange Pi.
Contributor


Suggested change
Many teams successfully use Google Coral accelerators with coprocessors like the Raspberry Pi or Orange Pi.
Many teams successfully use the Orange Pi 5 NPU to detect objects with machine learning.

Or if you want one with LL (although I cannot personally attest to this myself):

Suggested change
Many teams successfully use Google Coral accelerators with coprocessors like the Raspberry Pi or Orange Pi.
Many teams successfully use the Orange Pi 5 NPU or Hailo-8 Accelerator to detect objects with machine learning.

- There is **visual variability** in the objects themselves (wear, damage, manufacturing differences)
- Background conditions vary significantly

Examples: Detecting cargo with logos, irregularly shaped objects, pieces that look very different when viewed from different angles
Contributor


If you'd like some examples of game piece variability, here is a gallery of 2020 FRC game pieces from week 1 that you are welcome to use.

https://1drv.ms/f/c/35d706fb0c681610/IgB8STg-07OnT694TUAod4gZASijj9FazJTobId7FUEC6f4?e=wnJG4V



Development

Successfully merging this pull request may close these issues.

Articles discussing Retroreflective Tape are no longer relevant
