Autonomy Software C++ 24.5.1
Welcome to the Autonomy Software repository of the Mars Rover Design Team (MRDT) at Missouri University of Science and Technology (Missouri S&T)! This API reference contains the source code and other resources for the development of the autonomy software for our Mars rover. The Autonomy Software project aims to compete in the University Rover Challenge (URC) by demonstrating advanced autonomous capabilities and robust navigation algorithms.
TagDetector Class Reference

Runs ArUco detection & camera pose estimation in a multithreaded environment. Given a camera, tag detection using OpenCV's ArUco library and a custom-trained model for detecting general tags is run continuously on the camera frames. More...

#include <TagDetector.h>

Inheritance diagram for TagDetector:
Collaboration diagram for TagDetector:

Public Member Functions

 TagDetector (std::shared_ptr< BasicCamera > pBasicCam, const int nArucoCornerRefinementMaxIterations=30, const int nArucoCornerRefinementMethod=cv::aruco::CORNER_REFINE_NONE, const int nArucoMarkerBorderBits=1, const bool bArucoDetectInvertedMarkers=false, const bool bUseAruco3Detection=false, const bool bEnableTracking=false, const int nDetectorMaxFPS=30, const bool bEnableRecordingFlag=false, const int nNumDetectedTagsRetrievalThreads=5, const bool bUsingGpuMats=false)
 Construct a new TagDetector object.
 
 TagDetector (std::shared_ptr< ZEDCamera > pZEDCam, const int nArucoCornerRefinementMaxIterations=30, const int nArucoCornerRefinementMethod=cv::aruco::CORNER_REFINE_NONE, const int nArucoMarkerBorderBits=1, const bool bArucoDetectInvertedMarkers=false, const bool bUseAruco3Detection=false, const bool bEnableTracking=false, const int nDetectorMaxFPS=30, const bool bEnableRecordingFlag=false, const int nNumDetectedTagsRetrievalThreads=5, const bool bUsingGpuMats=false)
 Construct a new TagDetector object.
 
 ~TagDetector ()
 Destroy the TagDetector object.
 
std::future< bool > RequestDetectionOverlayFrame (cv::Mat &cvFrame)
 Request a copy of a frame containing the tag detection overlays from the ArUco and Torch detectors.
 
std::future< bool > RequestDetectedArucoTags (std::vector< tagdetectutils::ArucoTag > &vArucoTags)
 Request the most up to date vector of detected tags from OpenCV's Aruco algorithm.
 
bool InitTorchDetection (const std::string &szModelPath, yolomodel::pytorch::PyTorchInterpreter::HardwareDevices eDevice=yolomodel::pytorch::PyTorchInterpreter::HardwareDevices::eCUDA)
 Attempt to open the next available Torch hardware and load model at the given path onto the device.
 
void EnableTorchDetection (const float fMinObjectConfidence=0.4f, const float fNMSThreshold=0.6f)
 Turn on torch detection with given parameters.
 
void DisableTorchDetection ()
 Set flag to stop tag detection with the torch model.
 
void SetDetectorMaxFPS (const int nRecordingFPS)
 Mutator for the desired max FPS for this detector.
 
void SetEnableRecordingFlag (const bool bEnableRecordingFlag)
 Mutator for the Enable Recording Flag private member.
 
bool GetIsReady ()
 Accessor for the status of this TagDetector.
 
int GetDetectorMaxFPS () const
 Accessor for the desired max FPS for this detector.
 
bool GetEnableRecordingFlag () const
 Accessor for the Enable Recording Flag private member.
 
std::string GetCameraName ()
 Accessor for the camera name or path that this TagDetector is tied to.
 
cv::Size GetProcessFrameResolution () const
 Accessor for the resolution of the process image used for tag detection.
 
- Public Member Functions inherited from AutonomyThread< void >
 AutonomyThread ()
 Construct a new Autonomy Thread object.
 
virtual ~AutonomyThread ()
 Destroy the Autonomy Thread object. If the parent object or main thread is destroyed or exited while this thread is still running, a race condition will occur. Stopping and joining the thread here ensures that the main program can't exit if the user forgot to stop and join the thread.
 
void Start ()
 When this method is called, it starts a new thread that runs the code within the ThreadedContinuousCode method. This is the users main code that will run the important and continuous code for the class.
 
void RequestStop ()
 Signals the thread to stop executing user code and terminate. DOES NOT JOIN. This method will not force the thread to exit; if the user code is not written properly and contains a WHILE statement or other long-running or blocking code, the thread will not exit until the next iteration.
 
void Join ()
 Waits for thread to finish executing and then closes thread. This method will block the calling code until thread is finished.
 
bool Joinable () const
 Check if the code within the thread and all pools created by it are finished executing and the thread is ready to be closed.
 
AutonomyThreadState GetThreadState () const
 Accessor for the Thread State private member.
 
IPS GetIPS ()
 Accessor for the Frame IPS private member.
 

Private Member Functions

void ThreadedContinuousCode () override
 This code will run continuously in a separate thread. New frames from the given camera are grabbed and the tags for the camera image are detected, filtered, and stored. Then any requests for the current tags are fulfilled.
 
void PooledLinearCode () override
 This method holds the code that is run in the thread pool started by the ThreadedLinearCode() method. It copies the data from the different data objects to references of the same type stored in a queue filled by the Request methods.
 
void UpdateDetectedTags (std::vector< tagdetectutils::ArucoTag > &vNewlyDetectedTags)
 Updates the detected torch tags, including tracking the detected tags over time and removing tags that have not been re-detected within the timeout.
 

Private Attributes

std::shared_ptr< Camera< cv::Mat > > m_pCamera
 
cv::aruco::ArucoDetector m_cvArucoDetector
 
cv::aruco::DetectorParameters m_cvArucoDetectionParams
 
cv::aruco::Dictionary m_cvTagDictionary
 
std::shared_ptr< yolomodel::pytorch::PyTorchInterpreter > m_pTorchDetector
 
std::atomic< float > m_fTorchMinObjectConfidence
 
std::atomic< float > m_fTorchNMSThreshold
 
std::atomic_bool m_bTorchInitialized
 
std::atomic_bool m_bTorchEnabled
 
std::shared_ptr< tracking::MultiTracker > m_pMultiTracker
 
bool m_bUsingZedCamera
 
bool m_bUsingGpuMats
 
bool m_bCameraIsOpened
 
bool m_bEnableTracking
 
int m_nNumDetectedTagsRetrievalThreads
 
std::string m_szCameraName
 
std::atomic_bool m_bEnableRecordingFlag
 
std::vector< tagdetectutils::ArucoTag > m_vNewlyDetectedTags
 
std::vector< tagdetectutils::ArucoTag > m_vDetectedArucoTags
 
geoops::RoverPose m_stRoverPose
 
cv::Mat m_cvFrame
 
cv::cuda::GpuMat m_cvGPUFrame
 
cv::Mat m_cvArucoProcFrame
 
cv::Mat m_cvPointCloud
 
cv::cuda::GpuMat m_cvGPUPointCloud
 
std::queue< containers::FrameFetchContainer< cv::Mat > > m_qDetectedTagDrawnOverlayFramesCopySchedule
 
std::queue< containers::DataFetchContainer< std::vector< tagdetectutils::ArucoTag > > > m_qDetectedArucoTagCopySchedule
 
std::shared_mutex m_muPoolScheduleMutex
 
std::shared_mutex m_muFrameCopyMutex
 
std::shared_mutex m_muArucoDataCopyMutex
 

Additional Inherited Members

- Public Types inherited from AutonomyThread< void >
enum  AutonomyThreadState
 
- Protected Member Functions inherited from AutonomyThread< void >
void RunPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts/adds tasks to a thread pool that runs nNumTasksToQueue copies of the code within the PooledLinearCode() method using nNumThreads number of threads. This is meant to be used as an internal utility of the child class to further improve parallelization. Default value for nNumThreads is 2.
 
void RunDetachedPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts a thread pool full of threads that don't return std::futures (like a placeholder for the thread return type). This means the thread will not have a return type and there is no way to determine if the thread has finished other than calling the Join() method. Only use this if you want to 'set and forget'. It will be faster as it doesn't return futures. Runs PooledLinearCode() method code. This is meant to be used as an internal utility of the child class to further improve parallelization.
 
void ParallelizeLoop (const int nNumThreads, const N tTotalIterations, F &&tLoopFunction)
 Given a ref-qualified looping function and an arbitrary number of iterations, this method will divide up the loop and run each section in a thread pool. This function must not return anything. This method will block until the loop has completed.
 
void ClearPoolQueue ()
 Clears any tasks waiting to be run in the queue; tasks currently running will remain running.
 
void JoinPool ()
 Waits for pool to finish executing tasks. This method will block the calling code until thread is finished.
 
bool PoolJoinable () const
 Check if the internal pool threads are done executing code and the queue is empty.
 
void SetMainThreadIPSLimit (int nMaxIterationsPerSecond=0)
 Mutator for the Main Thread Max IPS private member.
 
int GetPoolNumOfThreads ()
 Accessor for the Pool Num Of Threads private member.
 
int GetPoolQueueLength ()
 Accessor for the Pool Queue Size private member.
 
std::vector< void > GetPoolResults ()
 Accessor for the Pool Results private member. The action of getting results will destroy and remove them from this object. This method blocks if the thread is not finished, so no need to call JoinPool() before getting results.
 
int GetMainThreadMaxIPS () const
 Accessor for the Main Thread Max IPS private member.
 
- Protected Attributes inherited from AutonomyThread< void >
IPS m_IPS
 

Detailed Description

Runs ArUco detection & camera pose estimation in a multithreaded environment. Given a camera, tag detection using OpenCV's ArUco library and a custom-trained model for detecting general tags is run continuously on the camera frames.

What are the threads doing?
Continuous Thread: In this thread we are constantly getting images and depth maps from the necessary cameras. We then detect the tags in the image and estimate their location with respect to the rover.
Pooled Threads: Copy the vector of detected tags to all of the threads requesting it through the RequestDetectedArucoTags(...) function.
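The request/fulfill contract described above (schedule a copy, then wait on the returned std::future before touching the passed-in container) can be sketched with the standard library alone. Everything below (MiniDetector, TagRequest, and their members) is illustrative, not part of the actual codebase:

```cpp
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A queued request: the caller's container plus a promise that signals when
// the copy has been completed.
struct TagRequest
{
    std::vector<int>& vOut;                          // Caller's container to fill.
    std::shared_ptr<std::promise<bool>> pStatus;     // Fulfilled once copied.
};

class MiniDetector
{
    public:
        // Mirrors RequestDetectedArucoTags: schedule a copy, return a future.
        std::future<bool> RequestTags(std::vector<int>& vTags)
        {
            TagRequest stRequest{vTags, std::make_shared<std::promise<bool>>()};
            std::unique_lock<std::mutex> lkScheduler(m_muSchedule);
            m_qSchedule.push(stRequest);
            lkScheduler.unlock();
            return stRequest.pStatus->get_future();
        }

        // Mirrors the pooled threads: fulfill every queued request.
        void FulfillRequests()
        {
            std::unique_lock<std::mutex> lkScheduler(m_muSchedule);
            while (!m_qSchedule.empty())
            {
                TagRequest stRequest = m_qSchedule.front();
                m_qSchedule.pop();
                stRequest.vOut = m_vDetected;          // Copy current detections.
                stRequest.pStatus->set_value(true);    // Wake the waiting caller.
            }
        }

        std::vector<int> m_vDetected{7, 42};           // Stand-in for detected tags.

    private:
        std::mutex m_muSchedule;
        std::queue<TagRequest> m_qSchedule;
};
```

A caller would do `std::future<bool> fu = det.RequestTags(v);` and only read `v` after `fu.get()` returns true, which is exactly the contract documented for the Request methods below.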

Author
jspencerpittman (jspencerpittman@gmail.com), clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30

Constructor & Destructor Documentation

◆ TagDetector() [1/2]

TagDetector::TagDetector ( std::shared_ptr< BasicCamera > pBasicCam,
const int  nArucoCornerRefinementMaxIterations = 30,
const int  nArucoCornerRefinementMethod = cv::aruco::CORNER_REFINE_NONE,
const int  nArucoMarkerBorderBits = 1,
const bool  bArucoDetectInvertedMarkers = false,
const bool  bUseAruco3Detection = false,
const bool  bEnableTracking = false,
const int  nDetectorMaxFPS = 30,
const bool  bEnableRecordingFlag = false,
const int  nNumDetectedTagsRetrievalThreads = 5,
const bool  bUsingGpuMats = false 
)

Construct a new TagDetector object.

Parameters
pBasicCam- A pointer to the BasicCam camera to get frames from for detection.
nArucoCornerRefinementMaxIterations- The number of iterations to use when refining marker corners.
nArucoCornerRefinementMethod- The refinement method to use.
nArucoMarkerBorderBits- The number of border unit squares around the marker.
bArucoDetectInvertedMarkers- Enable or disable upside-down marker detection.
bUseAruco3Detection- Whether or not to use the newer/faster method of detection. Experimental.
bEnableTracking- Whether or not to enable tracking of detected tags.
nDetectorMaxFPS- The max FPS limit the detector can run at.
bEnableRecordingFlag- Whether or not this TagDetector's overlay output should be recorded.
nNumDetectedTagsRetrievalThreads- The number of threads to use when fulfilling requests for the detected aruco tags. Default is 5.
bUsingGpuMats- Whether or not the given camera name will be using GpuMats.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-10
{
    // Initialize member variables.
    m_pCamera = pBasicCam;
    m_bTorchInitialized = false;
    m_bTorchEnabled = false;
    m_bEnableTracking = bEnableTracking;
    m_bUsingZedCamera = false;    // Toggle ZED functions off.
    m_bUsingGpuMats = bUsingGpuMats;
    m_bCameraIsOpened = false;
    m_nNumDetectedTagsRetrievalThreads = nNumDetectedTagsRetrievalThreads;
    m_szCameraName = pBasicCam->GetCameraLocation();
    m_bEnableRecordingFlag = bEnableRecordingFlag;
    m_stRoverPose = geoops::RoverPose();

    // Setup aruco detector params.
    m_cvArucoDetectionParams = cv::aruco::DetectorParameters();
    m_cvArucoDetectionParams.cornerRefinementMaxIterations = nArucoCornerRefinementMaxIterations;
    m_cvArucoDetectionParams.cornerRefinementMethod = nArucoCornerRefinementMethod;
    m_cvArucoDetectionParams.markerBorderBits = nArucoMarkerBorderBits;
    m_cvArucoDetectionParams.detectInvertedMarker = bArucoDetectInvertedMarkers;
    m_cvArucoDetectionParams.useAruco3Detection = bUseAruco3Detection;
    // Get aruco dictionary and initialize aruco detector.
    m_cvTagDictionary = cv::aruco::getPredefinedDictionary(constants::ARUCO_DICTIONARY);
    m_cvArucoDetector = cv::aruco::ArucoDetector(m_cvTagDictionary, m_cvArucoDetectionParams);

    // Create a multi-tracker for tracking multiple tags from the torch detectors.
    m_pMultiTracker = std::make_shared<tracking::MultiTracker>(constants::BBOX_TRACKER_LOST_TIMEOUT,
                                                               constants::BBOX_TRACKER_MAX_TRACK_TIME,
                                                               constants::BBOX_TRACKER_IOU_MATCH_THRESHOLD);

    // Set max IPS of main thread.
    this->SetMainThreadIPSLimit(nDetectorMaxFPS);

    // Submit logger message.
    LOG_INFO(logging::g_qSharedLogger, "TagDetector created for camera at path/index: {}", m_szCameraName);
}

◆ TagDetector() [2/2]

TagDetector::TagDetector ( std::shared_ptr< ZEDCamera > pZEDCam,
const int  nArucoCornerRefinementMaxIterations = 30,
const int  nArucoCornerRefinementMethod = cv::aruco::CORNER_REFINE_NONE,
const int  nArucoMarkerBorderBits = 1,
const bool  bArucoDetectInvertedMarkers = false,
const bool  bUseAruco3Detection = false,
const bool  bEnableTracking = false,
const int  nDetectorMaxFPS = 30,
const bool  bEnableRecordingFlag = false,
const int  nNumDetectedTagsRetrievalThreads = 5,
const bool  bUsingGpuMats = false 
)

Construct a new TagDetector object.

Parameters
pZEDCam- A pointer to the ZEDCam camera to get frames from for detection. Override for ZED camera.
nArucoCornerRefinementMaxIterations- The number of iterations to use when refining marker corners.
nArucoCornerRefinementMethod- The refinement method to use.
nArucoMarkerBorderBits- The number of border unit squares around the marker.
bArucoDetectInvertedMarkers- Enable or disable upside-down marker detection.
bUseAruco3Detection- Whether or not to use the newer/faster method of detection. Experimental.
bEnableTracking- Whether or not to enable tracking of detected tags.
nDetectorMaxFPS- The max FPS limit the detector can run at.
bEnableRecordingFlag- Whether or not this TagDetector's overlay output should be recorded.
nNumDetectedTagsRetrievalThreads- The number of threads to use when fulfilling requests for the detected aruco tags. Default is 5.
bUsingGpuMats- Whether or not the given camera name will be using GpuMats.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07
{
    // Initialize member variables.
    m_pCamera = pZEDCam;
    m_bTorchInitialized = false;
    m_bTorchEnabled = false;
    m_bUsingZedCamera = true;    // Toggle ZED functions on.
    m_bEnableTracking = bEnableTracking;
    m_bUsingGpuMats = bUsingGpuMats;
    m_bCameraIsOpened = false;
    m_nNumDetectedTagsRetrievalThreads = nNumDetectedTagsRetrievalThreads;
    m_szCameraName = pZEDCam->GetCameraModel() + "_" + std::to_string(pZEDCam->GetCameraSerial());
    m_bEnableRecordingFlag = bEnableRecordingFlag;
    m_IPS = IPS();
    m_stRoverPose = geoops::RoverPose();

    // Setup aruco detector params.
    m_cvArucoDetectionParams = cv::aruco::DetectorParameters();
    m_cvArucoDetectionParams.cornerRefinementMaxIterations = nArucoCornerRefinementMaxIterations;
    m_cvArucoDetectionParams.cornerRefinementMethod = nArucoCornerRefinementMethod;
    m_cvArucoDetectionParams.markerBorderBits = nArucoMarkerBorderBits;
    m_cvArucoDetectionParams.detectInvertedMarker = bArucoDetectInvertedMarkers;
    m_cvArucoDetectionParams.useAruco3Detection = bUseAruco3Detection;

    // Create a multi-tracker for tracking multiple tags from the torch detectors.
    m_pMultiTracker = std::make_shared<tracking::MultiTracker>(constants::BBOX_TRACKER_LOST_TIMEOUT,
                                                               constants::BBOX_TRACKER_MAX_TRACK_TIME,
                                                               constants::BBOX_TRACKER_IOU_MATCH_THRESHOLD);

    // Set max IPS of main thread.
    this->SetMainThreadIPSLimit(nDetectorMaxFPS);

    // Submit logger message.
    LOG_INFO(logging::g_qSharedLogger, "TagDetector created for camera: {}", m_szCameraName);
}

◆ ~TagDetector()

TagDetector::~TagDetector ( )

Destroy the TagDetector object.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-09
{
    // Stop threaded code.
    this->RequestStop();
    this->Join();

    // Submit logger message.
    LOG_INFO(logging::g_qSharedLogger, "TagDetector for camera {} has been successfully destroyed.", this->GetCameraName());
}
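The stop-then-join pattern the destructor relies on can be illustrated with a minimal, self-contained worker class. MiniWorker and its members are hypothetical stand-ins, not part of AutonomyThread:

```cpp
#include <atomic>
#include <thread>

// A worker loops on an atomic flag; the owner's destructor requests a stop
// and joins, so the thread can never outlive the object.
class MiniWorker
{
    public:
        MiniWorker() { m_thThread = std::thread([this] { this->Run(); }); }

        ~MiniWorker()
        {
            // Equivalent of RequestStop() followed by Join().
            m_bStopRequested = true;
            if (m_thThread.joinable())
            {
                m_thThread.join();
            }
        }

        int GetIterations() const { return m_nIterations; }

    private:
        void Run()
        {
            // User code must re-check the flag each iteration, otherwise a
            // stop request cannot take effect (the RequestStop() caveat).
            while (!m_bStopRequested)
            {
                ++m_nIterations;
            }
        }

        std::atomic_bool m_bStopRequested{false};
        std::atomic<int> m_nIterations{0};
        std::thread m_thThread;
};
```

Because the destructor joins, the main program cannot exit with the worker thread still running, which is the race the class documentation warns about.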

Member Function Documentation

◆ RequestDetectionOverlayFrame()

std::future< bool > TagDetector::RequestDetectionOverlayFrame ( cv::Mat &  cvFrame )

Request a copy of a frame containing the tag detection overlays from the ArUco and Torch detectors.

Parameters
cvFrame- The frame to copy the detection overlay image to.
Returns
std::future<bool> - The future that should be waited on before using the passed in frame. Future will be true or false based on whether or not the frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-11
{
    // Assemble the FrameFetchContainer.
    containers::FrameFetchContainer<cv::Mat> stContainer(cvFrame, PIXEL_FORMATS::eArucoDetection);

    // Acquire lock on pool copy queue.
    std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
    // Append frame fetch container to the schedule queue.
    m_qDetectedTagDrawnOverlayFramesCopySchedule.push(stContainer);
    // Release lock on the frame schedule queue.
    lkScheduler.unlock();

    // Return the future from the promise stored in the container.
    return stContainer.pCopiedFrameStatus->get_future();
}

◆ RequestDetectedArucoTags()

std::future< bool > TagDetector::RequestDetectedArucoTags ( std::vector< tagdetectutils::ArucoTag > &  vArucoTags)

Request the most up to date vector of detected tags from OpenCV's Aruco algorithm.

Parameters
vArucoTags- The vector the detected aruco tags will be saved to.
Returns
std::future<bool> - The future that should be waited on before using the passed in tag vector. Future will be true or false based on whether or not the tags were successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07
{
    // Assemble the DataFetchContainer.
    containers::DataFetchContainer<std::vector<tagdetectutils::ArucoTag>> stContainer(vArucoTags);

    // Acquire lock on pool copy queue.
    std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
    // Append detected tag fetch container to the schedule queue.
    m_qDetectedArucoTagCopySchedule.push(stContainer);
    // Release lock on the frame schedule queue.
    lkScheduler.unlock();

    // Return the future from the promise stored in the container.
    return stContainer.pCopiedDataStatus->get_future();
}

◆ InitTorchDetection()

bool TagDetector::InitTorchDetection ( const std::string &  szModelPath,
yolomodel::pytorch::PyTorchInterpreter::HardwareDevices  eDevice = yolomodel::pytorch::PyTorchInterpreter::HardwareDevices::eCUDA 
)

Attempt to open the next available Torch hardware and load model at the given path onto the device.

Parameters
szModelPath- The absolute path to the model to open.
eDevice- The hardware device to launch the Torch model on.
Returns
true - Model was opened and loaded successfully onto the Torch device.
false - Something went wrong, model/device not opened.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-03-06
{
    // Initialize a new YOLOModel object.
    m_pTorchDetector = std::make_shared<yolomodel::pytorch::PyTorchInterpreter>(szModelPath, eDevice);

    // Check if device/model was opened without issue.
    if (m_pTorchDetector->IsReadyForInference())
    {
        // Update member variable.
        m_bTorchInitialized = true;
        // Return status.
        return true;
    }
    else
    {
        // Submit logger message.
        LOG_ERROR(logging::g_qSharedLogger, "Unable to initialize Torch detection for TagDetector.");
        // Update member variable.
        m_bTorchInitialized = false;
        // Return status.
        return false;
    }
}

◆ EnableTorchDetection()

void TagDetector::EnableTorchDetection ( const float  fMinObjectConfidence = 0.4f,
const float  fNMSThreshold = 0.6f 
)

Turn on torch detection with given parameters.

Parameters
fMinObjectConfidence- The lower limit of detection confidence.
fNMSThreshold- The overlap threshold for the NMS algorithm.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-03-06
{
    // Update member variables.
    m_fTorchMinObjectConfidence = fMinObjectConfidence;
    m_fTorchNMSThreshold = fNMSThreshold;

    // Check if torch model has been initialized.
    if (m_bTorchInitialized)
    {
        // Update member variable.
        m_bTorchEnabled = true;
    }
    else
    {
        // Submit logger message.
        LOG_WARNING(logging::g_qSharedLogger, "Tried to enable torch detection for TagDetector but it has not been initialized yet!");
        // Update member variable.
        m_bTorchEnabled = false;
    }
}
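To clarify what the two parameters control, here is a minimal, self-contained sketch of confidence filtering followed by greedy IoU-based non-maximum suppression. The MiniDetection type and function names are illustrative; the real detector operates on tagdetectutils::ArucoTag and may implement its post-processing differently:

```cpp
#include <algorithm>
#include <vector>

// Illustrative detection: axis-aligned box plus confidence.
struct MiniDetection
{
    float fX, fY, fW, fH;    // Top-left corner and size.
    float fConfidence;
};

// Intersection-over-union of two boxes.
float IoU(const MiniDetection& a, const MiniDetection& b)
{
    float fLeft = std::max(a.fX, b.fX);
    float fTop = std::max(a.fY, b.fY);
    float fRight = std::min(a.fX + a.fW, b.fX + b.fW);
    float fBottom = std::min(a.fY + a.fH, b.fY + b.fH);
    float fInter = std::max(0.0f, fRight - fLeft) * std::max(0.0f, fBottom - fTop);
    float fUnion = a.fW * a.fH + b.fW * b.fH - fInter;
    return fUnion > 0.0f ? fInter / fUnion : 0.0f;
}

// Drop detections below fMinObjectConfidence, then keep the highest-confidence
// boxes and suppress any box overlapping a kept one by more than fNMSThreshold.
std::vector<MiniDetection> FilterDetections(std::vector<MiniDetection> vDets,
                                            float fMinObjectConfidence,
                                            float fNMSThreshold)
{
    vDets.erase(std::remove_if(vDets.begin(),
                               vDets.end(),
                               [&](const MiniDetection& d) { return d.fConfidence < fMinObjectConfidence; }),
                vDets.end());
    std::sort(vDets.begin(),
              vDets.end(),
              [](const MiniDetection& a, const MiniDetection& b) { return a.fConfidence > b.fConfidence; });

    std::vector<MiniDetection> vKept;
    for (const MiniDetection& dCandidate : vDets)
    {
        bool bSuppressed = false;
        for (const MiniDetection& dKept : vKept)
        {
            if (IoU(dCandidate, dKept) > fNMSThreshold)
            {
                bSuppressed = true;
                break;
            }
        }
        if (!bSuppressed)
        {
            vKept.push_back(dCandidate);
        }
    }
    return vKept;
}
```

Raising fMinObjectConfidence discards weak detections outright; lowering fNMSThreshold suppresses overlapping duplicates more aggressively.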

◆ DisableTorchDetection()

void TagDetector::DisableTorchDetection ( )

Set flag to stop tag detection with the torch model.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-03-06
{
    // Update member variables.
    m_bTorchEnabled = false;
}

◆ SetDetectorMaxFPS()

void TagDetector::SetDetectorMaxFPS ( const int  nRecordingFPS)

Mutator for the desired max FPS for this detector.

Parameters
nRecordingFPS- The max frames per second to detect tags at.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-22
{
    // Set the max iterations per second of this detector's main thread.
    this->SetMainThreadIPSLimit(nRecordingFPS);
}
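A minimal sketch of how such an iterations-per-second cap can work. MiniIPSLimiter is illustrative; the actual limit is applied by the inherited SetMainThreadIPSLimit(), and a limit of 0 is assumed to mean "unlimited", matching that method's default parameter:

```cpp
#include <chrono>
#include <thread>

// Caps a loop at a maximum number of iterations per second by sleeping away
// whatever remains of the target period each iteration.
class MiniIPSLimiter
{
    public:
        explicit MiniIPSLimiter(int nMaxIPS) : m_nMaxIPS(nMaxIPS) {}

        // The target duration of one loop iteration (zero means no limit).
        std::chrono::microseconds TargetPeriod() const
        {
            if (m_nMaxIPS <= 0)
            {
                return std::chrono::microseconds(0);
            }
            return std::chrono::microseconds(1000000 / m_nMaxIPS);
        }

        // Call once per loop iteration.
        void Throttle()
        {
            std::chrono::steady_clock::time_point tmTarget = m_tmLastIteration + TargetPeriod();
            if (std::chrono::steady_clock::now() < tmTarget)
            {
                std::this_thread::sleep_until(tmTarget);
            }
            m_tmLastIteration = std::chrono::steady_clock::now();
        }

    private:
        int m_nMaxIPS;
        std::chrono::steady_clock::time_point m_tmLastIteration = std::chrono::steady_clock::now();
};
```

For a 30 FPS cap the target period works out to roughly 33.3 ms per iteration.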

◆ SetEnableRecordingFlag()

void TagDetector::SetEnableRecordingFlag ( const bool  bEnableRecordingFlag)

Mutator for the Enable Recording Flag private member.

Parameters
bEnableRecordingFlag- Whether or not recording should be enabled for this detector.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-12-31
{
    m_bEnableRecordingFlag = bEnableRecordingFlag;
}

◆ GetIsReady()

bool TagDetector::GetIsReady ( )

Accessor for the status of this TagDetector.

Returns
true - The detector is running and detecting tags from the camera.
false - The detector thread and/or camera is not running/opened.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-04
{
    // Create instance variables.
    bool bDetectorIsReady = false;

    // Check if this detector's thread is currently running.
    if (this->GetThreadState() == AutonomyThreadState::eRunning)
    {
        // Check if using ZEDCam or BasicCam.
        if (m_bUsingZedCamera)
        {
            // Check if the camera is open.
            if (std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->GetCameraIsOpen())
            {
                // Set camera opened toggle.
                bDetectorIsReady = true;
            }
        }
        else
        {
            // Check if the camera is open.
            if (std::dynamic_pointer_cast<BasicCamera>(m_pCamera)->GetCameraIsOpen())
            {
                // Set camera opened toggle.
                bDetectorIsReady = true;
            }
        }
    }

    // Return if this detector is ready or not.
    return bDetectorIsReady;
}
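The base-pointer-plus-downcast dispatch used here (and throughout TagDetector) can be demonstrated in isolation. The camera classes below are stand-ins, not the real ZEDCamera/BasicCamera, and like the source above the sketch assumes the flag always matches the concrete type actually stored:

```cpp
#include <memory>

// A polymorphic base is required for std::dynamic_pointer_cast to work.
struct CameraBase
{
    virtual ~CameraBase() = default;
};

struct MiniZedCam : CameraBase
{
    bool GetCameraIsOpen() const { return true; }
};

struct MiniBasicCam : CameraBase
{
    bool GetCameraIsOpen() const { return false; }
};

// One base pointer is stored; the flag selects which concrete API to use,
// mirroring how TagDetector holds Camera<cv::Mat> and downcasts per branch.
bool IsCameraOpen(const std::shared_ptr<CameraBase>& pCamera, bool bUsingZed)
{
    if (bUsingZed)
    {
        return std::dynamic_pointer_cast<MiniZedCam>(pCamera)->GetCameraIsOpen();
    }
    return std::dynamic_pointer_cast<MiniBasicCam>(pCamera)->GetCameraIsOpen();
}
```

If the flag and the stored type ever disagree, the cast returns null and dereferencing it is undefined behavior, which is why the flag is set once in the constructor and never changed.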

◆ GetDetectorMaxFPS()

int TagDetector::GetDetectorMaxFPS ( ) const

Accessor for the desired max FPS for this detector.

Returns
int - The max frames per second the detector can run at.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-22
{
    // Return member variable value.
    return this->GetMainThreadMaxIPS();
}

◆ GetEnableRecordingFlag()

bool TagDetector::GetEnableRecordingFlag ( ) const

Accessor for the Enable Recording Flag private member.

Returns
true - Recording for this detector has been requested/flagged.
false - This detector should not be recorded.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-12-31
{
    return m_bEnableRecordingFlag;
}

◆ GetCameraName()

std::string TagDetector::GetCameraName ( )

Accessor for the camera name or path that this TagDetector is tied to.

Returns
std::string - The name/path/index of the camera used by this TagDetector.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-01
{
    return m_szCameraName;
}

◆ GetProcessFrameResolution()

cv::Size TagDetector::GetProcessFrameResolution ( ) const

Accessor for the resolution of the process image used for tag detection.

Returns
cv::Size - The resolution stored in an OpenCV cv::Size.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-01
{
    // Check if using a ZED camera.
    if (m_bUsingZedCamera)
    {
        // Return the resolution from the ZED camera.
        return std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->GetPropResolution();
    }
    else
    {
        // Return the resolution from the basic camera.
        return std::dynamic_pointer_cast<BasicCamera>(m_pCamera)->GetPropResolution();
    }
}

◆ ThreadedContinuousCode()

void TagDetector::ThreadedContinuousCode ( )
override, private, virtual

This code will run continuously in a separate thread. New frames from the given camera are grabbed and the tags for the camera image are detected, filtered, and stored. Then any requests for the current tags are fulfilled.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07

Implements AutonomyThread< void >.

177{
178 // Check if using ZEDCam or BasicCam.
179 if (m_bUsingZedCamera)
180 {
181 // Check if camera is NOT open.
182 if (!std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->GetCameraIsOpen())
183 {
184 // Set camera opened toggle.
185 m_bCameraIsOpened = false;
186
187 // If camera's not open on first iteration of thread, it's probably not present, so stop.
188 if (this->GetThreadState() == AutonomyThreadState::eStarting)
189 {
190 // Shutdown threads for this ZEDCam.
191 this->RequestStop();
192
193 // Submit logger message.
194 LOG_CRITICAL(logging::g_qSharedLogger,
195 "TagDetector start was attempted for ZED camera with serial number {}, but camera never properly opened or it has been closed/rebooted! "
196 "This tag detector will now stop.",
197 std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->GetCameraSerial());
198 }
199 }
200 else
201 {
202 // Set camera opened toggle.
203 m_bCameraIsOpened = true;
204 }
205 }
206 else
207 {
208 // Check if camera is NOT open.
209 if (!std::dynamic_pointer_cast<BasicCamera>(m_pCamera)->GetCameraIsOpen())
210 {
211 // Set camera opened toggle.
212 m_bCameraIsOpened = false;
213
214 // If camera's not open on first iteration of thread, it's probably not present, so stop.
215 if (this->GetThreadState() == AutonomyThreadState::eStarting)
216 {
217 // Shutdown threads for this BasicCam.
218 this->RequestStop();
219
220 // Submit logger message.
221 LOG_CRITICAL(logging::g_qSharedLogger,
222 "TagDetector start was attempted for BasicCam at {}, but camera never properly opened or it has become disconnected!",
223 std::dynamic_pointer_cast<BasicCamera>(m_pCamera)->GetCameraLocation());
224 }
225 }
226 else
227 {
228 // Set camera opened toggle.
229 m_bCameraIsOpened = true;
230 }
231 }
232
233 // Check if camera is opened.
234 if (m_bCameraIsOpened)
235 {
236 // Create future for indicating when the frame has been copied.
237 std::future<bool> fuPointCloudCopyStatus;
238 std::future<bool> fuRegularFrameCopyStatus;
239
240 // Check if the camera is setup to use CPU or GPU mats.
241 if (m_bUsingZedCamera)
242 {
243 // Check if the ZED camera is returning cv::cuda::GpuMat or cv:Mat.
244 if (m_bUsingGpuMats)
245 {
246 // Grabs point cloud from ZEDCam. Dynamic casts Camera to ZEDCamera* so we can use ZEDCam methods.
247 fuPointCloudCopyStatus = std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->RequestPointCloudCopy(m_cvGPUPointCloud);
248 // Get the regular RGB image from the camera.
249 fuRegularFrameCopyStatus = std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->RequestFrameCopy(m_cvGPUFrame);
250
251 // Wait for point cloud to be retrieved.
252 if (fuPointCloudCopyStatus.get() && fuRegularFrameCopyStatus.get())
253 {
254 // Download mat from GPU memory.
255 m_cvGPUPointCloud.download(m_cvPointCloud);
256 m_cvGPUFrame.download(m_cvFrame);
257 // Drop alpha channel.
258 cv::cvtColor(m_cvFrame, m_cvFrame, cv::COLOR_BGRA2BGR);
259 }
260 else
261 {
262 // Submit logger message.
263 LOG_WARNING(logging::g_qSharedLogger, "TagDetector unable to get point cloud from ZEDCam!");
264 }
265 }
266 else
267 {
268 // Grabs point cloud from ZEDCam.
269 fuPointCloudCopyStatus = std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->RequestPointCloudCopy(m_cvPointCloud);
270 fuRegularFrameCopyStatus = std::dynamic_pointer_cast<ZEDCamera>(m_pCamera)->RequestFrameCopy(m_cvFrame);
271
272 // Wait for point cloud to be retrieved.
273 if (!fuPointCloudCopyStatus.get())
274 {
275 // Submit logger message.
276 LOG_WARNING(logging::g_qSharedLogger, "TagDetector unable to get point cloud from ZEDCam!");
277 }
278 if (!fuRegularFrameCopyStatus.get())
279 {
280 // Submit logger message.
281 LOG_WARNING(logging::g_qSharedLogger, "TagDetector unable to get regular frame from ZEDCam!");
282 }
283 }
284 }
285 else
286 {
287 // Grab frames from camera.
288 fuRegularFrameCopyStatus = std::dynamic_pointer_cast<BasicCamera>(m_pCamera)->RequestFrameCopy(m_cvFrame);
289
290 // Wait for frame to be retrieved.
291 if (!fuRegularFrameCopyStatus.get())
292 {
293 // Submit logger message.
294 LOG_WARNING(logging::g_qSharedLogger, "TagDetector unable to get RGB image from BasicCam!");
295 }
296 }
297
299 // Actual detection logic goes here.
301 // Check if the frame is empty.
302 if (m_cvFrame.empty())
303 {
304 // Submit logger message.
305 LOG_WARNING(logging::g_qSharedLogger, "Frame from camera is empty!");
306 return;
307 }
308
309 // Clear the list of newly detected tags.
310 m_vNewlyDetectedTags.clear();
311 // Clone frames.
312 m_cvArucoProcFrame = m_cvFrame.clone();
313 // Detect tags in the image
314 std::vector<tagdetectutils::ArucoTag> vNewOpenCVTags = arucotag::Detect(m_cvArucoProcFrame, m_cvArucoDetector);
315 // Add OpenCV tags to the list of newly detected tags.
316 m_vNewlyDetectedTags.insert(m_vNewlyDetectedTags.end(), vNewOpenCVTags.begin(), vNewOpenCVTags.end());
317
318 // Check if torch detection is turned on.
319 if (m_bTorchEnabled)
320 {
321 // Detect tags in the image.
322 std::vector<tagdetectutils::ArucoTag> vNewTorchTags =
323 torchtag::Detect(m_cvArucoProcFrame, *m_pTorchDetector, m_fTorchMinObjectConfidence, m_fTorchNMSThreshold);
324 // Add Torch tags to the list of newly detected tags.
325 m_vNewlyDetectedTags.insert(m_vNewlyDetectedTags.end(), vNewTorchTags.begin(), vNewTorchTags.end());
326 }
327
328 // Set the FOV of the camera in the tag structs for this detector's camera.
329 for (tagdetectutils::ArucoTag& stTag : m_vNewlyDetectedTags)
330 {
331 // Set tag FOV parameter to this tag detector's camera FOV.
332 stTag.dHorizontalFOV = m_pCamera->GetPropHorizontalFOV();
333 }
334
335 // Merge the newly detected tags with the pre-existing detected tags.
336 this->UpdateDetectedTags(m_vNewlyDetectedTags);
337
338 // Draw tag overlays onto normal image.
339 arucotag::DrawDetections(m_cvArucoProcFrame, m_vDetectedArucoTags);
340 torchtag::DrawDetections(m_cvArucoProcFrame, m_vDetectedArucoTags);
342 }
343
344 // Acquire a shared_lock on the detected tags copy queue.
345 std::shared_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
346 // Check if the detected tag copy queue is empty.
347 if (!m_qDetectedTagDrawnOverlayFramesCopySchedule.empty() || !m_qDetectedArucoTagCopySchedule.empty())
348 {
349 size_t siQueueLength = m_qDetectedTagDrawnOverlayFramesCopySchedule.size() + m_qDetectedArucoTagCopySchedule.size();
350 // Start the thread pool to store multiple copies of the detected tags to the requesting threads
351 this->RunDetachedPool(siQueueLength, m_nNumDetectedTagsRetrievalThreads);
352 // Wait for thread pool to finish.
353 this->JoinPool();
354 // Release lock on frame copy queue.
355 lkSchedulers.unlock();
356 }
357}
void RunDetachedPool(const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
When this method is called, it starts a thread pool full of threads that don't return std::futures (l...
Definition AutonomyThread.hpp:336
void JoinPool()
Waits for pool to finish executing tasks. This method will block the calling code until thread is fin...
Definition AutonomyThread.hpp:439
void UpdateDetectedTags(std::vector< tagdetectutils::ArucoTag > &vNewlyDetectedTags)
Updates the detected torch tags including tracking the detected tags over time and removing tags that...
Definition TagDetector.cpp:695
CV_NODISCARD_STD Mat clone() const
bool empty() const
void download(OutputArray dst) const
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0)
COLOR_BGRA2BGR
std::vector< tagdetectutils::ArucoTag > Detect(const cv::Mat &cvFrame, const cv::aruco::ArucoDetector &cvArucoDetector)
Detect ArUco tags in the provided image.
Definition ArucoDetection.hpp:93
void DrawDetections(cv::Mat &cvDetectionsFrame, const std::vector< tagdetectutils::ArucoTag > &vDetectedTags)
Given a vector of tagdetectutils::ArucoTag structs draw each tag corner and ID onto the given image.
Definition ArucoDetection.hpp:138
void DrawDetections(cv::Mat &cvDetectionsFrame, const std::vector< tagdetectutils::ArucoTag > &vDetectedTags)
Given a vector of tagdetectutils::ArucoTag structs draw each tag corner and confidence onto the given...
Definition TorchTagDetection.hpp:103
std::vector< tagdetectutils::ArucoTag > Detect(const cv::Mat &cvFrame, yolomodel::pytorch::PyTorchInterpreter &trPyTorchDetector, const float fMinObjectConfidence=0.40f, const float fNMSThreshold=0.60f)
Detect ArUco tags in the provided image using a YOLO DNN model.
Definition TorchTagDetection.hpp:45
Represents a single ArUco tag. Combines attributes from TorchTag, TensorflowTag, and the original Aru...
Definition TagDetectionUtilty.hpp:59

◆ PooledLinearCode()

void TagDetector::PooledLinearCode ( )
override private virtual

This method holds the code that is run in the thread pool started by the ThreadedLinearCode() method. It copies the data from the different data objects to references of the same type stored in a queue filled by the Request methods.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-08

Implements AutonomyThread< void >.

370{
372 // Detection Overlay Frame queue.
374 // Acquire sole writing access to the detected tag overlay frame copy schedule.
375 std::unique_lock<std::shared_mutex> lkTagOverlayFrameQueue(m_muFrameCopyMutex);
376 // Check if there are unfulfilled requests.
377 if (!m_qDetectedTagDrawnOverlayFramesCopySchedule.empty())
378 {
379 // Get frame container out of queue.
380 containers::FrameFetchContainer<cv::Mat> stContainer = m_qDetectedTagDrawnOverlayFramesCopySchedule.front();
381 // Pop out of queue.
382 m_qDetectedTagDrawnOverlayFramesCopySchedule.pop();
383 // Release lock.
384 lkTagOverlayFrameQueue.unlock();
385
386 // Check which frame we should copy.
387 switch (stContainer.eFrameType)
388 {
389 case PIXEL_FORMATS::eArucoDetection: *stContainer.pFrame = m_cvArucoProcFrame.clone(); break;
390 default: *stContainer.pFrame = m_cvArucoProcFrame.clone(); break;
391 }
392
393 // Signal future that the frame has been successfully retrieved.
394 stContainer.pCopiedFrameStatus->set_value(true);
395 }
396
398 // ArucoTag queue.
400 // Acquire sole writing access to the detected ArucoTag copy schedule.
401 std::unique_lock<std::shared_mutex> lkArucoTagQueue(m_muArucoDataCopyMutex);
402 // Check if there are unfulfilled requests.
403 if (!m_qDetectedArucoTagCopySchedule.empty())
404 {
405 // Get frame container out of queue.
406 containers::DataFetchContainer<std::vector<tagdetectutils::ArucoTag>> stContainer = m_qDetectedArucoTagCopySchedule.front();
407 // Pop out of queue.
408 m_qDetectedArucoTagCopySchedule.pop();
409 // Release lock.
410 lkArucoTagQueue.unlock();
411
412 // Copy the detected tags to the target location
413 *stContainer.pData = m_vDetectedArucoTags;
414
415 // Signal future that the frame has been successfully retrieved.
416 stContainer.pCopiedDataStatus->set_value(true);
417 }
418}

◆ UpdateDetectedTags()

void TagDetector::UpdateDetectedTags ( std::vector< tagdetectutils::ArucoTag > &  vNewlyDetectedTags)
private

Updates the detected torch tags including tracking the detected tags over time and removing tags that haven't been seen for long enough.

Parameters
vNewlyDetectedTags - Input vector of tagdetectutils::ArucoTag structs containing the tag info.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-03-15
696{
697 // Check if tracking is enabled.
698 if (m_bEnableTracking)
699 {
700 // Check if the given tag vector is empty
701 if (vNewlyDetectedTags.empty())
702 {
703 // Since the tags are empty that means the detector has not detected any new ground truth tags.
704 // In this case we will fallback to relying on the multi-tracker to track the tags and just update the tags
705 // stored in the m_vDetectedArucoTags vector.
706 // This is necessary because the torch detector is not perfect and may not detect all tags in the frame
707 // and it doesn't have the ability to track tags over time.
708 // We will use the multi-tracker to track the tags over time and update the bounding box data for the tags.
709
710 // Update the multi-tracker with the current frame.
711 m_pMultiTracker->Update(m_cvFrame);
712 }
713 else
714 {
715 // Loop through the newly detected tags.
716 for (tagdetectutils::ArucoTag& stTag : vNewlyDetectedTags)
717 {
718 // Add the newly detected tags to the multi-tracker.
719 bool bMatchedTagToExistingTracker = m_pMultiTracker->InitTracker(m_cvFrame, stTag.pBoundingBox, constants::BBOX_TRACKER_TYPE);
720 // Check if the tag was matched to an existing tracker.
721 if (!bMatchedTagToExistingTracker)
722 {
723 // Add the new tag to the member variable list.
724 m_vDetectedArucoTags.emplace_back(stTag);
725 }
726 else
727 {
728 // Find the tag with the same bounding box pointer and update the ID and confidence.
729 for (tagdetectutils::ArucoTag& stExistingTag : m_vDetectedArucoTags)
730 {
731 // Check if the bounding box pointers are the same.
732 if (stTag.pBoundingBox == stExistingTag.pBoundingBox)
733 {
734 // Update the ID and confidence of the existing tag.
735 stExistingTag.nID = stTag.nID;
736 stExistingTag.dConfidence = stTag.dConfidence;
737 }
738 }
739 }
740 }
741
742 // Update the multi-tracker with the current frame.
743 m_pMultiTracker->Update(m_cvFrame);
744 }
745
746 // Loop through the detected tags and check if there are any we need to remove, and also update the time last seen.
747 for (std::vector<tagdetectutils::ArucoTag>::iterator itTag = m_vDetectedArucoTags.begin(); itTag != m_vDetectedArucoTags.end();)
748 {
749 // Check if the bounding box is 0,0,0,0.
750 if (itTag->pBoundingBox->x == 0 && itTag->pBoundingBox->y == 0 && itTag->pBoundingBox->width == 0 && itTag->pBoundingBox->height == 0)
751 {
752 // Remove the tag from the vector.
753 itTag = m_vDetectedArucoTags.erase(itTag);
754 }
755 else
756 {
757 ++itTag;
758 }
759 }
760 }
761 else
762 {
763 // If tracking is not enabled, we will just clear the detected tags and add the new ones.
764 m_vDetectedArucoTags.clear();
765 // Loop through the newly detected tags and add them to the detected tags vector.
766 for (tagdetectutils::ArucoTag& stTag : vNewlyDetectedTags)
767 {
768 // Set the tag creation time to 0. The tags aren't being tracked, so we can't really tell their age.
769 stTag.tmCreation = std::chrono::system_clock::time_point::min();
770
771 // Add the new tag to the member variable list.
772 m_vDetectedArucoTags.emplace_back(stTag);
773 }
774 }
775
776 // Check if we are using a ZED camera.
777 if (m_bUsingZedCamera)
778 {
779 // Check if the point cloud is empty.
780 if (!m_cvPointCloud.empty())
781 {
782 // Get the rover pose from the waypoint handler.
783 m_stRoverPose = globals::g_pWaypointHandler->SmartRetrieveRoverPose();
784 // Loop through the tags and use their center point to lookup their distance in the point cloud.
785 for (tagdetectutils::ArucoTag& stTag : m_vDetectedArucoTags)
786 {
787 // Use either width or height for the neighborhood size, whichever is smaller.
788 int nNeighborhoodSize = std::min(stTag.pBoundingBox->width, stTag.pBoundingBox->height);
789 // Geolocate the tag in the point cloud.
790 geoops::Waypoint stGeolocation =
791 geoloc::GeolocateBox(m_cvPointCloud, m_stRoverPose, cv::Point(stTag.pBoundingBox->x, stTag.pBoundingBox->y), nNeighborhoodSize);
792 // Since this is a tag detection, set the tag's waypoint type appropriately.
793 stGeolocation.eType = geoops::WaypointType::eTagWaypoint;
794
795 // Check if the geolocation is valid.
796 if (stGeolocation != geoops::Waypoint())
797 {
798 // Calculate the geo measurement and print the distance to the tag.
799 geoops::GeoMeasurement stMeasurement = geoops::CalculateGeoMeasurement(m_stRoverPose.GetUTMCoordinate(), stGeolocation.GetUTMCoordinate());
800
801 // Check that the distance is in a reasonable range.
802 if (stMeasurement.dDistanceMeters > 0.0 && stMeasurement.dDistanceMeters < 25.0)
803 {
804 // Set the tag's geolocation.
805 stTag.stGeolocatedPosition = stGeolocation;
806 // Use the rover heading and the azimuth angle to calculate the relative heading to the tag.
807 stTag.dYawAngle = numops::AngularDifference(m_stRoverPose.GetCompassHeading(), stMeasurement.dStartRelativeBearing);
808 // Set the straight line distance to the tag.
809 stTag.dStraightLineDistance = stMeasurement.dDistanceMeters;
810 }
811 }
812 }
813 }
814 }
815 else
816 {
817 // Estimate the positions of the tags using some basic trig.
818 for (tagdetectutils::ArucoTag& stTag : m_vDetectedArucoTags)
819 {
820 // Use some trig to get the location of the tag.
821 tagdetectutils::EstimatePoseFromCameraFrame(stTag);
822 }
823 }
824}
geoops::RoverPose SmartRetrieveRoverPose(bool bVIOHeading=true, bool bVIOTracking=false)
Retrieve the rover's current position and heading. Automatically picks between getting the position/h...
Definition WaypointHandler.cpp:742
geoops::Waypoint GeolocateBox(const cv::Mat &cvPointcloud, const geoops::RoverPose &stRoverPose, const cv::Point &cvPixel, int nNeighborhoodSize=5)
Converts pixel coordinates in a ZED2i pointcloud into global UTM coordinates, given the rover's curre...
Definition Geolocate.hpp:55
GeoMeasurement CalculateGeoMeasurement(const GPSCoordinate &stCoord1, const GPSCoordinate &stCoord2)
The shortest path between two points on an ellipsoid at (lat1, lon1) and (lat2, lon2) is called the g...
Definition GeospatialOperations.hpp:522
constexpr T AngularDifference(T tFirstValue, T tSecondValue)
Calculates the distance in degrees between two angles. This function accounts for wrap around so that...
Definition NumberOperations.hpp:204
void EstimatePoseFromCameraFrame(ArucoTag &stTag)
Estimate the pose of a tag from a camera frame.
Definition TagDetectionUtilty.hpp:214
This struct is used to store the distance, arc length, and relative bearing for a calculated geodesic...
Definition GeospatialOperations.hpp:82
double GetCompassHeading() const
Accessor for the Compass Heading private member.
Definition GeospatialOperations.hpp:756
const geoops::UTMCoordinate & GetUTMCoordinate() const
Accessor for the geoops::UTMCoordinate member variable.
Definition GeospatialOperations.hpp:736
This struct is used by the WaypointHandler class to store location, size, and type information about ...
Definition GeospatialOperations.hpp:392
const geoops::UTMCoordinate & GetUTMCoordinate() const
Accessor for the geoops::UTMCoordinate member variable.
Definition GeospatialOperations.hpp:477

The documentation for this class was generated from the following files: