Autonomy Software C++ 24.5.1
Welcome to the Autonomy Software repository of the Mars Rover Design Team (MRDT) at Missouri University of Science and Technology (Missouri S&T)! This API reference contains the source code and other resources for the development of the autonomy software for our Mars rover. The Autonomy Software project aims to compete in the University Rover Challenge (URC) by demonstrating advanced autonomous capabilities and robust navigation algorithms.
ObjectDetector Class Reference

This class implements a modular and easy-to-use object detector for a single camera. Given a camera name, this class will detect objects using the depth measure from a ZED camera and/or objects inferenced by a custom-trained model. This class and its detections are run in a separate thread. More...

#include <ObjectDetector.h>

Inheritance diagram for ObjectDetector:
Collaboration diagram for ObjectDetector:

Public Member Functions

 ObjectDetector (BasicCamera *pBasicCam, const int nNumDetectedObjectsRetrievalThreads=5, const bool bUsingGpuMats=false)
 Construct a new ObjectDetector object.
 
 ObjectDetector (ZEDCamera *pZEDCam, const int nNumDetectedObjectsRetrievalThreads=5, const bool bUsingGpuMats=false)
 Construct a new ObjectDetector object.
 
std::future< bool > RequestDepthDetectionOverlayFrame (cv::Mat &cvFrame)
 Request a copy of a frame containing the object detection overlays from the depth library.
 
std::future< bool > RequestTensorflowDetectionOverlayFrame (cv::Mat &cvFrame)
 Request a copy of a frame containing the object detection overlays from the tensorflow model.
 
std::future< bool > RequestDetectedDepthObjects (std::vector< depthobject::DepthObject > &vDepthObjects)
 Request the most up to date vector of detected objects from OpenCV's Depth algorithm.
 
std::future< bool > RequestDetectedTensorflowObjects (std::vector< tensorflowobject::TensorflowObject > &vTensorflowObjects)
 Request the most up to date vector of detected objects from our custom tensorflow model.
 
IPS & GetIPS ()
 Accessor for the Frame IPS private member.
 
- Public Member Functions inherited from AutonomyThread< void >
 AutonomyThread ()
 Construct a new Autonomy Thread object.
 
virtual ~AutonomyThread ()
 Destroy the Autonomy Thread object. If the parent object or main thread is destroyed or exited while this thread is still running, a race condition will occur. Stopping and joining the thread here ensures that the main program can't exit if the user forgot to stop and join the thread.
 
void Start ()
 When this method is called, it starts a new thread that runs the code within the ThreadedContinuousCode method. This is the user's main code that will run the important and continuous code for the class.
 
void RequestStop ()
 Signals the thread to stop executing user code and terminate. DOES NOT JOIN. This method will not force the thread to exit; if the user code is not written properly and contains a WHILE loop or any other long-executing or blocking code, the thread will not exit until the next iteration.
 
void Join ()
 Waits for the thread to finish executing and then closes it. This method will block the calling code until the thread is finished.
 
bool Joinable () const
 Check if the code within the thread and all pools created by it are finished executing and the thread is ready to be closed.
 
AutonomyThreadState GetThreadState () const
 Accessor for the Thread State private member.
 
IPS & GetIPS ()
 Accessor for the Frame IPS private member.
 

Private Member Functions

void ThreadedContinuousCode () override
 This code will run continuously in a separate thread. New frames from the given camera are grabbed and the objects for the camera image are detected, filtered, and stored. Then any requests for the current objects are fulfilled.
 
void PooledLinearCode () override
 This method holds the code that is run in the thread pool started by the ThreadedLinearCode() method. It copies the data from the different data objects to references of the same type stored in a queue filled by the Request methods.
 
void UpdateDetectedObjects (std::vector< depthobject::DepthObject > &vNewlyDetectedObjects)
 
void UpdateDetectedObjects (std::vector< tensorflowobject::TensorflowObject > &vNewlyDetectedObjects)
 

Private Attributes

Camera< cv::Mat > * m_pCamera
 
bool m_bUsingZedCamera
 
bool m_bUsingGpuMats
 
int m_nNumDetectedObjectsRetrievalThreads
 
IPS m_IPS
 
std::vector< depthobject::DepthObject > m_vDetectedDepthObjects
 
std::vector< tensorflowobject::TensorflowObject > m_vDetectedTensorObjects
 
cv::Mat m_cvNormalFrame
 
cv::Mat m_cvProcFrame
 
cv::Mat m_cvDepthMeasure
 
cv::cuda::GpuMat m_cvGPUNormalFrame
 
cv::cuda::GpuMat m_cvGPUDepthMeasure
 
std::queue< containers::FrameFetchContainer< cv::Mat > > m_qDetectedObjectDrawnOverlayFrames
 
std::queue< containers::DataFetchContainer< std::vector< depthobject::DepthObject > > > m_qDetectedDepthObjectCopySchedule
 
std::queue< containers::DataFetchContainer< std::vector< tensorflowobject::TensorflowObject > > > m_qDetectedTensorflowObjectCopySchedule
 
std::shared_mutex m_muPoolScheduleMutex
 
std::mutex m_muFrameCopyMutex
 
std::mutex m_muDepthDataCopyMutex
 
std::mutex m_muTensorflowDataCopyMutex
 

Additional Inherited Members

- Public Types inherited from AutonomyThread< void >
enum  AutonomyThreadState
 
- Protected Member Functions inherited from AutonomyThread< void >
void RunPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts/adds tasks to a thread pool that runs nNumTasksToQueue copies of the code within the PooledLinearCode() method using nNumThreads number of threads. This is meant to be used as an internal utility of the child class to further improve parallelization. Default value for nNumThreads is 2.
 
void RunDetachedPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts a thread pool full of threads that don't return std::futures (like a placeholder for the thread return type). This means the thread will not have a return type and there is no way to determine if the thread has finished other than calling the Join() method. Only use this if you want to 'set and forget'. It will be faster as it doesn't return futures. Runs PooledLinearCode() method code. This is meant to be used as an internal utility of the child class to further improve parallelization.
 
void ParallelizeLoop (const int nNumThreads, const N tTotalIterations, F &&tLoopFunction)
 Given a ref-qualified looping function and an arbitrary number of iterations, this method will divide up the loop and run each section in a thread pool. This function must not return anything. This method will block until the loop has completed.
 
void ClearPoolQueue ()
 Clears any tasks waiting to be run in the queue; tasks currently running will remain running.
 
void JoinPool ()
 Waits for the pool to finish executing tasks. This method will block the calling code until the pool is finished.
 
bool PoolJoinable () const
 Check if the internal pool threads are done executing code and the queue is empty.
 
void SetMainThreadIPSLimit (int nMaxIterationsPerSecond=0)
 Mutator for the Main Thread Max IPS private member.
 
int GetPoolNumOfThreads ()
 Accessor for the Pool Num Of Threads private member.
 
int GetPoolQueueLength ()
 Accessor for the Pool Queue Size private member.
 
std::vector< void > GetPoolResults ()
 Accessor for the Pool Results private member. The action of getting results will destroy and remove them from this object. This method blocks if the thread is not finished, so no need to call JoinPool() before getting results.
 
int GetMainThreadMaxIPS () const
 Accessor for the Main Thread Max IPS private member.
 
- Protected Attributes inherited from AutonomyThread< void >
IPS m_IPS
 

Detailed Description

This class implements a modular and easy-to-use object detector for a single camera. Given a camera name, this class will detect objects using the depth measure from a ZED camera and/or objects inferenced by a custom-trained model. This class and its detections are run in a separate thread.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-24
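As a quick usage sketch (not from the codebase itself): the camera pointer, thread count, and GpuMat flag below are assumptions chosen for illustration, but every call shown is part of the documented public interface.

// Minimal usage sketch. Assumes pZedCam is a valid ZEDCamera* owned by the caller.
ObjectDetector* pDetector = new ObjectDetector(pZedCam, 5, true);
pDetector->Start();    // Spawns the thread that runs ThreadedContinuousCode().

// Request the latest depth detections; do not read vObjects until the future resolves.
std::vector<depthobject::DepthObject> vObjects;
std::future<bool> fuCopied = pDetector->RequestDetectedDepthObjects(vObjects);
if (fuCopied.get())
{
    // vObjects is now a safe copy of the detector's current results.
}

// Stop and join the detector before the camera it points to is destroyed.
pDetector->RequestStop();
pDetector->Join();
delete pDetector;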

Constructor & Destructor Documentation

◆ ObjectDetector() [1/2]

ObjectDetector::ObjectDetector (BasicCamera *pBasicCam, const int nNumDetectedObjectsRetrievalThreads = 5, const bool bUsingGpuMats = false)

Construct a new ObjectDetector object.

Parameters
pBasicCam - A pointer to the BasicCam camera to get frames from for detection.
nNumDetectedObjectsRetrievalThreads - The number of threads to use when fulfilling requests for the detected depth objects. Default is 5.
bUsingGpuMats - Whether or not the given camera name will be using GpuMats.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-10
{
    // Initialize member variables.
    m_pCamera                             = pBasicCam;
    m_bUsingZedCamera                     = false;    // Toggle ZED functions off.
    m_nNumDetectedObjectsRetrievalThreads = nNumDetectedObjectsRetrievalThreads;
    m_bUsingGpuMats                       = bUsingGpuMats;
    m_IPS                                 = IPS();
}
This util class provides an easy way to keep track of iterations per second for any body of code.
Definition IPS.hpp:30

◆ ObjectDetector() [2/2]

ObjectDetector::ObjectDetector (ZEDCamera *pZEDCam, const int nNumDetectedObjectsRetrievalThreads = 5, const bool bUsingGpuMats = false)

Construct a new ObjectDetector object.

Parameters
pZEDCam - A pointer to the ZEDCam camera to get frames from for detection. Override for ZED camera.
nNumDetectedObjectsRetrievalThreads - The number of threads to use when fulfilling requests for the detected depth objects. Default is 5.
bUsingGpuMats - Whether or not the given camera name will be using GpuMats.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07
{
    // Initialize member variables.
    m_pCamera                             = pZEDCam;
    m_bUsingZedCamera                     = true;    // Toggle ZED functions on.
    m_nNumDetectedObjectsRetrievalThreads = nNumDetectedObjectsRetrievalThreads;
    m_bUsingGpuMats                       = bUsingGpuMats;
}
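A hedged sketch of choosing between the two overloads; the camera pointers are placeholders assumed to be created elsewhere:

// Basic USB/CPU camera path: frames arrive as cv::Mat.
ObjectDetector* pBasicDetector = new ObjectDetector(pBasicCam);
// ZED path with GPU mats enabled: frames arrive as cv::cuda::GpuMat and are downloaded internally.
ObjectDetector* pZedDetector = new ObjectDetector(pZedCam, 10, true);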

Member Function Documentation

◆ RequestDepthDetectionOverlayFrame()

std::future< bool > ObjectDetector::RequestDepthDetectionOverlayFrame ( cv::Mat &cvFrame )

Request a copy of a frame containing the object detection overlays from the depth library.

Parameters
cvFrame - The frame to copy the detection overlay image to.
Returns
std::future<bool> - The future that should be waited on before using the passed in frame. Future will be true or false based on whether or not the frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-11
{
    // Assemble the FrameFetchContainer.
    containers::FrameFetchContainer<cv::Mat> stContainer(cvFrame, PIXEL_FORMATS::eDepthDetection);

    // Acquire lock on pool copy queue.
    std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
    // Append frame fetch container to the schedule queue.
    m_qDetectedObjectDrawnOverlayFrames.push(stContainer);
    // Release lock on the frame schedule queue.
    lkScheduler.unlock();

    // Return the future from the promise stored in the container.
    return stContainer.pCopiedFrameStatus->get_future();
}
This struct is used to carry references to camera frames for scheduling and copying....
Definition FetchContainers.hpp:86
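Caller-side sketch of the request/future handshake; the pDetector instance and the display calls are assumptions for illustration:

// Request the depth-detection overlay; the frame is only valid after the future resolves.
cv::Mat cvOverlay;
std::future<bool> fuStatus = pDetector->RequestDepthDetectionOverlayFrame(cvOverlay);
if (fuStatus.get() && !cvOverlay.empty())
{
    cv::imshow("Depth Detections", cvOverlay);    // Safe to use the copied frame now.
    cv::waitKey(1);
}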

◆ RequestTensorflowDetectionOverlayFrame()

std::future< bool > ObjectDetector::RequestTensorflowDetectionOverlayFrame ( cv::Mat &cvFrame )

Request a copy of a frame containing the object detection overlays from the tensorflow model.

Parameters
cvFrame - The frame to copy the detection overlay image to.
Returns
std::future<bool> - The future that should be waited on before using the passed in frame. Future will be true or false based on whether or not the frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-11
{
    // Assemble the FrameFetchContainer.
    containers::FrameFetchContainer<cv::Mat> stContainer(cvFrame, PIXEL_FORMATS::eTensorflowDetection);

    // Acquire lock on pool copy queue.
    std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
    // Append frame fetch container to the schedule queue.
    m_qDetectedObjectDrawnOverlayFrames.push(stContainer);
    // Release lock on the frame schedule queue.
    lkScheduler.unlock();

    // Return the future from the promise stored in the container.
    return stContainer.pCopiedFrameStatus->get_future();
}

◆ RequestDetectedDepthObjects()

std::future< bool > ObjectDetector::RequestDetectedDepthObjects ( std::vector< depthobject::DepthObject > &  vDepthObjects)

Request the most up to date vector of detected objects from OpenCV's Depth algorithm.

Parameters
vDepthObjects - The vector the detected depth objects will be saved to.
Returns
std::future<bool> - The future that should be waited on before using the passed in object vector. Future will be true or false based on whether or not the objects were successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07
{
    // Assemble the DataFetchContainer.
    containers::DataFetchContainer<std::vector<depthobject::DepthObject>> stContainer(vDepthObjects);

    // Acquire lock on pool copy queue.
    std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
    // Append detected object fetch container to the schedule queue.
    m_qDetectedDepthObjectCopySchedule.push(stContainer);
    // Release lock on the frame schedule queue.
    lkScheduler.unlock();

    // Return the future from the promise stored in the container.
    return stContainer.pCopiedDataStatus->get_future();
}
This struct is used to carry references to any datatype for scheduling and copying....
Definition FetchContainers.hpp:162
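If the calling thread cannot afford to block, the returned future can also be polled instead of calling get() right away; a sketch under that assumption:

// Non-blocking variant: poll the future while doing other work.
std::vector<depthobject::DepthObject> vDepthObjects;
std::future<bool> fuCopied = pDetector->RequestDetectedDepthObjects(vDepthObjects);
while (fuCopied.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready)
{
    // ... do other work; the detector thread fulfills the request on its next iteration ...
}
// The future is ready; check whether the copy actually succeeded before using vDepthObjects.
bool bSuccess = fuCopied.get();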

◆ RequestDetectedTensorflowObjects()

std::future< bool > ObjectDetector::RequestDetectedTensorflowObjects ( std::vector< tensorflowobject::TensorflowObject > &  vTensorflowObjects)

Request the most up to date vector of detected objects from our custom tensorflow model.

Parameters
vTensorflowObjects - The vector the detected tensorflow objects will be saved to.
Returns
std::future<bool> - The future that should be waited on before using the passed in object vector. Future will be true or false based on whether or not the objects were successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07
{
    // Assemble the DataFetchContainer.
    containers::DataFetchContainer<std::vector<tensorflowobject::TensorflowObject>> stContainer(vTensorflowObjects);

    // Acquire lock on pool copy queue.
    std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
    // Append detected object fetch container to the schedule queue.
    m_qDetectedTensorflowObjectCopySchedule.push(stContainer);
    // Release lock on the frame schedule queue.
    lkScheduler.unlock();

    // Return the future from the promise stored in the container.
    return stContainer.pCopiedDataStatus->get_future();
}

◆ GetIPS()

IPS & ObjectDetector::GetIPS ( )

Accessor for the Frame IPS private member.

Returns
IPS& - The detector object's iterations-per-second counter.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-10
{
    // Return Iterations Per Second counter.
    return m_IPS;
}

◆ ThreadedContinuousCode()

void ObjectDetector::ThreadedContinuousCode ( )
override private virtual

This code will run continuously in a separate thread. New frames from the given camera are grabbed and the objects for the camera image are detected, filtered, and stored. Then any requests for the current objects are fulfilled.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-07

Implements AutonomyThread< void >.

{
    // Create futures for indicating when the frames have been copied.
    std::future<bool> fuNormalFrame;
    std::future<bool> fuDepthMeasureCopyStatus;

    // Check if the camera is setup to use CPU or GPU mats.
    if (m_bUsingZedCamera)
    {
        // Check if the ZED camera is returning cv::cuda::GpuMat or cv::Mat.
        if (m_bUsingGpuMats)
        {
            // Grabs normal frame and depth measure from ZEDCam. Dynamic casts Camera to ZEDCamera* so we can use ZEDCam methods.
            fuNormalFrame            = dynamic_cast<ZEDCamera*>(m_pCamera)->RequestFrameCopy(m_cvGPUNormalFrame);
            fuDepthMeasureCopyStatus = dynamic_cast<ZEDCamera*>(m_pCamera)->RequestDepthCopy(m_cvGPUDepthMeasure);

            // Wait for requested frames to be retrieved.
            if (fuDepthMeasureCopyStatus.get() && fuNormalFrame.get())
            {
                // Download mats from GPU memory.
                m_cvGPUNormalFrame.download(m_cvNormalFrame);
                m_cvGPUDepthMeasure.download(m_cvDepthMeasure);
            }
            else
            {
                // Submit logger message.
                LOG_WARNING(logging::g_qSharedLogger, "ObjectDetector unable to get normal frame or depth measure from ZEDCam!");
            }
        }
        else
        {
            // Grabs normal frame and depth measure from ZEDCam. Dynamic casts Camera to ZEDCamera* so we can use ZEDCam methods.
            fuNormalFrame            = dynamic_cast<ZEDCamera*>(m_pCamera)->RequestFrameCopy(m_cvNormalFrame);
            fuDepthMeasureCopyStatus = dynamic_cast<ZEDCamera*>(m_pCamera)->RequestDepthCopy(m_cvDepthMeasure);

            // Wait for requested frames to be retrieved.
            if (!fuDepthMeasureCopyStatus.get() || !fuNormalFrame.get())
            {
                // Submit logger message.
                LOG_WARNING(logging::g_qSharedLogger, "ObjectDetector unable to get normal frame or depth measure from ZEDCam!");
            }
        }
    }
    else
    {
        // Grab frames from camera.
        fuNormalFrame = dynamic_cast<BasicCamera*>(m_pCamera)->RequestFrameCopy(m_cvNormalFrame);

        // Wait for requested frames to be retrieved.
        if (!fuNormalFrame.get())
        {
            // Submit logger message.
            LOG_WARNING(logging::g_qSharedLogger, "ObjectDetector unable to get requested frames from BasicCam!");
        }
    }

    // Call detection methods and inference.
    // TODO: Implement when ready, commented out to suppress warnings.
    // Merge the newly detected objects with the pre-existing detected objects.
    // this->UpdateDetectedObjects(vNewlyDetectedObjects);

    // Call FPS tick.
    m_IPS.Tick();

    // Acquire a shared_lock on the detected objects copy queue.
    std::shared_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
    // Check if any of the detected object copy queues have pending requests.
    if (!m_qDetectedDepthObjectCopySchedule.empty() || !m_qDetectedTensorflowObjectCopySchedule.empty() || !m_qDetectedObjectDrawnOverlayFrames.empty())
    {
        // Use the longest queue length so every pending request gets a pooled task.
        size_t siQueueLength =
            std::max({m_qDetectedDepthObjectCopySchedule.size(), m_qDetectedTensorflowObjectCopySchedule.size(), m_qDetectedObjectDrawnOverlayFrames.size()});
        // Start the thread pool to store multiple copies of the detected objects to the requesting threads.
        this->RunDetachedPool(siQueueLength, m_nNumDetectedObjectsRetrievalThreads);
        // Wait for thread pool to finish.
        this->JoinPool();
        // Release lock on frame copy queue.
        lkSchedulers.unlock();
    }
}
void RunDetachedPool(const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
When this method is called, it starts a thread pool full of threads that don't return std::futures (l...
Definition AutonomyThread.hpp:336
void JoinPool()
Waits for pool to finish executing tasks. This method will block the calling code until thread is fin...
Definition AutonomyThread.hpp:439
This class serves as a middle inheritor between the Camera interface and the BasicCam class....
Definition BasicCamera.hpp:28
void Tick()
This method is used to update the iterations per second counter and recalculate all of the IPS metric...
Definition IPS.hpp:138
This class serves as a middle inheritor between the Camera interface and the ZEDCam class....
Definition ZEDCamera.hpp:33
void download(OutputArray dst) const
Here is the call graph for this function:
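To make the inherited threading model concrete, here is a minimal, illustrative skeleton of the pattern ObjectDetector follows: ThreadedContinuousCode() loops on its own thread, and when requests are queued it launches a detached pool whose workers each run PooledLinearCode() once. The class name and members below are hypothetical, not part of the codebase.

#include <queue>
#include <shared_mutex>

// Illustrative skeleton of the AutonomyThread<void> pattern used above (not real project code).
class ExampleDetector : public AutonomyThread<void>
{
    private:
        std::queue<int> m_qRequests;            // Stand-in for the copy-schedule queues.
        std::shared_mutex m_muScheduleMutex;    // Guards the request queue.

        void ThreadedContinuousCode() override
        {
            // ... grab frames and run detection here ...

            // Fulfill any queued requests with a detached pool, one pooled task per request.
            std::shared_lock<std::shared_mutex> lkSchedule(m_muScheduleMutex);
            if (!m_qRequests.empty())
            {
                this->RunDetachedPool(m_qRequests.size(), 5);
                this->JoinPool();    // Block until every queued request has been serviced.
            }
        }

        void PooledLinearCode() override
        {
            // Pop one request, copy the shared result into it, and set its promise to true.
        }
};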

◆ PooledLinearCode()

void ObjectDetector::PooledLinearCode ( )
override private virtual

This method holds the code that is run in the thread pool started by the ThreadedLinearCode() method. It copies the data from the different data objects to references of the same type stored in a queue filled by the Request methods.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-08

Implements AutonomyThread< void >.

{
    // Detection Overlay Frame queue.
    // Acquire sole writing access to the detectedObjectCopySchedule.
    std::unique_lock<std::mutex> lkObjectOverlayFrameQueue(m_muFrameCopyMutex);
    // Check if there are unfulfilled requests.
    if (!m_qDetectedObjectDrawnOverlayFrames.empty())
    {
        // Get frame container out of queue.
        containers::FrameFetchContainer<cv::Mat> stContainer = m_qDetectedObjectDrawnOverlayFrames.front();
        // Pop out of queue.
        m_qDetectedObjectDrawnOverlayFrames.pop();
        // Release lock.
        lkObjectOverlayFrameQueue.unlock();

        // Check which frame we should copy.
        switch (stContainer.eFrameType)
        {
            case PIXEL_FORMATS::eDepthDetection: *(stContainer.pFrame) = m_cvProcFrame; break;
            case PIXEL_FORMATS::eTensorflowDetection: *(stContainer.pFrame) = m_cvProcFrame; break;
            default: *(stContainer.pFrame) = m_cvProcFrame;
        }

        // Signal future that the frame has been successfully retrieved.
        stContainer.pCopiedFrameStatus->set_value(true);
    }

    // DepthObject queue.
    // Acquire sole writing access to the detectedObjectCopySchedule.
    std::unique_lock<std::mutex> lkDepthObjectQueue(m_muDepthDataCopyMutex);
    // Check if there are unfulfilled requests.
    if (!m_qDetectedDepthObjectCopySchedule.empty())
    {
        // Get data container out of queue.
        containers::DataFetchContainer<std::vector<depthobject::DepthObject>> stContainer = m_qDetectedDepthObjectCopySchedule.front();
        // Pop out of queue.
        m_qDetectedDepthObjectCopySchedule.pop();
        // Release lock.
        lkDepthObjectQueue.unlock();

        // Copy the detected objects to the target location.
        *(stContainer.pData) = m_vDetectedDepthObjects;

        // Signal future that the objects have been successfully retrieved.
        stContainer.pCopiedDataStatus->set_value(true);
    }

    // TensorflowObject queue.
    // Acquire sole writing access to the detectedObjectCopySchedule.
    std::unique_lock<std::mutex> lkTensorflowObjectQueue(m_muTensorflowDataCopyMutex);
    // Check if there are unfulfilled requests.
    if (!m_qDetectedTensorflowObjectCopySchedule.empty())
    {
        // Get data container out of queue.
        containers::DataFetchContainer<std::vector<tensorflowobject::TensorflowObject>> stContainer = m_qDetectedTensorflowObjectCopySchedule.front();
        // Pop out of queue.
        m_qDetectedTensorflowObjectCopySchedule.pop();
        // Release lock.
        lkTensorflowObjectQueue.unlock();

        // Copy the detected objects to the target location.
        *(stContainer.pData) = m_vDetectedTensorObjects;

        // Signal future that the objects have been successfully retrieved.
        stContainer.pCopiedDataStatus->set_value(true);
    }
}

The documentation for this class was generated from the following files: