Autonomy Software C++ 24.5.1
Welcome to the Autonomy Software repository of the Mars Rover Design Team (MRDT) at Missouri University of Science and Technology (Missouri S&T)! This API reference contains the source code and other resources for the development of the autonomy software for our Mars rover. The Autonomy Software project aims to compete in the University Rover Challenge (URC) by demonstrating advanced autonomous capabilities and robust navigation algorithms.
SIMZEDCam Class Reference

This class implements and interfaces with the SIM cameras and data. It is designed so that multiple other classes/threads can safely call any method of an object of this class without resource corruption or slowdown of the camera. More...

#include <SIMZEDCam.h>

Inheritance diagram for SIMZEDCam:
Collaboration diagram for SIMZEDCam:

Public Member Functions

 SIMZEDCam (const std::string szCameraPath, const int nPropResolutionX, const int nPropResolutionY, const int nPropFramesPerSecond, const double dPropHorizontalFOV, const double dPropVerticalFOV, const bool bEnableRecordingFlag, const int nNumFrameRetrievalThreads=10, const unsigned int unCameraSerialNumber=0)
 Construct a new SIMZEDCam object.
 
 ~SIMZEDCam ()
 Destroy the SIMZEDCam object.
 
std::future< bool > RequestFrameCopy (cv::Mat &cvFrame) override
 Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. Remember, this code will be run in whatever class/thread calls it.
 
std::future< bool > RequestDepthCopy (cv::Mat &cvDepth, const bool bRetrieveMeasure=true)
 Requests a depth measure or image from the camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. This image has the same shape as a grayscale image, but the values represent the depth in MILLIMETERS. The ZED SDK will always return this measure in MILLIMETERS.
 
std::future< bool > RequestPointCloudCopy (cv::Mat &cvPointCloud)
 Requests a point cloud image from the camera. This image has the same resolution as a normal image but with three XYZ values replacing the old color values in the 3rd dimension. The units and sign of the XYZ values are determined by ZED_MEASURE_UNITS and ZED_COORD_SYSTEM constants set in AutonomyConstants.h.
 
std::future< bool > RequestPositionalPoseCopy (Pose &stPose) override
 Puts a Pose pointer into a queue so a copy of a Pose from the camera can be written to it.
 
std::future< bool > RequestFusionGeoPoseCopy (sl::GeoPose &slGeoPose) override
 Puts a sl::GeoPose pointer into a queue so a copy of a GeoPose from the camera can be written to it.
 
std::future< bool > RequestSensorsCopy (sl::SensorsData &slSensorsData) override
 Requests a copy of the sensors data from the camera.
 
sl::ERROR_CODE ResetPositionalTracking () override
 This method is used to reset the positional tracking of the camera. Because this is a simulation camera and the ZED SDK is not available, this method will just reset the offsets to zero.
 
sl::ERROR_CODE RebootCamera () override
 This method is used to reboot the camera. This method will stop the camera thread, join the camera thread, destroy the camera stream objects, reconstruct the camera stream objects, set the frame callbacks, and restart the camera thread. This simulates a camera reboot since the ZED SDK is not available.
 
sl::FUSION_ERROR_CODE SubscribeFusionToCameraUUID (sl::CameraIdentifier &slCameraUUID) override
 This method is used to subscribe the sl::Fusion object to the camera with the given UUID. Given that this is a simulation camera, and the ZED SDK is not available, this method will do nothing.
 
sl::CameraIdentifier PublishCameraToFusion () override
 This method is used to publish the camera to the sl::Fusion object. Since this is a simulation camera this will do nothing.
 
sl::ERROR_CODE EnablePositionalTracking (const float fExpectedCameraHeightFromFloorTolerance=constants::ZED_DEFAULT_FLOOR_PLANE_ERROR) override
 This method is used to enable positional tracking on the camera. Since this is a simulation camera, this method will just set the member variable and then use the NavBoard to simulate the camera's position.
 
void DisablePositionalTracking () override
 This method is used to disable positional tracking on the camera. Since this is a simulation camera, this method will just set the member variable.
 
void SetPositionalPose (const double dX, const double dY, const double dZ, const double dXO, const double dYO, const double dZO) override
 This method is used to set the positional pose of the camera. Since this is a simulation camera, this method will just set the offset member variables.
 
bool GetCameraIsOpen () override
 Accessor for the camera open status.
 
bool GetUsingGPUMem () const override
 Returns if the camera is using GPU memory. This is a simulation camera, so this method will always return false.
 
std::string GetCameraModel () override
 Accessor for the name of this model of camera.
 
bool GetPositionalTrackingEnabled () override
 Accessor for whether the camera's positional tracking is enabled. Since this is a simulation camera, this method will just return the member variable.
 
- Public Member Functions inherited from ZEDCamera
 ZEDCamera (const int nPropResolutionX, const int nPropResolutionY, const int nPropFramesPerSecond, const double dPropHorizontalFOV, const double dPropVerticalFOV, const bool bEnableRecordingFlag, const bool bMemTypeGPU, const bool bUseHalfDepthPrecision, const bool bEnableFusionMaster, const int nNumFrameRetrievalThreads, const unsigned int unCameraSerialNumber)
 Construct a new ZEDCamera object.
 
virtual ~ZEDCamera ()=default
 Destroy the ZEDCamera object.
 
virtual std::future< bool > RequestFrameCopy (cv::cuda::GpuMat &cvGPUFrame)
 Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it.
 
virtual std::future< bool > RequestDepthCopy (cv::cuda::GpuMat &cvGPUDepth, const bool bRetrieveMeasure=true)
 Puts a frame pointer into a queue so a copy of a depth frame from the camera can be written to it.
 
virtual std::future< bool > RequestPointCloudCopy (cv::cuda::GpuMat &cvGPUPointCloud)
 Puts a frame pointer into a queue so a copy of a point cloud from the camera can be written to it.
 
virtual std::future< bool > RequestFloorPlaneCopy (sl::Plane &slFloorPlane)
 Puts a FloorPlane pointer into a queue so a copy of a FloorPlane from the camera can be written to it.
 
virtual std::future< bool > RequestObjectsCopy (std::vector< sl::ObjectData > &vObjectData)
 Puts a vector of ObjectData pointers into a queue so a copy of a vector of ObjectData from the camera can be written to it.
 
virtual std::future< bool > RequestBatchedObjectsCopy (std::vector< sl::ObjectsBatch > &vBatchedObjectData)
 Puts a vector of ObjectsBatch pointers into a queue so a copy of a vector of ObjectsBatch from the camera can be written to it.
 
virtual sl::ERROR_CODE TrackCustomBoxObjects (std::vector< ZedObjectData > &vCustomObjects)
 Tracks custom bounding boxes in the camera's field of view.
 
virtual sl::FUSION_ERROR_CODE IngestGPSDataToFusion (geoops::GPSCoordinate stNewGPSLocation)
 Ingests GPS data into the fusion object.
 
virtual sl::ERROR_CODE EnableSpatialMapping ()
 Enables spatial mapping.
 
virtual void DisableSpatialMapping ()
 Disables spatial mapping.
 
virtual sl::ERROR_CODE EnableObjectDetection (const bool bEnableBatching=false)
 Enables object detection.
 
virtual void DisableObjectDetection ()
 Disables object detection.
 
virtual bool GetIsFusionMaster () const
 Accessor for if this ZED is running a fusion instance.
 
virtual unsigned int GetCameraSerial ()
 Accessor for the Camera Serial private member.
 
virtual sl::PositionalTrackingStatus GetPositionalTrackingState ()
 Accessor for the Positional Tracking State private member.
 
virtual sl::FusedPositionalTrackingStatus GetFusedPositionalTrackingState ()
 Accessor for the Fused Positional Tracking State private member.
 
virtual sl::SPATIAL_MAPPING_STATE GetSpatialMappingState ()
 Accessor for the Spatial Mapping State private member.
 
virtual sl::SPATIAL_MAPPING_STATE ExtractSpatialMapAsync (std::future< sl::Mesh > &fuMeshFuture)
 Puts a Mesh pointer into a queue so a copy of a spatial mapping mesh from the camera can be written to it.
 
virtual bool GetObjectDetectionEnabled ()
 Accessor for the Object Detection Enabled private member.
 
- Public Member Functions inherited from Camera< cv::Mat >
 Camera (const int nPropResolutionX, const int nPropResolutionY, const int nPropFramesPerSecond, const PIXEL_FORMATS ePropPixelFormat, const double dPropHorizontalFOV, const double dPropVerticalFOV, const bool bEnableRecordingFlag, const int nNumFrameRetrievalThreads=5)
 Construct a new Camera object.
 
virtual ~Camera ()
 Destroy the Camera object.
 
cv::Size GetPropResolution () const
 Accessor for the Prop Resolution private member.
 
int GetPropFramesPerSecond () const
 Accessor for the Prop Frames Per Second private member.
 
PIXEL_FORMATS GetPropPixelFormat () const
 Accessor for the Prop Pixel Format private member.
 
double GetPropHorizontalFOV () const
 Accessor for the Prop Horizontal FOV private member.
 
double GetPropVerticalFOV () const
 Accessor for the Prop Vertical FOV private member.
 
bool GetEnableRecordingFlag () const
 Accessor for the Enable Recording Flag private member.
 
void SetEnableRecordingFlag (const bool bEnableRecordingFlag)
 Mutator for the Enable Recording Flag private member.
 
- Public Member Functions inherited from AutonomyThread< void >
 AutonomyThread ()
 Construct a new Autonomy Thread object.
 
virtual ~AutonomyThread ()
 Destroy the Autonomy Thread object. If the parent object or main thread is destroyed or exited while this thread is still running, a race condition will occur. Stopping and joining the thread here ensures that the main program can't exit if the user forgot to stop and join the thread.
 
void Start ()
 When this method is called, it starts a new thread that runs the code within the ThreadedContinuousCode method. This is the user's main code that will run the important and continuous code for the class.
 
void RequestStop ()
 Signals the thread to stop executing user code and terminate. DOES NOT JOIN. This method will not force the thread to exit; if the user code is not written properly and contains a WHILE statement or other long-executing or blocking code, the thread will not exit until the next iteration.
 
void Join ()
 Waits for thread to finish executing and then closes thread. This method will block the calling code until thread is finished.
 
bool Joinable () const
 Check if the code within the thread and all pools created by it are finished executing and the thread is ready to be closed.
 
AutonomyThreadState GetThreadState () const
 Accessor for the Threads State private member.
 
IPS GetIPS ()
 Accessor for the Frame IPS private member.
 

Private Member Functions

void ThreadedContinuousCode () override
 The code inside this private method runs in a separate thread, but still has access to this*. This method continuously gets new frames from the camera streams and stores them in member variables. Then a thread pool is started and joined once per iteration to mass copy the frames and/or measures to any other threads waiting in the queues.
 
void PooledLinearCode () override
 This method holds the code that is run in the thread pool started by the ThreadedContinuousCode() method. It copies the data from the different data objects to references of the same type stored in a vector queued up by the Request methods.
 
void SetCallbacks ()
 This method sets the callbacks for the WebRTC connections.
 
void EstimateDepthMeasure (const cv::Mat &cvDepthImage, cv::Mat &cvDepthMeasure)
 This method estimates the depth measure from the depth image.
 
void CalculatePointCloud (const cv::Mat &cvDepthMeasure, cv::Mat &cvPointCloud)
 This method calculates a point cloud from the decoded depth measure using some simple trig and the camera FOV.
 

Private Attributes

std::string m_szCameraPath
 
std::atomic< bool > m_bCameraPositionalTrackingEnabled
 
sl::SensorsData m_stIMUData
 
std::unique_ptr< WebRTC > m_pRGBStream
 
std::unique_ptr< WebRTC > m_pDepthImageStream
 
double m_dPoseOffsetX
 
double m_dPoseOffsetY
 
double m_dPoseOffsetZ
 
double m_dPoseOffsetXO
 
double m_dPoseOffsetYO
 
double m_dPoseOffsetZO
 
geoops::RoverPose m_stCurrentRoverPose
 
std::shared_mutex m_muCurrentRoverPoseMutex
 
cv::Mat m_cvFrame
 
cv::Mat m_cvDepthImageBuffer
 
cv::Mat m_cvDepthImage
 
cv::Mat m_cvDepthMeasure
 
cv::Mat m_cvPointCloud
 
std::queue< containers::DataFetchContainer< Pose > > m_qPoseCopySchedule
 
std::queue< containers::DataFetchContainer< sl::GeoPose > > m_qGeoPoseCopySchedule
 
std::queue< containers::DataFetchContainer< sl::SensorsData > > m_qSensorsCopySchedule
 
std::shared_mutex m_muWebRTCRGBImageCopyMutex
 
std::shared_mutex m_muWebRTCDepthImageCopyMutex
 
std::shared_mutex m_muPoseCopyMutex
 
std::shared_mutex m_muGeoPoseCopyMutex
 
std::shared_mutex m_muSensorsCopyMutex
 
std::atomic< bool > m_bPosesQueued
 
std::atomic< bool > m_bGeoPosesQueued
 
std::atomic< bool > m_bSensorsQueued
 

Additional Inherited Members

- Public Types inherited from AutonomyThread< void >
enum  AutonomyThreadState
 
- Protected Member Functions inherited from AutonomyThread< void >
void RunPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts/adds tasks to a thread pool that runs nNumTasksToQueue copies of the code within the PooledLinearCode() method using nNumThreads number of threads. This is meant to be used as an internal utility of the child class to further improve parallelization. Default value for nNumThreads is 2.
 
void RunDetachedPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts a thread pool full of threads that don't return std::futures (like a placeholder for the thread return type). This means the thread will not have a return type and there is no way to determine if the thread has finished other than calling the Join() method. Only use this if you want to 'set and forget'. It will be faster as it doesn't return futures. Runs PooledLinearCode() method code. This is meant to be used as an internal utility of the child class to further improve parallelization.
 
void ParallelizeLoop (const int nNumThreads, const N tTotalIterations, F &&tLoopFunction)
 Given a ref-qualified looping function and an arbitrary number of iterations, this method will divide up the loop and run each section in a thread pool. This function must not return anything. This method will block until the loop has completed.
 
void ClearPoolQueue ()
 Clears any tasks waiting to be run in the queue; tasks currently running will remain running.
 
void JoinPool ()
 Waits for pool to finish executing tasks. This method will block the calling code until thread is finished.
 
bool PoolJoinable () const
 Check if the internal pool threads are done executing code and the queue is empty.
 
void SetMainThreadIPSLimit (int nMaxIterationsPerSecond=0)
 Mutator for the Main Thread Max IPS private member.
 
int GetPoolNumOfThreads ()
 Accessor for the Pool Num Of Threads private member.
 
int GetPoolQueueLength ()
 Accessor for the Pool Queue Size private member.
 
std::vector< void > GetPoolResults ()
 Accessor for the Pool Results private member. The action of getting results will destroy and remove them from this object. This method blocks if the thread is not finished, so no need to call JoinPool() before getting results.
 
int GetMainThreadMaxIPS () const
 Accessor for the Main Thread Max IPS private member.
 
- Protected Attributes inherited from ZEDCamera
const std::memory_order ATOMIC_MEMORY_ORDER_METHOD = std::memory_order_relaxed
 
unsigned int m_unCameraSerialNumber
 
bool m_bCameraIsFusionMaster
 
- Protected Attributes inherited from Camera< cv::Mat >
int m_nPropResolutionX
 
int m_nPropResolutionY
 
int m_nPropFramesPerSecond
 
int m_nNumFrameRetrievalThreads
 
PIXEL_FORMATS m_ePropPixelFormat
 
double m_dPropHorizontalFOV
 
double m_dPropVerticalFOV
 
std::atomic_bool m_bEnableRecordingFlag
 
std::queue< containers::FrameFetchContainer< cv::Mat > > m_qFrameCopySchedule
 
std::shared_mutex m_muPoolScheduleMutex
 
std::shared_mutex m_muFrameCopyMutex
 
- Protected Attributes inherited from AutonomyThread< void >
IPS m_IPS
 

Detailed Description

This class implements and interfaces with the SIM cameras and data. It is designed so that multiple other classes/threads can safely call any method of an object of this class without resource corruption or slowdown of the camera.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30
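
A minimal usage sketch (not from the source; the stream path and property values are hypothetical, and error handling is omitted):

#include "SIMZEDCam.h"

#include <future>
#include <opencv2/core.hpp>

int main()
{
    // Hypothetical simulator stream path and ZED-like camera properties.
    SIMZEDCam cmSimCam("http://localhost:80",    // szCameraPath (hypothetical)
                       1280, 720,                // nPropResolutionX/Y
                       30,                       // nPropFramesPerSecond
                       110.0, 70.0,              // dPropHorizontalFOV / dPropVerticalFOV
                       false);                   // bEnableRecordingFlag

    // Start the continuous camera thread (inherited from AutonomyThread).
    cmSimCam.Start();

    // Request a copy of the latest frame and wait on the future before using it.
    cv::Mat cvFrame;
    std::future<bool> fuCopied = cmSimCam.RequestFrameCopy(cvFrame);
    bool bSuccess = fuCopied.get();

    // Stop and join the camera thread before the object goes out of scope.
    cmSimCam.RequestStop();
    cmSimCam.Join();
    return bSuccess ? 0 : 1;
}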

Constructor & Destructor Documentation

◆ SIMZEDCam()

SIMZEDCam::SIMZEDCam ( const std::string  szCameraPath,
const int  nPropResolutionX,
const int  nPropResolutionY,
const int  nPropFramesPerSecond,
const double  dPropHorizontalFOV,
const double  dPropVerticalFOV,
const bool  bEnableRecordingFlag,
const int  nNumFrameRetrievalThreads = 10,
const unsigned int  unCameraSerialNumber = 0 
)

Construct a new SIMZEDCam object.

Parameters
szCameraPath- The file path to the camera hardware.
nPropResolutionX- X res of camera.
nPropResolutionY- Y res of camera.
nPropFramesPerSecond- FPS camera is running at.
ePropPixelFormat- The pixel layout/format of the image.
dPropHorizontalFOV- The horizontal field of view.
dPropVerticalFOV- The vertical field of view.
bEnableRecordingFlag- Whether or not this camera should be recorded.
nNumFrameRetrievalThreads- The number of threads to use for frame queueing and copying.
unCameraSerialNumber- The serial number of the camera.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30
50 :
51 ZEDCamera(nPropResolutionX,
52 nPropResolutionY,
53 nPropFramesPerSecond,
54 dPropHorizontalFOV,
55 dPropVerticalFOV,
56 bEnableRecordingFlag,
57 false,
58 false,
59 false,
60 nNumFrameRetrievalThreads,
61 unCameraSerialNumber)
62{
63 // Assign member variables.
64 m_szCameraPath = szCameraPath;
65 m_nNumFrameRetrievalThreads = nNumFrameRetrievalThreads;
66
67 // Initialize OpenCV mats to a black/empty image the size of the camera resolution.
68 m_cvFrame = cv::Mat::zeros(nPropResolutionY, nPropResolutionX, CV_8UC4);
69 m_cvDepthImage = cv::Mat::zeros(nPropResolutionY, nPropResolutionX, CV_8UC1);
70 m_cvDepthMeasure = cv::Mat::zeros(nPropResolutionY, nPropResolutionX, CV_32FC1);
71 m_cvPointCloud = cv::Mat::zeros(nPropResolutionY, nPropResolutionX, CV_32FC4);
72
73 // Construct camera stream objects. Append proper camera path arguments to each URL camera path.
74 m_pRGBStream = std::make_unique<WebRTC>(szCameraPath, "ZEDFrontRGB");
75 m_pDepthImageStream = std::make_unique<WebRTC>(szCameraPath, "ZEDFrontDepthImage");
76
77 // Set callbacks for the WebRTC connections.
78 this->SetCallbacks();
79
80 // Set max FPS of the ThreadedContinuousCode method.
81 this->SetMainThreadIPSLimit(nPropFramesPerSecond);
82}
void SetMainThreadIPSLimit(int nMaxIterationsPerSecond=0)
Mutator for the Main Thread Max IPS private member.
Definition AutonomyThread.hpp:467
void SetCallbacks()
This method sets the callbacks for the WebRTC connections.
Definition SIMZEDCam.cpp:115
This class serves as a middle inheritor between the Camera interface and the ZEDCam class....
Definition ZEDCamera.hpp:33
static CV_NODISCARD_STD MatExpr zeros(int rows, int cols, int type)
Here is the call graph for this function:

◆ ~SIMZEDCam()

SIMZEDCam::~SIMZEDCam ( )

Destroy the SIMZEDCam object.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30
92{
93 // Close the WebRTC connections.
94 if (m_pRGBStream)
95 {
96 m_pRGBStream->CloseConnection();
97 }
98 if (m_pDepthImageStream)
99 {
100 m_pDepthImageStream->CloseConnection();
101 }
102
103 // Stop threaded code.
104 this->RequestStop();
105 this->Join();
106}
void Join()
Waits for thread to finish executing and then closes thread. This method will block the calling code ...
Definition AutonomyThread.hpp:180
void RequestStop()
Signals threads to stop executing user code, terminate. DOES NOT JOIN. This method will not force the...
Definition AutonomyThread.hpp:164
Here is the call graph for this function:

Member Function Documentation

◆ RequestFrameCopy()

std::future< bool > SIMZEDCam::RequestFrameCopy ( cv::Mat &cvFrame)
overridevirtual

Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. Remember, this code will be run in whatever class/thread calls it.

Parameters
cvFrame- A reference to the cv::Mat to store the frame in.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
ClayJay3 (claytonraycowen@gmail.com)
Date
2023-09-30

Implements ZEDCamera.

514{
515 // Assemble the FrameFetchContainer.
516 containers::FrameFetchContainer<cv::Mat> stContainer(cvFrame, m_ePropPixelFormat);
517
518 // Acquire lock on frame copy queue.
519 std::unique_lock<std::shared_mutex> lkScheduler(m_muPoolScheduleMutex);
520 // Append frame fetch container to the schedule queue.
521 m_qFrameCopySchedule.push(stContainer);
522 // Release lock on the frame schedule queue.
523 lkScheduler.unlock();
524
525 // Return the future from the promise stored in the container.
526 return stContainer.pCopiedFrameStatus->get_future();
527}
This struct is used to carry references to camera frames for scheduling and copying....
Definition FetchContainers.hpp:89
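
A hedged usage sketch (cmSimCam is a hypothetical, already-started SIMZEDCam instance): multiple consumers may queue frame requests concurrently, and each waits only on its own future.

cv::Mat cvFrameA, cvFrameB;
std::future<bool> fuA = cmSimCam.RequestFrameCopy(cvFrameA);
std::future<bool> fuB = cmSimCam.RequestFrameCopy(cvFrameB);
// Both copies are fulfilled by the camera's internal thread pool on its next iteration.
if (fuA.get() && fuB.get())
{
    // cvFrameA and cvFrameB now each hold an independent copy of the latest frame.
}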

◆ RequestDepthCopy()

std::future< bool > SIMZEDCam::RequestDepthCopy ( cv::Mat &cvDepth,
const bool  bRetrieveMeasure = true 
)
virtual

Requests a depth measure or image from the camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. This image has the same shape as a grayscale image, but the values represent the depth in MILLIMETERS. The ZED SDK will always return this measure in MILLIMETERS.

Parameters
cvDepth- A reference to the cv::Mat to copy the depth frame to.
bRetrieveMeasure- False to get depth IMAGE instead of MEASURE. Do not use the 8-bit grayscale depth image for purposes other than displaying depth.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

545{
546 // Create instance variables.
547 PIXEL_FORMATS eFrameType;
548
549 // Check if the container should be set to retrieve an image or a measure.
550 bRetrieveMeasure ? eFrameType = PIXEL_FORMATS::eDepthMeasure : eFrameType = PIXEL_FORMATS::eDepthImage;
551 // Assemble container.
552 containers::FrameFetchContainer<cv::Mat> stContainer(cvDepth, eFrameType);
553
554 // Acquire lock on frame copy queue.
555 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
556 // Append frame fetch container to the schedule queue.
557 m_qFrameCopySchedule.push(stContainer);
558 // Release lock on the frame schedule queue.
559 lkSchedulers.unlock();
560
561 // Return the future from the promise stored in the container.
562 return stContainer.pCopiedFrameStatus->get_future();
563}
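
A hedged usage sketch (cmSimCam is a hypothetical, already-started SIMZEDCam instance) contrasting the floating-point depth measure with the display-only 8-bit depth image:

cv::Mat cvDepthMeasure;    // CV_32FC1 per-pixel depth values
cv::Mat cvDepthImage;      // 8-bit grayscale, for display only
std::future<bool> fuMeasure = cmSimCam.RequestDepthCopy(cvDepthMeasure, true);
std::future<bool> fuImage   = cmSimCam.RequestDepthCopy(cvDepthImage, false);
if (fuMeasure.get())
{
    // Read the depth value at the center of the image.
    float fCenterDepth = cvDepthMeasure.at<float>(cvDepthMeasure.rows / 2, cvDepthMeasure.cols / 2);
    (void) fCenterDepth;
}
fuImage.wait();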

◆ RequestPointCloudCopy()

std::future< bool > SIMZEDCam::RequestPointCloudCopy ( cv::Mat &cvPointCloud)
virtual

Requests a point cloud image from the camera. This image has the same resolution as a normal image but with three XYZ values replacing the old color values in the 3rd dimension. The units and sign of the XYZ values are determined by ZED_MEASURE_UNITS and ZED_COORD_SYSTEM constants set in AutonomyConstants.h.

Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it.

Parameters
cvPointCloud- A reference to the cv::Mat to copy the point cloud frame to.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

581{
582 // Assemble the FrameFetchContainer.
583 containers::FrameFetchContainer<cv::Mat> stContainer(cvPointCloud, PIXEL_FORMATS::eXYZ);
584
585 // Acquire lock on frame copy queue.
586 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
587 // Append frame fetch container to the schedule queue.
588 m_qFrameCopySchedule.push(stContainer);
589 // Release lock on the frame schedule queue.
590 lkSchedulers.unlock();
591
592 // Return the future from the promise stored in the container.
593 return stContainer.pCopiedFrameStatus->get_future();
594}
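
A hedged usage sketch (cmSimCam is a hypothetical, already-started SIMZEDCam instance) reading one XYZ point out of the CV_32FC4 point cloud:

cv::Mat cvPointCloud;
std::future<bool> fuCloud = cmSimCam.RequestPointCloudCopy(cvPointCloud);
if (fuCloud.get() && !cvPointCloud.empty())
{
    // Each element is a cv::Vec4f holding X, Y, Z plus an intensity channel.
    cv::Vec4f vCenterPoint = cvPointCloud.at<cv::Vec4f>(cvPointCloud.rows / 2, cvPointCloud.cols / 2);
    float fX = vCenterPoint[0], fY = vCenterPoint[1], fZ = vCenterPoint[2];
    (void) fX; (void) fY; (void) fZ;
}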

◆ RequestPositionalPoseCopy()

std::future< bool > SIMZEDCam::RequestPositionalPoseCopy ( ZEDCamera::Pose &stPose)
overridevirtual

Puts a Pose pointer into a queue so a copy of a Pose from the camera can be written to it.

Parameters
stPose- A reference to the Pose to store the pose data in.
Returns
std::future<bool> - A future that should be waited on before the passed in Pose is used.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

606{
607 // Check if positional tracking is enabled.
608 if (m_bCameraPositionalTrackingEnabled)
609 {
610 // Assemble the data container.
611 containers::DataFetchContainer<Pose> stContainer(stPose);
612
613 // Acquire lock on pose copy queue.
614 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
615 // Append pose fetch container to the schedule queue.
616 m_qPoseCopySchedule.push(stContainer);
617 // Release lock on the pose schedule queue.
618 lkSchedulers.unlock();
619
620 // Check if pose queue toggle has already been set.
621 if (!m_bPosesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
622 {
623 // Signify that the pose queue is not empty.
624 m_bPosesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
625 }
626
627 // Return the future from the promise stored in the container.
628 return stContainer.pCopiedDataStatus->get_future();
629 }
630 else
631 {
632 // Submit logger message.
633 LOG_WARNING(logging::g_qSharedLogger, "Attempted to get ZED positional pose but positional tracking is not enabled or is still initializing!");
634
635 // Create dummy promise to return the future.
636 std::promise<bool> pmDummyPromise;
637 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
638 // Set future value.
639 pmDummyPromise.set_value(false);
640
641 // Return unsuccessful.
642 return fuDummyFuture;
643 }
644}
This struct is used to carry references to any datatype for scheduling and copying....
Definition FetchContainers.hpp:165
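
A hedged usage sketch (cmSimCam is a hypothetical, already-started SIMZEDCam instance): positional tracking must be enabled first, otherwise the method logs a warning and the returned future resolves to false.

if (cmSimCam.EnablePositionalTracking() == sl::ERROR_CODE::SUCCESS)
{
    ZEDCamera::Pose stPose;
    std::future<bool> fuPose = cmSimCam.RequestPositionalPoseCopy(stPose);
    if (fuPose.get())
    {
        // stPose now holds the NavBoard-simulated camera pose plus any configured offsets.
    }
}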

◆ RequestFusionGeoPoseCopy()

std::future< bool > SIMZEDCam::RequestFusionGeoPoseCopy ( sl::GeoPose &  slGeoPose)
overridevirtual

Puts a sl::GeoPose pointer into a queue so a copy of a GeoPose from the camera can be written to it.

Parameters
slGeoPose- A reference to the sl::GeoPose to store the GeoPose in.
Returns
std::future<bool> - A future that should be waited on before the passed in GeoPose is used.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

656{
657 // Check if positional tracking has been enabled.
658 if (m_bCameraIsFusionMaster && m_bCameraPositionalTrackingEnabled)
659 {
660 // Assemble the data container.
661 containers::DataFetchContainer<sl::GeoPose> stContainer(slGeoPose);
662
663 // Acquire lock on frame copy queue.
664 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
665 // Append frame fetch container to the schedule queue.
666 m_qGeoPoseCopySchedule.push(stContainer);
667 // Release lock on the frame schedule queue.
668 lkSchedulers.unlock();
669
670 // Check if pose queue toggle has already been set.
671 if (!m_bGeoPosesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
672 {
673 // Signify that the pose queue is not empty.
674 m_bGeoPosesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
675 }
676
677 // Return the future from the promise stored in the container.
678 return stContainer.pCopiedDataStatus->get_future();
679 }
680 else
681 {
682 // Submit logger message.
683 LOG_WARNING(logging::g_qSharedLogger,
684 "Attempted to get ZED FUSION geo pose but positional tracking is not enabled and/or this camera was not initialized as a Fusion Master!");
685
686 // Create dummy promise to return the future.
687 std::promise<bool> pmDummyPromise;
688 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
689 // Set future value.
690 pmDummyPromise.set_value(false);
691
692 // Return unsuccessful.
693 return fuDummyFuture;
694 }
695}

◆ RequestSensorsCopy()

std::future< bool > SIMZEDCam::RequestSensorsCopy ( sl::SensorsData &  slSensorsData)
overridevirtual

Requests a copy of the sensors data from the camera.

Parameters
slSensorsData- A reference to the sl::SensorsData to store the sensors data in.
Returns
std::future<bool> - A future that should be waited on before the passed in sensors data is used.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-08-26

Reimplemented from ZEDCamera.

707{
708 // Assemble the DataFetchContainer.
709 containers::DataFetchContainer<sl::SensorsData> stContainer(slSensorsData);
710
711 // Acquire lock on data copy queue.
712 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
713 // Append data fetch container to the schedule queue.
714 m_qSensorsCopySchedule.push(stContainer);
715 // Release lock on the data schedule queue.
716 lkSchedulers.unlock();
717
718 // Check if pose queue toggle has already been set.
719 if (!m_bSensorsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
720 {
721 // Signify that the pose queue is not empty.
722 m_bSensorsQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
723 }
724
725 // Return the future from the promise stored in the container.
726 return stContainer.pCopiedDataStatus->get_future();
727}
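
A hedged usage sketch (cmSimCam is a hypothetical, already-started SIMZEDCam instance):

sl::SensorsData slSensorsData;
std::future<bool> fuSensors = cmSimCam.RequestSensorsCopy(slSensorsData);
if (fuSensors.get())
{
    // slSensorsData now holds a copy of the camera's simulated IMU data.
}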

◆ ResetPositionalTracking()

sl::ERROR_CODE SIMZEDCam::ResetPositionalTracking ( )
overridevirtual

This method is used to reset the positional tracking of the camera. Because this is a simulation camera and the ZED SDK is not available, this method will just reset the offsets to zero.

Returns
sl::ERROR_CODE - The error code returned by the ZED SDK. In this case, it will always be SUCCESS.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

740{
741 // Reset offsets back to zero.
742 m_dPoseOffsetX = 0.0;
743 m_dPoseOffsetY = 0.0;
744 m_dPoseOffsetZ = 0.0;
745 m_dPoseOffsetXO = 0.0;
746 m_dPoseOffsetYO = 0.0;
747 m_dPoseOffsetZO = 0.0;
748
749 return sl::ERROR_CODE::SUCCESS;
750}
Here is the caller graph for this function:

◆ RebootCamera()

sl::ERROR_CODE SIMZEDCam::RebootCamera ( )
overridevirtual

This method is used to reboot the camera. This method will stop the camera thread, join the camera thread, destroy the camera stream objects, reconstruct the camera stream objects, set the frame callbacks, and restart the camera thread. This simulates a camera reboot since the ZED SDK is not available.

Returns
sl::ERROR_CODE - The error code returned by the ZED SDK. In this case, it will always be SUCCESS. Even if the streams are not successfully reconnected, the camera will still be considered open.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

765{
766 // Stop the camera thread.
767 this->RequestStop();
768 // Join the camera thread.
769 this->Join();
770
771 // Destroy the camera stream objects.
772 m_pRGBStream.reset();
773 m_pDepthImageStream.reset();
774 // Reconstruct camera stream objects. Append proper camera path arguments to each URL camera path.
775 m_pRGBStream = std::make_unique<WebRTC>(m_szCameraPath, "ZEDFrontRGB");
776 m_pDepthImageStream = std::make_unique<WebRTC>(m_szCameraPath, "ZEDFrontDepthImage");
777
778 // Set the frame callbacks.
779 this->SetCallbacks();
780
781 // Restart the camera thread.
782 this->Start();
783
784 return sl::ERROR_CODE::SUCCESS;
785}
void Start()
When this method is called, it starts a new thread that runs the code within the ThreadedContinuousCo...
Definition AutonomyThread.hpp:117
Here is the call graph for this function:
Here is the caller graph for this function:

◆ SubscribeFusionToCameraUUID()

sl::FUSION_ERROR_CODE SIMZEDCam::SubscribeFusionToCameraUUID ( sl::CameraIdentifier &  slCameraUUID)
overridevirtual

This method is used to subscribe the sl::Fusion object to the camera with the given UUID. Given that this is a simulation camera, and the ZED SDK is not available, this method will do nothing.

Parameters
slCameraUUID- The UUID of the camera to subscribe to.
Returns
sl::FUSION_ERROR_CODE - The error code returned by the ZED SDK. In this case, it will always be SUCCESS.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

799{
800 // Unused parameter.
801 (void) slCameraUUID;
802
803 return sl::FUSION_ERROR_CODE::SUCCESS;
804}

◆ PublishCameraToFusion()

sl::CameraIdentifier SIMZEDCam::PublishCameraToFusion ( )
overridevirtual

This method is used to publish the camera to the sl::Fusion object. Since this is a simulation camera this will do nothing.

Returns
sl::CameraIdentifier - The identifier of the camera.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

816{
817 return sl::CameraIdentifier(m_unCameraSerialNumber);
818}

◆ EnablePositionalTracking()

sl::ERROR_CODE SIMZEDCam::EnablePositionalTracking ( const float  fExpectedCameraHeightFromFloorTolerance = constants::ZED_DEFAULT_FLOOR_PLANE_ERROR)
overridevirtual

This method is used to enable positional tracking on the camera. Since this is a simulation camera, this method will just set the member variable and then use the NavBoard to simulate the camera's position.

Parameters
fExpectedCameraHeightFromFloorTolerance- The expected camera height from the floor tolerance.
Returns
sl::ERROR_CODE - The error code returned by the ZED SDK. In this case, it will always be SUCCESS.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

832{
833 // Unused parameter.
834 (void) fExpectedCameraHeightFromFloorTolerance;
835 // Update member variables.
836 m_bCameraPositionalTrackingEnabled = true;
837
838 return sl::ERROR_CODE::SUCCESS;
839}
Here is the caller graph for this function:

◆ DisablePositionalTracking()

void SIMZEDCam::DisablePositionalTracking ( )
overridevirtual

This method is used to disable positional tracking on the camera. Since this is a simulation camera, this method will just set the member variable.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

850{
851 // Update member variables.
852 m_bCameraPositionalTrackingEnabled = false;
853}
Here is the caller graph for this function:

◆ SetPositionalPose()

void SIMZEDCam::SetPositionalPose ( const double  dX,
const double  dY,
const double  dZ,
const double  dXO,
const double  dYO,
const double  dZO 
)
overridevirtual

This method is used to set the positional pose of the camera. Since this is a simulation camera, this method will just set the offset member variables.

Parameters
dX- The new X position of the camera in ZED_MEASURE_UNITS.
dY- The new Y position of the camera in ZED_MEASURE_UNITS.
dZ- The new Z position of the camera in ZED_MEASURE_UNITS.
dXO- The new tilt of the camera around the X axis in degrees.
dYO- The new tilt of the camera around the Y axis in degrees.
dZO- The new tilt of the camera around the Z axis in degrees.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

870{
871 // Acquire lock on the current rover pose mutex.
872 std::unique_lock<std::shared_mutex> lkPose(m_muCurrentRoverPoseMutex);
873
874 // Update offset member variables.
875 m_dPoseOffsetX = dX - m_stCurrentRoverPose.GetUTMCoordinate().dEasting;
876 m_dPoseOffsetY = dY - m_stCurrentRoverPose.GetUTMCoordinate().dAltitude;
877 m_dPoseOffsetZ = dZ - m_stCurrentRoverPose.GetUTMCoordinate().dNorthing;
878 m_dPoseOffsetXO = dXO;
879 m_dPoseOffsetYO = dYO - m_stCurrentRoverPose.GetCompassHeading();
880 m_dPoseOffsetZO = dZO;
881}
double GetCompassHeading() const
Accessor for the Compass Heading private member.
Definition GeospatialOperations.hpp:756
const geoops::UTMCoordinate & GetUTMCoordinate() const
Accessor for the geoops::UTMCoordinate member variable.
Definition GeospatialOperations.hpp:736
Here is the call graph for this function:
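
A hedged usage sketch (cmSimCam is a hypothetical SIMZEDCam instance with positional tracking enabled) that re-zeroes the simulated pose:

// Positions are in ZED_MEASURE_UNITS; orientations are in degrees.
cmSimCam.SetPositionalPose(0.0, 0.0, 0.0,     // dX, dY, dZ
                           0.0, 0.0, 0.0);    // dXO, dYO, dZO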

◆ GetCameraIsOpen()

bool SIMZEDCam::GetCameraIsOpen ( )
overridevirtual

Accessor for the camera open status.

Returns
true - The camera has been successfully opened.
false - The camera has not been successfully opened.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30

Implements Camera< cv::Mat >.

893{
894 return m_pRGBStream->GetIsConnected() && m_pDepthImageStream->GetIsConnected() && this->GetThreadState() == AutonomyThreadState::eRunning;
895}
AutonomyThreadState GetThreadState() const
Accessor for the Threads State private member.
Definition AutonomyThread.hpp:214
Here is the call graph for this function:
Here is the caller graph for this function:

◆ GetUsingGPUMem()

bool SIMZEDCam::GetUsingGPUMem ( ) const
overridevirtual

Returns if the camera is using GPU memory. This is a simulation camera, so this method will always return false.

Returns
true - We are using GPU memory.
false - We are not using GPU memory.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Reimplemented from ZEDCamera.

908{
909 return false;
910}

◆ GetCameraModel()

std::string SIMZEDCam::GetCameraModel ( )
overridevirtual

Accessor for the name of this model of camera.

Returns
std::string - The model of the camera.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

921{
922 return "SIMZED2i";
923}

◆ GetPositionalTrackingEnabled()

bool SIMZEDCam::GetPositionalTrackingEnabled ( )
overridevirtual

Accessor for whether the camera's positional tracking is enabled. Since this is a simulation camera, this method will just return the member variable.

Returns
true - Positional tracking is enabled.
false - Positional tracking is disabled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26

Implements ZEDCamera.

936{
937 return m_bCameraPositionalTrackingEnabled && this->GetCameraIsOpen();
938}
bool GetCameraIsOpen() override
Accessor for the camera open status.
Definition SIMZEDCam.cpp:892
Here is the call graph for this function:
Here is the caller graph for this function:

◆ ThreadedContinuousCode()

void SIMZEDCam::ThreadedContinuousCode ( )
overrideprivatevirtual

The code inside this private method runs in a separate thread, but still has access to this*. This method continuously gets new frames from the camera streams and stores them in member variables. Then a thread pool is started and joined once per iteration to mass copy the frames and/or measures to any other threads waiting in the queues.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30

Implements AutonomyThread< void >.

255{
256 // Acquire a lock on the rover pose mutex.
257 std::unique_lock<std::shared_mutex> lkRoverPoseLock(m_muCurrentRoverPoseMutex);
258 // Check if the NavBoard pointer is valid.
259 if (globals::g_pNavigationBoard != nullptr)
260 {
261 // Get the current rover pose from the NavBoard.
262 m_stCurrentRoverPose = geoops::RoverPose(globals::g_pNavigationBoard->GetGPSData(), globals::g_pNavigationBoard->GetHeading());
263 }
264 // Release lock.
265 lkRoverPoseLock.unlock();
266
267 // Check if the depth image WebRTC connection is open.
268 if (m_pDepthImageStream != nullptr && m_pDepthImageStream->GetIsConnected())
269 {
270 // Acquire a lock on the WebRTC mutex.
271 std::shared_lock<std::shared_mutex> lkWebRTC2(m_muWebRTCDepthImageCopyMutex);
272 // Estimate the depth measure from the depth image.
273 this->EstimateDepthMeasure(m_cvDepthImage, m_cvDepthMeasure);
274 // Check if the depth image is empty.
275 if (m_cvDepthImage.empty())
276 {
277 // Release lock.
278 lkWebRTC2.unlock();
279 return;
280 }
281 // Release lock.
282 lkWebRTC2.unlock();
283
284 // Calculate the point cloud from the estimated depth measure.
285 this->CalculatePointCloud(m_cvDepthMeasure, m_cvPointCloud);
286 }
287
288 // Acquire a shared_lock on the frame copy queue.
289 std::shared_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
290 // Check if the frame copy queue is empty.
291 if (!m_qFrameCopySchedule.empty() || !m_qPoseCopySchedule.empty() || !m_qGeoPoseCopySchedule.empty() || !m_qSensorsCopySchedule.empty())
292 {
293 // Add the length of all queues together to determine the number of tasks to create.
294 size_t siTotalQueueLength = m_qFrameCopySchedule.size() + m_qPoseCopySchedule.size() + m_qGeoPoseCopySchedule.size() + m_qSensorsCopySchedule.size();
295
296 // Acquire shared lock on the WebRTC mutex, so that the WebRTC connection doesn't try to write to the Mats while they are being copied in the thread pool.
297 std::shared_lock<std::shared_mutex> lkWebRTC(m_muWebRTCRGBImageCopyMutex);
298 std::shared_lock<std::shared_mutex> lkWebRTC2(m_muWebRTCDepthImageCopyMutex);
299
300 // Start the thread pool to store multiple copies of the sl::Mat into the given cv::Mats.
301 this->RunDetachedPool(siTotalQueueLength, m_nNumFrameRetrievalThreads);
302
303 // Static bool for keeping track of if the thread pool has been started.
304 static bool bQueueTogglesAlreadyReset = false;
305 // Get current time.
306 std::chrono::_V2::system_clock::duration tmCurrentTime = std::chrono::high_resolution_clock::now().time_since_epoch();
307 // Only reset once every couple seconds.
308 if (std::chrono::duration_cast<std::chrono::seconds>(tmCurrentTime).count() % 31 == 0 && !bQueueTogglesAlreadyReset)
309 {
310 // Reset queue counters.
311 m_bPosesQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
312 m_bGeoPosesQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
313 m_bSensorsQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
314
315 // Set reset toggle.
316 bQueueTogglesAlreadyReset = true;
317 }
 318 // Crucial for toggle action. If time is not evenly divisible and toggles have previously been set, reset queue reset boolean.
319 else if (bQueueTogglesAlreadyReset)
320 {
321 // Reset reset toggle.
322 bQueueTogglesAlreadyReset = false;
323 }
324
325 // Wait for thread pool to finish.
326 this->JoinPool();
327
328 // Release lock on WebRTC mutex.
329 lkWebRTC.unlock();
330 lkWebRTC2.unlock();
331 }
332
333 // Release lock on frame copy queue.
334 lkSchedulers.unlock();
335}
void RunDetachedPool(const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
When this method is called, it starts a thread pool full of threads that don't return std::futures (l...
Definition AutonomyThread.hpp:336
void JoinPool()
Waits for pool to finish executing tasks. This method will block the calling code until thread is fin...
Definition AutonomyThread.hpp:439
void CalculatePointCloud(const cv::Mat &cvDepthMeasure, cv::Mat &cvPointCloud)
This method calculates a point cloud from the decoded depth measure using some simple trig and the came...
Definition SIMZEDCam.cpp:202
void EstimateDepthMeasure(const cv::Mat &cvDepthImage, cv::Mat &cvDepthMeasure)
This method estimates the depth measure from the depth image.
Definition SIMZEDCam.cpp:155
bool empty() const
This struct is used by the WaypointHandler to provide an easy way to store all pose data about the ro...
Definition GeospatialOperations.hpp:677
Here is the call graph for this function:

◆ PooledLinearCode()

void SIMZEDCam::PooledLinearCode ( )
overrideprivatevirtual

This method holds the code that is run in the thread pool started by the ThreadedContinuousCode() method. It copies the data from the different data objects to references of the same type stored in a vector queued up by the Request methods.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-30

Implements AutonomyThread< void >.

348{
350 // Frame queue.
352
353 // Acquire mutex for getting frames out of the queue.
354 std::unique_lock<std::shared_mutex> lkFrameQueue(m_muFrameCopyMutex);
355 // Check if the queue is empty.
356 if (!m_qFrameCopySchedule.empty())
357 {
358 // Get frame container out of queue.
359 containers::FrameFetchContainer<cv::Mat> stContainer = m_qFrameCopySchedule.front();
360 // Pop out of queue.
361 m_qFrameCopySchedule.pop();
362 // Release lock.
363 lkFrameQueue.unlock();
364
365 // Determine which frame should be copied.
366 switch (stContainer.eFrameType)
367 {
368 case PIXEL_FORMATS::eBGRA: *(stContainer.pFrame) = m_cvFrame.clone(); break;
369 case PIXEL_FORMATS::eDepthImage: *(stContainer.pFrame) = m_cvDepthImage.clone(); break;
370 case PIXEL_FORMATS::eDepthMeasure: *(stContainer.pFrame) = m_cvDepthMeasure.clone(); break;
371 case PIXEL_FORMATS::eXYZ: *(stContainer.pFrame) = m_cvPointCloud.clone(); break;
372 default: *(stContainer.pFrame) = m_cvFrame.clone(); break;
373 }
374
375 // Signal future that the frame has been successfully retrieved.
376 stContainer.pCopiedFrameStatus->set_value(true);
377 }
378
380 // Pose queue.
382 // Acquire mutex for getting data out of the pose queue.
383 std::unique_lock<std::shared_mutex> lkPoseQueue(m_muPoseCopyMutex);
384 // Check if the queue is empty.
385 if (!m_qPoseCopySchedule.empty())
386 {
387 // Get pose container out of queue.
388 containers::DataFetchContainer<Pose> stContainer = m_qPoseCopySchedule.front();
389 // Pop out of queue.
390 m_qPoseCopySchedule.pop();
391 // Release lock.
392 lkPoseQueue.unlock();
393
394 // Get angle realignments.
395 double dNewYO = numops::InputAngleModulus<double>(m_stCurrentRoverPose.GetCompassHeading() + m_dPoseOffsetYO, 0.0, 360.0);
396 // Repack values into pose.
397 Pose stPose(m_stCurrentRoverPose.GetUTMCoordinate().dEasting + m_dPoseOffsetX,
398 m_stCurrentRoverPose.GetUTMCoordinate().dAltitude + m_dPoseOffsetY,
399 m_stCurrentRoverPose.GetUTMCoordinate().dNorthing + m_dPoseOffsetZ,
400 m_dPoseOffsetXO,
401 dNewYO,
402 m_dPoseOffsetZO);
403
404 // ISSUE NOTE: Might be in the future if we ever change our coordinate system on the ZED. This can be used to fix the directions of the Pose's coordinate system.
405 // // Check ZED coordinate system.
406 // switch (m_slCameraParams.coordinate_system)
407 // {
408 // case sl::COORDINATE_SYSTEM::LEFT_HANDED_Y_UP:
409 // {
410 // // Realign based in the signedness of this coordinate system. Z is backwards.
411 // stPose.stTranslation.dZ *= -1;
412 // break;
413 // }
414 // default:
415 // {
416 // // No need to flip signs for other coordinate systems.
417 // break;
418 // }
419 // }
420
421 // Copy pose.
422 *(stContainer.pData) = stPose;
423
424 // Signal future that the data has been successfully retrieved.
425 stContainer.pCopiedDataStatus->set_value(true);
426 }
427 else
428 {
429 // Release lock.
430 lkPoseQueue.unlock();
431 }
432
434 // GeoPose queue.
436 // Acquire mutex for getting data out of the pose queue.
437 std::unique_lock<std::shared_mutex> lkGeoPoseQueue(m_muGeoPoseCopyMutex);
438 // Check if the queue is empty.
439 if (!m_qGeoPoseCopySchedule.empty())
440 {
441 // Get pose container out of queue.
442 containers::DataFetchContainer<sl::GeoPose> stContainer = m_qGeoPoseCopySchedule.front();
443 // Pop out of queue.
444 m_qGeoPoseCopySchedule.pop();
445 // Release lock.
446 lkGeoPoseQueue.unlock();
447
448 // Get rover's current UTM position.
449 geoops::UTMCoordinate stCurrentUTM = m_stCurrentRoverPose.GetUTMCoordinate();
450 // Add offsets to the current UTM position.
451 stCurrentUTM.dEasting += m_dPoseOffsetX;
452 stCurrentUTM.dAltitude += m_dPoseOffsetY;
453 stCurrentUTM.dNorthing += m_dPoseOffsetZ;
454 // Create a GPS coordinate from the UTM position.
455 geoops::GPSCoordinate stCurrentGPS = geoops::ConvertUTMToGPS(stCurrentUTM);
456
457 // Create new GeoPose.
458 sl::GeoPose slGeoPose;
459 slGeoPose.heading = numops::InputAngleModulus<double>(m_stCurrentRoverPose.GetCompassHeading() + m_dPoseOffsetYO, 0.0, 360.0);
460 slGeoPose.latlng_coordinates.setCoordinates(stCurrentGPS.dLatitude, stCurrentGPS.dLongitude, stCurrentGPS.dAltitude, false);
461
462 // Copy pose.
463 *(stContainer.pData) = slGeoPose;
464
465 // Signal future that the data has been successfully retrieved.
466 stContainer.pCopiedDataStatus->set_value(true);
467 }
468 else
469 {
470 // Release lock.
471 lkGeoPoseQueue.unlock();
472 }
473
475 // Sensors queue.
477 // Acquire mutex for getting data out of the sensors queue.
478 std::unique_lock<std::shared_mutex> lkSensorsQueue(m_muSensorsCopyMutex);
479 // Check if the queue is empty.
480 if (!m_qSensorsCopySchedule.empty())
481 {
482 // Get pose container out of queue.
483 containers::DataFetchContainer<sl::SensorsData> stContainer = m_qSensorsCopySchedule.front();
484 // Pop out of queue.
485 m_qSensorsCopySchedule.pop();
486 // Release lock.
487 lkSensorsQueue.unlock();
488
489 // Copy pose.
490 *(stContainer.pData) = m_stIMUData;
491
492 // Signal future that the data has been successfully retrieved.
493 stContainer.pCopiedDataStatus->set_value(true);
494 }
495 else
496 {
497 // Release lock.
498 lkSensorsQueue.unlock();
499 }
500}
CV_NODISCARD_STD Mat clone() const
GPSCoordinate ConvertUTMToGPS(const UTMCoordinate &stUTMCoord)
Given a UTM coordinate, convert to GPS and create a new GPSCoordinate object.
Definition GeospatialOperations.hpp:347
This struct stores/contains information about a GPS data.
Definition GeospatialOperations.hpp:99
This struct stores/contains information about a UTM coordinate.
Definition GeospatialOperations.hpp:195
Here is the call graph for this function:

◆ SetCallbacks()

void SIMZEDCam::SetCallbacks ( )
private

This method sets the callbacks for the WebRTC connections.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-12-26
116{
117 // Set the frame callbacks.
118 m_pRGBStream->SetOnFrameReceivedCallback(
119 [this](cv::Mat& cvFrame)
120 {
121 // Check if the frame is empty.
122 if (!cvFrame.empty())
123 {
124 // Acquire a lock on the webRTC copy mutex.
125 std::unique_lock<std::shared_mutex> lkWebRTC(m_muWebRTCRGBImageCopyMutex);
126 // Deep copy the frame.
127 m_cvFrame = cvFrame.clone();
128 }
129 });
130 m_pDepthImageStream->SetOnFrameReceivedCallback(
131 [this](cv::Mat& cvFrame)
132 {
133 // Check if the frame is empty.
134 if (!cvFrame.empty())
135 {
136 // Acquire a lock on the webRTC copy mutex.
137 std::unique_lock<std::shared_mutex> lkWebRTC(m_muWebRTCDepthImageCopyMutex);
138 // Deep copy the frame to the depth image buffer.
139 m_cvDepthImageBuffer = cvFrame.clone();
140 // Convert the depth image buffer to grayscale.
141 cv::cvtColor(m_cvDepthImageBuffer, m_cvDepthImage, cv::COLOR_BGR2GRAY);
142 }
143 });
144}
Here is the call graph for this function:
Here is the caller graph for this function:

◆ EstimateDepthMeasure()

void SIMZEDCam::EstimateDepthMeasure ( const cv::Mat &cvDepthImage,
cv::Mat &cvDepthMeasure 
)
private

This method estimates the depth measure from the depth image.

Parameters
cvDepthImage- The depth image to estimate the depth measure from.
cvDepthMeasure- The estimated depth measure that will be written to.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-04-29
156{
157 // Declare instance variables.
158 const float fMaxDepth = 2001.0f; // Maximum depth in cm.
159
160 // Check if the depth image is empty.
161 if (cvDepthImage.empty())
162 {
163 // If the depth image is empty, fill the depth measure with zeros.
164 cvDepthMeasure = cv::Mat::zeros(cvDepthMeasure.size(), CV_32FC1);
165 return;
166 }
167
168// TEST: Even though this speeds up the code, it might be too much CPU work as the codebase grows. Use a GpuMat instead.
169#pragma omp parallel for collapse(2)
170
171 // Iterate over each pixel in the cvDepthImage image.
172 for (int nY = 0; nY < cvDepthImage.rows; ++nY)
173 {
174 for (int nX = 0; nX < cvDepthImage.cols; ++nX)
175 {
176 // For this, we are just using the depth image to estimate the depth measure. We will treat 255 as 0 cm and 0 as fMaxDepth - 1 cm.
177 // Get the depth value from the depth image.
178 uchar ucDepthValue = cvDepthImage.at<uchar>(nY, nX);
179
180 // Calculate the depth in cm.
181 float fDepth = (1.0f - (ucDepthValue / 255.0f)) * fMaxDepth;
182 // Check if nY and nX are within the bounds of the depth measure image.
183 if (nY < cvDepthMeasure.rows && nX < cvDepthMeasure.cols)
184 {
185 // Store the estimated depth in the new cv::Mat. Convert cm to m.
186 cvDepthMeasure.at<float>(nY, nX) = fDepth / 100.0f; // Convert cm to m.
187 }
188 }
189 }
190}
MatSize size
_Tp & at(int i0=0)
unsigned char uchar
Here is the call graph for this function:
Here is the caller graph for this function:
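
A quick worked example of the mapping implemented above (constants taken from this listing; the mid-gray input value is arbitrary):

// A pixel value of 255 maps to 0 m, 0 maps to the 20.01 m ceiling, and a
// mid-gray value of 128 maps to (1 - 128/255) * 2001 cm ≈ 996.6 cm ≈ 9.97 m.
float fDepthMeters = (1.0f - (128 / 255.0f)) * 2001.0f / 100.0f;    // ≈ 9.97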

◆ CalculatePointCloud()

void SIMZEDCam::CalculatePointCloud ( const cv::Mat &cvDepthMeasure,
cv::Mat &cvPointCloud 
)
private

This method calculates a point cloud from the decoded depth measure using some simple trig and the camera FOV.

Parameters
cvDepthMeasure- The decoded depth measure.
cvPointCloud- The point cloud that will be written to.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-01-04
203{
204 // Calculate focal lengths from FOV.
205 const double dRadPerDeg = M_PI / 180.0;
206 const double dFx = (cvDepthMeasure.cols / 2.0) / tan(m_dPropHorizontalFOV * dRadPerDeg / 2.0);
207 const double dFy = (cvDepthMeasure.rows / 2.0) / tan(m_dPropVerticalFOV * dRadPerDeg / 2.0);
208 // Image center.
209 const double dCx = cvDepthMeasure.cols / 2.0;
210 const double dCy = cvDepthMeasure.rows / 2.0;
211
212// TEST: Even though this speeds up the code, it might be too much CPU work as the codebase grows. Use a GpuMat instead.
213// This is a parallel for loop that calculates the point cloud from the decoded depth measure.
214#pragma omp parallel for collapse(2)
215
216 // Iterate over each pixel in the cvDepthMeasure image.
217 for (int nY = 0; nY < cvDepthMeasure.rows; ++nY)
218 {
219 for (int nX = 0; nX < cvDepthMeasure.cols; ++nX)
220 {
221 // Get depth value.
222 float fDepth = cvDepthMeasure.at<float>(nY, nX);
223
224 // Skip invalid depth values.
225 if (fDepth <= 0)
226 {
227 cvPointCloud.at<cv::Vec4f>(nY, nX) = cv::Vec4f(0, 0, 0, 0);
228 continue;
229 }
230
231 // Convert from pixel coordinates to 3D coordinates.
232 float fX = static_cast<float>((nX - dCx) * fDepth / dFx);
233 float fY = static_cast<float>((dCy - nY) * fDepth / dFy);
234 float fZ = fDepth;
235
236 // Store point. (XYZ + intensity, using Y channel for intensity)
237 cvPointCloud.at<cv::Vec4f>(nY, nX) = cv::Vec4f(fX, fY, fZ, 255);
238 }
239 }
240}
__device__ __forceinline__ float3 tan(const uchar3 &a)
Here is the call graph for this function:
Here is the caller graph for this function:
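
A worked example of the pinhole relations used above, with a hypothetical 1280-pixel-wide image and a 110-degree horizontal FOV:

#include <cmath>

// Focal length in pixels from the horizontal FOV: fx = (W / 2) / tan(HFOV / 2).
const double dFx = (1280 / 2.0) / std::tan(110.0 * M_PI / 180.0 / 2.0);    // ≈ 448 px

// Back-project a pixel 100 columns right of the image center at 2 m depth:
// X = (x - cx) * depth / fx ≈ 100 * 2.0 / 448 ≈ 0.45 m right of the optical axis.
const double dX = 100 * 2.0 / dFx;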

The documentation for this class was generated from the following files: