Autonomy Software C++ 24.5.1
Welcome to the Autonomy Software repository of the Mars Rover Design Team (MRDT) at Missouri University of Science and Technology (Missouri S&T)! This API reference contains the source code documentation and other resources for the development of the autonomy software for our Mars rover. The Autonomy Software project aims to compete in the University Rover Challenge (URC) by demonstrating advanced autonomous capabilities and robust navigation algorithms.
ZEDCam Class Reference

This class implements and interfaces with the most common ZEDSDK cameras and features. It is designed in such a way that multiple other classes/threads can safely call any method of an object of this class without resource corruption or slowdown of the camera. More...

#include <ZEDCam.h>

Inheritance diagram for ZEDCam:
Collaboration diagram for ZEDCam:

Public Member Functions

 ZEDCam (const int nPropResolutionX, const int nPropResolutionY, const int nPropFramesPerSecond, const double dPropHorizontalFOV, const double dPropVerticalFOV, const bool bEnableRecordingFlag=false, const bool bExportSVORecordingFlag=false, const float fMinSenseDistance=constants::ZED_DEFAULT_MINIMUM_DISTANCE, const float fMaxSenseDistance=constants::ZED_DEFAULT_MAXIMUM_DISTANCE, const bool bMemTypeGPU=false, const bool bUseHalfDepthPrecision=false, const bool bEnableFusionMaster=false, const int nNumFrameRetrievalThreads=10, const unsigned int unCameraSerialNumber=0)
 Construct a new ZEDCam::ZEDCam object.
 
 ~ZEDCam ()
 Destroy the ZEDCam::ZEDCam object.
 
std::future< bool > RequestFrameCopy (cv::Mat &cvFrame) override
 Requests a regular BGRA image from the LEFT eye of the zed camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. Remember this code will be run in whatever class/thread calls it.
 
std::future< bool > RequestFrameCopy (cv::cuda::GpuMat &cvGPUFrame) override
 Grabs a regular BGRA image from the LEFT eye of the zed camera and stores it in a GPU mat. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. Remember this code will be run in whatever class/thread calls it.
 
std::future< bool > RequestDepthCopy (cv::Mat &cvDepth, const bool bRetrieveMeasure=true) override
 Requests a depth measure or image from the camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. This image has the same shape as a grayscale image, but the values represent the depth in MILLIMETERS. The ZEDSDK will always return this measure in MILLIMETERS.
 
std::future< bool > RequestDepthCopy (cv::cuda::GpuMat &cvGPUDepth, const bool bRetrieveMeasure=true) override
 Requests a depth measure or image from the camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. This image has the same shape as a grayscale image, but the values represent the depth in MILLIMETERS. The ZEDSDK will always return this measure in MILLIMETERS.
 
std::future< bool > RequestPointCloudCopy (cv::Mat &cvPointCloud) override
 Requests a point cloud image from the camera. This image has the same resolution as a normal image but with three XYZ values replacing the old color values in the 3rd dimension. The units and sign of the XYZ values are determined by ZED_MEASURE_UNITS and ZED_COORD_SYSTEM constants set in AutonomyConstants.h.
 
std::future< bool > RequestPointCloudCopy (cv::cuda::GpuMat &cvGPUPointCloud) override
 Grabs a point cloud image from the camera. This image has the same resolution as a normal image but with three XYZ values replacing the old color values in the 3rd dimension. The units and sign of the XYZ values are determined by ZED_MEASURE_UNITS and ZED_COORD_SYSTEM constants set in AutonomyConstants.h.
 
std::future< bool > RequestPositionalPoseCopy (Pose &stPose) override
 Requests the current pose of the camera relative to its start pose or the origin of the set pose. Puts a Pose pointer into a queue so a copy of a pose from the camera can be written to it. If positional tracking is not enabled, this method will return false and the ZEDCam::Pose may be uninitialized.
 
std::future< bool > RequestFusionGeoPoseCopy (sl::GeoPose &slGeoPose) override
 Requests the current geo pose of the camera. This method should be used to retrieve ONLY the GNSS data from the ZEDSDK's Fusion module. Puts a GeoPose pointer into a queue so a copy of a GeoPose from the camera can be written to it. If positional tracking or fusion is disabled for this camera, then this method will return false and the sl::GeoPose may be uninitialized.
 
std::future< bool > RequestFloorPlaneCopy (sl::Plane &slPlane) override
 Requests the current floor plane of the camera relative to its current pose. Puts a Plane pointer into a queue so a copy of the floor plane from the camera can be written to it. If positional tracking is not enabled, this method will return false and the sl::Plane may be uninitialized.
 
std::future< bool > RequestSensorsCopy (sl::SensorsData &slSensorsData) override
 Requests the most up-to-date sensors data from the camera. This data includes IMU pose and raw values, barometer, magnetometer, and temperature data.
 
std::future< bool > RequestObjectsCopy (std::vector< sl::ObjectData > &vObjectData) override
 Requests a current copy of the tracked objects from the camera. Puts a pointer to a vector containing sl::ObjectData into a queue so a copy of the object data from the camera can be written to it.
 
std::future< bool > RequestBatchedObjectsCopy (std::vector< sl::ObjectsBatch > &vBatchedObjectData) override
 If batching is enabled, this requests the normal objects and passes them to the internal batching queue of the ZED API. This performs short-term re-identification with deep learning and trajectory filtering. Batching must have been set to enabled when EnableObjectDetection() was called. Most of the time the vector will be empty and will only be filled every ZED_OBJDETECTION_BATCH_LATENCY.
 
sl::ERROR_CODE ResetPositionalTracking () override
 Resets the camera's X, Y, Z translation and roll, pitch, yaw orientation back to 0. THINK CAREFULLY! Do you actually want to reset this? It will also realign the coordinate system to whichever way the camera happens to be facing.
 
sl::ERROR_CODE TrackCustomBoxObjects (std::vector< ZedObjectData > &vCustomObjects) override
 Ingests a vector containing CustomBoxObjectData objects. These objects simply store information about your detected objects from an external object detection model. You will need to take your inference results and package them into a sl::CustomBoxObjectData so that the ZEDSDK can properly interpret your detections.
 
sl::ERROR_CODE RebootCamera () override
 Performs a hardware reset of the ZED2 or ZED2i camera.
 
sl::FUSION_ERROR_CODE SubscribeFusionToCameraUUID (sl::CameraIdentifier &slCameraUUID) override
 Given a UUID for another ZEDCam, subscribes that camera to this camera's Fusion instance. This will tell this camera's Fusion instance to start ingesting and fusing data from the other camera.
 
sl::CameraIdentifier PublishCameraToFusion () override
 Signal this camera to make its data available to the Fusion module and retrieve a UUID for this class's sl::Camera instance that can be used to subscribe an sl::Fusion instance to this camera later.
 
sl::FUSION_ERROR_CODE IngestGPSDataToFusion (geoops::GPSCoordinate stNewGPSLocation) override
 If this camera is the fusion instance master, this method can be used to ingest/process/fuse the current GNSS position of the camera with the ZEDSDK positional tracking. This allows the use of RequestFusionGeoPoseCopy() to get a high-resolution and highly accurate GNSS/VIO camera pose.
 
sl::ERROR_CODE EnablePositionalTracking (const float fExpectedCameraHeightFromFloorTolerance=constants::ZED_DEFAULT_FLOOR_PLANE_ERROR) override
 Enable the positional tracking functionality of the camera.
 
void DisablePositionalTracking () override
 Disables the positional tracking functionality of the camera.
 
void SetPositionalPose (const double dX, const double dY, const double dZ, const double dXO, const double dYO, const double dZO) override
 Sets the pose of the positional tracking of the camera. XYZ will point in their respective directions according to ZED_COORD_SYSTEM defined in AutonomyConstants.h.
 
sl::ERROR_CODE EnableSpatialMapping () override
 Enables the spatial mapping feature of the camera. Pose tracking will be enabled if it is not already.
 
void DisableSpatialMapping () override
 Disables the spatial mapping feature of the camera.
 
sl::ERROR_CODE EnableObjectDetection (const bool bEnableBatching=false) override
 Enables the object detection and tracking feature of the camera.
 
void DisableObjectDetection () override
 Disables the object detection and tracking feature of the camera.
 
bool GetCameraIsOpen () override
 Accessor for the current status of the camera.
 
bool GetUsingGPUMem () const override
 Accessor for whether this ZED is storing its frames in GPU memory.
 
std::string GetCameraModel () override
 Accessor for the camera model; converts the ZEDSDK model enum to a string.
 
unsigned int GetCameraSerial () override
 Accessor for the camera's serial number.
 
bool GetPositionalTrackingEnabled () override
 Accessor for whether the positional tracking functionality of the camera is enabled and functioning.
 
sl::PositionalTrackingStatus GetPositionalTrackingState () override
 Accessor for the current positional tracking status of the camera.
 
sl::FusedPositionalTrackingStatus GetFusedPositionalTrackingState () override
 Accessor for the current positional tracking status of the fusion instance.
 
sl::SPATIAL_MAPPING_STATE GetSpatialMappingState () override
 Accessor for the current state of the camera's spatial mapping feature.
 
sl::SPATIAL_MAPPING_STATE ExtractSpatialMapAsync (std::future< sl::Mesh > &fuMeshFuture) override
 Retrieve the built spatial map from the camera. Spatial mapping must be enabled. This method takes in an std::future<sl::Mesh> to eventually store the map in. It returns an enum code representing the successful scheduling of building the map. Any code other than SPATIAL_MAPPING_STATE::OK means the future will never be filled.
 
bool GetObjectDetectionEnabled () override
 Accessor for whether the camera's object detection and tracking feature is enabled.
 
- Public Member Functions inherited from ZEDCamera
 ZEDCamera (const int nPropResolutionX, const int nPropResolutionY, const int nPropFramesPerSecond, const double dPropHorizontalFOV, const double dPropVerticalFOV, const bool bEnableRecordingFlag, const bool bMemTypeGPU, const bool bUseHalfDepthPrecision, const bool bEnableFusionMaster, const int nNumFrameRetrievalThreads, const unsigned int unCameraSerialNumber)
 Construct a new ZEDCamera object.
 
virtual ~ZEDCamera ()=default
 Destroy the ZEDCamera object.
 
virtual bool GetIsFusionMaster () const
 Accessor for if this ZED is running a fusion instance.
 
- Public Member Functions inherited from Camera< cv::Mat >
 Camera (const int nPropResolutionX, const int nPropResolutionY, const int nPropFramesPerSecond, const PIXEL_FORMATS ePropPixelFormat, const double dPropHorizontalFOV, const double dPropVerticalFOV, const bool bEnableRecordingFlag, const int nNumFrameRetrievalThreads=5)
 Construct a new Camera object.
 
virtual ~Camera ()
 Destroy the Camera object.
 
cv::Size GetPropResolution () const
 Accessor for the Prop Resolution private member.
 
int GetPropFramesPerSecond () const
 Accessor for the Prop Frames Per Second private member.
 
PIXEL_FORMATS GetPropPixelFormat () const
 Accessor for the Prop Pixel Format private member.
 
double GetPropHorizontalFOV () const
 Accessor for the Prop Horizontal FOV private member.
 
double GetPropVerticalFOV () const
 Accessor for the Prop Vertical FOV private member.
 
bool GetEnableRecordingFlag () const
 Accessor for the Enable Recording Flag private member.
 
void SetEnableRecordingFlag (const bool bEnableRecordingFlag)
 Mutator for the Enable Recording Flag private member.
 
- Public Member Functions inherited from AutonomyThread< void >
 AutonomyThread ()
 Construct a new Autonomy Thread object.
 
virtual ~AutonomyThread ()
 Destroy the Autonomy Thread object. If the parent object or main thread is destroyed or exited while this thread is still running, a race condition will occur. Stopping and joining the thread here ensures that the main program can't exit if the user forgot to stop and join the thread.
 
void Start ()
 When this method is called, it starts a new thread that runs the code within the ThreadedContinuousCode method. This is the users main code that will run the important and continuous code for the class.
 
void RequestStop ()
 Signals the thread to stop executing user code and terminate. DOES NOT JOIN. This method will not force the thread to exit; if the user code is not written properly and contains a while statement or any other long-executing or blocking code, then the thread will not exit until the next iteration.
 
void Join ()
 Waits for thread to finish executing and then closes thread. This method will block the calling code until thread is finished.
 
bool Joinable () const
 Check if the code within the thread and all pools created by it are finished executing and the thread is ready to be closed.
 
AutonomyThreadState GetThreadState () const
 Accessor for the Threads State private member.
 
IPS GetIPS ()
 Accessor for the Frame IPS private member.
 

Private Member Functions

void ThreadedContinuousCode () override
 The code inside this private method runs in a separate thread, but still has access to *this. This method continuously calls the grab() function of the ZEDSDK, which updates all frames (RGB, depth, cloud) and all other data such as positional and spatial mapping. Then a thread pool is started and joined once per iteration to mass copy the frames and/or measures to any other thread waiting in the queues.
 
void PooledLinearCode () override
 This method holds the code that is run in the thread pool started by the ThreadedLinearCode() method. It copies the data from the different data objects to references of the same type stored in a queue filled by the Request methods.
 

Private Attributes

sl::Camera m_slCamera
 
std::shared_mutex m_muCameraMutex
 
sl::InitParameters m_slCameraParams
 
sl::RuntimeParameters m_slRuntimeParams
 
sl::RecordingParameters m_slRecordingParams
 
sl::Fusion m_slFusionInstance
 
std::shared_mutex m_muFusionMutex
 
sl::InitFusionParameters m_slFusionParams
 
sl::MEASURE m_slDepthMeasureType
 
sl::PositionalTrackingParameters m_slPoseTrackingParams
 
sl::PositionalTrackingFusionParameters m_slFusionPoseTrackingParams
 
sl::Pose m_slCameraPose
 
sl::GeoPose m_slFusionGeoPose
 
sl::Plane m_slFloorPlane
 
sl::Transform m_slFloorTrackingTransform
 
sl::SensorsData m_slSensorsData
 
sl::SpatialMappingParameters m_slSpatialMappingParams
 
sl::ObjectDetectionParameters m_slObjectDetectionParams
 
sl::BatchParameters m_slObjectDetectionBatchParams
 
sl::Objects m_slDetectedObjects
 
std::vector< sl::ObjectsBatch > m_slDetectedObjectsBatched
 
sl::MEM m_slMemoryType
 
sl::MODEL m_slCameraModel
 
float m_fExpectedCameraHeightFromFloorTolerance
 
bool m_bCameraReopenAlreadyChecked
 
double m_dPoseOffsetX
 
double m_dPoseOffsetY
 
double m_dPoseOffsetZ
 
double m_dPoseOffsetXO
 
double m_dPoseOffsetYO
 
double m_dPoseOffsetZO
 
geoops::GPSCoordinate m_stCurrentGPSBasedPosition
 
sl::Mat m_slFrame
 
sl::Mat m_slDepthImage
 
sl::Mat m_slDepthMeasure
 
sl::Mat m_slPointCloud
 
std::queue< containers::FrameFetchContainer< cv::cuda::GpuMat > > m_qGPUFrameCopySchedule
 
std::queue< containers::DataFetchContainer< std::vector< ZedObjectData > > > m_qCustomBoxIngestSchedule
 
std::queue< containers::DataFetchContainer< Pose > > m_qPoseCopySchedule
 
std::queue< containers::DataFetchContainer< sl::GeoPose > > m_qGeoPoseCopySchedule
 
std::queue< containers::DataFetchContainer< sl::Plane > > m_qFloorCopySchedule
 
std::queue< containers::DataFetchContainer< sl::SensorsData > > m_qSensorsCopySchedule
 
std::queue< containers::DataFetchContainer< std::vector< sl::ObjectData > > > m_qObjectDataCopySchedule
 
std::queue< containers::DataFetchContainer< std::vector< sl::ObjectsBatch > > > m_qObjectBatchedDataCopySchedule
 
std::shared_mutex m_muCustomBoxIngestMutex
 
std::shared_mutex m_muPoseCopyMutex
 
std::shared_mutex m_muGeoPoseCopyMutex
 
std::shared_mutex m_muFloorCopyMutex
 
std::shared_mutex m_muSensorsCopyMutex
 
std::shared_mutex m_muObjectDataCopyMutex
 
std::shared_mutex m_muObjectBatchedDataCopyMutex
 
bool m_bQueueTogglesAlreadyReset
 
std::atomic< bool > m_bNormalFramesQueued
 
std::atomic< bool > m_bDepthFramesQueued
 
std::atomic< bool > m_bPointCloudsQueued
 
std::atomic< bool > m_bPosesQueued
 
std::atomic< bool > m_bGeoPosesQueued
 
std::atomic< bool > m_bFloorsQueued
 
std::atomic< bool > m_bSensorsQueued
 
std::atomic< bool > m_bObjectsQueued
 
std::atomic< bool > m_bBatchedObjectsQueued
 

Additional Inherited Members

- Public Types inherited from AutonomyThread< void >
enum  AutonomyThreadState
 
- Protected Member Functions inherited from AutonomyThread< void >
void RunPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts/adds tasks to a thread pool that runs nNumTasksToQueue copies of the code within the PooledLinearCode() method using nNumThreads number of threads. This is meant to be used as an internal utility of the child class to further improve parallelization. Default value for nNumThreads is 2.
 
void RunDetachedPool (const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
 When this method is called, it starts a thread pool full of threads that don't return std::futures (like a placeholder for the thread return type). This means the thread will not have a return type and there is no way to determine if the thread has finished other than calling the Join() method. Only use this if you want to 'set and forget'. It will be faster as it doesn't return futures. Runs PooledLinearCode() method code. This is meant to be used as an internal utility of the child class to further improve parallelization.
 
void ParallelizeLoop (const int nNumThreads, const N tTotalIterations, F &&tLoopFunction)
 Given a ref-qualified looping function and an arbitrary number of iterations, this method will divide up the loop and run each section in a thread pool. This function must not return anything. This method will block until the loop has completed.
 
void ClearPoolQueue ()
 Clears any tasks waiting to be run in the queue; tasks currently running will remain running.
 
void JoinPool ()
 Waits for the pool to finish executing tasks. This method will block the calling code until all tasks are finished.
 
bool PoolJoinable () const
 Check if the internal pool threads are done executing code and the queue is empty.
 
void SetMainThreadIPSLimit (int nMaxIterationsPerSecond=0)
 Mutator for the Main Thread Max IPS private member.
 
int GetPoolNumOfThreads ()
 Accessor for the Pool Num Of Threads private member.
 
int GetPoolQueueLength ()
 Accessor for the Pool Queue Size private member.
 
std::vector< void > GetPoolResults ()
 Accessor for the Pool Results private member. The action of getting results will destroy and remove them from this object. This method blocks if the thread is not finished, so no need to call JoinPool() before getting results.
 
int GetMainThreadMaxIPS () const
 Accessor for the Main Thread Max IPS private member.
 
- Protected Attributes inherited from ZEDCamera
const std::memory_order ATOMIC_MEMORY_ORDER_METHOD = std::memory_order_relaxed
 
unsigned int m_unCameraSerialNumber
 
bool m_bCameraIsFusionMaster
 
- Protected Attributes inherited from Camera< cv::Mat >
int m_nPropResolutionX
 
int m_nPropResolutionY
 
int m_nPropFramesPerSecond
 
int m_nNumFrameRetrievalThreads
 
PIXEL_FORMATS m_ePropPixelFormat
 
double m_dPropHorizontalFOV
 
double m_dPropVerticalFOV
 
std::atomic_bool m_bEnableRecordingFlag
 
std::queue< containers::FrameFetchContainer< cv::Mat > > m_qFrameCopySchedule
 
std::shared_mutex m_muPoolScheduleMutex
 
std::shared_mutex m_muFrameCopyMutex
 
- Protected Attributes inherited from AutonomyThread< void >
IPS m_IPS
 

Detailed Description

This class implements and interfaces with the most common ZEDSDK cameras and features. It is designed in such a way that multiple other classes/threads can safely call any method of an object of this class without resource corruption or slowdown of the camera.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-21

Constructor & Destructor Documentation

◆ ZEDCam()

ZEDCam::ZEDCam ( const int  nPropResolutionX,
const int  nPropResolutionY,
const int  nPropFramesPerSecond,
const double  dPropHorizontalFOV,
const double  dPropVerticalFOV,
const bool  bEnableRecordingFlag = false,
const bool  bExportSVORecordingFlag = false,
const float  fMinSenseDistance = constants::ZED_DEFAULT_MINIMUM_DISTANCE,
const float  fMaxSenseDistance = constants::ZED_DEFAULT_MAXIMUM_DISTANCE,
const bool  bMemTypeGPU = false,
const bool  bUseHalfDepthPrecision = false,
const bool  bEnableFusionMaster = false,
const int  nNumFrameRetrievalThreads = 10,
const unsigned int  unCameraSerialNumber = 0 
)

Construct a new ZEDCam::ZEDCam object.

Parameters
nPropResolutionX - X res of camera. Must be smaller than ZED_BASE_RESOLUTION.
nPropResolutionY - Y res of camera. Must be smaller than ZED_BASE_RESOLUTION.
nPropFramesPerSecond - FPS camera is running at.
dPropHorizontalFOV - The horizontal field of view.
dPropVerticalFOV - The vertical field of view.
bEnableRecordingFlag - Whether or not this camera should be recorded.
bExportSVORecordingFlag - Whether or not to export the recording to an SVO file.
fMinSenseDistance - The minimum distance to include in depth measures.
fMaxSenseDistance - The maximum distance to include in depth measures.
bMemTypeGPU - Whether or not to use the GPU memory for operations.
bUseHalfDepthPrecision - Whether or not to use a float16 instead of float32 for depth measurements.
bEnableFusionMaster - Enables ZEDSDK Fusion integration for this camera. This camera will serve as the master instance for all fusion functions.
nNumFrameRetrievalThreads - The number of threads to use for copying frames/data to requests.
unCameraSerialNumber - The serial number of the camera to open.
Note
Do not set bEnableFusionMaster to true if you want to subscribe the camera to another camera! Only one camera should have Fusion enabled! To subscribe a camera to the camera running the master fusion instance, use the GetFusionInstance() and GetCameraSerial() functions. Refer to Fusion documentation for info on working with fusion functions: https://www.stereolabs.com/docs/api/classsl_1_1Fusion.html
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26
:
    ZEDCamera(nPropResolutionX,
              nPropResolutionY,
              nPropFramesPerSecond,
              dPropHorizontalFOV,
              dPropVerticalFOV,
              bEnableRecordingFlag,
              bMemTypeGPU,
              bUseHalfDepthPrecision,
              bEnableFusionMaster,
              nNumFrameRetrievalThreads,
              unCameraSerialNumber)
{
    // Assign member variables.
    bMemTypeGPU ? m_slMemoryType = sl::MEM::GPU : m_slMemoryType = sl::MEM::CPU;
    bUseHalfDepthPrecision ? m_slDepthMeasureType = sl::MEASURE::DEPTH_U16_MM : m_slDepthMeasureType = sl::MEASURE::DEPTH;
    m_dPoseOffsetX = 0.0;
    m_dPoseOffsetY = 0.0;
    m_dPoseOffsetZ = 0.0;
    m_dPoseOffsetXO = 0.0;
    m_dPoseOffsetYO = 0.0;
    m_dPoseOffsetZO = 0.0;
    m_bCameraReopenAlreadyChecked = false;
    // Initialize queued toggles.
    m_bQueueTogglesAlreadyReset = false;
    m_bNormalFramesQueued = false;
    m_bDepthFramesQueued = false;
    m_bPointCloudsQueued = false;
    m_bPosesQueued = false;
    m_bGeoPosesQueued = false;
    m_bFloorsQueued = false;
    m_bObjectsQueued = false;
    m_bBatchedObjectsQueued = false;

    // Setup camera params.
    m_slCameraParams.camera_resolution = constants::ZED_BASE_RESOLUTION;
    m_slCameraParams.camera_fps = nPropFramesPerSecond;
    m_slCameraParams.coordinate_units = constants::ZED_MEASURE_UNITS;
    m_slCameraParams.coordinate_system = constants::ZED_COORD_SYSTEM;
    m_slCameraParams.sdk_verbose = constants::ZED_SDK_VERBOSE;
    m_slCameraParams.depth_mode = constants::ZED_DEPTH_MODE;
    m_slCameraParams.depth_minimum_distance = fMinSenseDistance;
    m_slCameraParams.depth_maximum_distance = fMaxSenseDistance;
    m_slCameraParams.depth_stabilization = constants::ZED_DEPTH_STABILIZATION;
    // Only set serial number if necessary.
    if (unCameraSerialNumber != static_cast<unsigned int>(0))
    {
        m_slCameraParams.input.setFromSerialNumber(unCameraSerialNumber);
    }

    // Setup camera runtime params.
    m_slRuntimeParams.enable_fill_mode = constants::ZED_SENSING_FILL;
    // Setup SVO recording parameters.
    m_slRecordingParams.compression_mode = constants::ZED_SVO_COMPRESSION;
    m_slRecordingParams.bitrate = constants::ZED_SVO_BITRATE;

    // Setup positional tracking parameters.
    m_slPoseTrackingParams.mode = constants::ZED_POSETRACK_MODE;
    m_slPoseTrackingParams.enable_area_memory = constants::ZED_POSETRACK_AREA_MEMORY;
    m_slPoseTrackingParams.enable_pose_smoothing = constants::ZED_POSETRACK_POSE_SMOOTHING;
    m_slPoseTrackingParams.set_floor_as_origin = constants::ZED_POSETRACK_FLOOR_IS_ORIGIN;
    m_slPoseTrackingParams.enable_imu_fusion = constants::ZED_POSETRACK_ENABLE_IMU_FUSION;
    m_slPoseTrackingParams.depth_min_range = constants::ZED_POSETRACK_USABLE_DEPTH_MIN;
    m_slPoseTrackingParams.set_gravity_as_origin = constants::ZED_POSETRACK_USE_GRAVITY_ORIGIN;

    // Setup spatial mapping parameters.
    m_slSpatialMappingParams.map_type = constants::ZED_MAPPING_TYPE;
    m_slSpatialMappingParams.resolution_meter = constants::ZED_MAPPING_RESOLUTION_METER;
    m_slSpatialMappingParams.save_texture = true;
    m_slSpatialMappingParams.use_chunk_only = constants::ZED_MAPPING_USE_CHUNK_ONLY;
    m_slSpatialMappingParams.stability_counter = constants::ZED_MAPPING_STABILITY_COUNTER;
    // Set or auto-set max depth range for mapping.
    if (constants::ZED_MAPPING_RANGE_METER <= 0)
    {
        // Automatically guess the best mapping depth range.
        m_slSpatialMappingParams.range_meter = m_slSpatialMappingParams.getRecommendedRange(constants::ZED_MAPPING_RESOLUTION_METER, m_slCamera);
    }
    else
    {
        // Manually set.
        m_slSpatialMappingParams.range_meter = constants::ZED_MAPPING_RANGE_METER;
    }

    // Setup object detection/tracking parameters.
    m_slObjectDetectionParams.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_BOX_OBJECTS;
    m_slObjectDetectionParams.enable_tracking = constants::ZED_OBJDETECTION_TRACK_OBJ;
    m_slObjectDetectionParams.enable_segmentation = constants::ZED_OBJDETECTION_SEGMENTATION;
    m_slObjectDetectionParams.filtering_mode = constants::ZED_OBJDETECTION_FILTERING;
    m_slObjectDetectionParams.prediction_timeout_s = constants::ZED_OBJDETECTION_TRACKING_PREDICTION_TIMEOUT;
    // Setup object detection/tracking batch parameters.
    m_slObjectDetectionBatchParams.enable = false;
    m_slObjectDetectionBatchParams.id_retention_time = constants::ZED_OBJDETECTION_BATCH_RETENTION_TIME;
    m_slObjectDetectionBatchParams.latency = constants::ZED_OBJDETECTION_BATCH_LATENCY;
    m_slObjectDetectionParams.batch_parameters = m_slObjectDetectionBatchParams;

    // Attempt to open camera.
    sl::ERROR_CODE slReturnCode = m_slCamera.open(m_slCameraParams);
    // Check if the camera was successfully opened.
    if (m_slCamera.isOpened())
    {
        // Update camera serial number if camera was opened with autodetect.
        m_unCameraSerialNumber = m_slCamera.getCameraInformation().serial_number;
        // Update camera model.
        m_slCameraModel = m_slCamera.getCameraInformation().camera_model;
        // Check if the camera should record and output an SVO file.
        if (bExportSVORecordingFlag)
        {
            // Now that camera is opened get camera name and construct path.
            std::string szSVOFilePath = constants::LOGGING_OUTPUT_PATH_ABSOLUTE + "/" + logging::g_szProgramStartTimeString + "/" + this->GetCameraModel() + "_" +
                                        std::to_string(this->GetCameraSerial());
            m_slRecordingParams.video_filename = szSVOFilePath.c_str();
            // Enable recording.
            sl::ERROR_CODE slReturnCode = m_slCamera.enableRecording(m_slRecordingParams);
            // Check if recording was enabled successfully.
            if (slReturnCode == sl::ERROR_CODE::SUCCESS)
            {
                // Submit logger message.
                LOG_DEBUG(logging::g_qSharedLogger,
                          "Successfully enabled SVO recording for {} ZED stereo camera with serial number {}.",
                          this->GetCameraModel(),
                          m_unCameraSerialNumber);
            }
            else
            {
                // Submit logger message.
                LOG_ERROR(logging::g_qSharedLogger,
                          "Failed to enable SVO recording for {} ZED stereo camera with serial number {}. sl::ERROR_CODE is {}",
                          this->GetCameraModel(),
                          m_unCameraSerialNumber,
                          sl::toString(slReturnCode).c_str());
            }
        }

        // Submit logger message.
        LOG_INFO(logging::g_qSharedLogger, "{} ZED stereo camera with serial number {} has been successfully opened.", this->GetCameraModel(), m_unCameraSerialNumber);
    }
    else
    {
        // Submit logger message.
        LOG_ERROR(logging::g_qSharedLogger,
                  "Unable to open ZED stereo camera {} ({})! sl::ERROR_CODE is: {}",
                  sl::toString(m_slCameraModel).get(),
                  m_unCameraSerialNumber,
                  sl::toString(slReturnCode).get());
    }

    // Check if this camera should serve as the master Fusion instance.
    if (bEnableFusionMaster)
    {
        // Setup Fusion params.
        m_slFusionParams.coordinate_units = constants::FUSION_MEASUREMENT_UNITS;
        m_slFusionParams.coordinate_system = constants::FUSION_COORD_SYSTEM;
        m_slFusionParams.verbose = constants::FUSION_SDK_VERBOSE;
        // Setup Fusion positional tracking parameters.
        m_slFusionPoseTrackingParams.enable_GNSS_fusion = constants::FUSION_ENABLE_GNSS_FUSION;

        // Initialize fusion instance for camera.
        sl::FUSION_ERROR_CODE slReturnCode = m_slFusionInstance.init(m_slFusionParams);
        // Check if fusion initialized properly.
        if (slReturnCode == sl::FUSION_ERROR_CODE::SUCCESS)
        {
            // Enable odometry publishing for this ZED camera.
            m_slCamera.startPublishing();
            // Subscribe this camera to fusion instance.
            slReturnCode = m_slFusionInstance.subscribe(sl::CameraIdentifier(m_unCameraSerialNumber));

            // Check if this camera was successfully subscribed to the Fusion instance.
            if (slReturnCode == sl::FUSION_ERROR_CODE::SUCCESS)
            {
                // Submit logger message.
                LOG_DEBUG(logging::g_qSharedLogger, "Initialized FUSION instance for ZED camera {} ({})!", sl::toString(m_slCameraModel).get(), m_unCameraSerialNumber);
            }
            else
            {
                // Submit logger message.
                LOG_DEBUG(logging::g_qSharedLogger,
                          "Unable to subscribe to internal FUSION instance for camera {} ({})! sl::FUSION_ERROR_CODE is: {}",
                          sl::toString(m_slCameraModel).get(),
                          m_unCameraSerialNumber,
                          sl::toString(slReturnCode).get());
            }
        }
        else
        {
            // Submit logger message.
            LOG_ERROR(logging::g_qSharedLogger,
                      "Unable to initialize FUSION instance for camera {} ({})! sl::FUSION_ERROR_CODE is: {}",
                      sl::toString(m_slCameraModel).get(),
                      m_unCameraSerialNumber,
                      sl::toString(slReturnCode).get());
        }
    }

    // Set max FPS of the ThreadedContinuousCode method.
    this->SetMainThreadIPSLimit(nPropFramesPerSecond);
}
void SetMainThreadIPSLimit(int nMaxIterationsPerSecond=0)
Mutator for the Main Thread Max I P S private member.
Definition AutonomyThread.hpp:467
unsigned int GetCameraSerial() override
Accessor for the camera's serial number.
Definition ZEDCam.cpp:2273
std::string GetCameraModel() override
Accessor for the model enum from the ZEDSDK and represents the camera model as a string.
Definition ZEDCam.cpp:2244
This class serves as a middle inheritor between the Camera interface and the ZEDCam class....
Definition ZEDCamera.hpp:33
Here is the call graph for this function:

◆ ~ZEDCam()

ZEDCam::~ZEDCam ( )

Destroy the Zed Cam:: Zed Cam object.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26
258{
259 // Stop threaded code.
260 this->RequestStop();
261 this->Join();
262
263 // Check if fusion master instance has been started for this camera.
264 if (m_bCameraIsFusionMaster)
265 {
266 // Close Fusion instance.
267 m_slFusionInstance.close();
268 }
269
270 // Close the ZEDCam.
271 m_slCamera.close();
272
273 // Submit logger message.
274 LOG_INFO(logging::g_qSharedLogger, "ZED stereo camera with serial number {} has been successfully closed.", m_unCameraSerialNumber);
275}
void Join()
Waits for thread to finish executing and then closes thread. This method will block the calling code ...
Definition AutonomyThread.hpp:180
void RequestStop()
Signals threads to stop executing user code, terminate. DOES NOT JOIN. This method will not force the...
Definition AutonomyThread.hpp:164
Here is the call graph for this function:

Member Function Documentation

◆ RequestFrameCopy() [1/2]

std::future< bool > ZEDCam::RequestFrameCopy ( cv::Mat &  cvFrame)
overridevirtual

Requests a regular BGRA image from the LEFT eye of the zed camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. Remember this code will be run in whatever class/thread calls it.

Parameters
cvFrame- A reference to the cv::Mat to copy the normal frame to.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
ClayJay3 (claytonraycowen@gmail.com)
Date
2023-09-09

Implements ZEDCamera.

1002{
1003 // Assemble the FrameFetchContainer.
1004 containers::FrameFetchContainer<cv::Mat> stContainer(cvFrame, PIXEL_FORMATS::eBGRA);
1005
1006 // Acquire lock on frame copy queue.
1007 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1008 // Append frame fetch container to the schedule queue.
1009 m_qFrameCopySchedule.push(stContainer);
1010 // Release lock on the frame schedule queue.
1011 lkSchedulers.unlock();
1012
1013 // Check if frame queue toggle has already been set.
1014 if (!m_bNormalFramesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1015 {
1016 // Signify that the frame queue is not empty.
1017 m_bNormalFramesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1018 }
1019
1020 // Return the future from the promise stored in the container.
1021 return stContainer.pCopiedFrameStatus->get_future();
1022}
This struct is used to carry references to camera frames for scheduling and copying....
Definition FetchContainers.hpp:89
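The scheduling pattern above (a container carrying a shared promise, pushed onto a mutex-guarded queue, fulfilled later by the camera thread) can be sketched with standard-library types only. The `FrameRequest` struct, globals, and function names below are simplified stand-ins for illustration, not the actual Autonomy_Software types:

```cpp
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <vector>

// Simplified stand-in for containers::FrameFetchContainer: carries a
// pointer to the caller's buffer plus a promise the camera thread fulfills.
struct FrameRequest
{
    std::vector<unsigned char>* pFrame;
    std::shared_ptr<std::promise<bool>> pStatus = std::make_shared<std::promise<bool>>();
};

std::mutex g_muQueue;
std::queue<FrameRequest> g_qRequests;

// Caller side: enqueue a request and return the future to wait on.
std::future<bool> RequestFrameCopy(std::vector<unsigned char>& vFrame)
{
    FrameRequest stRequest{&vFrame};
    std::future<bool> fuStatus = stRequest.pStatus->get_future();
    std::lock_guard<std::mutex> lk(g_muQueue);
    g_qRequests.push(std::move(stRequest));
    return fuStatus;
}

// Camera-thread side: copy the latest frame into every queued buffer.
void ServiceRequests(const std::vector<unsigned char>& vLatestFrame)
{
    std::lock_guard<std::mutex> lk(g_muQueue);
    while (!g_qRequests.empty())
    {
        FrameRequest stRequest = std::move(g_qRequests.front());
        g_qRequests.pop();
        *stRequest.pFrame = vLatestFrame;      // The copy happens here.
        stRequest.pStatus->set_value(true);    // Wake the waiting caller.
    }
}
```

A caller waits on the returned future before touching its buffer, which is exactly the contract documented above.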

◆ RequestFrameCopy() [2/2]

std::future< bool > ZEDCam::RequestFrameCopy ( cv::cuda::GpuMat &  cvGPUFrame)
overridevirtual

Grabs a regular BGRA image from the LEFT eye of the zed camera and stores it in a GPU mat. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. Remember this code will be run in whatever class/thread calls it.

Parameters
cvGPUFrame- A reference to the cv::cuda::GpuMat to copy the normal frame to.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
ClayJay3 (claytonraycowen@gmail.com)
Date
2023-09-09

Reimplemented from ZEDCamera.

1037{
1038 // Assemble the FrameFetchContainer.
1039 containers::FrameFetchContainer<cv::cuda::GpuMat> stContainer(cvGPUFrame, PIXEL_FORMATS::eBGRA);
1040
1041 // Acquire lock on frame copy queue.
1042 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1043 // Append frame fetch container to the schedule queue.
1044 m_qGPUFrameCopySchedule.push(stContainer);
1045 // Release lock on the frame schedule queue.
1046 lkSchedulers.unlock();
1047
1048 // Check if frame queue toggle has already been set.
1049 if (!m_bNormalFramesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1050 {
1051 // Signify that the frame queue is not empty.
1052 m_bNormalFramesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1053 }
1054
1055 // Return the future from the promise stored in the container.
1056 return stContainer.pCopiedFrameStatus->get_future();
1057}

◆ RequestDepthCopy() [1/2]

std::future< bool > ZEDCam::RequestDepthCopy ( cv::Mat &  cvDepth,
const bool  bRetrieveMeasure = true 
)
overridevirtual

Requests a depth measure or image from the camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. This image has the same shape as a grayscale image, but the values represent the depth in MILLIMETERS. The ZEDSDK will always return this measure in MILLIMETERS.

Parameters
cvDepth- A reference to the cv::Mat to copy the depth frame to.
bRetrieveMeasure- False to get depth IMAGE instead of MEASURE. Do not use the 8-bit grayscale depth image for purposes other than displaying depth.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

1075{
1076 // Create instance variables.
1077 PIXEL_FORMATS eFrameType;
1078
1079 // Check if the container should be set to retrieve an image or a measure.
1080     eFrameType = bRetrieveMeasure ? PIXEL_FORMATS::eDepthMeasure : PIXEL_FORMATS::eDepthImage;
1081 // Assemble container.
1082 containers::FrameFetchContainer<cv::Mat> stContainer(cvDepth, eFrameType);
1083
1084 // Acquire lock on frame copy queue.
1085 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1086 // Append frame fetch container to the schedule queue.
1087 m_qFrameCopySchedule.push(stContainer);
1088 // Release lock on the frame schedule queue.
1089 lkSchedulers.unlock();
1090
1091 // Check if frame queue toggle has already been set.
1092 if (!m_bDepthFramesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1093 {
1094 // Signify that the frame queue is not empty.
1095 m_bDepthFramesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1096 }
1097
1098 // Return the future from the promise stored in the container.
1099 return stContainer.pCopiedFrameStatus->get_future();
1100}
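As the parameter note above warns, the 8-bit depth IMAGE is for display only. A minimal sketch of why, assuming a hypothetical linear mapping of a millimeter depth measure onto an 8-bit display range (the actual ZEDSDK mapping may differ):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical mapping of a float depth measure (millimeters) onto an
// 8-bit display pixel over a fixed sensing range — illustrating why the
// depth IMAGE should not be used for anything but display.
uint8_t DepthToDisplayPixel(float fDepthMM, float fMinMM, float fMaxMM)
{
    // Clamp into the sensing range, then scale linearly to 0-255.
    float fClamped = std::clamp(fDepthMM, fMinMM, fMaxMM);
    return static_cast<uint8_t>(255.0f * (fClamped - fMinMM) / (fMaxMM - fMinMM));
}
```

Over a 0.3 m to 20 m sensing range one 8-bit step spans roughly 77 mm, so nearby depths collapse to the same pixel value; the float MEASURE keeps full millimeter precision.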

◆ RequestDepthCopy() [2/2]

std::future< bool > ZEDCam::RequestDepthCopy ( cv::cuda::GpuMat &  cvGPUDepth,
const bool  bRetrieveMeasure = true 
)
overridevirtual

Requests a depth measure or image from the camera. Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it. This image has the same shape as a grayscale image, but the values represent the depth in MILLIMETERS. The ZEDSDK will always return this measure in MILLIMETERS.

Parameters
cvGPUDepth- A reference to the cv::cuda::GpuMat to copy the depth frame to.
bRetrieveMeasure- False to get depth IMAGE instead of MEASURE. Do not use the 8-bit grayscale depth image for purposes other than displaying depth.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Reimplemented from ZEDCamera.

1118{
1119 // Create instance variables.
1120 PIXEL_FORMATS eFrameType;
1121
1122 // Check if the container should be set to retrieve an image or a measure.
1123     eFrameType = bRetrieveMeasure ? PIXEL_FORMATS::eDepthMeasure : PIXEL_FORMATS::eDepthImage;
1124 // Assemble container.
1125 containers::FrameFetchContainer<cv::cuda::GpuMat> stContainer(cvGPUDepth, eFrameType);
1126
1127 // Acquire lock on frame copy queue.
1128 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1129 // Append frame fetch container to the schedule queue.
1130 m_qGPUFrameCopySchedule.push(stContainer);
1131 // Release lock on the frame schedule queue.
1132 lkSchedulers.unlock();
1133
1134 // Check if frame queue toggle has already been set.
1135 if (!m_bDepthFramesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1136 {
1137 // Signify that the frame queue is not empty.
1138 m_bDepthFramesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1139 }
1140
1141 // Return the future from the promise stored in the container.
1142 return stContainer.pCopiedFrameStatus->get_future();
1143}

◆ RequestPointCloudCopy() [1/2]

std::future< bool > ZEDCam::RequestPointCloudCopy ( cv::Mat &  cvPointCloud)
overridevirtual

Requests a point cloud image from the camera. This image has the same resolution as a normal image but with three XYZ values replacing the old color values in the 3rd dimension. The units and sign of the XYZ values are determined by ZED_MEASURE_UNITS and ZED_COORD_SYSTEM constants set in AutonomyConstants.h.

A 4th value in the 3rd dimension exists as a float32 storing the BGRA values. The four bytes of that float32 hold the B, G, R, and A channels, 8 bits each (32 bits total).

Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it.

Parameters
cvPointCloud- A reference to the cv::Mat to copy the point cloud frame to.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

1166{
1167 // Assemble the FrameFetchContainer.
1168 containers::FrameFetchContainer<cv::Mat> stContainer(cvPointCloud, PIXEL_FORMATS::eXYZBGRA);
1169
1170 // Acquire lock on frame copy queue.
1171 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1172 // Append frame fetch container to the schedule queue.
1173 m_qFrameCopySchedule.push(stContainer);
1174 // Release lock on the frame schedule queue.
1175 lkSchedulers.unlock();
1176
1177 // Check if point cloud queue toggle has already been set.
1178 if (!m_bPointCloudsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1179 {
1180 // Signify that the point cloud queue is not empty.
1181 m_bPointCloudsQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1182 }
1183
1184 // Return the future from the promise stored in the container.
1185 return stContainer.pCopiedFrameStatus->get_future();
1186}
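The packed color channel described above can be unpacked with plain bit operations. This sketch assumes the B channel occupies the lowest byte of the reinterpreted float32; verify the exact byte order against your ZEDSDK version:

```cpp
#include <cstdint>
#include <cstring>

// Pack four 8-bit channel values into the bit pattern of a float, as the
// 4th channel of an XYZBGRA point cloud does. Byte order (B lowest) is an
// assumption here, not confirmed against the SDK.
float PackBGRA(uint8_t ucB, uint8_t ucG, uint8_t ucR, uint8_t ucA)
{
    uint32_t unPacked = static_cast<uint32_t>(ucB) | (static_cast<uint32_t>(ucG) << 8) |
                        (static_cast<uint32_t>(ucR) << 16) | (static_cast<uint32_t>(ucA) << 24);
    float fPacked;
    std::memcpy(&fPacked, &unPacked, sizeof(fPacked));    // Reinterpret bits, not value.
    return fPacked;
}

// Recover the individual channels from the packed float.
void UnpackBGRA(float fPacked, uint8_t& ucB, uint8_t& ucG, uint8_t& ucR, uint8_t& ucA)
{
    uint32_t unPacked;
    std::memcpy(&unPacked, &fPacked, sizeof(unPacked));
    ucB = unPacked & 0xFF;
    ucG = (unPacked >> 8) & 0xFF;
    ucR = (unPacked >> 16) & 0xFF;
    ucA = (unPacked >> 24) & 0xFF;
}
```

Note the `std::memcpy` rather than a pointer cast: reinterpreting float bits through a cast is undefined behavior in C++, while `memcpy` is well defined.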

◆ RequestPointCloudCopy() [2/2]

std::future< bool > ZEDCam::RequestPointCloudCopy ( cv::cuda::GpuMat &  cvGPUPointCloud)
overridevirtual

Grabs a point cloud image from the camera. This image has the same resolution as a normal image but with three XYZ values replacing the old color values in the 3rd dimension. The units and sign of the XYZ values are determined by ZED_MEASURE_UNITS and ZED_COORD_SYSTEM constants set in AutonomyConstants.h.

A 4th value in the 3rd dimension exists as a float32 storing the BGRA values. The four bytes of that float32 hold the B, G, R, and A channels, 8 bits each (32 bits total).

Puts a frame pointer into a queue so a copy of a frame from the camera can be written to it.

Parameters
cvGPUPointCloud- A reference to the cv::cuda::GpuMat to copy the point cloud frame to.
Returns
std::future<bool> - A future that should be waited on before the passed in frame is used. Value will be true if frame was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Reimplemented from ZEDCamera.

1209{
1210 // Assemble the FrameFetchContainer.
1211 containers::FrameFetchContainer<cv::cuda::GpuMat> stContainer(cvGPUPointCloud, PIXEL_FORMATS::eXYZBGRA);
1212
1213 // Acquire lock on frame copy queue.
1214 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1215 // Append frame fetch container to the schedule queue.
1216 m_qGPUFrameCopySchedule.push(stContainer);
1217 // Release lock on the frame schedule queue.
1218 lkSchedulers.unlock();
1219
1220 // Check if point cloud queue toggle has already been set.
1221 if (!m_bPointCloudsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1222 {
1223 // Signify that the point cloud queue is not empty.
1224 m_bPointCloudsQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1225 }
1226
1227 // Return the future from the promise stored in the container.
1228 return stContainer.pCopiedFrameStatus->get_future();
1229}

◆ RequestPositionalPoseCopy()

std::future< bool > ZEDCam::RequestPositionalPoseCopy ( Pose &  stPose)
overridevirtual

Requests the current pose of the camera relative to its start pose or the origin of the set pose. Puts a Pose pointer into a queue so a copy of a pose from the camera can be written to it. If positional tracking is not enabled, this method will return false and the ZEDCam::Pose may be uninitialized.

Parameters
stPose- A reference to the ZEDCam::Pose object to copy the current camera pose to.
Returns
std::future<bool> - A future that should be waited on before the passed in sl::Pose is used. Value will be true if pose was successfully retrieved.
Note
If this camera is acting as the ZEDSDK Fusion master instance, then the positional pose returned will be from the Fusion instance, i.e. the fused GNSS and visual-inertial odometry.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Implements ZEDCamera.

1247{
1248 // Acquire read lock.
1249 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1250 // Check if positional tracking has been enabled.
1251 if (m_slCamera.isPositionalTrackingEnabled())
1252 {
1253 // Release lock.
1254 lkCameraLock.unlock();
1255 // Assemble the data container.
1256 containers::DataFetchContainer<Pose> stContainer(stPose);
1257
1258 // Acquire lock on pose copy queue.
1259 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1260 // Append pose fetch container to the schedule queue.
1261 m_qPoseCopySchedule.push(stContainer);
1262 // Release lock on the pose schedule queue.
1263 lkSchedulers.unlock();
1264
1265 // Check if pose queue toggle has already been set.
1266 if (!m_bPosesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1267 {
1268 // Signify that the pose queue is not empty.
1269 m_bPosesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1270 }
1271
1272 // Return the future from the promise stored in the container.
1273 return stContainer.pCopiedDataStatus->get_future();
1274 }
1275 else
1276 {
1277 // Release lock.
1278 lkCameraLock.unlock();
1279 // Submit logger message.
1280 LOG_WARNING(logging::g_qSharedLogger, "Attempted to get ZED positional pose but positional tracking is not enabled or is still initializing!");
1281
1282 // Create dummy promise to return the future.
1283 std::promise<bool> pmDummyPromise;
1284 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
1285 // Set future value.
1286 pmDummyPromise.set_value(false);
1287
1288 // Return unsuccessful.
1289 return fuDummyFuture;
1290 }
1291}
This struct is used to carry references to any datatype for scheduling and copying....
Definition FetchContainers.hpp:165
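Note the failure path above: rather than throwing, the method returns a future that is already resolved to false, so the `std::future<bool>` contract holds on every path. That pattern in isolation, using only the standard library:

```cpp
#include <future>

// Return a future that is already resolved to false, so callers can
// unconditionally wait on the result without special-casing rejections.
std::future<bool> MakeFailedFuture()
{
    std::promise<bool> pmDummyPromise;
    std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
    pmDummyPromise.set_value(false);    // Resolve immediately.
    return fuDummyFuture;
}
```

Callers can therefore always `.get()` the returned future, whether or not the request was actually queued.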

◆ RequestFusionGeoPoseCopy()

std::future< bool > ZEDCam::RequestFusionGeoPoseCopy ( sl::GeoPose &  slGeoPose)
overridevirtual

Requests the current geo pose of the camera. This method should be used to retrieve ONLY the GNSS data from the ZEDSDK's Fusion module. Puts a GeoPose pointer into a queue so a copy of a GeoPose from the camera can be written to it. If positional tracking or fusion is disabled for this camera, then this method will return false and the sl::GeoPose may be uninitialized.

Parameters
slGeoPose- A reference to the sl::GeoPose object to copy the current geo pose to.
Returns
std::future<bool> - A future that should be waited on before the passed in sl::GeoPose is used. Value will be true if geo pose was successfully retrieved.
Note
This camera must be a Fusion Master and the NavBoard must be giving accurate info to the camera for this to be functional.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-25

Implements ZEDCamera.

1309{
1310 // Acquire read lock.
1311 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1312 // Check if positional tracking has been enabled.
1313 if (m_bCameraIsFusionMaster && m_slCamera.isPositionalTrackingEnabled())
1314 {
1315 // Release lock.
1316 lkCameraLock.unlock();
1317 // Assemble the data container.
1318 containers::DataFetchContainer<sl::GeoPose> stContainer(slGeoPose);
1319
1320 // Acquire lock on frame copy queue.
1321 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1322 // Append frame fetch container to the schedule queue.
1323 m_qGeoPoseCopySchedule.push(stContainer);
1324 // Release lock on the frame schedule queue.
1325 lkSchedulers.unlock();
1326
1327 // Check if pose queue toggle has already been set.
1328 if (!m_bGeoPosesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1329 {
1330 // Signify that the pose queue is not empty.
1331 m_bGeoPosesQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1332 }
1333
1334 // Return the future from the promise stored in the container.
1335 return stContainer.pCopiedDataStatus->get_future();
1336 }
1337 else
1338 {
1339 // Release lock.
1340 lkCameraLock.unlock();
1341 // Submit logger message.
1342 LOG_WARNING(logging::g_qSharedLogger,
1343 "Attempted to get ZED FUSION geo pose but positional tracking is not enabled and/or this camera was not initialized as a Fusion Master!");
1344
1345 // Create dummy promise to return the future.
1346 std::promise<bool> pmDummyPromise;
1347 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
1348 // Set future value.
1349 pmDummyPromise.set_value(false);
1350
1351 // Return unsuccessful.
1352 return fuDummyFuture;
1353 }
1354}

◆ RequestFloorPlaneCopy()

std::future< bool > ZEDCam::RequestFloorPlaneCopy ( sl::Plane &  slPlane)
overridevirtual

Requests the current floor plane of the camera relative to its current pose. Puts a Plane pointer into a queue so a copy of the floor plane from the camera can be written to it. If positional tracking is not enabled, this method will return false and the sl::Plane may be uninitialized.

Parameters
slPlane- A reference to the sl::Plane object to copy the current camera floor plane to.
Returns
std::future<bool> - A future that should be waited on before the passed in sl::Plane is used. Value will be true if data was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-22

Reimplemented from ZEDCamera.

1369{
1370 // Acquire read lock.
1371 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1372 // Check if positional tracking has been enabled.
1373 if (m_slCamera.isPositionalTrackingEnabled())
1374 {
1375 // Release lock.
1376 lkCameraLock.unlock();
1377 // Assemble the data container.
1378 containers::DataFetchContainer<sl::Plane> stContainer(slPlane);
1379
1380 // Acquire lock on pose copy queue.
1381 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1382 // Append data fetch container to the schedule queue.
1383 m_qFloorCopySchedule.push(stContainer);
1384 // Release lock on the pose schedule queue.
1385 lkSchedulers.unlock();
1386
1387 // Check if pose queue toggle has already been set.
1388 if (!m_bFloorsQueued.load(std::memory_order_relaxed))
1389 {
1390 // Signify that the pose queue is not empty.
1391 m_bFloorsQueued.store(true, std::memory_order_relaxed);
1392 }
1393
1394 // Return the future from the promise stored in the container.
1395 return stContainer.pCopiedDataStatus->get_future();
1396 }
1397 else
1398 {
1399 // Release lock.
1400 lkCameraLock.unlock();
1401 // Submit logger message.
1402 LOG_WARNING(logging::g_qSharedLogger, "Attempted to get ZED floor plane but positional tracking is not enabled!");
1403
1404 // Create dummy promise to return the future.
1405 std::promise<bool> pmDummyPromise;
1406 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
1407 // Set future value.
1408 pmDummyPromise.set_value(false);
1409
1410 // Return unsuccessful.
1411 return fuDummyFuture;
1412 }
1413}

◆ RequestSensorsCopy()

std::future< bool > ZEDCam::RequestSensorsCopy ( sl::SensorsData &  slSensorsData)
overridevirtual

Requests the most up to date sensors data from the camera. This data includes IMU pose and raw values, barometer, magnetometer, and temperature data.

Parameters
slSensorsData- A reference to the sl::SensorsData struct to copy data into.
Returns
std::future<bool> - A future that should be waited on before the passed in sl::SensorsData is used. Value will be true if data was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-08-26

Reimplemented from ZEDCamera.

1427{
1428 // Assemble the data container.
1429 containers::DataFetchContainer<sl::SensorsData> stContainer(slSensorsData);
1430
1431 // Acquire lock on sensors copy queue.
1432 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1433 // Append sensors fetch container to the schedule queue.
1434 m_qSensorsCopySchedule.push(stContainer);
1435 // Release lock on the sensors schedule queue.
1436 lkSchedulers.unlock();
1437
1438 // Check if sensors queue toggle has already been set.
1439 if (!m_bSensorsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1440 {
1441 // Signify that the sensors queue is not empty.
1442 m_bSensorsQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1443 }
1444
1445 // Return the future from the promise stored in the container.
1446 return stContainer.pCopiedDataStatus->get_future();
1447}

◆ RequestObjectsCopy()

std::future< bool > ZEDCam::RequestObjectsCopy ( std::vector< sl::ObjectData > &  vObjectData)
overridevirtual

Requests a current copy of the tracked objects from the camera. Puts a pointer to a vector containing sl::ObjectData into a queue so a copy of a frame from the camera can be written to it.

Parameters
vObjectData- A vector that will have data copied to it containing sl::ObjectData objects.
Returns
std::future<bool> - A future that should be waited on before the passed in vector is used. Value will be true if data was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

1461{
1462 // Acquire read lock.
1463 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1464 // Check if object detection has been enabled.
1465 if (m_slCamera.isObjectDetectionEnabled())
1466 {
1467 // Release lock.
1468 lkCameraLock.unlock();
1469         // Assemble the data container.
1470         containers::DataFetchContainer<std::vector<sl::ObjectData>> stContainer(vObjectData);
1471
1472 // Acquire lock on object copy queue.
1473 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1474 // Append data fetch container to the schedule queue.
1475 m_qObjectDataCopySchedule.push(stContainer);
1476 // Release lock on the object schedule queue.
1477 lkSchedulers.unlock();
1478
1479 // Check if objects queue toggle has already been set.
1480 if (!m_bObjectsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1481 {
1482 // Signify that the objects queue is not empty.
1483 m_bObjectsQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1484 }
1485
1486 // Return the future from the promise stored in the container.
1487 return stContainer.pCopiedDataStatus->get_future();
1488 }
1489 else
1490 {
1491 // Release lock.
1492 lkCameraLock.unlock();
1493 // Submit logger message.
1494 LOG_WARNING(logging::g_qSharedLogger, "Attempted to get ZED object data but object detection/tracking is not enabled!");
1495
1496 // Create dummy promise to return the future.
1497 std::promise<bool> pmDummyPromise;
1498 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
1499 // Set future value.
1500 pmDummyPromise.set_value(false);
1501
1502 // Return unsuccessful.
1503 return fuDummyFuture;
1504 }
1505}

◆ RequestBatchedObjectsCopy()

std::future< bool > ZEDCam::RequestBatchedObjectsCopy ( std::vector< sl::ObjectsBatch > &  vBatchedObjectData)
overridevirtual

If batching is enabled, this requests the normal objects and passes them to the internal batching queue of the zed api. This performs short-term re-identification with deep learning and trajectory filtering. Batching must have been enabled when EnableObjectDetection() was called. Most of the time the vector will be empty and will be filled every ZED_OBJDETECTION_BATCH_LATENCY.

Parameters
vBatchedObjectData- A vector containing objects of sl::ObjectsBatch object that will have object data copied to.
Returns
std::future<bool> - A future that should be waited on before the passed in vector is used. Value will be true if data was successfully retrieved.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-30

Reimplemented from ZEDCamera.

1523{
1524 // Acquire read lock.
1525 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1526 // Check if object detection and batching has been enabled.
1527 if (m_slCamera.isObjectDetectionEnabled() && m_slObjectDetectionBatchParams.enable)
1528 {
1529 // Release lock.
1530 lkCameraLock.unlock();
1531 // Assemble the data container.
1532 containers::DataFetchContainer<std::vector<sl::ObjectsBatch>> stContainer(vBatchedObjectData);
1533
1534 // Acquire lock on batched object copy queue.
1535 std::unique_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
1536 // Append data fetch container to the schedule queue.
1537 m_qObjectBatchedDataCopySchedule.push(stContainer);
1538 // Release lock on the data schedule queue.
1539 lkSchedulers.unlock();
1540
1541 // Check if objects queue toggle has already been set.
1542 if (!m_bBatchedObjectsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
1543 {
1544 // Signify that the objects queue is not empty.
1545 m_bBatchedObjectsQueued.store(true, ATOMIC_MEMORY_ORDER_METHOD);
1546 }
1547
1548 // Return the future from the promise stored in the container.
1549 return stContainer.pCopiedDataStatus->get_future();
1550 }
1551 else
1552 {
1553 // Release lock.
1554 lkCameraLock.unlock();
1555 // Submit logger message.
1556 LOG_WARNING(logging::g_qSharedLogger, "Attempted to get ZED batched object data but object detection/tracking is not enabled!");
1557
1558 // Create dummy promise to return the future.
1559 std::promise<bool> pmDummyPromise;
1560 std::future<bool> fuDummyFuture = pmDummyPromise.get_future();
1561 // Set future value.
1562 pmDummyPromise.set_value(false);
1563
1564 // Return unsuccessful.
1565 return fuDummyFuture;
1566 }
1567}

◆ ResetPositionalTracking()

sl::ERROR_CODE ZEDCam::ResetPositionalTracking ( )
overridevirtual

Resets the cameras X,Y,Z translation and Roll,Pitch,Yaw orientation back to 0. THINK CAREFULLY! Do you actually want to reset this? It will also realign the coordinate system to whichever way the camera happens to be facing.

Returns
sl::ERROR_CODE - Status of the positional tracking reset.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

1580{
1581 // Create new translation to set position back to user given values.
1582 sl::Translation slZeroTranslation(0, 0, 0);
1583 // Update offset member variables.
1584 m_dPoseOffsetX = 0.0;
1585 m_dPoseOffsetY = 0.0;
1586 m_dPoseOffsetZ = 0.0;
1587 // This will reset position and coordinate frame.
1588 sl::Rotation slZeroRotation;
1589 slZeroRotation.setEulerAngles(sl::float3(0.0, 0.0, 0.0), false);
1590
1591 // Store new translation and rotation in a transform object.
1592 sl::Transform slZeroTransform(slZeroRotation, slZeroTranslation);
1593
1594 // Submit logger message.
1595 LOG_NOTICE(logging::g_qSharedLogger, "Resetting positional tracking for camera {} ({})!", sl::toString(m_slCameraModel).get(), m_unCameraSerialNumber);
1596
1597 // Acquire write lock.
1598 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
1599 // Reset the positional tracking location of the camera.
1600 return m_slCamera.resetPositionalTracking(slZeroTransform);
1601}
Here is the call graph for this function:

◆ TrackCustomBoxObjects()

sl::ERROR_CODE ZEDCam::TrackCustomBoxObjects ( std::vector< ZedObjectData > &  vCustomObjects)
overridevirtual

Ingests a vector of CustomBoxObjectData objects. These objects simply store information about your detected objects from an external object detection model. You will need to take your inference results and package them into sl::CustomBoxObjectData so that the ZEDSDK can properly interpret your detections.

Giving the bounding boxes of your detected objects to the ZEDSDK will enable positional tracking and velocity estimation for each object, even when it is not in view. The IDs of objects will also become persistent.

Parameters
vCustomObjects- A vector of sl::CustomBoxObjectData objects.
Returns
sl::ERROR_CODE - The return status of ingestion.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Reimplemented from ZEDCamera.

1620{
1621 // Create instance variables.
1622 std::vector<sl::CustomBoxObjectData> vCustomBoxData;
1623
1624 // Repack detection data into sl specific object.
1625     for (const ZedObjectData& stObjectData : vCustomObjects)
1626 {
1627 // Create new sl CustomBoxObjectData struct.
1628 sl::CustomBoxObjectData slCustomBox;
1629 std::vector<sl::uint2> vCorners;
1630
1631 // Assign simple attributes.
1632 slCustomBox.unique_object_id = sl::String(stObjectData.GetObjectUUID().c_str());
1633 slCustomBox.label = stObjectData.nClassNumber;
1634 slCustomBox.probability = stObjectData.fConfidence;
1635 slCustomBox.is_grounded = stObjectData.bObjectRemainsOnFloorPlane;
1636 // Repackage object corner data.
1637 vCorners.emplace_back(sl::uint2(stObjectData.cvBoundingBox.x, stObjectData.cvBoundingBox.y)); // Top-left corner
1638 vCorners.emplace_back(sl::uint2(stObjectData.cvBoundingBox.x + stObjectData.cvBoundingBox.width, stObjectData.cvBoundingBox.y)); // Top-right corner
1639 vCorners.emplace_back(sl::uint2(stObjectData.cvBoundingBox.x, stObjectData.cvBoundingBox.y + stObjectData.cvBoundingBox.height)); // Bottom-left corner
1640 vCorners.emplace_back(sl::uint2(stObjectData.cvBoundingBox.x + stObjectData.cvBoundingBox.width,
1641 stObjectData.cvBoundingBox.y + stObjectData.cvBoundingBox.height)); // Bottom-right corner
1642 slCustomBox.bounding_box_2d = vCorners;
1643
1644 // Append repackaged object to vector.
1645 vCustomBoxData.emplace_back(slCustomBox);
1646 }
1647
1648 // Acquire write lock.
1649 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
1650 // Give the custom box data to the zed api.
1651 sl::ERROR_CODE slReturnCode = m_slCamera.ingestCustomBoxObjects(vCustomBoxData);
1652 // Release lock.
1653 lkWriteCameraLock.unlock();
1654
1655     // Check if unsuccessful.
1656     if (slReturnCode != sl::ERROR_CODE::SUCCESS)
1657 {
1658 // Submit logger message.
1659 LOG_WARNING(logging::g_qSharedLogger,
1660 "Failed to ingest new objects for camera {} ({})! sl::ERROR_CODE is: {}",
1661 sl::toString(m_slCameraModel).get(),
1662 m_unCameraSerialNumber,
1663 sl::toString(slReturnCode).get());
1664 }
1665
1666 // Return error code.
1667 return slReturnCode;
1668}
Here is the call graph for this function:
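The corner repackaging above can be shown in isolation. The `Box` struct below is a hypothetical stand-in for `cv::Rect`, used only for illustration; the corner order matches the source listing: top-left, top-right, bottom-left, bottom-right.

```cpp
#include <array>
#include <utility>

// Minimal stand-in for cv::Rect: origin plus extent, in pixels.
struct Box { int x, y, width, height; };

// Expand a rectangle into the four corner points that the ZEDSDK's
// bounding_box_2d field expects, in the same order as the source above.
std::array<std::pair<unsigned, unsigned>, 4> BoxToCorners(const Box& stBox)
{
    return {{{static_cast<unsigned>(stBox.x), static_cast<unsigned>(stBox.y)},                                // Top-left
             {static_cast<unsigned>(stBox.x + stBox.width), static_cast<unsigned>(stBox.y)},                  // Top-right
             {static_cast<unsigned>(stBox.x), static_cast<unsigned>(stBox.y + stBox.height)},                 // Bottom-left
             {static_cast<unsigned>(stBox.x + stBox.width), static_cast<unsigned>(stBox.y + stBox.height)}}}; // Bottom-right
}
```

A detection at (10, 20) with size 100x50 therefore yields corners (10,20), (110,20), (10,70), (110,70).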

◆ RebootCamera()

sl::ERROR_CODE ZEDCam::RebootCamera ( )
overridevirtual

Performs a hardware reset of the ZED2 or ZED2i camera.

Returns
sl::ERROR_CODE - Whether or not the camera reboot was successful.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

1679{
1680 // Acquire write lock.
1681 std::unique_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1682 // Reboot this camera and return the status code.
1683 return sl::Camera::reboot(m_unCameraSerialNumber);
1684}

◆ SubscribeFusionToCameraUUID()

sl::FUSION_ERROR_CODE ZEDCam::SubscribeFusionToCameraUUID ( sl::CameraIdentifier &  slCameraUUID)
overridevirtual

Given the UUID of another ZEDCam, subscribe that camera to this camera's Fusion instance. This tells this camera's Fusion instance to start ingesting and fusing data from the other camera.

Parameters
slCameraUUID- The Camera unique identifier given by the other camera's PublishCameraToFusion() method.
Returns
sl::FUSION_ERROR_CODE - Whether or not the camera was successfully subscribed to the Fusion instance.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-26

Implements ZEDCamera.

1697{
1698 // Create instance variables.
1699 sl::FUSION_ERROR_CODE slReturnCode = sl::FUSION_ERROR_CODE::MODULE_NOT_ENABLED;
1700
1701 // Check if this camera is a fusion master.
1702 if (m_bCameraIsFusionMaster)
1703 {
1704 // Acquire write lock.
1705 std::unique_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
1706 // Subscribe this camera to fusion instance.
1707 slReturnCode = m_slFusionInstance.subscribe(slCameraUUID);
1708 // Release lock.
1709 lkFusionLock.unlock();
1710 }
1711
1712 // Check if this camera was successfully subscribed to the Fusion instance.
1713 if (slReturnCode == sl::FUSION_ERROR_CODE::SUCCESS)
1714 {
1715 // Submit logger message.
1716 LOG_DEBUG(logging::g_qSharedLogger,
"Subscribed stereo camera with serial number {} to Fusion instance run by stereo camera {} ({})!",
1718 slCameraUUID.sn,
1719 sl::toString(m_slCameraModel).get(),
1720 m_unCameraSerialNumber);
1721 }
1722 else
1723 {
1724 // Submit logger message.
1725 LOG_DEBUG(logging::g_qSharedLogger,
1726 "Unable to subscribe camera with serial number {} to FUSION instance for camera {} ({})! sl::FUSION_ERROR_CODE is: {}",
1727 slCameraUUID.sn,
1728 sl::toString(m_slCameraModel).get(),
1729 m_unCameraSerialNumber,
1730 sl::toString(slReturnCode).get());
1731 }
1732
1733 // Return the sl::FUSION_ERROR_CODE status.
1734 return slReturnCode;
1735}

◆ PublishCameraToFusion()

sl::CameraIdentifier ZEDCam::PublishCameraToFusion ( )
overridevirtual

Signal this camera to make its data available to the Fusion module and retrieve a UUID for this class's sl::Camera instance that can be used to subscribe an sl::Fusion instance to this camera later.

Returns
sl::CameraIdentifier - A globally unique identifier generated from this camera's serial number.
Note
Just calling this method does not send data to the ZEDSDK's fusion module. It just enables the capability. The camera acting as the fusion master instance must be subscribed to this camera using the SubscribeFusionToCameraUUID() method and the returned UUID.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-01-26

Implements ZEDCamera.

1752{
1753 // Acquire write lock.
1754 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
1755 // Make this camera's data available to the Fusion module if it is later subscribed.
1756 m_slCamera.startPublishing();
1757 // Release lock.
1758 lkWriteCameraLock.unlock();
1759
1760 // Return a UUID for this camera. This is used by the camera running the master fusion instance to subscribe to the data being published.
1761 return sl::CameraIdentifier(m_unCameraSerialNumber);
1762}

◆ IngestGPSDataToFusion()

sl::FUSION_ERROR_CODE ZEDCam::IngestGPSDataToFusion ( geoops::GPSCoordinate  stNewGPSLocation)
overridevirtual

If this camera is the fusion instance master, this method can be used to ingest/process/fuse the current GNSS position of the camera with the ZEDSDK positional tracking. This allows the use of RequestGeoPose to get a high resolution and highly accurate GNSS/VIO camera pose.

Parameters
stNewGPSLocation- The current GPS location; this must be up-to-date, i.e. the realtime position of the camera in the GPS space.
Returns
sl::FUSION_ERROR_CODE - Return status from the ZEDSDK Fusion module.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-04-15

Reimplemented from ZEDCamera.

1776{
1777 // Create instance variables.
1778 sl::FUSION_ERROR_CODE slReturnCode = sl::FUSION_ERROR_CODE::FAILURE;
1779
1780 // Check if this camera is the fusion master.
1781 if (m_bCameraIsFusionMaster)
1782 {
1783 // Acquire read lock.
1784 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1785 // Check if fusion positional tracking is enabled.
1786 if (constants::FUSION_ENABLE_GNSS_FUSION && m_slCamera.isPositionalTrackingEnabled())
1787 {
1788 // Release lock.
1789 lkCameraLock.unlock();
1790
1791 // Check if the GPS data from the NavBoard is new (differs from the last ingested position).
1792 if (stNewGPSLocation.dLatitude != m_stCurrentGPSBasedPosition.dLatitude && stNewGPSLocation.dLongitude != m_stCurrentGPSBasedPosition.dLongitude &&
1793 stNewGPSLocation.d2DAccuracy != m_stCurrentGPSBasedPosition.d2DAccuracy && stNewGPSLocation.d3DAccuracy != m_stCurrentGPSBasedPosition.d3DAccuracy)
1794 {
1795 // Repack GPS data into an sl::GNSSData object.
1796 sl::GNSSData slGNSSData = sl::GNSSData();
1797 slGNSSData.setCoordinates(stNewGPSLocation.dLatitude, stNewGPSLocation.dLongitude, stNewGPSLocation.dAltitude, false);
1798 // Position covariance matrix expects millimeters.
1799 double dHorizontalAccuracy = stNewGPSLocation.d2DAccuracy * 1000;
1800 double dVerticalAccuracy = stNewGPSLocation.d3DAccuracy * 1000;
1801 // Calculate the covariance matrix from the 2D and 3D accuracies.
1802 slGNSSData.position_covariance = {dHorizontalAccuracy * dHorizontalAccuracy,
1803 0.0,
1804 0.0,
1805 0.0,
1806 dHorizontalAccuracy * dHorizontalAccuracy,
1807 0.0,
1808 0.0,
1809 0.0,
1810 dVerticalAccuracy * dVerticalAccuracy};
1811
1812 // Set GNSS timestamp from input GPSCoordinate data. sl::GNSSData expects time since epoch.
1813 slGNSSData.ts = sl::Timestamp(std::chrono::time_point_cast<std::chrono::nanoseconds>(stNewGPSLocation.tmTimestamp).time_since_epoch().count());
1814
1815 // Get the GNSS fix type status from the given GPS coordinate.
1816 switch (stNewGPSLocation.eCoordinateAccuracyFixType)
1817 {
1818 case geoops::PositionFixType::eNoFix:
1819 {
1820 slGNSSData.gnss_status = sl::GNSS_STATUS::SINGLE;
1821 slGNSSData.gnss_mode = sl::GNSS_MODE::NO_FIX;
1822 break;
1823 }
1824 case geoops::PositionFixType::eDeadReckoning:
1825 {
1826 slGNSSData.gnss_status = sl::GNSS_STATUS::PPS;
1827 slGNSSData.gnss_mode = sl::GNSS_MODE::NO_FIX;
1828 break;
1829 }
1830 case geoops::PositionFixType::eFix2D:
1831 {
1832 slGNSSData.gnss_status = sl::GNSS_STATUS::PPS;
1833 slGNSSData.gnss_mode = sl::GNSS_MODE::FIX_2D;
1834 break;
1835 }
1836 case geoops::PositionFixType::eFix3D:
1837 {
1838 slGNSSData.gnss_status = sl::GNSS_STATUS::RTK_FIX;
1839 slGNSSData.gnss_mode = sl::GNSS_MODE::FIX_3D;
1840 break;
1841 }
1842 case geoops::PositionFixType::eGNSSDeadReckoningCombined:
1843 {
1844 slGNSSData.gnss_status = sl::GNSS_STATUS::RTK_FIX;
1845 slGNSSData.gnss_mode = sl::GNSS_MODE::FIX_3D;
1846 break;
1847 }
1848 case geoops::PositionFixType::eTimeOnly:
1849 {
1850 slGNSSData.gnss_status = sl::GNSS_STATUS::RTK_FIX;
1851 slGNSSData.gnss_mode = sl::GNSS_MODE::FIX_3D;
1852 break;
1853 }
1854 default:
1855 {
1856 slGNSSData.gnss_status = sl::GNSS_STATUS::RTK_FIX;
1857 slGNSSData.gnss_mode = sl::GNSS_MODE::FIX_3D;
1858 break;
1859 }
1860 }
1861 // Check if fix is based off of differential GPS calculations.
1862 if (stNewGPSLocation.bIsDifferential)
1863 {
1864 slGNSSData.gnss_status = sl::GNSS_STATUS::DGNSS;
1865 }
1866
1867 // Acquire write lock.
1868 std::unique_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
1869 // Publish GNSS data to fusion from the NavBoard.
1870 slReturnCode = m_slFusionInstance.ingestGNSSData(slGNSSData);
1871 // Release lock.
1872 lkFusionLock.unlock();
1873 // Check if the GNSS data was successfully ingested by the Fusion instance.
1874 if (slReturnCode != sl::FUSION_ERROR_CODE::SUCCESS)
1875 {
1876 // Covariance error.
1877 if (slReturnCode == sl::FUSION_ERROR_CODE::GNSS_DATA_COVARIANCE_MUST_VARY || slReturnCode == sl::FUSION_ERROR_CODE::INVALID_COVARIANCE)
1878 {
1879 // Submit logger message.
1880 LOG_WARNING(logging::g_qSharedLogger,
1881 "Unable to ingest fusion GNSS data for camera {} ({})! sl::Fusion positional tracking may be inaccurate! sl::FUSION_ERROR_CODE "
1882 "is: {}({}). Current accuracy data is 2D: {}, 3D {}.",
1883 sl::toString(m_slCameraModel).get(),
1884 m_unCameraSerialNumber,
1885 sl::toString(slReturnCode).get(),
1886 static_cast<int>(slReturnCode),
1887 stNewGPSLocation.d2DAccuracy,
1888 stNewGPSLocation.d3DAccuracy);
1889 }
1890 else if (slReturnCode != sl::FUSION_ERROR_CODE::NO_NEW_DATA_AVAILABLE)
1891 {
1892 // Submit logger message.
1893 LOG_WARNING(logging::g_qSharedLogger,
1894 "Unable to ingest fusion GNSS data for camera {} ({})! sl::Fusion positional tracking may be inaccurate! sl::FUSION_ERROR_CODE "
1895 "is: {}({})",
1896 sl::toString(m_slCameraModel).get(),
1897 m_unCameraSerialNumber,
1898 sl::toString(slReturnCode).get(),
1899 static_cast<int>(slReturnCode));
1900 }
1901 }
1902 else
1903 {
1904 // Acquire write lock.
1905 std::shared_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
1906 // Get the current status of the fusion positional tracking.
1907 sl::FusedPositionalTrackingStatus slFusionPoseTrackStatus = m_slFusionInstance.getFusedPositionalTrackingStatus();
1908 // Release lock.
1909 lkFusionLock.unlock();
1910 // Submit logger message. DEBUG log the current fused position tracking state.
1911 LOG_DEBUG(logging::g_qSharedLogger,
1912 "PoseTrack Fusion Status: {}, GNSS Fusion Status: {}, VIO SpatialMemory Status: {}",
1913 sl::toString(slFusionPoseTrackStatus.tracking_fusion_status).get(),
1914 sl::toString(slFusionPoseTrackStatus.gnss_fusion_status).get(),
1915 sl::toString(slFusionPoseTrackStatus.spatial_memory_status).get());
1916 }
1917
1918 // Update current GPS position.
1919 m_stCurrentGPSBasedPosition = stNewGPSLocation;
1920 }
1921 }
1922 else
1923 {
1924 // Release lock.
1925 lkCameraLock.unlock();
1926 // Submit logger message.
1927 LOG_ERROR(logging::g_qSharedLogger,
1928 "Cannot ingest GNSS data because camera {} ({}) does not have positional tracking enabled or constants::FUSION_ENABLE_GNSS_FUSION is false!",
1929 sl::toString(m_slCameraModel).get(),
1930 m_unCameraSerialNumber);
1931 }
1932 }
1933 else
1934 {
1935 // Submit logger message.
1936 LOG_WARNING(logging::g_qSharedLogger,
1937 "Cannot ingest GNSS data because camera {} ({}) is not an sl::Fusion master!",
1938 sl::toString(m_slCameraModel).get(),
1939 m_unCameraSerialNumber);
1940 }
1941
1942 return slReturnCode;
1943}

◆ EnablePositionalTracking()

sl::ERROR_CODE ZEDCam::EnablePositionalTracking ( const float  fExpectedCameraHeightFromFloorTolerance = constants::ZED_DEFAULT_FLOOR_PLANE_ERROR)
overridevirtual

Enable the positional tracking functionality of the camera.

Parameters
fExpectedCameraHeightFromFloorTolerance- The expected height of the camera from the floor. This aids with floor plane detection.
Returns
sl::ERROR_CODE - Whether or not positional tracking was successfully enabled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-22

Implements ZEDCamera.

1956{
1957 // Assign member variable.
1958 m_fExpectedCameraHeightFromFloorTolerance = fExpectedCameraHeightFromFloorTolerance;
1959
1960 // Acquire write lock.
1961 std::unique_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
1962 // Enable pose tracking and store return code.
1963 sl::ERROR_CODE slReturnCode = m_slCamera.enablePositionalTracking(m_slPoseTrackingParams);
1964 // Release lock.
1965 lkCameraLock.unlock();
1966
1967 // Check if positional tracking was enabled properly.
1968 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
1969 {
1970 // Submit logger message.
1971 LOG_ERROR(logging::g_qSharedLogger,
1972 "Failed to enable positional tracking for camera {} ({})! sl::ERROR_CODE is: {}",
1973 sl::toString(m_slCameraModel).get(),
1974 m_unCameraSerialNumber,
1975 sl::toString(slReturnCode).get());
1976 }
1977 // Check if fusion positional tracking should be enabled for this camera.
1978 else if (m_bCameraIsFusionMaster)
1979 {
1980 // Acquire write lock.
1981 std::unique_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
1982 // Enable fusion positional tracking.
1983 sl::FUSION_ERROR_CODE slFusionReturnCode = m_slFusionInstance.enablePositionalTracking(m_slFusionPoseTrackingParams);
1984 // Release lock.
1985 lkFusionLock.unlock();
1986
1987 // Check if the fusion positional tracking was enabled successfully.
1988 if (slFusionReturnCode != sl::FUSION_ERROR_CODE::SUCCESS)
1989 {
1990 // Submit logger message.
1991 LOG_ERROR(logging::g_qSharedLogger,
1992 "Failed to enable fusion positional tracking for camera {} ({})! sl::FUSION_ERROR_CODE is: {}",
1993 sl::toString(m_slCameraModel).get(),
1994 m_unCameraSerialNumber,
1995 sl::toString(slFusionReturnCode).get());
1996 }
1997 }
1998
1999 // Return error code.
2000 return slReturnCode;
2001}

◆ DisablePositionalTracking()

void ZEDCam::DisablePositionalTracking ( )
overridevirtual

Disable the positional tracking functionality of the camera.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-26

Implements ZEDCamera.

2011{
2012 // Check if fusion positional tracking should be disabled for this camera.
2013 if (m_bCameraIsFusionMaster)
2014 {
2015 // Acquire write lock.
2016 std::unique_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
2017 // Disable fusion positional tracking.
2018 m_slFusionInstance.disablePositionalTracking();
2019 // Release lock.
2020 lkFusionLock.unlock();
2021 }
2022 // Acquire write lock.
2023 std::unique_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2024 // Disable pose tracking.
2025 m_slCamera.disablePositionalTracking();
2026}

◆ SetPositionalPose()

void ZEDCam::SetPositionalPose ( const double  dX,
const double  dY,
const double  dZ,
const double  dXO,
const double  dYO,
const double  dZO 
)
overridevirtual

Sets the pose of the positional tracking of the camera. XYZ will point in their respective directions according to ZED_COORD_SYSTEM defined in AutonomyConstants.h.

Warning: This method is slow and should not be called in a loop. Setting the pose will temporarily block the entire camera from grabbing or copying frames to new threads. This method should only be called occasionally when absolutely needed.

Parameters
dX- The X position of the camera in ZED_MEASURE_UNITS.
dY- The Y position of the camera in ZED_MEASURE_UNITS.
dZ- The Z position of the camera in ZED_MEASURE_UNITS.
dXO- The tilt of the camera around the X axis in degrees. (0-360)
dYO- The tilt of the camera around the Y axis in degrees. (0-360)
dZO- The tilt of the camera around the Z axis in degrees. (0-360)
Returns
sl::ERROR_CODE - Whether or not the pose was set successfully.
Bug:
The ZEDSDK currently cannot handle resetting the positional pose with large translational (dX, dY, dZ) values without breaking positional tracking. This is because the values are floats and not doubles. To work around this, I (claytonraycowen@gmail.com) have decided to handle the translation offsets internally. So when SetPositionalPose() is called, it assigns the dX, dY, and dZ values to private member variables of this class, and the offsets are then added to the pose in the PooledLinearCode() under the pose requests section. If StereoLabs fixes this in the future, I will go back to using sl::Translation to reset positional tracking.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Implements ZEDCamera.

2055{
2056 // Update offset member variables.
2057 m_dPoseOffsetX = dX - m_slCameraPose.getTranslation().x;
2058 m_dPoseOffsetY = dY - m_slCameraPose.getTranslation().y;
2059 m_dPoseOffsetZ = dZ - m_slCameraPose.getTranslation().z;
2060 // Find the angular distance from current and desired pose angles. This is complicated because zed uses different angle ranges.
2061 m_dPoseOffsetXO =
2062 numops::InputAngleModulus(numops::AngularDifference(numops::InputAngleModulus<double>(m_slCameraPose.getEulerAngles(false).x, 0.0, 360.0), dXO), 0.0, 360.0);
2063 m_dPoseOffsetYO =
2064 numops::InputAngleModulus(numops::AngularDifference(numops::InputAngleModulus<double>(m_slCameraPose.getEulerAngles(false).y, 0.0, 360.0), dYO), 0.0, 360.0);
2065 m_dPoseOffsetZO =
2066 numops::InputAngleModulus(numops::AngularDifference(numops::InputAngleModulus<double>(m_slCameraPose.getEulerAngles(false).z, 0.0, 360.0), dZO), 0.0, 360.0);
2067}
constexpr T InputAngleModulus(T tValue, T tMinValue, T tMaxValue)
Calculates the modulus of an input angle.
Definition NumberOperations.hpp:165
constexpr T AngularDifference(T tFirstValue, T tSecondValue)
Calculates the distance in degrees between two angles. This function accounts for wrap around so that...
Definition NumberOperations.hpp:204

◆ EnableSpatialMapping()

sl::ERROR_CODE ZEDCam::EnableSpatialMapping ( )
overridevirtual

Enables the spatial mapping feature of the camera. Pose tracking will be enabled if it is not already.

Returns
sl::ERROR_CODE - Whether or not spatial mapping was successfully enabled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2079{
2080 // Create instance variables.
2081 sl::Pose slCameraPose;
2082 sl::ERROR_CODE slReturnCode = sl::ERROR_CODE::SUCCESS;
2083
2084 // Acquire read lock.
2085 std::shared_lock<std::shared_mutex> lkReadCameraLock(m_muCameraMutex);
2086 // Check if positional tracking is enabled.
2087 if (!m_slCamera.isPositionalTrackingEnabled())
2088 {
2089 // Release lock.
2090 lkReadCameraLock.unlock();
2091 // Enable positional tracking.
2092 slReturnCode = this->EnablePositionalTracking();
2093 }
2094 else
2095 {
2096 // Release lock.
2097 lkReadCameraLock.unlock();
2098 }
2099
2100 // Check if positional tracking is or was enabled successfully.
2101 if (slReturnCode == sl::ERROR_CODE::SUCCESS)
2102 {
2103 // Acquire write lock.
2104 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
2105 // Call camera grab function once to ensure the camera is initialized with data.
2106 m_slCamera.grab(m_slRuntimeParams);
2107 // Enable spatial mapping.
2108 slReturnCode = m_slCamera.enableSpatialMapping(m_slSpatialMappingParams);
2109 // Release lock.
2110 lkWriteCameraLock.unlock();
2111
2112 // Check if spatial mapping was enabled properly.
2113 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
2114 {
2115 // Submit logger message.
2116 LOG_ERROR(logging::g_qSharedLogger,
2117 "Failed to enable spatial mapping for camera {} ({})! sl::ERROR_CODE is: {}",
2118 sl::toString(m_slCameraModel).get(),
2119 m_unCameraSerialNumber,
2120 sl::toString(slReturnCode).get());
2121 }
2122 }
2123 else
2124 {
2125 // Submit logger message.
2126 LOG_ERROR(logging::g_qSharedLogger,
2127 "Failed to enable spatial mapping for camera {} ({}) because positional tracking could not be enabled! sl::ERROR_CODE is: {}",
2128 sl::toString(m_slCameraModel).get(),
2129 m_unCameraSerialNumber,
2130 sl::toString(slReturnCode).get());
2131 }
2132
2133 // Return error code.
2134 return slReturnCode;
2135}
sl::ERROR_CODE EnablePositionalTracking(const float fExpectedCameraHeightFromFloorTolerance=constants::ZED_DEFAULT_FLOOR_PLANE_ERROR) override
Enable the positional tracking functionality of the camera.
Definition ZEDCam.cpp:1955

◆ DisableSpatialMapping()

void ZEDCam::DisableSpatialMapping ( )
overridevirtual

Disables the spatial mapping feature of the camera.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2145{
2146 // Acquire write lock.
2147 std::unique_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2148 // Disable spatial mapping.
2149 m_slCamera.disableSpatialMapping();
2150}

◆ EnableObjectDetection()

sl::ERROR_CODE ZEDCam::EnableObjectDetection ( const bool  bEnableBatching = false)
overridevirtual

Enables the object detection and tracking feature of the camera.

Returns
sl::ERROR_CODE - Whether or not object detection/tracking was successfully enabled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2161{
2162 // Check if batching should be turned on.
2163 m_slObjectDetectionBatchParams.enable = bEnableBatching;
2164 // Give batch params to detection params.
2165 m_slObjectDetectionParams.batch_parameters = m_slObjectDetectionBatchParams;
2166
2167 // Acquire write lock.
2168 std::unique_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2169 // Enable object detection.
2170 sl::ERROR_CODE slReturnCode = m_slCamera.enableObjectDetection(m_slObjectDetectionParams);
2171 // Release lock.
2172 lkCameraLock.unlock();
2173
2174 // Check if object detection was enabled properly.
2175 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
2176 {
2177 // Submit logger message.
2178 LOG_ERROR(logging::g_qSharedLogger,
2179 "Failed to enable object detection for camera {} ({})! sl::ERROR_CODE is: {}",
2180 sl::toString(m_slCameraModel).get(),
2181 m_unCameraSerialNumber,
2182 sl::toString(slReturnCode).get());
2183 }
2184
2185 // Return error code.
2186 return slReturnCode;
2187}

◆ DisableObjectDetection()

void ZEDCam::DisableObjectDetection ( )
overridevirtual

Disables the object detection and tracking feature of the camera.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2197{
2198 // Acquire write lock.
2199 std::unique_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2200 // Disable object detection and tracking.
2201 m_slCamera.disableObjectDetection();
2202}

◆ GetCameraIsOpen()

bool ZEDCam::GetCameraIsOpen ( )
overridevirtual

Accessor for the current status of the camera.

Returns
true - Camera is currently opened and functional.
false - Camera is not opened and/or connected.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Implements Camera< cv::Mat >.

2214{
2215 // Acquire read lock.
2216 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2217 return this->GetThreadState() == AutonomyThreadState::eRunning && m_slCamera.isOpened();
2218}
AutonomyThreadState GetThreadState() const
Accessor for the Threads State private member.
Definition AutonomyThread.hpp:214

◆ GetUsingGPUMem()

bool ZEDCam::GetUsingGPUMem ( ) const
overridevirtual

Accessor for whether this ZED is storing its frames in GPU memory.

Returns
true - Using GPU memory for mats.
false - Using CPU memory for mats.
Author
ClayJay3 (claytonraycowen@gmail.com)
Date
2023-09-09

Reimplemented from ZEDCamera.

2230{
2231 // Check if we are using GPU memory.
2232 return m_slMemoryType == sl::MEM::GPU;
2233}

◆ GetCameraModel()

std::string ZEDCam::GetCameraModel ( )
overridevirtual

Accessor for the camera model. Converts the ZEDSDK model enum to a string.

Returns
std::string - The model of the zed camera. Possible values: ZED, ZED_MINI, ZED_2, ZED_2i, ZED_X, ZED_X_MINI, UNDEFINED_UNKNOWN
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Implements ZEDCamera.

2245{
2246 // Acquire read lock.
2247 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2248 // Check if the camera is opened.
2249 if (m_slCamera.isOpened())
2250 {
2251 // Release lock.
2252 lkCameraLock.unlock();
2253 // Convert camera model to a string and return.
2254 return sl::toString(m_slCameraModel).get();
2255 }
2256 else
2257 {
2258 // Release lock.
2259 lkCameraLock.unlock();
2260 // Return the model string to show camera isn't opened.
2261 return "NOT_OPENED";
2262 }
2263}

◆ GetCameraSerial()

unsigned int ZEDCam::GetCameraSerial ( )
overridevirtual

Accessor for the camera's serial number.

Returns
unsigned int - The serial number of the camera.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2274{
2275 // Return the camera's serial number.
2276 return m_unCameraSerialNumber;
2277}

◆ GetPositionalTrackingEnabled()

bool ZEDCam::GetPositionalTrackingEnabled ( )
overridevirtual

Accessor for if the positional tracking functionality of the camera has been enabled and functioning.

Returns
true - Positional tracking is enabled.
false - Positional tracking is not enabled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Implements ZEDCamera.

2290{
2291 // Acquire read lock.
2292 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2293 return m_slCamera.isPositionalTrackingEnabled() && m_slCamera.getPositionalTrackingStatus().odometry_status == sl::ODOMETRY_STATUS::OK;
2294}

◆ GetPositionalTrackingState()

sl::PositionalTrackingStatus ZEDCam::GetPositionalTrackingState ( )
overridevirtual

Accessor for the current positional tracking status of the camera.

Returns
sl::PositionalTrackingStatus - The sl::PositionalTrackingStatus struct storing information about the current VIO positional tracking state.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-04-20

Reimplemented from ZEDCamera.

2306{
2307 // Acquire read lock.
2308 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2309 return m_slCamera.getPositionalTrackingStatus();
2310}

◆ GetFusedPositionalTrackingState()

sl::FusedPositionalTrackingStatus ZEDCam::GetFusedPositionalTrackingState ( )
overridevirtual

Accessor for the current positional tracking status of the fusion instance.

Returns
sl::FusedPositionalTrackingStatus - The sl::FusedPositionalTrackingStatus struct storing information about the current VIO and GNSS fusion positional tracking state.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-04-22

Reimplemented from ZEDCamera.

2322{
2323 // Acquire read lock.
2324 std::shared_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
2325 return m_slFusionInstance.getFusedPositionalTrackingStatus();
2326}

◆ GetSpatialMappingState()

sl::SPATIAL_MAPPING_STATE ZEDCam::GetSpatialMappingState ( )
overridevirtual

Accessor for the current state of the camera's spatial mapping feature.

Returns
sl::SPATIAL_MAPPING_STATE - The enum value of the spatial mapping state.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2337{
2338 // Acquire read lock.
2339 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2340 // Return the current spatial mapping state of the camera.
2341 return m_slCamera.getSpatialMappingState();
2342}

◆ ExtractSpatialMapAsync()

sl::SPATIAL_MAPPING_STATE ZEDCam::ExtractSpatialMapAsync ( std::future< sl::Mesh > &  fuMeshFuture)
overridevirtual

Retrieve the built spatial map from the camera. Spatial mapping must be enabled. This method takes in a std::future<sl::Mesh> to eventually store the map in. It returns an enum code representing whether building the map was successfully scheduled. Any code other than SPATIAL_MAPPING_STATE::OK means the future will never be filled.

Parameters
fuMeshFuture- The std::future<sl::Mesh> to eventually store the map in.
Returns
sl::SPATIAL_MAPPING_STATE - Whether or not the building of the map was successfully scheduled. Anything other than OK means the future will never be filled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2358{
2359 // Acquire read lock.
2360 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2361 // Get and store current state of spatial mapping.
2362 sl::SPATIAL_MAPPING_STATE slReturnState = m_slCamera.getSpatialMappingState();
2363
2364 // Check if spatial mapping has been enabled and ready
2365 if (slReturnState == sl::SPATIAL_MAPPING_STATE::OK)
2366 {
2367 // Request that the ZEDSDK begin processing the spatial map for export.
2368 m_slCamera.requestSpatialMapAsync();
2369
2370 // Start an async thread to wait for spatial map processing to finish. Return resultant future object.
2371 fuMeshFuture = std::async(std::launch::async,
2372 [this]()
2373 {
2374 // Create instance variables.
2375 sl::Mesh slSpatialMap;
2376
2377 // Loop until map is finished generating.
2378 while (m_slCamera.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::FAILURE)
2379 {
2380 // Sleep for 10ms.
2381 std::this_thread::sleep_for(std::chrono::milliseconds(10));
2382 }
2383
2384 // Check if the spatial map was exported successfully.
2385 if (m_slCamera.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::SUCCESS)
2386 {
2387 // Get and store the spatial map.
2388 m_slCamera.retrieveSpatialMapAsync(slSpatialMap);
2389
2390 // Return spatial map.
2391 return slSpatialMap;
2392 }
2393 else
2394 {
2395 // Submit logger message.
2396 LOG_ERROR(logging::g_qSharedLogger,
2397 "Failed to extract ZED spatial map. sl::ERROR_CODE is: {}",
2398 sl::toString(m_slCamera.getSpatialMapRequestStatusAsync()).get());
2399
2400 // Return an empty mesh.
2401 return sl::Mesh();
2402 }
2403 });
2404 }
2405 else
2406 {
2407 // Submit logger message.
2408 LOG_WARNING(logging::g_qSharedLogger, "ZED spatial mapping was never enabled, can't extract spatial map!");
2409 }
2410
2411 // Return current spatial mapping state.
2412 return slReturnState;
2413}

◆ GetObjectDetectionEnabled()

bool ZEDCam::GetObjectDetectionEnabled ( )
overridevirtual

Accessor for whether the camera's object detection and tracking feature is enabled.

Returns
true - Object detection and tracking is enabled.
false - Object detection and tracking is not enabled.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-08-27

Reimplemented from ZEDCamera.

2425{
2426 // Acquire read lock.
2427 std::shared_lock<std::shared_mutex> lkCameraLock(m_muCameraMutex);
2428 return m_slCamera.isObjectDetectionEnabled();
2429}

◆ ThreadedContinuousCode()

void ZEDCam::ThreadedContinuousCode ( )
overrideprivatevirtual

The code inside this private method runs in a separate thread, but still has access to this*. This method continuously calls the grab() function of the ZEDSDK, which updates all frames (RGB, depth, cloud) and all other data such as positional and spatial mapping. Then a thread pool is started and joined once per iteration to mass copy the frames and/or measures to any other threads waiting in the queues.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-09-01

Implements AutonomyThread< void >.

290{
291 // Acquire read lock for camera object.
292 std::shared_lock<std::shared_mutex> lkReadCameraLock(m_muCameraMutex);
293 // Check if camera is opened.
294 if (!m_slCamera.isOpened())
295 {
296 // Release lock.
297 lkReadCameraLock.unlock();
298
299 // If this is the first iteration of the thread the camera probably isn't present so stop thread to save resources.
300 if (this->GetThreadState() == AutonomyThreadState::eStarting)
301 {
302 // Shutdown threads for this ZEDCam.
303 this->RequestStop();
304 // Submit logger message.
305 LOG_CRITICAL(logging::g_qSharedLogger,
306 "Camera start was attempted for ZED camera with serial number {}, but camera never properly opened or it has been closed/rebooted!",
307 m_unCameraSerialNumber);
308 }
309 else
310 {
311 std::chrono::time_point tmCurrentTime = std::chrono::system_clock::now();
312 // Convert time point to seconds since epoch
313 int nTimeSinceEpoch = std::chrono::duration_cast<std::chrono::seconds>(tmCurrentTime.time_since_epoch()).count();
314
315 // Only try to reopen camera every 5 seconds.
316 if (nTimeSinceEpoch % 5 == 0 && !m_bCameraReopenAlreadyChecked)
317 {
318 // Acquire write lock for camera object.
319 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
320 // Attempt to reopen camera.
321 sl::ERROR_CODE slReturnCode = m_slCamera.open(m_slCameraParams);
322 // Release lock.
323 lkWriteCameraLock.unlock();
324
325 // Check if camera was reopened.
326 if (slReturnCode == sl::ERROR_CODE::SUCCESS)
327 {
328 // Submit logger message.
329 LOG_INFO(logging::g_qSharedLogger, "ZED stereo camera with serial number {} has been reconnected and reopened!", m_unCameraSerialNumber);
330
331 // Check if this camera is a fusion master.
332 if (m_bCameraIsFusionMaster)
333 {
334 // Acquire write lock for camera object.
335 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
336 // Enable odometry publishing for this ZED camera.
337 m_slCamera.startPublishing();
338 // Release lock.
339 lkWriteCameraLock.unlock();
340 }
341
342 // Check if positional tracking was enabled.
343 if (!m_slCamera.isPositionalTrackingEnabled())
344 {
345 slReturnCode = this->EnablePositionalTracking();
346 }
347 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
348 {
349 // Submit logger message.
350 LOG_ERROR(logging::g_qSharedLogger,
351 "After reopening ZED stereo camera with serial number {}, positional tracking failed to reinitialize. sl::ERROR_CODE is: {}",
352 m_unCameraSerialNumber,
353 sl::toString(slReturnCode).get());
354 }
355 // Check if spatial mapping was enabled.
356 if (m_slCamera.getSpatialMappingState() != sl::SPATIAL_MAPPING_STATE::OK)
357 {
358 slReturnCode = this->EnableSpatialMapping();
359 }
360 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
361 {
362 // Submit logger message.
363 LOG_ERROR(logging::g_qSharedLogger,
364 "After reopening ZED stereo camera with serial number {}, spatial mapping failed to reinitialize. sl::ERROR_CODE is: {}",
365 m_unCameraSerialNumber,
366 sl::toString(slReturnCode).get());
367 }
368 // Check if object detection was enabled.
369 if (!m_slCamera.isObjectDetectionEnabled())
370 {
371 slReturnCode = this->EnableObjectDetection();
372 }
373 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
374 {
375 // Submit logger message.
376 LOG_ERROR(logging::g_qSharedLogger,
377 "After reopening ZED stereo camera with serial number {}, object detection failed to reinitialize. sl::ERROR_CODE is: {}",
378 m_unCameraSerialNumber,
379 sl::toString(slReturnCode).get());
380 }
381 }
382 else
383 {
384 // Submit logger message.
385 LOG_WARNING(logging::g_qSharedLogger,
386 "Attempt to reopen ZED stereo camera with serial number {} has failed! Trying again in 5 seconds...",
387 m_unCameraSerialNumber);
388 }
389
390 // Set toggle.
391 m_bCameraReopenAlreadyChecked = true;
392 }
393 else if (nTimeSinceEpoch % 5 != 0)
394 {
395 // Reset toggle.
396 m_bCameraReopenAlreadyChecked = false;
397 }
398 }
399 }
400 else
401 {
402 // Release lock.
403 lkReadCameraLock.unlock();
404 // Acquire write lock for camera object.
405 std::unique_lock<std::shared_mutex> lkWriteCameraLock(m_muCameraMutex);
406 // Call generalized update method of zed api.
407 sl::ERROR_CODE slReturnCode = m_slCamera.grab(m_slRuntimeParams);
408 // Release camera lock.
409 lkWriteCameraLock.unlock();
410
411 // Check if this camera is the fusion master instance. Feed data to sl::Fusion.
412 if (m_bCameraIsFusionMaster)
413 {
414 // Acquire write lock.
415 std::unique_lock<std::shared_mutex> lkFusionLock(m_muFusionMutex);
416 // Call generalized process method of Fusion instance.
417 sl::FUSION_ERROR_CODE slFusionReturnCode = m_slFusionInstance.process();
418 // Release lock.
419 lkFusionLock.unlock();
420
421 // Check if fusion data was processed correctly.
422 if (slFusionReturnCode != sl::FUSION_ERROR_CODE::SUCCESS && slFusionReturnCode != sl::FUSION_ERROR_CODE::NO_NEW_DATA_AVAILABLE)
423 {
424 // Submit logger message.
425 LOG_WARNING(logging::g_qSharedLogger,
426 "Unable to process fusion data for camera {} ({})! sl::FUSION_ERROR_CODE is: {}",
427 sl::toString(m_slCameraModel).get(),
428 m_unCameraSerialNumber,
429 sl::toString(slFusionReturnCode).get());
430 }
431 }
432
433 // Check if new frame was computed successfully.
434 if (slReturnCode == sl::ERROR_CODE::SUCCESS)
435 {
436 // Check if normal frames have been requested.
437 if (m_bNormalFramesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
438 {
439 // Grab regular image and store it in member variable.
440 slReturnCode = m_slCamera.retrieveImage(m_slFrame, constants::ZED_RETRIEVE_VIEW, m_slMemoryType, sl::Resolution(m_nPropResolutionX, m_nPropResolutionY));
441 // Check that the regular frame was retrieved successfully.
442 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
443 {
444 // Submit logger message.
445 LOG_WARNING(logging::g_qSharedLogger,
446 "Unable to retrieve new frame image for stereo camera {} ({})! sl::ERROR_CODE is: {}",
447 sl::toString(m_slCameraModel).get(),
448 m_unCameraSerialNumber,
449 sl::toString(slReturnCode).get());
450 }
451 }
452
453 // Check if depth frames have been requested.
454 if (m_bDepthFramesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
455 {
456 // Grab depth measure and store it in member variable.
457 slReturnCode = m_slCamera.retrieveMeasure(m_slDepthMeasure, m_slDepthMeasureType, m_slMemoryType, sl::Resolution(m_nPropResolutionX, m_nPropResolutionY));
458 // Check that the depth measure was retrieved successfully.
459 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
460 {
461 // Submit logger message.
462 LOG_WARNING(logging::g_qSharedLogger,
463 "Unable to retrieve new depth measure for stereo camera {} ({})! sl::ERROR_CODE is: {}",
464 sl::toString(m_slCameraModel).get(),
465 m_unCameraSerialNumber,
466 sl::toString(slReturnCode).get());
467 }
468
469 // Grab depth grayscale image and store it in member variable.
470 slReturnCode = m_slCamera.retrieveImage(m_slDepthImage, sl::VIEW::DEPTH, m_slMemoryType, sl::Resolution(m_nPropResolutionX, m_nPropResolutionY));
471 // Check that the depth image was retrieved successfully.
472 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
473 {
474 // Submit logger message.
475 LOG_WARNING(logging::g_qSharedLogger,
476 "Unable to retrieve new depth image for stereo camera {} ({})! sl::ERROR_CODE is: {}",
477 sl::toString(m_slCameraModel).get(),
478 m_unCameraSerialNumber,
479 sl::toString(slReturnCode).get());
480 }
481 }
482
483 // Check if point clouds have been requested.
484 if (m_bPointCloudsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
485 {
486 // Grab point cloud measure and store it in member variable.
487 slReturnCode = m_slCamera.retrieveMeasure(m_slPointCloud, sl::MEASURE::XYZBGRA, m_slMemoryType, sl::Resolution(m_nPropResolutionX, m_nPropResolutionY));
489 // Check that the point cloud was retrieved successfully.
489 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
490 {
491 // Submit logger message.
492 LOG_WARNING(logging::g_qSharedLogger,
493 "Unable to retrieve new point cloud for stereo camera {} ({})! sl::ERROR_CODE is: {}",
494 sl::toString(m_slCameraModel).get(),
495 m_unCameraSerialNumber,
496 sl::toString(slReturnCode).get());
497 }
498 }
499
500 // Check if positional tracking is enabled.
501 if (m_slCamera.isPositionalTrackingEnabled())
502 {
503 // Check if poses have been requested.
504 if (m_bPosesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
505 {
506 // Create instance variable for storing the result of retrieving the pose.
507 sl::POSITIONAL_TRACKING_STATE slPoseTrackReturnCode;
508
509 // Check if this camera has Fusion instance enabled.
510 if (m_bCameraIsFusionMaster)
511 {
512 // Get tracking pose from Fusion.
513 slPoseTrackReturnCode = m_slFusionInstance.getPosition(m_slCameraPose, sl::REFERENCE_FRAME::WORLD);
514 }
515 else
516 {
517 // Get normal vision tracking pose from camera.
518 slPoseTrackReturnCode = m_slCamera.getPosition(m_slCameraPose, sl::REFERENCE_FRAME::WORLD);
519 }
520 // Check that the pose was retrieved successfully.
521 if (slPoseTrackReturnCode != sl::POSITIONAL_TRACKING_STATE::OK)
522 {
523 // Submit logger message.
524 LOG_WARNING(logging::g_qSharedLogger,
525 "Unable to retrieve new positional tracking pose for stereo camera {} ({})! sl::POSITIONAL_TRACKING_STATE is: {}",
526 sl::toString(m_slCameraModel).get(),
527 m_unCameraSerialNumber,
528 sl::toString(slPoseTrackReturnCode).get());
529 }
530 }
531
532 // Check if geo poses have been requested.
533 if (m_bCameraIsFusionMaster && m_bGeoPosesQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
534 {
535 // Get the fused geo pose from the camera.
536 sl::GNSS_FUSION_STATUS slGeoPoseTrackReturnCode = m_slFusionInstance.getGeoPose(m_slFusionGeoPose);
537 // Check that the geo pose was retrieved successfully.
538 if (slGeoPoseTrackReturnCode != sl::GNSS_FUSION_STATUS::OK && slGeoPoseTrackReturnCode != sl::GNSS_FUSION_STATUS::CALIBRATION_IN_PROGRESS)
539 {
540 // Submit logger message.
541 LOG_WARNING(logging::g_qSharedLogger,
542 "Geo pose tracking state for stereo camera {} ({}) is suboptimal! sl::GNSS_FUSION_STATUS is: {}",
543 sl::toString(m_slCameraModel).get(),
544 m_unCameraSerialNumber,
545 sl::toString(slGeoPoseTrackReturnCode).get());
546 }
547 }
548
549 // Check if floor planes are being requested.
550 if (m_bFloorsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
551 {
552 // Attempt to find the floor plane using the current camera pose.
553 slReturnCode = m_slCamera.findFloorPlane(m_slFloorPlane,
554 m_slFloorTrackingTransform,
555 m_slCameraPose.getTranslation().y,
556 m_slCameraPose.getRotationMatrix(),
557 m_fExpectedCameraHeightFromFloorTolerance);
559 // Check that the floor plane was retrieved successfully.
559 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
560 {
561 // Submit logger message.
562 LOG_WARNING(logging::g_qSharedLogger,
563 "Unable to retrieve new floor plane for stereo camera {} ({})! sl::ERROR_CODE is: {}",
564 sl::toString(m_slCameraModel).get(),
565 m_unCameraSerialNumber,
566 sl::toString(slReturnCode).get());
567 }
568 }
569 }
570
571 // Get the IMU, barometer, magnetometer, and temperature sensor info from the camera.
572 if (m_bSensorsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
573 {
574 // Get the sensor data from the camera.
575 slReturnCode = m_slCamera.getSensorsData(m_slSensorsData, sl::TIME_REFERENCE::CURRENT);
576
577 // Check if the sensor data was retrieved successfully.
578 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
579 {
580 // Submit logger message.
581 LOG_WARNING(logging::g_qSharedLogger,
582 "Unable to retrieve sensor data for stereo camera {} ({})! sl::ERROR_CODE is: {}",
583 sl::toString(m_slCameraModel).get(),
584 m_unCameraSerialNumber,
585 sl::toString(slReturnCode).get());
586 }
587 }
588
589 // Check if object detection is enabled.
590 if (m_slCamera.isObjectDetectionEnabled())
591 {
592 // Check if objects have been requested.
593 if (m_bObjectsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
594 {
595 // Get updated objects from camera.
596 slReturnCode = m_slCamera.retrieveObjects(m_slDetectedObjects);
598 // Check that the object data was retrieved successfully.
598 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
599 {
600 // Submit logger message.
601 LOG_WARNING(logging::g_qSharedLogger,
602 "Unable to retrieve new object data for stereo camera {} ({})! sl::ERROR_CODE is: {}",
603 sl::toString(m_slCameraModel).get(),
604 m_unCameraSerialNumber,
605 sl::toString(slReturnCode).get());
606 }
607 }
608
609 // Check if batched object data is enabled.
610 if (m_slObjectDetectionBatchParams.enable)
611 {
612 // Check if batched objects have been requested.
613 if (m_bBatchedObjectsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
614 {
615 // Get updated batched objects from camera.
616 slReturnCode = m_slCamera.getObjectsBatch(m_slDetectedObjectsBatched);
618 // Check that the batched object data was retrieved successfully.
618 if (slReturnCode != sl::ERROR_CODE::SUCCESS)
619 {
620 // Submit logger message.
621 LOG_WARNING(logging::g_qSharedLogger,
622 "Unable to retrieve new batched object data for stereo camera {} ({})! sl::ERROR_CODE is: {}",
623 sl::toString(m_slCameraModel).get(),
624 m_unCameraSerialNumber,
625 sl::toString(slReturnCode).get());
626 }
627 }
628 }
629 }
630 // Detection not enabled, but requests were queued.
631 else if (m_bObjectsQueued.load(ATOMIC_MEMORY_ORDER_METHOD) || m_bBatchedObjectsQueued.load(ATOMIC_MEMORY_ORDER_METHOD))
632 {
633 // Submit logger message.
634 LOG_WARNING(logging::g_qSharedLogger,
635 "Unable to retrieve new object data for stereo camera {} ({})! Object detection is disabled!",
636 sl::toString(m_slCameraModel).get(),
637 m_unCameraSerialNumber);
638 }
639 }
640 else
641 {
642 // Submit logger message.
643 LOG_ERROR(logging::g_qSharedLogger,
644 "Unable to update stereo camera {} ({}) frames, measurements, and sensors! sl::ERROR_CODE is: {}. Closing camera...",
645 sl::toString(m_slCameraModel).get(),
646 m_unCameraSerialNumber,
647 sl::toString(slReturnCode).get());
648
649 // Release camera resources.
650 m_slCamera.close();
651 }
652 }
653
654 // Acquire a shared_lock on the frame copy queue.
655 std::shared_lock<std::shared_mutex> lkSchedulers(m_muPoolScheduleMutex);
656 // Check if any requests have been made.
657 if (!m_qFrameCopySchedule.empty() || !m_qGPUFrameCopySchedule.empty() || !m_qCustomBoxIngestSchedule.empty() || !m_qPoseCopySchedule.empty() ||
658 !m_qGeoPoseCopySchedule.empty() || !m_qFloorCopySchedule.empty() || !m_qSensorsCopySchedule.empty() || !m_qObjectDataCopySchedule.empty() ||
659 !m_qObjectBatchedDataCopySchedule.empty())
660 {
661 // Add the length of all queues together to determine how many tasks need to be run.
662 size_t siTotalQueueLength = m_qFrameCopySchedule.size() + m_qGPUFrameCopySchedule.size() + m_qCustomBoxIngestSchedule.size() + m_qPoseCopySchedule.size() +
663 m_qGeoPoseCopySchedule.size() + m_qFloorCopySchedule.size() + m_qSensorsCopySchedule.size() + m_qObjectDataCopySchedule.size() +
664 m_qObjectBatchedDataCopySchedule.size();
665
666 // Start the thread pool to copy member variables to other requesting threads. The number of tasks queued depends on the number of member variable updates and requests.
667 this->RunDetachedPool(siTotalQueueLength, m_nNumFrameRetrievalThreads);
668
669 // Get current time.
670 std::chrono::high_resolution_clock::duration tmCurrentTime = std::chrono::high_resolution_clock::now().time_since_epoch();
671 // Only reset about once every 31 seconds.
672 if (std::chrono::duration_cast<std::chrono::seconds>(tmCurrentTime).count() % 31 == 0 && !m_bQueueTogglesAlreadyReset)
673 {
674 // Reset queue counters.
675 m_bNormalFramesQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
676 m_bDepthFramesQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
677 m_bPointCloudsQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
678 m_bPosesQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
679 m_bGeoPosesQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
680 m_bFloorsQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
681 m_bSensorsQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
682 m_bObjectsQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
683 m_bBatchedObjectsQueued.store(false, ATOMIC_MEMORY_ORDER_METHOD);
684
685 // Set reset toggle.
686 m_bQueueTogglesAlreadyReset = true;
687 }
688 // Crucial for the toggle action. If the time is not evenly divisible and the toggles have previously been set, reset the queue reset boolean.
689 else if (m_bQueueTogglesAlreadyReset)
690 {
691 // Reset reset toggle.
692 m_bQueueTogglesAlreadyReset = false;
693 }
694
695 // Wait for thread pool to finish.
696 this->JoinPool();
697 }
698
699 // Release lock on frame copy queue.
700 lkSchedulers.unlock();
701}
void RunDetachedPool(const unsigned int nNumTasksToQueue, const unsigned int nNumThreads=2, const bool bForceStopCurrentThreads=false)
When this method is called, it starts a thread pool full of threads that don't return std::futures (l...
Definition AutonomyThread.hpp:336
void JoinPool()
Waits for pool to finish executing tasks. This method will block the calling code until thread is fin...
Definition AutonomyThread.hpp:439
sl::ERROR_CODE EnableSpatialMapping() override
Enables the spatial mapping feature of the camera. Pose tracking will be enabled if it is not already...
Definition ZEDCam.cpp:2078
sl::ERROR_CODE EnableObjectDetection(const bool bEnableBatching=false) override
Enables the object detection and tracking feature of the camera.
Definition ZEDCam.cpp:2160
Here is the call graph for this function:
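The reopen-every-5-seconds logic above (and the 31-second queue-toggle reset later in the method) pairs a modulus check on seconds-since-epoch with a latch boolean, so the action fires only once per matching second even though the loop iterates many times within it. A minimal sketch of that pattern (IntervalLatch is a hypothetical name, not part of the codebase):

```cpp
// Sketch of the "act once per N-second window" latch used for camera
// reopens and queue-flag resets. The modulus alone would fire on every
// loop pass inside the matching second; the boolean latch ensures the
// action runs only once per window, and an off-window pass re-arms it.
class IntervalLatch
{
    public:
        explicit IntervalLatch(int nPeriodSeconds) : m_nPeriod(nPeriodSeconds) {}

        // Returns true at most once per period, given seconds since epoch.
        bool ShouldFire(long long llSecondsSinceEpoch)
        {
            if (llSecondsSinceEpoch % m_nPeriod == 0 && !m_bAlreadyFired)
            {
                // Window start: fire once and latch.
                m_bAlreadyFired = true;
                return true;
            }
            if (llSecondsSinceEpoch % m_nPeriod != 0)
            {
                // Off-window pass: re-arm the latch for the next window.
                m_bAlreadyFired = false;
            }
            return false;
        }

    private:
        int m_nPeriod;
        bool m_bAlreadyFired = false;
};
```

In ThreadedContinuousCode() the latch role is played by m_bCameraReopenAlreadyChecked and m_bQueueTogglesAlreadyReset.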

◆ PooledLinearCode()

void ZEDCam::PooledLinearCode ( )
overrideprivatevirtual

This method holds the code that is run in the thread pool started by the ThreadedContinuousCode() method. It copies the data from the different data objects to references of the same type stored in a queue filled by the Request methods.

Author
ClayJay3 (claytonraycowen@gmail.com)
Date
2023-09-08

Implements AutonomyThread< void >.
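The handoff in PooledLinearCode() follows a pop-under-lock, copy-without-lock shape: the queue mutex is held only long enough to dequeue the container, and the potentially slow deep copy plus promise fulfilment happen after the lock is released. A simplified sketch under those assumptions (DataFetchContainer here is a reduced stand-in for the real containers:: type, and the payload is a std::string instead of an sl:: object):

```cpp
#include <future>
#include <mutex>
#include <optional>
#include <queue>
#include <string>

// Reduced stand-in for containers::DataFetchContainer: a destination
// pointer owned by the requesting thread, plus a promise to signal it.
struct DataFetchContainer
{
    std::string* pData;                     // Caller's destination.
    std::promise<bool>* pCopiedDataStatus;  // Signals copy completion.
};

// Service one queued request: hold the queue mutex only while popping,
// then perform the deep copy and fulfil the promise without the lock.
inline void ServiceOneRequest(std::mutex& muQueue, std::queue<DataFetchContainer>& qSchedule, const std::string& szLatestData)
{
    std::optional<DataFetchContainer> stContainer;
    {
        std::lock_guard<std::mutex> lkQueue(muQueue);
        if (!qSchedule.empty())
        {
            stContainer = qSchedule.front();
            qSchedule.pop();
        }
    }    // Lock released before the copy below.

    if (stContainer)
    {
        *stContainer->pData = szLatestData;               // Deep copy.
        stContainer->pCopiedDataStatus->set_value(true);  // Wake the waiter.
    }
}
```

Keeping the copy outside the critical section is what lets many pooled workers drain the queues concurrently without serializing on the mutex.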

714{
716 // Frame queue.
718 // Check if we are using CPU or GPU mats.
719 if (m_slMemoryType == sl::MEM::CPU)
720 {
721 // Acquire mutex for getting frames out of the queue.
722 std::unique_lock<std::shared_mutex> lkFrameQueue(m_muFrameCopyMutex);
723 // Check if the queue is empty.
724 if (!m_qFrameCopySchedule.empty())
725 {
726 // Get frame container out of queue.
727 containers::FrameFetchContainer<cv::Mat> stContainer = m_qFrameCopySchedule.front();
728 // Pop out of queue.
729 m_qFrameCopySchedule.pop();
730 // Release lock.
731 lkFrameQueue.unlock();
732
733 // Determine which frame should be copied.
734 switch (stContainer.eFrameType)
735 {
736 case PIXEL_FORMATS::eBGRA: *stContainer.pFrame = imgops::ConvertSLMatToCVMat(m_slFrame); break;
737 case PIXEL_FORMATS::eDepthMeasure: *stContainer.pFrame = imgops::ConvertSLMatToCVMat(m_slDepthMeasure); break;
738 case PIXEL_FORMATS::eDepthImage: *stContainer.pFrame = imgops::ConvertSLMatToCVMat(m_slDepthImage); break;
739 case PIXEL_FORMATS::eXYZBGRA: *stContainer.pFrame = imgops::ConvertSLMatToCVMat(m_slPointCloud); break;
740 default: *stContainer.pFrame = imgops::ConvertSLMatToCVMat(m_slFrame); break;
741 }
742
743 // Signal future that the frame has been successfully retrieved.
744 stContainer.pCopiedFrameStatus->set_value(true);
745 }
746
747 // Check if anything has been added to the GPU queue.
748 if (!m_qGPUFrameCopySchedule.empty())
749 {
750 // Submit logger error.
751 LOG_ERROR(logging::g_qSharedLogger,
752 "ZEDCam ({}) is in CPU sl::Mat mode but a GPU mat has been added to the copy queue! Whichever thread queued the frame will now appear frozen if "
753 "future.get() is called. Either switch the camera to GPU Mat mode in AutonomyConstants.h or stop queueing frames of type cv::cuda::GpuMat.",
754 m_unCameraSerialNumber);
755 }
756 }
757 // Use GPU mat.
758 else
759 {
760 // Acquire mutex for getting frames out of the queue.
761 std::unique_lock<std::shared_mutex> lkFrameQueue(m_muFrameCopyMutex);
762 // Check if the queue is empty.
763 if (!m_qGPUFrameCopySchedule.empty())
764 {
765 // Get frame container out of queue.
766 containers::FrameFetchContainer<cv::cuda::GpuMat> stContainer = m_qGPUFrameCopySchedule.front();
767 // Pop out of queue.
768 m_qGPUFrameCopySchedule.pop();
769 // Release lock.
770 lkFrameQueue.unlock();
771
772 // Determine which frame should be copied.
773 switch (stContainer.eFrameType)
774 {
775 case PIXEL_FORMATS::eBGRA: *stContainer.pFrame = imgops::ConvertSLMatToGPUMat(m_slFrame); break;
776 case PIXEL_FORMATS::eDepthMeasure: *stContainer.pFrame = imgops::ConvertSLMatToGPUMat(m_slDepthMeasure); break;
777 case PIXEL_FORMATS::eDepthImage: *stContainer.pFrame = imgops::ConvertSLMatToGPUMat(m_slDepthImage); break;
778 case PIXEL_FORMATS::eXYZBGRA: *stContainer.pFrame = imgops::ConvertSLMatToGPUMat(m_slPointCloud); break;
779 default: *stContainer.pFrame = imgops::ConvertSLMatToGPUMat(m_slFrame); break;
780 }
781
782 // Signal future that the frame has been successfully retrieved.
783 stContainer.pCopiedFrameStatus->set_value(true);
784 }
785
786 // Check if anything has been added to the CPU queue.
787 if (!m_qFrameCopySchedule.empty())
788 {
789 // Submit logger error.
790 LOG_ERROR(logging::g_qSharedLogger,
791 "ZEDCam ({}) is in GPU sl::Mat mode but a CPU mat has been added to the copy queue! Whichever thread queued the frame will now appear frozen if "
792 "future.get() is called. Either switch the camera to CPU Mat mode in AutonomyConstants.h or stop queueing frames of type cv::Mat.",
793 m_unCameraSerialNumber);
794 }
795 }
796
798 // Pose queue.
800 // Acquire mutex for getting data out of the pose queue.
801 std::unique_lock<std::shared_mutex> lkPoseQueue(m_muPoseCopyMutex);
802 // Check if the queue is empty.
803 if (!m_qPoseCopySchedule.empty())
804 {
805 // Get pose container out of queue.
806 containers::DataFetchContainer<Pose> stContainer = m_qPoseCopySchedule.front();
807 // Pop out of queue.
808 m_qPoseCopySchedule.pop();
809 // Release lock.
810 lkPoseQueue.unlock();
811
812 // Rotate the ZED position coordinate frame to realign with the UTM global coordinate frame.
813 std::vector<numops::CoordinatePoint<double>> vPointCloud;
814 vPointCloud.emplace_back(m_slCameraPose.getTranslation().x, m_slCameraPose.getTranslation().y, m_slCameraPose.getTranslation().z);
815 // Get angle realignments.
816 double dNewXO = numops::InputAngleModulus<double>(m_slCameraPose.getEulerAngles(false).x + m_dPoseOffsetXO, 0.0, 360.0);
817 double dNewYO = numops::InputAngleModulus<double>(m_slCameraPose.getEulerAngles(false).y + m_dPoseOffsetYO, 0.0, 360.0);
818 double dNewZO = numops::InputAngleModulus<double>(m_slCameraPose.getEulerAngles(false).z + m_dPoseOffsetZO, 0.0, 360.0);
819 // Rotate coordinate frame.
820 numops::CoordinateFrameRotate3D(vPointCloud, m_dPoseOffsetXO, m_dPoseOffsetYO, m_dPoseOffsetZO);
821 // Repack values into pose.
822 Pose stPose(vPointCloud[0].tX + m_dPoseOffsetX, vPointCloud[0].tY + m_dPoseOffsetY, vPointCloud[0].tZ + m_dPoseOffsetZ, dNewXO, dNewYO, dNewZO);
823
824 // ISSUE NOTE: Might be in the future if we ever change our coordinate system on the ZED. This can be used to fix the directions of the Pose's coordinate system.
825 // // Check ZED coordinate system.
826 // switch (m_slCameraParams.coordinate_system)
827 // {
828 // case sl::COORDINATE_SYSTEM::LEFT_HANDED_Y_UP:
829 // {
830 // // Realign based in the signedness of this coordinate system. Z is backwards.
831 // stPose.stTranslation.dZ *= -1;
832 // break;
833 // }
834 // default:
835 // {
836 // // No need to flip signs for other coordinate systems.
837 // break;
838 // }
839 // }
840
841 // Copy pose.
842 *stContainer.pData = stPose;
843
844 // Signal future that the data has been successfully retrieved.
845 stContainer.pCopiedDataStatus->set_value(true);
846 }
847 else
848 {
849 // Release lock.
850 lkPoseQueue.unlock();
851 }
852
854 // GeoPose queue.
856 // Acquire mutex for getting data out of the pose queue.
857 std::unique_lock<std::shared_mutex> lkGeoPoseQueue(m_muGeoPoseCopyMutex);
858 // Check if the queue is empty.
859 if (!m_qGeoPoseCopySchedule.empty())
860 {
861 // Get pose container out of queue.
862 containers::DataFetchContainer<sl::GeoPose> stContainer = m_qGeoPoseCopySchedule.front();
863 // Pop out of queue.
864 m_qGeoPoseCopySchedule.pop();
865 // Release lock.
866 lkGeoPoseQueue.unlock();
867
869 // Copy geo pose.
869 *stContainer.pData = sl::GeoPose(m_slFusionGeoPose);
870
871 // Signal future that the data has been successfully retrieved.
872 stContainer.pCopiedDataStatus->set_value(true);
873 }
874 else
875 {
876 // Release lock.
877 lkGeoPoseQueue.unlock();
878 }
879
881 // Plane queue.
883 // Acquire mutex for getting frames out of the plane queue.
884 std::unique_lock<std::shared_mutex> lkPlaneQueue(m_muFloorCopyMutex);
885 // Check if the queue is empty.
886 if (!m_qFloorCopySchedule.empty())
887 {
888 // Get frame container out of queue.
889 containers::DataFetchContainer<sl::Plane> stContainer = m_qFloorCopySchedule.front();
890 // Pop out of queue.
891 m_qFloorCopySchedule.pop();
892 // Release lock.
893 lkPlaneQueue.unlock();
894
895 // Copy floor plane.
896 *stContainer.pData = sl::Plane(m_slFloorPlane);
897
898 // Signal future that the data has been successfully retrieved.
899 stContainer.pCopiedDataStatus->set_value(true);
900 }
901 else
902 {
903 // Release lock.
904 lkPlaneQueue.unlock();
905 }
906
908 // Sensors queue.
910 // Acquire mutex for getting frames out of the sensors queue.
911 std::unique_lock<std::shared_mutex> lkSensorsQueue(m_muSensorsCopyMutex);
912 // Check if the queue is empty.
913 if (!m_qSensorsCopySchedule.empty())
914 {
915 // Get frame container out of queue.
916 containers::DataFetchContainer<sl::SensorsData> stContainer = m_qSensorsCopySchedule.front();
917 // Pop out of queue.
918 m_qSensorsCopySchedule.pop();
919 // Release lock.
920 lkSensorsQueue.unlock();
921
922 // Copy sensor data.
923 *stContainer.pData = sl::SensorsData(m_slSensorsData);
924
925 // Signal future that the data has been successfully retrieved.
926 stContainer.pCopiedDataStatus->set_value(true);
927 }
928 else
929 {
930 // Release lock.
931 lkSensorsQueue.unlock();
932 }
933
935 // ObjectData queue.
937 // Acquire mutex for getting data out of the pose queue.
938 std::unique_lock<std::shared_mutex> lkObjectDataQueue(m_muObjectDataCopyMutex);
939 // Check if the queue is empty.
940 if (!m_qObjectDataCopySchedule.empty())
941 {
942 // Get frame container out of queue.
943 containers::DataFetchContainer<std::vector<sl::ObjectData>> stContainer = m_qObjectDataCopySchedule.front();
944 // Pop out of queue.
945 m_qObjectDataCopySchedule.pop();
946 // Release lock.
947 lkObjectDataQueue.unlock();
948
949 // Make a copy of the object vector. (The assignment operator performs a deep copy.)
950 *stContainer.pData = m_slDetectedObjects.object_list;
951
952 // Signal future that the data has been successfully retrieved.
953 stContainer.pCopiedDataStatus->set_value(true);
954 }
955 else
956 {
957 // Release lock.
958 lkObjectDataQueue.unlock();
959 }
960
962 // ObjectData Batched queue.
964 // Acquire mutex for getting data out of the pose queue.
965 std::unique_lock<std::shared_mutex> lkObjectBatchedDataQueue(m_muObjectBatchedDataCopyMutex);
966 // Check if the queue is empty.
967 if (!m_qObjectBatchedDataCopySchedule.empty())
968 {
969 // Get frame container out of queue.
970 containers::DataFetchContainer<std::vector<sl::ObjectsBatch>> stContainer = m_qObjectBatchedDataCopySchedule.front();
971 // Pop out of queue.
972 m_qObjectBatchedDataCopySchedule.pop();
973 // Release lock.
974 lkObjectBatchedDataQueue.unlock();
975
976 // Make a copy of the object vector. (The assignment operator performs a deep copy.)
977 *stContainer.pData = m_slDetectedObjectsBatched;
978
979 // Signal future that the data has been successfully retrieved.
980 stContainer.pCopiedDataStatus->set_value(true);
981 }
982 else
983 {
984 // Release lock.
985 lkObjectBatchedDataQueue.unlock();
986 }
987}
cv::cuda::GpuMat ConvertSLMatToGPUMat(sl::Mat &slInputMat)
Convert a ZEDSDK sl::Mat object into an OpenCV cv::cuda::GpuMat object. Keeps all Mat memory in GPU V...
Definition ImageOperations.hpp:127
cv::Mat ConvertSLMatToCVMat(sl::Mat &slInputMat)
Convert a ZEDSDK sl::Mat object into an OpenCV cv::Mat object. This copies the mat from GPU memory to...
Definition ImageOperations.hpp:106
constexpr void CoordinateFrameRotate3D(std::vector< CoordinatePoint< T > > &vPointCloud, const double dXRotationDegrees, const double dYRotationDegrees, const double dZRotationDegrees)
This method will rotate a list of 3D coordinate points a variable amount of degrees around the X,...
Definition NumberOperations.hpp:269
Here is the call graph for this function:
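The pose repacking in the listing above wraps each Euler angle plus its offset into [0, 360) via numops::InputAngleModulus. An illustrative reimplementation of that wrap (WrapDegrees is a hypothetical helper, not the real numops function, which is templated and takes explicit bounds):

```cpp
#include <cmath>

// Wrap (angle + offset) into the half-open range [0, 360) degrees.
// std::fmod keeps the sign of the dividend, so negative results must
// be shifted up by one full turn.
inline double WrapDegrees(double dAngle, double dOffset)
{
    double dResult = std::fmod(dAngle + dOffset, 360.0);
    if (dResult < 0.0)
    {
        dResult += 360.0;
    }
    return dResult;
}
```

This matches the bounds (0.0, 360.0) passed to InputAngleModulus when realigning the ZED pose with the UTM frame.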

The documentation for this class was generated from the following files: