Autonomy Software C++ 24.5.1
Welcome to the Autonomy Software repository of the Mars Rover Design Team (MRDT) at Missouri University of Science and Technology (Missouri S&T)! This API reference contains the source code documentation and other resources for the development of the autonomy software for our Mars rover. The Autonomy Software project aims to compete in the University Rover Challenge (URC) by demonstrating advanced autonomous capabilities and robust navigation algorithms.
statemachine Namespace Reference

Namespace containing all state machine related classes. More...

Classes

class  ApproachingMarkerState
 The ApproachingMarkerState class implements the Approaching Marker state for the Autonomy State Machine. More...
 
class  ApproachingObjectState
 The ApproachingObjectState class implements the Approaching Object state for the Autonomy State Machine. More...
 
class  AvoidanceState
 The AvoidanceState class implements the Avoidance state for the Autonomy State Machine. More...
 
class  IdleState
 The IdleState class implements the Idle state for the Autonomy State Machine. More...
 
class  NavigatingState
 The NavigatingState class implements the Navigating state for the Autonomy State Machine. More...
 
class  ReversingState
 The ReversingState class implements the Reversing state for the Autonomy State Machine. More...
 
class  SearchPatternState
 The SearchPatternState class implements the Search Pattern state for the Autonomy State Machine. More...
 
class  State
 The abstract state class. All states inherit from this class. More...
 
class  StuckState
 The StuckState class implements the Stuck state for the Autonomy State Machine. More...
 
class  TimeIntervalBasedStuckDetector
 This class should be instantiated within another state and used to detect whether the rover is stuck. Stuck detection is based solely on checking the current velocity and rotation at a fixed interval: if both remain effectively zero for more than a maximum number of consecutive intervals, the rover is considered stuck. More...
 
class  VerifyingMarkerState
 The VerifyingMarkerState class implements the Verifying Marker state for the Autonomy State Machine. More...
 
class  VerifyingObjectState
 The VerifyingObjectState class implements the Verifying Object state for the Autonomy State Machine. More...
 
class  VerifyingPositionState
 The VerifyingPositionState class implements the Verifying Position state for the Autonomy State Machine. More...
 

Enumerations

enum class  States {
  eIdle , eNavigating , eSearchPattern , eApproachingMarker ,
  eApproachingObject , eVerifyingPosition , eVerifyingMarker , eVerifyingObject ,
  eAvoidance , eReversing , eStuck , NUM_STATES
}
 The states that the state machine can be in. More...
 
enum class  Event {
  eStart , eReachedGpsCoordinate , eReachedMarker , eReachedObject ,
  eMarkerSeen , eObjectSeen , eMarkerUnseen , eObjectUnseen ,
  eVerifyingComplete , eVerifyingFailed , eAbort , eRestart ,
  eObstacleAvoidance , eEndObstacleAvoidance , eNoWaypoint , eNewWaypoint ,
  eReverse , eReverseComplete , eSearchFailed , eStuck ,
  eUnstuck , NUM_EVENTS
}
 The events that can be triggered in the state machine. More...
 

Functions

std::string StateToString (States eState)
 Converts a state object to a string.
 
void LoadDetectedObjects (std::vector< objectdetectutils::Object > &vDetectedObjects, const std::vector< std::shared_ptr< ObjectDetector > > &vObjectDetectors)
 Aggregates all detected objects from each provided object detector.
 
int IdentifyTargetObject (const std::vector< std::shared_ptr< ObjectDetector > > &vObjectDetectors, objectdetectutils::Object &stObjectTarget, const geoops::WaypointType &eDesiredDetectionType=geoops::WaypointType::eUNKNOWN)
 Identify a target object in the rover's vision, using Torch detection.
 
void LoadDetectedTags (std::vector< tagdetectutils::ArucoTag > &vDetectedArucoTags, const std::vector< std::shared_ptr< TagDetector > > &vTagDetectors)
 Aggregates all detected tags from each provided tag detector for both OpenCV and Torch detection.
 
int IdentifyTargetMarker (const std::vector< std::shared_ptr< TagDetector > > &vTagDetectors, tagdetectutils::ArucoTag &stArucoTarget, tagdetectutils::ArucoTag &stTorchTarget, const int nTargetTagID=static_cast< int >(manifest::Autonomy::AUTONOMYWAYPOINTTYPES::ANY))
 Identify a target marker in the rover's vision, using OpenCV and Torch detection.
 

Detailed Description

Namespace containing all state machine related classes.

Author
Eli Byrd (edbgkk@mst.edu)
Date
2024-01-17
Author
Sam Hajdukiewicz (samanthahajdukiewicz@gmail.com)
Date
2024-01-17
Author
Eli Byrd (edbgkk@mst.edu)
Date
2024-05-24
Author
Sam Hajdukiewicz (samanthahajdukiewicz@gmail.com)
Date
2025-05-08
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-04-23
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-04-04

Enumeration Type Documentation

◆ States

enum class statemachine::States
strong

The states that the state machine can be in.

Author
Eli Byrd (edbgkk@mst.edu)
Date
2024-01-18
{
    eIdle,
    eNavigating,
    eSearchPattern,
    eApproachingMarker,
    eApproachingObject,
    eVerifyingPosition,
    eVerifyingMarker,
    eVerifyingObject,
    eAvoidance,
    eReversing,
    eStuck,

    NUM_STATES
};

◆ Event

enum class statemachine::Event
strong

The events that can be triggered in the state machine.

Author
Eli Byrd (edbgkk@mst.edu)
Date
2024-01-18
{
    eStart,
    eReachedGpsCoordinate,
    eReachedMarker,
    eReachedObject,
    eMarkerSeen,
    eObjectSeen,
    eMarkerUnseen,
    eObjectUnseen,
    eVerifyingComplete,
    eVerifyingFailed,
    eAbort,
    eRestart,
    eObstacleAvoidance,
    eEndObstacleAvoidance,
    eNoWaypoint,
    eNewWaypoint,
    eReverse,
    eReverseComplete,
    eSearchFailed,
    eStuck,
    eUnstuck,

    NUM_EVENTS
};

Function Documentation

◆ StateToString()

std::string statemachine::StateToString ( States  eState)
inline

Converts a state object to a string.

Parameters
eState - The state to convert.
Returns
std::string - The string representation of the given state.
Author
Eli Byrd (edbgkk@mst.edu)
Date
2024-01-18
{
    switch (eState)
    {
        case States::eIdle: return "Idle";
        case States::eNavigating: return "Navigating";
        case States::eSearchPattern: return "Search Pattern";
        case States::eApproachingMarker: return "Approaching Marker";
        case States::eApproachingObject: return "Approaching Object";
        case States::eVerifyingPosition: return "Verifying Position";
        case States::eVerifyingMarker: return "Verifying Marker";
        case States::eVerifyingObject: return "Verifying Object";
        case States::eAvoidance: return "Avoidance";
        case States::eReversing: return "Reversing";
        case States::eStuck: return "Stuck";
        default: return "Unknown";
    }
}

◆ LoadDetectedObjects()

void statemachine::LoadDetectedObjects ( std::vector< objectdetectutils::Object > &  vDetectedObjects,
const std::vector< std::shared_ptr< ObjectDetector > > &  vObjectDetectors 
)
inline

Aggregates all detected objects from each provided object detector.

Parameters
vDetectedObjects - Reference vector that will hold all of the aggregated detected objects.
vObjectDetectors - Vector of pointers to object detectors that will be used to request their detected objects.
Author
Sam Hajdukiewicz (samanthahajdukiewicz@gmail.com)
Date
2025-05-08
{
    // Number of object detectors.
    size_t siNumObjectDetectors = vObjectDetectors.size();

    // Initialize vectors to store detected objects temporarily.
    std::vector<std::vector<objectdetectutils::Object>> vDetectedObjectBuffers(siNumObjectDetectors);

    // Initialize vectors to store detected objects futures.
    std::vector<std::future<bool>> vDetectedObjectsFuture;

    // Request objects from each detector.
    for (size_t siIdx = 0; siIdx < siNumObjectDetectors; ++siIdx)
    {
        // Check if this object detector is ready.
        if (vObjectDetectors[siIdx]->GetIsReady())
        {
            // Request detected objects from detector.
            vDetectedObjectsFuture.emplace_back(vObjectDetectors[siIdx]->RequestDetectedObjects(vDetectedObjectBuffers[siIdx]));
        }
    }

    // Ensure all requests have been fulfilled.
    // Then transfer objects from the buffer to vDetectedObjects for the user to access.
    for (size_t siIdx = 0; siIdx < vDetectedObjectsFuture.size(); ++siIdx)
    {
        // Wait for the request to be fulfilled.
        vDetectedObjectsFuture[siIdx].get();

        // Loop through the detected objects and add them to the vDetectedObjects vector.
        for (const objectdetectutils::Object& tObject : vDetectedObjectBuffers[siIdx])
        {
            vDetectedObjects.emplace_back(tObject);
        }
    }
}

◆ IdentifyTargetObject()

int statemachine::IdentifyTargetObject ( const std::vector< std::shared_ptr< ObjectDetector > > &  vObjectDetectors,
objectdetectutils::Object stObjectTarget,
const geoops::WaypointType &  eDesiredDetectionType = geoops::WaypointType::eUNKNOWN 
)
inline

Identify a target object in the rover's vision, using Torch detection.

Note
If multiple objects are detected, the one with the largest bounding box that meets the screen-percentage and lifetime requirements is chosen as the target.
Parameters
vObjectDetectors - The vector of object detectors to use for detection.
stObjectTarget - Reference that will hold the identified target object from Torch detection.
eDesiredDetectionType - The desired detection type to filter for.
Returns
int - The total number of objects currently detected.
Author
Sam Hajdukiewicz (samanthahajdukiewicz@gmail.com)
Date
2025-05-09
{
    // Create instance variables.
    std::vector<objectdetectutils::Object> vDetectedObjects;
    objectdetectutils::Object stBestObject;
    std::string szIdentifiedObjects = "";

    // Get the current time.
    std::chrono::system_clock::time_point tmCurrentTime = std::chrono::system_clock::now();

    // Load all detected objects in the rover's vision.
    LoadDetectedObjects(vDetectedObjects, vObjectDetectors);
    // Find the best object.
    for (const objectdetectutils::Object& stCandidate : vDetectedObjects)
    {
        // Calculate the total age of the object.
        double dObjectTotalAge = std::fabs(std::chrono::duration_cast<std::chrono::milliseconds>(tmCurrentTime - stCandidate.tmCreation).count() / 1000.0);
        // Calculate the total object area.
        double dArea = stCandidate.pBoundingBox->area();
        // Calculate what percentage of the screen the object takes up.
        double dAreaPercentage = (dArea / (stCandidate.cvImageResolution.width * stCandidate.cvImageResolution.height)) * 100.0;

        // If the distance of the object is not greater than 0, skip it.
        if (stCandidate.dStraightLineDistance <= 0.0)
        {
            continue;
        }

        // Determine the desired detection type.
        switch (eDesiredDetectionType)
        {
            case geoops::WaypointType::eMalletWaypoint:
            {
                if (stCandidate.eDetectionType != objectdetectutils::ObjectDetectionType::eMallet)
                {
                    continue;
                }
                break;
            }
            case geoops::WaypointType::eWaterBottleWaypoint:
            {
                if (stCandidate.eDetectionType != objectdetectutils::ObjectDetectionType::eWaterBottle)
                {
                    continue;
                }
                break;
            }
            case geoops::WaypointType::eObjectWaypoint:
            {
                if (stCandidate.eDetectionType != objectdetectutils::ObjectDetectionType::eMallet &&
                    stCandidate.eDetectionType != objectdetectutils::ObjectDetectionType::eWaterBottle)
                {
                    continue;
                }
                break;
            }
            default:
            {
                break;
            }
        }

        // Check the object detection method type.
        if (stCandidate.eDetectionMethod == objectdetectutils::ObjectDetectionMethod::eTorch)
        {
            // Assemble the identified objects string.
            szIdentifiedObjects += "\tObject Class: " + stCandidate.szClassName + " Object Age: " + std::to_string(dObjectTotalAge) +
                                   "s Object Screen Percentage: " + std::to_string(dAreaPercentage) + "%\n";
            // Check if the object meets the requirements.
            if (dAreaPercentage < constants::BBOX_MIN_SCREEN_PERCENTAGE || dObjectTotalAge < constants::BBOX_MIN_LIFETIME_THRESHOLD)
            {
                continue;
            }

            // Check other object requirements.
            if (dArea > stBestObject.pBoundingBox->area())
            {
                // Set the target object to the detected object.
                stBestObject = stCandidate;
            }
        }
    }

    // Only print the identified objects if there are any.
    if (stBestObject.dConfidence != 0.0)
    {
        // Submit logger message.
        LOG_DEBUG(logging::g_qSharedLogger, "ObjectDetectionChecker: Identified objects:\n{}", szIdentifiedObjects);
    }

    // Set the target object to the best object.
    stObjectTarget = stBestObject;

    return static_cast<int>(vDetectedObjects.size());
}

◆ LoadDetectedTags()

void statemachine::LoadDetectedTags ( std::vector< tagdetectutils::ArucoTag > &  vDetectedArucoTags,
const std::vector< std::shared_ptr< TagDetector > > &  vTagDetectors 
)
inline

Aggregates all detected tags from each provided tag detector for both OpenCV and Torch detection.

Parameters
vDetectedArucoTags - Reference vector that will hold all of the aggregated detected ArUco tags.
vTagDetectors - Vector of pointers to tag detectors that will be used to request their detected tags.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-04-04
{
    // Number of tag detectors.
    size_t siNumTagDetectors = vTagDetectors.size();

    // Initialize vectors to store detected tags temporarily.
    std::vector<std::vector<tagdetectutils::ArucoTag>> vDetectedArucoTagBuffers(siNumTagDetectors);

    // Initialize vectors to store detected tags futures.
    std::vector<std::future<bool>> vDetectedArucoTagsFuture;

    // Request tags from each detector.
    for (size_t siIdx = 0; siIdx < siNumTagDetectors; ++siIdx)
    {
        // Check if this tag detector is ready.
        if (vTagDetectors[siIdx]->GetIsReady())
        {
            // Request detected ArUco tags from detector.
            vDetectedArucoTagsFuture.emplace_back(vTagDetectors[siIdx]->RequestDetectedArucoTags(vDetectedArucoTagBuffers[siIdx]));
        }
    }

    // Ensure all requests have been fulfilled.
    // Then transfer tags from the buffer to vDetectedArucoTags for the user to access.
    for (size_t siIdx = 0; siIdx < vDetectedArucoTagsFuture.size(); ++siIdx)
    {
        // Wait for the request to be fulfilled.
        vDetectedArucoTagsFuture[siIdx].get();

        // Loop through the detected ArUco tags and add them to the vDetectedArucoTags vector.
        for (const tagdetectutils::ArucoTag& tTag : vDetectedArucoTagBuffers[siIdx])
        {
            vDetectedArucoTags.emplace_back(tTag);
        }
    }
}

◆ IdentifyTargetMarker()

int statemachine::IdentifyTargetMarker ( const std::vector< std::shared_ptr< TagDetector > > &  vTagDetectors,
tagdetectutils::ArucoTag stArucoTarget,
tagdetectutils::ArucoTag stTorchTarget,
const int  nTargetTagID = static_cast<int>(manifest::Autonomy::AUTONOMYWAYPOINTTYPES::ANY) 
)
inline

Identify a target marker in the rover's vision, using OpenCV and Torch detection.

Note
If multiple markers are detected, the one with the largest bounding box that meets the screen-percentage and lifetime requirements is chosen as the target.
Parameters
vTagDetectors - The vector of tag detectors to use for detection.
stArucoTarget - Reference that will hold the identified target marker from OpenCV detection.
stTorchTarget - Reference that will hold the identified target marker from Torch detection.
nTargetTagID - The ID of the target tag to identify. If set to ANY, the best tag of any ID will be chosen.
Returns
int - The total number of tags currently detected.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2025-04-04
{
    // Create instance variables.
    std::vector<tagdetectutils::ArucoTag> vDetectedArucoTags;
    tagdetectutils::ArucoTag stArucoBestTag;
    tagdetectutils::ArucoTag stTorchBestTag;
    std::string szIdentifiedTags = "";

    // Get the current time.
    std::chrono::system_clock::time_point tmCurrentTime = std::chrono::system_clock::now();

    // Load all detected tags in the rover's vision.
    LoadDetectedTags(vDetectedArucoTags, vTagDetectors);
    // Find the best tag from the Aruco tags.
    for (const tagdetectutils::ArucoTag& stCandidate : vDetectedArucoTags)
    {
        // Calculate the total age of the tag.
        double dTagTotalAge = std::fabs(std::chrono::duration_cast<std::chrono::milliseconds>(tmCurrentTime - stCandidate.tmCreation).count() / 1000.0);
        // Calculate the total tag area.
        double dArea = stCandidate.pBoundingBox->area();
        // Calculate what percentage of the screen the tag takes up.
        double dAreaPercentage = (dArea / (stCandidate.cvImageResolution.width * stCandidate.cvImageResolution.height)) * 100.0;

        // If the distance of the tag is not greater than 0, skip it.
        if (stCandidate.dStraightLineDistance <= 0.0)
        {
            continue;
        }

        // Check the tag detection method type.
        if (stCandidate.eDetectionMethod == tagdetectutils::TagDetectionMethod::eOpenCV)
        {
            // Assemble the identified tags string.
            szIdentifiedTags += "\tArUco ID: " + std::to_string(stCandidate.nID) + " Tag Age: " + std::to_string(dTagTotalAge) +
                                "s Tag Screen Percentage: " + std::to_string(dAreaPercentage) + "%\n";
            // Check if the tag matches the requested ID, or if any ID is acceptable.
            if (stCandidate.nID == nTargetTagID || nTargetTagID == static_cast<int>(manifest::Autonomy::AUTONOMYWAYPOINTTYPES::ANY))
            {
                // Check if the tag meets the requirements.
                if (dAreaPercentage < constants::BBOX_MIN_SCREEN_PERCENTAGE || dTagTotalAge < constants::BBOX_MIN_LIFETIME_THRESHOLD)
                {
                    continue;
                }

                // Check other tag requirements.
                if (dArea > stArucoBestTag.pBoundingBox->area())
                {
                    // Set the target tag to the detected tag.
                    stArucoBestTag = stCandidate;
                }
            }
        }
        else if (stCandidate.eDetectionMethod == tagdetectutils::TagDetectionMethod::eTorch)
        {
            // Assemble the identified tags string.
            szIdentifiedTags += "\tTorch Class: " + stCandidate.szClassName + " Tag Age: " + std::to_string(dTagTotalAge) +
                                "s Tag Screen Percentage: " + std::to_string(dAreaPercentage) + "%\n";
            // Check if the tag meets the requirements.
            if (dAreaPercentage < constants::BBOX_MIN_SCREEN_PERCENTAGE || dTagTotalAge < constants::BBOX_MIN_LIFETIME_THRESHOLD)
            {
                continue;
            }

            // Check other tag requirements.
            if (dArea > stTorchBestTag.pBoundingBox->area())
            {
                // Set the target tag to the detected tag.
                stTorchBestTag = stCandidate;
            }
        }
    }

    // Only print the identified tags if there are any.
    if (stArucoBestTag.nID != -1 || stTorchBestTag.dConfidence != 0.0)
    {
        // Submit logger message.
        LOG_DEBUG(logging::g_qSharedLogger, "TagDetectionChecker: Identified tags:\n{}", szIdentifiedTags);
    }

    // Set the target tag to the best tag.
    stArucoTarget = stArucoBestTag;
    stTorchTarget = stTorchBestTag;

    return static_cast<int>(vDetectedArucoTags.size());
}