Autonomy Software C++ 24.5.1
Welcome to the Autonomy Software repository of the Mars Rover Design Team (MRDT) at Missouri University of Science and Technology (Missouri S&T)! This API reference contains the source code documentation and other resources for the development of the autonomy software for our Mars rover. The Autonomy Software project aims to compete in the University Rover Challenge (URC) by demonstrating advanced autonomous capabilities and robust navigation algorithms.
TensorflowTPU< T, P > Class Template Reference (abstract)

This class is designed to enable quick, easy, and robust handling of .tflite models for deployment and inference on the Coral EdgeTPU Accelerator. More...

#include <TensorflowTPU.hpp>

Public Types

enum class  DeviceType { eAuto , ePCIe , eUSB }
 
enum class  PerformanceModes { eLow , eMedium , eHigh , eMax }
 

Public Member Functions

 TensorflowTPU (std::string szModelPath, PerformanceModes ePowerMode=PerformanceModes::eHigh, unsigned int unMaxBulkInQueueLength=32, bool bUSBAlwaysDFU=false)
 Construct a new TensorflowTPU object.
 
 ~TensorflowTPU ()
 Destroy the TensorflowTPU object.
 
void CloseHardware ()
 Release all hardware and reset models and interpreters.
 
TfLiteStatus OpenAndLoad (DeviceType eDeviceType=DeviceType::eAuto)
 Attempt to open the model at the given path and load it onto the EdgeTPU device.
 
bool GetDeviceIsOpened () const
 Accessor for the Device Is Opened private member.
 

Static Public Member Functions

static std::vector< edgetpu::EdgeTpuManager::DeviceEnumerationRecord > GetHardwareDevices ()
 Retrieve a list of EdgeTPU devices from the edge API.
 
static std::vector< std::shared_ptr< edgetpu::EdgeTpuContext > > GetOpenedHardwareDevices ()
 Retrieve a list of already opened EdgeTPU devices from the edge API.
 

Protected Member Functions

edgetpu::EdgeTpuManager * GetEdgeManager ()
 Retrieves a pointer to an EdgeTPUManager instance from the libedgetpu library.
 
std::string DeviceTypeToString (edgetpu::DeviceType eDeviceType)
 to_string method for converting a device type to a readable string.
 

Protected Attributes

std::string m_szModelPath
 
edgetpu::EdgeTpuManager::DeviceEnumerationRecord m_tpuDevice
 
edgetpu::EdgeTpuManager::DeviceOptions m_tpuDeviceOptions
 
std::unique_ptr< tflite::FlatBufferModel > m_pTFLiteModel
 
std::shared_ptr< edgetpu::EdgeTpuContext > m_pEdgeTPUContext
 
std::unique_ptr< tflite::Interpreter > m_pInterpreter
 
bool m_bDeviceOpened
 

Private Member Functions

virtual T Inference (const P &tInput, const float fMinObjectConfidence, const float fNMSThreshold)=0
 

Detailed Description

template<typename T, typename P>
class TensorflowTPU< T, P >

This class is designed to enable quick, easy, and robust handling of .tflite models for deployment and inference on the Coral EdgeTPU Accelerator.

Template Parameters
T- The type used for the return of the Inference() method.
P- The type used for the argument of the Inference() method.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-24

Member Enumeration Documentation

◆ DeviceType

template<typename T , typename P >
enum class TensorflowTPU::DeviceType
{
    eAuto,    // Any open device will be picked. Prioritizes PCIe device if not already in use.
    ePCIe,    // Attempt to use a PCIe device for this model.
    eUSB      // Attempt to use a USB device for this model.
};

◆ PerformanceModes

template<typename T , typename P >
enum class TensorflowTPU::PerformanceModes
{
    eLow,       // Power saver mode. Low power draw and little heat output, but not great performance.
    eMedium,    // Balanced. Medium power draw, medium performance.
    eHigh,      // Performance mode. High power draw and increased heat output, great performance.
    eMax        // Maximum clock speed. Max performance, but greatest power draw and heat output. Could damage device in hot environments.
};

Constructor & Destructor Documentation

◆ TensorflowTPU()

template<typename T , typename P >
TensorflowTPU< T, P >::TensorflowTPU ( std::string  szModelPath,
PerformanceModes  ePowerMode = PerformanceModes::eHigh,
unsigned int  unMaxBulkInQueueLength = 32,
bool  bUSBAlwaysDFU = false 
)
inline

Construct a new TensorflowTPU object.

Parameters
szModelPath- The path to the model to open and inference on the EdgeTPU.
ePowerMode- The desired power mode of the device.
unMaxBulkInQueueLength- Input queue length for device. Larger queue may improve USB performance going from device to host.
bUSBAlwaysDFU- Whether or not to always reload firmware into the device when this object is created.
Note
The given model must be a .tflite model custom compiled to map operations to the EdgeTPU. Refer to https://coral.ai/docs/edgetpu/models-intro/#compiling and https://coral.ai/docs/edgetpu/compiler/#system-requirements for details.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{
    // Initialize member variables.
    m_szModelPath = szModelPath;
    m_tpuDeviceOptions["Usb.MaxBulkInQueueLength"] = std::to_string(unMaxBulkInQueueLength);
    m_bDeviceOpened = false;

    // Determine which power mode should be set.
    switch (ePowerMode)
    {
        case PerformanceModes::eLow: m_tpuDeviceOptions["Performance"] = "Low"; break;
        case PerformanceModes::eMedium: m_tpuDeviceOptions["Performance"] = "Medium"; break;
        case PerformanceModes::eHigh: m_tpuDeviceOptions["Performance"] = "High"; break;
        case PerformanceModes::eMax: m_tpuDeviceOptions["Performance"] = "Max"; break;
        default: m_tpuDeviceOptions["Performance"] = "High"; break;
    }

    // Determine if firmware should be loaded every time the code is started.
    if (bUSBAlwaysDFU)
    {
        // Always load firmware.
        m_tpuDeviceOptions["Usb.AlwaysDfu"] = "True";
    }
    else
    {
        // Only load firmware on first init of device.
        m_tpuDeviceOptions["Usb.AlwaysDfu"] = "False";
    }
}

◆ ~TensorflowTPU()

template<typename T , typename P >
TensorflowTPU< T, P >::~TensorflowTPU ( )
inline

Destroy the TensorflowTPU object.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-10-24
{
    // Check if device has been opened.
    if (m_bDeviceOpened)
    {
        // Close tflite interpreter.
        m_pInterpreter.reset();
        // Close edgetpu hardware.
        m_pEdgeTPUContext.reset();
        // Close model.
        m_pTFLiteModel.reset();
    }
}

Member Function Documentation

◆ CloseHardware()

template<typename T , typename P >
void TensorflowTPU< T, P >::CloseHardware ( )
inline

Release all hardware and reset models and interpreters.

Author
clayjay3 (claytonraycowen@gmail.com)
Date
2024-03-31
{
    // Set opened toggle.
    m_bDeviceOpened = false;
    // Close tflite interpreter.
    m_pInterpreter.reset();
    // Close edgetpu hardware.
    m_pEdgeTPUContext.reset();
    // Close model.
    m_pTFLiteModel.reset();
}

◆ OpenAndLoad()

template<typename T , typename P >
TfLiteStatus TensorflowTPU< T, P >::OpenAndLoad ( DeviceType  eDeviceType = DeviceType::eAuto)
inline

Attempt to open the model at the given path and load it onto the EdgeTPU device.

Parameters
eDeviceType- An enumerator specifying which device this model should run on. (PCIe, USB, or autoselect)
Returns
TfLiteStatus - The Tensorflow Lite status of the model interpreter. Status will be TfLiteOk if model was successfully opened and loaded onto the device.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{
    // Create instance variables.
    TfLiteStatus tfReturnStatus = TfLiteStatus::kTfLiteCancelled;
    std::vector<edgetpu::EdgeTpuManager::DeviceEnumerationRecord> vValidDevices;

    // Determine which device is going to be used for this model.
    switch (eDeviceType)
    {
        case DeviceType::eAuto: m_tpuDevice.type = edgetpu::DeviceType(-1); break;
        case DeviceType::ePCIe: m_tpuDevice.type = edgetpu::DeviceType::kApexPci; break;
        case DeviceType::eUSB: m_tpuDevice.type = edgetpu::DeviceType::kApexUsb; break;
        default: m_tpuDevice.type = edgetpu::DeviceType(-1); break;
    }

    // Load compiled Edge TPU model as a flatbuffer model.
    m_pTFLiteModel = tflite::FlatBufferModel::VerifyAndBuildFromFile(m_szModelPath.c_str());
    // Check if model was successfully opened.
    if (m_pTFLiteModel != nullptr)
    {
        // Get a list of available devices and already opened devices.
        std::vector<edgetpu::EdgeTpuManager::DeviceEnumerationRecord> vDevices = this->GetHardwareDevices();
        std::vector<std::shared_ptr<edgetpu::EdgeTpuContext>> vAlreadyOpenedDevices = this->GetOpenedHardwareDevices();

        // Build a list of valid, unopened devices by looping through available devices.
        for (unsigned int unIter = 0; unIter < vDevices.size(); ++unIter)
        {
            // Assume the device is valid until proven otherwise.
            bool bValidDevice = true;

            // Unless autoselecting, check that the device type matches the requested type.
            if (eDeviceType != DeviceType::eAuto && vDevices[unIter].type != m_tpuDevice.type)
            {
                // Set device as not valid.
                bValidDevice = false;
            }

            // Loop through all opened devices.
            for (unsigned int unJter = 0; unJter < vAlreadyOpenedDevices.size(); ++unJter)
            {
                // Check if the current available device has already been opened.
                if (vAlreadyOpenedDevices[unJter]->GetDeviceEnumRecord().path == vDevices[unIter].path)
                {
                    // Set device as not valid.
                    bValidDevice = false;
                }
            }

            // Check if still valid.
            if (bValidDevice)
            {
                // Append to valid devices vector.
                vValidDevices.emplace_back(vDevices[unIter]);
            }
        }

        // Check if any valid devices were found.
        if (vValidDevices.size() > 0)
        {
            // Loop through each device until one successfully opens.
            for (unsigned int unIter = 0; unIter < vValidDevices.size() && !m_bDeviceOpened; ++unIter)
            {
                // Submit logger message.
                LOG_INFO(logging::g_qSharedLogger,
                         "Attempting to load {} onto {} device at {} ({})...",
                         m_szModelPath,
                         this->DeviceTypeToString(vValidDevices[unIter].type),
                         vValidDevices[unIter].path,
                         this->DeviceTypeToString(vValidDevices[unIter].type));

                // Attempt to open device.
                m_pEdgeTPUContext = this->GetEdgeManager()->OpenDevice(vValidDevices[unIter].type, vValidDevices[unIter].path, m_tpuDeviceOptions);

                // Only proceed if device opened.
                if (m_pEdgeTPUContext != nullptr && m_pEdgeTPUContext->IsReady())
                {
                    // Create custom tflite operations for edge tpu.
                    tflite::ops::builtin::BuiltinOpResolverWithXNNPACK tfResolver;
                    tfResolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());
                    // Create tflite interpreter with model and operations resolver.
                    if (tflite::InterpreterBuilder(*m_pTFLiteModel, tfResolver)(&m_pInterpreter) != kTfLiteOk)
                    {
                        // Submit logger message.
                        LOG_ERROR(logging::g_qSharedLogger,
                                  "Unable to build interpreter for model {} with device {} ({})",
                                  m_szModelPath,
                                  vValidDevices[unIter].path,
                                  this->DeviceTypeToString(vValidDevices[unIter].type));

                        // Release interpreter and context.
                        m_pInterpreter.reset();
                        m_pEdgeTPUContext.reset();

                        // Update return status.
                        tfReturnStatus = TfLiteStatus::kTfLiteUnresolvedOps;
                    }
                    else
                    {
                        // Bind the given context device with interpreter.
                        m_pInterpreter->SetExternalContext(kTfLiteEdgeTpuContext, m_pEdgeTPUContext.get());
                        // Attempt to allocate necessary tensors for model onto device.
                        if (m_pInterpreter->AllocateTensors() != kTfLiteOk)
                        {
                            // Submit logger message.
                            LOG_WARNING(logging::g_qSharedLogger,
                                        "Even though device was opened and interpreter was built, allocation of tensors failed for model {} with device {} ({})",
                                        m_szModelPath,
                                        vValidDevices[unIter].path,
                                        this->DeviceTypeToString(vValidDevices[unIter].type));

                            // Release interpreter and context.
                            m_pInterpreter.reset();
                            m_pEdgeTPUContext.reset();

                            // Update return status.
                            tfReturnStatus = TfLiteStatus::kTfLiteDelegateDataWriteError;
                        }
                        else
                        {
                            // Submit logger message.
                            LOG_INFO(logging::g_qSharedLogger,
                                     "Successfully opened and loaded model {} with device {} ({})",
                                     m_szModelPath,
                                     vValidDevices[unIter].path,
                                     this->DeviceTypeToString(vValidDevices[unIter].type));

                            // Set toggle that model is opened with device.
                            m_bDeviceOpened = true;

                            // Update return status.
                            tfReturnStatus = TfLiteStatus::kTfLiteOk;
                        }
                    }
                }
                else
                {
                    // Submit logger message.
                    LOG_ERROR(logging::g_qSharedLogger,
                              "Unable to open device {} ({}) for model {}.",
                              vValidDevices[unIter].path,
                              this->DeviceTypeToString(vValidDevices[unIter].type),
                              m_szModelPath);
                }
            }
        }
        else
        {
            // Submit logger message.
            LOG_ERROR(logging::g_qSharedLogger,
                      "No valid devices were found for model {}. Device type is {}",
                      m_szModelPath,
                      this->DeviceTypeToString(m_tpuDevice.type));
        }
    }
    else
    {
        // Submit logger message.
        LOG_ERROR(logging::g_qSharedLogger, "Unable to load model {}. Does it exist at this path? Is it actually compiled for the EdgeTPU?", m_szModelPath);
    }

    // Return status.
    return tfReturnStatus;
}

◆ GetDeviceIsOpened()

template<typename T , typename P >
bool TensorflowTPU< T, P >::GetDeviceIsOpened ( ) const
inline

Accessor for the Device Is Opened private member.

Returns
true - Model has been successfully opened and loaded onto device.
false - Model has not yet been opened or loaded onto a device.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{ return m_bDeviceOpened; }

◆ GetHardwareDevices()

template<typename T , typename P >
static std::vector< edgetpu::EdgeTpuManager::DeviceEnumerationRecord > TensorflowTPU< T, P >::GetHardwareDevices ( )
inlinestatic

Retrieve a list of EdgeTPU devices from the edge API.

Returns
std::vector<edgetpu::EdgeTpuManager::DeviceEnumerationRecord> - A vector containing device records for currently connected devices. Each device record contains a type (usb/pcie) and path.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{
    // Create instance variables.
    edgetpu::EdgeTpuManager* tpuEdgeManagerInstance = edgetpu::EdgeTpuManager::GetSingleton();

    // Check if edgetpu singleton objects are supported.
    if (tpuEdgeManagerInstance != nullptr)
    {
        // Get a list of devices from the edgetpu api.
        return tpuEdgeManagerInstance->EnumerateEdgeTpu();
    }
    else
    {
        // Return empty vector.
        return std::vector<edgetpu::EdgeTpuManager::DeviceEnumerationRecord>();
    }
}

◆ GetOpenedHardwareDevices()

template<typename T , typename P >
static std::vector< std::shared_ptr< edgetpu::EdgeTpuContext > > TensorflowTPU< T, P >::GetOpenedHardwareDevices ( )
inlinestatic

Retrieve a list of already opened EdgeTPU devices from the edge API.

Returns
std::vector<std::shared_ptr<edgetpu::EdgeTpuContext>> - A vector containing contexts for currently opened devices. Each context's device record contains a type (usb/pcie) and path.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{
    // Create instance variables.
    edgetpu::EdgeTpuManager* tpuEdgeManagerInstance = edgetpu::EdgeTpuManager::GetSingleton();

    // Check if edgetpu singleton objects are supported.
    if (tpuEdgeManagerInstance != nullptr)
    {
        // Get a list of opened devices from the edgetpu api.
        return tpuEdgeManagerInstance->GetOpenedDevices();
    }
    else
    {
        // Return empty vector.
        return std::vector<std::shared_ptr<edgetpu::EdgeTpuContext>>();
    }
}

◆ GetEdgeManager()

template<typename T , typename P >
edgetpu::EdgeTpuManager * TensorflowTPU< T, P >::GetEdgeManager ( )
inlineprotected

Retrieves a pointer to an EdgeTPUManager instance from the libedgetpu library.

Returns
edgetpu::EdgeTpuManager* - A pointer to the manager. Will be nullptr if not supported on this operating system.
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{
    // Create instance variables.
    edgetpu::EdgeTpuManager* tpuEdgeManagerInstance = edgetpu::EdgeTpuManager::GetSingleton();

    // Check if edgetpu singleton objects are supported.
    if (tpuEdgeManagerInstance == nullptr)
    {
        // Submit logger message.
        LOG_CRITICAL(logging::g_qSharedLogger, "Unable to get EdgeTPU manager! This operating system does not support singletons.");
    }

    // Return a pointer to the manager instance (nullptr if unsupported).
    return tpuEdgeManagerInstance;
}

◆ DeviceTypeToString()

template<typename T , typename P >
std::string TensorflowTPU< T, P >::DeviceTypeToString ( edgetpu::DeviceType  eDeviceType)
inlineprotected

to_string method for converting a device type to a readable string.

Parameters
eDeviceType- The edgetpu device type. (kApexUsb or kApexPci)
Returns
std::string - The equivalent string. (USB or PCIe)
Author
clayjay3 (claytonraycowen@gmail.com)
Date
2023-11-11
{
    // Determine which device type string should be returned.
    switch (eDeviceType)
    {
        case edgetpu::DeviceType::kApexUsb: return "USB";
        case edgetpu::DeviceType::kApexPci: return "PCIe";
        default: return "Not Found";
    }
}

◆ Inference()

template<typename T , typename P >
virtual T TensorflowTPU< T, P >::Inference ( const P &  tInput,
const float  fMinObjectConfidence,
const float  fNMSThreshold 
)
private, pure virtual

Run inference on the loaded model with the given input. Must be implemented by the deriving class; T is the return type and P is the input type of the method.

The documentation for this class was generated from the following file: TensorflowTPU.hpp