TensorRT  7.2.1.6
NVIDIA TensorRT
nvinfer1::ITensor Class Reference (abstract)

A tensor in a network definition. More...

Public Member Functions

virtual void setName (const char *name)=0
 Set the tensor name. More...
 
virtual const char * getName () const =0
 Get the tensor name. More...
 
virtual void setDimensions (Dims dimensions)=0
 Set the dimensions of a tensor. More...
 
virtual Dims getDimensions () const =0
 Get the dimensions of a tensor. More...
 
virtual void setType (DataType type)=0
 Set the data type of a tensor. More...
 
virtual DataType getType () const =0
 Get the data type of a tensor. More...
 
virtual bool setDynamicRange (float min, float max)=0
 Set dynamic range for the tensor. More...
 
virtual float getDynamicRange () const =0  (deprecated)
 Get dynamic range for the tensor. More...
 
virtual bool isNetworkInput () const =0
 Whether the tensor is a network input. More...
 
virtual bool isNetworkOutput () const =0
 Whether the tensor is a network output. More...
 
virtual void setBroadcastAcrossBatch (bool broadcastAcrossBatch)=0
 Set whether to enable broadcast of tensor across the batch. More...
 
virtual bool getBroadcastAcrossBatch () const =0
 Check if tensor is broadcast across the batch. More...
 
virtual TensorLocation getLocation () const =0
 Get the storage location of a tensor. More...
 
virtual void setLocation (TensorLocation location)=0
 Set the storage location of a tensor. More...
 
virtual bool dynamicRangeIsSet () const =0
 Query whether dynamic range is set. More...
 
virtual void resetDynamicRange ()=0
 Undo effect of setDynamicRange. More...
 
virtual float getDynamicRangeMin () const =0
 Get minimum of dynamic range. More...
 
virtual float getDynamicRangeMax () const =0
 Get maximum of dynamic range. More...
 
virtual void setAllowedFormats (TensorFormats formats)=0
 Set allowed formats for this tensor. More...
 
virtual TensorFormats getAllowedFormats () const =0
 Get a bitmask of TensorFormat values that the tensor supports. More...
 
virtual bool isShapeTensor () const =0
 Whether the tensor is a shape tensor. More...
 
virtual bool isExecutionTensor () const =0
 Whether the tensor is an execution tensor. More...
 

Protected Member Functions

virtual ~ITensor ()
 

Detailed Description

A tensor in a network definition.

To remove a tensor from a network definition, use INetworkDefinition::removeTensor().

When using the DLA, the cumulative size of all tensors that are not marked as network input or output tensors must be less than 1 GB to fit into a single subgraph. If the build option kGPU_FALLBACK is specified, multiple subgraphs can be created, each limited to less than 1 GB of internal tensor data.

Warning
Do not inherit from this class, as doing so will break forward-compatibility of the API and ABI.

Constructor & Destructor Documentation

◆ ~ITensor()

virtual nvinfer1::ITensor::~ITensor ( )
inline protected virtual

Member Function Documentation

◆ setName()

virtual void nvinfer1::ITensor::setName ( const char *  name)
pure virtual

Set the tensor name.

For a network input, the name is assigned by the application. For tensors which are layer outputs, a default name is assigned consisting of the layer name followed by the index of the output in brackets.

This method copies the name string.

Parameters
name: The name.
See also
getName()
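As a brief sketch of the two naming calls above (the surrounding network setup and the chosen name are illustrative, not part of this reference):

```cpp
#include "NvInfer.h"

// Sketch: rename a network input tensor and read the name back.
// Assumes `network` is an INetworkDefinition built elsewhere.
void renameInput(nvinfer1::INetworkDefinition& network)
{
    nvinfer1::ITensor* input = network.getInput(0);
    input->setName("images");            // the name string is copied
    const char* name = input->getName(); // now "images"
    (void) name;
}
```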

◆ getName()

virtual const char* nvinfer1::ITensor::getName ( ) const
pure virtual

Get the tensor name.

Returns
The name, as a pointer to a NULL-terminated character sequence.
See also
setName()

◆ setDimensions()

virtual void nvinfer1::ITensor::setDimensions ( Dims  dimensions)
pure virtual

Set the dimensions of a tensor.

For a network input, the dimensions are assigned by the application. For a network output, the dimensions are computed based on the layer parameters and the inputs to the layer. If a tensor size or a parameter is modified in the network, the dimensions of all dependent tensors will be recomputed.

This call is only legal for network input tensors, since the dimensions of layer output tensors are inferred based on layer inputs and parameters.

Parameters
dimensions: The dimensions of the tensor.
See also
getDimensions()

◆ getDimensions()

virtual Dims nvinfer1::ITensor::getDimensions ( ) const
pure virtual

Get the dimensions of a tensor.

Returns
The dimensions of the tensor.
Warning
getDimensions() returns -1 for dimensions that are derived from a wildcard dimension.
See also
setDimensions()
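A minimal sketch of the dimension calls above, on a network input (the shape is illustrative):

```cpp
#include "NvInfer.h"

// Sketch: set the dimensions of a network input and inspect them.
// setDimensions() is legal only for network input tensors.
void resizeInput(nvinfer1::INetworkDefinition& network)
{
    nvinfer1::ITensor* input = network.getInput(0);
    input->setDimensions(nvinfer1::Dims4{1, 3, 224, 224});
    nvinfer1::Dims d = input->getDimensions();
    // d.nbDims is 4 here; a dimension derived from a wildcard is reported as -1
    (void) d;
}
```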

◆ setType()

virtual void nvinfer1::ITensor::setType ( DataType  type)
pure virtual

Set the data type of a tensor.

Parameters
type: The data type of the tensor.

The type is unchanged if the tensor is not a network input tensor, or if it is marked as a network output or shape output tensor.

See also
getType()

◆ getType()

virtual DataType nvinfer1::ITensor::getType ( ) const
pure virtual

Get the data type of a tensor.

Returns
The data type of the tensor.
See also
setType()
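A short sketch of the type calls above (the choice of kHALF is illustrative):

```cpp
#include "NvInfer.h"

// Sketch: set a network input to FP16 and read the type back.
// setType() has no effect on tensors whose type is inferred.
void halfInput(nvinfer1::INetworkDefinition& network)
{
    nvinfer1::ITensor* input = network.getInput(0);
    input->setType(nvinfer1::DataType::kHALF);
    nvinfer1::DataType t = input->getType();
    (void) t;
}
```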

◆ setDynamicRange()

virtual bool nvinfer1::ITensor::setDynamicRange ( float  min,
float  max 
)
pure virtual

Set dynamic range for the tensor.

Currently, only symmetric ranges are supported. Therefore, the larger of the absolute values of the provided bounds is used.

Returns
Whether the dynamic range was set successfully.

Requires that min and max be finite, and min <= max.

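The dynamic-range methods documented in this section can be sketched together as follows (the range values are illustrative; in practice they come from calibration):

```cpp
#include "NvInfer.h"

// Sketch: set a symmetric INT8 dynamic range on a tensor, query it, undo it.
void tagRange(nvinfer1::ITensor& t)
{
    bool ok = t.setDynamicRange(-2.5f, 2.5f); // symmetric: max(|min|, |max|) is used
    if (ok && t.dynamicRangeIsSet())
    {
        float lo = t.getDynamicRangeMin(); // -2.5f
        float hi = t.getDynamicRangeMax(); //  2.5f
        (void) lo; (void) hi;
    }
    t.resetDynamicRange(); // dynamicRangeIsSet() is false again
}
```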

◆ getDynamicRange()

virtual float nvinfer1::ITensor::getDynamicRange ( ) const
pure virtual

Get dynamic range for the tensor.

Returns
The maximal absolute value of the dynamic range, or -1.0f if no dynamic range is set.
Deprecated:
This interface is superseded by getDynamicRangeMin and getDynamicRangeMax and will be removed in TensorRT 8.0.

◆ isNetworkInput()

virtual bool nvinfer1::ITensor::isNetworkInput ( ) const
pure virtual

Whether the tensor is a network input.

◆ isNetworkOutput()

virtual bool nvinfer1::ITensor::isNetworkOutput ( ) const
pure virtual

Whether the tensor is a network output.

◆ setBroadcastAcrossBatch()

virtual void nvinfer1::ITensor::setBroadcastAcrossBatch ( bool  broadcastAcrossBatch)
pure virtual

Set whether to enable broadcast of tensor across the batch.

When a tensor is broadcast across a batch, it has the same value for every member in the batch. Memory is only allocated once for the single member.

This method is only valid for network input tensors, since the flags of layer output tensors are inferred based on layer inputs and parameters. If this state is modified for a tensor in the network, the states of all dependent tensors will be recomputed. If the tensor is for an explicit batch network, then this function does nothing.

Warning
The broadcast flag is ignored when using explicit batch network mode.
Parameters
broadcastAcrossBatch: Whether to enable broadcast of the tensor across the batch.
See also
getBroadcastAcrossBatch()

◆ getBroadcastAcrossBatch()

virtual bool nvinfer1::ITensor::getBroadcastAcrossBatch ( ) const
pure virtual

Check if tensor is broadcast across the batch.

When a tensor is broadcast across a batch, it has the same value for every member in the batch. Memory is only allocated once for the single member. If the network is in explicit batch mode, this function returns true if the leading dimension is 1.

Returns
True if tensor is broadcast across the batch, false otherwise.
See also
setBroadcastAcrossBatch()
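A sketch of the broadcast flag for implicit batch networks (the choice of input index is illustrative; the flag is ignored in explicit batch mode):

```cpp
#include "NvInfer.h"

// Sketch (implicit batch mode): share one input tensor, e.g. a bias,
// across every batch member so memory is allocated only once.
void broadcastBias(nvinfer1::INetworkDefinition& network)
{
    nvinfer1::ITensor* bias = network.getInput(1); // hypothetical bias input
    bias->setBroadcastAcrossBatch(true);
    bool b = bias->getBroadcastAcrossBatch();
    (void) b;
}
```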

◆ getLocation()

virtual TensorLocation nvinfer1::ITensor::getLocation ( ) const
pure virtual

Get the storage location of a tensor.

Returns
The location of tensor data.
See also
setLocation()

◆ setLocation()

virtual void nvinfer1::ITensor::setLocation ( TensorLocation  location)
pure virtual

Set the storage location of a tensor.

Parameters
location: The location of tensor data.

Host storage is supported only for network input tensors that store sequence lengths for RNNv2. Using host storage for layers that do not support it will generate errors at build time.

See also
getLocation()
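A sketch of the one supported host-storage case described above (the input index is illustrative):

```cpp
#include "NvInfer.h"

// Sketch: keep an RNNv2 sequence-length input in host memory.
// Only this use case supports TensorLocation::kHOST.
void hostSeqLens(nvinfer1::INetworkDefinition& network)
{
    nvinfer1::ITensor* seqLens = network.getInput(2); // hypothetical seq-length input
    seqLens->setLocation(nvinfer1::TensorLocation::kHOST);
}
```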

◆ dynamicRangeIsSet()

virtual bool nvinfer1::ITensor::dynamicRangeIsSet ( ) const
pure virtual

Query whether dynamic range is set.

Returns
True if dynamic range is set, false otherwise.

◆ resetDynamicRange()

virtual void nvinfer1::ITensor::resetDynamicRange ( )
pure virtual

Undo effect of setDynamicRange.

◆ getDynamicRangeMin()

virtual float nvinfer1::ITensor::getDynamicRangeMin ( ) const
pure virtual

Get minimum of dynamic range.

Returns
Minimum of dynamic range, or quiet NaN if range was not set.

◆ getDynamicRangeMax()

virtual float nvinfer1::ITensor::getDynamicRangeMax ( ) const
pure virtual

Get maximum of dynamic range.

Returns
Maximum of dynamic range, or quiet NaN if range was not set.

◆ setAllowedFormats()

virtual void nvinfer1::ITensor::setAllowedFormats ( TensorFormats  formats)
pure virtual

Set allowed formats for this tensor.

By default all formats are allowed. Shape tensors (for which isShapeTensor() returns true) may only have row major linear format.

When the network runs on the DLA and GPU fallback (allowGPUFallback) is disabled, setting a DLA format (kCHW4 with Int8, kCHW4 with FP16, kCHW16 with FP16, or kCHW32 with Int8) causes the input format to be treated as a native DLA format with a line-stride requirement. Input/output bindings using these formats must have the correct layout during inference.

Parameters
formats: A bitmask of TensorFormat values that are supported for this tensor.
See also
ITensor::getAllowedFormats()
TensorFormats
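Since TensorFormats is a bitmask, allowed formats are combined by shifting each TensorFormat value into a bit position, as in this sketch (the particular formats chosen are illustrative):

```cpp
#include "NvInfer.h"

// Sketch: restrict a tensor to linear and CHW32 layouts.
// Each bit of the TensorFormats mask corresponds to a TensorFormat value.
void restrictFormats(nvinfer1::ITensor& t)
{
    nvinfer1::TensorFormats formats =
        (1U << static_cast<int>(nvinfer1::TensorFormat::kLINEAR)) |
        (1U << static_cast<int>(nvinfer1::TensorFormat::kCHW32));
    t.setAllowedFormats(formats);
}
```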

◆ getAllowedFormats()

virtual TensorFormats nvinfer1::ITensor::getAllowedFormats ( ) const
pure virtual

Get a bitmask of TensorFormat values that the tensor supports.

For a shape tensor, only row major linear format is allowed.

Returns
The value specified by setAllowedFormats or all possible formats.
See also
ITensor::setAllowedFormats()

◆ isShapeTensor()

virtual bool nvinfer1::ITensor::isShapeTensor ( ) const
pure virtual

Whether the tensor is a shape tensor.

A shape tensor is a tensor that is related to shape calculations. It must be 0D or 1D, have type Int32 or Bool, and its shape must be determinable at build time. Furthermore, it must be needed as a shape tensor, either marked as a network shape output via markOutputForShapes(), or as an input that is required to be a shape tensor, such as the second input to IShuffleLayer. Some layers are "polymorphic" in this respect. For example, the inputs to IElementWiseLayer must be shape tensors if the output is a shape tensor.

The TensorRT Developer Guide gives the formal rules for which tensors are shape tensors.

The result of isShapeTensor() is reliable only when network construction is complete. For example, if a partially built network sums two tensors T1 and T2 to create tensor T3, and none are yet needed as shape tensors, isShapeTensor() returns false for all three tensors. Setting the second input of IShuffleLayer to be T3 would cause all three tensors to be shape tensors, because IShuffleLayer requires that its second optional input be a shape tensor, and IElementWiseLayer is "polymorphic".

If a tensor is a shape tensor and becomes an engine input or output, then ICudaEngine::isShapeBinding will be true for that tensor.

It is possible for a tensor to be both a shape tensor and an execution tensor.

Returns
True if tensor is a shape tensor, false otherwise.
See also
INetworkDefinition::markOutputForShapes(), ICudaEngine::isShapeBinding()
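The IShuffleLayer case mentioned above can be sketched as follows (the tensor roles are illustrative; note the query result is only reliable once network construction is complete):

```cpp
#include "NvInfer.h"

// Sketch: feeding a tensor as IShuffleLayer's second input makes it a
// shape tensor, since that input is required to be one.
void reshapeWith(nvinfer1::INetworkDefinition& network,
                 nvinfer1::ITensor& data, nvinfer1::ITensor& newShape)
{
    nvinfer1::IShuffleLayer* shuffle = network.addShuffle(data);
    shuffle->setInput(1, newShape);    // newShape becomes a shape tensor
    bool s = newShape.isShapeTensor(); // true once the network is complete
    (void) s;
}
```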

◆ isExecutionTensor()

virtual bool nvinfer1::ITensor::isExecutionTensor ( ) const
pure virtual

Whether the tensor is an execution tensor.

Tensors are usually execution tensors. The exceptions are tensors used solely for shape calculations or whose contents are not needed to compute the outputs.

The result of isExecutionTensor() is reliable only when network construction is complete. For example, if a partially built network has no path from a tensor to a network output, isExecutionTensor() returns false. Completing the path would cause it to become true.

If a tensor is an execution tensor and becomes an engine input or output, then ICudaEngine::isExecutionBinding will be true for that tensor.

A tensor with isShapeTensor() == false and isExecutionTensor() == false can still show up as an input to the engine if its dimensions are required. In that case, only its dimensions need to be set at runtime and a nullptr can be passed instead of a pointer to its contents.

