API - NN

Layer list

Module([name, act])

The basic Module class represents a single layer of a neural network.

Sequential(*args)

The class Sequential is a linear stack of layers.

ModuleList([modules])

Holds submodules in a list.

ModuleDict([modules])

Holds submodules in a dictionary.

Parameter([data, name])

This function creates a parameter.

ParameterList([parameters])

Holds parameters in a list.

ParameterDict([parameters])

Holds parameters in a dictionary.

Input(shape[, init, dtype, name])

The Input class is the starting layer of a neural network.

OneHot([depth, on_value, off_value, axis, …])

The OneHot class is the starting layer of a neural network, see tf.one_hot.

Word2vecEmbedding(num_embeddings, embedding_dim)

The Word2vecEmbedding class is a fully connected layer.

Embedding(num_embeddings, embedding_dim[, …])

A simple lookup table that stores embeddings of a fixed dictionary and size.

AverageEmbedding(num_embeddings, embedding_dim)

The AverageEmbedding averages over embeddings of inputs.

Linear(out_features[, act, W_init, b_init, …])

Applies a linear transformation to the incoming data: \(y = xA^T + b\)

Dropout([p, seed, name])

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution.

GaussianNoise([mean, stddev, is_always, …])

The GaussianNoise class is a noise layer that adds Gaussian-distributed noise to the activations.

DropconnectLinear([keep, out_features, act, …])

The DropconnectLinear class is a Linear layer with DropConnect behaviour, which randomly removes connections between this layer and the previous layer according to a keeping probability.

UpSampling2d(scale[, method, antialias, …])

The UpSampling2d class is a 2D up-sampling layer.

DownSampling2d(scale[, method, antialias, …])

The DownSampling2d class is a 2D down-sampling layer.

Conv1d([out_channels, kernel_size, stride, …])

Applies a 1D convolution over an input signal composed of several input planes.

Conv2d([out_channels, kernel_size, stride, …])

Applies a 2D convolution over an input signal composed of several input planes.

Conv3d([out_channels, kernel_size, stride, …])

Applies a 3D convolution over an input signal composed of several input planes.

ConvTranspose1d([out_channels, kernel_size, …])

Applies a 1D transposed convolution operator over an input image composed of several input planes.

ConvTranspose2d([out_channels, kernel_size, …])

Applies a 2D transposed convolution operator over an input image composed of several input planes.

ConvTranspose3d([out_channels, kernel_size, …])

Applies a 3D transposed convolution operator over an input image composed of several input planes.

DepthwiseConv2d([kernel_size, stride, act, …])

Separable/Depthwise Convolutional 2D layer, see tf.nn.depthwise_conv2d.

SeparableConv1d([out_channels, kernel_size, …])

The SeparableConv1d class is a 1D depthwise separable convolutional layer.

SeparableConv2d([out_channels, kernel_size, …])

The SeparableConv2d class is a 2D depthwise separable convolutional layer.

DeformableConv2d([offset_layer, …])

The DeformableConv2d class implements 2D Deformable Convolutional Networks.

GroupConv2d([out_channels, kernel_size, …])

The GroupConv2d class implements 2D grouped convolution, see here.

PadLayer([padding, mode, constant_values, name])

The PadLayer class is a padding layer for any mode and dimension.

ZeroPad1d(padding[, name, data_format])

The ZeroPad1d class is a 1D padding layer for signal [batch, length, channel].

ZeroPad2d(padding[, name, data_format])

The ZeroPad2d class is a 2D padding layer for image [batch, height, width, channel].

ZeroPad3d(padding[, name, data_format])

The ZeroPad3d class is a 3D padding layer for volume [batch, depth, height, width, channel].

MaxPool1d([kernel_size, stride, padding, …])

Max pooling for 1D signal.

AvgPool1d([kernel_size, stride, padding, …])

Avg pooling for 1D signal.

MaxPool2d([kernel_size, stride, padding, …])

Max pooling for 2D image.

AvgPool2d([kernel_size, stride, padding, …])

Avg pooling for 2D image [batch, height, width, channel].

MaxPool3d([kernel_size, stride, padding, …])

Max pooling for 3D volume.

AvgPool3d([kernel_size, stride, padding, …])

Avg pooling for 3D volume.

GlobalMaxPool1d([data_format, name])

The GlobalMaxPool1d class is a 1D Global Max Pooling layer.

GlobalAvgPool1d([data_format, name])

The GlobalAvgPool1d class is a 1D Global Avg Pooling layer.

GlobalMaxPool2d([data_format, name])

The GlobalMaxPool2d class is a 2D Global Max Pooling layer.

GlobalAvgPool2d([data_format, name])

The GlobalAvgPool2d class is a 2D Global Avg Pooling layer.

GlobalMaxPool3d([data_format, name])

The GlobalMaxPool3d class is a 3D Global Max Pooling layer.

GlobalAvgPool3d([data_format, name])

The GlobalAvgPool3d class is a 3D Global Avg Pooling layer.

AdaptiveAvgPool1d(output_size[, …])

The AdaptiveAvgPool1d class is a 1D Adaptive Avg Pooling layer.

AdaptiveMaxPool1d(output_size[, …])

The AdaptiveMaxPool1d class is a 1D Adaptive Max Pooling layer.

AdaptiveAvgPool2d(output_size[, …])

The AdaptiveAvgPool2d class is a 2D Adaptive Avg Pooling layer.

AdaptiveMaxPool2d(output_size[, …])

The AdaptiveMaxPool2d class is a 2D Adaptive Max Pooling layer.

AdaptiveAvgPool3d(output_size[, …])

The AdaptiveAvgPool3d class is a 3D Adaptive Avg Pooling layer.

AdaptiveMaxPool3d(output_size[, …])

The AdaptiveMaxPool3d class is a 3D Adaptive Max Pooling layer.

CornerPool2d([mode, name])

Corner pooling for 2D image [batch, height, width, channel], see here.

SubpixelConv1d([scale, act, in_channels, name])

It is a 1D sub-pixel up-sampling layer.

SubpixelConv2d([scale, data_format, act, name])

It is a 2D sub-pixel up-sampling layer, usually used for Super-Resolution applications, see SRGAN for example.

BatchNorm([momentum, epsilon, act, …])

This interface is used to construct a callable object of the BatchNorm class.

BatchNorm1d([momentum, epsilon, act, …])

The BatchNorm1d applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with an optional additional channel dimension) of shape (N, C), (N, L, C) or (N, C, L).

BatchNorm2d([momentum, epsilon, act, …])

The BatchNorm2d applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with an additional channel dimension) of shape (N, H, W, C) or (N, C, H, W).

BatchNorm3d([momentum, epsilon, act, …])

The BatchNorm3d applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with an additional channel dimension) of shape (N, D, H, W, C) or (N, C, D, H, W).

LayerNorm(normalized_shape[, epsilon, …])

It implements Layer Normalization and can be applied to mini-batch input data.

RNNCell(input_size, hidden_size[, bias, …])

An Elman RNN cell with tanh or ReLU non-linearity.

LSTMCell(input_size, hidden_size[, bias, name])

A long short-term memory (LSTM) cell.

GRUCell(input_size, hidden_size[, bias, name])

A gated recurrent unit (GRU) cell.

RNN(input_size, hidden_size[, num_layers, …])

Multi-layer Elman network (RNN).

LSTM(input_size, hidden_size[, num_layers, …])

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

GRU(input_size, hidden_size[, num_layers, …])

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

MultiheadAttention(embed_dim, num_heads[, …])

Allows the model to jointly attend to information from different representation subspaces.

Transformer([d_model, nhead, …])

A transformer model.

TransformerEncoder(encoder_layer, num_layers)

TransformerEncoder is a stack of N encoder layers.

TransformerDecoder(decoder_layer, num_layers)

TransformerDecoder is a stack of N decoder layers.

TransformerEncoderLayer(d_model, nhead, …)

TransformerEncoderLayer is made up of self-attn and feedforward network.

TransformerDecoderLayer(d_model, nhead, …)

TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.

Flatten([name])

A layer that reshapes a high-dimensional input into a vector.

Reshape(shape[, name])

A layer that reshapes a given tensor.

Transpose([perm, conjugate, name])

A layer that transposes the dimensions of a tensor.

Shuffle(group[, in_channels, name])

A layer that shuffles the channels of a 2D image [batch, height, width, channel], see here.

Concat([concat_dim, name])

A layer that concatenates multiple tensors along a given axis.

Elementwise([combine_fn, act, name])

A layer that combines multiple Layer instances that have the same output shape using an element-wise operation.

ExpandDims([axis, name])

The ExpandDims class inserts a dimension of 1 into a tensor’s shape, see tf.expand_dims().

Tile([multiples, name])

The Tile class constructs a tensor by tiling a given tensor, see tf.tile().

Stack([axis, name])

The Stack class is a layer for stacking a list of rank-R tensors into one rank-(R+1) tensor, see tf.stack().

UnStack([num, axis, name])

The UnStack class is a layer for unstacking the given dimension of a rank-R tensor into rank-(R-1) tensors, see tf.unstack().

Scale([init_scale, name])

The Scale class multiplies the layer outputs by a trainable scale value.

BinaryLinear([out_features, act, use_gemm, …])

The BinaryLinear class is a binary fully connected layer, whose weights are either -1 or 1 during inference.

BinaryConv2d([out_channels, kernel_size, …])

The BinaryConv2d class is a 2D binary CNN layer, whose weights are either -1 or 1 during inference.

TernaryLinear([out_features, act, use_gemm, …])

The TernaryLinear class is a ternary fully connected layer, whose weights are -1, 0 or 1 during inference.

TernaryConv2d([out_channels, kernel_size, …])

The TernaryConv2d class is a 2D ternary CNN layer, whose weights are -1, 0 or 1 during inference.

DorefaLinear([bitW, bitA, out_features, …])

The DorefaLinear class is a quantized fully connected layer, whose weights are ‘bitW’ bits and whose inputs (the outputs of the previous layer) are ‘bitA’ bits during inference.

DorefaConv2d([bitW, bitA, out_channels, …])

The DorefaConv2d class is a 2D quantized convolutional layer, whose weights are ‘bitW’ bits and whose inputs (the outputs of the previous layer) are ‘bitA’ bits during inference.

MaskedConv3d(mask_type, out_channels[, …])

MaskedConv3D.

Base Layer

Module

class tensorlayerx.nn.Module(name=None, act=None, *args, **kwargs)[source]

The basic Module class represents a single layer of a neural network. It should be subclassed when implementing new types of layers.

Parameters

name (str or None) – A unique layer name. If None, a unique name will be automatically assigned.

__init__()[source]

Initializing the Layer.

__call__()[source]

Forwarding the computation.

all_weights()

Return a list of Tensors containing all the weights of this layer.

trainable_weights()

Return a list of Tensors containing all the trainable weights of this layer.

nontrainable_weights()

Return a list of Tensors containing all the non-trainable weights of this layer.

build()[source]

Abstract method. Build the Layer. All trainable weights should be defined in this function.

_get_weights()[source]

Abstract method. Create weights for training parameters.

save_weights()[source]

Given a file_path, save the model weights into a file of the given format.

load_weights()[source]

Load model weights from a given file, which should be previously saved by self.save_weights().

save_standard_weights()[source]

Given a file_path, save the model weights into an npz_dict file. These parameters can be loaded across multiple backends.

load_standard_weights()[source]

Load model weights from a given file, which should be previously saved by self.save_standard_weights().

forward()[source]

Abstract method. Forward computation and return computation results.
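
As a minimal sketch of the subclassing workflow described above (the layer shapes, file name, and the eager-build pattern here are illustrative assumptions, not fixed API requirements): a new layer type subclasses Module, creates its trainable weights in build() via _get_weights(), defines the computation in forward(), and can then persist weights with save_weights()/load_weights().

>>> import tensorlayerx as tlx
>>> from tensorlayerx.nn import Module
>>> class MyDense(Module):
>>>     def __init__(self, out_features, in_features, name=None):
>>>         super(MyDense, self).__init__(name=name)
>>>         self.out_features = out_features
>>>         self.in_features = in_features
>>>         # in_features is known, so build the weights immediately
>>>         self.build(None)
>>>         self._built = True
>>>     def build(self, inputs_shape):
>>>         # all trainable weights are defined here
>>>         self.W = self._get_weights('weights', shape=(self.in_features, self.out_features))
>>>         self.b = self._get_weights('biases', shape=(self.out_features,))
>>>     def forward(self, inputs):
>>>         return tlx.matmul(inputs, self.W) + self.b
>>> net = MyDense(out_features=10, in_features=5)
>>> net.save_weights('./my_dense.npz')  # save all weights to a file
>>> net.load_weights('./my_dense.npz')  # restore them later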

Sequential

class tensorlayerx.nn.Sequential(*args)[source]

The class Sequential is a linear stack of layers. A Sequential can be created by passing a list of layer instances, which will be automatically connected one by one.

Parameters
  • layers (list of Layer) – A list of layers.

  • name (str or None) – A unique layer name. If None, a unique name will be automatically assigned.

__init__()[source]

Initializing the Sequential.

weights()

A collection of weights of all the layer instances.

build()[source]

Build the Sequential. The layer instances will be connected automatically one by one.

forward()[source]

Forward the computation. The computation will go through all layer instances.

Examples

>>> conv = tlx.nn.Conv2d(out_channels=2, kernel_size=3, padding='VALID', in_channels=3)
>>> bn = tlx.nn.BatchNorm2d(num_features=2)
>>> seq = tlx.nn.Sequential([conv, bn])
>>> x = tlx.nn.Input((1, 4, 4, 3))
>>> out = seq(x)

ModuleList

class tensorlayerx.nn.ModuleList(modules=None)[source]

Holds submodules in a list.

ModuleList can be used like a regular Python list. It supports ‘__getitem__’, ‘__setitem__’, ‘__delitem__’, ‘__len__’, ‘__iter__’ and ‘__iadd__’, but the modules it contains are properly registered and will be visible to all Module methods.

Parameters

modules (list) – A list of Module subclass instances.

__init__()[source]

Initializing the ModuleList.

insert()[source]

Inserts a given layer before a given index in the list.

extend()[source]

Appends layers from a Python iterable to the end of the list.

append()[source]

Appends a given layer to the end of the list.

Examples

>>> from tensorlayerx.nn import Module, ModuleList, Linear
>>> import tensorlayerx as tlx
>>> d1 = Linear(out_features=800, act=tlx.ReLU, in_features=784, name='linear1')
>>> d2 = Linear(out_features=800, act=tlx.ReLU, in_features=800, name='linear2')
>>> d3 = Linear(out_features=10, act=tlx.ReLU, in_features=800, name='linear3')
>>> layer_list = ModuleList([d1, d2])
>>> # Inserts a given d2 before a given index in the list
>>> layer_list.insert(1, d2)
>>> layer_list.insert(2, d2)
>>> # Appends d2 from a Python iterable to the end of the list.
>>> layer_list.extend([d2])
>>> # Appends a given d3 to the end of the list.
>>> layer_list.append(d3)

ModuleDict

class tensorlayerx.nn.ModuleDict(modules=None)[source]

Holds submodules in a dictionary.

ModuleDict can be used like a regular Python dictionary. It supports ‘__getitem__’, ‘__setitem__’, ‘__delitem__’, ‘__len__’, ‘__iter__’ and ‘__contains__’, but the modules it contains are properly registered and will be visible to all Module methods.

Parameters

modules (dict) – A mapping (dictionary) of (string: module), or an iterable of (string, module) key-value pairs.

__init__()[source]

Initializing the ModuleDict.

clear()[source]

Remove all items from the ModuleDict.

pop()[source]

Remove key from the ModuleDict and return its module.

keys()[source]

Return an iterable of the ModuleDict keys.

items()[source]

Return an iterable of the ModuleDict key/value pairs.

values()[source]

Return an iterable of the ModuleDict values.

update()[source]

Update the ModuleDict with the key-value pairs from a mapping or an iterable, overwriting existing keys.

Examples

>>> from tensorlayerx.nn import Module, ModuleDict, Linear
>>> import tensorlayerx as tlx
>>> class MyModule(Module):
>>>     def __init__(self):
>>>         super(MyModule, self).__init__()
>>>         self.dict = ModuleDict({
>>>                 'linear1':Linear(out_features=800, act=tlx.ReLU, in_features=784, name='linear1'),
>>>                 'linear2':Linear(out_features=800, act=tlx.ReLU, in_features=800, name='linear2')
>>>                 })
>>>     def forward(self, x, linear):
>>>         x = self.dict[linear](x)
>>>         return x

Parameter

tensorlayerx.nn.Parameter(data=None, name=None)[source]

This function creates a parameter. The parameter is a learnable variable, which can have gradient, and can be optimized.

Parameters
  • data (Tensor) – parameter tensor

  • requires_grad (bool) – if the parameter requires gradient. Default: True

Return type

Parameter

Examples

>>> import tensorlayerx as tlx
>>> para = tlx.nn.Parameter(data=tlx.ones((5,5)), requires_grad=True)

ParameterList

class tensorlayerx.nn.ParameterList(parameters=None)[source]

Holds parameters in a list.

ParameterList can be indexed like a regular Python list. It supports ‘__getitem__’, ‘__setitem__’, ‘__delitem__’, ‘__len__’, ‘__iter__’ and ‘__iadd__’.

Parameters

parameters (list) – A list of Parameter objects.

__init__()[source]

Initializing the ParameterList.

extend(parameters)[source]

Appends parameters from a Python iterable to the end of the list.

append(parameter)[source]

Appends a given parameter to the end of the list.

Examples

>>> from tensorlayerx.nn import Module, ParameterList, Parameter
>>> import tensorlayerx as tlx
>>> class MyModule(Module):
>>>     def __init__(self):
>>>         super(MyModule, self).__init__()
>>>         self.params2 = ParameterList([Parameter(tlx.ones((10,5))), Parameter(tlx.ones((5,10)))])
>>>     def forward(self, x):
>>>         x = tlx.matmul(x, self.params2[0])
>>>         x = tlx.matmul(x, self.params2[1])
>>>         return x

ParameterDict

class tensorlayerx.nn.ParameterDict(parameters=None)[source]

Holds parameters in a dictionary.

ParameterDict can be used like a regular Python dictionary. It supports ‘__getitem__’, ‘__setitem__’, ‘__delitem__’, ‘__len__’, ‘__iter__’ and ‘__contains__’.

Parameters

parameters (dict) – A mapping (dictionary) of (string: parameter), or an iterable of (string, parameter) key-value pairs.

__init__()[source]

Initializing the ParameterDict.

clear()[source]

Remove all items from the ParameterDict.

setdefault(key, default=None)[source]

If key is in the ParameterDict, return its parameter. If not, insert key with a parameter default and return default. default defaults to None.

popitem()[source]

Remove and return the last inserted (key, parameter) pair from the ParameterDict.

pop(key)[source]

Remove key from the ParameterDict and return its parameter.

get(key, default=None)

Return the parameter associated with key if present. Otherwise return default if provided, None if not.

fromkeys(keys, default=None)[source]

Return a new ParameterDict with the keys provided.

keys()[source]

Return an iterable of the ParameterDict keys.

items()[source]

Return an iterable of the ParameterDict key/value pairs.

values()[source]

Return an iterable of the ParameterDict values.

update()[source]

Update the ParameterDict with the key-value pairs from a mapping or an iterable, overwriting existing keys.

Examples

>>> from tensorlayerx.nn import Module, ParameterDict, Parameter
>>> import tensorlayerx as tlx
>>> class MyModule(Module):
>>>     def __init__(self):
>>>         super(MyModule, self).__init__()
>>>         self.dict = ParameterDict({
>>>                 'left': Parameter(tlx.ones((5, 10))),
>>>                 'right': Parameter(tlx.zeros((5, 10)))
>>>                 })
>>>     def forward(self, x, choice):
>>>         x = tlx.matmul(x, self.dict[choice])
>>>         return x

Input Layers

Input Layer

tensorlayerx.nn.Input(shape, init=None, dtype=tensorflow.float32, name=None)[source]

The Input class is the starting layer of a neural network.

Parameters
  • shape (tuple (int)) – Including batch size.

  • init (initializer or str or None) – The initializer for initializing the input matrix

  • dtype (dtype) – The type of input values. By default, tf.float32.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> ni = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> output shape : [10, 50, 50, 32]

One-hot Layer

class tensorlayerx.nn.OneHot(depth=None, on_value=1.0, off_value=0.0, axis=-1, dtype=tensorflow.float32, name=None)[source]

The OneHot class is the starting layer of a neural network, see tf.one_hot. Useful link: https://www.tensorflow.org/api_docs/python/tf/one_hot.

Parameters
  • depth (None or int) – The depth of the one-hot dimension. If the input indices are of rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

  • on_value (None or number) – The value to represent ON. If None, it will default to the value 1.

  • off_value (None or number) – The value to represent OFF. If None, it will default to the value 0.

  • axis (None or int) – The axis.

  • dtype (None or TensorFlow dtype) – The data type, None means tlx.float32.

  • name (str) – A unique layer name.

Examples

>>> net = tlx.nn.Input([32], dtype=tlx.int32)
>>> onehot = tlx.nn.OneHot(depth=8)
>>> print(onehot)
OneHot(depth=8, name='onehot')
>>> tensor = tlx.nn.OneHot(depth=8)(net)
>>> print(tensor)
Tensor([...], shape=(32, 8), dtype=float32)

Word2Vec Embedding Layer

class tensorlayerx.nn.Word2vecEmbedding(num_embeddings, embedding_dim, num_sampled=64, activate_nce_loss=True, nce_loss_args=None, E_init='random_uniform', nce_W_init='truncated_normal', nce_b_init='constant', name=None)[source]

The Word2vecEmbedding class is a fully connected layer. For word embedding, words are input as integer indices, and the output is the corresponding embedded word vectors.

The layer integrates NCE loss by default (activate_nce_loss=True). If the NCE loss is activated, in a dynamic model, the computation of nce loss can be turned off in customised forward feeding by setting use_nce_loss=False when the layer is called. The NCE loss can be deactivated by setting activate_nce_loss=False.

Parameters
  • num_embeddings (int) – size of the dictionary of embeddings.

  • embedding_dim (int) – the size of each embedding vector.

  • num_sampled (int) – The number of negative examples for NCE loss

  • activate_nce_loss (boolean) – Whether to activate the NCE loss or not. By default, True. If True, the layer will return both the embedding outputs and nce_cost in forward feeding. If False, the layer will only return the embedding outputs. In a dynamic model, the computation of NCE loss can be turned off in forward feeding by setting use_nce_loss=False when the layer is called. In a static model, once the model is constructed, the computation of NCE loss cannot be changed (it is always computed or never computed).

  • nce_loss_args (dictionary) – The arguments for tf.ops.nce_loss()

  • E_init (initializer or str) – The initializer for initializing the embedding matrix

  • nce_W_init (initializer or str) – The initializer for initializing the nce decoder weight matrix

  • nce_b_init (initializer or str) – The initializer for initializing of the nce decoder bias vector

  • name (str) – A unique layer name

outputs

The embedding layer outputs.

Type

Tensor

normalized_embeddings

Normalized embedding matrix.

Type

Tensor

nce_weights

The NCE weights only when activate_nce_loss is True.

Type

Tensor

nce_biases

The NCE biases only when activate_nce_loss is True.

Type

Tensor

Examples

Word2Vec With TensorLayer (Example in examples/text_word_embedding/tutorial_word2vec_basic.py)

>>> import numpy as np
>>> import tensorlayerx as tlx
>>> batch_size = 8
>>> embedding_dim = 50
>>> inputs = tlx.nn.Input([batch_size], dtype=tlx.int32)
>>> labels = tlx.nn.Input([batch_size, 1], dtype=tlx.int32)
>>> emb_net = tlx.nn.Word2vecEmbedding(
>>>     num_embeddings=10000,
>>>     embedding_dim=embedding_dim,
>>>     num_sampled=100,
>>>     activate_nce_loss=True, # the nce loss is activated
>>>     nce_loss_args={},
>>>     E_init=tlx.initializers.random_uniform(minval=-1.0, maxval=1.0),
>>>     nce_W_init=tlx.initializers.truncated_normal(stddev=float(1.0 / np.sqrt(embedding_dim))),
>>>     nce_b_init=tlx.initializers.constant(value=0.0),
>>>     name='word2vec_layer',
>>> )
>>> print(emb_net)
Word2vecEmbedding(num_embeddings=10000, embedding_dim=50, num_sampled=100, activate_nce_loss=True, nce_loss_args={})
>>> embed_tensor = emb_net(inputs, use_nce_loss=False) # the nce loss is turned off and no need to provide labels
>>> embed_tensor = emb_net([inputs, labels], use_nce_loss=False) # the nce loss is turned off and the labels will be ignored
>>> embed_tensor, embed_nce_loss = emb_net([inputs, labels]) # the nce loss is calculated
>>> outputs = tlx.nn.Linear(out_features=10, name="linear")(embed_tensor)
>>> model = tlx.model.Model(inputs=[inputs, labels], outputs=[outputs, embed_nce_loss], name="word2vec_model") # a static model
>>> out = model([data_x, data_y], is_train=True) # where data_x is inputs and data_y is labels

References

https://www.tensorflow.org/tutorials/representation/word2vec

Embedding Layer

class tensorlayerx.nn.Embedding(num_embeddings, embedding_dim, E_init='random_uniform', name=None)[source]

A simple lookup table that stores embeddings of a fixed dictionary and size.

This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.

Parameters
  • num_embeddings (int) – size of the dictionary of embeddings.

  • embedding_dim (int) – the size of each embedding vector.

  • E_init (initializer or str) – The initializer for the embedding matrix.

  • E_init_args (dictionary) – The arguments for embedding matrix initializer.

  • name (str) – A unique layer name.

outputs

The embedding layer output is a 3D tensor in the shape: (batch_size, num_steps(num_words), embedding_dim).

Type

tensor

Examples

>>> import tensorlayerx as tlx
>>> input = tlx.nn.Input([8, 100], dtype=tlx.int32)
>>> embed = tlx.nn.Embedding(num_embeddings=1000, embedding_dim=50, name='embed')
>>> print(embed)
Embedding(num_embeddings=1000, embedding_dim=50)
>>> tensor = embed(input)
>>> print(tensor)
Tensor([...], shape=(8, 100, 50), dtype=float32)

Average Embedding Layer

class tensorlayerx.nn.AverageEmbedding(num_embeddings, embedding_dim, pad_value=0, E_init='random_uniform', name=None)[source]

The AverageEmbedding averages over embeddings of inputs. This is often used as the input layer for models like DAN [1] and FastText [2].

Parameters
  • num_embeddings (int) – size of the dictionary of embeddings.

  • embedding_dim (int) – the size of each embedding vector.

  • pad_value (int) – The scalar padding value used in inputs, 0 as default.

  • E_init (initializer or str) – The initializer of the embedding matrix.

  • name (str) – A unique layer name.

outputs

The embedding layer output is a 2D tensor in the shape: (batch_size, embedding_dim).

Type

tensor

References

  • [1] Iyyer, M., Manjunatha, V., Boyd-Graber, J., & Daumé III, H. (2015). Deep Unordered Composition Rivals Syntactic Methods for Text Classification. In Association for Computational Linguistics.

  • [2] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of Tricks for Efficient Text Classification.

Examples

>>> import tensorlayerx as tlx
>>> batch_size = 8
>>> length = 5
>>> input = tlx.nn.Input([batch_size, length], dtype=tlx.int32)
>>> avgembed = tlx.nn.AverageEmbedding(num_embeddings=1000, embedding_dim=50, name='avg')
>>> print(avgembed)
AverageEmbedding(num_embeddings=1000, embedding_dim=50, pad_value=0)
>>> tensor = avgembed(input)
>>> print(tensor)
Tensor([...], shape=(8, 50), dtype=float32)

Convolutional Layers

Convolutions

Conv1d

class tensorlayerx.nn.Conv1d(out_channels=32, kernel_size=5, stride=1, act=None, padding='SAME', data_format='channels_last', dilation=1, W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Applies a 1D convolution over an input signal composed of several input planes.

Parameters
  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int) – The kernel size

  • stride (int) – The stride step

  • dilation (int) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The function that is applied to the layer activations

  • padding (str or int) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NWC, default) or “channels_first” (NCW).

  • W_init (initializer or str) – The initializer for the kernel weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 100, 1], name='input')
>>> conv1d = tlx.nn.Conv1d(out_channels=32, kernel_size=5, stride=2, b_init=None, in_channels=1, name='conv1d_1')
>>> print(conv1d)
>>> tensor = tlx.nn.Conv1d(out_channels=32, kernel_size=5, stride=2, act=tlx.ReLU, name='conv1d_2')(net)
>>> print(tensor)

Conv2d

class tensorlayerx.nn.Conv2d(out_channels=32, kernel_size=(3, 3), stride=(1, 1), act=None, padding='SAME', data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Applies a 2D convolution over an input signal composed of several input planes.

Parameters
  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (tuple or int) – The kernel size (height, width).

  • stride (tuple or int) – The sliding window stride of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The activation function of this layer.

  • padding (int, tuple or str) – The padding algorithm type: “SAME” or “VALID”. If padding is int or tuple, padding added to all four sides of the input. Default: ‘SAME’

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • W_init (initializer or str) – The initializer for the kernel weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 400, 400, 3], name='input')
>>> conv2d = tlx.nn.Conv2d(out_channels=32, kernel_size=(3, 3), stride=(2, 2), b_init=None, in_channels=3, name='conv2d_1')
>>> print(conv2d)
>>> tensor = tlx.nn.Conv2d(out_channels=32, kernel_size=(3, 3), stride=(2, 2), act=tlx.ReLU, name='conv2d_2')(net)
>>> print(tensor)

Conv3d

class tensorlayerx.nn.Conv3d(out_channels=32, kernel_size=(3, 3, 3), stride=(1, 1, 1), act=None, padding='SAME', data_format='channels_last', dilation=(1, 1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Applies a 3D convolution over an input signal composed of several input planes.

Parameters
  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (tuple or int) – The kernel size (depth, height, width).

  • stride (tuple or int) – The sliding window stride of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The activation function of this layer.

  • padding (int, tuple or str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NDHWC, default) or “channels_first” (NCDHW).

  • W_init (initializer or str) – The initializer for the kernel weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 20, 20, 20, 3], name='input')
>>> conv3d = tlx.nn.Conv3d(out_channels=32, kernel_size=(3, 3, 3), stride=(2, 2, 2), b_init=None, in_channels=3, name='conv3d_1')
>>> print(conv3d)
>>> tensor = tlx.nn.Conv3d(out_channels=32, kernel_size=(3, 3, 3), stride=(2, 2, 2), act=tlx.ReLU, name='conv3d_2')(net)
>>> print(tensor)

Deconvolutions

ConvTranspose1d

class tensorlayerx.nn.ConvTranspose1d(out_channels=32, kernel_size=15, stride=1, act=None, padding='SAME', data_format='channels_last', dilation=1, W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Applies a 1D transposed convolution operator over an input image composed of several input planes.

Parameters
  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int) – The kernel size

  • stride (int or list) – An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step.

  • dilation (int or list) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The function that is applied to the layer activations

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NWC, default) or “channels_first” (NCW).

  • W_init (initializer or str) – The initializer for the kernel weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 100, 1], name='input')
>>> conv1d = tlx.nn.ConvTranspose1d(out_channels=32, kernel_size=5, stride=2, b_init=None, in_channels=1, name='deconv1d_1')
>>> print(conv1d)
>>> tensor = tlx.nn.ConvTranspose1d(out_channels=32, kernel_size=5, stride=2, act=tlx.ReLU, name='ConvTranspose1d_2')(net)
>>> print(tensor)

ConvTranspose2d

class tensorlayerx.nn.ConvTranspose2d(out_channels=32, kernel_size=(3, 3), stride=(1, 1), act=None, padding='SAME', data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Applies a 2D transposed convolution operator over an input image composed of several input planes.

Parameters
  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (tuple or int) – The kernel size (height, width).

  • stride (tuple or int) – The sliding window stride of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The activation function of this layer.

  • padding (int, tuple or str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • W_init (initializer or str) – The initializer for the kernel weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 400, 400, 3], name='input')
>>> conv2d_transpose = tlx.nn.ConvTranspose2d(out_channels=32, kernel_size=(3, 3), stride=(2, 2), b_init=None, in_channels=3, name='conv2d_transpose_1')
>>> print(conv2d_transpose)
>>> tensor = tlx.nn.ConvTranspose2d(out_channels=32, kernel_size=(3, 3), stride=(2, 2), act=tlx.ReLU, name='conv2d_transpose_2')(net)
>>> print(tensor)

ConvTranspose3d

class tensorlayerx.nn.ConvTranspose3d(out_channels=32, kernel_size=(3, 3, 3), stride=(1, 1, 1), act=None, padding='SAME', data_format='channels_last', dilation=(1, 1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Applies a 3D transposed convolution operator over an input image composed of several input planes.

Parameters
  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (tuple or int) – The kernel size (depth, height, width).

  • stride (tuple or int) – The sliding window stride of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NDHWC, default) or “channels_first” (NCDHW).

  • W_init (initializer or str) – The initializer for the kernel weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 20, 20, 20, 3], name='input')
>>> deconv3d = tlx.nn.ConvTranspose3d(out_channels=32, kernel_size=(3, 3, 3), stride=(2, 2, 2), b_init=None, in_channels=3, name='deconv3d_1')
>>> print(deconv3d)
>>> tensor = tlx.nn.ConvTranspose3d(out_channels=32, kernel_size=(3, 3, 3), stride=(2, 2, 2), act=tlx.ReLU, name='ConvTranspose3d_2')(net)
>>> print(tensor)

Deformable Convolutions

DeformableConv2d

class tensorlayerx.nn.DeformableConv2d(offset_layer=None, out_channels=32, kernel_size=(3, 3), act=None, padding='SAME', W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The DeformableConv2d class implements 2D Deformable Convolutional Networks.

Parameters
  • offset_layer (tlx.Tensor) – To predict the offset of convolution operations. The shape is (batch_size, input height, input width, 2*(number of elements in the convolution kernel)), e.g. if applying a 3*3 kernel, the size of the last dimension should be 18 (2*3*3).

  • out_channels (int) – The number of filters.

  • kernel_size (tuple or int) – The filter size (height, width).

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([5, 10, 10, 16], name='input')
>>> offset1 = tlx.nn.Conv2d(
...     out_channels=18, kernel_size=(3, 3), stride=(1, 1), padding='SAME', name='offset1'
... )(net)
>>> deformconv1 = tlx.nn.DeformableConv2d(
...     offset_layer=offset1, out_channels=32, kernel_size=(3, 3), name='deformable1'
... )(net)
>>> offset2 = tlx.nn.Conv2d(
...     out_channels=18, kernel_size=(3, 3), stride=(1, 1), padding='SAME', name='offset2'
... )(deformconv1)
>>> deformconv2 = tlx.nn.DeformableConv2d(
...     offset_layer=offset2, out_channels=64, kernel_size=(3, 3), name='deformable2'
... )(deformconv1)

References

  • The deformation operation was adapted from the implementation in here

Notes

  • The padding is fixed to ‘SAME’.

  • The current implementation is not optimized for memory usage. Please use it carefully.

Depthwise Convolutions

DepthwiseConv2d

class tensorlayerx.nn.DepthwiseConv2d(kernel_size=(3, 3), stride=(1, 1), act=None, padding='SAME', data_format='channels_last', dilation=(1, 1), depth_multiplier=1, W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

Separable/Depthwise Convolutional 2D layer, see tf.nn.depthwise_conv2d.

Input:

4-D Tensor (batch, height, width, in_channels).

Output:

4-D Tensor (batch, new height, new width, in_channels * depth_multiplier).

Parameters
  • kernel_size (tuple or int) – The filter size (height, width).

  • stride (tuple or int) – The stride step (height, width).

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • dilation (tuple or int) – The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.

  • depth_multiplier (int) – The number of channels to expand to.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip bias.

  • in_channels (int) – The number of in channels.

  • name (str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 200, 200, 32], name='input')
>>> depthwiseconv2d = tlx.nn.DepthwiseConv2d(
...     kernel_size=(3, 3), stride=(1, 1), dilation=(2, 2), act=tlx.ReLU, depth_multiplier=2, name='depthwise'
... )(net)
>>> print(depthwiseconv2d)
>>> output shape : (8, 200, 200, 64)

Group Convolutions

GroupConv2d

class tensorlayerx.nn.GroupConv2d(out_channels=32, kernel_size=(1, 1), stride=(1, 1), n_group=1, act=None, padding='SAME', data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The GroupConv2d class implements 2D grouped convolution, see here.

Parameters
  • out_channels (int) – The number of filters.

  • kernel_size (tuple or int) – The filter size.

  • stride (tuple or int) – The stride step.

  • n_group (int) – The number of groups.

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 24, 24, 32], name='input')
>>> groupconv2d = tlx.nn.GroupConv2d(
...     out_channels=64, kernel_size=(3, 3), stride=(2, 2), n_group=2, name='group'
... )(net)
>>> print(groupconv2d)
>>> output shape : (8, 12, 12, 64)

Separable Convolutions

SeparableConv1d

class tensorlayerx.nn.SeparableConv1d(out_channels=32, kernel_size=1, stride=1, act=None, padding='SAME', data_format='channels_last', dilation=1, depth_multiplier=1, depthwise_init='truncated_normal', pointwise_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The SeparableConv1d class is a 1D depthwise separable convolutional layer. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.

Parameters
  • out_channels (int) – The dimensionality of the output space (i.e. the number of filters in the convolution).

  • kernel_size (int) – Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.

  • stride (int) – Specifying the stride of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation value != 1.

  • act (activation function) – The activation function of this layer.

  • padding (str) – One of “valid” or “same” (case-insensitive).

  • data_format (str) – One of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).

  • dilation (int) – Specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation value != 1 is incompatible with specifying any stride value != 1.

  • depth_multiplier (int) – The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.

  • depthwise_init (initializer or str) – The initializer for the depthwise convolution kernel.

  • pointwise_init (initializer or str) – The initializer for the pointwise convolution kernel.

  • b_init (initializer or str) – The initializer for the bias vector. If None, ignore bias in the pointwise part only.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 50, 64], name='input')
>>> separableconv1d = tlx.nn.SeparableConv1d(out_channels=32, kernel_size=3, stride=2, padding='SAME', act=tlx.ReLU, name='separable_1d')(net)
>>> print(separableconv1d)
>>> output shape : (8, 25, 32)

SeparableConv2d

class tensorlayerx.nn.SeparableConv2d(out_channels=32, kernel_size=(1, 1), stride=(1, 1), act=None, padding='VALID', data_format='channels_last', dilation=(1, 1), depth_multiplier=1, depthwise_init='truncated_normal', pointwise_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The SeparableConv2d class is a 2D depthwise separable convolutional layer. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.

Parameters
  • out_channels (int) – The dimensionality of the output space (i.e. the number of filters in the convolution).

  • kernel_size (tuple or int) – Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.

  • stride (tuple or int) – Specifying the stride of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation value != 1.

  • act (activation function) – The activation function of this layer.

  • padding (str) – One of “valid” or “same” (case-insensitive).

  • data_format (str) – One of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation value != 1 is incompatible with specifying any stride value != 1.

  • depth_multiplier (int) – The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.

  • depthwise_init (initializer or str) – The initializer for the depthwise convolution kernel.

  • pointwise_init (initializer or str) – The initializer for the pointwise convolution kernel.

  • b_init (initializer or str) – The initializer for the bias vector. If None, ignore bias in the pointwise part only.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 50, 50, 64], name='input')
>>> separableconv2d = tlx.nn.SeparableConv2d(out_channels=32, kernel_size=(3,3), stride=(2,2), depth_multiplier = 3 , padding='SAME', act=tlx.ReLU, name='separable_2d')(net)
>>> print(separableconv2d)
>>> output shape : (8, 24, 24, 32)

SubPixel Convolutions

SubpixelConv1d

class tensorlayerx.nn.SubpixelConv1d(scale=2, act=None, in_channels=None, name=None)[source]

It is a 1D sub-pixel up-sampling layer.

Calls a TensorFlow function that directly implements this functionality. We assume the input has shape (batch, width, r).

Parameters
  • scale (int) – The up-scaling ratio; a wrong setting will lead to a dimension size error.

  • act (activation function) – The activation function of this layer.

  • in_channels (int) – The number of in channels.

  • name (str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 25, 32], name='input')
>>> subpixelconv1d = tlx.nn.SubpixelConv1d(scale=2, name='subpixelconv1d')(net)
>>> print(subpixelconv1d)
>>> output shape : (8, 50, 16)

References

Audio Super Resolution Implementation.

SubpixelConv2d

class tensorlayerx.nn.SubpixelConv2d(scale=2, data_format='channels_last', act=None, name=None)[source]

It is a 2D sub-pixel up-sampling layer, usually used for Super-Resolution applications, see SRGAN for example.

Parameters
  • scale (int) – factor to increase spatial resolution.

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • act (activation function) – The activation function of this layer.

  • name (str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([2, 16, 16, 4], name='input1')
>>> subpixelconv2d = tlx.nn.SubpixelConv2d(scale=2, data_format='channels_last', name='subpixel_conv2d1')(net)
>>> print(subpixelconv2d)
>>> output shape : (2, 32, 32, 1)
>>> net = tlx.nn.Input([2, 16, 16, 40], name='input2')
>>> subpixelconv2d = tlx.nn.SubpixelConv2d(scale=2, data_format='channels_last', name='subpixel_conv2d2')(net)
>>> print(subpixelconv2d)
>>> output shape : (2, 32, 32, 10)
>>> net = tlx.nn.Input([2, 16, 16, 250], name='input3')
>>> subpixelconv2d = tlx.nn.SubpixelConv2d(scale=5, data_format='channels_last', name='subpixel_conv2d3')(net)
>>> print(subpixelconv2d)
>>> output shape : (2, 80, 80, 10)

MaskedConv3d

class tensorlayerx.nn.MaskedConv3d(mask_type, out_channels, kernel_size=(3, 3, 3), stride=(1, 1, 1), dilation=(1, 1, 1), padding='SAME', act=None, in_channels=None, data_format='channels_last', kernel_initializer='he_normal', bias_initializer='zeros', name=None)[source]

MaskedConv3D. Reference: [1] Nguyen D. T., Quach M., Valenzise G., et al. Lossless Coding of Point Cloud Geometry using a Deep Generative Model [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, PP(99):1-1.

Parameters
  • mask_type (str) – The mask type (‘A’ or ‘B’).

  • out_channels (int) – The number of filters.

  • kernel_size (tuple or int) – The filter size (depth, height, width).

  • stride (tuple or int) – The sliding window stride of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NDHWC, default) or “channels_first” (NCDHW).

  • kernel_initializer (initializer or str) – The initializer for the weight matrix.

  • bias_initializer (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([8, 20, 20, 20, 3], name='input')
>>> conv3d = tlx.nn.MaskedConv3d(mask_type='A', out_channels=32, kernel_size=(3, 3, 3), stride=(2, 2, 2), bias_initializer=None, in_channels=3, name='conv3d_1')
>>> print(conv3d)
>>> tensor = tlx.nn.MaskedConv3d(mask_type='B', out_channels=32, kernel_size=(3, 3, 3), stride=(2, 2, 2), act=tlx.ReLU, name='conv3d_2')(net)
>>> print(tensor)

Linear Layers

Linear Layer

class tensorlayerx.nn.Linear(out_features, act=None, W_init='truncated_normal', b_init='constant', in_features=None, name=None)[source]

Applies a linear transformation to the incoming data: \(y = xA^T + b\)

Parameters
  • out_features (int) – The number of units of this layer.

  • act (activation function) – The activation function of this layer.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_features (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (None or str) – A unique layer name. If None, a unique name will be automatically generated.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([100, 50], name='input')
>>> linear = tlx.nn.Linear(out_features=800, act=tlx.ReLU, in_features=50, name='linear_1')
>>> tensor = tlx.nn.Linear(out_features=800, act=tlx.ReLU, name='linear_2')(net)

Notes

If the layer input has more than two axes, it needs to be flattened first, for example by using Flatten, as shown below.
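
For instance, a minimal sketch (the input shape here is an arbitrary illustration):

>>> net = tlx.nn.Input([8, 4, 4, 32], name='input')
>>> net = tlx.nn.Flatten(name='flatten')(net)  # shape becomes (8, 512)
>>> net = tlx.nn.Linear(out_features=10, name='output')(net)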

Drop Connect Linear Layer

class tensorlayerx.nn.DropconnectLinear(keep=0.5, out_features=100, act=None, W_init='truncated_normal', b_init='constant', in_features=None, name=None)[source]

The DropconnectLinear class is a Linear layer with DropConnect behaviour, which randomly removes connections between this layer and the previous layer according to a keeping probability.

Parameters
  • keep (float) – The keeping probability. The lower it is, the more activations are set to zero.

  • out_features (int) – The number of units of this layer.

  • act (activation function) – The activation function of this layer.

  • W_init (weights initializer or str) – The initializer for the weight matrix.

  • b_init (biases initializer or str) – The initializer for the bias vector.

  • in_features (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (str) – A unique layer name.

Examples

>>> net = tlx.nn.Input([10, 784], name='input')
>>> net = tlx.nn.DropconnectLinear(keep=0.8, out_features=800, act=tlx.ReLU, name='DropconnectLinear1')(net)
>>> output shape : (10, 800)
>>> net = tlx.nn.DropconnectLinear(keep=0.5, out_features=800, act=tlx.ReLU, name='DropconnectLinear2')(net)
>>> output shape : (10, 800)
>>> net = tlx.nn.DropconnectLinear(keep=0.5, out_features=10, name='DropconnectLinear3')(net)
>>> output shape : (10, 10)

Dropout Layers

class tensorlayerx.nn.Dropout(p=0.5, seed=0, name=None)[source]

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.

Parameters
  • p (float) – probability of an element to be zeroed. Default: 0.5

  • seed (int or None) – The seed for random dropout.

  • name (None or str) – A unique layer name.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Dropout(p=0.2)(net)

Extend Layers

Expand Dims Layer

class tensorlayerx.nn.ExpandDims(axis=-1, name=None)[source]

The ExpandDims class inserts a dimension of 1 into a tensor’s shape, see tf.expand_dims().

Parameters
  • axis (int) – The dimension index at which to expand the shape of input.

  • name (str) – A unique layer name. If None, a unique name will be automatically assigned.

Examples

>>> x = tlx.nn.Input([10, 3], name='in')
>>> y = tlx.nn.ExpandDims(axis=-1)(x)
[10, 3, 1]

Tile layer

class tensorlayerx.nn.Tile(multiples=None, name=None)[source]

The Tile class constructs a tensor by tiling a given tensor, see tf.tile().

Parameters
  • multiples (tensor) – Must be one of the following types: int32, int64. 1-D. Its length must be the same as the number of dimensions in input.

  • name (None or str) – A unique layer name.

Examples

>>> x = tlx.nn.Input([10, 3], name='in')
>>> y = tlx.nn.Tile(multiples=[2, 3])(x)

Image Resampling Layers

2D UpSampling

class tensorlayerx.nn.UpSampling2d(scale, method='bilinear', antialias=False, data_format='channels_last', name=None, ksize=None)[source]

The UpSampling2d class is a 2D up-sampling layer.

See tf.image.resize_images.

Parameters
  • scale (int or tuple of int) – (scale_height, scale_width) scale factor. scale_height = new_height/height, scale_width = new_width/width.

  • method (str) –

    The resize method selected through the given string. Default ‘bilinear’.
    • ’bilinear’, Bilinear interpolation.

    • ’nearest’, Nearest neighbor interpolation.

    • ’bicubic’, Bicubic interpolation.

    • ’area’, Area interpolation.

  • antialias (boolean) – Whether to use an anti-aliasing filter when downsampling an image.

  • data_format (str) – ‘channels_last’ (default) or ‘channels_first’.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> ni = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> ni = tlx.nn.UpSampling2d(scale=(2, 2))(ni)
>>> output shape : [10, 100, 100, 32]

2D DownSampling

class tensorlayerx.nn.DownSampling2d(scale, method='bilinear', antialias=False, data_format='channels_last', name=None, ksize=None)[source]

The DownSampling2d class is a 2D down-sampling layer.

See tf.image.resize_images.

Parameters
  • scale (int or tuple of int) – (scale_height, scale_width) scale factor; the output size is (height/scale_height, width/scale_width).

  • method (str) –

    The resize method selected through the given string. Default ‘bilinear’.
    • ’bilinear’, Bilinear interpolation.

    • ’nearest’, Nearest neighbor interpolation.

    • ’bicubic’, Bicubic interpolation.

    • ’area’, Area interpolation.

  • antialias (boolean) – Whether to use an anti-aliasing filter when downsampling an image.

  • data_format (str) – One of ‘channels_last’ (default) or ‘channels_first’.

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> ni = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> ni = tlx.nn.DownSampling2d(scale=(2, 2))(ni)
>>> output shape : [10, 25, 25, 32]

Merge Layers

Concat Layer

class tensorlayerx.nn.Concat(concat_dim=-1, name=None)[source]

A layer that concatenates multiple tensors along a given axis.

Parameters
  • concat_dim (int) – The dimension to concatenate.

  • name (None or str) – A unique layer name.

Examples

>>> class CustomModel(Module):
>>>     def __init__(self):
>>>         super(CustomModel, self).__init__(name="custom")
>>>         self.linear1 = tlx.nn.Linear(in_features=20, out_features=10, act=tlx.ReLU, name='relu1_1')
>>>         self.linear2 = tlx.nn.Linear(in_features=20, out_features=10, act=tlx.ReLU, name='relu2_1')
>>>         self.concat = tlx.nn.Concat(concat_dim=1, name='concat_layer')
>>>     def forward(self, inputs):
>>>         d1 = self.linear1(inputs)
>>>         d2 = self.linear2(inputs)
>>>         outputs = self.concat([d1, d2])
>>>         return outputs

ElementWise Layer

class tensorlayerx.nn.Elementwise(combine_fn=<function minimum>, act=None, name=None)[source]

A layer that combines multiple layers with the same output shape using an element-wise operation. If the element-wise operation is complicated, consider using ElementwiseLambda.

Parameters
  • combine_fn (a TensorFlow element-wise combine function) – e.g. AND is tlx.minimum; OR is tlx.maximum; ADD is tlx.add; MUL is tlx.multiply, and so on. See the TensorFlow Math API. If the combine function is more complicated, consider using ElementwiseLambda.

  • act (activation function) – The activation function of this layer.

  • name (None or str) – A unique layer name.

Examples

>>> import tensorlayerx as tlx
>>> class CustomModel(tlx.nn.Module):
>>>     def __init__(self):
>>>         super(CustomModel, self).__init__(name="custom")
>>>         self.linear1 = tlx.nn.Linear(in_features=20, out_features=10, act=tlx.ReLU, name='relu1_1')
>>>         self.linear2 = tlx.nn.Linear(in_features=20, out_features=10, act=tlx.ReLU, name='relu2_1')
>>>         self.element = tlx.nn.Elementwise(combine_fn=tlx.minimum, name='minimum')
>>>     def forward(self, inputs):
>>>         d1 = self.linear1(inputs)
>>>         d2 = self.linear2(inputs)
>>>         outputs = self.element([d1, d2])
>>>         return outputs

Noise Layer

class tensorlayerx.nn.GaussianNoise(mean=0.0, stddev=1.0, is_always=True, seed=None, name=None)[source]

The GaussianNoise class is a noise layer that adds Gaussian-distributed noise to its input activations.

Parameters
  • mean (float) – The mean. Default is 0.0.

  • stddev (float) – The standard deviation. Default is 1.0.

  • is_always (boolean) – If True, add noise in both training and evaluation modes. If False, skip this layer in evaluation mode.

  • seed (int or None) – The seed for random noise.

  • name (str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([64, 200], name='input')
>>> net = tlx.nn.Linear(in_features=200, out_features=100, act=tlx.ReLU, name='linear')(net)
>>> gaussianlayer = tlx.nn.GaussianNoise(name='gaussian')(net)
>>> print(gaussianlayer)
>>> output shape : (64, 100)

Normalization Layers

Batch Normalization

class tensorlayerx.nn.BatchNorm(momentum=0.9, epsilon=1e-05, act=None, is_train=True, beta_init='zeros', gamma_init='random_normal', moving_mean_init='zeros', moving_var_init='zeros', num_features=None, data_format='channels_last', name=None)[source]

This interface is used to construct a callable object of the BatchNorm class. For more details, refer to code examples. It implements the function of the Batch Normalization Layer and can be used as a normalizer function for conv2d and fully connected operations. The data is normalized by the mean and variance of the channel based on the current batch data.

When training, \(\mu_{\beta}\) and \(\sigma_{\beta}^{2}\) are the statistics of one mini-batch, calculated as follows:

\[\begin{split}\mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i \qquad & //\ mini-batch\ mean \\ \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \mu_{\beta})^2 \qquad & //\ mini-batch\ variance \\\end{split}\]
  • \(x\) : mini-batch data

  • \(m\) : the size of the mini-batch data

When inferencing, \(\mu_{\beta}\) and \(\sigma_{\beta}^{2}\) are not the statistics of one mini-batch; they are the global (running) statistics moving_mean and moving_variance, usually obtained from a pre-trained model and updated during training as follows:

\[\begin{split}moving\_mean &= moving\_mean * momentum + \mu_{\beta} * (1 - momentum) \qquad &//\ global\ mean \\ moving\_variance &= moving\_variance * momentum + \sigma_{\beta}^{2} * (1 - momentum) \qquad &//\ global\ variance \\\end{split}\]

The normalization function formula is as follows:

\[\begin{split}\hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \qquad &//\ normalize \\ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{split}\]
  • \(\epsilon\) : a small value added to the variance to prevent division by zero

  • \(\gamma\) : trainable proportional parameter

  • \(\beta\) : trainable deviation parameter
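
A minimal NumPy sketch of the normalize and scale-and-shift formulas above, using training-time (mini-batch) statistics; all names are illustrative:

>>> import numpy as np
>>> x = np.random.randn(64, 32).astype(np.float32)  # [batch, channel]
>>> mu = x.mean(axis=0)                             # mini-batch mean
>>> var = x.var(axis=0)                             # mini-batch variance
>>> gamma, beta, eps = np.ones(32), np.zeros(32), 1e-5
>>> x_hat = (x - mu) / np.sqrt(var + eps)           # normalize
>>> y = gamma * x_hat + beta                        # scale and shift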

Parameters
  • momentum (float) – The value used for the moving_mean and moving_var computation. Default: 0.9.

  • epsilon (float) – a value added to the denominator for numerical stability. Default: 1e-5

  • act (activation function) – The activation function of this layer.

  • is_train (boolean) – Whether the layer is being used for training or inference.

  • beta_init (initializer or str) – The initializer for initializing beta, if None, skip beta. Usually you should not skip beta unless you know what you are doing.

  • gamma_init (initializer or str) – The initializer for initializing gamma, if None, skip gamma. When the batch normalization layer is used instead of ‘biases’, or the next layer is linear, this can be disabled since the scaling can be done by the next layer. See Inception-ResNet-v2.

  • moving_mean_init (initializer or str) – The initializer for initializing moving mean, if None, skip moving mean.

  • moving_var_init (initializer or str) – The initializer for initializing moving var, if None, skip moving var.

  • num_features (int) – Number of features of the input tensor. Useful to build the layer when using BatchNorm1d, BatchNorm2d or BatchNorm3d; should be left as None when using BatchNorm. Default None.

  • data_format (str) – One of ‘channels_last’ (default) or ‘channels_first’.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> net = tlx.nn.BatchNorm()(net)

Notes

The BatchNorm is universally suitable for 3D/4D/5D input in static models, but should not be used in dynamic models, where the layer is built at class initialization before the input shape is known. The argument ‘num_features’ should therefore only be used with the subclasses BatchNorm1d, BatchNorm2d and BatchNorm3d, all of which are suitable in both static and dynamic models.

Batch Normalization 1D

class tensorlayerx.nn.BatchNorm1d(momentum=0.9, epsilon=1e-05, act=None, is_train=True, beta_init='zeros', gamma_init='random_normal', moving_mean_init='zeros', moving_var_init='zeros', num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm1d applies Batch Normalization over 2D/3D input (a mini-batch of 1D inputs (optional) with additional channel dimension), of shape (N, C) or (N, L, C) or (N, C, L). See more details in BatchNorm.

Examples

With TensorLayerX

>>> # in static model, no need to specify num_features
>>> net = tlx.nn.Input([10, 50, 32], name='input')
>>> net = tlx.nn.BatchNorm1d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tlx.nn.Conv1d(32, 5, 1, in_channels=3)
>>> bn = tlx.nn.BatchNorm1d(num_features=32)

Batch Normalization 2D

class tensorlayerx.nn.BatchNorm2d(momentum=0.9, epsilon=1e-05, act=None, is_train=True, beta_init='zeros', gamma_init='random_normal', moving_mean_init='zeros', moving_var_init='zeros', num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm2d applies Batch Normalization over 4D input (a mini-batch of 2D inputs with additional channel dimension) of shape (N, H, W, C) or (N, C, H, W). See more details in BatchNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> net = tlx.nn.BatchNorm2d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tlx.nn.Conv2d(32, (5, 5), (1, 1), in_channels=3)
>>> bn = tlx.nn.BatchNorm2d(num_features=32)

Batch Normalization 3D

class tensorlayerx.nn.BatchNorm3d(momentum=0.9, epsilon=1e-05, act=None, is_train=True, beta_init='zeros', gamma_init='random_normal', moving_mean_init='zeros', moving_var_init='zeros', num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm3d applies Batch Normalization over 5D input (a mini-batch of 3D inputs with additional channel dimension) with shape (N, D, H, W, C) or (N, C, D, H, W). See more details in BatchNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tlx.nn.Input([10, 50, 50, 50, 32], name='input')
>>> net = tlx.nn.BatchNorm3d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tlx.nn.Conv3d(32, (5, 5, 5), (1, 1, 1), in_channels=3)
>>> bn = tlx.nn.BatchNorm3d(num_features=32)

Padding Layers

Pad Layer (Expert API)

Padding layer for any modes.

class tensorlayerx.nn.PadLayer(padding=None, mode='CONSTANT', constant_values=0, name=None)[source]

The PadLayer class is a padding layer for any mode and dimension. Please see tf.pad for usage.

Parameters
  • padding (list of lists of 2 ints, or a Tensor of type int32) – The number of values to pad before and after each dimension.

  • mode (str) – “CONSTANT”, “REFLECT”, or “SYMMETRIC” (case-insensitive).

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([10, 224, 224, 3], name='input')
>>> padlayer = tlx.nn.PadLayer([[0, 0], [3, 3], [3, 3], [0, 0]], "REFLECT", name='inpad')(net)
>>> print(padlayer)
>>> output shape : (10, 230, 230, 3)

1D Zero padding

class tensorlayerx.nn.ZeroPad1d(padding, name=None, data_format='channels_last')[source]

The ZeroPad1d class is a 1D padding layer for signal [batch, length, channel].

Parameters
  • padding (tuple of 2 ints) –

    • If tuple of 2 ints, zeros to add at the beginning and at the end of the padding dimension.

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([10, 100, 1], name='input')
>>> pad1d = tlx.nn.ZeroPad1d(padding=(3, 3))(net)
>>> print(pad1d)
>>> output shape : (10, 106, 1)

2D Zero padding

class tensorlayerx.nn.ZeroPad2d(padding, name=None, data_format='channels_last')[source]

The ZeroPad2d class is a 2D padding layer for image [batch, height, width, channel].

Parameters
  • padding (tuple of 2 tuples of 2 ints.) –

    • If tuple of 2 tuples of 2 ints, interpreted as ((top_pad, bottom_pad), (left_pad, right_pad)).

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([10, 100, 100, 3], name='input')
>>> pad2d = tlx.nn.ZeroPad2d(padding=((3, 3), (4, 4)))(net)
>>> print(pad2d)
>>> output shape : (10, 106, 108, 3)

3D Zero padding

class tensorlayerx.nn.ZeroPad3d(padding, name=None, data_format='channels_last')[source]

The ZeroPad3d class is a 3D padding layer for volume [batch, depth, height, width, channel].

Parameters
  • padding (tuple of 2 tuples of 2 ints.) –

    • If tuple of 2 tuples of 2 ints, interpreted as ((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad)).

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([10, 100, 100, 100, 3], name='input')
>>> pad3d = tlx.nn.ZeroPad3d(padding=((3, 3), (4, 4), (5, 5)))(net)
>>> print(pad3d)
>>> output shape : (10, 106, 108, 110, 3)

Pooling Layers

1D Max pooling

class tensorlayerx.nn.MaxPool1d(kernel_size=3, stride=2, padding='SAME', data_format='channels_last', name=None)[source]

Max pooling for 1D signal.

Parameters
  • kernel_size (int) – Pooling window size.

  • stride (int) – Stride of the pooling operation.

  • padding (str or int) – The padding method: ‘VALID’ or ‘SAME’.

  • data_format (str) – One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 32], name='input')
>>> net = tlx.nn.MaxPool1d(kernel_size=3, stride=2, padding='SAME', name='maxpool1d')(net)
>>> output shape : [10, 25, 32]
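
With ‘SAME’ padding the output length is \(\lceil L / s \rceil\) for input length \(L\) and stride \(s\); here \(\lceil 50 / 2 \rceil = 25\). The same rule accounts for the output shapes in the 2D and 3D pooling examples below.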

1D Avg pooling

class tensorlayerx.nn.AvgPool1d(kernel_size=3, stride=2, padding='SAME', data_format='channels_last', dilation_rate=1, name=None)[source]

Avg pooling for 1D signal.

Parameters
  • kernel_size (int) – Pooling window size.

  • stride (int) – Strides of the pooling operation.

  • padding (int, tuple or str) – The padding method: ‘VALID’ or ‘SAME’.

  • data_format (str) – One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 32], name='input')
>>> net = tlx.nn.AvgPool1d(kernel_size=3, stride=2, padding='SAME')(net)
>>> output shape : [10, 25, 32]

2D Max pooling

class tensorlayerx.nn.MaxPool2d(kernel_size=(3, 3), stride=(2, 2), padding='SAME', data_format='channels_last', name=None)[source]

Max pooling for 2D image.

Parameters
  • kernel_size (tuple or int) – (height, width) for filter size.

  • stride (tuple or int) – (height, width) for stride.

  • padding (int, tuple or str) – The padding method: ‘VALID’ or ‘SAME’.

  • data_format (str) – One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> net = tlx.nn.MaxPool2d(kernel_size=(3, 3), stride=(2, 2), padding='SAME')(net)
>>> output shape : [10, 25, 25, 32]

2D Avg pooling

class tensorlayerx.nn.AvgPool2d(kernel_size=(3, 3), stride=(2, 2), padding='SAME', data_format='channels_last', name=None)[source]

Avg pooling for 2D image [batch, height, width, channel].

Parameters
  • kernel_size (tuple or int) – (height, width) for filter size.

  • stride (tuple or int) – (height, width) for stride.

  • padding (int, tuple or str) – The padding method: ‘VALID’ or ‘SAME’.

  • data_format (str) – One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 50, 32], name='input')
>>> net = tlx.nn.AvgPool2d(kernel_size=(3, 3), stride=(2, 2), padding='SAME')(net)
>>> output shape : [10, 25, 25, 32]

3D Max pooling

class tensorlayerx.nn.MaxPool3d(kernel_size=(3, 3, 3), stride=(2, 2, 2), padding='VALID', data_format='channels_last', name=None)[source]

Max pooling for 3D volume.

Parameters
  • kernel_size (tuple or int) – Pooling window size.

  • stride (tuple or int) – Strides of the pooling operation.

  • padding (int, tuple or str) – The padding method: ‘VALID’ or ‘SAME’.

  • data_format (str) – One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Returns

The max-pooled output tensor, with rank 5.

Return type

tf.Tensor

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 50, 50, 32], name='input')
>>> net = tlx.nn.MaxPool3d(kernel_size=(3, 3, 3), stride=(2, 2, 2), padding='SAME')(net)
>>> output shape : [10, 25, 25, 25, 32]

3D Avg pooling

class tensorlayerx.nn.AvgPool3d(kernel_size=(3, 3, 3), stride=(2, 2, 2), padding='VALID', data_format='channels_last', name=None)[source]

Avg pooling for 3D volume.

Parameters
  • kernel_size (tuple or int) – Pooling window size.

  • stride (tuple or int) – Strides of the pooling operation.

  • padding (int, tuple or str) – The padding method: ‘VALID’ or ‘SAME’.

  • data_format (str) – One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Returns

The average-pooled output tensor, with rank 5.

Return type

tf.Tensor

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 50, 50, 50, 32], name='input')
>>> net = tlx.nn.AvgPool3d(kernel_size=(3, 3, 3), stride=(2, 2, 2), padding='SAME')(net)
>>> output shape : [10, 25, 25, 25, 32]

1D Global Max pooling

class tensorlayerx.nn.GlobalMaxPool1d(data_format='channels_last', name=None)[source]

The GlobalMaxPool1d class is a 1D Global Max Pooling layer.

Parameters
  • data_format (str) – One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 100, 30], name='input')
>>> net = tlx.nn.GlobalMaxPool1d()(net)
>>> output shape : [10, 30]

1D Global Avg pooling

class tensorlayerx.nn.GlobalAvgPool1d(data_format='channels_last', name=None)[source]

The GlobalAvgPool1d class is a 1D Global Avg Pooling layer.

Parameters
  • data_format (str) – One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 100, 30], name='input')
>>> net = tlx.nn.GlobalAvgPool1d()(net)
>>> output shape : [10, 30]

2D Global Max pooling

class tensorlayerx.nn.GlobalMaxPool2d(data_format='channels_last', name=None)[source]

The GlobalMaxPool2d class is a 2D Global Max Pooling layer.

Parameters
  • data_format (str) – One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 100, 100, 30], name='input')
>>> net = tlx.nn.GlobalMaxPool2d()(net)
>>> output shape : [10, 30]

2D Global Avg pooling

class tensorlayerx.nn.GlobalAvgPool2d(data_format='channels_last', name=None)[source]

The GlobalAvgPool2d class is a 2D Global Avg Pooling layer.

Parameters
  • data_format (str) – One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 100, 100, 30], name='input')
>>> net = tlx.nn.GlobalAvgPool2d()(net)
>>> output shape : [10, 30]

3D Global Max pooling

class tensorlayerx.nn.GlobalMaxPool3d(data_format='channels_last', name=None)[source]

The GlobalMaxPool3d class is a 3D Global Max Pooling layer.

Parameters
  • data_format (str) – One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 100, 100, 100, 30], name='input')
>>> net = tlx.nn.GlobalMaxPool3d()(net)
>>> output shape : [10, 30]

3D Global Avg pooling

class tensorlayerx.nn.GlobalAvgPool3d(data_format='channels_last', name=None)[source]

The GlobalAvgPool3d class is a 3D Global Avg Pooling layer.

Parameters
  • data_format (str) – One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 100, 100, 100, 30], name='input')
>>> net = tlx.nn.GlobalAvgPool3d()(net)
>>> output shape : [10, 30]

1D Adaptive Max pooling

class tensorlayerx.nn.AdaptiveMaxPool1d(output_size, data_format='channels_last', name=None)[source]

The AdaptiveMaxPool1d class is a 1D Adaptive Max Pooling layer.

Parameters
  • output_size (int) – The target output size. It must be an integer.

  • data_format (str) – One of channels_last (default, [batch, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 3], name='input')
>>> net = tlx.nn.AdaptiveMaxPool1d(output_size=16)(net)
>>> output shape : [10, 16, 3]

1D Adaptive Avg pooling

class tensorlayerx.nn.AdaptiveAvgPool1d(output_size, data_format='channels_last', name=None)[source]

The AdaptiveAvgPool1d class is a 1D Adaptive Avg Pooling layer.

Parameters
  • output_size (int) – The target output size. It must be an integer.

  • data_format (str) – One of channels_last (default, [batch, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 3], name='input')
>>> net = tlx.nn.AdaptiveAvgPool1d(output_size=16)(net)
>>> output shape : [10, 16, 3]

2D Adaptive Max pooling

class tensorlayerx.nn.AdaptiveMaxPool2d(output_size, data_format='channels_last', name=None)[source]

The AdaptiveMaxPool2d class is a 2D Adaptive Max Pooling layer.

Parameters
  • output_size (int or list or tuple) – The target output size. It could be a single int or a tuple of two ints (height, width).

  • data_format (str) – One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 32, 3], name='input')
>>> net = tlx.nn.AdaptiveMaxPool2d(output_size=16)(net)
>>> output shape : [10, 16, 16, 3]

2D Adaptive Avg pooling

class tensorlayerx.nn.AdaptiveAvgPool2d(output_size, data_format='channels_last', name=None)[source]

The AdaptiveAvgPool2d class is a 2D Adaptive Avg Pooling layer.

Parameters
  • output_size (int or list or tuple) – The target output size. It could be a single int or a tuple of two ints (height, width).

  • data_format (str) – One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 32, 3], name='input')
>>> net = tlx.nn.AdaptiveAvgPool2d(output_size=16)(net)
>>> output shape : [10, 16, 16, 3]

3D Adaptive Max pooling

class tensorlayerx.nn.AdaptiveMaxPool3d(output_size, data_format='channels_last', name=None)[source]

The AdaptiveMaxPool3d class is a 3D Adaptive Max Pooling layer.

Parameters
  • output_size (int or list or tuple) – The target output size. It could be a single int or a tuple of three ints (depth, height, width).

  • data_format (str) – One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 32, 32, 3], name='input')
>>> net = tlx.nn.AdaptiveMaxPool3d(output_size=16)(net)
>>> output shape : [10, 16, 16, 16, 3]

3D Adaptive Avg pooling

class tensorlayerx.nn.AdaptiveAvgPool3d(output_size, data_format='channels_last', name=None)[source]

The AdaptiveAvgPool3d class is a 3D Adaptive Avg Pooling layer.

Parameters
  • output_size (int or list or tuple) – The target output size. It could be a single int or a tuple of three ints (depth, height, width).

  • data_format (str) – One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 32, 32, 3], name='input')
>>> net = tlx.nn.AdaptiveAvgPool3d(output_size=16)(net)
>>> output shape : [10, 16, 16, 16, 3]

2D Corner pooling

class tensorlayerx.nn.CornerPool2d(mode='TopLeft', name=None)[source]

Corner pooling for 2D image [batch, height, width, channel], see here.

Parameters
  • mode (str) – ‘TopLeft’ for the top-left corner, ‘BottomRight’ for the bottom-right corner.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> net = tlx.nn.Input([10, 32, 32, 8], name='input')
>>> net = tlx.nn.CornerPool2d(mode='TopLeft',name='cornerpool2d')(net)
>>> output shape : [10, 32, 32, 8]

Quantized Nets

This is an experimental API package for building Quantized Neural Networks. At the moment these layers are implemented with matrix multiplication rather than addition/subtraction and bit-count operations, so they do not speed up inference. For production, you can train a model via TensorLayer and deploy it with a customized C/C++ implementation (we may provide an extra C/C++ binary-net framework that can load models from TensorLayer).

Note that these experimental APIs may change in the future.

Scale

class tensorlayerx.nn.Scale(init_scale=0.05, name='scale')[source]

The Scale class multiplies the layer outputs by a trainable scale value. It is usually used on the output of a binary network.

Parameters
  • init_scale (float) – The initial value for the scale factor.

  • name (a str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([8, 3])
>>> linear = tlx.nn.Linear(out_features=10, in_features=3)(inputs)
>>> outputs = tlx.nn.Scale(init_scale=0.5)(linear)

Binary Linear Layer

class tensorlayerx.nn.BinaryLinear(out_features=100, act=None, use_gemm=False, W_init='truncated_normal', b_init='constant', in_features=None, name=None)[source]

The BinaryLinear class is a binary fully connected layer, whose weights are either -1 or 1 at inference time.

Note that the bias vector is not binarized.

Parameters
  • out_features (int) – The number of units of this layer.

  • act (activation function) – The activation function of this layer, usually set to tf.act.sign or apply Sign after BatchNorm.

  • use_gemm (boolean) – If True, use gemm instead of tf.matmul for inference. (TODO).

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_features (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (None or str) – A unique layer name.

Examples

>>> net = tlx.nn.Input([10, 784], name='input')
>>> net = tlx.nn.BinaryLinear(out_features=800, act=tlx.ReLU, name='BinaryLinear1')(net)
>>> output shape : (10, 800)
>>> net = tlx.nn.BinaryLinear(out_features=10, name='BinaryLinear2')(net)
>>> output shape : (10, 10)
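
For intuition, binarization collapses each real-valued weight to its sign; a NumPy illustration of the idea (not the layer's internal kernel, which typically keeps full-precision weights for training):

>>> import numpy as np
>>> w = np.array([[0.3, -1.2], [0.0, 2.4]], dtype=np.float32)
>>> w_b = np.where(w >= 0, 1.0, -1.0)   # every weight becomes -1 or +1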

Binary (De)Convolutions

BinaryConv2d

class tensorlayerx.nn.BinaryConv2d(out_channels=32, kernel_size=(3, 3), stride=(1, 1), act=None, padding='VALID', data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The BinaryConv2d class is a 2D binary CNN layer, whose weights are either -1 or 1 at inference time.

Note that the bias vector is not binarized.

Parameters
  • out_channels (int) – The number of filters.

  • kernel_size (tuple or int) – The filter size (height, width).

  • stride (tuple or int) – The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([8, 100, 100, 32], name='input')
>>> binaryconv2d = tlx.nn.BinaryConv2d(
...     out_channels=64, kernel_size=(3, 3), stride=(2, 2), act=tlx.ReLU, in_channels=32, name='binaryconv2d'
... )(net)
>>> print(binaryconv2d)
>>> output shape : (8, 50, 50, 64)

Ternary Linear Layer

TernaryLinear

class tensorlayerx.nn.TernaryLinear(out_features=100, act=None, use_gemm=False, W_init='truncated_normal', b_init='constant', in_features=None, name=None)[source]

The TernaryLinear class is a ternary fully connected layer, whose weights are either -1, 0 or 1 at inference time. (TODO: TernaryDense currently only supports the TensorFlow backend.)

Note that the bias vector is not ternarized.

Parameters
  • out_features (int) – The number of units of this layer.

  • act (activation function) – The activation function of this layer, usually set to tf.act.sign or apply SignLayer after BatchNormLayer.

  • use_gemm (boolean) – If True, use gemm instead of tf.matmul for inference. (TODO).

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_features (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (None or str) – A unique layer name.
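
Examples

By analogy with BinaryLinear, a usage sketch (the layer name is illustrative):

>>> net = tlx.nn.Input([10, 784], name='input')
>>> net = tlx.nn.TernaryLinear(out_features=800, act=tlx.ReLU, name='TernaryLinear1')(net)
>>> output shape : (10, 800)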

Ternary Convolutions

TernaryConv2d

class tensorlayerx.nn.TernaryConv2d(out_channels=32, kernel_size=(3, 3), stride=(1, 1), act=None, padding='SAME', use_gemm=False, data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The TernaryConv2d class is a 2D ternary CNN layer, whose weights are either -1, 0 or 1 at inference time.

Note that the bias vector is not ternarized.

Parameters
  • out_channels (int) – The number of filters.

  • kernel_size (tuple or int) – The filter size (height, width).

  • stride (tuple or int) – The sliding window stride of corresponding input dimensions. It must be in the same order as the shape parameter.

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • use_gemm (boolean) – If True, use gemm instead of tf.matmul for inference. TODO: support gemm

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([8, 12, 12, 32], name='input')
>>> ternaryconv2d = tlx.nn.TernaryConv2d(
...     out_channels=64, kernel_size=(5, 5), stride=(1, 1), act=tlx.ReLU, padding='SAME', name='ternaryconv2d'
... )(net)
>>> print(ternaryconv2d)
>>> output shape : (8, 12, 12, 64)

DorefaLinear

class tensorlayerx.nn.DorefaLinear(bitW=1, bitA=3, out_features=100, act=None, use_gemm=False, W_init='truncated_normal', b_init='constant', in_features=None, name=None)[source]

The DorefaLinear class is a quantized fully connected layer: its weights are quantized to ‘bitW’ bits and the output of the previous layer is quantized to ‘bitA’ bits at inference time.

Note that the bias vector is not quantized.

Parameters
  • bitW (int) – The number of bits of this layer’s parameters.

  • bitA (int) – The number of bits of the output of the previous layer.

  • out_features (int) – The number of units of this layer.

  • act (activation function) – The activation function of this layer, usually set to tf.act.sign or apply Sign after BatchNorm.

  • use_gemm (boolean) – If True, use gemm instead of tf.matmul for inferencing. (TODO).

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_features (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (a str) – A unique layer name.

Examples

>>> net = tlx.nn.Input([10, 784], name='input')
>>> net = tlx.nn.DorefaLinear(out_features=800, act=tlx.ReLU, name='DorefaLinear1')(net)
>>> output shape : (10, 800)
>>> net = tlx.nn.DorefaLinear(out_features=10, name='DorefaLinear2')(net)
>>> output shape : (10, 10)
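
DoReFa-style quantization is built around a uniform k-bit quantizer on [0, 1]; a sketch of that building block (illustrative, not the layer's exact kernel):

>>> import numpy as np
>>> def quantize_k(x, k):                          # round to 2**k - 1 uniform levels
...     n = float(2 ** k - 1)
...     return np.round(x * n) / n
>>> a = np.random.rand(4).astype(np.float32)       # activations assumed in [0, 1]
>>> a_q = quantize_k(a, k=3)                       # 3-bit ('bitA') activations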

DoReFa Convolutions

DorefaConv2d

class tensorlayerx.nn.DorefaConv2d(bitW=1, bitA=3, out_channels=32, kernel_size=(3, 3), stride=(1, 1), act=None, padding='SAME', data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None)[source]

The DorefaConv2d class is a 2D quantized convolutional layer: its weights are quantized to ‘bitW’ bits and the output of the previous layer is quantized to ‘bitA’ bits at inference time.

Note that the bias vector is not quantized.

Parameters
  • bitW (int) – The number of bits of this layer’s parameters.

  • bitA (int) – The number of bits of the output of the previous layer.

  • out_channels (int) – The number of filters.

  • kernel_size (tuple or int) – The filter size (height, width).

  • stride (tuple or int) – The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • act (activation function) – The activation function of this layer.

  • padding (str) – The padding algorithm type: “SAME” or “VALID”.

  • data_format (str) – “channels_last” (NHWC, default) or “channels_first” (NCHW).

  • dilation (tuple or int) – Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer or str) – The initializer for the weight matrix.

  • b_init (initializer or None or str) – The initializer for the bias vector. If None, skip biases.

  • in_channels (int) – The number of in channels.

  • name (None or str) – A unique layer name.

Examples

With TensorLayer

>>> net = tlx.nn.Input([8, 12, 12, 32], name='input')
>>> dorefaconv2d = tlx.nn.DorefaConv2d(
...     out_channels=32, kernel_size=(5, 5), stride=(1, 1), act=tlx.ReLU, padding='SAME', name='dorefaconv2d'
... )(net)
>>> print(dorefaconv2d)
>>> output shape : (8, 12, 12, 32)

Recurrent Layers

Common Recurrent layer

RNNCell layer

class tensorlayerx.nn.RNNCell(input_size, hidden_size, bias=True, act='tanh', name=None)[source]

An Elman RNN cell with tanh or ReLU non-linearity.

Parameters
  • input_size (int) – The number of expected features in the input x

  • hidden_size (int) – The number of features in the hidden state h

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • act (activation function) – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’

  • name (None or str) – A unique layer name

Returns

  • outputs (tensor) – A tensor with shape [batch_size, hidden_size].

  • states (tensor) – A tensor with shape [batch_size, hidden_size]. Tensor containing the next hidden state for each element in the batch.

forward(inputs, states=None)[source]
Parameters
  • inputs (tensor) – A tensor with shape [batch_size, input_size].

  • states (tensor or None) – A tensor with shape [batch_size, hidden_size]. When states is None, zero state is used. Defaults to None.

Examples

With TensorLayerx

>>> input = tlx.nn.Input([4, 16], name='input')
>>> prev_h = tlx.nn.Input([4,32])
>>> cell = tlx.nn.RNNCell(input_size=16, hidden_size=32, bias=True, act='tanh', name='rnncell_1')
>>> y, h = cell(input, prev_h)
>>> print(y.shape)
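
A cell processes a single time step; to run a whole sequence, loop over time and feed the hidden state back in. A minimal sketch (the sequence tensor and loop are illustrative):

>>> seq = tlx.nn.Input([8, 4, 16])    # [seq_len, batch_size, input_size]
>>> h = tlx.nn.Input([4, 32])         # initial hidden state
>>> cell = tlx.nn.RNNCell(input_size=16, hidden_size=32)
>>> outputs = []
>>> for t in range(8):
...     y, h = cell(seq[t], h)        # y and h both have shape [4, 32]
...     outputs.append(y)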

LSTMCell layer

class tensorlayerx.nn.LSTMCell(input_size, hidden_size, bias=True, name=None)[source]

A long short-term memory (LSTM) cell.

Parameters
  • input_size (int) – The number of expected features in the input x

  • hidden_size (int) – The number of features in the hidden state h

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • name (None or str) – A unique layer name

Returns

  • outputs (tensor) – A tensor with shape [batch_size, hidden_size].

  • states (tensor) – A tuple of two tensors (h, c), each of shape [batch_size, hidden_size]. Tensors containing the next hidden state and next cell state for each element in the batch.

forward(inputs, states=None)[source]
Parameters
  • inputs (tensor) – A tensor with shape [batch_size, input_size].

  • states (tuple or None) – A tuple of two tensors (h, c), each of shape [batch_size, hidden_size]. When states is None, the zero state is used. Default: None.

Examples

With TensorLayerx

>>> input = tlx.nn.Input([4, 16], name='input')
>>> prev_h = tlx.nn.Input([4,32])
>>> prev_c = tlx.nn.Input([4,32])
>>> cell = tlx.nn.LSTMCell(input_size=16, hidden_size=32, bias=True, name='lstmcell_1')
>>> y, (h, c)= cell(input, (prev_h, prev_c))
>>> print(y.shape)

GRUCell layer

class tensorlayerx.nn.GRUCell(input_size, hidden_size, bias=True, name=None)[source]

A gated recurrent unit (GRU) cell.

Parameters
  • input_size (int) – The number of expected features in the input x

  • hidden_size (int) – The number of features in the hidden state h

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • name (None or str) – A unique layer name

Returns

  • outputs (tensor) – A tensor with shape [batch_size, hidden_size].

  • states (tensor) – A tensor with shape [batch_size, hidden_size]. Tensor containing the next hidden state for each element in the batch.

forward(inputs, states=None)[source]
Parameters
  • inputs (tensor) – A tensor with shape [batch_size, input_size].

  • states (tensor or None) – A tensor with shape [batch_size, hidden_size]. When states is None, zero state is used. Defaults: None.

Examples

With TensorLayerx

>>> input = tlx.nn.Input([4, 16], name='input')
>>> prev_h = tlx.nn.Input([4,32])
>>> cell = tlx.nn.GRUCell(input_size=16, hidden_size=32, bias=True, name='grucell_1')
>>> y, h= cell(input, prev_h)
>>> print(y.shape)

RNN layer

class tensorlayerx.nn.RNN(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, act='tanh', name=None)[source]

Multilayer Elman network (RNN). It takes input sequences and initial states as inputs, and returns the output sequences and the final states.

Parameters
  • input_size (int) – The number of expected features in the input x

  • hidden_size (int) – The number of features in the hidden state h

  • num_layers (int) – Number of recurrent layers. Default: 1

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch_size, seq, input_size], Default: False

  • dropout (float) – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional (bool) – If True, becomes a bidirectional RNN. Default: False

  • act (activation function) – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’

  • name (None or str) – A unique layer name

Returns

  • outputs (tensor) – the output sequence. if batch_first is True, the shape is [batch_size, seq, num_directions * hidden_size], else, the shape is [seq, batch_size, num_directions * hidden_size].

  • final_states (tensor) – final states. The shape is [num_layers * num_directions, batch_size, hidden_size]. Note that if the RNN is Bidirectional, the forward states are (0, 2, 4, 6, …) and the backward states are (1, 3, 5, 7, …).

forward(input, states=None)[source]
Parameters
  • inputs (tensor) – the input sequence. if batch_first is True, the shape is [batch_size, seq, input_size], else, the shape is [seq, batch_size, input_size].

  • initial_states (tensor or None) – the initial states. The shape is [num_layers * num_directions, batch_size, hidden_size]. If initial_states is not given, zero initial states are used. If the RNN is Bidirectional, num_directions should be 2, else it should be 1. Default: None.

Examples

With TensorLayer

>>> input = tlx.nn.Input([23, 32, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32, 32])
>>> cell = tlx.nn.RNN(input_size=16, hidden_size=32, bias=True, num_layers=2, bidirectional = True, act='tanh', batch_first=False, dropout=0, name='rnn_1')
>>> y, h= cell(input, prev_h)
>>> print(y.shape)
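
With these settings num_directions = 2, so the shapes follow the Returns description above:

>>> [23, 32, 64]   # y: [seq, batch_size, num_directions * hidden_size]
>>> print(h.shape)
>>> [4, 32, 32]    # h: [num_layers * num_directions, batch_size, hidden_size]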

LSTM layer

class tensorlayerx.nn.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, name=None)[source]

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

Parameters
  • input_size (int) – The number of expected features in the input x

  • hidden_size (int) – The number of features in the hidden state h

  • num_layers (int) – Number of recurrent layers. Default: 1

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch_size, seq, input_size], Default: False

  • dropout (float) – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional (bool) – If True, becomes a bidirectional LSTM. Default: False

  • name (None or str) – A unique layer name

Returns

  • outputs (tensor) – the output sequence. if batch_first is True, the shape is [batch_size, seq, num_directions * hidden_size], else, the shape is [seq, batch_size, num_directions * hidden_size].

  • final_states (tensor) – final states. A tuple of two tensors. The shape of each is [num_layers * num_directions, batch_size, hidden_size]. Note that if the LSTM is Bidirectional, the forward states are (0, 2, 4, 6, …) and the backward states are (1, 3, 5, 7, …).

forward(input, states=None)[source]
Parameters
  • inputs (tensor) – the input sequence. if batch_first is True, the shape is [batch_size, seq, input_size], else, the shape is [seq, batch_size, input_size].

  • initial_states (tensor or None) – the initial states. A tuple of tensors (h, c), the shape of each is [num_layers * num_directions, batch_size, hidden_size]. If initial_states is not given, zero initial states are used. If the LSTM is Bidirectional, num_directions should be 2, else it should be 1. Default: None.

Examples

With TensorLayerx

>>> input = tlx.nn.Input([23, 32, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32, 32])
>>> prev_c = tlx.nn.Input([4, 32, 32])
>>> cell = tlx.nn.LSTM(input_size=16, hidden_size=32, bias=True, num_layers=2, bidirectional = True,  batch_first=False, dropout=0, name='lstm_1')
>>> y, (h, c)= cell(input, (prev_h, prev_c))
>>> print(y.shape)

GRU layer

class tensorlayerx.nn.GRU(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, name=None)[source]

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

Parameters
  • input_size (int) – The number of expected features in the input x

  • hidden_size (int) – The number of features in the hidden state h

  • num_layers (int) – Number of recurrent layers. Default: 1

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch_size, seq, input_size], Default: False

  • dropout (float) – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional (bool) – If True, becomes a bidirectional GRU. Default: False

  • name (None or str) – A unique layer name

Returns

  • outputs (tensor) – the output sequence. if batch_first is True, the shape is [batch_size, seq, num_directions * hidden_size], else, the shape is [seq, batch_size, num_directions * hidden_size].

  • final_states (tensor) – final states, with shape [num_layers * num_directions, batch_size, hidden_size]. Note that if the GRU is Bidirectional, the forward states are (0, 2, 4, 6, …) and the backward states are (1, 3, 5, 7, …).

forward(input, states=None)[source]
Parameters
  • inputs (tensor) – the input sequence. if batch_first is True, the shape is [batch_size, seq, input_size], else, the shape is [seq, batch_size, input_size].

  • initial_states (tensor or None) – the initial states, with shape [num_layers * num_directions, batch_size, hidden_size]. If initial_states is not given, zero initial states are used. If the GRU is Bidirectional, num_directions should be 2, else it should be 1. Default: None.

Examples

With TensorLayerx

>>> input = tlx.nn.Input([23, 32, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32, 32])
>>> cell = tlx.nn.GRU(input_size=16, hidden_size=32, bias=True, num_layers=2, bidirectional = True,  batch_first=False, dropout=0, name='GRU_1')
>>> y, h= cell(input, prev_h)
>>> print(y.shape)

Transformer Layers

Transformer layer

MultiheadAttention layer

class tensorlayerx.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, kdim=None, vdim=None, bias=True, batch_first=False, need_weights=True, name=None)[source]

Allows the model to jointly attend to information from different representation subspaces.

Parameters
  • embed_dim (int) – total dimension of the model.

  • num_heads (int) – The number of heads in multi-head attention.

  • dropout (float) – a Dropout layer on attn_output_weights. Default: 0.0.

  • kdim (int) – total number of features in key. Default: None.

  • vdim (int) – total number of features in value. Default: None.

  • bias (bool) – add bias as module parameter. Default: True.

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch, seq, feature]. Default: False ([seq, batch, feature]).

  • need_weights (bool) – Indicate whether to return the attention weights. Default: True.

  • name (None or str) – A unique layer name.

Examples

With TensorLayerX

>>> q = tlx.nn.Input(shape=(4,2,128))
>>> attn_mask = tlx.convert_to_tensor(np.zeros((4,4)),dtype='bool')
>>> layer = MultiheadAttention(embed_dim=128, num_heads=4)
>>> output = layer(q, attn_mask=attn_mask)
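
With k and v omitted the layer performs self-attention on q, and the all-False boolean mask blocks nothing (per the forward() description below, True entries would prevent attention), so the output keeps the query’s shape:

>>> output shape : (4, 2, 128)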

forward(q, k=None, v=None, attn_mask=None, key_padding_mask=None)[source]
Parameters
  • q (Tensor) – The queries for multi-head attention. If batch_first is True, it is a tensor with shape [batch_size, query_length, embed_dim]. If batch_first is False, it is a tensor with shape [query_length, batch_size, embed_dim]. The data type should be float32 or float64.

  • k (Tensor) – The keys for multi-head attention. If batch_first is True, it is a tensor with shape [batch_size, key_length, kdim]. If batch_first is False, it is a tensor with shape [key_length, batch_size, kdim]. The data type should be float32 or float64. If None, use query as key. Default is None.

  • v (Tensor) – The values for multi-head attention. If batch_first is True, it is a tensor with shape [batch_size, value_length, vdim]. If batch_first is False, it is a tensor with shape [value_length, batch_size, vdim]. The data type should be float32 or float64. If None, use key as value. Default is None.

  • attn_mask (Tensor) – 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcast across all batches, while a 3D mask allows a different mask for each entry of the batch. A 2D mask has shape \((L, S)\), where L is the target sequence length and S is the source sequence length; a 3D mask has shape \((N \cdot \text{num\_heads}, L, S)\), where N is the batch size. attn_mask ensures that position i is allowed to attend only the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged. If a BoolTensor is provided, positions with True are not allowed to attend while False values will be unchanged. If a FloatTensor is provided, it will be added to the attention weight.

  • key_padding_mask (Tensor) – if provided, specified padding elements in the key will be ignored by the attention. The mask has shape \((N, S)\), where N is the batch size and S is the source sequence length. If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions will be unchanged. If a BoolTensor is provided, positions with the value True will be ignored while positions with the value False will be unchanged.

Returns

  • attn_output (Tensor) – \((L, N, E)\) where L is the target sequence length, N is the batch size, E is the embedding dimension. \((N, L, E)\) if batch_first is True.

  • attn_output_weights (Tensor) – \((N, L, S)\) where N is the batch size, L is the target sequence length, S is the source sequence length. Returned when need_weights is True.

Transformer layer

class tensorlayerx.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, act='relu', custom_encoder=None, custom_decoder=None, layer_norm_eps=1e-05, batch_first=False)[source]

A transformer model. Users can modify the attributes as needed.

Parameters
  • d_model (int) – the number of expected features in the encoder/decoder inputs.

  • nhead (int) – the number of heads in the multiheadattention model.

  • num_encoder_layers (int) – the number of sub-encoder-layers in the encoder.

  • num_decoder_layers (int) – the number of sub-decoder-layers in the decoder.

  • dim_feedforward (int) – the dimension of the feedforward network model.

  • dropout (float) – a Dropout layer on attn_output_weights. Default: 0.1.

  • act (str) – the activation function of encoder/decoder intermediate layer, ‘relu’ or ‘gelu’. Default: ‘relu’.

  • custom_encoder (Module or None) – custom encoder.

  • custom_decoder (Module or None) – custom decoder.

  • layer_norm_eps (float) – the eps value in layer normalization components. Default: 1e-5.

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch, seq, feature]. Default: False ([seq, batch, feature]).

Examples

With TensorLayerX

>>> src = tlx.nn.Input(shape=(4,2,128))
>>> tgt = tlx.nn.Input(shape=(4,2,128))
>>> layer = Transformer(d_model=128, nhead=4)
>>> output = layer(src, tgt)

forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Parameters
  • src (Tensor) – the sequence to the encoder.

  • tgt (Tensor) – the sequence to the decoder.

  • src_mask (Tensor) – the additive mask for the src sequence.

  • tgt_mask (Tensor) – the additive mask for the tgt sequence.

  • memory_mask (Tensor) – the additive mask for the encoder output.

  • src_key_padding_mask (Tensor) – mask for src keys per batch.

  • tgt_key_padding_mask (Tensor) – mask for tgt keys per batch.

  • memory_key_padding_mask (Tensor) – mask for memory keys per batch.

generate_square_subsequent_mask(length)[source]

Generate a square mask for the sequence. The masked positions are filled with float(‘-inf’). Unmasked positions are filled with float(0.0).

Parameters

length (int) – The length of sequence.

Examples

With TensorLayerX

>>> length = 5
>>> mask = transformer.generate_square_subsequent_mask(length)
>>> print(mask)
>>> [[  0. -inf -inf -inf -inf]
>>> [  0.   0. -inf -inf -inf]
>>> [  0.   0.   0. -inf -inf]
>>> [  0.   0.   0.   0. -inf]
>>> [  0.   0.   0.   0.   0.]]

TransformerEncoder layer

class tensorlayerx.nn.TransformerEncoder(encoder_layer, num_layers, norm=None)[source]

TransformerEncoder is a stack of N encoder layers.

Parameters
  • encoder_layer (Module) – an instance of the TransformerEncoderLayer() class.

  • num_layers (int) – the number of sub-encoder-layers in the encoder.

  • norm (Module or None) – the layer normalization component.

Examples

With TensorLayerX

>>> q = tlx.nn.Input(shape=(4,2,128))
>>> attn_mask = tlx.convert_to_tensor(np.zeros((4,4)),dtype='bool')
>>> encoder_layer = TransformerEncoderLayer(128, 2, 256)
>>> encoder = TransformerEncoder(encoder_layer, num_layers=3)
>>> output = encoder(q, mask=attn_mask)
forward(src, mask=None, src_key_padding_mask=None)[source]
Parameters
  • src (Tensor) – the sequence to the encoder.

  • mask (Tensor) – the mask for the src sequence.

  • src_key_padding_mask (Tensor) – the mask for the src keys per batch.

TransformerDecoder layer

class tensorlayerx.nn.TransformerDecoder(decoder_layer, num_layers, norm=None)[source]

TransformerDecoder is a stack of N decoder layers.

Parameters
  • decoder_layer (Module) – an instance of the TransformerDecoderLayer() class.

  • num_layers (int) – the number of sub-decoder-layers in the decoder.

  • norm (Module or None) – the layer normalization component.

Examples

With TensorLayerX

>>> q = tlx.nn.Input(shape=(4,2,128))
>>> decoder_layer = TransformerDecoderLayer(128, 2, 256)
>>> decoder = TransformerDecoder(decoder_layer, num_layers=3)
>>> output = decoder(q, q)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Parameters
  • tgt (Tensor) – the sequence to the decoder.

  • memory (Tensor) – the sequence from the last layer of the encoder.

  • tgt_mask (Tensor) – the mask for the tgt sequence.

  • memory_mask (Tensor) – the mask for the memory sequence.

  • tgt_key_padding_mask (Tensor) – the mask for the tgt keys per batch.

  • memory_key_padding_mask (Tensor) – the mask for the memory keys per batch.

TransformerEncoderLayer layer

class tensorlayerx.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward, dropout=0.1, act='relu', layer_norm_eps=1e-05, batch_first=False)[source]

TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper “Attention Is All You Need”.

Parameters
  • d_model (int) – total dimension of the model.

  • nhead (int) – The number of heads in multi-head attention.

  • dim_feedforward (int) – the dimension of the feedforward network model.

  • dropout (float) – a Dropout layer on attn_output_weights. Default: 0.1.

  • act (str) – The activation function in the feedforward network. ‘relu’ or ‘gelu’. Default ‘relu’.

  • layer_norm_eps (float) – the eps value in layer normalization components. Default 1e-5.

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch, seq, feature]. Default: False ([seq, batch, feature]).

Examples

With TensorLayerX

>>> q = tlx.nn.Input(shape=(4,2,128))
>>> attn_mask = tlx.convert_to_tensor(np.zeros((4,4)),dtype='bool')
>>> encoder = TransformerEncoderLayer(128, 2, 256)
>>> output = encoder(q, src_mask=attn_mask)
forward(src, src_mask=None, src_key_padding_mask=None)[source]
Parameters
  • src (Tensor) – the sequence to the encoder layer.

  • src_mask (Tensor or None) – the mask for the src sequence.

  • src_key_padding_mask (Tensor or None) – the mask for the src keys per batch.

TransformerDecoderLayer layer

class tensorlayerx.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward, dropout=0.1, act='relu', layer_norm_eps=1e-05, batch_first=False)[source]

TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”.

Parameters
  • d_model (int) – total dimension of the model.

  • nhead (int) – The number of heads in multi-head attention.

  • dim_feedforward (int) – the dimension of the feedforward network model.

  • dropout (float) – a Dropout layer on attn_output_weights. Default: 0.1.

  • act (str) – The activation function in the feedforward network. ‘relu’ or ‘gelu’. Default ‘relu’.

  • layer_norm_eps (float) – the eps value in layer normalization components. Default 1e-5.

  • batch_first (bool) – If True, then the input and output tensors are provided as [batch, seq, feature]. Default: False ([seq, batch, feature]).

Examples

With TensorLayerX

>>> q = tlx.nn.Input(shape=(4,2,128))
>>> decoder = TransformerDecoderLayer(128, 2, 256)
>>> output = decoder(q, q)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Parameters
  • tgt (Tensor) – the sequence to the decoder layer.

  • memory (Tensor) – the sequence from the last layer of the encoder.

  • tgt_mask (Tensor or None) – the mask for the tgt sequence.

  • memory_mask (Tensor or None) – the mask for the memory sequence.

  • tgt_key_padding_mask (Tensor or None) – the mask for the tgt keys per batch.

  • memory_key_padding_mask (Tensor or None) – the mask for the memory keys per batch (see the sketch below).
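
A minimal sketch of a single decoder layer consuming an encoder output as memory, under the same assumed boolean-mask convention (True = masked):

>>> import numpy as np
>>> tgt = tlx.nn.Input(shape=(4, 2, 128))
>>> memory = tlx.nn.TransformerEncoderLayer(128, 2, 256)(tgt)  # encoder output as memory
>>> causal = tlx.convert_to_tensor(np.triu(np.ones((4, 4)), k=1), dtype='bool')
>>> decoder_layer = tlx.nn.TransformerDecoderLayer(128, 2, 256)
>>> output = decoder_layer(tgt, memory, tgt_mask=causal)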

Shape Layers

Flatten Layer

class tensorlayerx.nn.Flatten(name=None)[source]

A layer that reshapes a high-dimension input into a vector per sample.

Linear, RNN, Concat and other layers are often applied on top of a flatten layer: [batch_size, mask_row, mask_col, n_mask] —> [batch_size, mask_row * mask_col * n_mask]

Parameters

name (None or str) – A unique layer name.

Examples

>>> x = tlx.nn.Input([8, 4, 3], name='input')
>>> y = tlx.nn.Flatten(name='flatten')(x)
(8, 12)

Reshape Layer

class tensorlayerx.nn.Reshape(shape, name=None)[source]

A layer that reshapes a given tensor.

Parameters
  • shape (tuple of int) – The output shape, see tf.reshape.

  • name (str) – A unique layer name.

Examples

>>> x = tlx.nn.Input([8, 4, 3], name='input')
>>> y = tlx.nn.Reshape(shape=[-1, 12], name='reshape')(x)
(8, 12)

Transpose Layer

class tensorlayerx.nn.Transpose(perm=None, conjugate=False, name=None)[source]

A layer that transposes the dimension of a tensor.

See tf.transpose().

Parameters
  • perm (list of int or None) – The permutation of the dimensions, similar with numpy.transpose. If None, it is set to (n-1…0), where n is the rank of the input tensor.

  • conjugate (bool) – By default False. If True, returns the complex conjugate of the (transposed) values. For example, [[1+1j, 2+2j]] —> [[1-1j], [2-2j]] (sketched after the example below).

  • name (str) – A unique layer name.

Examples

>>> x = tlx.nn.Input([8, 4, 3], name='input')
>>> y = tlx.nn.Transpose(perm=[0, 2, 1], conjugate=False, name='trans')(x)
(8, 3, 4)
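
A sketch of the conjugate option described above; complex-dtype support depends on the active backend, so treat this as illustrative:

>>> import numpy as np
>>> z = tlx.convert_to_tensor(np.array([[1 + 1j, 2 + 2j]]))
>>> y = tlx.nn.Transpose(perm=[1, 0], conjugate=True)(z)
>>> # y is [[1-1j], [2-2j]]: transposed and complex-conjugated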

Shuffle Layer

class tensorlayerx.nn.Shuffle(group, in_channels=None, name=None)[source]

A layer that shuffles the channels of a 2D image [batch, height, width, channel], see here. The underlying channel permutation is sketched after the example below.

Parameters
  • group (int) – The number of groups.

  • in_channels (int or None) – The number of input channels; inferred from the input if None.

  • name (str) – A unique layer name.

Examples

>>> x = tlx.nn.Input([1, 16, 16, 8], name='input')
>>> y = tlx.nn.Shuffle(group=2, name='shuffle')(x)
(1, 16, 16, 8)
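
The shuffle is the reshape-transpose-reshape channel permutation from ShuffleNet; a minimal NumPy sketch of the index permutation (an illustration of the idea, not the layer's source):

>>> import numpy as np
>>> channels, group = 8, 2
>>> idx = np.arange(channels)
>>> idx.reshape(group, channels // group).T.reshape(-1)
array([0, 4, 1, 5, 2, 6, 3, 7])

With group=2 the two channel halves are interleaved, so the next grouped convolution sees information from both groups.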

Stack Layer

class tensorlayerx.nn.Stack(axis=1, name=None)[source]

The Stack class is a layer for stacking a list of rank-R tensors into one rank-(R+1) tensor, see tf.stack().

Parameters
  • axis (int) – New dimension along which to stack.

  • name (str) – A unique layer name.

Examples

>>> import tensorlayerx as tlx
>>> ni = tlx.nn.Input([10, 784], name='input')
>>> net1 = tlx.nn.Linear(10, name='linear1')(ni)
>>> net2 = tlx.nn.Linear(10, name='linear2')(ni)
>>> net3 = tlx.nn.Linear(10, name='linear3')(ni)
>>> net = tlx.nn.Stack(axis=1, name='stack')([net1, net2, net3])
(10, 3, 10)

Unstack Layer

class tensorlayerx.nn.UnStack(num=None, axis=0, name=None)[source]

The UnStack class is a layer for unstacking the given dimension of a rank-R tensor into rank-(R-1) tensors, see tf.unstack(). It inverts Stack along the same axis (see the round-trip sketch below).

Parameters
  • num (int or None) – The length of the dimension axis. Automatically inferred if None (the default).

  • axis (int) – The dimension along which to unstack.

  • name (str) – A unique layer name.

Returns

The list of layer objects unstacked from the input.

Return type

list of Layer

Examples

>>> ni = tlx.nn.Input([4, 10], name='input')
>>> nn = tlx.nn.Linear(out_features=5)(ni)
>>> nn = tlx.nn.UnStack(axis=1)(nn)  # unstack along the channel axis
>>> len(nn)  # 5
>>> nn[0].shape  # (4,)
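
UnStack inverts Stack along the same axis; a short round-trip sketch:

>>> ni = tlx.nn.Input([4, 10], name='input')
>>> parts = tlx.nn.UnStack(axis=1)(ni)      # 10 tensors of shape (4,)
>>> restored = tlx.nn.Stack(axis=1)(parts)  # shape (4, 10) again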