API - Activations

To keep TensorLayerX simple, we minimize the number of built-in activation functions, so we encourage you to write custom activation functions. For parametric activations, please read the layer APIs.

Your activation

Customizing an activation function in TensorLayerX is very easy. The following example implements an activation that multiplies its input by 2.

from tensorlayerx.nn import Module

class DoubleActivation(Module):
    def __init__(self):
        super(DoubleActivation, self).__init__()

    def forward(self, x):
        # multiply the input by 2 element-wise
        return x * 2

double_activation = DoubleActivation()
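
The instance can then be applied to a tensor like any built-in activation layer, for example:

import tensorlayerx as tlx

x = tlx.nn.Input([10, 200])   # placeholder input, shape [batch, features]
y = double_activation(x)      # every element of y is the input multiplied by 2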

For more complex activations, the backend API (TensorFlow, MindSpore, PaddlePaddle, or PyTorch) will be required.
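
As a minimal sketch of such a case, assuming the TensorFlow backend is active, a Swish-like gate can be built directly from backend ops (ScaledSwish and beta are illustrative names, not part of the TensorLayerX API):

import tensorflow as tf
from tensorlayerx.nn import Module

class ScaledSwish(Module):
    """Hypothetical activation: x * sigmoid(beta * x) with a fixed beta."""
    def __init__(self, beta=1.0):
        super(ScaledSwish, self).__init__()
        self.beta = beta

    def forward(self, x):
        # tf.nn.sigmoid is applied element-wise by the TensorFlow backend
        return x * tf.nn.sigmoid(self.beta * x)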

Activation list

  • ELU([alpha]) – Applies the element-wise ELU function.

  • PRelu([num_parameters, init, data_format, name]) – Applies the element-wise PReLU function.

  • PRelu6([channel_shared, in_channels, …]) – Parametric Rectified Linear layer integrating ReLU6 behaviour.

  • PTRelu6([channel_shared, in_channels, …]) – Parametric Rectified Linear layer integrating ReLU6 behaviour, with a leak above six.

  • ReLU() – The ReLU function.

  • ReLU6() – The ReLU6 function.

  • Softplus() – The Softplus function.

  • LeakyReLU([negative_slope]) – Applies the element-wise LeakyReLU function.

  • LeakyReLU6([alpha]) – A modified version of leaky_relu(), capped at six, introduced in Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013].

  • LeakyTwiceRelu6([alpha_low, alpha_high]) – A modified version of leaky_relu() with leaky behaviour both below zero and above six.

  • Ramp([v_min, v_max]) – Ramp activation function.

  • Swish() – Swish function.

  • HardTanh() – Hard tanh activation function.

  • Tanh() – Applies the Hyperbolic Tangent (Tanh) function element-wise.

  • Sigmoid() – Computes sigmoid of x element-wise.

  • Softmax([axis]) – Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the output lie in [0, 1] and sum to 1.

  • Mish() – Applies the Mish function, element-wise.

TensorLayerX Activations

ELU

class tensorlayerx.nn.activation.ELU(alpha=1.0)

Applies the element-wise function:

\[\begin{split}\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}\end{split}\]
Parameters
  • alpha (float) – the \(\alpha\) value for the ELU formulation. Default: 1.0

  • name (str) – The function name (optional).

Returns

A Tensor with the same type as x.

Return type

Tensor

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.ELU(alpha=0.5)(net)

PRelu

class tensorlayerx.nn.activation.PRelu(num_parameters=1, init=0.25, data_format='channels_last', name=None)

Applies the element-wise function:

\[\text{PReLU}(x) = \max(0,x) + a * \min(0,x)\]
Parameters
  • num_parameters (int) – the number of \(a\) values to learn: either 1 or the number of channels of the input. Default: 1

  • init (float) – the initial value of a. Default: 0.25

  • data_format (str) – Data format that specifies the layout of input. It may be ‘channels_last’ or ‘channels_first’. Default is ‘channels_last’.

  • name (None or str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([10, 5, 10])
>>> prelulayer = tlx.nn.PRelu(num_parameters=5, init=0.25, data_format='channels_first')(inputs)

PRelu6

class tensorlayerx.nn.activation.PRelu6(channel_shared=False, in_channels=None, a_init='truncated_normal', name=None, data_format='channels_last', dim=2)

The PRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

This activation layer uses a modified version of tlx.nn.LeakyReLU() introduced in the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

It also uses a modified version of the activation function tf.nn.relu6() introduced in the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This activation layer pushes the logic further by adding parametric leaky behaviour below zero, while the output is capped at six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • channel_shared (boolean) – If True, a single weight is shared by all channels.

  • in_channels (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer or str) – The initializer for initializing the alpha(s).

  • name (None or str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> prelulayer = tlx.nn.PRelu6(channel_shared=True, in_channels=5)(inputs)

PTRelu6

class tensorlayerx.nn.activation.PTRelu6(channel_shared=False, in_channels=None, data_format='channels_last', a_init='truncated_normal', name=None)

The PTRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

This activation layer uses a modified version of tlx.nn.LeakyReLU() introduced in the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

It also uses a modified version of the activation function tf.nn.relu6() introduced in the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This activation layer pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

This version goes one step beyond PRelu6 by introducing leaky behaviour on the positive side when x > 6.
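
To make the difference between PRelu6 and PTRelu6 concrete, here is a minimal NumPy sketch of the two piecewise rules above, using fixed (non-learned) alpha values purely for illustration:

import numpy as np

def prelu6_ref(x, alpha_low=0.2):
    # x < 0: alpha_low * x;  0 <= x <= 6: x;  x > 6: 6
    return np.where(x < 0, alpha_low * x, np.minimum(x, 6.0))

def ptrelu6_ref(x, alpha_low=0.2, alpha_high=0.2):
    # identical below six, but keeps a slope of alpha_high above six
    return np.where(x < 0, alpha_low * x,
                    np.where(x > 6, 6.0 + alpha_high * (x - 6.0), x))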

Parameters
  • channel_shared (boolean) – If True, a single weight is shared by all channels.

  • in_channels (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer or str) – The initializer for initializing the alpha(s).

  • name (None or str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> prelulayer = tlx.nn.PTRelu6(channel_shared=True, in_channels=5)(inputs)

ReLU

class tensorlayerx.nn.activation.ReLU

This function is ReLU.

The function returns the following results:
  • When x < 0: f(x) = 0.

  • When x >= 0: f(x) = x.

Parameters

x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.ReLU()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

ReLU6

class tensorlayerx.nn.activation.ReLU6

This function is ReLU6.

The function returns the following results:
  • ReLU6(x)=min(max(0,x),6)

Parameters

x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.ReLU6()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Softplus

class tensorlayerx.nn.activation.Softplus

This function is Softplus.

The function returns the following results:
  • softplus(x) = log(exp(x) + 1).

Parameters

x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Softplus()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

LeakyReLU

class tensorlayerx.nn.activation.LeakyReLU(negative_slope=0.01)

Applies the element-wise function:

\[\text{LeakyReLU}(x) = \max(0, x) + negative\_slope * \min(0, x)\]
Parameters
  • negative_slope (float) – Controls the angle of the negative slope. Default: 1e-2

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.LeakyReLU(negative_slope=0.5)(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

LeakyReLU6

class tensorlayerx.nn.activation.LeakyReLU6(alpha=0.2)

This activation function is a modified version of leaky_relu() introduced in the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced in the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) – Slope for x < 0.

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.LeakyReLU6(alpha=0.5)(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

LeakyTwiceRelu6

class tensorlayerx.nn.activation.LeakyTwiceRelu6(alpha_low=0.2, alpha_high=0.2)

This activation function is a modified version of leaky_relu() introduced in the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced in the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This function pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

Parameters
  • x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

  • alpha_low (float) – Slope for x < 0: f(x) = alpha_low * x.

  • alpha_high (float) – Slope for x > 6: f(x) = 6 + (alpha_high * (x - 6)).

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.LeakyTwiceRelu6(alpha_low=0.5, alpha_high=0.2)(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Ramp

class tensorlayerx.nn.activation.Ramp(v_min=0, v_max=1)

Ramp activation function.

Reference: tf.clip_by_value (https://www.tensorflow.org/api_docs/python/tf/clip_by_value)
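
Based on the parameters below, the layer clips its input to the interval [v_min, v_max]:

\[\text{Ramp}(x) = \min(\max(x, v_{min}), v_{max})\]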

Parameters
  • x (Tensor) – input.

  • v_min (float) – lower bound; input values below v_min are clipped to v_min.

  • v_max (float) – upper bound; input values above v_max are clipped to v_max.

Returns

A Tensor with the same type as x.

Return type

Tensor

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> ramplayer = tlx.nn.Ramp()(inputs)

Swish

class tensorlayerx.nn.activation.Swish

Swish function.
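
Swish is commonly defined as the input gated by its sigmoid:

\[\text{Swish}(x) = x * \text{Sigmoid}(x)\]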

Parameters
  • x (Tensor) – input.

  • name (str) – function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Swish()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

HardTanh

class tensorlayerx.nn.activation.HardTanh

Hard tanh activation function.

It is a ramp function with a lower bound of -1 and an upper bound of 1; the shortcut is htanh.
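
Written as a piecewise function, this corresponds to:

\[\begin{split}\text{HardTanh}(x) = \begin{cases} -1, & \text{ if } x < -1\\ x, & \text{ if } -1 \leq x \leq 1\\ 1, & \text{ if } x > 1 \end{cases}\end{split}\]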

Parameters
  • x (Tensor) – input.

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.HardTanh()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Mish

class tensorlayerx.nn.activation.Mish

Applies the Mish function, element-wise. Mish: A Self Regularized Non-Monotonic Neural Activation Function.

\[\text{Mish}(x) = x * \text{Tanh}(\text{Softplus}(x))\]

Reference: Mish: A Self Regularized Non-Monotonic Neural Activation Function, Diganta Misra, 2019 (https://arxiv.org/abs/1908.08681)

Parameters

x (Tensor) – input.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Mish()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Tanh

class tensorlayerx.nn.activation.Tanh

Applies the Hyperbolic Tangent (Tanh) function element-wise.
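
The hyperbolic tangent is defined as:

\[\text{Tanh}(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}\]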

Parameters

x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Tanh()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Sigmoid

class tensorlayerx.nn.activation.Sigmoid

Computes sigmoid of x element-wise: sigmoid(x) = 1 / (1 + exp(-x)).

Parameters

x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Sigmoid()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Softmax

class tensorlayerx.nn.activation.Softmax(axis=-1)

Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.
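
Along the chosen axis this corresponds to:

\[\text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]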

Parameters

axis (int) – The dimension along which Softmax will be computed. Default: -1.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Softmax()(net)
Returns

A Tensor with the same type as x.

Return type

Tensor

Parametric activation

See tensorlayerx.nn.