API - Activations

To keep TensorLayerX simple, we minimize the number of built-in activation functions as much as we can, so we encourage you to write your own customized activation functions. For parametric activations, please read the layer APIs.

Your activation

Customizing an activation function in TensorLayerX is very easy. The following example implements an activation that multiplies its input by 2.

from tensorlayerx.nn import Module

class DoubleActivation(Module):
    def __init__(self):
        super(DoubleActivation, self).__init__()

    def forward(self, x):
        # multiply the input by 2
        return x * 2

double_activation = DoubleActivation()

For more complex activations, the backend API (TensorFlow, MindSpore, PaddlePaddle, or PyTorch) will be required.
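
Once defined, the custom activation can be applied to a tensor in the same way as the built-in activation layers shown in the Examples below (a minimal usage sketch; it assumes a backend has been configured for tensorlayerx):

import tensorlayerx as tlx

# apply the custom activation module to a tensor,
# following the same pattern as the built-in activation layers on this page
net = tlx.nn.Input([10, 200])
net = double_activation(net)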

Activation list

ELU([alpha])

This function is a modified version of ReLU.

PRelu([channel_shared, in_channels, a_init, …])

The PRelu class is a Parametric Rectified Linear layer.

PRelu6([channel_shared, in_channels, …])

The PRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

PTRelu6([channel_shared, in_channels, …])

The PTRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

ReLU()

This function is ReLU.

ReLU6()

This function is ReLU6.

Softplus()

This function is Softplus.

LeakyReLU([alpha])

This function is a modified version of ReLU, introducing a nonzero gradient for negative input.

LeakyReLU6([alpha])

This activation function is a modified version of leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013].

LeakyTwiceRelu6([alpha_low, alpha_high])

This activation function is a modified version of leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013].

Ramp([v_min, v_max])

Ramp activation function.

Swish()

Swish function.

HardTanh()

Hard tanh activation function.

Tanh()

This function is Tanh.

Sigmoid()

Computes sigmoid of x element-wise.

Softmax()

Computes softmax activations.

Mish()

Mish activation function.

TensorLayerX Activations

ELU

class tensorlayerx.nn.activation.ELU(alpha=1.0)

This function is a modified version of ReLU. It is continuous and differentiable at all points.

The function returns the following results:
  • When x < 0: f(x) = alpha * (exp(x) - 1).

  • When x >= 0: f(x) = x.

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) – Scale for the negative factor.

  • name (str) – The function name (optional).

Returns

A Tensor in the same type as x.

Return type

Tensor

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.ELU(alpha=0.5)(net)
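
For reference, the piecewise definition above can be written out directly; the following NumPy sketch is for illustration only and is not the TensorLayerX implementation:

import numpy as np

def elu_reference(x, alpha=1.0):
    # f(x) = alpha * (exp(x) - 1) for x < 0, f(x) = x for x >= 0
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

print(elu_reference([-2.0, 0.0, 3.0], alpha=0.5))  # approx. [-0.432  0.  3.]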

PRelu

class tensorlayerx.nn.activation.PRelu(channel_shared=False, in_channels=None, a_init='truncated_normal', name=None, data_format='channels_last', dim=2)

The PRelu class is a Parametric Rectified Linear layer. It follows f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha is a learned array with the same shape as x.

Parameters
  • channel_shared (boolean) – If True, single weight is shared by all channels.

  • in_channels (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer or str) – The initializer for initializing the alpha(s).

  • name (None or str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> prelulayer = tlx.nn.PRelu(channel_shared=True, in_channels=5)(inputs)
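
To illustrate channel_shared, the sketch below applies the PRelu formula with NumPy: with channel_shared=True a single alpha is broadcast over all channels, otherwise one alpha is kept per input channel. This is an illustration of the formula only; in the layer itself the alpha weights are created by a_init and learned during training.

import numpy as np

def prelu_reference(x, alpha):
    # f(x) = alpha * x for x < 0, f(x) = x for x >= 0; alpha broadcasts over x
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < 0, alpha * x, x)

x = np.random.randn(10, 5)               # batch of 10 samples, 5 channels
alpha_shared = np.full((1,), 0.25)       # channel_shared=True: a single alpha
alpha_per_channel = np.full((5,), 0.25)  # channel_shared=False: one alpha per channel
out_shared = prelu_reference(x, alpha_shared)
out_per_channel = prelu_reference(x, alpha_per_channel)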

PRelu6

class tensorlayerx.nn.activation.PRelu6(channel_shared=False, in_channels=None, a_init='truncated_normal', name=None, data_format='channels_last', dim=2)

The PRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

This activation layer uses a modified version of tlx.nn.LeakyReLU() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also uses a modified version of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This activation layer extends the logic by adding leaky behaviour below zero while capping the output at six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • channel_shared (boolean) – If True, single weight is shared by all channels.

  • in_channels (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer or str) – The initializer for initializing the alpha(s).

  • name (None or str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> prelulayer = tlx.nn.PRelu6(channel_shared=True, in_channels=5)(inputs)

PTRelu6

class tensorlayerx.nn.activation.PTRelu6(channel_shared=False, in_channels=None, data_format='channels_last', a_init='truncated_normal', name=None)

The PTRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

This activation layer uses a modified version of tlx.nn.LeakyReLU() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also uses a modified version of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This activation layer pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

This version goes one step beyond PRelu6 by introducing leaky behaviour on the positive side when x > 6.

Parameters
  • channel_shared (boolean) – If True, single weight is shared by all channels.

  • in_channels (int) – The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer or str) – The initializer for initializing the alpha(s).

  • name (None or str) – A unique layer name.

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> prelulayer = tlx.nn.PTRelu6(channel_shared=True, in_channels=5)(inputs)
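
The three segments above can be checked with a small NumPy reference. This is an illustration of the formula only; in the layer, alpha_low and alpha_high are learned weights initialized by a_init.

import numpy as np

def ptrelu6_reference(x, alpha_low=0.2, alpha_high=0.2):
    # x < 0:       alpha_low * x
    # 0 <= x <= 6: x
    # x > 6:       6 + alpha_high * (x - 6)
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < 0, alpha_low * x,
                    np.where(x > 6, 6.0 + alpha_high * (x - 6.0), x))

print(ptrelu6_reference([-5.0, 3.0, 8.0]))  # [-1.   3.   6.4]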

ReLU

class tensorlayerx.nn.activation.ReLU

This function is ReLU.

The function returns the following results:
  • When x < 0: f(x) = 0.

  • When x >= 0: f(x) = x.

Parameters

x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.ReLU()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

ReLU6

class tensorlayerx.nn.activation.ReLU6

This function is ReLU6.

The function returns the following results:
  • ReLU6(x) = min(max(0, x), 6).

Parameters

x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.ReLU6()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor
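
For reference, the clamping formula ReLU6(x) = min(max(0, x), 6) can be reproduced with NumPy (illustration only, not the backend implementation):

import numpy as np

x = np.array([-2.0, 3.0, 9.0])
print(np.minimum(np.maximum(x, 0.0), 6.0))  # [0. 3. 6.]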

Softplus

class tensorlayerx.nn.activation.Softplus

This function is Softplus.

The function returns the following results:
  • softplus(x) = log(exp(x) + 1).

Parameters

x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Softplus()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

LeakyReLU

class tensorlayerx.nn.activation.LeakyReLU(alpha=0.2)

This function is a modified version of ReLU, introducing a nonzero gradient for negative input. Introduced by the paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x >= 0: f(x) = x.

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) – Slope.

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.LeakyReLU(alpha=0.5)(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

LeakyReLU6

class tensorlayerx.nn.activation.LeakyReLU6(alpha=0.2)

This activation function is a modified version of leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) – Slope.

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.LeakyReLU6(alpha=0.5)(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

LeakyTwiceRelu6

class tensorlayerx.nn.activation.LeakyTwiceRelu6(alpha_low=0.2, alpha_high=0.2)

This activation function is a modified version of leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This function pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha_low (float) – Slope for x < 0: f(x) = alpha_low * x.

  • alpha_high (float) – Slope for x > 6: f(x) = 6 + (alpha_high * (x-6)).

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.LeakyTwiceRelu6(alpha_low=0.5, alpha_high=0.2)(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

Ramp

class tensorlayerx.nn.activation.Ramp(v_min=0, v_max=1)

Ramp activation function.

Reference: tf.clip_by_value (https://www.tensorflow.org/api_docs/python/tf/clip_by_value)

Parameters
  • x (Tensor) – input.

  • v_min (float) – cap input to v_min as a lower bound.

  • v_max (float) – cap input to v_max as an upper bound.

Returns

A Tensor in the same type as x.

Return type

Tensor

Examples

>>> inputs = tlx.nn.Input([10, 5])
>>> ramplayer = tlx.nn.Ramp()(inputs)
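
The behaviour corresponds to clipping the input to the range [v_min, v_max], as the tf.clip_by_value reference suggests; a NumPy illustration:

import numpy as np

x = np.array([-0.5, 0.3, 1.7])
print(np.clip(x, 0.0, 1.0))  # [0.  0.3 1. ]  -- matches Ramp(v_min=0, v_max=1) element-wise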

Swish

class tensorlayerx.nn.activation.Swish

Swish function.

Parameters
  • x (Tensor) – input.

  • name (str) – function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Swish()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor
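
Swish is commonly defined as swish(x) = x * sigmoid(x); the NumPy sketch below illustrates that definition and is not the backend implementation:

import numpy as np

def swish_reference(x):
    # swish(x) = x * sigmoid(x), with sigmoid(x) = 1 / (1 + exp(-x))
    x = np.asarray(x, dtype=np.float64)
    return x * (1.0 / (1.0 + np.exp(-x)))

print(swish_reference([-1.0, 0.0, 2.0]))  # approx. [-0.269  0.  1.762]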

HardTanh

class tensorlayerx.nn.activation.HardTanh

Hard tanh activation function.

It is a ramp function with a lower bound of -1 and an upper bound of 1; the shortcut name is htanh.

Parameters
  • x (Tensor) – input.

  • name (str) – The function name (optional).

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.HardTanh()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor
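
Given the ramp description above (lower bound -1, upper bound 1), the result can be illustrated with NumPy:

import numpy as np

x = np.array([-3.0, -0.5, 0.5, 3.0])
print(np.clip(x, -1.0, 1.0))  # [-1.  -0.5  0.5  1. ]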

Mish

class tensorlayerx.nn.activation.Mish

Mish activation function.

Reference: Mish: A Self Regularized Non-Monotonic Neural Activation Function [Diganta Misra, 2019] (https://arxiv.org/abs/1908.08681)

Parameters

x (Tensor) – input.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Mish()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor
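
Following the referenced paper, mish(x) = x * tanh(softplus(x)); a NumPy sketch of that formula (illustration only, not the backend implementation):

import numpy as np

def mish_reference(x):
    # mish(x) = x * tanh(softplus(x)), softplus(x) = log(1 + exp(x))
    x = np.asarray(x, dtype=np.float64)
    return x * np.tanh(np.log1p(np.exp(x)))

print(mish_reference([-2.0, 0.0, 2.0]))  # approx. [-0.253  0.  1.944]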

Tanh

class tensorlayerx.nn.activation.Tanh

This function is Tanh. Computes hyperbolic tangent of x element-wise.

Parameters

x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Tanh()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

Sigmoid

class tensorlayerx.nn.activation.Sigmoid

Computes sigmoid of x element-wise: sigmoid(x) = 1 / (1 + exp(-x)).

Parameters

x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Sigmoid()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor

Softmax

class tensorlayerx.nn.activation.Softmax

Computes softmax activations: softmax(x) = exp(x) / sum(exp(x), axis).

Parameters

x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

Examples

>>> net = tlx.nn.Input([10, 200])
>>> net = tlx.nn.Softmax()(net)

Returns

A Tensor in the same type as x.

Return type

Tensor
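
A NumPy sketch of the formula above; shifting by the row maximum before exponentiating is mathematically equivalent and avoids overflow (illustration only, not the backend implementation):

import numpy as np

def softmax_reference(logits, axis=-1):
    # softmax = exp(logits) / sum(exp(logits), axis), computed with a max shift for stability
    logits = np.asarray(logits, dtype=np.float64)
    shifted = logits - np.max(logits, axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=axis, keepdims=True)

print(softmax_reference([[1.0, 2.0, 3.0]]))  # each row sums to 1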

Parametric activation

See tensorlayerx.nn.