API - Losses

To keep TensorLayerX simple, we provide only a minimal set of cost functions. For more complex loss functions, the backend API (TensorFlow, MindSpore, PaddlePaddle, or PyTorch) will be required.

Note

Please refer to Getting Started for how to obtain specific weights for weight regularization.

softmax_cross_entropy_with_logits(output, target)

Softmax cross-entropy operation; returns the TensorLayerX expression of cross-entropy between two distributions, implementing softmax internally.

sigmoid_cross_entropy(output, target[, …])

Sigmoid cross-entropy operation, see tf.nn.sigmoid_cross_entropy_with_logits.

binary_cross_entropy(output, target[, reduction])

Binary cross entropy operation.

mean_squared_error(output, target[, reduction])

Return the TensorLayerX expression of mean-square-error (L2) of two batches of data.

normalized_mean_square_error(output, target)

Return the TensorLayerX expression of normalized mean-square-error of two distributions.

absolute_difference_error(output, target[, …])

Return the TensorLayerX expression of absolute difference error (L1) of two batches of data.

dice_coe(output, target[, loss_type, axis, …])

Soft dice (Sørensen or Jaccard) coefficient for comparing the similarity of two batches of data, usually used for binary image segmentation, i.e. the labels are binary.

dice_hard_coe(output, target[, threshold, …])

Non-differentiable Sørensen–Dice coefficient for comparing the similarity of two batches of data, usually used for binary image segmentation, i.e. the labels are binary.

iou_coe(output, target[, threshold, axis, …])

Non-differentiable Intersection over Union (IoU) for comparing the similarity of two batches of data, usually used for evaluating binary image segmentation.

cross_entropy_seq(logits, target_seqs[, …])

Returns the expression of cross-entropy of two sequences, implementing softmax internally.

cross_entropy_seq_with_mask(logits, …[, …])

Returns the expression of cross-entropy of two sequences, implementing softmax internally.

cosine_similarity(v1, v2)

Cosine similarity [-1, 1].

li_regularizer(scale[, scope])

Li regularization removes the neurons of the previous layer.

lo_regularizer(scale)

Lo regularization removes the neurons of the current layer.

maxnorm_regularizer([scale])

Max-norm regularization returns a function that can be used to apply max-norm regularization to weights.

maxnorm_o_regularizer(scale)

Max-norm output regularization removes the neurons of the current layer.

maxnorm_i_regularizer(scale)

Max-norm input regularization removes the neurons of the previous layer.

Softmax cross entropy

tensorlayerx.losses.softmax_cross_entropy_with_logits(output, target, reduction='mean')[source]

Softmax cross-entropy operation; returns the TensorLayerX expression of cross-entropy between two distributions, implementing softmax internally. See tf.nn.sparse_softmax_cross_entropy_with_logits.

Parameters:
  • output (Tensor) – A batch of distribution with shape: [batch_size, num of classes].

  • target (Tensor) – A batch of indices with shape: [batch_size, ].

  • reduction (str) – The optional values are “mean”, “sum”, and “none”. If “none”, do not perform reduction.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
>>> labels = tlx.convert_to_tensor([[1], [2]])
>>> loss = tlx.losses.softmax_cross_entropy_with_logits(logits, labels)

Sigmoid cross entropy

tensorlayerx.losses.sigmoid_cross_entropy(output, target, reduction='mean')[source]

Sigmoid cross-entropy operation, see tf.nn.sigmoid_cross_entropy_with_logits.

Parameters:
  • output (Tensor) – A batch of distribution with shape: [batch_size, num of classes].

  • target (Tensor) – The target distribution, with the same shape as output.

  • reduction (str) – The optional values are “mean”, “sum”, and “none”. If “none”, do not perform reduction.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> losses = tlx.losses.sigmoid_cross_entropy(logits, labels)

Binary cross entropy

tensorlayerx.losses.binary_cross_entropy(output, target, reduction='mean')[source]

Binary cross entropy operation.

Parameters:
  • output (Tensor) – Tensor of type float32 or float64.

  • target (Tensor) – The target distribution, with the same shape as output.

  • reduction (str) – The optional values are “mean”, “sum”, and “none”. If “none”, do not perform reduction.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [0.6, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> losses = tlx.losses.binary_cross_entropy(logits, labels)

Mean squared error (L2)

tensorlayerx.losses.mean_squared_error(output, target, reduction='mean')[source]

Return the TensorLayerX expression of mean-square-error (L2) of two batches of data.

Parameters:
  • output (Tensor) – 2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].

  • target (Tensor) – The target distribution, with the same shape as output.

  • reduction (str) – The optional values are “mean”, “sum”, and “none”. If “none”, do not perform reduction.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [1.1, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> losses = tlx.losses.mean_squared_error(logits, labels)

Normalized mean square error

tensorlayerx.losses.normalized_mean_square_error(output, target, reduction='mean')[source]

Return the TensorLayerX expression of normalized mean-square-error of two distributions.

Parameters:
  • output (Tensor) – 2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].

  • target (Tensor) – The target distribution, with the same shape as output.

  • reduction (str) – The optional values are “mean”, “sum”, and “none”. If “none”, do not perform reduction.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [1.1, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> losses = tlx.losses.normalized_mean_square_error(logits, labels)

Absolute difference error (L1)

tensorlayerx.losses.absolute_difference_error(output, target, reduction='mean')[source]

Return the TensorLayerX expression of absolute difference error (L1) of two batches of data.

Parameters:
  • output (Tensor) – 2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].

  • target (Tensor) – The target distribution, with the same shape as output.

  • reduction (str) – The optional values are “mean”, “sum”, and “none”. If “none”, do not perform reduction.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [1.1, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> losses = tlx.losses.absolute_difference_error(logits, labels)

Dice coefficient

tensorlayerx.losses.dice_coe(output, target, loss_type='jaccard', axis=(1, 2, 3), smooth=1e-05)[source]

Soft dice (Sørensen or Jaccard) coefficient for comparing the similarity of two batches of data, usually used for binary image segmentation, i.e. the labels are binary. The coefficient ranges from 0 to 1, with 1 meaning a perfect match.

Parameters:
  • output (Tensor) – A distribution with shape: [batch_size, ….], (any dimensions).

  • target (Tensor) – The target distribution, with the same shape as output.

  • loss_type (str) – jaccard or sorensen, default is jaccard.

  • axis (tuple of int) – All dimensions are reduced, default [1,2,3].

  • smooth (float) –

    This small value is added to both the numerator and the denominator.
    • If both output and target are empty, it makes sure the dice is 1.

    • If either output or target is empty (all pixels are background), dice = smooth / (small_value + smooth); if smooth is very small, the dice is close to 0 (even when the image values are below the threshold), so in this case a larger smooth gives a higher dice. See the sketch below.
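
For reference, a sketch of how the soft dice with loss_type='jaccard' is typically computed; this is an illustration of the standard formulation (tlx.reduce_sum and tlx.reduce_mean stand for the backend reduction ops), not necessarily the exact implementation:

>>> # output, target, axis and smooth as in the arguments above
>>> inse = tlx.reduce_sum(output * target, axis=axis)  # intersection per sample
>>> l = tlx.reduce_sum(output * output, axis=axis)
>>> r = tlx.reduce_sum(target * target, axis=axis)
>>> dice = (2.0 * inse + smooth) / (l + r + smooth)
>>> dice = tlx.reduce_mean(dice)  # average over the batch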

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [1.1, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> dice_loss = tlx.losses.dice_coe(logits, labels, axis=-1)

Hard Dice coefficient

tensorlayerx.losses.dice_hard_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-05)[source]

Non-differentiable Sørensen–Dice coefficient for comparing the similarity of two batches of data, usually used for binary image segmentation, i.e. the labels are binary. The coefficient ranges from 0 to 1, with 1 meaning a perfect match.

Parameters:
  • output (tensor) – A distribution with shape: [batch_size, ….], (any dimensions).

  • target (tensor) – The target distribution, with the same shape as output.

  • threshold (float) – The threshold above which a value is counted as true.

  • axis (tuple of integer) – All dimensions are reduced, default (1,2,3).

  • smooth (float) – This small value will be added to the numerator and denominator, see dice_coe.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [1.1, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> dice_loss = tlx.losses.dice_hard_coe(logits, labels, axis=-1)

IOU coefficient

tensorlayerx.losses.iou_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-05)[source]

Non-differentiable Intersection over Union (IoU) for comparing the similarity of two batches of data, usually used for evaluating binary image segmentation. The coefficient ranges from 0 to 1, with 1 meaning a perfect match.

Parameters:
  • output (tensor) – A batch of distribution with shape: [batch_size, ….], (any dimensions).

  • target (tensor) – The target distribution, with the same shape as output.

  • threshold (float) – The threshold above which a value is counted as true.

  • axis (tuple of integer) – All dimensions are reduced, default (1,2,3).

  • smooth (float) – This small value will be added to the numerator and denominator, see dice_coe.

Examples

>>> import tensorlayerx as tlx
>>> logits = tlx.convert_to_tensor([[0.4, 0.2, 0.8], [1.1, 0.5, 0.3]])
>>> labels = tlx.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> iou = tlx.losses.iou_coe(logits, labels, axis=-1)

Notes

  • IoU cannot be used as a training loss; people usually use the dice coefficient for training, and IoU and hard-dice for evaluation, for example as sketched below.
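
A hedged sketch of that split (outputs and labels are illustrative names):

>>> train_loss = 1.0 - tlx.losses.dice_coe(outputs, labels, axis=-1)  # soft dice for training
>>> eval_iou = tlx.losses.iou_coe(outputs, labels, axis=-1)           # IoU for evaluation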

Cross entropy for sequence

tensorlayerx.losses.cross_entropy_seq(logits, target_seqs, batch_size=None)[source]

Returns the expression of cross-entropy of two sequences, implementing softmax internally. Normally used for fixed-length RNN outputs; see the PTB example.

Parameters:
  • logits (Tensor) – 2D tensor with shape of [batch_size * n_steps, n_classes].

  • target_seqs (Tensor) – The target sequence, a 2D tensor with shape [batch_size, n_steps]. If the number of steps is dynamic, please use tlx.losses.cross_entropy_seq_with_mask instead.

  • batch_size (None or int.) –

    Whether to divide the losses by batch size.
    • If integer, the return losses will be divided by batch_size.

    • If None (default), the return losses will not be divided by anything.

Examples

>>> import tensorlayerx as tlx
>>> # see the PTB example for more details:
>>> # https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_ptb_lstm.py
>>> # outputs shape : (batch_size * n_steps, n_classes)
>>> # targets shape : (batch_size, n_steps)
>>> losses = tlx.losses.cross_entropy_seq(outputs, targets)

Cross entropy with mask for sequence

tensorlayerx.losses.cross_entropy_seq_with_mask(logits, target_seqs, input_mask, return_details=False, name=None)[source]

Returns the expression of cross-entropy of two sequences, implementing softmax internally. Normally used for a dynamic RNN with synced sequence input and output.

Parameters:
  • logits (Tensor) – 2D tensor with shape [batch_size * ?, n_classes], where ? means a dynamic number of steps for each example. It can be obtained from DynamicRNNLayer by setting return_seq_2d to True.

  • target_seqs (Tensor) – Int tensor of target IDs (e.g. word IDs) with shape [batch_size, ?], where ? means a dynamic number of steps for each example.

  • input_mask (Tensor) – The mask to compute the loss; it has the same shape as target_seqs, normally 0 or 1.

  • return_details (boolean) –

    Whether to return detailed losses.
    • If False (default), only returns the loss.

    • If True, returns the loss, losses, weights and targets (see source code).

Examples

>>> import tensorlayerx as tlx
>>> import tensorflow as tf
>>> import numpy as np
>>> batch_size = 64
>>> vocab_size = 10000
>>> embedding_size = 256
>>> ni = tlx.nn.Input([batch_size, None], dtype=tf.int64)
>>> net_list = []
>>> net_list.append(tlx.nn.Embedding(
...         vocabulary_size = vocab_size,
...         embedding_size = embedding_size,
...         name = 'seq_embedding'))
>>> net_list.append(tlx.nn.RNN(
...         cell =tf.keras.layers.LSTMCell(units=embedding_size, dropout=0.1),
...         return_seq_2d = True,
...         name = 'dynamicrnn'))
>>> net_list.append(tlx.nn.Dense(n_units=vocab_size, name="output"))
>>> model = tlx.nn.Sequential(net_list)
>>> input_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64)
>>> target_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64)
>>> input_mask = np.random.randint(0, 2, size=(batch_size, 10), dtype=np.int64)
>>> outputs = model(input_seqs)
>>> loss = tlx.losses.cross_entropy_seq_with_mask(outputs, target_seqs, input_mask)

Cosine similarity

tensorlayerx.losses.cosine_similarity(v1, v2)[source]

Cosine similarity [-1, 1].

Parameters:
  • v1 (Tensor) – Tensor with shape [batch_size, n_feature].

  • v2 (Tensor) – Tensor with the same shape as v1.
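
Examples

A minimal usage sketch (the tensor values are illustrative):

>>> import tensorlayerx as tlx
>>> v1 = tlx.convert_to_tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> v2 = tlx.convert_to_tensor([[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]])
>>> sim = tlx.losses.cosine_similarity(v1, v2)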

Regularization functions

For tf.nn.l2_loss, tf.contrib.layers.l1_regularizer, tf.contrib.layers.l2_regularizer and tf.contrib.layers.sum_regularizer, see the TensorFlow API.

Maxnorm

maxnorm_regularizer([scale]) – Max-norm regularization returns a function that can be used to apply max-norm regularization to weights.

Special

tensorlayerx.losses.li_regularizer(scale, scope=None)[source]

Li regularization removes the neurons of the previous layer; the "i" represents inputs. Returns a function that can be used to apply group Li regularization to weights. The implementation follows TensorFlow contrib.

Parameters:
  • scale (float) – A scalar multiplier Tensor. 0.0 disables the regularizer.

  • scope (str) – An optional scope name for this function.

Returns:

A function with signature li(weights, name=None) that applies Li regularization.

Raises:

ValueError – If scale is outside of the range [0.0, 1.0] or if scale is not a float.
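
Examples

A hedged usage sketch; the weight matrix below is a hypothetical stand-in for a layer's weights:

>>> import numpy as np
>>> import tensorlayerx as tlx
>>> W = tlx.convert_to_tensor(np.random.randn(784, 100).astype(np.float32))  # hypothetical weight matrix
>>> reg = tlx.losses.li_regularizer(scale=0.001)
>>> penalty = reg(W)  # add this penalty to the training loss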

tensorlayerx.losses.lo_regularizer(scale)[source]

Lo regularization removes the neurons of the current layer; the "o" represents outputs. Returns a function that can be used to apply group Lo regularization to weights. The implementation follows TensorFlow contrib.

Parameters:

scale (float) – A scalar multiplier Tensor. 0.0 disables the regularizer.

Returns:

A function with signature lo(weights, name=None) that applies Lo regularization.

Raises:

ValueError – If scale is outside of the range [0.0, 1.0] or if scale is not a float.

tensorlayerx.losses.maxnorm_o_regularizer(scale)[source]

Max-norm output regularization removes the neurons of the current layer. Returns a function that can be used to apply max-norm regularization to each column of the weight matrix. The implementation follows TensorFlow contrib.

Parameters:

scale (float) – A scalar multiplier Tensor. 0.0 disables the regularizer.

Returns:

A function with signature mn_o(weights, name=None) that applies max-norm output regularization.

Raises:

ValueError – If scale is outside of the range [0.0, 1.0] or if scale is not a float.

tensorlayerx.losses.maxnorm_i_regularizer(scale)[source]

Max-norm input regularization removes the neurons of the previous layer. Returns a function that can be used to apply max-norm regularization to each row of the weight matrix. The implementation follows TensorFlow contrib.

Parameters:

scale (float) – A scalar multiplier Tensor. 0.0 disables the regularizer.

Returns:

A function with signature mn_i(weights, name=None) that applies max-norm input regularization.

Raises:

ValueError – If scale is outside of the range [0.0, 1.0] or if scale is not a float.
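
Examples

A hedged end-to-end sketch of adding a weight-regularization penalty to a loss; the model, logits and labels below are assumptions for illustration (see Getting Started for how to access specific weights):

>>> import tensorlayerx as tlx
>>> # assume model is a tlx.nn.Module and logits, labels come from a training step
>>> loss = tlx.losses.softmax_cross_entropy_with_logits(logits, labels)
>>> reg = tlx.losses.maxnorm_regularizer(1.0)
>>> for w in model.trainable_weights:  # add a penalty for every trainable weight
...     loss = loss + reg(w)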