node_ops

The node_ops module is a collection of mid- to high-level functions that take a tensor or structured list of tensors, perform a sequence of tensorflow operations, and return a tensor or structured list of tensors. All node_ops functions conform to the following specification:

  • All tensor input (if the function takes tensor input) is received by the function's first argument, which may be a single tensor, a list of tensors, or a structured list of tensors, e.g., a list of lists of tensors.
  • The return value is a tensor, a list of tensors, or a structured list of tensors.
  • The final argument is an optional name argument for variable_scope.

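A user-defined function conforming to this spec might look like the following sketch (scale_op and its scale parameter are illustrative, not part of node_ops):

import tensorflow as tf

def scale_op(tensor_in, scale=2.0, name='scale_op'):
    # Tensor input arrives first; the optional name comes last for variable_scope.
    with tf.variable_scope(name):
        return tensor_in * scale
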
Use Cases

node_ops functions may be used in a tensorflow script wherever you might use an equivalent sequence of tensorflow ops during the graph building portion of a script.

node_ops functions may be called in a .config file following the .config file syntax, which is explained in the Config Tutorial.

Making Custom Ops for Use with the config Module

The AntGraph constructor in the config module will add to the tensorflow graph any tensor operations which are specified in a config file and fit the node_ops spec but are not defined in the node_ops module. This leaves the user free to define new node_ops for use with the config module, and to use many pre-existing tensorflow and third-party ops with the config module as well.

The AntGraph constructor has two arguments function_map and imports which may be used to incorporate custom node_ops.

  • function_map is a hashmap of function_handle:function key-value pairs.
  • imports is a hashmap of module_name:path_to_module pairs for importing an entire module of custom node_ops.
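
A sketch of both arguments (the config file name, data hashmap, and module path below are illustrative placeholders; the import path assumes antk's usual layout):

from antk.core import config

graph = config.AntGraph('model.config',                      # hypothetical config file
                        data=datadict,                       # hypothetical data hashmap
                        function_map={'scale_op': scale_op},
                        imports={'my_ops': '/path/to/my_ops.py'})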

Accessing Tensors Created in a node_ops Function

Tensors which are created by a node_ops function but not returned to the caller are tracked via calls to tf.add_to_collection. These tensors can be retrieved later by calling tf.get_collection, following the convention below:

For a node_ops function which was handed the argument name='some_name':

  • The nth weight tensor created may be accessed as
tf.get_collection('some_name_weights')[n]
  • The nth bias tensor created may be accessed as
tf.get_collection('some_name_bias')[n]
  • The nth preactivation tensor created may be accessed as
tf.get_collection('some_name_preactivation')[n]
  • The nth activation tensor created may be accessed as
tf.get_collection('some_name_activations')[n]
  • The nth post dropout tensor created may be accessed as
tf.get_collection('some_name_dropouts')[n]
  • The nth post batch normalization tensor created may be accessed as
tf.get_collection('some_name_bn')[n]
  • The nth tensor created not listed above may be accessed as
tf.get_collection('some_name')[n]
  • For residual_dnn, the nth skip transform (applied when hidden layer sizes change), skip connection, and transform tensors may be accessed as
tf.get_collection('some_name_skiptransform')[n]
tf.get_collection('some_name_skipconnection')[n]
tf.get_collection('some_name_transform')[n]
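
For example, after building an op with name='some_name', internal tensors may be retrieved as in this sketch (the dnn call and layer sizes are illustrative; node_ops is assumed importable from antk.core):

import tensorflow as tf
from antk.core import node_ops

net = node_ops.dnn(x, [100, 50], name='some_name')   # x is a tensor defined elsewhere
first_weights = tf.get_collection('some_name_weights')[0]
second_activations = tf.get_collection('some_name_activations')[1]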

Weights

Here is a simple wrapper for common initializations of tensorflow Variables. There is an option for l2 regularization, which is automatically added to the objective function when using the generic_model module.

weights

Placeholders

Here is a simple wrapper for a tensorflow placeholder constructor that, when used in conjunction with the config module, infers the correct dimensions of the placeholder from a hashmap of numpy matrices keyed by string.

placeholder

Neural Networks

Warning

The output of a neural network node_ops function is the output after activation of the last hidden layer. For regression an additional call to linear must be made, and for classification an additional call to mult_log_reg must be made, as in the sketch below.
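
A sketch of the full pattern for classification (layer sizes and numclasses are illustrative):

hidden = node_ops.dnn(x, [256, 128], activation='relu', name='net')
predictions = node_ops.mult_log_reg(hidden, numclasses=10, name='softmax')

For regression, replace the mult_log_reg call with a call such as node_ops.linear(hidden, output_size, True).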

Initialization

Neural network weights are initialized with the following scheme, where the range depends on the second dimension of the input layer:

if activation == 'relu':
    irange = initrange * numpy.sqrt(2.0 / float(tensor_in.get_shape().as_list()[1]))
else:
    irange = initrange * (1.0 / numpy.sqrt(float(tensor_in.get_shape().as_list()[1])))

initrange above defaults to 1. The user may choose from several distributions:

  • 'norm', 'tnorm': irange scales a distribution with mean zero and standard deviation 1.
  • 'uniform': irange scales a uniform distribution with range [-1, 1].
  • 'constant': irange equals the initial scalar entries of the matrix.
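
For example, with the default initrange of 1, a 'relu' layer whose input has second dimension 512 would use:

irange = 1.0 * numpy.sqrt(2.0 / 512.0)   # = 0.0625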

Dropout

Dropout with the specified keep_prob is performed post activation.

Batch Normalization

If requested, batch normalization is performed after dropout.

Custom Activations

ident

tanhlecun

mult_log_reg

Tensor Operations

Some tensor operations from Kolda and Bader's Tensor Decompositions and Applications are provided here. For now these operations only work on tensors of order 3 or lower.

nmode_tensor_tomatrix

nmode_tensor_multiply

binary_tensor_combine

ternary_tensor_combine

Batch Normalization

batch_normalize

Dropout

Dropout is automatically turned off during evaluation when used in conjunction with the generic_model module.

dropout

API

node_ops.placeholder(dtype, shape=None, data=None, name='placeholder')[source]

Wrapper to create tensorflow Placeholder which infers dimensions given data.

Parameters:
  • dtype – Tensorflow dtype to initialize a Placeholder.
  • shape – Dimensions of Placeholder
  • data – Data to infer dimensions of Placeholder from.
  • name – Unique name for variable scope.
Returns:

A Tensorflow Placeholder.
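
A usage sketch (the numpy matrix is illustrative):

import numpy as np
import tensorflow as tf
from antk.core import node_ops

x_data = np.random.rand(100, 5)
x = node_ops.placeholder(tf.float32, data=x_data, name='x')
# x is a placeholder with shape [None, 5], inferred from x_data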

node_ops.cosine(operands, name='cosine')[source]

Takes the cosine of vectors in corresponding rows of the two matrix tensors in operands.

Parameters:
  • operands – A list of two tensors to take cosine of.
  • name – An optional name for unique variable scope.
Returns:

A tensor with dimensions (operands[0].shape[0], 1)

Raises:

ValueError when operands do not have matching shapes.

node_ops.x_dot_y(operands, name='x_dot_y')[source]

Takes the inner product of rows of operands[1] and operands[2], and adds optional bias terms operands[3] and operands[4]. If either operands[1] or operands[2] (or both) is a list of tensors, then a list of the pairwise dot products (with bias when len(operands) > 2) of the lists is returned.

Parameters:
  • operands – A list of 2, 3, or 4 tensors (the first two tensors may be replaced by lists of tensors, in which case the return value will be a list of the dot products for all members of the cross product of the two lists).
  • name – An optional identifier for unique variable_scope.
Returns:

A tensor or list of tensors with dimension (operands[1].shape[0], 1).

Raises:

ValueError when operands is not a list of at least two tensors.
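
A usage sketch (shapes are illustrative):

x = tf.ones([32, 10])
y = tf.ones([32, 10])
dots = node_ops.x_dot_y([x, y])   # a tensor with shape (32, 1)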

node_ops.lookup(dataname=None, data=None, indices=None, distribution='uniform', initrange=0.1, l2=0.0, shape=None, makeplace=True, name='lookup')[source]

A wrapper for tensorflow’s embedding_lookup which infers the shape of the weight matrix and placeholder value from the parameter data.

Parameters:
  • dataname – Used exclusively by config.py
  • data – A HotIndex object
  • indices – A Placeholder. If indices is None, the dimensions will be inferred from data.
  • distribution – Distribution for lookup weight initialization
  • initrange – Initrange for weight distribution.
  • l2 – Floating point number determining degree of l2 regularization for these weights in gradient descent update.
  • shape – The dimensions of the output tensor, typically [None, output-size]
  • makeplace – A boolean to tell whether or not a placeholder has been created for this data (Used by config.py)
  • name – A name for unique variable scope.
Returns:

tf.nn.embedding_lookup(wghts, indices), wghts, indices
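
A usage sketch (hotindex stands for a HotIndex object built from your data; the shape is illustrative):

embeddings, wghts, indices = node_ops.lookup(data=hotindex,
                                             shape=[None, 50],
                                             name='item_lookup')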

node_ops.embedding(tensors, name='embedding')[source]

A wrapper for tensorflow’s embedding_lookup

Parameters:
  • tensors – A list of two tensors: matrix, indices
  • name – Unique name for variable scope
Returns:

A matrix tensor where the i-th row = matrix[indices[i]]

node_ops.mult_log_reg(tensor_in, numclasses=None, data=None, dtype=tf.float32, initrange=1e-10, seed=None, l2=0.0, name='log_reg')[source]

Performs a multinomial logistic regression forward pass. Weights and bias initialized to zeros.

Parameters:
  • tensor_in – A 2d matrix tensor.
  • numclasses – Number of classes.
  • data – Data from which to infer numclasses when it is not given.
  • dtype – dtype for weights.
  • initrange – Scales weight initialization.
  • seed – For reproducible results.
  • l2 – Floating point number determining degree of l2 regularization for these weights in gradient descent update.
  • name – A name for unique variable scope.
Returns:

A tensor shape=(tensor_in.shape[0], numclasses)

node_ops.concat(tensors, output_dim, name='concat')[source]

Matrix multiplies each tensor in tensors by its own weight matrix and adds together the results.

Parameters:
  • tensors – A list of tensors.
  • output_dim – Dimension of output
  • name – An optional identifier for unique variable_scope.
Returns:

A tensor with shape [None, output_dim]
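
A usage sketch (shapes are illustrative):

a = tf.ones([32, 20])
b = tf.ones([32, 30])
combined = node_ops.concat([a, b], output_dim=10)   # shape (32, 10)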

node_ops.dnn(tensor_in, hidden_units, activation='tanh', distribution='tnorm', initrange=1.0, l2=0.0, bn=False, keep_prob=None, fan_scaling=False, name='dnn')[source]
Creates fully connected deep neural network subgraph. Adapted from skflow dnn_ops.py.

Neural Networks and Deep Learning

Using Neural Nets to Recognize Handwritten Digits

Parameters:
  • tensor_in – tensor or placeholder for input features.
  • hidden_units – list of counts of hidden units in each layer.
  • activation – activation function between layers. Can be None.
  • distribution – Distribution for lookup weight initialization
  • initrange – Initrange for weight distribution.
  • l2 – Floating point number determining degree of l2 regularization for these weights in gradient descent update.
  • bn – Whether or not to use batch normalization
  • keep_prob – if not None, will add a dropout layer with given probability.
  • name – A name for unique variable_scope.
Returns:

A tensor which is the output of the deep neural network.

node_ops.residual_dnn(tensor_in, hidden_units, activation='tanh', distribution='tnorm', initrange=1.0, l2=0.0, bn=False, keep_prob=None, fan_scaling=False, skiplayers=3, name='residual_dnn')[source]
Creates residual neural network with shortcut connections.
Deep Residual Learning for Image Recognition
Parameters:
  • tensor_in – tensor or placeholder for input features.
  • hidden_units – list of counts of hidden units in each layer.
  • activation – activation function between layers. Can be None.
  • distribution – Distribution for lookup weight initialization
  • initrange – Initrange for weight distribution.
  • l2 – Floating point number determining degree of l2 regularization for these weights in gradient descent update.
  • bn – Whether or not to use batch normalization
  • keep_prob – if not None, will add a dropout layer with given probability.
  • skiplayers – The number of layers to skip for the shortcut connection.
  • name – A name for unique variable scope
Returns:

A tensor which is the output of the residual deep neural network.

node_ops.highway_dnn(tensor_in, hidden_units, activation='tanh', distribution='tnorm', initrange=1.0, l2=0.0, bn=False, keep_prob=None, fan_scaling=False, bias_start=-1, name='highway_dnn')[source]
A highway deep neural network.
Training Very Deep Networks
Parameters:
  • tensor_in – A 2d matrix tensor.
  • hidden_units – list of counts of hidden units in each layer.
  • activation – Non-linearity to perform. Can be ident for no non-linearity.
  • distribution – Distribution for lookup weight initialization
  • initrange – Initrange for weight distribution.
  • l2 – Floating point number determining degree of l2 regularization for these weights in gradient descent update.
  • bn – Whether or not to use batch normalization
  • keep_prob – if not None, will add a dropout layer with given keep probability.
  • bias_start – initialization of transform bias weights
  • name – A name for unique variable_scope.
Returns:

A tensor which is the output of the highway deep neural network.

node_ops.linear(tensor_in, output_size, bias, bias_start=0.0, distribution='tnorm', initrange=1.0, l2=0.0, name="Linear")[source]

Linear map: \(\sum_i(args[i] * W_i)\), where \(W_i\) is a variable.

Parameters:
  • args – a 2D Tensor
  • output_size – int, second dimension of W[i].
  • bias – boolean, whether to add a bias term or not.
  • bias_start – starting value to initialize the bias; 0 by default.
  • distribution – Distribution for lookup weight initialization
  • initrange – Initrange for weight distribution.
  • l2 – Floating point number determining degree of of l2 regularization for these weights in gradient descent update.
  • name – VariableScope for the created subgraph; defaults to “Linear”.
Returns:

A 2D Tensor with shape [batch x output_size] equal to \(\sum_i(args[i] * W_i)\), where \(W_i\) are newly created matrices.

Raises:

ValueError: if some of the arguments have unspecified or wrong shape.

node_ops.batch_normalize(tensor_in, epsilon=1e-5, decay=0.999, name="batch_norm")[source]

Batch Normalization. See: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

An exponential moving average of means and variances is calculated to estimate the sample mean and sample variance for evaluation. For training, pair the placeholder is_training with [1] in the feed_dict; for testing, pair is_training with [0]. That is, let train = 1 for training and train = 0 for evaluation.

Parameters:
  • tensor_in – input Tensor
  • epsilon – A small float to avoid division by zero.
  • decay – Decay rate for the exponential moving average.
  • name – For variable_scope
Returns:

A tensor normalized to zero mean and unit variance according to the batch.
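
A sketch of the is_training convention (x, batch, devset, and the train/loss ops are assumed defined elsewhere, with the is_training placeholder in scope):

normed = node_ops.batch_normalize(h, name='bn')
# training step: pair is_training with [1]
sess.run(train_op, feed_dict={x: batch, is_training: [1]})
# evaluation: pair is_training with [0]
sess.run(loss, feed_dict={x: devset, is_training: [0]})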

node_ops.nmode_tensor_multiply(tensors, mode, leave_flattened=False, keep_dims=False, name='nmode_multiply')[source]

Nth mode tensor multiplication (for order three tensors) from Kolda and Bader's Tensor Decompositions and Applications. Works for matrices and vectors (matrices with a dimension of 1).

Parameters:
  • tensors – A list of two tensors: the first an order 3 tensor, the second an order 2 tensor.
  • mode – The mode to perform multiplication against.
  • leave_flattened – Whether or not to leave the result flattened rather than reshaping it back to order 3.
  • keep_dims – Whether or not to keep dimensions of size 1.
  • name – For variable scope
Returns:

Either an order 3 or order 2 tensor
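
A usage sketch (shapes are illustrative; in the Kolda and Bader convention the matrix dimension must match the size of the chosen mode, here mode 1 of size 4):

T = tf.ones([5, 4, 3])   # order 3 tensor
M = tf.ones([6, 4])      # order 2 tensor
result = node_ops.nmode_tensor_multiply([T, M], mode=1)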

node_ops.ternary_tensor_combine(tensors, initrange=1e-5, distribution='tnorm', l2=0.0, name='ternary_tensor_combine')[source]

For performing tensor multiplications with batches of data points against an order 3 weight tensor.

Parameters:
  • tensors
  • output_dim
  • initrange
  • name
Returns:

node_ops.khatri_rao(tensors, name='khatrirao')[source]

From David Palzer

Parameters:
  • tensors
  • name
Returns:

node_ops.binary_tensor_combine2(tensors, output_dim=10, initrange=1e-5, name='binary_tensor_combine2')[source]
node_ops.se(predictions, targets, name='squared_error')[source]

Squared Error.

node_ops.mse(predictions, targets, name='mse')[source]

Mean Squared Error.

node_ops.rmse(predictions, targets, name='rmse')[source]

Root Mean Squared Error

node_ops.mae(predictions, targets, name='mae')[source]

Mean Absolute Error

node_ops.other_cross_entropy(predictions, targets, name='logistic_loss')[source]

Logistic Loss

node_ops.cross_entropy(predictions, targets, name='cross_entropy')[source]
node_ops.perplexity(predictions, targets, name='perplexity')[source]
node_ops.detection(predictions, threshold, name='detection')[source]
node_ops.recall(predictions, targets, threshold=0.5, detects=None, name='recall')[source]

Percentage of actual classes predicted

Parameters:
  • targets – A one hot encoding of class labels (num_points X numclasses)
  • predictions – A real valued matrix with entries ranging between zero and 1 (num_points X numclasses)
  • threshold – The detection threshold (between zero and 1)
  • detects – In case detection is precomputed for efficiency when evaluating both precision and recall
Returns:

A scalar value

node_ops.precision(predictions, targets, threshold=0.5, detects=None, name='precision')[source]

Percentage of classes detected which are correct.

Parameters:
  • targets – A one hot encoding of class labels (num_points X numclasses)
  • predictions – A real valued matrix with entries ranging between zero and 1 (num_points X numclasses)
  • threshold – The detection threshold (between zero and 1)
  • detects – In case detection is precomputed for efficiency when evaluating both precision and recall
Returns:

A scalar value

node_ops.fscore(predictions=None, targets=None, threshold=0.5, precisions=None, recalls=None, name='fscore')[source]
node_ops.accuracy(predictions, targets, name='accuracy')[source]
exception node_ops.MissingShapeError[source]

Raised when placeholder cannot infer shape.

node_ops.binary_tensor_combine(*args, **kwargs)[source]

For performing tensor multiplications with batches of data points against an order 3 weight tensor.

Parameters:
  • tensors – A list of two matrices each with first dim batch-size
  • output_dim – The dimension of the third mode of the weight tensor
  • initrange – For initializing weight tensor
  • name – For variable scope
Returns:

A matrix with shape batch_size X output_dim
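
A usage sketch (shapes are illustrative):

a = tf.ones([32, 10])
b = tf.ones([32, 15])
combined = node_ops.binary_tensor_combine([a, b], output_dim=8)
# combined has shape batch_size X output_dim, here (32, 8)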

node_ops.convolutional_net(*args, **kwargs)[source]

See: Tensorflow Deep MNIST for Experts, Tensorflow Convolutional Neural Networks, ImageNet Classification with Deep Convolutional Neural Networks, skflow/examples/text_classification_character_cnn.py, skflow/examples/text_classification_cnn.py, Character-level Convolutional Networks for Text Classification

Parameters: in_progress
Returns:

node_ops.dropout(*args, **kwargs)[source]
Adds dropout node. Adapted from skflow dropout_ops.py.
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Parameters:
  • tensor_in – Input tensor.
  • prob – The percent of weights to keep.
  • name – A name for the tensor.
Returns:

Tensor of the same shape of tensor_in.

node_ops.fan_scale(initrange, activation, tensor_in)[source]
node_ops.ident(tensor_in, name='ident')[source]

Identity function for grouping tensors in the graph during config parsing.

Parameters: tensor_in – A Tensor or list of tensors
Returns: tensor_in

node_ops.nmode_tensor_tomatrix(*args, **kwargs)[source]

N-mode tensor unfolding (for order three tensors) from Kolda and Bader's Tensor Decompositions and Applications.

Parameters:
  • tensor – Order 3 tensor to unfold
  • mode – Mode to unfold (0,1,2, columns, rows, or fibers)
  • name – For variable scoping
Returns:

A matrix (order 2 tensor) with shape dim(mode) X \(\prod_{othermodes} dim(othermode)\)

node_ops.weights(*args, **kwargs)[source]

Wrapper parameterizing common constructions of tf.Variables.

Parameters:
  • distribution – A string identifying the distribution: 'tnorm' for truncated normal, 'rnorm' for random normal, 'constant' for constant, 'uniform' for uniform.
  • shape – Shape of weight tensor.
  • dtype – dtype for weights
  • initrange – Scales standard normal and truncated normal, value of constant dist., and range of uniform dist. [-initrange, initrange].
  • seed – For reproducible results.
  • l2 – Floating point number determining degree of l2 regularization for these weights in gradient descent update.
  • name – For variable scope.
  • name – For variable scope.
Returns:

A tf.Variable.
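
A usage sketch (shape and hyperparameters are illustrative):

W = node_ops.weights('tnorm', [100, 50], initrange=0.1, l2=0.001, name='W')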
