TensorFlow: Weighted Average Layer with Trainable Weights

This post presents TensorFlow code that estimates the weights used to construct a composite output time series from three input time series.


Weighted Average Layer with Trainable Parameters



In the previous post, we implemented the WeightedAverageLayer, which creates a weighted average time series from three given time series using fixed weights. In this post, we modify that layer to carry trainable parameters, which lets us tackle the reverse problem: recovering the weights from the composite series. For reference, a minimal sketch of such a fixed-weight layer is shown below.
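
The following sketch is illustrative only and not the exact code from the previous post; here the weights are stored as constants, so nothing is learned during training:

import tensorflow as tf

# Fixed-weight version (illustrative sketch): the weights are
# constants, so the layer has no trainable parameters
class WALayer_fixed(tf.keras.layers.Layer):
    def __init__(self, wgts, **kwargs):
        super(WALayer_fixed, self).__init__(**kwargs)
        self.wgts = tf.constant(wgts, dtype=tf.float32)
    
    def call(self, inputs):
        # Weighted sum of the input series along the last axis
        return tf.reduce_sum(inputs * self.wgts, 
                             axis=-1, keepdims=True)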


Python Jupyter Notebook Code


The following TensorFlow code defines a weighted average layer with trainable parameters and fits it to a composite output time series in order to estimate the weights of its three constituent time series.

Since the output time series is constructed with fixed weights (0.5, 0.3, 0.2), we expect the estimated weight parameters to closely match these values.

Because these parameters represent weights, each must lie between 0 and 1 and together they must sum to one. To enforce this, the softmax function transforms the unconstrained parameters into constrained values, as the short sketch below illustrates.
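
As a quick standalone illustration (separate from the main code below), tf.nn.softmax maps arbitrary real numbers to values in (0, 1) that sum to one; the input values here are made up:

import tensorflow as tf

# Illustrative unconstrained parameters (arbitrary values)
raw = tf.constant([1.2, -0.3, 0.5])
wgts = tf.nn.softmax(raw)

print(wgts.numpy())                  # each entry lies in (0, 1)
print(tf.reduce_sum(wgts).numpy())   # sums to 1.0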

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
 
class WALayer_trainable(tf.keras.layers.Layer):
    
    def __init__(self, **kwargs):
        super(WALayer_trainable, self).__init__(**kwargs)
    
    def build(self, input_shape):
        # One unconstrained parameter per input series
        self.custom_wgts = \
            self.add_weight(shape=(input_shape[-1],),
                #initializer='random_normal',
                initializer=tf.keras.initializers.Ones(),  
                trainable=True)
        super(WALayer_trainable, self).build(input_shape)
    
    def call(self, inputs):
        # Weighted sum across the last axis using normalized weights
        norm_wgts = self.get_norm_wgts()
        wgt_sum = tf.reduce_sum(inputs * norm_wgts, 
                                axis=-1, keepdims=True)
        return wgt_sum
    
    def get_norm_wgts(self):
        # Map the raw parameters to weights in (0, 1) summing to one
        return tf.nn.softmax(self.custom_wgts, axis=-1)
 
#-------------------------------------------------
 
# Generating time values
t = np.linspace(0, 10, num=1000, 
                endpoint=False, dtype=np.float32)
t = tf.constant(t, dtype=tf.float32)
 
# Defining different trigonometric time series
series0 = 0.3*tf.sin(t*0.5*np.pi) + t*0.05
series1 = 0.5*tf.sin(t*1*np.pi)
series2 = 0.2*tf.sin(t*5*np.pi) + 0.1
 
# Concatenating 3 time series to create the tensor
m_series = [series0, series1, series2]
tf_series = tf.stack(m_series, axis=1)
 
# Define the fixed wgts and true series
wgts = [0.5, 0.3, 0.2]
true_series = np.dot(tf_series, wgts)
 
#-------------------------------------------------
 
# Input layer with input dimension
inputs = tf.keras.layers.Input(shape=(3,))
# Custom weighted average layer
wgt_avg = WALayer_trainable(name="wgt_avg")(inputs)
# Output layer
outputs = tf.keras.layers.Dense(1)(wgt_avg)
# Create the model
model = tf.keras.models.Model(inputs=inputs, 
                              outputs=outputs)
 
# Compile and train the model
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(tf_series, true_series, 
          epochs=300, batch_size=32, shuffle=False)
 
#-------------------------------------------------
 
# WALayer_trainable is at index 1 in the model
wgts_by_num = model.layers[1].get_norm_wgts()  
# or by its name
wgts_by_name = model.get_layer("wgt_avg").get_norm_wgts()
 
print("Estimated wgts by num: ", wgts_by_num.numpy())
print("Estimated wgts by name: ", wgts_by_name.numpy())
print("True wgts: ", wgts)
 


After training, the estimated weights closely match the true values, as expected:

Estimated wgts by num:  [0.5000006  0.30000034 0.19999915]
Estimated wgts by name: [0.5000006  0.30000034 0.19999915]
True wgts:              [0.5, 0.3, 0.2]
 
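
Since matplotlib is imported above, we can also add a quick visual check of the fit. A minimal sketch, assuming the variables from the code above (t, tf_series, true_series, model) are still in scope:

# Compare the model's reconstruction with the true composite series
pred_series = model.predict(tf_series)

plt.figure(figsize=(10, 4))
plt.plot(t, true_series, label="true composite")
plt.plot(t, pred_series, linestyle="--", label="model output")
plt.legend()
plt.show()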

Together, this post and the previous one show how to create a custom layer with fixed parameters and then modify it so that those parameters are learned through training.

