

I spent a lot of time converting Lasagne code to TensorFlow Keras. Although I was able to run the converted code successfully, I could not reproduce the results reported in the paper.

Some help is available in python - convert Lasagne to Keras code (CNN -> LSTM) - Stack Overflow, but it covers only specific layers rather than the entire architecture.

**Lasagne Code**


Python

net = {}
net['input'] = lasagne.layers.InputLayer((BATCH_SIZE, 1, SLIDING_WINDOW_LENGTH, NB_SENSOR_CHANNELS))
net['conv1/5x1'] = lasagne.layers.Conv2DLayer(net['input'], NUM_FILTERS, (FILTER_SIZE, 1))
net['conv2/5x1'] = lasagne.layers.Conv2DLayer(net['conv1/5x1'], NUM_FILTERS, (FILTER_SIZE, 1))
net['conv3/5x1'] = lasagne.layers.Conv2DLayer(net['conv2/5x1'], NUM_FILTERS, (FILTER_SIZE, 1))
net['conv4/5x1'] = lasagne.layers.Conv2DLayer(net['conv3/5x1'], NUM_FILTERS, (FILTER_SIZE, 1))
net['shuff'] = lasagne.layers.DimshuffleLayer(net['conv4/5x1'], (0, 2, 1, 3))
net['lstm1'] = lasagne.layers.LSTMLayer(net['shuff'], NUM_UNITS_LSTM)
net['lstm2'] = lasagne.layers.LSTMLayer(net['lstm1'], NUM_UNITS_LSTM)
# In order to connect a recurrent layer to a dense layer, it is necessary to flatten the first two dimensions
# to cause each time step of each sequence to be processed independently (see Lasagne docs for further information)
net['shp1'] = lasagne.layers.ReshapeLayer(net['lstm2'], (-1, NUM_UNITS_LSTM))
net['prob'] = lasagne.layers.DenseLayer(net['shp1'], NUM_CLASSES, nonlinearity=lasagne.nonlinearities.softmax)
# Tensors reshaped back to the original shape
net['shp2'] = lasagne.layers.ReshapeLayer(net['prob'], (BATCH_SIZE, FINAL_SEQUENCE_LENGTH, NUM_CLASSES))
# Last sample in the sequence is considered
net['output'] = lasagne.layers.SliceLayer(net['shp2'], -1, 1)

**Problem**

I believe that all of the lines I converted are correct, except the last two lines of the Lasagne code, which are as follows:

Python

# Tensors reshaped back to the original shape
net['shp2'] = lasagne.layers.ReshapeLayer(net['prob'], (BATCH_SIZE, FINAL_SEQUENCE_LENGTH, NUM_CLASSES))
# Last sample in the sequence is considered
net['output'] = lasagne.layers.SliceLayer(net['shp2'], -1, 1)
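For context, these two layers only undo the earlier flattening and pick one time step: `ReshapeLayer` turns the flat `(batch * time, classes)` softmax output back into `(batch, time, classes)`, and `SliceLayer(..., -1, 1)` keeps the last time step along axis 1. A minimal NumPy sketch of the same tensor manipulation (the constant values here are assumed for illustration, taken from the shape comments elsewhere in the post):

```python
import numpy as np

BATCH_SIZE = 4
FINAL_SEQUENCE_LENGTH = 14   # time steps remaining after the conv stack
NUM_CLASSES = 12

# 'prob' output: one softmax row per (sample, time step), flattened together
prob = np.random.rand(BATCH_SIZE * FINAL_SEQUENCE_LENGTH, NUM_CLASSES)

# ReshapeLayer: back to (batch, time, classes)
shp2 = prob.reshape(BATCH_SIZE, FINAL_SEQUENCE_LENGTH, NUM_CLASSES)

# SliceLayer(indices=-1, axis=1): keep only the last time step
output = shp2[:, -1, :]
print(output.shape)  # (4, 12)
```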

I'm not sure how to convert these two lines to TensorFlow Keras after my **output_layer**, or what they are for.

**What I have tried:**

I came up with this in TensorFlow Keras:

Python

def CNN_model(input_shape, total_classes):
    # input_shape = (1, 30, 52), total_classes = 12
    input_layer = tf.keras.Input(shape=input_shape, name="Time_Series_Activity")
    # shape=(None, 1, 30, 52)
    con_l1 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu", data_format='channels_first')(input_layer)
    # shape=(None, 64, 26, 52)
    con_l2 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu", data_format='channels_first')(con_l1)
    # shape=(None, 64, 22, 52)
    con_l3 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu", data_format='channels_first')(con_l2)
    # shape=(None, 64, 18, 52)
    con_l4 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu", data_format='channels_first')(con_l3)
    # shape=(None, 64, 14, 52)
    permute_layer = tf.keras.layers.Permute((2, 1, 3))(con_l4)
    # shape=(None, 14, 64, 52)
    rl = tf.keras.layers.Reshape((int(permute_layer.shape[1]), int(permute_layer.shape[2]) * int(permute_layer.shape[3])))(permute_layer)
    # shape=(None, 14, 3328)
    lstm_l5 = tf.keras.layers.LSTM(128, return_sequences=True, dropout=0.5)(rl)
    # shape=(None, 14, 128)
    lstm_l6 = tf.keras.layers.LSTM(128, dropout=0.5)(lstm_l5)
    # shape=(None, 128)
    output_layer = tf.keras.layers.Dense(total_classes, activation="softmax")(lstm_l6)
    # shape=(None, 12)
    return tf.keras.models.Model(inputs=input_layer, outputs=output_layer)
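One observation that may explain why no direct counterpart of the last two Lasagne lines appears here: the second Keras LSTM is built with `return_sequences=False`, so it already emits only the last time step, and the softmax Dense layer is applied once per sequence. The Lasagne network instead applies the Dense softmax to every time step and then slices out the last one. Because the per-step Dense and the slice commute, both routes give the same result when the same weights are used. A NumPy sketch of that equivalence (all shapes and the shared weight matrices are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
BATCH, TIME, UNITS, CLASSES = 4, 14, 128, 12

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

lstm_out = rng.normal(size=(BATCH, TIME, UNITS))  # sequence output of the 2nd LSTM
W = rng.normal(size=(UNITS, CLASSES))             # shared dense weights
b = rng.normal(size=CLASSES)

# Lasagne route: dense softmax on every time step, then slice the last step
per_step = softmax(lstm_out @ W + b)   # (4, 14, 12)
lasagne_out = per_step[:, -1, :]

# Keras route: return_sequences=False keeps only the last step, then dense softmax
keras_out = softmax(lstm_out[:, -1, :] @ W + b)

print(np.allclose(lasagne_out, keras_out))  # True
```

So, structurally, the existing Keras model already covers what `ReshapeLayer` plus `SliceLayer` do; a mismatch with the paper's numbers is more likely to come from training details (dropout placement, initialization, optimizer settings) than from these two layers.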

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
