Week 3 — Topify

Mehmet Ali Korkmaz
BBM406 Spring 2021 Projects
May 2, 2021


This week, we made adjustments to our data, such as increasing the number of top tracks and normal tracks. We now have 5255 top tracks and 20558 normal tracks.

We used this data to train a TensorFlow Keras Sequential model and tested our first model. We split the data into three portions: Train, Validation, and Test.
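A minimal sketch of such a shuffled 80/10/10 split (the function and argument names are our own placeholders, not the project's actual code):

```python
import numpy as np

def split_data(X, y, train=0.8, val=0.1, seed=42):
    """Shuffle and split arrays into train/validation/test portions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(len(X) * train)
    n_val = int(len(X) * val)
    tr, va, te = (idx[:n_train],
                  idx[n_train:n_train + n_val],
                  idx[n_train + n_val:])
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```

The remaining 10% after the train and validation portions automatically becomes the test set.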

You can see the model code below:

[Figure: TensorFlow Keras Sequential model code]
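Since the original code was embedded as an image, here is a sketch of what a comparable Keras Sequential model could look like, using the first configuration described below (9 inputs, 3 hidden ReLU layers of 50 neurons, a 2-neuron softmax output); the optimizer and loss function are our assumptions:

```python
import tensorflow as tf

# 9 audio-feature inputs, 3 hidden ReLU layers of 50 neurons each,
# and a 2-neuron softmax output (top track vs. normal track).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(9,)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# The optimizer and loss are assumptions, not the original settings.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```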

We used ReLU as the activation function in our hidden layers and Softmax in the output layer. We tried a variety of hyperparameters to verify that our model was working properly.

After trying our model with different parameter values, we plotted the training and validation loss with respect to epochs.
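These curves can be drawn from the `History` object that Keras' `model.fit` returns; a sketch with placeholder loss values (in practice, `history` comes from `model.fit(...).history`):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for saving to file
import matplotlib.pyplot as plt

# Placeholder values for illustration; real values come from model.fit.
history = {
    "loss":     [0.60, 0.52, 0.48, 0.46, 0.45],
    "val_loss": [0.58, 0.53, 0.50, 0.49, 0.49],
}

epochs = range(1, len(history["loss"]) + 1)
fig, ax = plt.subplots()
ax.plot(epochs, history["loss"], label="Training loss")
ax.plot(epochs, history["val_loss"], label="Validation loss")
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax.legend()
fig.savefig("loss_curves.png")
```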

In the left plot, we have 3 hidden layers with 50 neurons each, with 9 input and 2 output neurons, and a split of 80% Train, 10% Validation, and 10% Test. With a batch size of 300, the validation loss increases irregularly while the training loss decreases along a smooth curve. Since the validation loss is greater than the training loss, the model has an overfitting problem. This model reached 78.20% test accuracy.

In the right plot, we have 2 hidden layers with 9 neurons each, with 9 input and 2 output neurons, and the same split as the left plot. This time, we set the batch size to 32 to see the difference in the loss graphs. The training loss curve flattens after 10 epochs, and the validation loss fluctuates between 0.46 and 0.47. This model reached 79.65% test accuracy.

In the left plot, we have 2 hidden layers with 5 neurons each, with 9 input and 2 output neurons, and a split of 80% Train, 10% Validation, and 10% Test. With a batch size of 32, both the validation and training loss flatten after 7–8 epochs, and the loss curves become less noisy because the model's complexity has decreased. This model reached 78.98% test accuracy.

In the right plot, we have the same number of neurons and the same batch size as the left plot, but this time a split of 60% Train, 20% Validation, and 20% Test. Both the training and validation loss curves flatten after 10 epochs. This model reached 80.12% test accuracy.

Related Paper:
