Week 6 — Topify

Mehmet Ali Korkmaz
BBM406 Spring 2021 Projects
3 min read · May 23, 2021


Siamese Twins

We could not get the results we wanted from the Few-Shot Learning approach we tried last week, so we turned to Siamese Neural Networks.

First of all, we used the Synthetic Minority Oversampling Technique (SMOTE), an oversampling method designed for imbalanced datasets. Before giving up on the plain MLP model, we obtained the following results using SMOTE:

Predictions without SMOTE (left) vs. with SMOTE (right)

Looking at the left figure, we can see that the imbalanced data causes our model to predict every sample as a normal track. As a result, the accuracy score is high but the F1 score is very low: a model that labels everything as the majority class scores well on accuracy while barely detecting the minority class at all. After oversampling and balancing the data, accuracy drops below that of the left graph, but the F1 score is far higher. In other words, the predictions are now genuinely more reliable, not just superficially accurate.

Without SMOTE: Accuracy 78.776%, F1 Score 0.494%

With SMOTE: Accuracy 68.385%, F1 Score 72.849%
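
For context, here is a minimal sketch of how SMOTE can be applied before training an MLP. The synthetic dataset, imblearn's SMOTE, scikit-learn's MLPClassifier, and the layer sizes are stand-ins for illustration, not our exact features or model:

```python
# Hedged sketch: oversample the minority class with SMOTE, then train an MLP.
# The synthetic dataset below is a stand-in for our real features and labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score
from imblearn.over_sampling import SMOTE

# Imbalanced toy data: ~90% "normal" tracks, ~10% minority class.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Oversample only the training split so the test set stays untouched.
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

mlp = MLPClassifier(hidden_layer_sizes=(128, 128, 128), activation="relu",
                    max_iter=300, random_state=42)
mlp.fit(X_train_bal, y_train_bal)

y_pred = mlp.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("F1 Score:", f1_score(y_test, y_pred))
```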

Siamese Networks

A Siamese network is made of two artificial neural networks that share the same weights, each of which learns a hidden representation of an input vector. Both sub-networks are feedforward perceptrons trained with error back-propagation. They run in parallel on the two inputs and compare their outputs at the end, often using a cosine distance.
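
A minimal PyTorch sketch of this idea, assuming tabular track features as input; the layer sizes mirror the 3 hidden layers of 128 ReLU units described below, while the input size and names are placeholders:

```python
# Hedged sketch of a Siamese setup: one shared-weight encoder maps both
# inputs to embeddings, which are then compared with cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, in_features=30, hidden=128):
        super().__init__()
        # Shared encoder: 3 hidden layers with 128 nodes and ReLU activations.
        self.encoder = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x1, x2):
        # The same encoder (same weights) processes both inputs in parallel.
        z1 = self.encoder(x1)
        z2 = self.encoder(x2)
        # Compare the two hidden representations with cosine similarity;
        # values near 1 mean the pair is predicted to be alike.
        return F.cosine_similarity(z1, z2, dim=1)

model = SiameseNet()
a, b = torch.randn(4, 30), torch.randn(4, 30)
print(model(a, b))
```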

After implementing our Siamese network model with ReLU activations and 3 hidden layers of 128 nodes each, we got some tremendous results. You can see the charts below:

Imbalanced data (left) vs. balanced data (right)

Using the same hyperparameters, we then tried SMOTE on our Siamese model. As a result, the loss values decreased more than before. In the end, we reached 97% accuracy with a very high F1 score. You can also find the confusion matrix of these predictions below.
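
For reference, these metrics and the confusion matrix can be computed with scikit-learn as in the small sketch below; the labels here are purely illustrative, not our actual predictions:

```python
# Illustrative only: compute accuracy, F1, and the confusion matrix
# for a set of true labels and model predictions.
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

y_true = [0, 0, 0, 1, 1, 1, 0, 1]   # placeholder ground-truth labels
y_pred = [0, 0, 0, 1, 0, 1, 0, 1]   # placeholder model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```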
