Thursday, April 26, 2018

Weight Initialization of Convolutional Neural Networks

At first glance, weight initialization looks like a minor step compared with the other design choices in a CNN, but in fact it can have a huge impact on the convergence rate and final quality of the network.
A common practice is to choose initial weights with magnitudes as small as possible, near zero but not exactly zero.
There are many ways to initialize weights:

1. Initialization with zeros
 Initialization with zeros results in no learning at all: every unit in a layer computes the same output and receives the same gradient, so the weights never differentiate. The left-most figure in the diagram below shows training loss versus epochs for a simple CNN trained on the MNIST dataset, with a batch size of 128 images and 12 epochs.
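
This symmetry problem is easy to see in a small sketch (my own illustration, not the post's exact MNIST experiment): with all-zero weights, a one-hidden-layer network produces zero activations and zero gradients everywhere, so no learning signal ever reaches the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 features
W1 = np.zeros((3, 5))                # zero-initialized hidden weights
W2 = np.zeros((5, 2))                # zero-initialized output weights

h = np.maximum(0, x @ W1)            # ReLU hidden layer -> all zeros
y = h @ W2                           # output -> all zeros

# Backprop with a dummy gradient of ones at the output:
grad_y = np.ones_like(y)
grad_W2 = h.T @ grad_y               # zero, because h is zero
grad_h = grad_y @ W2.T               # zero, because W2 is zero
grad_W1 = x.T @ (grad_h * (h > 0))   # zero as well

print(np.all(grad_W1 == 0) and np.all(grad_W2 == 0))  # True: no learning signal
```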
2. Initialization with ones



3. Initialization with constants
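
 A short sketch covering both of the constant schemes above (all-ones is just a constant initializer with value 1). Starting every weight from the same constant leaves the symmetry unbroken: each unit in a layer computes the same function and receives an identical gradient update. The values below are illustrative.

```python
import numpy as np

fan_in, fan_out = 3, 5
W_ones = np.ones((fan_in, fan_out))        # all-ones initializer
W_const = np.full((fan_in, fan_out), 0.1)  # constant initializer with value 0.1

x = np.array([[1.0, 2.0, 3.0]])
h = x @ W_const
print(h)  # every hidden unit gets the same pre-activation: [[0.6 0.6 0.6 0.6 0.6]]
```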



4. Initialization with Normal Random Values (mean, stddev, seed)
 Initialization with values drawn from a normal (Gaussian) distribution with the given mean and standard deviation.

 Figure: training losses for the different initializations (https://intoli.com/blog/neural-network-initialization/img/training-losses.png)
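A NumPy sketch of what a normal initializer with (mean, stddev, seed) parameters does; the concrete values here are illustrative, not from the post.

```python
import numpy as np

mean, stddev, seed = 0.0, 0.05, 42
rng = np.random.default_rng(seed)
W = rng.normal(loc=mean, scale=stddev, size=(1000, 1000))

# For a large layer, the sample statistics land close to the requested ones:
print(round(float(W.mean()), 3), round(float(W.std()), 3))
```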
5. Initialization with Uniform Random Values(minval, maxval, seed)
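
 A corresponding sketch for a uniform initializer with (minval, maxval, seed) parameters; the bounds chosen below are illustrative.

```python
import numpy as np

minval, maxval, seed = -0.05, 0.05, 7
rng = np.random.default_rng(seed)
W = rng.uniform(low=minval, high=maxval, size=(3, 5))

print(bool(W.min() >= minval and W.max() < maxval))  # True: all values inside the range
```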



6. Initialization with Truncated Normal Random Values(mean, stddev, seed)
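
 A sketch of truncated normal initialization: draw from a normal distribution and redraw any sample farther than two standard deviations from the mean, which is the convention used by common deep-learning initializers. Parameter values are illustrative.

```python
import numpy as np

def truncated_normal(mean, stddev, size, seed):
    rng = np.random.default_rng(seed)
    w = rng.normal(mean, stddev, size)
    while True:
        bad = np.abs(w - mean) > 2 * stddev   # samples outside +/- 2 stddev
        if not bad.any():
            return w
        w[bad] = rng.normal(mean, stddev, bad.sum())  # redraw the outliers

W = truncated_normal(0.0, 0.05, (3, 5), seed=0)
print(bool(np.abs(W).max() <= 0.1))  # True: nothing beyond two standard deviations
```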



7. Initialization with Variance Scaled Values (scale, mode, distribution, seed)
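
 A sketch of variance scaling: the standard deviation is derived from the layer's fan-in and/or fan-out, mirroring the (scale, mode, distribution) parameters above. With scale=2.0 and mode="fan_in" this reduces to He initialization; scale=1.0 with mode="fan_avg" gives Glorot/Xavier. The concrete values below are illustrative.

```python
import numpy as np

def variance_scaling(shape, scale=2.0, mode="fan_in", seed=0):
    fan_in, fan_out = shape
    n = {"fan_in": fan_in,
         "fan_out": fan_out,
         "fan_avg": (fan_in + fan_out) / 2}[mode]
    stddev = np.sqrt(scale / n)               # spread shrinks as the layer widens
    return np.random.default_rng(seed).normal(0.0, stddev, shape)

W = variance_scaling((256, 128), scale=2.0, mode="fan_in")
print(round(float(W.std()), 3))  # close to sqrt(2/256), about 0.088
```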


Wednesday, April 25, 2018

Siamese Neural Network

It is a class of neural network architectures that contains two or more identical sub-networks.
Identical means the sub-networks have the same configuration and share the same parameters and weights.
These networks are applied where we need to find the similarity between two inputs, such as paraphrase scoring.
In paraphrase scoring we have to find the similarity between two text sentences.
Similarly, we can match features between two images of the same scene taken from different viewpoints, either with the same camera or with two different cameras.
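
The idea above can be sketched in a few lines (my own illustration, not from the post): a Siamese architecture is really one embedding function with one set of weights, applied to both inputs, with the similarity computed between the two embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.05, size=(10, 4))   # the SHARED weights of the twin sub-networks

def embed(x):
    return np.maximum(0, x @ W)          # the identical sub-network, applied to each input

def similarity(x1, x2):
    e1, e2 = embed(x1), embed(x2)
    return -np.linalg.norm(e1 - e2)      # higher (closer to 0) means more similar

a = rng.normal(size=10)
print(similarity(a, a) == 0.0)           # True: identical inputs are maximally similar
```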

End-To-End Learning

End-to-end learning in computer vision can be defined as a method of learning in which the output is learned directly from the given data, omitting handcrafted intermediate features and algorithms.