Collection of statistical hypothesis tests

This post is a collection of hypothesis-testing methodologies. The full collection is listed here: http://www.biostathandbook.com/testchoice.html. My post just goes over several hypothesis tests that are relevant to my research. One-way ANOVA: http://www.biostathandbook.com/onewayanova.html If you have one measurement variable and one nominal variable, and the nominal variable separates subjects into multiple groups, you want to test …
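As a quick illustration (not from the original post), here is a minimal sketch of a one-way ANOVA in SciPy on three hypothetical groups; the group names and data are made up for the example.

```python
# Minimal one-way ANOVA sketch: one measurement variable (the values) and
# one nominal variable (the group each subject belongs to).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)   # measurements for group A
group_b = rng.normal(loc=5.5, scale=1.0, size=30)   # measurements for group B
group_c = rng.normal(loc=6.0, scale=1.0, size=30)   # measurements for group C

# Null hypothesis: all group means are equal.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```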

Replicating the vanishing and exploding gradient problems in Recurrent Neural Networks

I’ve talked about the vanishing gradient problem in an old post on normal multi-layer neural networks. Pascanu et al. (the first entry in the References below) specifically discussed the vanishing gradient problem, as well as another type of gradient instability, the exploding gradient problem, in the scope of recurrent neural networks. Let’s recap the …
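A minimal sketch of the idea (my own toy example, not the post’s code): for a linear RNN h_t = W h_{t-1}, back-propagation through time multiplies the gradient by W^T at every step, so its norm shrinks or blows up depending on the scale of W.

```python
import numpy as np

def bptt_grad_norm(scale, steps=50, dim=10, seed=0):
    """Back-propagate a gradient through `steps` time steps of a linear RNN
    h_t = W h_{t-1} and return the norm of dL/dh_0."""
    rng = np.random.default_rng(seed)
    W = scale * rng.standard_normal((dim, dim)) / np.sqrt(dim)
    grad = np.ones(dim)          # stand-in for dL/dh_T at the last time step
    for _ in range(steps):
        grad = W.T @ grad        # one step of back-propagation through time
    return np.linalg.norm(grad)

print(bptt_grad_norm(scale=0.5))   # tiny: the gradient vanishes
print(bptt_grad_norm(scale=1.5))   # huge: the gradient explodes
```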

Sparse AutoEncoder

Andrew Ng Tutorial: https://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf UFLDL Exercise: http://ufldl.stanford.edu/wiki/index.php/Exercise:Sparse_Autoencoder A Chinese blog: http://www.cnblogs.com/tornadomeet/archive/2013/03/20/2970724.html Stacked Denoising Autoencoder (paper): http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf

Convolutional Neural Network Simple Tutorial

In this post I am going to demonstrate a simple version of a convolutional neural network, a type of deep learning structure.  Motivation Being familiar with a normal multi-layer neural network (probably one with an input layer, an output layer and one hidden layer) is helpful before you proceed with this post. A good tutorial on MLN can be found …
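As a rough sketch of the core operation a CNN layer performs (assumed for illustration, not taken from the tutorial itself): slide a small filter over the image and apply a nonlinearity to the resulting feature map.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` (no padding, stride 1) and return the feature map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)        # a toy 8x8 "image"
kernel = np.random.randn(3, 3)      # one 3x3 learnable filter
feature_map = np.maximum(conv2d_valid(image, kernel), 0.0)   # ReLU activation
print(feature_map.shape)            # (6, 6)
```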

How does the gradient vanish in a Multi-Layer Neural Network?

Background This post reviews how we update weights using the back-propagation approach in a neural network. The goal of the review is to illustrate a notorious phenomenon in training an MLNN, called the “vanishing gradient”. Start Let’s suppose that we have a very simple NN structure, with only one unit in each hidden layer, input layer …
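A minimal numerical sketch of that chain-of-single-units setup (my own toy example, not the post’s derivation): with sigmoid activations, each backward step multiplies the gradient by sigmoid'(z) * w, and since sigmoid'(z) ≤ 0.25 the magnitude typically shrinks layer by layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(0)
n_layers = 10
weights = np.random.randn(n_layers)   # one scalar weight per single-unit layer

# Forward pass through the chain of single-unit layers.
a = 0.5                               # input value
zs = []
for w in weights:
    z = w * a
    zs.append(z)
    a = sigmoid(z)

# Backward pass: walk from the output back toward the input.
grad = 1.0                            # stand-in for dL/da at the output
for w, z in zip(reversed(weights), reversed(zs)):
    grad *= sigmoid(z) * (1.0 - sigmoid(z)) * w   # chain rule for one layer
    print(f"|gradient| = {abs(grad):.2e}")        # magnitude shrinks fast
```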

Logical deduction in NP-completeness proofs

Maybe it sounds simple to computer scientists, but I just want to back up some of my own logical deductions about NP-completeness proofs in case I deal with such proofs in the future. In an NP-completeness proof, we always want to find a polynomial-time reduction (transformation) from problem A to problem B, denoted as A ≤p B. We say that there …
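For reference, here is my summary of the two standard deduction directions that a polynomial-time reduction gives you (standard textbook facts, not necessarily the post’s exact wording; the LaTeX assumes amsmath for \text):

```latex
\[
  A \le_p B \text{ and } B \in \mathrm{P} \;\Longrightarrow\; A \in \mathrm{P}
  \qquad \text{(easiness flows backward, from } B \text{ to } A\text{)}
\]
\[
  A \le_p B \text{ and } A \text{ is NP-hard} \;\Longrightarrow\; B \text{ is NP-hard}
  \qquad \text{(hardness flows forward, from } A \text{ to } B\text{)}
\]
```

So to prove a new problem B is NP-complete, one shows B ∈ NP and reduces a known NP-complete problem A to B, not the other way around.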