On the equivalence of maximizing entropy and mutual information
This study is conducted in the context of unsupervised training of two-layer neural networks, using concepts from information theory to drive the training. The two criteria addressed are: i) maximizing the entropy of the outputs (MaxEnt) and ii) maximizing the mutual information between the inputs and the outputs (MaxMI). The research question pursued is: “are these two approaches equivalent?” Based on the existing literature, it can be concluded that the two approaches are theoretically equivalent provided the system is noiseless.
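The equivalence rests on the standard decomposition of mutual information into entropy terms. A minimal sketch of the argument, assuming a deterministic (noiseless) network written here for illustration as $Y = g(X; W)$ with parameters $W$:

```latex
% Sketch of the infomax argument in the noiseless case.
% Y = g(X; W) is a deterministic network; W denotes its parameters
% (notation assumed for this illustration, not taken from the source).
\begin{align}
  I(X;Y) &= H(Y) - H(Y \mid X).
\end{align}
% When the mapping from X to Y is deterministic and noise-free,
% the conditional entropy H(Y | X) does not depend on W, so its
% gradient with respect to the parameters vanishes:
\begin{align}
  \frac{\partial I(X;Y)}{\partial W} &= \frac{\partial H(Y)}{\partial W}.
\end{align}
% Hence MaxMI and MaxEnt produce identical gradients and share the
% same optima; the two training criteria coincide in this regime.
```

In the presence of noise, $H(Y \mid X)$ generally depends on the parameters, and the two criteria can diverge, which is why the noiseless assumption is essential to the conclusion above.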