
Advanced Topics in Machine Learning - CST396 KTU CS Sixth Semester Honours Notes - Dr Binu V P 9847390760

Syllabus

Module-1 (Supervised Learning): Overview of Machine Learning; Supervised Learning, Regression; Naive Bayes Classifier; Decision Trees - ID3; Discriminative and Generative Learning Algorithms
Module-2 (Unsupervised Learning): Similarity Measures; Clustering - K-Means, EM Clustering; Hierarchical Clustering; K-Medoids Clustering
Module-3 (Practical Aspects in Machine Learning): Classification Performance Measures; Cross Validation; Bias-Variance; Bagging, Boosting, AdaBoost
Module-4 (Statistical Learning Theory): PAC (Probably Approximately Correct) Learning; Vapnik-Chervonenkis (VC) Dimension
Module-5 (Advanced Machine Learning Topics): Graphical Models - Bayesian Belief Networks, Markov Random Fields (MRFs); Inference in Graphical Models - Inference on Chains, Trees and Factor Graphs; Sampling Methods; Auto Encoders and Variational Auto Encoders (VAE); GAN

Auto Encoder, Variational Auto Encoder

An autoencoder is an unsupervised artificial neural network that learns to efficiently compress and encode data, and then to reconstruct the data from the reduced encoded representation so that the result is as close to the original input as possible. By design, an autoencoder reduces the data's dimensionality by learning to ignore noise in the data. An autoencoder is a neural network made up of two parts:

• An encoder network that compresses high-dimensional input data into a lower-dimensional representation vector
• A decoder network that decompresses a given representation vector back to the original domain

The network is trained to find weights for the encoder and decoder that minimize the loss between the original input and its reconstruction after it has passed through the encoder and decoder. The representation vector is a compression of the original image into a lower-dimensional latent space. The idea is that by choosing any point in the latent space, we should be able to generate novel images by passing that point through the decoder, since the decoder has learned to map latent points back to viable images.
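To make the encoder-decoder structure concrete, here is a minimal sketch of an autoencoder in PyTorch. The layer sizes, the latent dimension of 32, and the MSE reconstruction loss are illustrative assumptions for this sketch, not details from the post:

```python
# Minimal autoencoder sketch (assumed sizes: 784 -> 32 -> 784).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compresses the input into a lower-dimensional latent vector
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstructs the input from the latent vector
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # representation vector in latent space
        return self.decoder(z)   # reconstruction of the input

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()           # reconstruction loss

x = torch.rand(64, 784)          # stand-in batch of flattened images
optimizer.zero_grad()
x_hat = model(x)
loss = loss_fn(x_hat, x)         # minimize input-vs-reconstruction error
loss.backward()
optimizer.step()
```

Training repeats this step over the dataset; the trained encoder then yields the representation vectors discussed above.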

Sampling Method

In this section, we consider some simple strategies for generating random samples from a given distribution. Because the samples will be generated by a computer algorithm, they will in fact be pseudo-random numbers: they are deterministically calculated, but must nevertheless pass appropriate tests for randomness. Here we shall assume that an algorithm has been provided that generates pseudo-random numbers distributed uniformly over $(0, 1)$, and indeed most software environments have such a facility built in.

Standard distributions

We first consider how to generate random numbers from simple nonuniform distributions, assuming that we already have available a source of uniformly distributed random numbers. Suppose that $z$ is uniformly distributed over the interval $(0, 1)$, and that we transform the values of $z$ using some function $f(\cdot)$ so that $y = f(z)$. The distribution of $y$ will be governed by $p(y) = p(z) \left| \frac{dz}{dy} \right|$ where, in this case, $p(z) = 1$. Choosing $f$ to be the inverse of the cumulative distribution function of the desired distribution $p(y)$ therefore yields samples from that distribution.
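As a concrete illustration of this transformation method, the sketch below draws uniform samples and pushes them through the inverse of the exponential CDF. The choice of the exponential distribution and the rate parameter lam are assumptions made for the example:

```python
# Inverse-CDF (transformation) sampling sketch: uniform -> exponential.
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, size=100_000)  # pseudo-random numbers on (0, 1)

lam = 2.0
# Inverse of the exponential CDF F(y) = 1 - exp(-lam * y):
y = -np.log(1.0 - z) / lam

# Sanity check: sample mean should be close to the theoretical mean 1/lam = 0.5
print(y.mean())
```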

Inference in Graphical Models - Inference on Chains, Trees and Factor Graphs

We turn now to the problem of inference in graphical models, in which some of the nodes in a graph are clamped to observed values, and we wish to compute the posterior distributions of one or more subsets of other nodes. As we shall see, we can exploit the graphical structure both to find efficient algorithms for inference and to make the structure of those algorithms transparent. Specifically, we shall see that many algorithms can be expressed in terms of the propagation of local messages around the graph. To start with, let us consider the graphical interpretation of Bayes' theorem. Suppose we decompose the joint distribution $p(x, y)$ over two variables $x$ and $y$ into a product of factors in the form $p(x, y) = p(x)p(y|x)$. This can be represented by the directed graph shown in Figure 8.37(a). Now suppose we observe the value of $y$, as indicated by the shaded node in Figure 8.37(b). We can view the marginal distribution $p(x)$ as a prior over the latent variable $x$, and our goal is to infer the corresponding posterior distribution over $x$, which by Bayes' theorem is $p(x|y) = \frac{p(y|x)\,p(x)}{p(y)}$.
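The sketch below works through this two-node example numerically: a binary $x$ and $y$, with $y$ clamped to an observed value and the posterior over $x$ computed from Bayes' theorem. The probability tables are made-up illustrative numbers:

```python
# Posterior inference on the two-node graph x -> y with y observed.
import numpy as np

p_x = np.array([0.7, 0.3])           # prior p(x)
p_y_given_x = np.array([[0.9, 0.1],  # p(y|x): rows index x, columns index y
                        [0.4, 0.6]])

y_obs = 1                            # the observed (clamped) value of y

# Bayes' theorem: p(x|y) = p(x) p(y|x) / p(y), with p(y) = sum_x p(x) p(y|x)
joint = p_x * p_y_given_x[:, y_obs]  # unnormalized posterior over x
posterior = joint / joint.sum()
print(posterior)                     # [0.28, 0.72] for these tables
```

On longer chains, the same sum-and-normalize pattern becomes the forward and backward messages propagated along the graph.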

Generative Adversarial Network (GAN)

A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in June 2014. Two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning. A GAN is a battle between two adversaries, the generator and the discriminator. The generator tries to convert random noise into observations that look as if they have been sampled from the original dataset, and the discriminator tries to predict whether an observation comes from the original dataset or is one of the generator's forgeries.
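To make the adversarial setup concrete, here is a minimal sketch of one GAN training step in PyTorch. The network sizes, optimizer settings, and stand-in data are illustrative assumptions, not the design from the post:

```python
# One GAN training step: update discriminator, then generator.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

generator = nn.Sequential(           # noise -> fake observation
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(       # observation -> probability "real"
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, data_dim)     # stand-in batch of real observations
noise = torch.randn(64, latent_dim)

# Discriminator step: label real observations 1, generated ones 0
fake = generator(noise).detach()     # detach so only D is updated here
d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
         bce(discriminator(fake), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool D into outputting 1 for fakes
fake = generator(noise)
g_loss = bce(discriminator(fake), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Alternating these two updates is the zero-sum game described above: the discriminator's loss falls when the generator's rises, and vice versa.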

Graphical Models - Bayesian Belief Networks, Markov Random Fields (MRFs)

Probabilities play a central role in modern pattern recognition. It is highly advantageous to augment the analysis using diagrammatic representations of probability distributions, called probabilistic graphical models. These offer several useful properties:

1. They provide a simple way to visualize the structure of a probabilistic model and can be used to design and motivate new models.
2. Insights into the properties of the model, including conditional independence properties, can be obtained by inspection of the graph.
3. Complex computations, required to perform inference and learning in sophisticated models, can be expressed in terms of graphical manipulations, in which the underlying mathematical expressions are carried along implicitly.

A graph comprises nodes (also called vertices) connected by links (also known as edges or arcs). In a probabilistic graphical model, each node represents a random variable (or group of random variables), and the links express probabilistic relationships between these variables.
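As a small concrete example of such a factorization, the sketch below encodes a three-node directed graph (a rain/sprinkler/wet-grass style model) and performs inference by enumeration. All probability tables are made-up illustrative numbers:

```python
# Joint factorized by the directed graph: p(r, s, w) = p(r) p(s|r) p(w|r, s)
import itertools

p_rain = {0: 0.8, 1: 0.2}                              # p(rain)
p_sprinkler_given_rain = {0: {0: 0.6, 1: 0.4},         # p(sprinkler|rain)
                          1: {0: 0.99, 1: 0.01}}
p_wet_given = {(0, 0): 0.0, (0, 1): 0.9,               # (rain, sprinkler) -> p(wet=1)
               (1, 0): 0.8, (1, 1): 0.99}

def joint(r, s, w):
    # Product of the local conditional distributions read off the graph
    pw = p_wet_given[(r, s)]
    return p_rain[r] * p_sprinkler_given_rain[r][s] * (pw if w else 1 - pw)

# Inference by enumeration: p(rain=1 | wet=1)
num = sum(joint(1, s, 1) for s in (0, 1))
den = sum(joint(r, s, 1) for r, s in itertools.product((0, 1), repeat=2))
print(num / den)
```

Enumeration like this is exponential in the number of variables; the efficient message-passing algorithms discussed in the inference post exploit the graph structure to avoid it.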