Posts

Showing posts from February, 2023

Advanced Topics in Machine Learning - CST396 KTU CS Sixth Semester Honours Notes - Dr Binu V P 9847390760

About Me
Syllabus
Module-1 (Supervised Learning): Overview of Machine Learning; Supervised Learning, Regression; Naive Bayes Classifier; Decision Trees - ID3; Discriminative and Generative Learning Algorithms
Module-2 (Unsupervised Learning): Similarity Measures; Clustering - K-Means, EM Clustering; Hierarchical Clustering; K-Medoids Clustering
Module-3 (Practical Aspects in Machine Learning): Classification Performance Measures; Cross Validation; Bias-Variance; Bagging, Boosting, AdaBoost
Module-4 (Statistical Learning Theory): PAC (Probably Approximately Correct) Learning; Vapnik-Chervonenkis (VC) Dimension
Module-5 (Advanced Machine Learning Topics): Graphical Models - Bayesian Belief Networks, Markov Random Fields (MRFs); Inference in Graphical Models - Inference on Chains, Trees and Factor Graphs; Sampling Methods; Auto Encoders and Variational Auto Encoders (VAE); GAN

Auto Encoder, Variational Auto Encoder

An autoencoder is an unsupervised artificial neural network that learns to efficiently compress and encode data, and then learns to reconstruct the data from the reduced encoded representation so that the result is as close to the original input as possible. By design, an autoencoder reduces the data dimensionality by learning to ignore noise in the data. An autoencoder is a neural network made up of two parts:
• An encoder network that compresses high-dimensional input data into a lower-dimensional representation vector
• A decoder network that decompresses a given representation vector back to the original domain
The network is trained to find weights for the encoder and decoder that minimize the loss between the original input and the reconstruction of the input after it has passed through the encoder and decoder. The representation vector is a compression of the original image into a lower-dimensional latent space. The idea is that, by choosing any point in this latent space, we should be able to generate novel outputs by passing that point through the decoder.
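As an illustration (not part of the original post), here is a minimal autoencoder sketch in PyTorch, assuming 784-dimensional inputs such as flattened 28x28 images and a 2-dimensional latent space; the encoder compresses each input into a representation vector and the decoder reconstructs it, with training minimizing the reconstruction loss.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        # Encoder: compress the input into a low-dimensional representation vector
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the representation vector
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)        # representation vector in the latent space
        return self.decoder(z)     # reconstruction of the original input

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()             # reconstruction loss

x = torch.rand(64, 784)            # dummy batch standing in for real images
for _ in range(10):
    x_hat = model(x)
    loss = loss_fn(x_hat, x)       # penalise input vs. reconstruction difference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()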

Sampling Method

In this section, we consider some simple strategies for generating random samples from a given distribution. Because the samples will be generated by a computer algorithm, they will in fact be pseudo-random numbers; that is, they will be deterministically calculated, but must nevertheless pass appropriate tests for randomness. Here we shall assume that an algorithm has been provided that generates pseudo-random numbers distributed uniformly over (0, 1), and indeed most software environments have such a facility built in.

Standard distributions

We first consider how to generate random numbers from simple non-uniform distributions, assuming that we already have available a source of uniformly distributed random numbers. Suppose that $z$ is uniformly distributed over the interval $(0, 1)$, and that we transform the values of $z$ using some function $f(\cdot)$ so that $y = f(z)$. The distribution of $y$ will be governed by $p(y) = p(z)\left|\frac{dz}{dy}\right|$ where, in this case, $p(z) = 1$.
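The transformation technique above can be illustrated with a short sketch (my own example, not from the post): drawing $z$ uniformly on (0, 1) and setting $y = -\ln(1 - z)/\lambda$ yields exponentially distributed samples, since $p(y) = p(z)|dz/dy|$ with $p(z) = 1$ gives $p(y) = \lambda e^{-\lambda y}$.

import numpy as np

rng = np.random.default_rng(0)

def sample_exponential(lam, size):
    z = rng.uniform(0.0, 1.0, size)    # pseudo-random numbers uniform on (0, 1)
    return -np.log(1.0 - z) / lam      # transformed samples, y ~ Exponential(lam)

samples = sample_exponential(lam=2.0, size=100_000)
print(samples.mean())                  # close to 1/lam = 0.5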

Inference in Graphical Models-Inference on Chain , Trees and factor Graphs

We turn now to the problem of inference in graphical models, in which some of the nodes in a graph are clamped to observed values, and we wish to compute the posterior distributions of one or more subsets of other nodes. As we shall see, we can exploit the graphical structure both to find efficient algorithms for inference and to make the structure of those algorithms transparent. Specifically, we shall see that many algorithms can be expressed in terms of the propagation of local messages around the graph. To start with, let us consider the graphical interpretation of Bayes' theorem. Suppose we decompose the joint distribution $p(x, y)$ over two variables $x$ and $y$ into a product of factors in the form $p(x, y) = p(x)p(y|x)$. This can be represented by the directed graph shown in Figure 8.37(a). Now suppose we observe the value of $y$, as indicated by the shaded node in Figure 8.37(b). We can view the marginal distribution $p(x)$ as a prior over the latent variable $x$, and our goal is to infer the corresponding posterior distribution over $x$.
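A small numerical sketch of this graphical view of Bayes' theorem (the numbers are illustrative, not from the post): clamping the observed node $y$, multiplying the prior $p(x)$ by the likelihood $p(y|x)$, and normalizing yields the posterior $p(x|y)$.

import numpy as np

p_x = np.array([0.6, 0.4])                # prior p(x) for x in {0, 1}
p_y_given_x = np.array([[0.9, 0.1],       # p(y | x = 0)
                        [0.3, 0.7]])      # p(y | x = 1)

y_observed = 1                            # the observed (shaded) node
joint = p_x * p_y_given_x[:, y_observed]  # p(x) * p(y = y_obs | x)
posterior = joint / joint.sum()           # p(x | y = y_obs)
print(posterior)                          # posterior over the latent variable x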

Generative Adversarial Network (GAN)

A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in June 2014. Two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning. A GAN is a battle between two adversaries, the generator and the discriminator. The generator tries to convert random noise into observations that look as if they have been sampled from the original dataset, and the discriminator tries to predict whether a given observation comes from the original dataset or was produced by the generator.
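Below is a minimal, illustrative GAN training step in PyTorch (a sketch under my own assumptions about architecture and data shape, not the post's implementation): the discriminator is trained to assign high probability to real observations and low probability to generated ones, while the generator is trained to fool the discriminator.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 784

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(64, data_dim)            # dummy batch standing in for real data

# Discriminator step: label real data as 1 and generated data as 0
fake = generator(torch.randn(64, latent_dim)).detach()
d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
         bce(discriminator(fake), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator label generated data as real
fake = generator(torch.randn(64, latent_dim))
g_loss = bce(discriminator(fake), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()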