Dimensional Reduction Work Group

    Please download the toolbox and PDF from here. While by no means “

    Introduction

    Manifold learning is a significant problem across a wide variety of information processing fields including pattern recognition, data compression, machine learning, and database navigation. In many problems, the measured data vectors are high-dimensional but we may have reason to believe that the data lie near a lower-dimensional manifold. In other words, we may believe that high-dimensional data are multiple, indirect measurements of an underlying source, which typically cannot be directly measured. Learning a suitable low-dimensional manifold from high-dimensional data is essentially the same as learning this underlying source. Dimensionality reduction can also be seen as the process of deriving a set of degrees of freedom which can be used to reproduce most of the variability of a data set. Consider a set of images produced by the rotation of a face through different angles. Clearly only one degree of freedom is being altered, and thus the images lie along a continuous one-dimensional curve through image space. Figure 1 shows an example of image data that exhibits one intrinsic dimension.

    Figure 1: A canonical dimensionality reduction problem from visual perception. The input consists of a sequence of 4096-dimensional vectors, representing the brightness values of 64-by-64-pixel images of a face, for N = 698 raw images. The first coordinate axis of the embedding correlates highly with one of the degrees of freedom underlying the original data: left-right pose.

    Manifold learning techniques can be used in different ways including:

    • Data dimensionality reduction: Produce a compact low-dimensional encoding of a given high-dimensional data set.

    • Data visualization: Provide an interpretation of a given data set in terms of its intrinsic degrees of freedom, usually as a by-product of data dimensionality reduction.

    • Preprocessing for supervised learning: Simplify, reduce, and clean the data for subsequent supervised training.

    Many algorithms for dimensionality reduction have been developed to accomplish these tasks. However, since the need for such analysis arises in many areas of study, contributions to the field have come from many disciplines. While all of these methods have a similar goal, approaches to the problem are different. Principal components analysis (PCA) is a classical method that provides a sequence of best linear approximations to a given high-dimensional observation. It is one of the most popular techniques for dimensionality reduction. However, its effectiveness is limited by its global linearity. Multidimensional scaling (MDS), which is closely related to PCA, suffers from the same drawback. Factor analysis and independent component analysis (ICA) also assume that the underlying manifold is a linear subspace. However, they differ from PCA in the way they identify and model the subspace. The subspace modeled by PCA captures the maximum variability in the data, and can be viewed as modeling the covariance structure of the data, whereas factor analysis models the correlation structure. ICA starts from a factor analysis solution and searches for rotations that lead to independent components.

    The main drawback of all these classical dimensionality reduction approaches is that they only characterize linear subspaces (manifolds) in the data. In order to resolve the problem of dimensionality reduction in nonlinear cases, many recent techniques, including kernel PCA, locally linear embedding (LLE), Laplacian eigenmaps (LEM), Isomap, and semidefinite embedding (SDE), have been proposed.
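    To make the linear case concrete, the snippet below is a minimal MATLAB sketch of PCA computed via the singular value decomposition. The variables X (an N-by-D data matrix; for the faces data below this would be 698-by-4096) and d (the target dimensionality) are placeholders assumed for illustration and are not defined elsewhere in this document.

    % Minimal PCA sketch (illustrative): project centered data onto its
    % top-d principal directions. X is an assumed N-by-D data matrix.
    Xc = bsxfun(@minus, X, mean(X,1));   % center each column (dimension) of X
    [~, S, V] = svd(Xc, 'econ');         % economy-size SVD of the centered data
    d = 2;                               % assumed target dimensionality
    Y = Xc * V(:,1:d);                   % N-by-d PCA embedding (principal scores)
    vars = diag(S).^2 / (size(X,1)-1);   % variance captured along each principal direction

    The nonlinear methods listed above (LLE, Laplacian eigenmaps, Isomap, SDE) replace this single global linear projection with embeddings built from local neighborhood relationships.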

    Faces Data Set

    The faces data set is available in our shared folder in .mat format.

    %% Visualizing Face Data

    load face_data.mat          % the file provides 'images', used below as a 4096 x 698 matrix (one image per column)

    IND = 1:4096;               % linear indices of all pixels in one image
    s = [64,64];                % each image is 64 x 64 pixels
    [I,J] = ind2sub(s,IND);     % row/column subscripts for each linear index

    NN = 698;                   % number of images

    % Rearrange each 4096-element column of 'images' into a 64 x 64 image,
    % stored as imgs(jj,:,:). The array is called 'imgs' rather than 'image'
    % so that it does not shadow MATLAB's built-in image function.
    imgs = zeros(NN,s(1),s(2));
    for jj = 1:NN
        tmp = images(:,jj);
        for ii = 1:length(IND)
            imgs(jj,I(ii),J(ii)) = tmp(IND(ii));
        end
        clear tmp
    end

    kk = 1;                             % index of the image to plot
    imshow(squeeze(imgs(kk,:,:)),[])    % display the kk-th image

    If you wish to visualize the kk-th of the 698 images, set kk accordingly and rerun the last line. Read through the above lines of code to make sure you understand what the data set actually contains.
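    For quickly viewing a single face without building the full imgs array, the same picture should be obtainable by reshaping one column of the raw matrix directly, since MATLAB's column-major reshape matches the ind2sub subscripts used in the loop above:

    kk = 1;                              % index of the image to display
    imshow(reshape(images(:,kk),s),[])   % same result as imshow(squeeze(imgs(kk,:,:)),[])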