Daniel Stanley Tan edited untitled.tex  about 8 years ago

Commit id: 766471159f65dac5728bf8ce85115cc5c91dd086

\section{Background}

Visualizing data helps reveal interesting patterns that might not be obvious in other representations. It also aids domain experts in extracting information, generating ideas, and formulating hypotheses from the data, which is why data visualization plays a huge role in the data analytics process. However, visualizing high-dimensional data is challenging because humans can only perceive up to three dimensions. Moreover, traditional techniques are incapable of visualizing huge amounts of data because their processing time grows rapidly as the number of data points increases. This poses a problem because the amount of data being generated in the world is also growing rapidly; in fact, the data generated in the past decade is much larger than all data collected in the past century combined \cite{data2013}. For now, no single algorithm tackles all the problems of handling big data, although many works address specific aspects of it \cite{xu2016exploring}.

Some existing ways of handling high-dimensional data are dimensionality reduction techniques such as Random Projections \cite{bingham2001random,kaski1998dimensionality}, Multidimensional Scaling (MDS) \cite{kruskal1964multidimensional}, and Principal Components Analysis (PCA) \cite{dunteman1989principal}. These algorithms significantly reduce the number of dimensions by mapping the high-dimensional data into a lower-dimensional space. This mapping inevitably loses information, but the algorithms are designed so that useful distances are preserved and the information loss is minimized. For data visualization, the number of dimensions has to be reduced to at most three. The most commonly used dimensionality reduction techniques for visualizing high-dimensional data are Self-Organizing Maps (SOM) \cite{kohonen1990self}, MDS \cite{kruskal1964multidimensional}, and PCA \cite{dunteman1989principal}. All three reduce dimensions while preserving certain properties: local neighborhood relations for SOM, inter-point distances for MDS, and data variance for PCA. The problem is that the running time of these algorithms grows quickly with the number of data points, which makes them unsuitable for big data. Parallelizable implementations of SOM \cite{carpenter1987massively}, MDS \cite{varoneckas2015parallel}, and PCA \cite{andrecut2009parallel} exist, but they only speed up the computation by a constant factor without changing the asymptotic complexity, which may be adequate for now but will not scale to ever larger datasets; the sketches below illustrate where this cost comes from.
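To make the variance-preservation idea behind PCA concrete, a standard textbook formulation (given here only as an illustration, not as the formulation used in any of the cited implementations) projects each centered data point $x_i \in \mathbb{R}^d$ onto the leading eigenvectors of the sample covariance matrix:
\begin{equation}
\Sigma = \frac{1}{n}\sum_{i=1}^{n} x_i x_i^{\top}, \qquad y_i = W_k^{\top} x_i,
\end{equation}
where the columns of $W_k \in \mathbb{R}^{d \times k}$ are the eigenvectors of $\Sigma$ associated with the $k$ largest eigenvalues. This choice of $W_k$ maximizes the variance retained by the projection, and with $k \le 3$ the projected points $y_i$ can be plotted directly.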
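Similarly, the distance-preservation goal of MDS can be stated as minimizing a stress function over the low-dimensional coordinates; one common form (again, only an illustrative sketch) is
\begin{equation}
\mathrm{Stress}(y_1,\dots,y_n) = \sum_{i<j}\bigl(d_{ij} - \lVert y_i - y_j \rVert\bigr)^2,
\end{equation}
where $d_{ij}$ is the distance between points $i$ and $j$ in the original high-dimensional space. Because the objective ranges over all $\binom{n}{2}$ pairs, each evaluation already costs $O(n^2)$ in the number of points, which is one concrete reason these methods struggle as datasets grow.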