Daniel Stanley Tan edited untitled.tex  about 8 years ago

Commit id: 7cde2591dc369a5319cf6e8117b74580b645291c


In recent years there has been an explosion of data, and it continues to grow by the second. In fact, the data generated in the past decade is far larger than all the data collected in the past century combined. This enables scientists to gain a deeper understanding of their data that was not possible before, but the huge amount of data being collected also poses new problems. Data is growing faster than manufacturers can build computers to process it. Traditional techniques for data analytics cannot cope with these volumes because their processing time grows steeply as the number of data points increases. To make matters more challenging, the data is usually high dimensional as well, which increases the complexity of the problem further. No algorithm yet tackles all the problems of handling big data, but many works address some aspects of it.

I am particularly interested in pursuing further research on visualizing big data. Visualizing data helps reveal interesting patterns in large data sets that might not be obvious in other representations. It also aids domain experts in extracting information, generating ideas, and formulating hypotheses from the data. However, visualizing big and high-dimensional data is challenging because humans can only perceive up to three spatial dimensions.

A common way to handle this is through dimensionality-reduction techniques such as Multidimensional Scaling (MDS) and Principal Components Analysis (PCA), which map high-dimensional data into a lower-dimensional space. The mapping inevitably loses information, but these algorithms are designed so that useful distances are preserved and the information loss is minimized. The problem is that their running time grows much faster than linearly in the number of points (classical MDS, for instance, operates on an $n \times n$ distance matrix), which makes them unsuitable for big data. Parallel implementations of MDS and PCA exist, but they only reduce the running time by roughly a linear factor in the number of processors, which may be good enough for now but will not scale well into the future.

Clustering is another technique used in data mining. For big data, the clustering algorithm needs to run in at most quasilinear time, and many algorithms meet this requirement, such as BIRCH, DBSCAN, EM, and OPTICS, to name a few. BFR and CLIQUE seem especially promising for the task of big data visualization. The BFR (Bradley-Fayyad-Reina) algorithm is a variant of k-means that can handle large data sets. The idea is that if we assume the clusters are normally distributed, then we can summarize each cluster by its mean and standard deviation, effectively reducing the number of data points that must be processed in the succeeding iterations. This notion of summarizing data points, and thereby creatively reducing their number, may be applied to visualization to increase speed with minimal loss of information. CLIQUE, on the other hand, is a subspace clustering algorithm: it looks for clusters in subsets of the dimensions. This may be useful for reducing the number of dimensions and for revealing patterns that would otherwise be hidden by the inclusion of certain dimensions.
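To make the dimensionality-reduction step above concrete, the following is a minimal sketch of a PCA projection in plain Python with NumPy. The function name \texttt{pca\_project} and the synthetic data are illustrative only; this is the basic centering-plus-SVD projection, not the parallel or large-scale variants mentioned above.

\begin{verbatim}
import numpy as np

def pca_project(X, n_components=2):
    """Project the rows of X onto the top principal components."""
    # Center the data so the principal axes pass through the mean.
    X_centered = X - X.mean(axis=0)
    # The right singular vectors of the centered data are the
    # principal directions.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Coordinates of each sample in the reduced space.
    return X_centered @ Vt[:n_components].T

# Example: map 1,000 points in 50 dimensions down to 2-D for plotting.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
embedding = pca_project(X, n_components=2)
print(embedding.shape)  # (1000, 2)
\end{verbatim}

Even this simple version illustrates the scaling issue: the thin SVD of an $n \times d$ matrix costs roughly $O(nd^2)$ for $n > d$, which is why approximate or parallel variants become necessary as the data grows.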
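The cluster-summarization idea behind BFR can be illustrated with the sufficient statistics it keeps per cluster: the count $N$, the per-dimension sum, and the per-dimension sum of squares, from which the mean and standard deviation are recovered. The class below is a minimal sketch of that bookkeeping only; it omits BFR's discard, compression, and retained sets, and all names are illustrative.

\begin{verbatim}
import numpy as np

class ClusterSummary:
    """N / SUM / SUMSQ statistics for one cluster, BFR-style."""

    def __init__(self, dim):
        self.n = 0
        self.sum = np.zeros(dim)
        self.sumsq = np.zeros(dim)

    def add(self, point):
        # O(d) update per point, regardless of how many points
        # the cluster already summarizes.
        self.n += 1
        self.sum += point
        self.sumsq += point ** 2

    def mean(self):
        return self.sum / self.n

    def std(self):
        # Per-dimension standard deviation from the summary alone.
        return np.sqrt(self.sumsq / self.n - (self.sum / self.n) ** 2)

# Example: stream 10,000 points through the summary without storing them.
rng = np.random.default_rng(1)
summary = ClusterSummary(dim=3)
for _ in range(10_000):
    summary.add(rng.normal(loc=5.0, scale=2.0, size=3))
print(summary.mean())  # close to [5, 5, 5]
print(summary.std())   # close to [2, 2, 2]
\end{verbatim}

Because the raw points can be discarded once they are folded into a summary, memory and per-iteration work stay proportional to the number of clusters rather than the number of points, which is the property that makes the idea attractive for visualization as well.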
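CLIQUE's grid-and-density idea can also be sketched briefly. The function below partitions each dimension into equal-width intervals and reports cells that contain at least a threshold number of points, scanning one- and two-dimensional subspaces by brute force; it deliberately omits CLIQUE's Apriori-style pruning of candidate subspaces and its final region-growing step, and all parameter values are illustrative.

\begin{verbatim}
import numpy as np
from itertools import combinations
from collections import Counter

def dense_cells(X, bins=10, density_threshold=50, max_dims=2):
    """Report dense grid cells in 1-D and 2-D subspaces (CLIQUE-like)."""
    n, d = X.shape
    # Index of the grid cell each point falls into, per dimension.
    edges = [np.linspace(X[:, j].min(), X[:, j].max(), bins + 1)
             for j in range(d)]
    cell_ids = np.column_stack(
        [np.digitize(X[:, j], edges[j][1:-1]) for j in range(d)])
    dense = {}
    for k in range(1, max_dims + 1):
        for dims in combinations(range(d), k):
            counts = Counter(map(tuple, cell_ids[:, dims]))
            hits = {cell: c for cell, c in counts.items()
                    if c >= density_threshold}
            if hits:
                dense[dims] = hits
    return dense

# Example: cluster structure only in dimensions 0 and 1, noise elsewhere.
rng = np.random.default_rng(2)
informative = np.vstack([rng.normal(0.0, 0.3, size=(500, 2)),
                         rng.normal(3.0, 0.3, size=(500, 2))])
noise = rng.uniform(0.0, 3.0, size=(1000, 3))
X = np.hstack([informative, noise])
for dims, cells in dense_cells(X).items():
    print(dims, len(cells), "dense cells")
\end{verbatim}

In this toy example the compact dense cells concentrate in the subspace spanned by the first two dimensions, which is the kind of pattern a subspace method is meant to surface even when the remaining dimensions carry no cluster structure.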