{"id":483,"date":"2018-08-08T12:14:36","date_gmt":"2018-08-08T12:14:36","guid":{"rendered":"https:\/\/datagradient.com\/?p=483"},"modified":"2019-09-08T14:09:11","modified_gmt":"2019-09-08T14:09:11","slug":"dimension-reduction","status":"publish","type":"post","link":"https:\/\/datasciencediscovery.com\/index.php\/2018\/08\/08\/dimension-reduction\/","title":{"rendered":"Dimension Reduction"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">What is Dimension Reduction?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">More importantly, why do I need it?<\/h3>\n\n\n\n<p>Before we get into the different types of dimension reduction techniques, let&#8217;s build some understanding of why we need them.<\/p>\n\n\n\n<p>I was once working on a Kaggle data set which had over 4,000 dimensions. A large number of dimensions adds to the <em>complexity<\/em> and the <em>computational load<\/em> of predictive tasks and even basic transformations. Visual intuition is out of reach at this point.<\/p>\n\n\n\n<p>Similarly, think of datasets with factor variables having many levels, each of which is treated as a separate variable in classification problems. Deep learning tasks, such as image classification, can involve an even larger set of variables. <br>\nOf course, the most basic steps, such as removing independent variables with very high correlation or near-zero variance (for example, a variable with a small spread of values, mostly concentrated at a single value), do help, but only to a small extent.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>What can be done? Enter <strong>Dimension Reduction<\/strong> Techniques<\/p><\/blockquote>\n\n\n\n<p>These are statistical techniques that allow us to transform the variables into a lower-dimensional space without much loss of information. We wish to find the <strong>latent features<\/strong> in the data, that is, the features that provide the useful information. 
To start with, let us understand some basic techniques that help us thin the herd, and then develop intuition for the more advanced techniques:<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"> <strong>Basic Techniques<\/strong>   <\/h2>\n\n\n\n<h4 class=\"wp-block-heading\"> <strong>Missing Values:<\/strong>  <\/h4>\n\n\n\n<p>If a variable has too many missing values, it is an unlikely candidate for missing-value imputation, and we can simply drop it.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"> <strong>Low Variance:<\/strong>  <\/h4>\n\n\n\n<p>A variable with very low variance has a very high concentration of observations at the same value. For example, if a numeric variable takes the value 100 for 99% of observations while the remaining 1% lie in the range 110-120, it does not help your model learn much.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"> <strong>High Correlation:<\/strong>  <\/h4>\n\n\n\n<p>If two variables have a correlation of, say, 0.99, we can drop one of them. Some of the techniques mentioned later take care of such variables as part of their own process, making this step necessary only in certain conditions; for example, if there is a computational constraint on some of the other techniques, applying this step first and then following it with those techniques will help.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Advanced Techniques<\/strong><\/h2>\n\n\n\n<h4 class=\"wp-block-heading\"> <strong>Variable Importance:<\/strong> <\/h4>\n\n\n\n<p>This technique can still be computationally heavy if the purpose is solely to reduce the dimensions: it involves fitting a model such as a random forest and then evaluating which variables are most important to the model. This process is analogous to forward\/backward feature selection in regression. 
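A minimal sketch of this idea, assuming scikit-learn and the IRIS data set; the mean-importance cutoff used here is an arbitrary illustrative choice:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load the IRIS data set (150 samples, 4 features).
X, y = load_iris(return_X_y=True)

# Fit a random forest and read off the impurity-based variable importances.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_

# Keep only the variables whose importance exceeds the mean importance
# (the cutoff is an illustrative choice, not a rule).
keep = importances > importances.mean()
X_reduced = X[:, keep]
print(X.shape, "->", X_reduced.shape)
```

On IRIS this typically keeps the two petal measurements, which dominate the importance ranking.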
<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"> <strong>Factor Analysis:<\/strong>  <\/h4>\n\n\n\n<p>This is a technique that groups correlated variables to reduce the dimensional space. <br> Suppose there are two sets of variables: <strong>X1, X2, X3<\/strong> form one <strong>correlated<\/strong> set and <strong>Y1, Y2<\/strong> form another <strong>correlated<\/strong> set. <br> Here, Y1 has no correlation with X1; that is, any variable of the first set has low correlation with any variable of the second set, but high correlation with variables of its own set. In such a case the problem can be reduced to a set of <strong>two latent variables<\/strong>.<br> The <em>idea<\/em> behind factor analysis is to find the representative variable behind a set of similar variables. <\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">  X1 measures a car's fuel efficiency \n  &amp; X2 measures the engine power.\n  Here X1,X2 are trying to measure how well the car runs. \n  Y1,Y2 represent the interiors of the car, \n  the leg space and so on.\n  Comfort is the underlying representative variable \n  being measured via Y1,Y2. <\/pre>\n\n\n\n<figure class=\"wp-block-pullquote\" style=\"border-color:#0693e3\"><blockquote class=\"has-text-color has-very-dark-gray-color\"><p><strong>Matrix Based Techniques:<\/strong> Involves <em>Linear Transformation<\/em><\/p><\/blockquote><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Principal Component Analysis:<\/strong><\/h3>\n\n\n\n<p>PCA is a novel way to explain the data. It reduces the dimensions by applying a linear transformation to the existing set of variables. 
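As a minimal sketch of such a transformation, assuming only NumPy and a synthetic data matrix, PCA can be computed from the singular value decomposition of the centered data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 100 samples in 5 dimensions whose variation
# lives almost entirely in a 2-dimensional subspace.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5)) \
    + 0.01 * rng.normal(size=(100, 5))

# Center the data, then take the top-k right singular vectors
# of the centered matrix as the principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                     # scores: the data in k dimensions
X_hat = Z @ Vt[:k] + X.mean(axis=0)   # reconstruction from k components

reconstruction_error = np.linalg.norm(X - X_hat)
print(Z.shape, reconstruction_error)
```

Because the data is essentially rank 2 plus a little noise, two components reconstruct it almost perfectly, which is exactly the objective described next.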
The transformation is carried out with the prime objective that the variation present in the data is explained, and that if we were to reconstruct the old variables from the new variables, the error is minimal. We will explore this technique further <a href=\"https:\/\/datasciencediscovery.com\/index.php\/2018\/09\/08\/pca\/\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Linear Discriminant Analysis:<\/strong><\/h3>\n\n\n\n<p>LDA here refers to Linear Discriminant Analysis, not its namesake Latent Dirichlet Allocation, which is popular for topic modeling. Statistically, it assumes the data points of the two classes come from separate multivariate normal distributions with different means but the same co-variance matrix. A hyper-plane that separates these two classes is then computed. <br>     Unlike PCA, the idea here is not to minimize the reconstruction error, but to maximize the separation of the two classes. Hence, LDA is a <strong>supervised<\/strong> dimension reduction technique that finds a subspace that separates the classes as much as possible.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"asm\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">On a 2-D graph, if a line is shifted parallel to itself\nsuch that the entire space can be filled up, \nthen it is a hyper-plane.\nA hyper-plane is a subspace whose dimension is \none less than that of its ambient space.\nIn different settings, the objects which are \nhyper-planes may have different properties.<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Other Techniques<\/h3>\n\n\n\n<p>Non-negative matrix factorization (<strong>NMF<\/strong>) is a method for discovering low dimensional<br> representations of non-negative data. It tries to find two non-negative matrices (W, H) whose product approximates the non-negative matrix X. 
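A minimal sketch of NMF, assuming scikit-learn is available; the matrix X here is a synthetic non-negative low-rank example and the names are illustrative:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# A non-negative matrix X of exact rank 3, built as a
# non-negative product so an exact factorization exists.
W_true = rng.random((30, 3))
H_true = rng.random((3, 8))
X = W_true @ H_true

# Factor X into W (30 x 3) and H (3 x 8), both non-negative,
# so that W @ H approximates X; W is the reduced representation.
model = NMF(n_components=3, init="random", random_state=0, max_iter=1000)
W = model.fit_transform(X)
H = model.components_

relative_error = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(W.shape, H.shape, relative_error)
```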
<strong>Generalized Low Rank Models<\/strong> can also be used for dimension reduction.<\/p>\n\n\n\n<figure class=\"wp-block-pullquote\" style=\"border-color:#0693e3\"><blockquote><p><strong>Neighbor (Graphs) Based Techniques:<\/strong> Involves <em>Non-Linear Transformation<\/em><\/p><\/blockquote><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Laplacian Eigenmaps:<\/h3>\n\n\n\n<p>Let&#8217;s say we take a data point on a graph and draw a connecting edge to its closest points. We assign weights to these edges based on the similarity of the points. From this map of connected points we build an objective function, which we minimize to obtain a low-dimensional representation. <br>\n    The locality-preserving character of the Laplacian eigenmap algorithm makes it relatively insensitive to outliers and noise. <em>Laplacian Eigenmaps thus simply tries to connect similar points and represent them in a lower dimensional space.<\/em><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Isomap:<\/h3>\n\n\n\n<p>Multidimensional scaling (<strong>MDS<\/strong>) is a form of non-linear dimensionality reduction. It takes a matrix of pair-wise distances between all points and computes a position for each point. <br>\nMDS tries to preserve the Euclidean distances while reducing the dimensions. Isomap replaces the Euclidean distance used in MDS with the geodesic distance. This method is computationally intensive; however, at the time of its release it was hailed as a major development in the field.<\/p>\n\n\n\n<blockquote style=\"text-align:left\" class=\"wp-block-quote\"><p><strong>What is geodesic distance? <\/strong><br>        The distance between two vertices in a graph<br>        is the number of edges in a shortest path. 
<br>        If we wish to measure the distance between the poles of the Earth, <br>        consider a series of points connected to each other <br>        forming the shortest path between the poles;<br>        its length is the geodesic distance.<br><strong>Why was this used? <\/strong><br>        Geodesic distance is more effective <br>        at capturing the structure of the manifold <br>        than Euclidean distance.<\/p><\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Locally Linear Embedding:<\/h3>\n\n\n\n<p>A non-linear unsupervised technique that computes a new co-ordinate for each data point on a low-dimensional manifold. We first look for the neighbors of a data point and then ask: if the selected point were not available, could we recover it from the identified neighbors? Thus, we assign weights that allow us to reconstruct the selected point from its neighbors. Finally, we map the point to a lower-dimensional space while preserving these weights.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">T-SNE:<\/h3>\n\n\n\n<p>T-distributed stochastic neighbor embedding (t-SNE) is a neighborhood-preserving embedding. While preserving the local structure of the manifold and mapping it to a low-dimensional space, t-SNE also tries to preserve geometry at all scales. <br>     To put it plainly, when we shift from a high-dimensional space to a low-dimensional one, it ensures that points that were close remain close and points that were far apart remain so. It computes probabilities of similarity between points in the high-dimensional space as well as in the low-dimensional space, and minimizes the difference between them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">UMAP:<\/h3>\n\n\n\n<p>A newer technique based on manifold learning that is computationally faster than t-SNE. It tries to preserve both the local and the global structure while mapping onto a low-dimensional space, using k-nearest neighbors and some concepts from topology. 
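UMAP itself lives in the third-party umap-learn package, so as a sketch of the neighbor-based workflow we can use scikit-learn's t-SNE on synthetic clustered data (the cluster layout is an illustrative assumption):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated clusters in 10 dimensions.
X = np.vstack([rng.normal(0.0, 0.5, size=(40, 10)),
               rng.normal(5.0, 0.5, size=(40, 10))])

# Embed into 2-D while preserving neighborhood structure.
emb = TSNE(n_components=2, perplexity=15, init="pca",
           random_state=0).fit_transform(X)
print(emb.shape)  # (80, 2)
```

Points that were neighbors in the 10-dimensional space stay neighbors in the 2-dimensional embedding, which is the property all of the techniques in this section share.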
We will further discuss this technique <a href=\"https:\/\/datasciencediscovery.com\/index.php\/2018\/09\/18\/umap\/\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Autoencoder:<\/h3>\n\n\n\n<p>Autoencoders are a family of neural networks whose primary purpose is to learn the underlying manifold, or feature space, of the data set. In other words, an autoencoder tries to encode (transform) the input data in a hidden neural-net layer and then decode it to get back as close to the input values as possible. The assumption here is that the transformations learned in the hidden layer represent the properties of the data that are of value. We will further discuss this technique in a future post.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft\"><img decoding=\"async\" loading=\"lazy\" width=\"888\" height=\"606\" src=\"https:\/\/i0.wp.com\/datasciencediscovery.com\/wp-content\/uploads\/2019\/08\/iris_dimensionality_reduction.jpg?resize=888%2C606&#038;ssl=1\" alt=\"\" class=\"wp-image-486\" srcset=\"https:\/\/i0.wp.com\/datasciencediscovery.com\/wp-content\/uploads\/2019\/08\/iris_dimensionality_reduction.jpg?w=888&amp;ssl=1 888w, https:\/\/i0.wp.com\/datasciencediscovery.com\/wp-content\/uploads\/2019\/08\/iris_dimensionality_reduction.jpg?resize=300%2C205&amp;ssl=1 300w, https:\/\/i0.wp.com\/datasciencediscovery.com\/wp-content\/uploads\/2019\/08\/iris_dimensionality_reduction.jpg?resize=768%2C524&amp;ssl=1 768w\" sizes=\"(max-width: 888px) 100vw, 888px\" title=\"\" data-recalc-dims=\"1\"><\/figure><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p>Consider the above image as an example of implementing some of these techniques on the IRIS data set, with the axes consisting of the components obtained from the respective techniques (color corresponds to the target variable). 
Which algorithm is right for you depends on the data set.<\/p>\n\n\n\n<p>There are several other dimension reduction techniques available; however, here we have explored only those that are currently prevalent in the industry or are truly novel methods.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">About Us<\/h4>\n\n\n\n<p>Data Science Discovery is a step on the path of your data science journey. Please follow us on <a href=\"https:\/\/www.linkedin.com\/company\/data-science-discovery\/\" target=\"_blank\" rel=\"noopener\">LinkedIn<\/a> to stay updated.<\/p>\n\n\n\n<p>About the writers:<\/p>\n\n\n\n<ul><li><a href=\"http:\/\/linkedin.com\/in\/gadiankit\/\" target=\"_blank\" rel=\"noopener\">Ankit Gadi<\/a>: A knack and passion for data science, coupled with a strong foundation in Operations Research and Statistics, helped me embark on my data science journey.<\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>What is Dimension Reduction? More importantly why do I need it? Before we get into the different types of dimension reduction techniques, let&#8217;s build some understanding regarding the need. I was working on a Kaggle data set which had over 4K dimensions. 
Large number of dimensions adds to the complexity, the computational load in the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":487,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false,"spay_email":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true},"categories":[53],"tags":[64,54,60,65,63,25,61,62,55,57,59,56,58],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/datasciencediscovery.com\/wp-content\/uploads\/2018\/08\/black-and-white-cube-rubik-s-cube-437345.jpg?fit=4415%2C3456&ssl=1","jetpack_publicize_connections":[],"_links":{"self":[{"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/posts\/483"}],"collection":[{"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/comments?post=483"}],"version-history":[{"count":4,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/posts\/483\/revisions"}],"predecessor-version":[{"id":500,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/posts\/483\/revisions\/500"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/media\/487"}],"wp:attachment":[{"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/media?parent=483"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/categories?post=483"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/datasciencediscovery.com\/index.php\/wp-json\/wp\/v2\/tags?post=483"}],"curies":[{"name":
"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}