
# Machine Learning Chapter


# Meta Section: Writing plan for this Chapter

First, give an introduction to machine learning: what it is, what it does, notation, how it is used, supervised learning, classifiers, and some aspects of the “paradigm shift” or “philosophy” or whatever one wants to call it. Comment on the focus on the data, training error, predictive power, generalization, etc. Carry everything along with an example on a dataset of calls and a logistic classifier.

Then cross-validation, variance, bias, errors, metrics, scores, and the like.

Finish with 4/5 common classifiers and explain the concepts that motivate them, the formulas, the hyperparameters, and the functions to minimize. Also give some overview of how they serve this or that use case, with advantages and disadvantages according to this or that author:

• multinomial naive bayes (for its simplicity and computational efficiency)

• “full” logistic regression (unlike the earlier intro, now with the regularization part, with comments on SGD and its efficiency, etc.)

• Random Forests


# Brief overview of machine learning and supervised classification problems

\label{section-introduction}

Machine learning is a subfield of computer science with broad theoretical intersections with statistics and mathematical optimization. At present it has a wide range of applications. A non-exhaustive list includes self-driving cars, spam detection systems, face and voice recognition, temperature forecasting, AI opponents in games, disease prediction in patients, stock pricing, etc. Machine learning programs are now widespread to the point where their use has a direct impact on the lives of millions of people. Because of this, machine learning also has practical intersections with data and software engineering.

The most widely used definition of machine learning is attributed to Tom Mitchell: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E” (Mitchell 1997). For our purposes it is clear that this definition¹ is not formally precise. However, it serves to convey the idea of algorithms that automatically learn to perform better over time and with more data. A spam filter illustrates the three ingredients: the task T is classifying emails as spam or not, the performance measure P is the fraction of emails classified correctly, and the experience E is a corpus of labeled emails. Note that the “goodness” of an algorithm's performance is inherently subject to the evaluation criteria chosen for the task. Because of this, learning in this context is associated less with a cognitive definition and more with a performance-based approach.

Machine learning is divided into two main categories: supervised and unsupervised learning. In the first case, algorithms are set to produce outputs, denoted by $$Y$$, from input data, denoted by $$X$$; i.e., the computer has access to examples of the outputs and tries to reproduce them based on the information contained in $$X$$. In this context, the algorithm is generally referred to as a learner.

The second category covers problems where the output $$Y$$ is missing from the data altogether. In this scenario the most common objectives are clustering samples, density estimation, and data compression. Linear regression and K-means clustering are examples of algorithms in the supervised and unsupervised categories, respectively.
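
To make the two categories concrete, the following is a minimal sketch using scikit-learn; the synthetic data and parameters are illustrative only, not the call-center example used later in the chapter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))             # input data X
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # observed outputs Y

# Supervised: the learner sees both X and Y and tries to reproduce Y.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: Y is withheld; the algorithm only groups the samples in X.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])
```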

Supervised learning can be further sub-categorized according to the nature of the problem. When the output variable $$Y$$ takes values in a discrete set, we have a supervised classification problem. When, on the other hand, the output takes a continuous range of quantitative values (or one dense in an open subset of $$\mathbb{R}$$), the problem is one of supervised regression. Note that regression problems can be encoded as classification problems by grouping the output values into ranges and treating those ranges as categories, as in the sketch below. In this work we will focus on the classification side of machine learning.
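
As a small illustration of this encoding, one can bin a continuous target into ranges; the cut points below are arbitrary and chosen only for the example.

```python
import numpy as np

y_continuous = np.array([1.2, 3.7, 0.4, 5.9, 2.8])
bins = np.array([2.0, 4.0])               # two cut points define three ranges
y_class = np.digitize(y_continuous, bins)
print(y_class)                            # [0 1 0 2 1], class labels 0..K-1
```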

Suppose our objective is to predict $$y$$ given a new sample $$x$$. In supervised regression problems, $$y$$ is a continuous variable; in classification problems, $$y$$ represents a label for a certain class. For the case of $$K$$ classes, $$y$$ most commonly takes values in the range $$0$$ through $$K-1$$ or $$1$$ through $$K$$. In both cases, the joint probability distribution $$p(X, Y)$$, called the true distribution, contains all of the information we need about these two variables. However, this distribution is most often unknown. The idea is then to use estimation and inference to determine the most likely values for new samples and make decisions with the information at hand. These decisions will be based on the best probabilistic characterization of the data that we have.
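
One natural decision rule that follows from this characterization (stated here as the standard Bayes classifier, not developed in detail in this section) assigns to a new sample the most probable class given its features; since $$p(X = x)$$ does not depend on the class, maximizing the conditional probability is the same as maximizing the joint:

$$\hat{y}(x) = \arg\max_{k \in \{1,\dots,K\}} p(Y = k \mid X = x) = \arg\max_{k \in \{1,\dots,K\}} p(X = x, Y = k)$$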

In this type of problem, both the theoretical and the computational aspects are of interest. The algorithms used need to respect the technical requirements of the software and hardware in use, as well as the constraints imposed by the problem setting. As such, they are expected to execute in a reasonable amount of time,² as dictated by the task specification and limited by the computing power at hand. Some problems require the algorithm to output predictions in real time, at the resolution of milliseconds. Picture a system where a credit-card transaction needs to be approved or labeled as fraud: the system must respond quickly on whether the transaction is fraudulent.

Other use cases might require the system to process a large volume of data at once, not a single event but a batch of them, and output the answers. The system in use needs to be prepared to run lean on a large inflow of data without overwhelming the available hardware. These examples show that, for a given problem, multiple algorithms are available, and while all of them are in theory performing the same task, we must also weigh their practical advantages. Computational efficiency and scalability are relevant concerns when working on these problems. Even though we won't delve into these aspects in this work, they are important considerations in the application of machine learning solutions.

In essence, a machine learning method is a probabilistic model, so it is very much like a statistical model. It differs chiefly in that its focus is generally on the model's predictive ability more than on the estimates of the model's parameters (Breiman 2001). The algorithms are built and used to replicate a given phenomenon as faithfully as possible, without necessarily identifying the true nature of the mechanisms behind that phenomenon. As such, most applications will try to generalize a problem rather than identify the system behind it. These subtle differences in approach are also reflected in the terminology, where most terms have equivalent or similar notions in statistics. To start, the dependent variable $$Y$$ is called the target or label, and the independent variables, covariates, or input variables are called features.
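
A short sketch of this predictive emphasis, assuming scikit-learn and synthetic data chosen purely for illustration: the model is judged by its held-out accuracy, not by the quality of its coefficient estimates.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The fitted coefficients are available (model.coef_), but the figure of
# merit here is how well the model predicts unseen samples.
print(accuracy_score(y_test, model.predict(X_test)))
```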

1. Other authors may refer to machine learning as statistical learning. See (Hastie 2009) for an example.

2. Here the word reasonable is used in a broad sense. It will depend entirely on time constraints, computational capacity, usage and other aspects of each learning application.