{L(\beta)} = \prod_{i=1}^n p(x_i)^{y_i}\bigl(1-p(x_i)\bigr)^{1-y_i}
\end{equation}

We could substitute the equation for \textit{p(x)} from above, but we will not do so here. The form of the likelihood function, which is the same as the distribution of the response, reflects the assumption that the response follows a \textit{binomial distribution}. Finding the parameters $\beta_i$ that maximize the likelihood function involves numerical methods that are outside the scope of this paper and are not a concern for the modeler.

\subsection{Making Predictions}

As stated in the overview, logistic regression gives a model whose output is the probability that $y=1$. To predict whether an instance belongs to the class, the user must choose a cutoff value, say 0.5; all instances with $p(x) > 0.5$ are then classified as being in the class. With one predictor variable, this corresponds to choosing a cutoff value of $x$. With two predictors, a line is chosen that divides the values in and out of the class, and so on for higher dimensions. This dividing surface is called the \textit{decision boundary}, and once it is fixed the classification is unambiguous.
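The prediction rule above can be sketched in a few lines of Python. This is a minimal illustration, not part of the paper's method: the function name \texttt{predict} and the coefficient values are hypothetical, chosen so that the 0.5 cutoff with one predictor corresponds to the decision boundary $x = 2$.

```python
import math

def predict(x, betas, cutoff=0.5):
    """Classify one instance from a fitted logistic model.

    `betas` holds the intercept beta_0 followed by one coefficient
    per predictor; `x` is the list of predictor values.
    """
    # Linear combination: beta_0 + beta_1*x_1 + ... + beta_k*x_k
    z = betas[0] + sum(b * xi for b, xi in zip(betas[1:], x))
    # Logistic function gives p(x), the modeled probability that y = 1
    p = 1.0 / (1.0 + math.exp(-z))
    return 1 if p > cutoff else 0

# With hypothetical model p(x) = 1/(1 + exp(-(-2 + x))), the 0.5
# cutoff places the decision boundary at x = 2.
print(predict([1.9], [-2.0, 1.0]))  # below the boundary -> 0
print(predict([2.1], [-2.0, 1.0]))  # above the boundary -> 1
```

Changing \texttt{cutoff} shifts the boundary: lowering it trades more false positives for fewer false negatives, which is why the choice belongs to the user rather than the fitting procedure.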