What is Predictive Modeling?

Predictive Modeling is a process through which a future outcome or behavior is predicted from the past and current data at hand. It is a statistical analysis technique that allows the probability of particular results to be estimated. Predictive modeling works by collecting data, creating a statistical model, and applying probabilistic techniques to predict the likely outcome.
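
As a minimal sketch of that workflow, assuming Python with numpy and scikit-learn and using synthetic data in place of a real study:

    # Collect (here: simulate) past data, build a statistical model, predict new cases.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X_past = rng.normal(size=(100, 2))                  # past observations, two predictors
    y_past = 3.0 * X_past[:, 0] - 1.5 * X_past[:, 1] + rng.normal(scale=0.5, size=100)

    model = LinearRegression().fit(X_past, y_past)      # the fitted statistical model
    X_new = rng.normal(size=(5, 2))                     # current data for unseen cases
    print(model.predict(X_new))                         # predicted future outcomes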

What is Power Analysis?

Power Analysis is an important aspect of experimental design. It allows us to determine the sample size required to detect an effect of a given size with a given degree of confidence.
There are four parameters involved in a power analysis. The researcher must ‘know’ three of them and solve for the fourth (a code sketch of solving for N follows the list).
1. Alpha:
   - Probability of finding significance where there is none
   - False positive
   - Probability of a Type I error
   - Usually set to .05
2. Power:
   - Probability of finding true significance
   - True positive
   - 1 – beta, where beta is:
     - Probability of not finding significance when it is there
     - False negative
     - Probability of a Type II error
   - Usually set to .80
3. N:
   - The sample size (usually the parameter you are solving for)
   - May be known and fixed due to study constraints
4. Effect size:
   - Usually, the ‘expected effect’ is ascertained from:
     - Pilot study results
     - Published findings from a similar study or studies
       - May need to be calculated from the results if not reported
       - May need to be translated into a design-specific form using rules of thumb
     - A field-defined ‘meaningful effect’
     - An educated guess (based on informal observations and knowledge of the field)
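
As a minimal sketch of solving for N, assuming the study compares two independent group means with a t-test and that the statsmodels library is available (the effect size, alpha, and power below are illustrative values, not prescriptions):

    # Solve for the sample size (N) given the other three parameters.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,          # expected effect (Cohen's d), e.g. from a pilot study
        alpha=0.05,               # Type I error rate (false positive)
        power=0.80,               # 1 - beta, where beta is the Type II error rate
        alternative="two-sided",
    )
    print(round(n_per_group))     # roughly 64 participants per group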

What is Overfitting?

In mathematics and statistics, a common task is to fit a “model” to a set of training data so that it can make reliable predictions on unseen data. In overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit has poor predictive performance, as it overreacts to minor fluctuations in the training data. The potential for overfitting depends not only on the number of parameters and the amount of data but also on the conformability of the model structure with the data shape, and on the magnitude of model error compared to the expected level of noise or error in the data. Even when the fitted model does not have an excessive number of parameters, the fitted relationship can be expected to perform less well on a new data set than on the data set used for fitting. In particular, the value of the coefficient of determination will shrink relative to the original training data.
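
As a small illustration, assuming numpy (the sine-plus-noise data and the polynomial degrees are arbitrary choices): a high-degree polynomial tracks noise in a small training set, so the coefficient of determination shrinks on new data.

    import numpy as np

    rng = np.random.default_rng(1)
    x_train = np.linspace(0, 1, 15)
    x_test = np.linspace(0, 1, 200)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)
    y_test = np.sin(2 * np.pi * x_test) + rng.normal(scale=0.2, size=x_test.size)

    def r2(y, y_hat):
        # Coefficient of determination.
        return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

    for degree in (3, 12):                              # modest vs. excessive complexity
        p = np.polynomial.Polynomial.fit(x_train, y_train, degree)
        print(degree, round(r2(y_train, p(x_train)), 3), round(r2(y_test, p(x_test)), 3))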

What is Out-Of-Sample Evaluation?

Out-Of-Sample Evaluation means withholding some of the sample data from the model identification and estimation process, then using the model to make predictions for the held-out data in order to see how accurate they are and whether the statistics of their errors resemble those of the errors the model made within the sample of data it was fitted to.
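
A minimal sketch of such a hold-out evaluation, assuming scikit-learn and synthetic data:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=200)

    # Withhold 25% of the sample from model identification and estimation.
    X_fit, X_hold, y_fit, y_hold = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LinearRegression().fit(X_fit, y_fit)

    mse_in = mean_squared_error(y_fit, model.predict(X_fit))      # within-sample error
    mse_out = mean_squared_error(y_hold, model.predict(X_hold))   # hold-out error
    print(mse_in, mse_out)   # similar magnitudes suggest the model generalizes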

What is an Outlier?

An outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement, or it may indicate an experimental error; the latter are sometimes excluded from the data set. Outliers can occur by chance in any distribution, but they often indicate either measurement error or that the population has a heavy-tailed distribution. In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate ‘correct trial’ versus ‘measurement error’; this is modeled by a mixture model. In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are simply far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition).
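
As a simple illustration (one common convention, not the only one), the 1.5 × IQR rule in numpy flags distant points as candidate outliers; deciding whether they reflect measurement error or genuine heavy-tailed variability still requires judgment:

    import numpy as np

    data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 14.9])  # last value suspect

    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1                                   # interquartile range
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # common 1.5 * IQR fences
    print(data[(data < lower) | (data > upper)])    # -> [14.9]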

What is the Nearest Neighbor Algorithm?

The Nearest Neighbor Algorithm was one of the first algorithms used to find a solution to the traveling salesman problem. In it, the salesman starts in a random city and repeatedly visits the nearest unvisited city until all have been visited. It quickly yields a short tour, but usually not the optimal one. The nearest neighbor algorithm is easy to implement and executes quickly, but because of its “greedy” nature it can miss shorter routes that are easily noticed with human insight. As a general guide, if the last few stages of the tour are comparable in length to the first stages, then the tour is reasonable; if they are much greater, then it is likely that much better tours exist. Another check is to use a lower-bound algorithm to estimate whether the tour is good enough. In the worst case, the algorithm results in a tour that is much longer than the optimal tour.
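
A minimal sketch of the heuristic, with a handful of made-up city coordinates:

    import math

    cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (2, 1)}

    def dist(a, b):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        return math.hypot(x2 - x1, y2 - y1)

    def nearest_neighbor_tour(start):
        tour, unvisited = [start], set(cities) - {start}
        while unvisited:
            # Greedy step: always hop to the closest unvisited city.
            nearest = min(unvisited, key=lambda c: dist(tour[-1], c))
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour

    tour = nearest_neighbor_tour("A")
    length = sum(dist(a, b) for a, b in zip(tour, tour[1:] + tour[:1]))  # close the loop
    print(tour, round(length, 2))   # a short tour, but not necessarily the optimal one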

What is Multiple Regression?

Multiple Regression is an extension of simple linear regression. It is used when we want to predict the value of a variable based on the value of two or more other variables. The variable we want to predict is called the dependent variable (or sometimes, the outcome, target or criterion variable). The independent variables can be continuous or categorical (dummy coded as appropriate).
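
A brief sketch with statsmodels (the variable names and data here are invented for illustration): one dependent variable predicted from two independent variables.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    hours_studied = rng.uniform(0, 10, 50)          # continuous predictor
    prior_score = rng.uniform(40, 90, 50)           # continuous predictor
    exam_score = 2.5 * hours_studied + 0.4 * prior_score + rng.normal(scale=3, size=50)

    X = sm.add_constant(np.column_stack([hours_studied, prior_score]))  # add intercept
    model = sm.OLS(exam_score, X).fit()
    print(model.params)      # intercept plus one coefficient per predictor
    print(model.rsquared)    # proportion of variance in the outcome explained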

What is Multinomial Logistic Regression?

Multinomial Logistic Regression is the regression analysis to conduct when the dependent variable is nominal with more than two levels. It is thus an extension of logistic regression, which analyzes dichotomous (binary) dependent variables; the output of a multinomial analysis differs somewhat from that of a binary logistic regression. Like other regression analyses, multinomial regression is a predictive analysis. It is used to describe data and to explain the relationship between one nominal dependent variable and one or more continuous-level (interval or ratio scale) independent variables.
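
As a sketch using scikit-learn (an assumption; the text does not prescribe a tool), the iris data set provides a nominal dependent variable with three levels:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)               # three species = three nominal levels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # With more than two classes, recent scikit-learn versions fit a multinomial
    # logistic model by default with the lbfgs solver.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.predict(X_test[:5]))                  # predicted level for new cases
    print(clf.predict_proba(X_test[:5]).round(3))   # probability of each level
    print(clf.score(X_test, y_test))                # classification accuracy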