Conclusions

Now, it is time to look at and comment on the final model for DISTANCE after removing outliers, working through the process of model building, adding interactions, and assessing accuracy and stability.

My latest model for predicting DISTANCE uses AIR_TIME, ARRIVAL_DELAY, DEPARTURE_TIME, and the WEATHER*AIR_TIME interaction. Of course, I’ve already divided the data to create training and test sets.

Or as the model equation:

DISTANCE = -157.99 + 8.28*AIR_TIME - 0.017*ARRIVAL_DELAY + 0.0082*DEPARTURE_TIME

Also, R-squared is about 94%, which means the model explains most of the variation in DISTANCE with the data we have; to push it closer to 100%, though, there is clearly more work to be done.

But let’s also look at prediction accuracy and stability, as if we were making predictions of DISTANCE for a new flight that is not already in our dataset.

  • tstpred=fit$coeff[1]+fit$coeff[2]*dtst$AIR_TIME+fit$coeff[3]*dtst$weather+fit$coeff[4]*dtst$ad
  • tstresid=dtst$DISTANCE-tstpred
  • mean(abs(fit$resid))
    • 69.83145
  • mean(abs(tstresid))
    • 68.30

Using the means and the two histograms above, we can discuss stability and accuracy. The two histograms are very similar: both are centered at zero, and most observations fall within roughly 150 to 200 miles of zero. Our means are also very similar, which shows that the model is pretty stable. In terms of accuracy, however, the model is off by roughly 70 miles on average, which is a large amount.
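For reference, the histograms referred to here were drawn in the earlier prediction post; a minimal sketch of the code that produces them side by side (assuming fit and tstresid from above) is:

  • par(mfrow=c(1,2))   # two panels side by side
  • hist(fit$resid)     # training-set errors
  • hist(tstresid)      # test-set errors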

If we were working with actual flights, we would need to consider other variables in order to make better predictions.

Thank you for following along!!!

Interaction

Now it’s time to try adding interaction terms to the model. My existing variables are AIR_TIME, ARRIVAL_DELAY, and DEPARTURE_TIME, so I could try adding combinations of these three. Alternatively, I could try adding interactions between these and other variables that were not significant when I worked through the backward stepwise selection process; perhaps WEATHER becomes useful when interacting with AIR_TIME, for example?

So I’ll create the following two interaction terms to try adding to my model. If this were a real analysis, I would try many more combinations to be sure that I have extracted all of the value from my existing options for predictors.

Note: I’m starting with the dataset where outliers have been removed.

we=d$WEATHER*d$AIR_TIME 

ad=d$ARRIVAL_DELAY*d$AIR_TIME

d=cbind(d,we,ad)

  • I’m using WEATHER, a binary variable, so that the interaction is easier to see and explain. 
  • Unfortunately, this code, along with the model fit that follows, did not run for me; I kept getting an error. 

Now I’ll divide my dataset into training and test sets, and build the model using the training set.

  • fit=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$ARRIVAL_DELAY+dtrn$DEPARTURE_TIME+dtrn$we+dtrn$ad)
  • summary(fit)
  • There is no output because the code above did not run; I kept getting an error.
  • If it had run, the summary would show whether the WEATHER*AIR_TIME interaction is significant, whether R-squared improved, and what the new model equation (including the sign of each slope) would look like. 
  • The error most likely means that the variables handed to lm() are not all the same length (for example, if we and ad were built from a version of the dataset with a different number of rows); it does not tell us that the variables act independently. A hedged workaround is sketched just after this list. 
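As a hedged alternative (a sketch, not code I actually ran), the interaction terms can be built inside the lm() formula itself with the data= argument; because every column then comes from the same data frame, the variables are guaranteed to have matching lengths. The name fit2 and the column names are assumptions:

  • fit2=lm(DISTANCE~AIR_TIME+ARRIVAL_DELAY+DEPARTURE_TIME+WEATHER:AIR_TIME+ARRIVAL_DELAY:AIR_TIME,data=dtrn)   # ":" builds each interaction term
  • summary(fit2)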

Interaction terms are always challenging to interpret, but it’s hard to deny that they add value to a model when they increase the R-squared. To be sure the model is actually improving, I would also want to check whether prediction accuracy and stability have improved, but I won’t be doing that here. 

Outliers

While investigating the distributions of my variables with R, I could have looked for outliers, but here I’ll look again and consider removing any unusual observations.

I will draw charts of DISTANCE versus each of my three significant predictors, AIR_TIME, ARRIVAL_DELAY, and DEPARTURE_TIME. The outliers have to be removed before we build the model. I’ll use the original dataset, d, to investigate if my dataset contains outliers.

  • par(mfrow=c(2,2))
  • plot(d$AIR_TIME,d$DISTANCE,pch=20,cex=.5)
  • plot(d$ARRIVAL_DELAY,d$DISTANCE,pch=20,cex=.5)
  • plot(d$DEPARTURE_TIME,d$DISTANCE,pch=20,cex=.5)

I do see a couple of extremes, and I also see values that are close to or equal to 0, which is quite strange. I won’t focus my efforts on the top and bottom percentiles; instead, I will focus on removing those odd observations.

First, we will count how often these zero values occur before we remove them:
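My notes don’t include the counting code, but a minimal sketch (assuming these column names) would be:

  • sum(d$DISTANCE==0)        # each line counts how many rows have a value of exactly 0
  • sum(d$ARRIVAL_DELAY==0)
  • sum(d$DEPARTURE_TIME==0)
  • sum(d$AIR_TIME==0)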

From this, we can see that in the plot of DISTANCE versus ARRIVAL_DELAY there are 11,438 observations at or around 0. 

It’s concerning that ARRIVAL_DELAY is 0 in so many cases. Measurements of 0 are likely strange, so I’ll remove them here (note that the > 0 filters below also drop negative values, such as early arrivals). Here’s the code I’ll use to remove the 0 observations:

  • d=d[which(d$DISTANCE>0),]
  • d=d[which(d$ARRIVAL_DELAY>0),]
  • d=d[which(d$DEPARTURE_TIME>0),]
  • d=d[which(d$AIR_TIME>0),]

Now, let’s see if removing these outliers improves the overall fit of our model to the other, more typical, observations in the dataset.

Now I’ll divide the dataset and build my model:

  • n=length(d$DISTANCE) 
  • train=sample(1:n,.7*n,replace=FALSE)
  • test=setdiff(1:n,train)
  • dtrn=d[train,]
  • dtst=d[test,]
  • fit=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$ARRIVAL_DELAY+dtrn$DEPARTURE_TIME)
  • summary(fit)

The original dataset (before removing outliers) had an R-squared of about 95.36%. Now that I’ve removed the strange observations, my R-squared has, oddly, decreased to 93.73%. Part of the reason may be that dropping those rows shrinks the spread of DISTANCE, which can lower R-squared even when the fit to the remaining, more typical observations is fine; still, in the future I would reassess how the outliers were handled. It’s also a good idea to check the prediction accuracy and stability before and after removing the outliers to be sure the model is really improving. 

Thanks!

Variable Selection

Overfitting means adding so many variables to a model that it fits the training data almost "perfectly" but generalizes poorly to new data.

We are trying to build a great model that explains the variation in our response, so we start by considering all of the available predictor variables. We will then use backward stepwise selection to look at each predictor more closely.

It is easiest to illustrate this with a linear regression model. For my linear regression model, I’ll continue to use DISTANCE as the response variable. Again, we are only using the training data set!

I will divide my data into training and test sets, as done in previous posts (randomly assigning 70% of subjects to the training set), using the following code:

  • n=length(d$DISTANCE) 
  • train=sample(1:n,.7*n,replace=FALSE)
  • test=setdiff(1:n,train)
  • dtrn=d[train,]
  • dtst=d[test,]

I have many variables in my dataset, so I’ll use DISTANCE as the response and I will only be using 4 others as predictors:

  • fit=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$ARRIVAL_DELAY+dtrn$DEPARTURE_TIME+dtrn$airline)
  • summary(fit)

In the model above with 4 predictors, we can see that only 3 are chosen as significant contributors to the model: AIR_TIME, ARRIVAL_DELAY, and DEPARTURE_TIME. The predictor airline was NOT a useful contributor.

So my new baseline model is:

  • fit=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$ARRIVAL_DELAY+dtrn$DEPARTURE_TIME)
  • summary(fit)

For my first model, R-squared was 0.9536, and for the second model, excluding the non-significant predictor airline, R-squared remains 0.9536. Altogether, AIR_TIME, ARRIVAL_DELAY, and DEPARTURE_TIME explain 95.36% of the variability in DISTANCE. 

If there is something contributing only a fraction of a percent, we should consider removing it. To make that determination, we need to use backward stepwise selection.

Using backward stepwise selection, we will be removing each of these three variables, one at a time:

  1. fit=lm(dtrn$DISTANCE~dtrn$ARRIVAL_DELAY+dtrn$DEPARTURE_TIME)
    1. Air time was removed here. R-squared dropped to 0.0003911, which means that without air time the remaining two predictors explain almost none of the variation in DISTANCE. (The R-squared values for all three reduced models can be pulled out side by side; see the sketch after this list.)
  2. fit=lm(dtrn$DISTANCE~dtrn$ARRIVAL_DELAY+dtrn$AIR_TIME)
  3. fit=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$DEPARTURE_TIME)
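To make the comparison concrete, the R-squared of each reduced model can be read straight out of its summary; a small sketch (fit1 to fit3 are hypothetical names for the three models above):

  • fit1=lm(dtrn$DISTANCE~dtrn$ARRIVAL_DELAY+dtrn$DEPARTURE_TIME)   # air time removed
  • fit2=lm(dtrn$DISTANCE~dtrn$ARRIVAL_DELAY+dtrn$AIR_TIME)         # departure time removed
  • fit3=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$DEPARTURE_TIME)        # arrival delay removed
  • c(summary(fit1)$r.squared,summary(fit2)$r.squared,summary(fit3)$r.squared)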

From the output, we see that removing arrival delay or departure time barely changes R-squared, so each of them contributes very little beyond the other predictors; removing air time, however, makes R-squared collapse, so air time is doing almost all of the work in explaining distance. 

Conclusions: In this first round, where each of the 3 variables was removed one at a time, it is clear that air time is the predictor the model cannot do without, while arrival delay and departure time each contribute well under 1% on their own. In future rounds, those two are the predictors I would consider removing.  

I will need to do more rounds of selection, removing one variable at a time until each has been assessed in every round. The process stops when no more variables need to be removed.
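As a side note, R can automate this loop; a sketch, assuming the model were fit with the data= interface so that step() can refit it (this is an AIC-based alternative to checking R-squared by hand, not what I did above):

  • fit=lm(DISTANCE~AIR_TIME+ARRIVAL_DELAY+DEPARTURE_TIME+airline,data=dtrn)
  • step(fit,direction="backward")   # drops predictors one at a time until nothing improves the AIC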

Even though not all of my predictors are equally important, I feel more confident knowing which ones matter most as I build the best possible linear regression model I can generate from this dataset. 

Logistic Regression Prediction

Now we have to return to the logistic regression model and assess its prediction accuracy and stability. 

First, we divide the data into training and test sets. To do this, we will again randomly select 70% of the subjects for the training set. We will build the model in the training data and compare the prediction performance in the training set vs. the unseen subjects in the test set. 

  • n=length(d$DISTANCE) 
  • train=sample(1:n,.7*n,replace=FALSE)
  • test=setdiff(1:n,train)
  • dtrn=d[train,]
  • dtst=d[test,]

The code above creates our two datasets. The dimensions are shown below: 

  • dim(dtrn)
    • 410510
  • dim(dtst)
    • 175946 

Now, we will build our logistic regression model using the training data only:

  • fit=glm(dtrn$weather~dtrn$DISTANCE+dtrn$AIR_TIME,family="binomial")
  • summary(fit)

Model Equation:

  • log(p/(1-p)) = b0 + b1*DISTANCE + b2*AIR_TIME, where p is the probability of weather affecting the flight and b0, b1, b2 are the coefficients read off the training-set summary (they should stay reasonably close to the full-data estimates from the Logistic Regression Basics post: -5.11, -0.005, and 0.04).

For the model run on training dataset (as opposed to the full dataset, as was done before), it seems that all of our variables are significant predictors. 

Note: I should go back and look to see if including this predictor changes any of my tests of assumptions!

Now I can make predictions for the test set using this model equation:

  • k=fit$coef[1]+fit$coef[2]*dtst$DISTANCE+fit$coef[3]*dtst$AIR_TIME
  • tstpred=exp(k)/(1+exp(k))

Once I have predictions, I can round those probabilities into 0s and 1s, officially making a YES or NO prediction for each flight:

trnactual=dtrn$weather

trnpred=round(fit$fitted,0)

tstactual=dtst$weather

tstpred=round(tstpred,0)

Now, trnactual, trnpred, tstactual, and tstpred are lists of 0s and 1s for each flight. I can use confusion matrices to compare my predictions (trnpred and tstpred) to the actual values (trnactual and tstactual). 

cfm1=aggregate(trnpred,by=list(trnpred,trnactual),length)

cfm1[,3]=cfm1[,3]/sum(cfm1[,3])

colnames(cfm1)=c("trnpred","trnactual","pct")

cfm1

cfm2=aggregate(tstpred,by=list(tstpred,tstactual),length)

cfm2[,3]=cfm2[,3]/sum(cfm2[,3])

colnames(cfm2)=c("tstpred","tstactual","pct")

cfm2

None of the code to build confusion matrices ever changes! So, this code is always useful once I have built the trnactual, trnpred, tstactual, and tstpred variables. The R output for the confusion matrices are shown below:

cfm1 – did not produce any output; I kept getting an error

Group 1 = trnpred 

Group 2 = trnactual 

X = pct 

Accuracy and Stability: We did not get an output for cfm1, so it is harder to compare stability and accuracy between cfm1 and cfm2. If we could, we would check stability by reading from left (training data, cfm1) to right (test data, cfm2) and seeing whether the percentages stay close to each other. For accuracy, we would look at the 0s and 1s: a 0 predicts that weather did not have an impact on the flight, and a 1 predicts that weather had a strong impact on the flight. 
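If the confusion-matrix code had run, a hedged way to boil accuracy down to one number per set is the share of correct predictions (a sketch, assuming the prediction and actual vectors built above):

  • mean(trnpred==trnactual,na.rm=TRUE)   # training accuracy
  • mean(tstpred==tstactual,na.rm=TRUE)   # test accuracy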

Further, in the future we could look at how to improve our glm model so that it actually produces results for cfm1 (the training set). After making those improvements, we would probably need to research further in order to get a more in-depth look at the relationship between weather, distance, and our other variables. 

Linear Regression Prediction Post

First, we have to divide our data set into a training set and a test set. The training set acts as the working set used to build the model, and the test set is held back to evaluate the model we already created. 

We will randomly divide the data into two sets:

  • n=length(d$DISTANCE) 
  • train=sample(1:n,.7*n,replace=FALSE)
  • test=setdiff(1:n,train)
  • dtrn=d[train,]
  • dtst=d[test,]

Now, we have the two data sets that we will use to further look at our model that was previously created. 

  • dim(dtrn)
  • dim(dtst)

Next, we will build the model.

  • fit=lm(dtrn$DISTANCE~dtrn$AIR_TIME+dtrn$weather)
  • summary(fit)

Model equation is:

DISTANCE = -151.78 + 8.6*AIR_TIME - 58.44*weather

The estimates are slightly different from our previous posts because we only used 70% of the original data; however, it is good to see that all of our original variables stay relatively close to their original estimates. 

Now, I’ll make predictions and calculate the errors:

  • tstpred=fit$coeff[1]+fit$coef[2]*dtst$AIR_TIME+fit$coef[3]*dtst$weather
  • tstresid=dtst$DISTANCE-tstpred

Once the errors are calculated for the test set, I can draw histograms for both sets and look at the average size of the errors.

  • par(mfrow=c(1,2))
  • hist(fit$resid)
  • hist(tstresid)
  • mean(abs(fit$resid))
  • mean(abs(tstresid))
  • mean(abs(fit$resid))
    • 69.83145
  • mean(abs(tstresid))
    • 68.30

Now, we have to look at the stability and accuracy of the charts above.

Stability – The two histograms shown above are very similar. They are both centered at zero, and most observations fall within roughly 150 to 200 miles of zero. Also, our means are very similar, which shows that the model is pretty stable. 

Accuracy – On average, the model is off by about 70 miles, which is a large amount. Even though this model is stable, the histograms show that it is quite inaccurate, with some flights over- or under-estimated by as much as roughly 500 miles. 
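One hedged way to put the roughly 70-mile average error in context is to compare it with the typical flight distance (a sketch; the exact ratio depends on the data):

  • mean(abs(tstresid))/mean(dtst$DISTANCE)   # average error as a fraction of the average distance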

It is important that a model is both stable and accurate, not just one or the other. Since our model is stable but inaccurate, it is consistently "bad."

Logistic Regression Assumptions

It’s time to test the assumptions and requirements of logistic regression models.

The output of my model from the last post is shown below:

The model equation is as follows:

log(p/(1-p)) = -5.11 - 0.005*DISTANCE + 0.04*AIR_TIME, where p = probability of weather affecting flights 

Next, we will be testing the 6 assumptions of logistic regression.

1. Good, Linear Model

To have a good linear model, we need to include all of our significant/relevant variables and exclude all irrelevant variables. We can move on from this assumption.

2. No Perfect Multicollinearity

Below is the code I will be using to test for multicollinearity:

  • cor(d$DISTANCE,d$AIR_TIME) 
    • cor(cbind(dist=d$DISTANCE,at=d$AIR_TIME))
  • The output above gives the correlation between distance and air time. A correlation of exactly 1 or -1 would be perfect multicollinearity (and would keep the model from being fit at all); anything short of that is not perfect, although a very high value still means the two predictors are largely redundant. Given that AIR_TIME by itself explains most of the variation in DISTANCE in my linear model, I would expect this correlation to be very high, so these two predictors overlap heavily even though the multicollinearity is not perfect. 

3. Independent Errors

For independent errors we rely mostly on intuition. The question is whether the observations are independent: each row of this dataset is a single flight, and as long as each flight appears only once, there is no obvious reason for the errors of different flights to be related, so we can treat the assumption as satisfied by the study design. 

4. Complete Information

When testing for complete information, we need to create histograms for all of our variables, distance, air time, and weather. This will show if there is any information missing. This will also allow us to understand the ranges of each variable. Below are the histograms:

  • par(mfrow=c(2,2))
  • hist(d$DISTANCE)
  • hist(d$AIR_TIME)
  • hist(d$weather)

From these histograms, we can see which ranges of each X and Y variable are actually covered by the data. Building these graphs and looking at them is the only way to test for complete information, because R will never give us an error for it. 

There is a violation of complete information here! In each histogram most of the observations are piled up at the low end of the range, and the weather histogram is almost entirely 0s, so large parts of the possible ranges are barely covered. We can still continue to test our other assumptions; however, the real solution would be to collect more data in the future to fill in those gaps.

5. Complete Separation

To test for complete separation, we need to create two scatterplots using code. They are shown below:

  • par(mfrow=c(1,2))
  • plot(d$DISTANCE,d$weather)
  • plot(d$AIR_TIME,d$weather)

No vertical line can be drawn through either plot that puts all of the weather = 0 flights on one side and all of the weather = 1 flights on the other; the two groups overlap across the whole range of each predictor, which means flights of any distance or air time can have good or bad weather. In other words, the data cannot be separated, so this model is not suffering from complete separation. 
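A quick numeric cross-check (a sketch, assuming the same column names) is to compare the range of each predictor within the two weather groups; overlapping ranges confirm that no single cut can separate them:

  • tapply(d$DISTANCE,d$weather,range)   # min and max distance for weather = 0 and weather = 1
  • tapply(d$AIR_TIME,d$weather,range)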

6. Large Sample Size

A logistic regression model requires a large number of observations, in the hundreds or thousands. In the flights data set, there are 586,486 observations. With that many observations the model has a better chance of being accurate; however, even though the sample size is large, we can still be missing certain aspects of our dataset. 
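Because weather is binary, the effective sample size also depends on how many flights actually have weather = 1, not just the total row count; a quick hedged check:

  • table(d$weather)   # how many 0s and how many 1s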

We will learn to assess prediction accuracy in our models in the future, and may need to collect more data if we see that the model we have created has poor prediction accuracy.

Logistic Regression Basics

I can now begin working to understand the weather variable for the flights in my data set. Weather will serve as our binary response variable. 

I am building this model: 

  • fit=glm(d$weather~d$DISTANCE+d$AIR_TIME, family=”binomial”)
  • summary(fit)

The output from R appears below:

Both predictors are significant, and their slopes are worth looking at closely. 

The model equation should be written as follows:

  • log(p/(1-p)) = -5.11 - 0.005*DISTANCE + 0.04*AIR_TIME, where p = probability of weather affecting flights 
  • p is the probability that a given flight was affected by weather. 

As stated previously, the intercept and the slopes for both variables are significant.

However, we have to remember the rule of thumb: whenever the log odds are below -3, p is near 0; when the log odds are above 3, p is near 1; and when the log odds are 0, p is 0.5. 

The intercept of -5.11 can be interpreted as follows: for flights with DISTANCE and AIR_TIME close to 0, the log odds are about -5.11, so p is near 0 and it is very unlikely that the flight was diverted or otherwise affected by weather. 
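As a quick sanity check on that interpretation (a sketch using only the stated coefficients), the intercept can be converted from log odds to a probability:

  • k=-5.11              # log odds when DISTANCE and AIR_TIME are both 0
  • exp(k)/(1+exp(k))    # about 0.006, so p really is near 0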

For DISTANCE, the slope is negative (-0.005), so as distance increases (holding air time fixed), the probability of weather affecting the flight actually goes down, not up.

For AIR_TIME, each one-unit increase in air time increases the log odds of weather affecting a flight by 0.04 (holding distance fixed). 

The slope for AIR_TIME is larger in absolute value than the slope for DISTANCE. However, we cannot conclude from this alone that air time relates more strongly to whether weather affects a flight, because the two predictors are measured on different scales, and a one-unit change in distance is not comparable to a one-unit change in air time. The value 2e-16 (0.0000000000000002) that appears for both is a p-value, not a scale: it only tells us that both slopes are highly significant, not which one contributes more. 

All of our slopes and variables are significant when researching the effect weather has on flights.

Testing Assumptions of Linear Regression

In this post, we will test five assumptions for linear regression on the model I’ve been working on in my last few posts. The model equation I am continuing to work with is:
DISTANCE = -150.99 + 8.6*AIR_TIME - 0.23*ARRIVAL_DELAY - 37.53*WEATHER

R-squared: 0.9536

The first assumption (a good, linear model) is a question of model design rather than something we test with code, so we can move on and focus on the assumptions we can check directly. 

Assumption #2: No perfect multicollinearity 

            air          arrival      weat
air         1.000000000  0.006195645  0.004883475
arrival     0.006195645  1.000000000  0.211454910
weat        0.004883475  0.211454910  1.000000000

There is no concerning multicollinearity among the predictors tested, and therefore no perfect multicollinearity. The table above shows the correlation matrix; since there are no off-diagonal entries of 1 or -1, once again we can say we do not have perfect multicollinearity.

Assumption #3: Independent Errors 

  • First, we will use intuition to test assumption #3. The question is whether the observations are independent: each row of the dataset is a single flight, and as long as each flight appears only once, there is no obvious reason for the errors of different flights to be related. 
  • Second, we will use a plot to see if there is or isn’t a pattern. If there is a pattern then the errors would be related, but first we need to build the model.
  • plot(fit$resid)
  • The errors seem to be randomly distributed and most stay around 0 when reading the plot from left to right; however, the points are so densely bunched together that it is hard to rule out a pattern, so the errors may not be fully independent.
  • Third, we look at the ACF plot to see if the errors are correlated with each other or not. 
  • acf(fit$resid)
  • In the ACF plot, bars that cross the blue dashed threshold indicate significant correlation between errors at that lag. At the first lag (Lag = 0) the correlation is always 1, because every error is perfectly correlated with itself. At the second lag (Lag = 1) the bar does not cross the threshold, so the ACF itself does not show strong correlation between neighbouring errors.
  • Based mainly on the residual plot above, I will still treat assumption 3 as violated.

Assumption #4: No heteroskedasticity 

  • In order to test the heteroskedasticity, we perform 2 tests. In the first test, we will plot X variable vs. Y variable. This will check if from left to right, the observations are distributed evenly. In the second test, we will plot fit$resid vs. fit$fitted to see if there is random distribution, meaning the errors are not predictable. The following shows all the plots:
    • par(mfrow=c(2,2))
    • plot(d$AIR_TIME,d$DISTANCE)
    • plot(d$ARRIVAL_DELAY,d$DISTANCE)
    • plot(d$weather,d$DISTANCE)
    • (My original attempt at these plots failed because R column names are case-sensitive; the names have to match the dataset exactly, as above.)
    • plot(fit$fitted,fit$resid)
  • Heteroskedasticity shows up when the spread of the points changes from left to right (for example, a fan or funnel shape) instead of staying roughly even, so that the errors are partly predictable. Points sitting off by themselves are outliers and need their own investigation: we would go back to those observations and ask what is unusual about them, and whether certain variables/predictors really belong in the model. Since some of the charts do show an uneven spread, the model is violating assumption #4 (a more formal check is sketched after this list). 
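A more formal hedged check is the Breusch-Pagan test (a sketch, assuming the lmtest package is installed; a small p-value points to heteroskedasticity):

  • library(lmtest)
  • bptest(fit)   # tests whether the error variance depends on the fitted values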

Assumption #5: Normally Distributed Errors 

  • For assumption #5, we need to look at one of the charts. I will plot four charts just to have extra data, using the following codes:
    • par(mfrow=c(2,2))
    • hist(d$Distance)
    • hist(fit$resid)
    • qqnorm(d$Distance)
    • qqline(d$Distance)
    • qqnorm(fit$resid)
    • qqline(fit$resid)
  • While the histograms look somewhat normal, there are some outliers. The histogram of distance is not normally distributed: most flights are 2,000 miles or less. The normal Q-Q plot in the bottom left also has points that drift away from the reference line, showing strange deviation from normality in the lower percentiles of distance, and the residual Q-Q plot shows the same kind of deviation. These observations indicate that the errors are not normally distributed, and therefore we are violating Assumption #5. 

Conclusions 

  • Testing the 5 assumptions, we conclude that assumptions 3, 4, and 5 are violated.  
  • These violations can bias the significance tests and undermine any interesting results that come out of the research.
  • Normally these violations would be dealt with before going further, but we will continue to move on and look in more depth at the relationship between the distance of flights and our variables/predictors.

Multicollinearity

In my last post, I presented a model with three variables/predictors.

  • DISTANCE = -150.99 + 8.6*AIR_TIME - 0.23*ARRIVAL_DELAY - 37.53*WEATHER

Now this model has to be tested for multicollinearity. Multicollinearity is correlation among the predictors, in this case our X variables. If two predictors have a correlation of 0, they carry completely separate information; the stronger the correlation, the more alike the two (or more) predictors are.

We first check that the predictors are significant by looking for the three stars in the summary output, and then we test the correlations among the significant predictors.

We can use the "cor" function to look at the predictors in pairs.

  • cor(cbind(at=d$AIR_TIME,ad=d$ARRIVAL_DELAY,weat=d$weather))

The code above outputs the following correlation matrix:

            air          arrival      weat
air         1.000000000  0.006195645  0.004883475
arrival     0.006195645  1.000000000  0.211454910
weat        0.004883475  0.211454910  1.000000000

A correlation matrix is symmetric, which means the entries above the diagonal match the entries below it. The diagonal is all 1's because every variable correlates perfectly with itself. The off-diagonal entries give us the findings of interest:

  • Air time and arrival delay have a correlation of .006, which is far too small to be of any concern.
  • Air time and weather have a correlation of .005; weather is a binary variable, so this correlation is hard to interpret in the usual way, and in any case it is tiny.
  • Arrival delay and weather have a correlation of .211; again, correlations involving the binary weather variable should be read with caution, and this value is still modest.

In conclusion, none of our predictors share a large enough correlation with each other to be of any concern, so we can say there is no worrying multicollinearity (and certainly no perfect multicollinearity) in this model.
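Another common hedged follow-up is the variance inflation factor, which can flag multicollinearity that involves more than two predictors at once (a sketch, assuming the car package is installed and the model is refit with the data= interface; column names are assumed):

  • library(car)
  • fit=lm(DISTANCE~AIR_TIME+ARRIVAL_DELAY+weather,data=d)
  • vif(fit)   # values well above 5 or 10 would be a concern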
