Using gretl for Principles of Econometrics, 4th Edition
Logit
The logit model is very similar to probit. Rather than the probability of an event being described by a normal distribution, it is modeled using a logistic distribution. The logistic and normal have very similar shapes, and the outcomes from logit estimation are usually very similar to those from probit. The probability that individual i chooses the alternative is

P_i = Λ(z_i) = 1 / (1 + e^{-z_i})

In logit the probability is modeled using Λ(z_i) rather than Φ(z_i) as in the probit model.
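To see how close the two distributions are, here is a short Python sketch (an independent numeric check, not part of the chapter's gretl scripts; the function names are my own) that evaluates the logistic CDF Λ(z) and the standard normal CDF Φ(z):

```python
import math

def logit_cdf(z):
    # Logistic CDF: Lambda(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + math.exp(-z))

def probit_cdf(z):
    # Standard normal CDF Phi(z), written via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Both CDFs equal 0.5 at z = 0 and remain fairly close elsewhere
# once the index is rescaled, which is why logit and probit
# results tend to be so similar in practice.
print(logit_cdf(0.0), probit_cdf(0.0))
```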
Below we estimate the probability of purchasing Coca-Cola rather than Pepsi using probit, logit, and the linear probability model. The data are contained in coke.gdt and consist of 1140 individuals who purchased either Coke or Pepsi. The gretl script to estimate the models and put the results in a model table is
1 open "@gretldir\data\poe\coke.gdt"
2 list x = pratio disp_coke disp_pepsi const
3 probit coke x --quiet
4 modeltab add
5 logit coke x --quiet
6 modeltab add
7 ols coke x --robust
8 modeltab add
9 modeltab show
The result obtained is:
Dependent variable: coke

t-statistics in parentheses
** indicates significance at the 5 percent level
For logit and probit, R² is McFadden's pseudo-R²
Rough rules of thumb relate the slope estimates from the three models:

β̂_Logit ≈ 4 β̂_LPM
β̂_Probit ≈ 2.5 β̂_LPM
β̂_Logit ≈ 1.6 β̂_Probit
So, 4(-0.4009) = -1.6036, which is fairly close to the estimate -1.996 for the pratio coefficient in the logit column. More importantly, there are even closer similarities between the marginal effects implied by logit and probit. Their averages (AME) are usually very close to the corresponding coefficients in the linear probability model.
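The arithmetic behind that comparison takes only a couple of lines of Python (the coefficient values are the ones quoted above; this is just a check of the rule of thumb, not gretl output):

```python
b_lpm = -0.4009     # pratio coefficient, linear probability model
b_logit = -1.996    # pratio coefficient, logit

# Rule of thumb: beta_logit is roughly 4 times beta_LPM
print(round(4 * b_lpm, 4))            # -1.6036
print(round(4 * b_lpm - b_logit, 4))  # the rule misses by about 0.39 here
```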
My function is
1 function matrix dlogist(matrix *param, list x)
2 matrix p = lincomb(x, param)
3 matrix d = exp(-p)./(1.+exp(-p)).^2
4 return d
5 end function
It uses the 'dot' operators for division, multiplication, and exponentiation. These work element-by-element. Vectorizing the computation this way may or may not be a good idea, but the syntax is straightforward. A more versatile approach would probably be to loop over the available observations.
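For readers who want to verify the formula outside gretl, an element-by-element Python analogue of dlogist might look like this (the name logistic_pdf and the list-based interface are my own choices, not gretl's):

```python
import math

def logistic_pdf(zvals):
    # lambda(z) = exp(-z) / (1 + exp(-z))^2, applied to each element,
    # mirroring the 'dot' operators in the hansl function above
    return [math.exp(-z) / (1.0 + math.exp(-z)) ** 2 for z in zvals]

print(logistic_pdf([0.0]))  # [0.25]: the logistic density peaks at 1/4
```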
Now we need a function that computes and averages the marginal effects. A minor modification of the ame function that was used for the probit model yields ame_l, which computes the average marginal effects for the logit model:
1 function matrix ame_l(matrix *param, list x)
2 matrix p = lincomb(x, param)
3 matrix me_matrix = dlogist(param, x)*param'
4 matrix amfx = meanc(me_matrix)
5 printf "\nThe average marginal effects are %8.4f \n", amfx
6 return me_matrix
7 end function
The only change to the original probit ame function comes in line 3 where the dlogist function replaces dnorm.
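In plain Python terms, the computation ame_l performs amounts to the following sketch (hypothetical data; the function and variable names here are mine, not the hansl ones):

```python
import math

def logistic_pdf(z):
    return math.exp(-z) / (1.0 + math.exp(-z)) ** 2

def average_marginal_effects(X, beta):
    # For each observation i, the logit marginal effect of regressor k
    # is lambda(x_i'beta) * beta_k; the AME is the column mean of the
    # resulting n x k matrix of per-observation effects.
    n, k = len(X), len(beta)
    totals = [0.0] * k
    for row in X:
        z = sum(x * b for x, b in zip(row, beta))
        d = logistic_pdf(z)
        for j in range(k):
            totals[j] += d * beta[j]
    return [t / n for t in totals]

# Tiny made-up example: a constant and one regressor
X = [[1.0, 0.0], [1.0, 1.0]]
beta = [0.5, -1.0]
print(average_marginal_effects(X, beta))
```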
Estimating the model by logit and getting the AME is done using:
1 list x = const pratio disp_coke disp_pepsi
2 logit coke x
3 matrix coef = $coeff
4 ame_l(&coef, x)

which produces
The average marginal effects are 0.4175 -0.4333 0.0763 -0.1587
The average marginal effect for an increase in the price ratio is -0.4333. That compares to -0.4097 in probit and -0.4009 in the linear probability model. It would certainly be easy at this point to compute standard errors for these marginal effects, but we will save that as an exercise.
The models can also be compared based on predictions. Gretl produces a table in the standard probit and logit outputs that facilitates this. The table is 2 x 2 and compares predictions from the model to actual choices. The table for the beverage choice model is:
Number of cases 'correctly predicted' = 754 (66.1%)
f(beta'x) at mean of independent vars = 0.394
Likelihood ratio test: Chi-square(3) = 145.823 [0.0000]

             Predicted
                0    1
   Actual 0  507  123
          1  263  247
The table reveals that with probit, of the (507 + 123) = 630 consumers who chose Pepsi (Pepsi = 0), the model predicted 507 correctly (80.48% correct for Pepsi). It predicted 247/(263 + 247) = 247/510 = 48.43% correctly for Coke. The overall percentage correctly predicted is 754/1140 = 66.1%. The table for logit is exactly the same, so there is no reason to prefer one model over the other for predictive accuracy.
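The hit rates in that paragraph follow directly from the cells of the table; a quick Python check (numbers copied from the table above):

```python
pepsi_correct, pepsi_wrong = 507, 123   # Actual 0 row of the table
coke_wrong, coke_correct = 263, 247     # Actual 1 row of the table

pepsi_rate = 100 * pepsi_correct / (pepsi_correct + pepsi_wrong)
coke_rate = 100 * coke_correct / (coke_wrong + coke_correct)
overall = 100 * (pepsi_correct + coke_correct) / 1140

print(round(pepsi_rate, 2))  # 80.48
print(round(coke_rate, 2))   # 48.43
print(round(overall, 1))     # 66.1
```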
In fact, look at the correlations between the predictions of the three estimators:
             logit      ols
            0.9996   0.9950   probit
            1.0000   0.9924   logit
                     1.0000   ols
As you can see, these are VERY highly correlated: all are above 0.99 and significant at the 5% level.