---
output: html_document
---

<!-- 
%\VignetteEngine{knitr::rmarkdown} 
%\VignetteIndexEntry{An Introduction to Glmnet}
--> 

<a id="top"></a>

# Glmnet Vignette
### Trevor Hastie and Junyang Qian
#### Stanford June 26, 2014

> [Introduction](#intro)

> [Installation](#install)

> [Quick Start](#qs)

> [Linear Regression](#lin)

> [Logistic Regression](#log)

> [Poisson Models](#poi)

> [Cox Models](#cox)

> [Sparse Matrices](#spa)

> [Appendix 1: Internal Parameters](#int)

> [Appendix 2: Comparison with Other Packages](#cmp)

<a id="intro"></a>

## Introduction 
 
Glmnet is a package that fits a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elasticnet penalty at a grid of values for the regularization parameter lambda. The algorithm is extremely fast, and can exploit sparsity in the input matrix `x`. It fits linear, logistic and multinomial, poisson, and Cox regression models. A variety of predictions can be made from the fitted models. It can also fit multi-response linear regression.

The authors of glmnet are Jerome Friedman, Trevor Hastie, Rob Tibshirani and Noah Simon, and the R package is maintained by Trevor Hastie. The matlab version of glmnet is maintained by Junyang Qian. This vignette describes the usage of glmnet in R.

`glmnet` solves the following problem
$$
\min_{\beta_0,\beta} \frac{1}{N} \sum_{i=1}^{N} w_i l(y_i,\beta_0+\beta^T x_i) + \lambda\left[(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1\right],
$$
over a grid of values of $\lambda$ covering the entire range. Here $l(y,\eta)$ is the negative log-likelihood contribution for observation $i$; e.g. for the Gaussian case it is $\frac{1}{2}(y-\eta)^2$. The _elastic-net_ penalty is controlled by $\alpha$, and  bridges the gap between lasso ($\alpha=1$, the default) and ridge ($\alpha=0$). The tuning parameter $\lambda$ controls the overall strength of the penalty.

It is known that the ridge penalty shrinks the coefficients of correlated predictors towards each other while the lasso tends to pick one of them and discard the others. The elastic-net penalty mixes these two; if predictors are correlated in groups, an $\alpha=0.5$ tends to select the groups in or out together. This is a higher level parameter, and users might pick a value upfront, else experiment with a few different values. One use of $\alpha$ is for numerical stability; for example, the elastic net with $\alpha = 1 - \epsilon$ for some small $\epsilon > 0$ performs much like the lasso, but removes any degeneracies and wild behavior caused by extreme correlations.
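
For instance, assuming a predictor matrix `x` and response `y` (such as those loaded in the Quick Start section below), a near-lasso fit might look like the following sketch.
```{r, eval=FALSE}
# alpha = 0.95 keeps lasso-like sparsity while adding a little ridge stabilization
fit95 = glmnet(x, y, alpha = 0.95)
```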

The `glmnet` algorithms use cyclical coordinate descent, which successively optimizes the objective function over each parameter with others fixed, and cycles repeatedly until convergence. The package also makes use of the strong rules for efficient restriction of the active set. Due to highly efficient updates and techniques such as warm starts and active-set convergence, our algorithms can compute the solution path very fast.

The code can handle sparse input-matrix formats, as well as range constraints on coefficients. The core of `glmnet` is a set of fortran subroutines, which make for very fast execution. 

The package also includes methods for prediction and plotting, and a function that performs K-fold cross-validation. 

<a id="install"></a>

## Installation
 
Like many other R packages, the simplest way to obtain `glmnet` is to install it directly from CRAN. Type the following command in R console:

```{r, eval=FALSE}
install.packages("glmnet", repos = "http://cran.us.r-project.org")
```

Users may change the `repos` options depending on their locations and preferences. Other options such as the directories where to install the packages can be altered in the command. For more details, see `help(install.packages)`.

Here the R package has been downloaded and installed to the default directories.

Alternatively, users can download the package source at http://cran.r-project.org/package=glmnet and type Unix commands to install it to the desired location.

[Back to Top](#top)

<a id="qs"></a>

## Quick Start
 
The purpose of this section is to give users a general sense of the package, including the components, what they do and some basic usage. We will briefly go over the main functions, see the basic operations and have a look at the outputs. Users may have a better idea after this section what functions are available, which one to choose, or at least where to seek help. More details are given in later sections.

First, we load the `glmnet` package:
```{r}
library(glmnet)
```
The default model used in the package is the Gaussian linear model or "least squares", which we will demonstrate in this section. We load a set of data created beforehand for illustration. Users can either load their own data or use those saved in the workspace.
```{r}
data(QuickStartExample)
```
The command loads an input matrix `x` and a response vector `y` from this saved R data archive. 

We fit the model using the most basic call to  `glmnet`.
```{r}
fit = glmnet(x, y)
```
"fit" is an object of class `glmnet` that contains all the relevant information of the fitted model for further use. We do not encourage users to extract the components directly. Instead, various methods are provided for the object such as `plot`, `print`, `coef` and `predict` that enable us to execute those tasks more elegantly.

We can visualize the coefficients by executing the `plot` function:
```{r}
plot(fit)
```

Each curve corresponds to a variable. It shows the path of its coefficient against the $\ell_1$-norm of the whole coefficient vector as $\lambda$ varies. The axis above indicates the number of nonzero coefficients at the current $\lambda$, which is the effective degrees of freedom (_df_) for the lasso. Users may also wish to annotate the curves; this can be done by setting `label = TRUE` in the plot command. 
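
For instance, a labeled version of the same plot might be produced as follows.
```{r, eval=FALSE}
# same coefficient paths, with each curve annotated by its variable index
plot(fit, label = TRUE)
```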

A summary of the `glmnet` path at each step is displayed if we just enter the object name or use the `print` function:
```{r height = 4}
print(fit)
```
It shows from left to right the number of nonzero coefficients (`Df`), the percent (of null) deviance explained (`%dev`) and the value of $\lambda$ (`Lambda`). Although by default `glmnet` calls for 100 values of `lambda`, the program stops early if `%dev` does not change sufficiently from one lambda to the next (typically near the end of the path).

We can obtain the actual coefficients at one or more $\lambda$'s within the range of the sequence:
```{r}
coef(fit,s=0.1)
```
(why `s` and not `lambda`? In case later we want to allow one to specify the model size in other ways.)
Users can also make predictions at specific $\lambda$'s with new input data:
```{r}
nx = matrix(rnorm(10*20),10,20)
predict(fit,newx=nx,s=c(0.1,0.05))
```

The function `glmnet` returns a sequence of models for the users to choose from. In many cases, users may prefer the software to select one of them. Cross-validation is perhaps the simplest and most widely used method for that task.

`cv.glmnet` is the main function to do cross-validation here, along with various supporting methods such as plotting and prediction. We still act on the sample data loaded before.
```{r}
cvfit = cv.glmnet(x, y)
```
`cv.glmnet` returns a `cv.glmnet` object, which is "cvfit" here, a list with all the ingredients of the cross-validation fit. As for `glmnet`, we do not encourage users to extract the components directly except for viewing the selected values of $\lambda$. The package provides well-designed functions for potential tasks.

We can plot the object.
```{r}
plot(cvfit)
```

It includes the cross-validation curve (red dotted line), and upper and lower standard deviation curves along the $\lambda$ sequence (error bars). Two selected $\lambda$'s are indicated by the vertical dotted lines (see below).

We can view the selected $\lambda$'s and the corresponding coefficients. For example,
```{r}
cvfit$lambda.min
```
`lambda.min` is the value of $\lambda$ that gives minimum mean cross-validated error. The other $\lambda$ saved is  `lambda.1se`, which gives the most regularized model such that error is within one standard error of the minimum. To use that, we only need to replace `lambda.min` with `lambda.1se` above.
```{r}
coef(cvfit, s = "lambda.min")
```
Note that the coefficients are represented in the sparse matrix format. The reason is that the solutions along the regularization path are often sparse, and hence it is more efficient in time and space to use a sparse format. If you prefer the non-sparse format, pipe the output through `as.matrix()`.
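
For example, a dense version of the coefficients can be obtained like this.
```{r, eval=FALSE}
# convert the sparse coefficient matrix to an ordinary dense matrix
as.matrix(coef(cvfit, s = "lambda.min"))
```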

Predictions can be made based on the fitted `cv.glmnet` object. Let's see a toy example.
```{r}
predict(cvfit, newx = x[1:5,], s = "lambda.min")
```
`newx` is for the new input matrix and `s`, as before, is the value(s) of $\lambda$ at which predictions are made. 

That is the end of `glmnet` 101. With the tools introduced so far, users are able to fit the entire elastic net family, including ridge regression, using squared-error loss. In the package, there are many more options that give users a great deal of flexibility. To learn more, move on to later sections.

[Back to Top](#top)

<a id="lin"></a>

## Linear Regression
 
Linear regression here refers to two families of models. One is `gaussian`, the Gaussian family, and the other is `mgaussian`, the multiresponse Gaussian family. We first discuss the ordinary Gaussian and the multiresponse one after that.

### Gaussian Family

`gaussian` is the default family option in the function `glmnet`. Suppose we have observations $x_i \in \mathbb{R}^p$ and the responses $y_i \in \mathbb{R}, i = 1, \ldots, N$. The objective function for the Gaussian family is 
$$
\min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}}\frac{1}{2N} \sum_{i=1}^N (y_i -\beta_0-x_i^T \beta)^2+\lambda \left[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\right],
$$
where $\lambda \geq 0$ is a complexity parameter and $0 \leq \alpha \leq 1$ is a compromise between ridge ($\alpha = 0$) and lasso ($\alpha = 1$). 

Coordinate descent is applied to solve the problem. Specifically, suppose we have current estimates $\tilde{\beta}_0$ and $\tilde{\beta}_\ell$ $\forall \ell\in 1,\ldots,p$. By computing the gradient at $\beta_j = \tilde{\beta}_j$ and simple calculus, the update is
$$
\tilde{\beta}_j \leftarrow \frac{S(\frac{1}{N}\sum_{i=1}^N x_{ij}(y_i-\tilde{y}_i^{(j)}),\lambda \alpha)}{1+\lambda(1-\alpha)},
$$
where $\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{\ell \neq j} x_{i\ell} \tilde{\beta}_\ell$, and $S(z, \gamma)$ is the soft-thresholding operator with value $\text{sign}(z)(|z|-\gamma)_+$.

The formula above applies when the `x` variables are standardized to have unit variance (the default); it is slightly more complicated when they are not. Note that for `family="gaussian"`, `glmnet` standardizes $y$ to have unit variance before computing its lambda sequence (and then unstandardizes the resulting coefficients); if you wish to reproduce or compare results with other software, it is best to supply a standardized $y$ first (using the "1/N" variance formula).
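
As a minimal sketch of that suggestion, the response might be standardized with the "1/N" variance formula before fitting, along these lines.
```{r, eval=FALSE}
# standardize y using the 1/N (population) variance formula, then fit
n = length(y)
sdy = sqrt(var(y) * (n - 1) / n)
ys = (y - mean(y)) / sdy
fit.std = glmnet(x, ys)
```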

`glmnet` provides various options for users to customize the fit. We introduce some commonly used options here and they can be specified in the `glmnet` function.

* `alpha` is for the elastic-net mixing parameter $\alpha$, with range $\alpha \in [0,1]$. $\alpha = 1$ is the lasso (default) and $\alpha = 0$ is the ridge.

* `weights` is for the observation weights. Default is 1 for each observation. (Note: `glmnet` rescales the weights to sum to N, the sample size.)

* `nlambda` is the number of $\lambda$ values in the sequence. Default is 100. 

* `lambda` can be provided, but is typically not, and the program constructs a sequence. When automatically generated, the $\lambda$ sequence is determined by `lambda.max` and `lambda.min.ratio`. The latter is the ratio of the smallest value of the generated $\lambda$ sequence (say `lambda.min`) to `lambda.max`. The program then generates `nlambda` values linearly spaced on the log scale from `lambda.max` down to `lambda.min`. `lambda.max` is not user-specified, but is easily computed from the input $x$ and $y$; it is the smallest value of `lambda` such that all the coefficients are zero. (For `alpha=0` (ridge) `lambda.max` would be $\infty$; hence for this case we pick a value corresponding to a small value of `alpha` close to zero.)

* `standardize` is a logical flag for `x` variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is `standardize=TRUE`.  

For more information, type `help(glmnet)` or simply `?glmnet`.

As an example, we set $\alpha = 0.2$ (more like a ridge regression), and give double weights to the latter half of the observations. To avoid too long a display here, we set `nlambda` to 20. In practice, however, the number of values of $\lambda$ is recommended to be 100 (default) or more. In most cases, it does not come with extra cost because of the warm-starts used in the algorithm, and for nonlinear models leads to better convergence properties.
```{r}
fit = glmnet(x, y, alpha = 0.2, weights = c(rep(1,50),rep(2,50)), nlambda = 20)
```
We can then print the `glmnet` object. 
```{r}
print(fit)
```
This displays the call that produced the object `fit` and a three-column matrix with columns `Df` (the number of nonzero coefficients), `%dev` (the percent deviance explained) and `Lambda` (the corresponding value of $\lambda$).

(Note that the `digits` option can be used to specify significant digits in the printout.)

Here the actual number of $\lambda$'s is less than that specified in the call. The reason lies in the stopping criteria of the algorithm. According to the default internal settings, the computations stop if either the fractional change in deviance down the path is less than $10^{-5}$ or the fraction of explained deviance reaches $0.999$. From the last few lines, we see that the fraction of deviance does not change much, and therefore the computation ends when the stopping criteria are met. We can change such internal parameters. For details, see the Appendix section or type `help(glmnet.control)`.

We can plot the fitted object as in the previous section. There are more options in the `plot` function. 

Users can decide what is on the X-axis. `xvar` allows three measures: "norm" for the $\ell_1$-norm of the coefficients (default), "lambda" for the log-lambda value and "dev" for %deviance explained. 

Users can also label the curves with variable sequence numbers simply by setting `label = TRUE`.

Let's plot "fit" against the log-lambda value and with each curve labeled.
```{r}
plot(fit, xvar = "lambda", label = TRUE)
```

Now when we plot against %deviance we get a very different picture. This is the percent deviance explained on the training data. What we see here is that toward the end of the path this value is not changing much, but the coefficients are "blowing up" a bit. This lets us focus attention on the parts of the fit that matter. This will especially be true for other models, such as logistic regression.
```{r}
plot(fit, xvar = "dev", label = TRUE)
```


We can extract the coefficients and make predictions at certain values of $\lambda$. Two commonly used options are:

* `s` specifies the value(s) of $\lambda$ at which extraction is made. 

* `exact` indicates whether the exact values of coefficients are desired or not. That is, if `exact = TRUE`, and predictions are to be made at values of s not included in the original fit, these values of s are merged with `object$lambda`, and the model is refit before predictions are made. If `exact=FALSE` (default), then the predict function uses linear interpolation to make predictions for values of s that do not coincide with lambdas used in the fitting algorithm. 

A simple example is:

```{r}
any(fit$lambda == 0.5)
coef.exact = coef(fit, s = 0.5, exact = TRUE)
coef.apprx = coef(fit, s = 0.5, exact = FALSE)
cbind2(coef.exact, coef.apprx)
```
The left column is for `exact = TRUE` and the right for `FALSE`. We see from the above that 0.5 is not in the original sequence and hence there are some differences, though not large. Linear interpolation is usually sufficient if there are no special requirements. 

Users can make predictions from the fitted object. In addition to the options in `coef`,  the primary argument is `newx`, a matrix of new values for `x`. The `type` option allows users to choose the type of prediction:
* "link" gives the fitted values

* "response" the sames as "link" for "gaussian" family.

* "coefficients" computes the coefficients at values of `s`

* "nonzero" retuns a list of the indices of the nonzero coefficients for each value of `s`.

For example,
```{r}
predict(fit, newx = x[1:5,], type = "response", s = 0.05)
```
gives the fitted values for the first 5 observations at $\lambda = 0.05$. If multiple values of `s` are supplied, a matrix of predictions is produced.
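
For instance, supplying two values of `s` returns one column of fitted values per value of $\lambda$.
```{r, eval=FALSE}
# a 5 x 2 matrix of predictions, one column per requested lambda
predict(fit, newx = x[1:5,], type = "response", s = c(0.05, 0.01))
```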

Users can customize K-fold cross-validation. In addition to all the `glmnet` parameters, `cv.glmnet` has its own special parameters, including `nfolds` (the number of folds), `foldid` (user-supplied folds), and `type.measure` (the loss used for cross-validation):
* "deviance" or "mse" uses squared loss

* "mae" uses mean absolute error

As an example,
```{r}
cvfit = cv.glmnet(x, y, type.measure = "mse", nfolds = 20)
```
does 20-fold cross-validation based on the mean squared error criterion (which is the default anyway). 

Parallel computing is also supported by `cv.glmnet`. To make it work, users must register a parallel backend beforehand. We give a simple example of comparison here. Unfortunately, the package `doMC` is not available on Windows platforms (it is on others), so we cannot run the code here, but we make it look as if we have.
```{r, eval=FALSE}
require(doMC)
registerDoMC(cores=2)
X = matrix(rnorm(1e4 * 200), 1e4, 200)
Y = rnorm(1e4)
```

```{r, eval=FALSE}
system.time(cv.glmnet(X, Y))
```
```{r, echo=FALSE}
structure(c(2.44, 0.08, 2.518, 0, 0), class = "proc_time", .Names = c("user.self", 
"sys.self", "elapsed", "user.child", "sys.child"))
```
```{r, eval=FALSE}
system.time(cv.glmnet(X, Y, parallel = TRUE))
```
```{r, echo=FALSE}
structure(c(0.508999999999999, 0.057, 1.56699999999999, 1.941, 
0.1), class = "proc_time", .Names = c("user.self", "sys.self", 
"elapsed", "user.child", "sys.child"))
```

As the timings above suggest, parallel computing can significantly speed up the computation, especially for large-scale problems.

The functions `coef` and `predict` on a `cv.glmnet` object are similar to those for a `glmnet` object, except that two special strings are also supported by `s` (the values of $\lambda$ requested):
* "lambda.1se": the largest $\lambda$ at which the MSE is within one standard error of the minimal MSE.

* "lambda.min": the $\lambda$ at which the minimal MSE is achieved.

```{r}
cvfit$lambda.min
coef(cvfit, s = "lambda.min")
predict(cvfit, newx = x[1:5,], s = "lambda.min")
```

Users can control the folds used. Here we use the same folds so we can also select a value for $\alpha$.

```{r}
foldid=sample(1:10,size=length(y),replace=TRUE)
cv1=cv.glmnet(x,y,foldid=foldid,alpha=1)
cv.5=cv.glmnet(x,y,foldid=foldid,alpha=.5)
cv0=cv.glmnet(x,y,foldid=foldid,alpha=0)
```
There are no built-in plot functions to put them all on the same plot, so we are on our own here:
```{r}
par(mfrow=c(2,2))
plot(cv1);plot(cv.5);plot(cv0)
plot(log(cv1$lambda),cv1$cvm,pch=19,col="red",xlab="log(Lambda)",ylab=cv1$name)
points(log(cv.5$lambda),cv.5$cvm,pch=19,col="grey")
points(log(cv0$lambda),cv0$cvm,pch=19,col="blue")
legend("topleft",legend=c("alpha= 1","alpha= .5","alpha 0"),pch=19,col=c("red","grey","blue"))
```

We see that lasso (`alpha=1`) does about the best here. We also see that the range of lambdas used differs with alpha.


#### Coefficient upper and lower bounds

These are recently added features that enhance the scope of the models. Suppose we want to fit our model, but limit the coefficients to be bigger than -0.7 and less than 0.5. This is easily achieved via the `upper.limits` and `lower.limits` arguments:

```{r}
tfit=glmnet(x,y,lower.limits=-.7,upper.limits=.5)
plot(tfit)
```

These are rather arbitrary limits; often we want the coefficients to be positive, so we can set only `lower.limits` to be 0.
(Note, the lower limit must be no bigger than zero, and the upper limit no smaller than zero.)
These bounds can be a vector, with different values for each coefficient. If given as a scalar, the same number gets recycled for all.
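
As a sketch, nonnegativity of all the coefficients could be requested with either a scalar or a vector of lower limits (here assuming the 20-column `x` used above).
```{r, eval=FALSE}
# scalar limit, recycled for every coefficient
nnfit = glmnet(x, y, lower.limits = 0)
# equivalent vector form, one limit per coefficient
nnfit2 = glmnet(x, y, lower.limits = rep(0, 20))
```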

#### Penalty factors

This argument allows users to apply separate penalty factors to each coefficient. Its default is 1 for each parameter, but other values can be specified. In particular, any variable with `penalty.factor` equal to zero is not penalized at all! Let $v_j$ denote the penalty factor for the $j$th variable. The penalty term becomes
$$
\lambda \sum_{j=1}^p \boldsymbol{v_j} P_\alpha(\beta_j) = \lambda \sum_{j=1}^p \boldsymbol{v_j} \left[ (1-\alpha)\frac{1}{2} \beta_j^2 + \alpha |\beta_j| \right].
$$
Note the penalty factors are internally rescaled to sum to nvars. 

This is very useful when people have prior knowledge or preference over the variables. In many cases, some variables may be so important that one wants to keep them all the time, which can be achieved by setting corresponding penalty factors to 0:

```{r}
p.fac = rep(1, 20)
p.fac[c(5, 10, 15)] = 0
pfit = glmnet(x, y, penalty.factor = p.fac)
plot(pfit, label = TRUE)
```

We see from the labels that the three variables with 0 penalty factors always stay in the model, while the others follow typical regularization paths and are eventually shrunk to 0. 

Some other useful arguments: `exclude` allows one to block certain variables from being in the model at all. Of course, one could simply subset these out of `x`, but sometimes `exclude` is more useful, since it returns a full vector of coefficients, just with the excluded ones set to zero. There is also an `intercept` argument which defaults to `TRUE`; if `FALSE` the intercept is forced to be zero.
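
A small sketch of both options (the excluded indices are arbitrary, for illustration only):
```{r, eval=FALSE}
# exclude variables 3 and 7; their coefficients are returned as exact zeros
exfit = glmnet(x, y, exclude = c(3, 7))
# fit with the intercept forced to be zero
nifit = glmnet(x, y, intercept = FALSE)
```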

#### Customizing plots

Sometimes, especially when the number of variables is small, we want to add variable labels to a plot. Since `glmnet` is intended primarily for wide data, this is not supported in `plot.glmnet`. However, it is easy to do, as the following little toy example shows.

We first generate some data, with 10 variables, and for lack of imagination and ease we give them simple character names. 
We then fit a glmnet model, and make the standard plot.
```{r}
set.seed(101)
x=matrix(rnorm(1000),100,10)
y=rnorm(100)
vn=paste("var",1:10)
fit=glmnet(x,y)
plot(fit)
```

We wish to label the curves with the variable names. Here is a simple way to do this, using the `axis` command in R (and a little research into how to customize it). We need to have the positions of the coefficients at the end of the path, and we need to make some space using the `par` command, so that our labels will fit in.
This requires knowing how long your labels are, but here they are all quite short.
 
```{r}
par(mar=c(4.5,4.5,1,4))
plot(fit)
vnat=coef(fit)
vnat=vnat[-1,ncol(vnat)] # remove the intercept, and get the coefficients at the end of the path
axis(4, at=vnat,line=-.5,labels=vn,las=1,tick=FALSE, cex.axis=0.5) 
```

We have done nothing here to avoid overwriting of labels, in the event that they are close together. This would be a bit more work, but perhaps best left alone, anyway.


### Multiresponse Gaussian Family

The multiresponse Gaussian family is obtained using `family = "mgaussian"` option in `glmnet`. It is very similar to the single-response case above. This is useful when there are a number of (correlated) responses - the so-called "multi-task learning" problem. Here the sharing involves which variables are selected, since when a variable is selected, a coefficient is fit for each response. Most of the options are the same, so we focus here on the differences with the single response model.

Obviously, as the name suggests, $y$ is not a vector, but a matrix of quantitative responses in this section. The coefficients at each value of lambda are also a matrix as a result. 

Here we solve the following problem:
$$
\min_{(\beta_0, \beta) \in \mathbb{R}^{(p+1)\times K}}\frac{1}{2N} \sum_{i=1}^N ||y_i -\beta_0-\beta^T x_i||^2_F+\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_2\right].
$$
Here $\beta_j$ is the jth row of the $p\times K$ coefficient matrix $\beta$, and we replace the absolute penalty on each single coefficient by a group-lasso penalty on each coefficient K-vector $\beta_j$ for a single predictor $x_j$.

We use a set of data generated beforehand for illustration.
```{r}
data(MultiGaussianExample)
```
We fit the data, with an object "mfit" returned.
```{r}
mfit = glmnet(x, y, family = "mgaussian")
```
For the multiresponse Gaussian family, the options in `glmnet` are almost the same as in the single-response case, such as `alpha`, `weights`, `nlambda` and `standardize`. An exception to note is that `standardize.response` is only for the `mgaussian` family. The default value is `FALSE`. If `standardize.response = TRUE`, it standardizes the response variables.

To visualize the coefficients, we use the `plot` function.
```{r}
plot(mfit, xvar = "lambda", label = TRUE, type.coef = "2norm")
```

Note that we set `type.coef = "2norm"`. Under this setting, a single curve is plotted per variable, with value equal to the $\ell_2$ norm. The default setting is `type.coef = "coef"`, where a coefficient plot is created for each response (multiple figures). 

`xvar` and `label` are two other options besides ordinary graphical parameters. They are the same as the single-response case.

We can extract the coefficients at requested values of $\lambda$ by using the function `coef` and make predictions by `predict`. The usage is similar and we only provide an example of `predict` here.
```{r}
predict(mfit, newx = x[1:5,], s = c(0.1, 0.01))
```
The prediction result is saved in a three-dimensional array with the first two dimensions being the prediction matrix for each response variable and the third indicating the response variables. 
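
For example, we can inspect the dimensions of the returned array directly.
```{r, eval=FALSE}
# a three-dimensional prediction array for the two requested values of lambda
dim(predict(mfit, newx = x[1:5,], s = c(0.1, 0.01)))
```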

We can also do k-fold cross-validation. The options are almost the same as the ordinary Gaussian family and we do not expand here.
```{r}
cvmfit = cv.glmnet(x, y, family = "mgaussian")
```
We plot the resulting `cv.glmnet` object "cvmfit". 
```{r}
plot(cvmfit)
```

To show explicitly the selected optimal values of $\lambda$, type
```{r}
cvmfit$lambda.min
cvmfit$lambda.1se
```
As before, the first one is the value at which the minimal mean squared error is achieved and the second is for the most regularized model whose mean squared error is within one standard error of the minimal.

Prediction for `cv.glmnet` object works almost the same as for `glmnet` object. We omit the details here.

[Back to Top](#top)
<a id="log"></a>

## Logistic Regression
 
Logistic regression  is another widely-used model when the response is categorical. If there are two possible outcomes, we use the binomial distribution, else we use the multinomial.

### Binomial Models

For the binomial model, suppose the response variable takes value in $\mathcal{G}=\{1,2\}$. Denote $y_i = I(g_i=1)$. We model
$$\mbox{Pr}(G=2|X=x)=\frac{e^{\beta_0+\beta^Tx}}{1+e^{\beta_0+\beta^Tx}},$$
which can be written in the following form
$$\log\frac{\mbox{Pr}(G=2|X=x)}{\mbox{Pr}(G=1|X=x)}=\beta_0+\beta^Tx,$$
the so-called "logistic" or log-odds transformation.

The objective function for the penalized logistic regression uses the negative binomial log-likelihood, and is 
$$
\min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}} -\left[\frac{1}{N} \sum_{i=1}^N y_i \cdot (\beta_0 + x_i^T \beta) - \log (1+e^{(\beta_0+x_i^T \beta)})\right] + \lambda \big[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\big].
$$
Logistic regression is often plagued with degeneracies when $p > N$ and exhibits wild behavior even when $N$ is close to $p$;
the elastic-net penalty alleviates these issues, and regularizes and selects variables as well.

Our algorithm uses a quadratic approximation to the log-likelihood,  and then coordinate descent on the resulting penalized weighted least-squares problem. These constitute an outer and inner loop.

 
For illustration purposes, we load a pre-generated input matrix `x` and response vector `y` from the data file.
```{r}
data(BinomialExample)
```
The input matrix $x$ is the same as other families. For binomial logistic regression, the response variable $y$ should be either a factor with two levels, or a two-column matrix of counts or proportions.

Other optional arguments of `glmnet` for binomial regression are almost the same as those for the Gaussian family. Don't forget to set the `family` option to "binomial". 
```{r}
fit = glmnet(x, y, family = "binomial")
```
Like before, we can print and plot the fitted object, extract the coefficients at specific $\lambda$'s and also make predictions. For plotting, the optional arguments such as `xvar` and `label` are similar to the Gaussian. We plot against the deviance explained and show the labels.
```{r}
plot(fit, xvar = "dev", label = TRUE)
```

Prediction for logistic regression is a little different from the Gaussian case, mainly in the option `type`. "link" and "response" are never equivalent, and "class" is only available for logistic regression. In summary,
* "link" gives the linear predictors 

* "response" gives the fitted probabilities

* "class" produces the class label corresponding to the maximum probability.

* "coefficients" computes the coefficients at values of `s`

* "nonzero" retuns a list of the indices of the nonzero coefficients for each value of `s`.

For "binomial" models, results ("link", "response", "coefficients", "nonzero") are returned only for the class corresponding to the second level of the factor response.

In the following example, we make prediction of the class labels at $\lambda = 0.05, 0.01$.
```{r}
predict(fit, newx = x[1:5,], type = "class", s = c(0.05, 0.01))
```
For logistic regression, `cv.glmnet` has similar arguments and usage as in the Gaussian case. `nfolds`, `weights`, `lambda`, `parallel` are all available to users. There are some differences in `type.measure`: "deviance" and "mse" no longer both mean squared loss, and "class" is also available. Hence,
* "mse" uses squared loss.

* "deviance" uses actual deviance.

* "mae" uses mean absolute error.

* "class" gives misclassification error.

* "auc" (for two-class logistic regression ONLY) gives area under the ROC curve.

For example,
```{r}
cvfit = cv.glmnet(x, y, family = "binomial", type.measure = "class")
```
It uses misclassification error as the criterion for 10-fold cross-validation.

We plot the object and show the optimal values of $\lambda$.
```{r}
plot(cvfit)
```
```{r}
cvfit$lambda.min
cvfit$lambda.1se
```

`coef` and `predict` are similar to the Gaussian case and we omit the details. We illustrate with a couple of examples.
```{r}
coef(cvfit, s = "lambda.min")
```
As mentioned previously, the results returned here are only for the second level of the factor response. 

```{r}
predict(cvfit, newx = x[1:10,], s = "lambda.min", type = "class")
```

Like other GLMs, glmnet allows for an "offset". This is a fixed vector of N numbers that is added into the linear predictor.
For example, you may have fitted some other logistic regression using other variables (and data), and now you want to see if the present variables can add anything. So you use the predicted logit from the other model as an offset in the new fit.
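
A hypothetical sketch of this (here `prevcv` and `xold` are placeholders for the earlier cross-validated fit and its input matrix, not objects defined in this vignette):
```{r, eval=FALSE}
# predicted logits from the earlier model, used as a fixed offset
off = as.vector(predict(prevcv, newx = xold, s = "lambda.min"))
fit.offset = glmnet(x, y, family = "binomial", offset = off)
# when an offset is used in the fit, a matching `newoffset` must be supplied to predict
predict(fit.offset, newx = x[1:5,], newoffset = off[1:5], s = 0.05, type = "response")
```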

### Multinomial Models

For the multinomial model, suppose the response variable has $K$ levels ${\cal G}=\{1,2,\ldots,K\}$. Here we model 
$$\mbox{Pr}(G=k|X=x)=\frac{e^{\beta_{0k}+\beta_k^Tx}}{\sum_{\ell=1}^Ke^{\beta_{0\ell}+\beta_\ell^Tx}}.$$

Let ${Y}$ be the $N \times K$ indicator response matrix, with elements $y_{i\ell} = I(g_i=\ell)$. Then the elastic-net penalized negative log-likelihood function becomes
$$
\ell(\{\beta_{0k},\beta_{k}\}_1^K) = -\left[\frac{1}{N} \sum_{i=1}^N \Big(\sum_{k=1}^K y_{ik} (\beta_{0k} + x_i^T \beta_k)- \log \big(\sum_{k=1}^K e^{\beta_{0k}+x_i^T \beta_k}\big)\Big)\right] +\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_q\right].
$$
Here we really abuse notation! $\beta$ is a $p\times K$ matrix of coefficients. $\beta_k$ refers to the kth column (for outcome category k), and $\beta_j$ the jth row (vector of K coefficients for variable j).
The last penalty term is $||\beta_j||_q$; we have two options for $q$: $q\in \{1,2\}$.
When $q=1$, this is a lasso penalty on each of the parameters. When $q=2$, this is a grouped-lasso penalty on all the $K$ coefficients for a particular variable, which makes them all be zero or nonzero together.
  

The standard Newton algorithm can be tedious here. Instead, we use a so-called partial Newton algorithm by making a partial quadratic approximation to the log-likelihood, allowing only $(\beta_{0k}, \beta_k)$ to vary for a single class at a time. 
For each value of $\lambda$, we first cycle over all classes indexed by $k$, computing each time a partial quadratic approximation about the parameters of the current class. Then the inner procedure is almost the same as for the binomial case.
This is the case for the lasso ($q=1$). When $q=2$, we use a different approach, which we won't dwell on here.


For the multinomial case, the usage is similar to logistic regression, and we mainly illustrate by examples and address any differences. We load a set of generated data.
```{r}
data(MultinomialExample)
```
The optional arguments in `glmnet` for multinomial logistic regression are mostly similar to binomial regression except for a few cases. 

The response variable can be an `nc >= 2` level factor, or an `nc`-column matrix of counts or proportions. 
Internally glmnet will make the rows of this matrix sum to 1, and absorb the total mass into the weight for that observation.

`offset` should be an `nobs x nc` matrix if there is one. 

A special option for multinomial regression is `type.multinomial`, which allows the usage of a grouped lasso penalty if `type.multinomial = "grouped"`. This will ensure that the multinomial coefficients for a variable are all in or out together, just like for the multi-response Gaussian.

```{r}
fit = glmnet(x, y, family = "multinomial", type.multinomial = "grouped")
```

We plot the resulting object "fit".
```{r}
plot(fit, xvar = "lambda", label = TRUE, type.coef = "2norm")
```

The options are `xvar`, `label` and `type.coef`, in addition to other ordinary graphical parameters. 

`xvar` and `label` are the same as for other families, while `type.coef` is only for multinomial regression and the multiresponse Gaussian model. It can produce a figure of coefficients for each response variable if `type.coef = "coef"`, or show the $\ell_2$-norm of the coefficients of each variable in a single figure if `type.coef = "2norm"`.

We can also do cross-validation and plot the returned object.
```{r}
cvfit=cv.glmnet(x, y, family="multinomial", type.multinomial = "grouped", parallel = TRUE)
plot(cvfit)
```

Note that although `type.multinomial` is not a typical argument in `cv.glmnet`, in fact any argument that can be passed to `glmnet` is valid in the argument list of `cv.glmnet`. We also use parallel computing to accelerate the calculation.

Users may wish to predict at the optimally selected $\lambda$:
```{r}
predict(cvfit, newx = x[1:10,], s = "lambda.min", type = "class")
```

[Back to Top](#top)

<a id="poi"></a>

## Poisson Models
 
Poisson regression is used to model count data under the assumption of Poisson error, or otherwise non-negative data where the mean and variance are proportional. Like the Gaussian and binomial models, the Poisson distribution is a member of the exponential family of distributions. We usually model its positive mean on the log scale: $\log \mu(x) = \beta_0+\beta^T x$.
The log-likelihood for observations $\{x_i,y_i\}_1^N$ is given by
$$
l(\beta|X, Y) = \sum_{i=1}^N \left(y_i (\beta_0+\beta^T x_i) - e^{\beta_0+\beta^T x_i}\right).
$$
As before, we optimize the penalized log-likelihood:
$$
\min_{\beta_0,\beta} -\frac{1}{N} l(\beta|X, Y)  + \lambda \left[(1-\alpha) \sum_{j=1}^p \beta_j^2/2 +\alpha \sum_{j=1}^p |\beta_j|\right].
$$

Glmnet uses an outer Newton loop, and an inner weighted least-squares loop (as in logistic regression) to optimize this criterion. 



First, we load a pre-generated set of Poisson data.
```{r}
data(PoissonExample)
```

We apply the function `glmnet` with the `"poisson"` option.
```{r}
fit = glmnet(x, y, family = "poisson")
```
The optional input arguments of `glmnet` for `"poisson"` family are similar to those for others. 

`offset` is a useful argument particularly in Poisson models.

When dealing with rate data in Poisson models, the counts collected are often based on different exposures, such as length of time observed, area and years. A Poisson rate $\mu(x)$ is relative to a unit exposure time, so if an observation $y_i$ was exposed for $E_i$ units of time, then the expected count would be $E_i\mu(x)$, and the log mean would be $\log(E_i)+\log(\mu(x))$. In a case like this, we would supply an *offset* $\log(E_i)$ for each observation.
Hence `offset` is a vector of length `nobs` that is included in the linear predictor. Other families can also use offsets, typically for different reasons.

(Warning: if `offset` is supplied in `glmnet`, offsets must also be supplied to `predict` to make reasonable predictions.)
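
A sketch with hypothetical exposures `E` (not part of the loaded data) illustrates the pattern.
```{r, eval=FALSE}
E = runif(nrow(x), 1, 10)                       # hypothetical exposure times
fito = glmnet(x, y, family = "poisson", offset = log(E))
# the offset must be supplied again, as `newoffset`, when predicting
predict(fito, newx = x[1:5,], newoffset = log(E)[1:5], s = 1, type = "response")
```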

Again, we plot the coefficients to have a first sense of the result.
```{r}
plot(fit)
```

Like before, we can extract the coefficients and make predictions at certain $\lambda$'s by using `coef` and `predict` respectively. The optional input arguments are similar to those for other families. In the function `predict`, the option `type`, the type of prediction required, has its own specialties for the Poisson family. That is,
* "link" (default) gives the linear predictors like others
* "response" gives the fitted mean
* "coefficients" computes the coefficients at the requested values for `s`, which can also be realized by `coef` function
* "nonzero" returns a a list of the indices of the nonzero coefficients for each value of `s`.

For example, we can do as follows.
```{r}
coef(fit, s = 1)
predict(fit, newx = x[1:5,], type = "response", s = c(0.1,1))
```

We may also use cross-validation to find the optimal $\lambda$'s and thus make inferences.
```{r}
cvfit = cv.glmnet(x, y, family = "poisson")
```
Options are almost the same as the Gaussian family except that for `type.measure`,
* "deviance" (default) gives the deviance
* "mse" stands for mean squared error
* "mae" is for mean absolute error.

We can plot the `cv.glmnet` object.
```{r}
plot(cvfit)
```

We can also show the optimal $\lambda$'s and the corresponding coefficients.
```{r}
opt.lam = c(cvfit$lambda.min, cvfit$lambda.1se)
coef(cvfit, s = opt.lam)
```
The `predict` method is similar and we do not repeat it here.

[Back to Top](#top)

<a id="cox"></a>

## Cox Models
 
The Cox proportional hazards model is commonly used for the study of the relationship between predictor variables and survival time. In the usual survival analysis framework, we have data of the form $(y_1, x_1, \delta_1), \ldots, (y_n, x_n, \delta_n)$ where $y_i$, the observed time, is a time of failure if $\delta_i$ is 1 or right-censoring if $\delta_i$ is 0. We also let $t_1 < t_2 < \ldots < t_m$ be the increasing list of unique failure times, and $j(i)$ denote the index of the observation failing at time $t_i$.

The Cox model assumes a semi-parametric form for the hazard
$$
h_i(t) = h_0(t) e^{x_i^T \beta},
$$
where $h_i(t)$ is the hazard for patient $i$ at time $t$, $h_0(t)$ is a shared baseline hazard, and $\beta$ is a fixed, length $p$ vector. In the classic setting $n \geq p$, inference is made via the partial likelihood
$$
L(\beta) = \prod_{i=1}^m \frac{e^{x_{j(i)}^T \beta}}{\sum_{j \in R_i} e^{x_j^T \beta}},
$$
where $R_i$ is the set of indices $j$ with $y_j \geq t_i$ (those at risk at time $t_i$).

Note there is no intercept in the Cox model (it is built into the baseline hazard, and, like it, would cancel in the partial likelihood).

We penalize the negative log of the partial likelihood, just like the other models, with an elastic-net penalty.  

We use a pre-generated set of sample data and response. Users can load their own data and follow a similar procedure. In this case $x$ must be an $n\times p$ matrix of covariate values - each row corresponds to a patient and each column a covariate. $y$ is an $n \times 2$  matrix, with a column "time" of failure/censoring times, and "status" a 0/1 indicator, with 1 meaning the time is a failure time, and zero a censoring time.

```{r}
data(CoxExample)
y[1:5,]
```
The `Surv` function in the package `survival` can create such a matrix. Note, however, that `coxph` and related models can handle interval and other forms of censoring, while glmnet can only handle right censoring in its present form.  
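
For instance, with your own failure times and censoring indicators (placeholders `time` and `status` below), the response could be built like this.
```{r, eval=FALSE}
library(survival)
# `time` and `status` are placeholders for the user's own vectors
ysurv = Surv(time, status)                    # right-censored survival response
# or, equivalently, a plain two-column matrix with named columns
ymat = cbind(time = time, status = status)
```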

We apply the `glmnet` function to compute the solution path under default settings.
```{r}
fit = glmnet(x, y, family = "cox")
```
All the standard options such as `alpha`, `weights`, `nlambda` and `standardize` are available. Their usage is similar to that in the Gaussian case and we omit the details here. Users can also refer to the help file `help(glmnet)`.

We can plot the coefficients.
```{r}
plot(fit)
```

As before, we can extract the coefficients at certain values of $\lambda$.
```{r}
coef(fit, s = 0.05)
```

Since the Cox Model is not commonly used for prediction, we do not give an illustrative example on prediction. If needed, users can refer to the help file by typing `help(predict.glmnet)`.

Also, the function `cv.glmnet` can be used to compute $k$-fold cross-validation for the Cox model. The usage is similar to that for other families except for two main differences. 

One is that `type.measure` only supports "deviance" (also the default), which gives the partial likelihood. 

The other is in the option `grouped`. `grouped = TRUE` obtains the CV partial likelihood for the Kth fold by subtraction, subtracting the log partial likelihood evaluated on the full dataset from that evaluated on the (K-1)/K dataset. This makes more efficient use of risk sets. With `grouped=FALSE` the log partial likelihood is computed only on the Kth fold, which is only reasonable if each fold has a large number of observations.
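
As a sketch, the ungrouped variant could be requested as follows; the default grouped fit is what we use below.
```{r, eval=FALSE}
# per-fold partial likelihood; only reasonable when each fold has many observations
cvfit.ungrouped = cv.glmnet(x, y, family = "cox", grouped = FALSE)
```
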
```{r}
cvfit = cv.glmnet(x, y, family = "cox")
```
Once fit, we can view the optimal $\lambda$ value and a cross validated error plot to help evaluate our model. 
```{r}
plot(cvfit)
```

As previously, the left vertical line in our plot shows us where the CV-error curve hits its minimum. The right vertical line shows us the most regularized model with CV-error within 1 standard deviation of the minimum. We also extract such optimal $\lambda$'s.
```{r}
cvfit$lambda.min
cvfit$lambda.1se
```
We can check the active covariates in our model and see their coefficients.
```{r}
coef.min = coef(cvfit, s = "lambda.min")
active.min = which(coef.min != 0)
index.min = coef.min[active.min]
```
```{r}
index.min
coef.min
```

[Back to Top](#top)

<a id="spa"></a>

## Sparse Matrices

Our package supports sparse input matrices, which allow the efficient storage of, and operations on, large matrices that have only a few nonzero entries. This is available for all families except the `cox` family. The usage of sparse matrices (inheriting from class `"sparseMatrix"`, as in the package `Matrix`) in `glmnet` is the same as if a regular matrix were provided.

We load a set of sample data created beforehand.
```{r}
data(SparseExample)
```
It loads `x`, a 100 x 20 sparse input matrix, and `y`, the response vector.
```{r}
class(x)
```
Users can create a sparse matrix with the function `sparseMatrix` by providing the locations and values of the nonzero entries. Alternatively, the `Matrix` function can also be used to construct a sparse matrix by setting `sparse = TRUE`, but this defeats the purpose somewhat.

We can fit the model the same way as before.
```{r}
fit = glmnet(x, y)
```
We also do the cross-validation and plot the resulting object.
```{r}
cvfit = cv.glmnet(x, y)
plot(cvfit)
```

The usage of other functions is similar and we do not expand here. 

Note that sparse matrices can also be used for `newx`, the new input matrix in the `predict` function. For example,
```{r}
i = sample(1:5, size = 25, replace = TRUE)
j = sample(1:20, size = 25, replace = TRUE)
vals = rnorm(25)  # nonzero values; named `vals` so we do not overwrite the data matrix x
nx = sparseMatrix(i = i, j = j, x = vals, dims = c(5, 20))
predict(cvfit, newx = nx, s = "lambda.min")
```

[Back to Top](#top)

<a id="int"></a>

## Appendix 1: Internal Parameters
 
Our package has a set of internal parameters which control some aspects of the computation of the path. The *factory default* settings are expected to serve in most cases, and users do not need to make changes unless there are special requirements.

There are several parameters that users can change:

* `fdev` - minimum fractional change in deviance for stopping path; factory default = 1.0e-5

* `devmax` - maximum fraction of explained deviance for stopping path; factory default = 0.999

* `eps` - minimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6

* `big` - large floating point number; factory default = 9.9e35. Inf in the definition of `upper.limits` is set to `big`

* `mnlam` - minimum number of path points (lambda values) allowed; factory default = 5

* `pmin` - minimum null probability for any class; factory default = 1.0e-5

* `exmx` - maximum allowed exponent; factory default = 250.0

* `prec` - convergence threshold for multi-response bounds adjustment solution; factory default = 1.0e-10

* `mxit` - maximum iterations for multiresponse bounds adjustment solution; factory default = 100

* `factory` - If `TRUE`, reset all the parameters to the factory default; default is `FALSE`

We illustrate the usage by an example. Note that any changes made hold for the duration of the R session, unless they are changed by the user with a subsequent call to `glmnet.control`.

```{r}
data(QuickStartExample)
fit = glmnet(x, y)
print(fit)
```
We can change the minimum fractional change in deviance for stopping path and compare the results. 
```{r}
glmnet.control(fdev = 0)
fit = glmnet(x, y)
print(fit)
```
We set `fdev = 0` to continue all along the path, even without much change. The length of the sequence becomes 100, which is the default of `nlambda`. 

Users can also reset to the default settings.
```{r}
glmnet.control(factory = TRUE)
```
The current settings are obtained as follows.
```{r}
glmnet.control()
```
[Back to Top](#top)

<a id="cmp"></a>

## Appendix 2: Comparison with Other Packages
Some people may want to use `glmnet` to solve the Lasso or elastic-net problem at a single $\lambda$. We compare here the solution computed by `glmnet` with those from other packages (such as CVX), and also use this as an illustration of parameter settings in this situation.

__Warning__: Though such problems can be solved by `glmnet`, it is __not recommended__ and is not the spirit of the package. `glmnet` fits the __entire__ solution path for Lasso or elastic-net problems efficiently with various techniques such as warm start. Those advantages will disappear if the $\lambda$ sequence is forced to be only one value.

Nevertheless, we still illustrate with a typical linear-model example below for the purpose of comparison. Given $X$, $Y$ and $\lambda_0 > 0$, we want to find $\beta$ such that
$$
\min_{\beta} ||Y - X\beta||_2^2 + \lambda_0 ||\beta||_1,
$$
where, say, $\lambda_0 = 8$.

We first solve the problem using `glmnet`. Notice that there is no intercept term in the objective function, and the columns of $X$ are not necessarily standardized. The corresponding parameters have to be set to make it work correctly. In addition, there is a $1/(2n)$ factor in front of the quadratic term by default, so we need to adjust $\lambda$ accordingly. For the purpose of comparison, the `thresh` option is specified to be 1e-20. However, this is not necessary in many practical applications.
```{r, echo=FALSE}
data(QuickStartExample)
```
```{r,eval=FALSE}
fit = glmnet(x, y, intercept = F, standardize = F, lambda = 8/(2*dim(x)[1]), thresh = 1e-20)
```
We then extract the coefficients (with no intercept).
```{r,eval=FALSE}
beta_glmnet = as.matrix(predict(fit, type = "coefficients")[-1,])
```

In a linear model, as here, this approach works because we are using squared-error loss; with any nonlinear family it will probably fail. The reason is that we are not using step-length optimization, and so rely on very good warm starts to put us in the quadratic region of the loss function.

Alternatively, a more stable and __strongly recommended__ way to perform this task is to first fit the entire Lasso or elastic-net path without specifying `lambda`, and then provide the requested $\lambda_0$ to the `predict` function to extract the corresponding coefficients. In fact, if $\lambda_0$ is not in the $\lambda$ sequence generated by `glmnet`, the path will be refitted along a new $\lambda$ sequence that includes the requested value $\lambda_0$ as well as the old sequence, and the coefficients will be returned at $\lambda_0$ based on the new fit. Remember to set `exact = TRUE` in the `predict` function to get the exact solution. Otherwise, it will be approximated by linear interpolation.

```{r}
fit = glmnet(x, y, intercept = F, standardize = F, thresh = 1e-20)
beta_glmnet = as.matrix(predict(fit, s = 8/(2*dim(x)[1]), type = "coefficients", exact = T)[-1,])
```

We also use CVX, a general convex optimization solver, to solve this specific Lasso problem. Users could also call CVX from R using the `CVXfromR` package and solve the problem as follows.
```{r, eval=FALSE}
library(CVXfromR)
setup.dir = "change/this/to/your/cvx/directory"
n = dim(x)[1]; p = dim(x)[2]
cvxcode = paste("variables beta(p)",
                "minimize(square_pos(norm(y - x * beta, 2)) + lambda * norm(beta, 1))",
                sep = ";")
Lasso = CallCVX(cvxcode, const.var = list(p = p, x = x, y = y, lambda = 8), opt.var.names = "beta", setup.dir = setup.dir, matlab.call = "change/this/to/path/to/matlab")
beta_CVX = Lasso$beta
```

For convenience here, the results were saved in `CVXResult.RData`, and we simply load in the results. 

```{r}
data(CVXResults)
```

In addition, we use `lars` to solve the same problem. 
```{r,message=FALSE}
require(lars)
```
```{r}
fit_lars = lars(x, y, type = "lasso", intercept = F, normalize = F)
beta_lars = predict(fit_lars, s = 8/2, type = "coefficients", mode = "lambda")$coefficients
```

The results are listed below up to 6 decimal digits (due to convergence thresholds).

```{r}
cmp = round(cbind(beta_glmnet, beta_lars, beta_CVX), digits = 6)
colnames(cmp) = c("beta_glmnet", "beta_lars", "beta_CVX")
cmp
``` 

[Back to Top](#top)


## References
 

<p>Jerome Friedman, Trevor Hastie and Rob Tibshirani. (2008). <br>
<a href="http://www.jstatsoft.org/v33/i01/">Regularization Paths for Generalized Linear Models via Coordinate Descent</a><br>
<em>Journal of Statistical Software</em>, Vol. 33(1), 1-22 Feb 2010.</p>
<p>Noah Simon, Jerome Friedman, Trevor Hastie and Rob Tibshirani. (2011).<br>
<a href="http://www.jstatsoft.org/v39/i05/">Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent</a><br>
<em>Journal of Statistical Software</em>, Vol. 39(5) 1-13.</p>
<p>Robert Tibshirani, Jacob Bien, Jerome Friedman, Trevor Hastie, Noah Simon, Jonathan Taylor, Ryan J. Tibshirani. (2010).<br>
<a href="http://www-stat.stanford.edu/~tibs/ftp/strong.pdf">Strong Rules for Discarding Predictors in Lasso-type Problems</a><br>
<em>Journal of the Royal Statistical Society: Series B (Statistical Methodology)</em>, 74(2), 245-266.</p>
<p> Noah Simon, Jerome Friedman and Trevor Hastie (2013). <br>
<a href="http://www.stanford.edu/~hastie/Papers/multi_response.pdf">A Blockwise Descent Algorithm for Group-penalized Multiresponse and Multinomial Regression </a><br>
<i>(in arXiv, submitted) </i></p>