Title: | Matrix Completion via Iterative Soft-Thresholded SVD |
---|---|
Description: | Iterative methods for matrix completion that use nuclear-norm regularization. There are two main approaches. The first uses iterative soft-thresholded SVDs to impute the missing values. The second uses alternating least squares. Both have an 'EM' flavor, in that at each iteration the matrix is completed with the current estimate. For large matrices there is a special sparse-matrix class named "Incomplete" that efficiently handles all computations. The package includes procedures for centering and scaling rows, columns or both, and for computing low-rank SVDs on large sparse centered matrices (i.e. principal components). |
Authors: | Trevor Hastie <[email protected]> and Rahul Mazumder <[email protected]> |
Maintainer: | Trevor Hastie <[email protected]> |
License: | GPL-2 |
Version: | 1.4-1 |
Built: | 2024-10-07 04:32:24 UTC |
Source: | https://github.com/cran/softImpute |
A function for standardizing a matrix in a symmetric
fashion. Generalizes the scale
function in R. Works with
matrices with NAs, matrices of class "Incomplete", and matrices in
"sparseMatrix" format.
biScale(x, maxit = 20, thresh = 1e-09, row.center = TRUE, row.scale = TRUE, col.center = TRUE, col.scale = TRUE, trace = FALSE)
x |
matrix, possibly with NAs, also of class "Incomplete" or "sparseMatrix" format. |
maxit |
When both row and column centering/scaling is requested, iteration may be necessary; maxit is the maximum number of iterations allowed. |
thresh |
Convergence threshold. |
row.center |
if row.center=TRUE (the default), rows are centered to have mean zero. If a vector is supplied, it is used to center the rows. If FALSE, no row centering is done. |
row.scale |
if row.scale=TRUE (the default), rows are scaled (after any centering) to have variance one. If a vector is supplied, it is used to scale the rows. If FALSE, no row scaling is done. |
col.center |
Similar to row.center, but for the columns. |
col.scale |
Similar to row.scale, but for the columns. |
trace |
with trace=TRUE, convergence progress is printed. |
This function computes a transformation of the form

(x - alpha 1' - 1 beta') / (tau gamma')

to transform the matrix x, where alpha and tau hold the row centers and
scales, and beta and gamma the column centers and scales. It uses an
iterative algorithm based on "method-of-moments". At each step, all but
one of the parameter vectors is fixed, and the remaining vector is
computed to solve the required condition. Although in general this is
not guaranteed to converge, it mostly does, and quite rapidly. When
there are convergence problems, remove some of the required
constraints. When any of the row/column centers or scales are provided,
they are used rather than estimated in the above model.
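A quick numerical check of these conventions (a sketch, assuming the default settings and that the iterations converge):

```r
library(softImpute)

set.seed(7)
x <- matrix(rnorm(40), 8, 5)
xb <- biScale(x)

# with all four constraints requested (the defaults), both the rows
# and the columns of the result have mean (numerically) zero
max(abs(rowMeans(xb)))   # ~ 0
max(abs(colMeans(xb)))   # ~ 0
```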
A matrix like x
is returned, with attributes:
biScale:row |
a list with elements center and scale, the row centering and scaling vectors used |
biScale:column |
Same details as biScale:row, but for the columns |
For matrices with missing values, the constraints apply to the
non-missing entries. If x is of class "sparseMatrix", then the
sparsity is maintained, and an object of class "SparseplusLowRank"
is returned, such that the low-rank part does the centering.
This function will be described in detail in a forthcoming paper.
Trevor Hastie, with help from Andreas Buja and Stephen Boyd
Maintainer: Trevor Hastie [email protected]
softImpute, Incomplete, lambda0, impute, complete, and class "SparseplusLowRank"
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
xc=biScale(x)
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
xnab=biScale(xna,row.scale=FALSE,trace=TRUE)
xnaC=as(xna,"Incomplete")
xnaCb=biScale(xnaC)
nnz=trunc(np*.3)
inz=sample(seq(np),nnz,replace=FALSE)
i=row(x)[inz]
j=col(x)[inz]
x=rnorm(nnz)
xS=sparseMatrix(x=x,i=i,j=j)
xSb=biScale(xS)
class(xSb)
These functions produce predictions from the low-rank solution of softImpute.
complete(x, object, unscale = TRUE) impute(object, i, j, unscale = TRUE)
x |
a matrix with NAs, or a matrix of class "Incomplete". |
object |
an svd object with components u, d and v |
i |
vector of row indices for the locations to be predicted |
j |
vector of column indices for the locations to be predicted |
unscale |
if unscale=TRUE and object carries centering/scaling attributes (from biScale), the predictions are returned on the original scale. |
impute returns a vector of predictions, using the reconstructed
low-rank matrix represented by object. It is used by complete,
which returns a complete matrix with all the missing values imputed.
Either a vector of predictions or a complete matrix. WARNING: if
x
has large dimensions, the matrix returned by complete
might be too large.
Trevor Hastie
softImpute, biScale and Incomplete
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
fit1=softImpute(xna,rank=50,lambda=30)
complete(xna,fit1)
Refit the d component of a "softImpute" object through regression.
softImpute
uses shrinkage when completing a matrix with
missing values. This function debiases the singular values using
ordinary least squares.
deBias(x, svdObject)
x |
matrix with missing entries, or a matrix of class "Incomplete". |
svdObject |
an SVD object, the output of softImpute. |
Treating the "d"
values as parameters, this function recomputes
them by linear regression.
An svd object is returned, with components "u", "d", and "v".
Trevor Hastie
Maintainer: Trevor Hastie [email protected]
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
fit1=softImpute(xna,rank=50,lambda=30)
fit1d=deBias(xna,fit1)
Incomplete creates an object of class Incomplete, which inherits from
class dgCMatrix, a specific instance of class sparseMatrix.
Incomplete(i, j, x)
i |
row indices |
j |
column indices |
x |
a vector of values |
The matrix is represented in sparse-matrix format, except the "zeros" represent missing values. Real zeros are represented explicitly as values.
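A tiny illustration of the zeros-versus-NAs convention (a sketch; mi@x peeks at the inherited dgCMatrix slot of stored values):

```r
library(softImpute)

m <- matrix(c(1, NA, 0, 2), 2, 2)
mi <- as(m, "Incomplete")

# the NA becomes a structural "zero" (simply absent from the sparse
# representation), while the genuine 0 is stored explicitly
dim(mi)        # 2 x 2
length(mi@x)   # 3 stored values: the entries 1, 0 and 2
```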
a matrix of class Incomplete
which inherits from
class dgCMatrix
Trevor Hastie and Rahul Mazumder
softImpute
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
xnaC=as(xna,"Incomplete")
### here we do it a different way to demonstrate Incomplete
### In practice the observed values are stored in this market-matrix format.
i = row(xna)[-imiss]
j = col(xna)[-imiss]
xnaC=Incomplete(i,j,x=x[-imiss])
"Incomplete"
a sparse matrix inheriting from class dgCMatrix
with the NAs
represented as zeros
Objects can be created by calls of the form new("Incomplete", ...),
or by calling the function Incomplete().
i |
Object of class "integer" |
p |
Object of class "integer" |
Dim |
Object of class "integer" |
Dimnames |
Object of class "list" |
x |
Object of class "numeric" |
factors |
Object of class "list" |
Class "dgCMatrix"
, directly.
Class "CsparseMatrix"
, by class "dgCMatrix", distance 2.
Class "dsparseMatrix"
, by class "dgCMatrix", distance 2.
Class "generalMatrix"
, by class "dgCMatrix", distance 2.
Class "dMatrix"
, by class "dgCMatrix", distance 3.
Class "sparseMatrix"
, by class "dgCMatrix", distance 3.
Class "compMatrix"
, by class "dgCMatrix", distance 3.
Class "Matrix"
, by class "dgCMatrix", distance 4.
signature(x = "Incomplete")
: ...
signature(from = "matrix", to = "Incomplete")
: ...
signature(x = "Incomplete")
: ...
Trevor Hastie and Rahul Mazumder
biScale, softImpute, Incomplete, impute, complete
showClass("Incomplete")
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
xnaC=as(xna,"Incomplete")
Compute the smallest value of lambda such that softImpute(x, lambda)
returns the zero solution. This determines the "starting" lambda for a
sequence of values for softImpute, and all nonzero solutions require a
smaller value for lambda.
lambda0(x, lambda = 0, maxit = 100, trace.it = FALSE, thresh = 1e-05)
x |
An m by n matrix. Large matrices can be in "sparseMatrix" format, as
well as "SparseplusLowRank". The latter arise after centering sparse
matrices, for example with |
The remaining arguments only apply to matrices x in "sparseMatrix",
"Incomplete", or "SparseplusLowRank" format.
lambda |
As in softImpute (default 0). |
maxit |
maximum number of iterations. |
trace.it |
with trace.it=TRUE, convergence progress is reported. |
thresh |
convergence threshold, measured as the relative change in the Frobenius norm between two successive estimates. |
lambda0 is the largest singular value of the matrix, with zeros
replacing the missing values. It is computed using svd.als with rank=2.
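A small sketch of this property: at lambda = lambda0(x), the fitted singular values should be (numerically) zero.

```r
library(softImpute)

set.seed(9)
x <- matrix(rnorm(50), 10, 5)
x[sample(50, 12)] <- NA

lam0 <- lambda0(x)
fit <- softImpute(x, rank.max = 2, lambda = lam0)
max(fit$d)   # essentially zero; any lambda >= lam0 gives the zero solution
```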
a single number, the largest singular value
Trevor Hastie, Rahul Mazumder
Maintainer: Trevor Hastie [email protected]
Rahul Mazumder, Trevor Hastie and Rob Tibshirani (2010)
Spectral Regularization Algorithms for Learning Large Incomplete
Matrices,
https://web.stanford.edu/~hastie/Papers/mazumder10a.pdf
Journal of Machine Learning Research 11 (2010) 2287-2322
softImpute, Incomplete, and svd.als.
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
lambda0(xna)
Fit a low-rank matrix approximation to a matrix with missing values via nuclear-norm regularization. The algorithm works like EM, filling in the missing values with the current guess, and then solving the optimization problem on the complete matrix using a soft-thresholded SVD. Special sparse-matrix classes are available for very large matrices.
softImpute(x, rank.max = 2, lambda = 0, type = c("als", "svd"), thresh = 1e-05, maxit = 100, trace.it = FALSE, warm.start = NULL, final.svd = TRUE)
x |
An m by n matrix with NAs. For large matrices it can be of class
"Incomplete", in which case the data are stored in sparse-matrix format. |
rank.max |
This restricts the rank of the solution. If sufficiently large, the
solution solves the unrestricted nuclear-norm problem. |
lambda |
nuclear-norm regularization parameter. If lambda=0, the algorithm
reverts to "hardImpute", for which convergence is typically to a local
minimum. |
type |
two algorithms are implemented, type="svd" and the default type="als".
See Details below. |
thresh |
convergence threshold, measured as the relative change in the Frobenius norm between two successive estimates. |
maxit |
maximum number of iterations. |
trace.it |
with trace.it=TRUE, convergence progress is reported. |
warm.start |
an svd object can be supplied as a warm start. This is particularly
useful when constructing a path of solutions with decreasing values of
lambda. |
final.svd |
only applicable to type="als". The alternating ridge regressions do not
lead to exact zeros in the singular values, so a final soft-thresholded
SVD is computed at the solution, setting near-zero singular values
exactly to zero. |
SoftImpute solves the following problem for a matrix x with
missing entries:

min_M || x - M ||_F^2 + lambda || M ||_*

Here || . ||_F is the Frobenius norm, restricted to the entries
corresponding to the non-missing entries of x, and || M ||_* is the
nuclear norm of M (sum of singular values).
Full details of the "svd" algorithm are described in the reference
below. The "als" algorithm will be described in a forthcoming
article. Both methods employ special sparse-matrix tricks for large
matrices with many missing values. This package creates a new
sparse-matrix class "SparseplusLowRank" for matrices of the form

x + a b',

where x is sparse and a and b are tall skinny matrices, hence a b' is
low rank. Methods for efficient left and right matrix multiplication
are provided for this class. For large matrices, the function
Incomplete() can be used to build the appropriate
sparse input matrix from market-format data.
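The "svd" variant above can be sketched in a few lines of plain R (soft_svd_step is a hypothetical helper for illustration only; softImpute() itself adds the sparse-matrix tricks, warm starts and convergence checks):

```r
# one "EM"-style iteration: fill the missing entries with the current
# low-rank guess, then soft-threshold the singular values of the result
soft_svd_step <- function(xna, mhat, lambda) {
  xfill <- xna
  miss <- is.na(xna)
  xfill[miss] <- mhat[miss]        # impute with the current estimate
  s <- svd(xfill)
  d <- pmax(s$d - lambda, 0)       # soft-thresholding
  s$u %*% (d * t(s$v))             # u diag(d) v'
}

set.seed(11)
x <- matrix(rnorm(20), 5, 4)
xna <- x
xna[c(2, 9, 15)] <- NA

m <- matrix(0, 5, 4)               # start from the zero matrix
for (k in 1:100) m <- soft_svd_step(xna, m, lambda = 1)
```

The fixed point of this iteration is the regularized low-rank solution for the given lambda.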
An svd object is returned, with components "u", "d", and "v".
If the solution has zeros in "d", the solution is truncated to rank one
more than the number of zeros (so the zero is visible). If the input
matrix had been centered and scaled by biScale
, the scaling
details are assigned as attributes inherited from the input matrix.
Trevor Hastie, Rahul Mazumder
Maintainer: Trevor Hastie [email protected]
Rahul Mazumder, Trevor Hastie and Rob Tibshirani (2010)
Spectral Regularization Algorithms for Learning Large Incomplete
Matrices,
https://web.stanford.edu/~hastie/Papers/mazumder10a.pdf
Journal of Machine Learning Research 11 (2010) 2287-2322
biScale, svd.als, Incomplete, lambda0, impute, complete
set.seed(101)
n=200
p=100
J=50
np=n*p
missfrac=0.3
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
ix=seq(np)
imiss=sample(ix,np*missfrac,replace=FALSE)
xna=x
xna[imiss]=NA
###uses regular matrix method for matrices with NAs
fit1=softImpute(xna,rank=50,lambda=30)
###uses sparse matrix method for matrices of class "Incomplete"
xnaC=as(xna,"Incomplete")
fit2=softImpute(xnaC,rank=50,lambda=30)
###uses "svd" algorithm
fit3=softImpute(xnaC,rank=50,lambda=30,type="svd")
ximp=complete(xna,fit1)
### first scale xna
xnas=biScale(xna)
fit4=softImpute(xnas,rank=50,lambda=10)
ximp=complete(xna,fit4)
impute(fit4,i=c(1,3,7),j=c(2,5,10))
impute(fit4,i=c(1,3,7),j=c(2,5,10),unscale=FALSE) #ignore scaling and centering
"SparseplusLowRank"
A structured matrix made up of a sparse part plus a low-rank part, all of which can be stored and operated on efficiently.
Objects can be created by calls of the form
new("SparseplusLowRank", ...)
or by a call to splr
x
:Object of class "sparseMatrix"
a
:Object of class "matrix"
b
:Object of class "matrix"
signature(x = "ANY", y = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank", y = "ANY")
: ...
signature(x = "Matrix", y = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank", y = "Matrix")
: ...
signature(x = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank", type = "character")
: ...
signature(x = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank")
: ...
signature(x = "SparseplusLowRank")
: ...
Trevor Hastie and Rahul Mazumder
softImpute
,splr
showClass("SparseplusLowRank")
x=matrix(sample(c(3,0),15,replace=TRUE),5,3)
x=as(x,"sparseMatrix")
a=matrix(rnorm(10),5,2)
b=matrix(rnorm(6),3,2)
new("SparseplusLowRank",x=x,a=a,b=b)
splr(x,a,b)
Create a SparseplusLowRank object.
splr creates an object of class SparseplusLowRank, which can be
efficiently stored and for which efficient linear algebra operations are possible.
splr(x, a = NULL, b = NULL)
x |
sparse matrix with dimension say m x n |
a |
matrix with m rows and number of columns r less than min(dim(x)) |
b |
matrix with n rows and number of columns r less than min(dim(x)) |
an object of S4 class SparseplusLowRank is returned, with slots x, a and b
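A small consistency check (a sketch; it relies on the %*% methods defined for this class):

```r
library(softImpute)
library(Matrix)

set.seed(13)
xs <- as(matrix(rbinom(15, 1, 0.4) * rnorm(15), 5, 3), "sparseMatrix")
a  <- matrix(rnorm(10), 5, 2)
b  <- matrix(rnorm(6),  3, 2)

s <- splr(xs, a, b)
v <- rnorm(3)

# s represents xs + a b' without ever forming that sum densely
dense <- as.matrix(xs) + a %*% t(b)
max(abs(as.matrix(s %*% v) - dense %*% v))   # ~ 0
```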
Trevor Hastie
SparseplusLowRank-class
, softImpute
x=matrix(sample(c(3,0),15,replace=TRUE),5,3)
x=as(x,"sparseMatrix")
a=matrix(rnorm(10),5,2)
b=matrix(rnorm(6),3,2)
new("SparseplusLowRank",x=x,a=a,b=b)
splr(x,a,b)
Fit a low-rank SVD to a complete matrix by alternating orthogonal ridge regression. Special sparse-matrix classes are available for very large matrices, including "SparseplusLowRank" versions for row- and column-centered sparse matrices.
svd.als(x, rank.max = 2, lambda = 0, thresh = 1e-05, maxit = 100, trace.it = FALSE, warm.start = NULL, final.svd = TRUE)
x |
An m by n matrix. Large matrices can be in "sparseMatrix" format, as
well as "SparseplusLowRank". The latter arise after centering sparse
matrices, for example with |
rank.max |
The maximum rank for the solution. This is also the dimension of the left and right matrices of orthogonal singular vectors. rank.max should be no bigger than min(dim(x)). |
lambda |
The regularization parameter. |
thresh |
convergence threshold, measured as the relative change in the Frobenius norm between two successive estimates. |
maxit |
maximum number of iterations. |
trace.it |
with trace.it=TRUE, convergence progress is reported. |
warm.start |
an svd object can be supplied as a warm start. If the solution requested has higher rank than the warm start, the additional subspace is initialized with random Gaussians (and then orthogonalized wrt the rest). |
final.svd |
Although in theory this algorithm converges to the solution of a
nuclear-norm regularized low-rank matrix approximation problem,
with potentially some singular values equal to zero, in practice only
near-zeros are achieved. This final step does one more iteration with
lambda=0, followed by soft-thresholding, so that exact zeros appear. |
This algorithm solves the problem

min_M || x - M ||_F^2 + lambda || M ||_*

subject to rank(M) <= r, where || M ||_* is the nuclear norm of M
(sum of singular values). It achieves this by solving the related problem

min_{A,B} || x - A B' ||_F^2 + (lambda/2) ( || A ||_F^2 + || B ||_F^2 )

subject to rank(A) <= r and rank(B) <= r. The solution is a
rank-restricted, soft-thresholded SVD of x.
An svd object is returned, with components "u", "d", and "v".
u |
an m by rank.max matrix with the left singular vectors |
d |
a vector of length rank.max of soft-thresholded singular values |
v |
an n by rank.max matrix with the right singular vectors |
Trevor Hastie, Rahul Mazumder
Maintainer: Trevor Hastie [email protected]
Rahul Mazumder, Trevor Hastie and Rob Tibshirani (2010)
Spectral Regularization Algorithms for Learning Large Incomplete
Matrices,
https://web.stanford.edu/~hastie/Papers/mazumder10a.pdf
Journal of Machine Learning Research 11 (2010) 2287-2322
biScale, softImpute, Incomplete, lambda0, impute, complete
#create a matrix and run the algorithm
set.seed(101)
n=100
p=50
J=25
np=n*p
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
fit=svd.als(x,rank=25,lambda=50)
fit$d
pmax(svd(x)$d-50,0)
# now create a sparse matrix and do the same
nnz=trunc(np*.3)
inz=sample(seq(np),nnz,replace=FALSE)
i=row(x)[inz]
j=col(x)[inz]
x=rnorm(nnz)
xS=sparseMatrix(x=x,i=i,j=j)
fit2=svd.als(xS,rank=20,lambda=7)
fit2$d
pmax(svd(as.matrix(xS))$d-7,0)