The parameter calculation method used for models is determined by classes that derive from the abstract base class RWRegressionCalc. The parameter calculation method used by a particular regression object may be specified by providing an instance of a class derived from RWRegressionCalc to the regression object at construction time, or through the regression class member function setCalcMethod(). If you do not specify a calculation method at construction time, a default method is provided.
Encapsulating parameter calculations in a class yields two benefits:
Calculation methods can be changed at runtime. For example, if you choose a calculation method that is fast but fails on a particular set of data, you can switch to a slower, more robust method.
You can use your own calculation method by deriving a class from RWRegressionCalc and providing the calculation method.
Here is an example of how to switch calculation methods at runtime:
RWMathVec<double> observations;
RWGenMat<double> predictorMatrix;
. . .
// Construct a linear regression object using the default
// calculation method class RWLeastSqQRCalc. This
// method is fast, but will fail if the regression matrix does
// not have full rank.
RWLinearRegression lr( predictorMatrix, observations );

if ( lr.fail() )
{
    // Try the more robust, but slower, QR with pivoting method:
    lr.setCalcMethod( RWLeastSqQRPvtCalc() );
    if ( lr.fail() )
    {
        // Matrix must have a column of 0s or something.
        // Deal with the error.
        cerr << "Parameter calculation failed for input data." << endl;
    }
    else
    {
        cout << "Parameters calculated using the QR with pivoting method: "
             << lr.parameters() << endl;
    }
}
else
{
    cout << "Parameters calculated using the QR method: "
         << lr.parameters() << endl;
}
. . .
All parameter calculation classes have a member function name() that returns a string identifying the calculation method. In the convention used by Analytics.h++, name() returns the class static variable methodName. For example, if you want to know whether a particular logistic regression object uses the Levenberg-Marquardt calculation method, you would proceed as follows:
. . .
RWMathVec<double> observations;
RWGenMat<double> predictorMatrix;
. . .
RWLogisticRegression lr( predictorMatrix, observations );
. . .
// Check which calculation method is being used by the regression.
if ( lr.calcMethod().name() == RWLogisticLevenbergMarquardt::methodName )
{
    cout << "using Levenberg-Marquardt calculation method" << endl;
}
else
{
    cout << "using something else" << endl;
}
. . .
Given the linear regression model $y = X\beta + \epsilon$, finding the least squares solution is equivalent to solving the normal equations $X^T X \beta = X^T y$. Thus the solution for $\beta$ is given by:

$\beta = (X^T X)^{-1} X^T y$.
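As a quick sanity check on where the normal equations come from: the least squares solution minimizes the squared residual norm $\|y - X\beta\|^2$, and setting its gradient with respect to $\beta$ to zero yields the system above:

$\nabla_\beta \|y - X\beta\|^2 = -2 X^T (y - X\beta) = 0 \quad\Longrightarrow\quad X^T X \beta = X^T y$.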
Analytics.h++ includes three classes for calculating multiple linear regression parameters: RWLeastSqQRCalc, RWLeastSqQRPvtCalc, and RWLeastSqSVDCalc. The following three sections provide a brief description of the method encapsulated by each class, and its pros and cons.
Class RWLeastSqQRCalc encapsulates the QR method. This method begins by decomposing the regression matrix $X$ into the product $X = QR$ of an orthogonal matrix $Q$ and an upper triangular matrix $R$. The QR representation is then substituted into the equation in Section 5.5.1 to obtain the solution $\beta = R^{-1} Q^T y$.
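The substitution works because $Q$ is orthogonal ($Q^T Q = I$), so the normal equations collapse to a triangular system that is solved by back substitution:

$(QR)^T (QR) \beta = (QR)^T y \;\Longrightarrow\; R^T R \beta = R^T Q^T y \;\Longrightarrow\; R \beta = Q^T y$.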
Pros: Good performance. Parameter values are recalculated very quickly when adding or removing predictor variables. Model selection performance is best with this calculation method.

Cons: Calculation fails when the regression matrix $X$ has less than full rank. (A matrix has less than full rank if its columns are linearly dependent.) Results may not be accurate if $X$ is extremely ill-conditioned.
Class RWLeastSqQRPvtCalc uses essentially the same QR method described in Section 5.5.1.1, except that the QR decomposition is formed using pivoting.
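In the pivoted variant, the decomposition is applied to a column permutation of the regression matrix. The pivoting moves linearly dependent columns to the right, so (in exact arithmetic) rank deficiency shows up as zero entries on the diagonal of $R$:

$X P = Q R, \qquad P \text{ a permutation matrix}, \qquad \operatorname{rank}(X) = \#\{\, i : r_{ii} \neq 0 \,\}$.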
Pros: Calculation succeeds for regression matrices of less than full rank. However, calculations fail if the regression matrix contains a column of all 0s.

Cons: Slower than the straight QR technique described in Section 5.5.1.1.
Class RWLeastSqSVDCalc employs singular value decomposition (SVD). The method solves the least squares problem by decomposing the $m \times n$ regression matrix $X$ into the form $X = U S V^T$, where $U$ is an $m \times n$ matrix consisting of the orthonormalized eigenvectors of $X X^T$ associated with the $n$ largest eigenvalues, $V$ is an $n \times n$ orthogonal matrix consisting of the orthonormalized eigenvectors of $X^T X$, and $S$ is a diagonal matrix of the singular values of $X$. This singular value decomposition of $X$ is used to solve the equation in Section 5.5.1.
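Given the decomposition, the least squares solution is the pseudoinverse solution. Singular values that are zero (or negligible) are simply skipped, which is why the method tolerates rank deficiency:

$\beta = V S^{+} U^T y, \qquad (S^{+})_{ii} = 1/s_i \text{ if } s_i > 0, \text{ and } 0 \text{ otherwise}$.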
Pros: Works on matrices of less than full rank. Produces accurate results when $X$ has full rank, but is highly ill-conditioned.

Cons: Slower than the straight QR technique described in Section 5.5.1.1.
Unlike linear regression, where finding parameters involves solving a system of linear equations, parameter calculation for logistic regression requires the solution of a system of nonlinear equations. The equations are nonlinear because each prediction from the logistic regression model has its own estimated variance: the variance estimate influences the prediction, and the prediction in turn influences the variance estimate. Finding a solution to these nonlinear equations requires an iterative, gradient-based algorithm.
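To see where the nonlinearity comes from, write the logistic model and its likelihood equations. Each predicted probability $p_i$ depends on $\beta$ through the logistic function, so the system below is nonlinear in $\beta$ even though it superficially resembles the normal equations:

$p_i(\beta) = \dfrac{1}{1 + e^{-x_i^T \beta}}, \qquad X^T \big( y - p(\beta) \big) = 0$.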
For finding the parameters to a logistic regression model, Analytics.h++ supplies two classes: RWLogisticIterLSQ and RWLogisticLevenbergMarquardt. The following sections provide a brief description of the method encapsulated by each class, along with its pros and cons.
Class RWLogisticIterLSQ uses iterative least squares for finding logistic regression parameters. Some people also refer to this algorithm as the Newton-Raphson method. The algorithm starts with a set of parameters $\beta_0$ that corresponds to a linear fit of the data using the normal equations. Then the method repeatedly forms $\beta_{k+1}$ at iteration $k$ by solving the linear equations:

$X^T V_k X \beta_{k+1} = X^T V_k z_k$,

where $X$ is the regression matrix, $V_k$ is the diagonal matrix of variance estimates at iteration $k$, and $z_k$ is a vector of adjusted predictions at iteration $k$. Element $i$ of $z_k$ is defined as:

$z_{ki} = x_i^T \beta_k + \dfrac{y_i - p_{ki}}{p_{ki}(1 - p_{ki})}$,

where $x_i$ is row $i$ of $X$ and $p_{ki}$ is the model's predicted probability for observation $i$ at iteration $k$.
The algorithm stops iterating when the size of the change in parameter values falls below a small, predetermined tolerance, whose default is defined in terms of macheps, the value of machine epsilon.
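Rearranging the update shows why the algorithm is also called Newton-Raphson: each iteration takes a full Newton step on the log-likelihood, whose gradient is $X^T (y - p_k)$ and whose Hessian is $-X^T V_k X$:

$\beta_{k+1} = (X^T V_k X)^{-1} X^T V_k z_k = \beta_k + (X^T V_k X)^{-1} X^T (y - p_k)$.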
Pros: Iterative least squares is one of the fastest algorithms for finding logistic regression parameters.

Cons: If the initial parameter estimate is poor, the algorithm is not guaranteed to converge successfully, while a more sophisticated algorithm might converge.
Class RWLogisticLevenbergMarquardt finds logistic regression parameters using a more sophisticated technique than iterative least squares. It implements what is known as the Levenberg-Marquardt method. The extra sophistication of this algorithm often allows it to recover from poor initial estimates of $\beta$. The starting vector of parameters is the same as for iterative least squares, and at each iteration the algorithm tries to take a step similar to the one taken by iterative least squares. However, it checks to make sure that the step improves the likelihood of the model producing the data. If the step improves the likelihood, the step is taken. If it does not, the algorithm tries a modified step that falls closer to the gradient. This process of checking and trying a step closer to the gradient repeats until a step is found that improves the likelihood.
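A common way to write the Levenberg-Marquardt modification (a sketch of the general technique, not necessarily the exact form used by RWLogisticLevenbergMarquardt) is a damped version of the iterative least squares step, where the damping parameter $\lambda$ is increased each time a trial step fails to improve the likelihood:

$\big( X^T V_k X + \lambda I \big) \delta_k = X^T (y - p_k), \qquad \beta_{k+1} = \beta_k + \delta_k$.

As $\lambda \to 0$ this reduces to the iterative least squares step; as $\lambda \to \infty$ the step direction approaches the gradient.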
Pros: If the initial parameter estimate is poor, the algorithm often still converges to a set of finite-valued parameters, while iterative least squares may not.

Cons: The algorithm is slower than iterative least squares.
You can incorporate your own parameter calculation methods into Analytics.h++ by supplying your own parameter calculation class. Your class must be derived from the abstract base class RWRegressionCalc and must implement its five pure virtual functions:
virtual void calc(const RWGenMat<T>& r, const RWMathVec<S>& o) = 0;
Calculates the model parameters from the input regression matrix r and observation vector o.
virtual RWMathVec<T> parameters() = 0;
Returns the calculated parameters.
virtual RWBoolean fail() const = 0;
Returns TRUE if the most recent calculation failed. Otherwise, returns FALSE.
virtual RWCString name() const = 0;
Returns a name identifying the calculation method.
virtual RWRegressionCalc<T,S>* clone() const = 0;
Returns a copy of self off the heap.
Here is an example of a calculation class for linear regression that uses the DoubleLeastSqCh class found in the Rogue Wave product LAPACK.h++:
class CholeskyLeastSquaresCalc : public RWRegressionCalc<double,double>
{
public:
    static const char* methodName;

    // Constructors-----------------------------------------------
    CholeskyLeastSquaresCalc() {;}

    CholeskyLeastSquaresCalc( const CholeskyLeastSquaresCalc& c )
      : parameters_(c.parameters_), fail_(c.fail_)
    {
        // Make sure I have my own copy of the parameters.
        parameters_.deepenShallowCopy();
    }

    //------------------------------------------------------------
    // Pure virtual functions inherited from RWRegressionCalc
    // (see regcalc.h for function descriptions).
    //------------------------------------------------------------
    virtual void calc( const RWGenMat<double>& regressionMatrix,
                       const RWMathVec<double>& observations )
    {
        DoubleLeastSqCh lsqch( regressionMatrix );
        fail_ = lsqch.fail();
        if ( !lsqch.fail() )
        {
            parameters_.reference( lsqch.solve( observations ) );
        }
    }

    virtual RWRegressionCalc<double,double>* clone() const
    {
        return new CholeskyLeastSquaresCalc( *this );
    }

    virtual RWMathVec<double> parameters()
    {
        // Clients should check fail status before they call
        // parameters().
        if ( fail() )
        {
            RWTHROW( RWInternalErr(RWMessage(RWSTAT_PARAMCALCFAIL)) );
            return parameters_;   // Keeps some compilers happy.
        }
        else
        {
            return parameters_;
        }
    }

    virtual RWBoolean fail() const { return fail_; }

    virtual RWCString name() const { return methodName; }

    // Assignment operator.
    CholeskyLeastSquaresCalc& operator=( const CholeskyLeastSquaresCalc& rhs )
    {
        parameters_.reference( rhs.parameters_ );
        parameters_.deepenShallowCopy();
        fail_ = rhs.fail_;
        return *this;
    }

private:
    RWMathVec<double> parameters_;
    RWBoolean fail_;
};
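To use the new class, define methodName in a source file and hand an instance of the class to a regression object through setCalcMethod(). Here is a minimal sketch; the name string is just an illustrative choice:

// In a .cpp file: define the static name string.
const char* CholeskyLeastSquaresCalc::methodName = "Cholesky least squares";

RWMathVec<double> observations;
RWGenMat<double> predictorMatrix;
. . .
RWLinearRegression lr( predictorMatrix, observations );
lr.setCalcMethod( CholeskyLeastSquaresCalc() );
if ( !lr.fail() )
{
    cout << "Parameters calculated using " << lr.calcMethod().name()
         << ": " << lr.parameters() << endl;
}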