Trilinos (and hence ML) requires the macro HAVE_CONFIG_H to be defined, either in the Makefile or in the source file itself. We suggest the following:
\code
#ifndef HAVE_CONFIG_H
#define HAVE_CONFIG_H
#endif
\endcode
Then, we need to include several header files. Note that the example works for both MPI and non-MPI configurations. However, some coarse solvers (for instance, AMESOS-Superludist and AMESOS-Mumps) require MPI. Trilinos_Util_CrsMatrixGallery.h is required by this example, and not by ML.
#include "ml_include.h" #ifdef HAVE_MPI #include "mpi.h" #include "Epetra_MpiComm.h" #else #include "Epetra_SerialComm.h" #endif #include "Epetra_Map.h" #include "Epetra_IntVector.h" #include "Epetra_SerialDenseVector.h" #include "Epetra_Vector.h" #include "Epetra_CrsMatrix.h" #include "Epetra_VbrMatrix.h" #include "Epetra_LinearProblem.h" #include "Epetra_Time.h" #include "AztecOO.h" #include "ml_epetra_preconditioner.h" #include "Trilinos_Util_CommandLineParser.h" #include "Trilinos_Util_CrsMatrixGallery.h" #include "ml_epetra_utils.h" #include "ml_epetra_operator.h"
Note that the macro HAVE_ML_TRIUTILS and the Trilinos_Util header files are required by the example, and not by the ML source. The class Trilinos_Util::CrsMatrixGallery is used to create an example matrix. This Epetra_CrsMatrix is then converted to an ML_Operator (a heavy-weight conversion).
\code
using namespace ML_Epetra;
using namespace Teuchos;
using namespace Trilinos_Util;

int main(int argc, char *argv[])
{

#ifdef EPETRA_MPI
  MPI_Init(&argc,&argv);
  Epetra_MpiComm Comm(MPI_COMM_WORLD);
#else
  Epetra_SerialComm Comm;
#endif

  Epetra_Time Time(Comm);

  CommandLineParser CLP(argc,argv);
  CrsMatrixGallery Gallery("", Comm);

  // default values for problem type and size
  if( CLP.Has("-problem_type") == false ) CLP.Add("-problem_type", "laplace_2d" );
  if( CLP.Has("-problem_size") == false ) CLP.Add("-problem_size", "100" );

  // initialize MatrixGallery object with options specified in the shell
  Gallery.Set(CLP);

  // get a pointer to the linear system matrix
  Epetra_CrsMatrix * A = Gallery.GetMatrix();

  // get a pointer to the map
  const Epetra_Map * Map = Gallery.GetMap();

  // get a pointer to the linear system problem
  Epetra_LinearProblem * Problem = Gallery.GetLinearProblem();

  // Construct a solver object for this problem
  AztecOO solver(*Problem);
\endcode
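The gallery is only a convenience for this example. If Triutils is not available, the Epetra_CrsMatrix can be assembled by hand with the standard Epetra calls; the following minimal sketch (not part of the original example) builds a 1D Laplacian on a linear map, reusing the communicator Comm defined above:

\code
  // sketch only: assemble a tridiagonal 1D Laplacian without Triutils
  int NumGlobalRows = 100;
  Epetra_Map RowMap(NumGlobalRows, 0, Comm);
  Epetra_CrsMatrix B(Copy, RowMap, 3);

  double Values[3];
  int    Indices[3];
  for (int i = 0; i < RowMap.NumMyElements(); ++i) {
    int GlobalRow = RowMap.GID(i);
    int NumEntries = 0;
    if (GlobalRow > 0) {
      Indices[NumEntries] = GlobalRow - 1; Values[NumEntries++] = -1.0;
    }
    Indices[NumEntries] = GlobalRow; Values[NumEntries++] = 2.0;
    if (GlobalRow < NumGlobalRows - 1) {
      Indices[NumEntries] = GlobalRow + 1; Values[NumEntries++] = -1.0;
    }
    B.InsertGlobalValues(GlobalRow, NumEntries, Values, Indices);
  }
  B.FillComplete();
\endcode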
This is the beginning of the ML part. We need to create an ML handle, convert the input Epetra matrix into ML_Operator format, and create an ML_Aggregate structure (which will hold the data about the aggregates).
\code
  int nLevels = 10;
  int maxMgLevels = 6;
  ML_Set_PrintLevel(10);
  ML *ml_handle;

  ML_Create(&ml_handle, maxMgLevels);

  // convert the Epetra matrix to ML_Operator format and put the finest-level
  // matrix into position maxMgLevels - 1 of the hierarchy
  EpetraMatrix2MLMatrix(ml_handle, maxMgLevels-1, A);

  // create an Aggregate object; this will contain information
  // about the aggregation process for each level
  ML_Aggregate *agg_object;
  ML_Aggregate_Create(&agg_object);
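  // (optional, not in the original example) the aggregation can be tuned
  // before the hierarchy is generated, e.g. with the standard ML_Aggregate
  // calls; they are left at their defaults here and shown commented out:
  //
  //   ML_Aggregate_Set_MaxCoarseSize(agg_object, 30);
  //   ML_Aggregate_Set_DampingFactor(agg_object, 4.0/3.0);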
  ML_Aggregate_Set_CoarsenScheme_Uncoupled(agg_object);
  nLevels = ML_Gen_MGHierarchy_UsingAggregation(ml_handle, maxMgLevels-1,
                                                ML_DECREASING, agg_object);
\endcode

Now, we define the ID of the coarsest level and set up the smoothers. We assume that the problem is symmetric. We also set a simple coarse solver: symmetric Gauss-Seidel.

\code
  int coarsestLevel = maxMgLevels - nLevels;
  int nits = 1;

  for(int level = maxMgLevels-1; level > coarsestLevel; level--)
    ML_Gen_Smoother_Cheby(ml_handle, level, ML_BOTH, 30., 3);

  ML_Gen_Smoother_SymGaussSeidel(ml_handle, coarsestLevel, ML_BOTH,
                                 nits, ML_DEFAULT);

  // generate the solver
  ML_Gen_Solver(ml_handle, ML_MGV, maxMgLevels-1, coarsestLevel);

  // create an Epetra_Operator based on the previously created hierarchy
  MultiLevelOperator MLPrec(ml_handle, Comm, *Map, *Map);
\endcode
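If a different smoother is preferred on the fine levels, the Chebyshev loop above can be replaced. A minimal sketch (not part of the original example) that reuses the symmetric Gauss-Seidel smoother already employed on the coarsest level:

\code
  // sketch: symmetric Gauss-Seidel on all fine levels instead of Chebyshev
  for(int level = maxMgLevels-1; level > coarsestLevel; level--)
    ML_Gen_Smoother_SymGaussSeidel(ml_handle, level, ML_BOTH, nits, ML_DEFAULT);
\endcode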
This is the end of the ML settings. We just need to instruct AztecOO to use MLPrec as the preconditioner.
\code
  solver.SetPrecOperator(&MLPrec);
  solver.SetAztecOption(AZ_solver, AZ_cg_condnum);
  solver.SetAztecOption(AZ_output, 16);

  // solve with AztecOO
  solver.Iterate(500, 1e-5);

#ifdef EPETRA_MPI
  MPI_Finalize();
#endif

  return 0;
}
\endcode
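After the solve, the usual AztecOO queries can be used to report convergence. A small sketch (not part of the original example; it would go right after the call to solver.Iterate() and requires &lt;iostream&gt;):

\code
  // sketch: report iteration count, true residual, and elapsed time
  // (NumIters() and TrueResidual() are AztecOO queries; ElapsedTime()
  //  belongs to the Epetra_Time object created above)
  if (Comm.MyPID() == 0) {
    std::cout << "Iterations    = " << solver.NumIters()      << std::endl;
    std::cout << "True residual = " << solver.TrueResidual()  << std::endl;
    std::cout << "Solution time = " << Time.ElapsedTime() << " (s)" << std::endl;
  }
\endcode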
To execute this example from the command line, you may try something like this:
$ mpirun -np 4 ./ml_operator.exe -problem_type=laplace_3d -problem_size=1000
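For a serial (non-MPI) build, the same options apply; for instance, with the laplace_2d problem used as the default in the code above:

$ ./ml_operator.exe -problem_type=laplace_2d -problem_size=900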