la_linsys_new.hh
/*!
 *
 * Copyright (C) 2007 Technical University of Liberec. All rights reserved.
 *
 * Please add the following reference to Flow123d on your project site if you use the program for any purpose,
 * especially for academic research:
 * Flow123d, Research Centre: Advanced Remedial Technologies, Technical University of Liberec, Czech Republic
 *
 * This program is free software; you can redistribute it and/or modify it under the terms
 * of the GNU General Public License version 3 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
 * without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 * See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along with this program; if not,
 * write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 *
 * $Id$
 * $Revision$
 * $LastChangedBy$
 * $LastChangedDate$
 *
 * @file
 * @brief
 *
 * @author
 */

/**
 * Classes for block linear systems. Namely they should provide the following functionality:
 * 1) interface to various PETSc (or possibly other) matrices, with preallocation
 * through the set_values method
 * 2) for coupled problems, creation of the global linear system by connecting the sub-systems of the individual
 * equations. The working cycle could be (a usage sketch follows this comment):
 * 1. creation of sub-problems up to the global problem (just matrix dimensions and types)
 * 2. pre-assembly -> preallocation
 * 3. assembly - by the same functions
 * 4. matrix modifications (Schur complements, assembly of one global matrix, preconditioners, ...)
 * 5. iterative solution in parallel
 * 6. for nonlinear/time problems: jump to 3. and reuse the structures
 *
 * 3) solution through explicit or implicit Schur complements
 * 4) preconditioning for coupled problems
 * 5) In assembly, the access to the matrices in the LinSys hierarchy is through successive calls of the block(i,j) method,
 * but there should also be a class or another mechanism to represent a coordinate in the hierarchy. Then one can access a MatrixSimple
 * directly by two coordinates and consistently access the correct part of the RHS vector.
 *
 * Virtual test cases:
 * - Only water flow solved by 1, 2, 3, or 4 (Newton BC) Schur complements, explicit and implicit.
 * Ideal: switch between implicit and explicit by only changing the particular class or something similarly simple.
 * -
 *
 * Unsolved design problems:
 * - PETSc does not support block vectors, i.e. the vectors should be global and sub-vectors should be taken from them.
 * Possible solution: the block global vector is built hierarchically (sizes only) and then serves as a
 * "factory" for global vectors and sub-vectors, or for access to sub-vectors of the global vector
 * .. details
 *
 */
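/**
 * A minimal usage sketch of the double-assembly working cycle described above
 * (pre-assembly for preallocation, then the real assembly by the same code, then reuse).
 * It uses only the MatrixSimple interface declared below; ConcreteMatrix, the index
 * arrays and the local element values are hypothetical.
 *
 * @code
 * ConcreteMatrix mat(loc_row_size, col_size);      // some concrete MatrixSimple
 *
 * mat.start_allocation();                          // 1st pass: values are only counted
 * for (...) mat.mat_set_values(nrow, rows, ncol, cols, local_vals);
 *
 * mat.start_add_assembly();                        // 2nd pass: the same loop really inserts values
 * for (...) mat.mat_set_values(nrow, rows, ncol, cols, local_vals);
 * mat.finalize();
 *
 * // nonlinear/time problems: keep the structure and repeat from start_add_assembly()
 * @endcode
 */
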
/**
 * Abstract class: a frontend for PETSc matrices.
 * Purpose:
 * - allow preallocation through double assembly (a sketch of a possible implementation follows this class)
 * - provide an interface to PETSc, since PETSc evolves and it is simpler to make changes in one place
 * - provide a documented interface to PETSc and thereby document the PETSc functionality we use
 *
 * Questions:
 * - how to set up the column distribution; it is necessary only for some of the PETSc formats, isn't it?
 */
class MatrixSimple
{
public:
    /// Possible states of the matrix.
    typedef enum {
        INSERT=INSERT_VALUES,
        ADD=ADD_VALUES,
    } SetValuesMode;

    /// Construct a parallel system with given local size.
    MatrixSimple(unsigned int loc_row_size, unsigned int col_size);

    /// Multiply by a PETSc vector: y = Ax.
    void multiply(Vec x, Vec y);
    /// Multiply a vector and add another one: y = Ax + b.
    void multiply_add(Vec x, Vec b, Vec y);
    /// Multiply matrices .. has to deal with preallocation of the result and reuse of its structure.
    void multiply_mat(...);
    /// Add and scale sparse matrices ... has to deal with different nonzero structures;
    /// in such a case the PETSc routines are incredibly slow.
    void axpy(...);
    void aypx(...);

    /// @name Access members
    //@{
    /// Access to the distribution of rows; through it you can also get the matrix size and the local size.
    /// For convenience we also provide direct functions.
    inline Distribution &get_row_ds();
    inline Distribution &get_col_ds();
    /// Get global system size.
    inline unsigned int size();
    /// Get matrix. SHOULD NOT BE USED !!!
    inline const Mat &get_matrix()
    { return matrix; }
    //@}

    virtual void start_allocation()=0;
    void start_add_assembly();
    void start_insert_assembly();
    void finalize(MatAssemblyType assembly_type=MAT_FINAL_ASSEMBLY);

    virtual void preallocate_matrix()=0;
    virtual void preallocate_values(int nrow,int *rows,int ncol,int *cols)=0;

    virtual void view_local_matrix()=0;

    /// Set a full rectangular submatrix of the system matrix.
    void mat_set_values(int nrow,int *rows,int ncol,int *cols,PetscScalar *vals);

    /// Set one element of the system matrix.
    inline void mat_set_value(int row,int col,PetscScalar val);

    inline void set_symmetric(bool flag = true);
    inline bool is_symmetric();

    inline void set_positive_definite(bool flag = true);
    inline bool is_positive_definite();

    /// Output the matrix in Matlab format, possibly with a given renumbering of rows/cols.
    void view(std::ostream &output_stream, int * row_mapping = NULL, int * col_mapping = NULL);

    virtual ~MatrixSimple();

protected:
    Distribution row_ds;      ///< Distribution of contiguous blocks of system rows among the processors.
    Distribution col_ds;      ///< Distribution of the multiplied vector.
    bool symmetric;           ///< Flag for a symmetric system.
    bool positive_definite;   ///< Flag for a positive definite system.
    SetValuesMode status;     ///< Set-values status of the linear system.

    Mat matrix;               ///< PETSc matrix of the problem.

};
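
/**
 * A sketch of how a concrete subclass might implement the preallocation hooks on top of
 * a parallel PETSc AIJ matrix (not part of the design above; the is_local_col()/local_row()
 * helpers are hypothetical, the remaining pure virtual members are omitted).
 *
 * @code
 * class MatrixSimpleAIJ : public MatrixSimple {
 *     std::vector<PetscInt> d_nnz, o_nnz;   // per-row nonzero counts (diagonal / off-process part)
 *
 *     virtual void preallocate_values(int nrow, int *rows, int ncol, int *cols) {
 *         // allocation pass: only count local / off-process couplings per local row
 *         for (int i = 0; i < nrow; i++)
 *             for (int j = 0; j < ncol; j++)
 *                 ( is_local_col(cols[j]) ? d_nnz : o_nnz )[ local_row(rows[i]) ]++;
 *     }
 *     virtual void preallocate_matrix() {
 *         // hand the counted pattern to PETSc before the real assembly pass
 *         MatMPIAIJSetPreallocation(matrix, 0, &d_nnz[0], 0, &o_nnz[0]);
 *     }
 * };
 * @endcode
 */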

/**
 * Array of MatrixSimple.
 */
class MatrixArray
{
private:
    std::vector< std::vector< MatrixSimple * > > mat_array;

public:
    /// MatrixArray ... several constructors for the various ways of gluing together
    /// MatrixSimple blocks and already existing MatrixArray objects
    MatrixArray(...);

    MatrixSimple *block(unsigned int i_row, unsigned int i_col);
    void multiply_vec(Vector x, Vector y);
};
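
/**
 * Sketch of the blockwise product multiply_vec() is meant to perform, y_i = sum_j A_ij * x_j,
 * assuming block(i,j) may return NULL for an empty block; the block counts and the
 * sub-vectors x_j, y_i are hypothetical here.
 *
 * @code
 * for (unsigned int i = 0; i < n_block_rows; i++) {
 *     // start from zero and accumulate the contributions of the i-th block row
 *     VecZeroEntries(y_i);
 *     for (unsigned int j = 0; j < n_block_cols; j++)
 *         if (block(i, j) != NULL)
 *             block(i, j)->multiply_add(x_j, y_i, y_i);   // y_i = A_ij * x_j + y_i
 * }
 * @endcode
 */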

/**
 * Array or hierarchy over the global vector.
 */
class VecArray
{

};
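
/**
 * One possible direction for the "block vector" design problem mentioned in the header
 * comment: keep a single global PETSc Vec and let VecArray hand out views of its blocks.
 * A sketch using standard PETSc calls, assuming each block is a contiguous index range;
 * global_offset()/local_size() are hypothetical bookkeeping and exact signatures depend
 * on the PETSc version.
 *
 * @code
 * IS  is;
 * Vec sub;
 * ISCreateStride(PETSC_COMM_WORLD, local_size(b), global_offset(b), 1, &is);
 * VecGetSubVector(global_vec, is, &sub);      // view of the b-th block
 * // ... use 'sub' as the block vector ...
 * VecRestoreSubVector(global_vec, is, &sub);
 * ISDestroy(&is);
 * @endcode
 */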


/**
 * LinSys - a matrix and a particular way to compute the solution for a given RHS, i.e. this class
 * can perform the action of the matrix inverse.
 *
 * This is an abstract class for the members of a possible hierarchical tree of the whole system.
 *
 */
class LinSys {
    LinSys *block(unsigned int i, unsigned int j);
    MatrixArray *matrix();

    virtual void solve(Vector solution, Vector RHS);
};

/**
 * The LinSys for direct factorization; its matrix has to be simple. The factorization will be done
 * through PETSc, so the backend library used can be influenced by run-time parameters.
 */
class LinSysDirect : public LinSys
{
};
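
/**
 * The run-time parameters mentioned above are ordinary PETSc options. A typical choice for a
 * direct solve, assuming an external factorization package such as MUMPS is available (the exact
 * option name differs between PETSc versions):
 *
 * @code
 * // -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps
 * @endcode
 */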

/**
 * Array of four submatrices A00 A10 A01 A11, where A00 has to be a LinSys and A11 has to be a
 * simple matrix (MatrixSimple). Solves the system by recursive solution of the Schur complement
 * system. Can be implemented as an interface to PETSc functionality, see PCFIELDSPLIT.
 */
class LinSysImplicitSchur : public LinSys
{
    LinSys *block(unsigned int i, unsigned int j);
    MatrixArray *matrix();
};
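
/**
 * Sketch of the PETSc PCFIELDSPLIT functionality this class would wrap. The index sets is0/is1
 * selecting the A00 and A11 unknowns are assumed to be built from the block structure; exact call
 * signatures depend on the PETSc version.
 *
 * @code
 * KSP ksp;  PC pc;
 * KSPCreate(PETSC_COMM_WORLD, &ksp);
 * KSPSetOperators(ksp, A, A);                   // the whole 2x2 block matrix
 * KSPGetPC(ksp, &pc);
 * PCSetType(pc, PCFIELDSPLIT);
 * PCFieldSplitSetIS(pc, "0", is0);              // unknowns of the A00 block
 * PCFieldSplitSetIS(pc, "1", is1);              // unknowns of the A11 block
 * PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);  // solve via the Schur complement
 * KSPSolve(ksp, b, x);
 * @endcode
 */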
/**
 * This LinSys should, after assembly, construct an explicit Schur complement system. The user has
 * to provide a LinSysDirect for one of the diagonal blocks.
 *
 * There is a problem of how to make several successive Schur complements possible.
 * This cannot be organised bottom up like ImplicitSchur.
 */
class LinSysExplicitSchur : public LinSys
{
};
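
/**
 * The algebra behind the explicit construction, written out for the 2x2 block case (a conceptual
 * sketch only; the per-column loop uses the LinSysDirect solve of the A00 block and hypothetical
 * column accessors):
 *
 * @code
 * //   | A00 A01 | |x0|   |b0|
 * //   | A10 A11 | |x1| = |b1|
 * //
 * // Explicit Schur complement:  S = A11 - A10 * inv(A00) * A01
 * // built column by column:
 * //   w_j = inv(A00) * a01_j            // one direct solve per column of A01
 * //   s_j = a11_j - A10 * w_j           // j-th column of S
 * // Reduced system:      S * x1 = b1 - A10 * inv(A00) * b0
 * // Back substitution:   x0 = inv(A00) * (b0 - A01 * x1)
 * @endcode
 */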

/**
 * Array of four matrices 00 01 10 11, where 00 and 11 are LinSys and their application is used as
 * a preconditioner for the whole system; again see PCFIELDSPLIT.
 */
class LinSysJacobiCoupling : public LinSys
{
    LinSys *block11();
    MatrixArray *block12();
    MatrixArray *block21();
    LinSys *block22();
};
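
/**
 * The block-Jacobi coupling corresponds to the additive variant of PCFIELDSPLIT; with the splits
 * registered as in the sketch above it can be selected purely by run-time options (the inner
 * preconditioners below are just an example choice):
 *
 * @code
 * // -pc_type fieldsplit -pc_fieldsplit_type additive
 * // -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type ilu
 * // -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type ilu
 * @endcode
 */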