Chapter 8
Algorithms
The complete example is in the file algo.edp.
8.1 Conjugate Gradient/GMRES
Suppose we want to solve the Euler problem: find x ∈ ℝⁿ such that

∇J(x) = ( ∂J/∂xᵢ (x) )ᵢ = 0    (8.1)

where J is a functional (to minimize, for example) from ℝⁿ to ℝ.
If the functional is convex, we can use the conjugate gradient method to solve the problem; we just need a function (named dJ, for example) that computes ∇J. The two parameters are then the name of that function, with prototype func real[int] dJ(real[int] & xx), and a vector x of type real[int] used to initialize the process and to return the result.
Given an initial value x⁽⁰⁾, a maximal number imax of iterations, and an error tolerance 0 < ϵ < 1: put x = x⁽⁰⁾; then

NLCG(∇J, x, precon=M, nbiter=imax, eps=ϵ);

will give the solution x of ∇J(x) = 0. We can omit the parameters precon, nbiter, eps. Here M is the preconditioner, whose default is the identity matrix. The stopping test is

||∇J(x)||_P ≤ ϵ ||∇J(x⁽⁰⁾)||_P

Writing a minus value in eps=, i.e.

NLCG(∇J, x, precon=M, nbiter=imax, eps=-ϵ);

we can use the stopping test

||∇J(x)||²_P ≤ |ϵ|
The parameters of these three functions are:

- nbiter= sets the maximal number of iterations (100 by default).
- precon= sets the preconditioner function (P, for example); by default it is the identity. Note that the prototype is func real[int] P(real[int] &x).
- eps= sets the value ε of the stopping test (10⁻⁶ by default). If positive, the relative test ||∇J(x)||_P ≤ ε||∇J(x⁽⁰⁾)||_P is used; otherwise, the absolute test ||∇J(x)||²_P ≤ |ε| is used.
- veps= sets and returns the value of the stopping test. If positive, the relative test ||∇J(x)||_P ≤ ε||∇J(x⁽⁰⁾)||_P is used; otherwise, the absolute test ||∇J(x)||²_P ≤ |ε| is used. The returned value is minus the actual stopping test (remark: this is useful inside a loop, as in Example 8.1 below).
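As a minimal illustration of the calling convention, here is a sketch that minimizes the convex quadratic J(x) = Σᵢ ( (i+1)xᵢ²/2 − xᵢ ) by solving ∇J(x) = 0. This snippet is not part of algo.edp; the names bb, dJq, xq are ours:

// minimal NLCG sketch (illustrative names, not from algo.edp)
real[int] bb(10);
bb = 1; // right-hand side of the gradient
func real[int] dJq(real[int] & x)
{
  for (int i=0; i<x.n; i++)
    x[i] = (i+1)*x[i] - bb[i]; // overwrite the argument with the gradient
  return x; // return the argument, never a local array
}
real[int] xq(10);
xq = 0; // initial guess
NLCG(dJq, xq, nbiter=100, eps=1.e-6); // precon omitted: identity
// the minimizer satisfies xq[i] = 1/(i+1)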
Example 8.1 (algo.edp) For a given function b, let us find the minimizer u of the functional

J(u) = ∫_Ω ( ½ f(|∇u|²) − b u )

under the boundary condition u = 0 on ∂Ω, where f is a given convex function whose derivative is denoted df = f′.
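The code below assumes a setup along the following lines; this is only a plausible sketch (the exact definitions, in particular the choice of f, are in algo.edp):

// assumed setup (the exact one is in algo.edp)
mesh Th = square(20, 20);  // unit-square mesh
fespace Vh(Th, P1);        // space for the unknown u
fespace Ph(Th, P0);        // piecewise-constant space for alpha
func b = 1.;               // the given right-hand side (assumed)
real a = 0.001;            // hypothetical parameter of f
func real f(real x)  { return a*x + x - log(1+x); } // one possible convex f
func real df(real x) { return a + x/(1+x); }        // its derivative f'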
func real J(real[int] & u)
{
  Vh w; w[] = u; // copy the array u into the finite element function w
  real r = int2d(Th)( 0.5*f( dx(w)*dx(w) + dy(w)*dy(w) ) - b*w );
  cout << "J(u) = " << r << " " << u.min << " " << u.max << endl;
  return r;
}
// -----------------------
Vh u=0;   // the current value of the solution
Ph alpha; // to store df(|∇u|²)
int iter=0;
alpha = df( dx(u)*dx(u) + dy(u)*dy(u) ); // optimization
func real[int] dJ(real[int] & u)
{
  int verb=verbosity; verbosity=0;
  Vh w; w[] = u; // copy the array u into the finite element function w
  alpha = df( dx(w)*dx(w) + dy(w)*dy(w) ); // optimization
  varf au(uh,vh) = int2d(Th)( alpha*( dx(w)*dx(vh) + dy(w)*dy(vh) ) - b*vh )
                 + on(1,2,3,4,uh=0);
  u = au(0,Vh); // assemble the gradient as a linear form
  verbosity=verb;
  return u; // warning: no return of a local array
}
We also want to construct a preconditioner C by solving the problem: find uh ∈ V0h such that

∀vh ∈ V0h,  ∫_Ω α ∇uh · ∇vh = ∫_Ω b vh

where α = f′(|∇u|²).
varf alap(uh,vh) = int2d(Th)( alpha*( dx(uh)*dx(vh) + dy(uh)*dy(vh) ) )
                 + on(1,2,3,4,uh=0);
varf amass(uh,vh) = int2d(Th)( uh*vh ) + on(1,2,3,4,uh=0);
matrix Amass = amass(Vh,Vh,solver=CG);                  // mass matrix
matrix Alap  = alap(Vh,Vh,solver=Cholesky,factorize=1); // factorized stiffness matrix
// the preconditioner function
func real[int] C(real[int] & u)
{
  real[int] w = Amass*u;
  u = Alap^-1*w;
  return u; // no return of a local array variable
}
To solve the problem, we make 10 iterations of the conjugate gradient, recompute the preconditioner, and restart the conjugate gradient:
verbosity=5;
int conv=0;
real eps=1e-6;
for(int i=0;i<20;i++)
{
  conv = NLCG(dJ,u[],nbiter=10,precon=C,veps=eps); // veps returns minus the actual stop test
  if (conv) break; // if converged, break the loop
  alpha = df( dx(u)*dx(u) + dy(u)*dy(u) ); // recompute alpha (optimization)
  Alap = alap(Vh,Vh,solver=Cholesky,factorize=1);
  cout << " restart with new preconditioner " << conv
       << " eps = " << eps << endl;
}
plot (u,wait=1,cmm="solution with NLCG");
For a given symmetric positive matrix A, consider the quadratic form

J(x) = ½ xᵀAx − bᵀx

Then J(x) is minimized by the solution x of Ax = b. In this case, we can use the function LinearCG:

LinearCG(A, x, precon=M, nbiter=imax, eps=±ϵ);
If A is not symmetric, we can use the GMRES (Generalized Minimal Residual) algorithm:

LinearGMRES(A, x, precon=M, nbiter=imax, eps=±ϵ);
We can also use the nonlinear version of the GMRES algorithm (the functional J need only be convex):

LinearGMRES(∇J, x, precon=M, nbiter=imax, eps=±ϵ);
For details of these algorithms, refer to [14, Chapter IV, 1.3].
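As with NLCG, the first argument may be a function implementing the affine map x ↦ Ax − b, with the right-hand side folded into the function (the same convention as the DJ example of the next section). A minimal sketch under that assumption, with illustrative names (bcg, Aff, xcg) that are not from algo.edp:

// solve A x = b for A = diag(1,2,...,10) and b = 1 via LinearCG
real[int] bcg(10);
bcg = 1;
func real[int] Aff(real[int] & x)
{
  for (int i=0; i<x.n; i++)
    x[i] = (i+1)*x[i] - bcg[i]; // affine map: the RHS is included
  return x;
}
real[int] xcg(10);
xcg = 0; // initial guess
LinearCG(Aff, xcg, nbiter=100, eps=1.e-10);
// LinearGMRES is called with the same arguments when A is not symmetric
cout << "xcg = " << xcg << endl; // expect xcg[i] close to 1/(i+1)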
8.2 Optimization
Two algorithms from the COOOL package [27] are interfaced: the Newton-Raphson method (called Newton) and the BFGS method. Be careful with these algorithms, because their implementations use full (dense) matrices.
Example of use:
real[int] b(10),u(10);
func real J(real[int] & u)
{
real s=0;
for (int i=0;i<u.n;i++)
s += (i+1)*u[i]*u[i]*0.5 - b[i]*u[i];
cout << "J ="<< s << " u =" << u[0] << " " << u[1] << "...\n" ;
return s;
}
// the gradient of J (an affine function of u: the right-hand side b is included)
func real[int] DJ(real[int] &u)
{
  for (int i=0;i<u.n;i++)
    u[i]=(i+1)*u[i]-b[i];
  return u; // returning the (non-local) argument is OK
};
b=1; u=2; // set the right-hand side and the initial guess
BFGS(J,DJ,u,eps=1.e-6,nbiter=20,nbiterline=20);
cout << "BFGS: J(u) = " << J(u) << endl;