[mesa-users] convergence

Natasha Ivanova nata.ivanova at ualberta.ca
Fri May 13 13:05:08 EDT 2016


Hi all,

This is a follow-up to the question that my student Kenny sent yesterday.

I understand perfectly how implicit and explicit MT schemes work in
principle (indeed, the implicit scheme has some roots in my own code from
way back), but the oscillations in the MT rate that MESA shows are a
purely numerical feature of MESA. The fact that they appear in the MESA
tutorials does not validate the oscillations; it just demonstrates that
this is a big unfixed problem. A similar issue was fixed in other binary
codes ages ago, and I believe the nature is the same: taking too small a
timestep rather than improving convergence. Generally speaking, I do not
understand why this approach was accepted for MESA when it did not work
well for previous stellar codes, but we can have a longer discussion with
Frank in a few months, when we will both be at KITP for an extended stay.

Let us look at this on a simple example, using real data from MESA.

age                       radius
5.018075106014762074e+06 4.813512093255021829e-01
5.026973111848642118e+06 4.812979394829801083e-01
5.035887099594448693e+06 4.812442647659711525e-01
5.044816387584990822e+06 4.811937099257804773e-01
5.053761369794959202e+06 4.811492877224504694e-01
5.062721936671409756e+06 4.811004430349119509e-01
5.071697463133651763e+06 4.810552508037580499e-01
5.080688271018664353e+06 4.810112151819854742e-01
5.089694798251089640e+06 4.808135777471279626e-01

Do you see the jump in the last model? It comes from the accumulated
numerical noise of all the previous models (any digit in the 0.000x place
is meaningless here; one can trust the radius only to 0.001), because in
all of these models the iterations were *stopped once* the residual became
"acceptable".

So the models share a kind of common "offset" from the true value, and
that offset grows from model to model until it reaches 0.001. While such a
jump in radius is fine for single-star evolution, this offset makes the MT
rate jump by a factor of 10.

This is because the true change in the radius over a timestep, dR =
(dR/dt)*dt, due to the physics, is comparable to the numerical noise
epsilon_R, where dt is the timestep and epsilon_R is the residual accepted
in each model. If one decreases the timestep, the effect of the numerical
noise on the MT calculations only increases.
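
A toy demonstration of this scaling (my own sketch, not MESA code, and all
numbers illustrative): let the true radius decline linearly, add a fixed
per-model "residual" noise epsilon_R, and measure the scatter in the
finite-difference dR/dt for different dt. The noise in dR/dt grows as
epsilon_R/dt while the true slope stays the same, so cutting the timestep
by 10 makes the inferred MT rate 10 times noisier:

import numpy as np

rng = np.random.default_rng(42)
Rdot_true = -5.0e-9   # true slope, roughly the value in the table above
eps_R = 1.0e-5        # per-model "accepted residual" in R

for dt in (1.0e4, 1.0e3, 1.0e2):
    t = np.arange(0.0, 1.0e6, dt)
    R = Rdot_true * t + eps_R * rng.standard_normal(t.size)
    dRdt = np.diff(R) / dt
    # the noise amplitude in dR/dt scales as eps_R/dt; the signal does not
    print(f"dt = {dt:8.0f}  scatter(dR/dt) = {dRdt.std():.2e}  "
          f"signal = {Rdot_true:.2e}")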

The way this is fixed in other mass-transfer codes is to increase the
number of iterations (getting to a smaller epsilon_R) while increasing dt
by all possible means. The dR/dt obtained this way carries less artificial
noise and better represents the actual MT. This is especially true for
implicit methods, and this is exactly why the implicit method was
introduced in Ivanova & Taam (2004): to boost the timestep compared to the
explicit method while removing the numerical oscillations! One has to
understand that, by its very nature, an implicit method becomes
meaningless when the timestep gets too small; if one makes the timestep
too small, then anything one gets out of it could be a numerical artifact.
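
To make the contrast concrete, here is the general shape of such an
implicit MT iteration (a bare sketch only; evolve_with_mdot and
roche_lobe_radius are hypothetical stand-ins for the real solver calls,
and the bracketing values are illustrative). The point is that mdot is
iterated *within* one large timestep until the star just fills its Roche
lobe, instead of being re-evaluated explicitly on many tiny steps:

def implicit_mdot_step(star, dt, tol=1.0e-6, max_iter=100):
    # One implicit MT step: bisect on mdot until the post-step radius
    # matches the Roche-lobe radius to a relative tolerance tol.
    # evolve_with_mdot() and roche_lobe_radius() are hypothetical
    # placeholders for the actual stellar-code calls.
    mdot_lo, mdot_hi = 0.0, 1.0e-2          # bracketing guesses, Msun/yr
    for _ in range(max_iter):
        mdot = 0.5 * (mdot_lo + mdot_hi)
        trial = evolve_with_mdot(star, mdot, dt)
        f = trial.radius - roche_lobe_radius(trial)
        if abs(f) < tol * trial.radius:
            return trial, mdot              # converged within one big step
        if f > 0.0:
            mdot_lo = mdot                  # still overfilling: need more MT
        else:
            mdot_hi = mdot                  # detached: need less MT
    raise RuntimeError("implicit mdot iteration did not converge")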

I am really surprised that asteroseismology studies have not run into the
same problem, with small timesteps producing a bunch of artificial waves,
those waves even having a *period* that can be traced back to the timestep
and the convergence. And in general, whenever the accepted change of a
variable between models is smaller than the residual, it should be
considered worrisome and flagged by the code!

Anyway, the standard and easy fix would be to introduce a rigidly enforced
minimum timestep, used as follows: the model is first iterated toward the
first, strong criterion; it is then accepted as long as the convergence is
at least OK by the second criterion; and the timestep is never decreased
below the minimum value.
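
In pseudocode (my sketch of the proposed control flow, not MESA's actual
retry logic; solve() is a hypothetical stand-in for the Newton iteration
and the tolerance values are illustrative):

def take_step(model, dt, dt_min, solve,
              tol_strict=1.0e-7, tol_loose=1.0e-5):
    # solve(model, dt, tol) returns True on convergence to tolerance tol.
    while True:
        if solve(model, dt, tol_strict):   # first, strong criterion
            return dt
        if solve(model, dt, tol_loose):    # accepted if at least OK
            return dt
        if dt <= dt_min:                   # rigid floor on the timestep:
            return dt                      # accept rather than cut dt further
        dt = max(dt / 2.0, dt_min)         # otherwise retry with smaller dt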

If I understand correctly, then the use of

tol_correction_norm = 1d-7
tol_max_correction = 1d-5
tol_residual_norm1 = 1d-7
tol_max_residual1 = 1d-6
iter_for_resid_tol2 = 200
tol_residual_norm2 = 1d99
tol_max_residual2 = 1d99

should force the code to first do 200 iterations before decreasing the
timestep, and that should help the MT issue. The problem here, of course,
is that there is only one set of tolerances for all variables. It would
make more sense if, as in other codes, the user could choose *which*
variables to keep at a tighter convergence. I used to have different sets
of acceptable tolerances during MT calculations, and such values are
definitely not the same for all main variables. Having four variables for
which one can give different values (P, R, T, L) should make the code
flexible enough to handle various problems, not just MT.

If I have missed some other conditions for the solver, please correct me,
but without an enforced minimum timestep it will not work anyway.
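
What I have in mind could be as simple as this (an illustrative sketch,
not an existing MESA option; the tolerance values are made up): a
convergence test that compares each main variable's correction against
its own tolerance, so that R can be held tighter than, say, L during MT:

# Hypothetical per-variable tolerances; values are illustrative only.
TOL = {"P": 1.0e-6, "R": 1.0e-8, "T": 1.0e-6, "L": 1.0e-5}

def converged(corrections):
    # corrections: max relative Newton correction per variable.
    # Accept the model only when every variable meets its own tolerance.
    return all(abs(corrections[v]) < TOL[v] for v in TOL)

# Example: R is held an order of magnitude tighter than L, as one would
# want during mass transfer.
print(converged({"P": 3e-7, "R": 5e-9, "T": 8e-7, "L": 4e-6}))  # True
print(converged({"P": 3e-7, "R": 5e-7, "T": 8e-7, "L": 4e-6}))  # False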

Best
Natasha