[mesa-users] Sudden envelope mixing in a low-mass star, and a segmentation fault in debugging info

Bill Paxton paxton at kitp.ucsb.edu
Mon Jan 18 15:50:20 EST 2016


First, concerning the segfault: check the terminal output just before it happens -- I get lines like this:

 failed to open plot_data/solve_logs/names.data
 failed in append_data for plot_data/solve_logs/corr_lnPgas.log
 failed in append_data for plot_data/solve_logs/corr_lnT.log

The code assumes those directories exist and crashes when they don't.
You can get the necessary files by copying the directories 'plot_data' and 'plotters' from any of the test_suite cases.
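If you only want to silence the "failed to open" errors above without copying a whole test_suite case, creating the missing directory by hand may also work -- a minimal sketch, assuming the writer just needs the directory to exist (the commented copy commands use a hypothetical test_suite case name and assume $MESA_DIR is set):

```shell
# Minimal fix: create the directory the solve-log writer expects
# (path taken from the "failed to open" messages above).
mkdir -p plot_data/solve_logs

# Or copy the full directories from a test_suite case, e.g.
# (case name is illustrative; use any case in your installation):
#   cp -r "$MESA_DIR"/star/test_suite/1M_pre_ms_to_wd/plot_data .
#   cp -r "$MESA_DIR"/star/test_suite/1M_pre_ms_to_wd/plotters .
```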

Second, concerning the jump in envelope mixing.  The general rule when encountering something like this is to crank up the resolution and see what happens.  MESA isn't magic -- it isn't smart enough to notice that it is producing bogus results because it is taking timesteps that are too large or is using a grid that is too coarse.   It is up to the user to check the results and make sure they aren't artifacts of inadequate resolution.   I know that you know all of this -- I'm just taking advantage of this chance to preach to mesa-users again!  ;)

For this case, the jump happens along with the 1st retry, so you might set max_number_retries = 1 to make the run stop when the problem happens.  Then do multiple restarts from about 50 steps before the problem, each time with reduced values for max_years_for_timestep and mesh_delta_coeff.  The inlist currently sets max_years_for_timestep = 5d6; keep reducing that until it is down by at least a factor of 100.  Similarly, the inlist has mesh_delta_coeff = 1; decrease that until you have at least 2000 grid points (instead of the 800 or so you have now).    Then, once you have better resolution in time and space, let the run continue beyond the problem long enough to make plots to compare to what you have now.    Sometimes the problem just goes away when the resolution is increased.  But in other cases, it stubbornly stays around even at high resolution.   Then it gets interesting!    Let us know what you find.
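The suggestions above translate into &controls settings along these lines -- the specific values for max_years_for_timestep and mesh_delta_coeff below are illustrative first steps of the convergence study, not recommendations; keep tightening them as described:

```fortran
&controls
   ! stop as soon as the first retry happens, to pin down the problem step
   max_number_retries = 1

   ! time resolution: inlist currently has 5d6; reduce in stages
   ! until it is down by at least a factor of 100
   max_years_for_timestep = 5d5

   ! spatial resolution: inlist currently has 1; decrease until the
   ! model has at least ~2000 grid points
   mesh_delta_coeff = 0.5
/ ! end of controls namelist
```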


On Jan 18, 2016, at 7:58 AM, Warrick Ball wrote:

> Hi all,
> Earlier, Earl Bellinger asked me about a suspicious HR track he found in his modelling.  I've attached a plot showing the main sequence: you can see the jumps around the middle (logT ~= 3.88).  I've also attached the relevant inlist.  Just watch out: it saves a *lot* of data (several GB) in the form of all the profiles.  (This is MESA revision 7624.)
> We've been looking at the output and found that the issue appears at model number 1180.  I've attached a plot with the hydrogen abundance at models 1180 and 1181, which are before and after the first jump.  As can be seen, the outer 28% (by radius) of model 1181 appears to be mixed, even though all the diffusion coefficients of mixing are effectively nil.  This also corresponds to a convergence failure, so the first question we have is why the star is behaving this way.
> Diffusion is on and all the metals and helium have settled out of the envelope, but I don't see why this should be a numerical problem.  The back-and-forth seems to occur as the star mixes part of the envelope, the metals and helium drain out again, the star re-mixes part of the envelope, and so on.  But the sudden mixing near the surface is a mystery.
> The second issue is related.  Following Bill Wolf's excellent tutorial [1], I tried to get the debug data for the hydro solver.  You can activate this by uncommenting the last three lines in &controls:
>      ! report_hydro_solver_progress = .true.
>      ! hydro_inspectB_flag = .true.
>      ! hydro_dump_call_number = 1281
> Much to my surprise, this causes a segfault on my machine at hydro call 1272.  I've attached the last ~150 lines from the terminal as "segfault.txt".  The backtrace reads:
> Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
> Backtrace for this error:
> #0  0x7FAB239F4557
> #1  0x7FAB239F4B6E
> #2  0x7FAB22AEBD9F
> #3  0x6B9447 in __hydro_newton_procs_MOD_write_solve_logs
> #4  0x6BA03C in __hydro_newton_procs_MOD_inspectb
> #5  0x69EAA8 in __star_newton_MOD_do_newton
> #6  0x6A1779 in __star_newton_MOD_newton
> #7  0x617D29 in newt.10440 at solve_hydro.f90:0
> #8  0x618F61 in __solve_hydro_MOD_hydro_newton_step
> #9  0x61A697 in __solve_hydro_MOD_do_hydro_newton
> #10  0x61BE46 in __solve_hydro_MOD_do_hydro_converge
> #11  0x627567 in __struct_burn_mix_MOD_do_struct_burn_mix
> #12  0x5200DC in __evolve_MOD_do_evolve_step_part2
> #13  0x40B291 in __star_lib_MOD_star_evolve_step
> #14  0x41D364 in __run_star_support_MOD_run1_star
> #15  0x406AD2 in __run_star_MOD_do_run_star
> #16  0x406B6F in MAIN__ at run.f:0
> ./rn: line 9:  8120 Segmentation fault      ./star
> So the second question we have is why the code segfaults.  I haven't yet dug down into where this is coming from, and I'll try to in the next few days if I have a chance.  For now, any help is as always very welcome and hugely appreciated!
> Cheers,
> Warrick
> [1] http://wmwolf.github.io/projects/mesa_debugging/
> ------------
> Warrick Ball
> Postdoc, Institut für Astrophysik Göttingen
> wball at astro.physik.uni-goettingen.de
> +49 (0) 551 39 5069
> <inlist_1.0> <earl_HR.png> <earls_bug.png> <segfault.txt>
> mesa-users mailing list
> mesa-users at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/mesa-users
