[Mesa-users] No mass loss secondary star binary with large nuclear network
Rob Farmer
robert.j.farmer37 at gmail.com
Mon Sep 27 18:09:43 UTC 2021
Hi
So I can reproduce your problem. I found that the fix is to increase
max_num_accretion_species to 300, but it only worked after running ./clean
&& ./mk in $MESA_DIR, not just ./mk. I suspect the build system isn't
picking up the dependencies between files properly, so the change doesn't
propagate.
>But also, that number is only linked to the amount of accreted species and
not to the mass loss of the secondary star.
Two things: first, there is wind accretion, so we trigger that code even
without RLOF (see set_accretion_composition). Second, if the value isn't
large enough, we end up writing to array elements beyond the end of an
array, which is undefined behavior and bad. The 63-isotope network gets
lucky and never overwrites anything important, but the ~200-isotope network
does enough damage to set the mdot to 0.
Rob
On Mon, 27 Sept 2021 at 15:03, Hannah Brinkman <brinkmanhe at gmail.com> wrote:
> Hey Rob,
>
> I've changed that number in all of the MESA versions I've used, and I did
> check whether it was changed when I found this issue. But also, that number
> is only linked to the number of accreted species, not to the mass loss of
> the secondary star. And, as it was initially set to 50, it should also
> affect the 63-isotope network, which is not the case here, nor was it when
> I encountered that particular issue the first time. Before I changed the
> number, the mass loss was fine; it was only when I wanted to try accretion
> that this issue came up.
>
> And I do think the wind routine is being called for star 2, because for
> the 63-isotope network the mass loss works for both stars. I am only
> changing the network size, nothing else in the inlists or wind routine, and
> suddenly the mass loss is no longer happening from my secondary star, while
> the primary stars are as good as identical. To me it seems the issue is
> somewhere deeper in MESA, but I have no idea what exactly is going on. I
> hope that we can solve this somehow.
>
> Cheers,
> Hannah
>
> On Mon, 27 Sep 2021 at 13:41, Rob Farmer <robert.j.farmer37 at gmail.com> wrote:
>
>> Hi,
>> Thanks. I wanted to see whether the wind routine was getting called, to
>> rule out that it somehow isn't being called for star 2 (always start
>> with the simple things when debugging).
>>
>> This reminded me of an old email exchange in which you reported to Josiah
>> problems with binaries and large networks. I notice the solution there was
>> to increase the variable
>> max_num_accretion_species to 300, but that discussion was for version
>> 11701. Have you tried that change with your 10398 version of MESA?
>>
>> Rob
>>
>>
>> On Mon, 27 Sept 2021 at 13:13, Hannah Brinkman <brinkmanhe at gmail.com>
>> wrote:
>>
>>> Hey Rob,
>>>
>>> I would expect both stars to have a non-RLOF mass-loss rate of about
>>> 10^-6 to 10^-4 Msun/yr (when running a 40 and a 38 Msun star). I am not
>>> evolving the system far enough to get RLOF from the secondary star (it is
>>> working fine for the primary star), so I don't know whether that works,
>>> but the normal wind loss from the secondary does not work for the large
>>> networks. Because it does work with the smaller network, I am not sure the
>>> wind prescription is the problem here. Also, the secondary is unable to
>>> accrete mass even when I change the beta parameter to something that
>>> should produce accretion.
>>>
>>> When I check the terminal output (see below), the primary star generally
>>> has a mass-loss rate around the expected value, 10^-4.88 in this case,
>>> but with the large 209-isotope network the secondary gives a mass-loss
>>> rate of 10^-99, which is pretty weird. With the smaller 63-isotope
>>> network, both stars have a mass-loss rate of about 10^-6.
>>>
>>> Also, I forgot to add in my earlier email that I am using MESA version
>>> 10398. I hope this answers your question.
>>>
>>> Cheers,
>>> Hannah
>>>
>>>
>>> step lg_Tcntr Teff lg_LH lg_Lnuc Mass
>>> H_rich H_cntr N_cntr Y_surf X_avg eta_cntr zones retry
>>> lg_dt_yr lg_Dcntr lg_R lg_L3a lg_Lneu lg_Mdot
>>> He_core He_cntr O_cntr Z_surf Y_avg gam_cntr iters bckup
>>> age_yr lg_Pcntr lg_L lg_LZ lg_Psurf lg_Dsurf
>>> C_core C_cntr Ne_cntr Si_cntr Z_avg v_div_cs dt_limit
>>>
>>> __________________________________________________________________________________________________________________________________________________
>>>
>>>        320   7.917905  2.493E+04   4.470532   4.470650  33.313714
>>>  18.262532   0.000001   0.008622   0.280000   0.270384  -6.194855   1127      0
>>>   0.811019   1.402673   1.560609  -9.326081   3.308120  *-4.885319*
>>>  15.051182   0.986340   0.000044   0.014000   0.715866   0.020675      3      0
>>> 4.6988E+06  17.395131   5.661272   0.904684   3.186307  -9.755941
>>>   0.000000   0.000133   0.001131   0.000640  1.375E-02  0.399E-03   max increase
>>>
>>>
>>>
>>> __________________________________________________________________________________________________________________________________________________
>>>
>>> step lg_Tcntr Teff lg_LH lg_Lnuc Mass
>>> H_rich H_cntr N_cntr Y_surf X_avg eta_cntr zones retry
>>> lg_dt_yr lg_Dcntr lg_R lg_L3a lg_Lneu lg_Mdot
>>> He_core He_cntr O_cntr Z_surf Y_avg gam_cntr iters bckup
>>> age_yr lg_Pcntr lg_L lg_LZ lg_Psurf lg_Dsurf
>>> C_core C_cntr Ne_cntr Si_cntr Z_avg v_div_cs dt_limit
>>>
>>> __________________________________________________________________________________________________________________________________________________
>>>
>>>      2 320   7.689454  2.888E+04   5.586301   5.586319  35.974185
>>>  35.974185   0.056064   0.008693   0.280000   0.360420  -6.952929    972      0
>>>   0.811019   0.702614   1.394861 -19.765704   4.418987  *-99.000000*
>>>   0.000000   0.930268   0.000052   0.014000   0.625772   0.017552      8      0
>>> 4.6988E+06  16.494141   5.585612   1.201553   3.456575  -9.525587
>>>   0.000000   0.000082   0.001141   0.000640  1.381E-02  0.120E-07   max increase
>>>
>>>
>>>
>>> __________________________________________________________________________________________________________________________________________________
>>>
>>> binary_step M1+M2 separ Porb e M2/M1
>>> pm_i donor_i dot_Mmt eff Jorb dot_J dot_Jmb
>>> lg_dt M1 R1 P1 dot_e vorb1
>>> RL1 Rl_gap1 dot_M1 dot_Medd spin1 dot_Jgr dot_Jls
>>> age_yr M2 R2 P2 Eorb vorb2
>>> RL2 Rl_gap2 dot_M2 L_acc spin2 dot_Jml rlo_iters
>>>
>>> __________________________________________________________________________________________________________________________________________________
>>>
>>> bin 320 69.287899 1.229E+02 18.975597 0.000E+00 1.079861
>>> 0 1 -2.052E-28 0.000E+00 9.652E+54 -6.208E+40 0.000E+00
>>> 0.811019 33.313714 36.358769 0.000000 0.000E+00 170.243504
>>> 45.766631 -2.056E-01 -1.302E-05 1.000E+99 0.000E+00 -9.251E+34 0.000E+00
>>> 4.6988E+06 35.974185 24.823359 0.000000 -1.850E+49 157.653147
>>> 47.401811 -4.763E-01 0.000E+00 0.000E+00 0.000E+00 -6.208E+40 1
>>>
>>> On Mon, 27 Sep 2021 at 13:01, Rob Farmer <robert.j.farmer37 at gmail.com> wrote:
>>>
>>>> Hi Hannah,
>>>>
>>>> Thanks for the inlists. Just checking: what mass loss are you
>>>> expecting to occur (and which isn't occurring), winds or RLOF (or both)?
>>>>
>>>> If it's the wind mass loss that is missing, then I see you have an
>>>> other_wind routine, so if you add:
>>>>
>>>> write(*,*) id, w
>>>>
>>>> to your other_wind routine, do you get output for both id=1 and id=2
>>>> (one line for each star)?
>>>>
>>>> Rob
>>>>
>>>> On Mon, 27 Sept 2021 at 10:50, Hannah Brinkman via Mesa-users <
>>>> mesa-users at lists.mesastar.org> wrote:
>>>>
>>>>> Dear mesa-users,
>>>>>
>>>>> During one of my recent runs with a binary star in MESA, I noticed
>>>>> that my secondary star was not losing any mass. The star is fully
>>>>> evolved and, as far as I can see, behaves normally for a star of
>>>>> constant mass. However, when I use the same inlists and
>>>>> run_star_extras.f file with a smaller nuclear network (63 isotopes
>>>>> instead of 209), the star loses mass as it should. I did another test
>>>>> with a network of 125 isotopes, and the secondary does not lose mass
>>>>> for this network either.
>>>>>
>>>>> When I use the inlists for a single star, the star behaves normally,
>>>>> and the primary star of the binary also behaves normally. Can someone
>>>>> explain to me what is going on with my secondary star? I've attached
>>>>> the inlists, run_star_extras.f, and the 209-isotope network.
>>>>>
>>>>> With kind regards,
>>>>> Hannah
>>>>> _______________________________________________
>>>>> mesa-users at lists.mesastar.org
>>>>> https://lists.mesastar.org/mailman/listinfo/mesa-users
>>>>>
>>>>>