‘Segmentation fault’ error after running ndown

sophiazjjj

New member
Hi,
I have completed a four-domain nested simulation at 12150, 4050, 1350, and 450 m resolution using WRF 4.0, and used ndown to create the driving files for a 150 m domain. However, when continuing with a two-domain nested simulation at 150 m and 50 m resolution, I encountered a "Segmentation fault" in module_sf_sfclay. The error initially occurred after about 10 minutes of simulation. I tried increasing the wrfout frequency of the 450 m domain so that ndown would generate wrfbdy at a higher frequency, and I also extended the simulation time to 3 hours. However, I encountered the same issue, and increasing the wrfbdy frequency to every 10 seconds still had no significant effect. How can I resolve this?
Hope someone can help me.
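For reference, the frequency of the wrfbdy file that ndown produces is set by how often the 450 m parent domain writes history and by interval_seconds in the namelist used for the ndown step. A minimal sketch of the relevant &time_control entries is below; the values are illustrative only and are not taken from the attached namelist (1).input.

! 450 m parent run (&time_control) - only the d04 entry matters for ndown input
&time_control
 history_interval_s = 3600, 3600, 3600, 10,   ! write d04 wrfout every 10 s (first three values illustrative)
 frames_per_outfile = 1, 1, 1, 1,
/

! ndown run (&time_control)
&time_control
 interval_seconds = 10,                       ! must match the d04 wrfout frequency above
/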


[e2403r3n3:86661:0:86661] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffe15e229c0)
==== backtrace (tid: 86661) ====
0 0x0000000000050b35 ucs_debug_print_backtrace() ???:0
1 0x0000000002cc91c1 module_sf_sfclayrev_mp_psim_stable_() ???:0
2 0x0000000002cc4734 module_sf_sfclayrev_mp_sfclayrev1d_() ???:0
3 0x0000000002cc2333 module_sf_sfclayrev_mp_sfclayrev_() ???:0
4 0x000000000258ccf3 module_surface_driver_mp_surface_driver_() ???:0
5 0x0000000001c1a409 module_first_rk_step_part1_mp_first_rk_step_part1_() ???:0
6 0x0000000001557e07 solve_em_() ???:0
7 0x00000000013f7ed0 solve_interface_() ???:0
8 0x0000000000582b93 module_integrate_mp_integrate_() ???:0
9 0x0000000000411b91 module_wrf_top_mp_wrf_run_() ???:0
10 0x0000000000411b4f MAIN__() ???:0
11 0x0000000000411ae2 main() ???:0
12 0x00000000000223d5 __libc_start_main() ???:0
13 0x00000000004119e9 _start() ???:0
 

Attachments

  • namelist (1).input (9.2 KB)
  • rsl.tar.gz (6.4 MB)
Hi,
After running ndown, are you then trying to run wrf for two domains? You should only be running wrf.exe for one domain (the new higher-resolution domain) at a time, when using the ndown option. It's possible I'm just misunderstanding what I'm reading. If so, can you explain what stage you are in when you're running both domains together, and getting the segmentation fault?

Can you try using more processors (say, 576 - or a 24x24 processor decomposition) and see if that makes any difference?
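If it's useful, a 24x24 decomposition can also be requested explicitly in &domains rather than letting WRF choose one. This is just a sketch: it assumes the job is launched with exactly 576 MPI tasks, since nproc_x * nproc_y must match the task count.

&domains
 nproc_x = 24,
 nproc_y = 24,
/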
 
Thanks for your response. I am indeed running WRF for two domains. Regarding the segmentation fault: after debugging and fixing an uninitialized variable in sf_sfclay, I then ran into CFL errors, so I reduced the time step, and the simulation is now running normally.

Could you please let me know what problems might arise when running wrf for two domains after using ndown? Can I use the wrfbdy_d02 produced by ndown (renamed to wrfbdy_d01), together with the wrfinput_d01 and wrfinput_d02 created from ERA5, to drive the one-way nested WRF simulation for the next two domains?
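For reference on the time-step change mentioned above: a 150 m domain needs a sub-second step (the usual 6 x dx-in-km rule of thumb gives roughly 0.9 s), which is set with the fractional time-step variables in &domains. A minimal sketch with illustrative values, not taken from the attached namelist:

&domains
 time_step           = 0,
 time_step_fract_num = 3,
 time_step_fract_den = 4,   ! effective dt = 0 + 3/4 = 0.75 s on the 150 m domain
/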
 
Hi,
The more I think about it, it may be okay to run it for two domains. As long as the output looks reasonable, it's likely not a problem! I'm glad you were able to solve the issue. Thanks for the update!
 