Meeting Notes 2019 Software - ESCOMP/CTSM GitHub Wiki

Dec 16th, 2019

CLM Software

Agenda

  • Erik -- met with Naoki/Mariana on mizuRoute. Naoki is going to do the nuopc cap prototype with Erik and Mariana's help. Note that we won't allow multi-instance ROF in the initial version when running with mizuRoute.
  • Erik -- note, there are changes in cime needed to get a standalone mizuRoute (new "R" compset) working through CESM. The path that makes the most sense is to make an R2000MOSART compset to pave the way. The additions in MOSART itself are small. Some of the DLND work I also need to do for Delft. We'll also need to run a case to generate forcing for mizuRoute to run on.
  • Erik -- We need to run an I compset with CPLHIST data saved to use in DLND (I2000Clm50SP, GSWP3 forcing, at half degree), and save the output so we can use it in the future. I could ask Naoki to run his case with extra CPLHIST output.
  • Erik -- Naoki proposes three resolutions for mizuRoute: HDMA, MERIT, NHD. They have hundreds of thousands of polygons and millions of fine scale vertices, so the syntax we've used for degrees or nlat x nlon doesn't work here.
  • Erik -- Code of Conduct. Mariana and Jim asked to have a softlink to the DOI for it, rather than have the written text. If the code is going to be understood and followed, I think it HAS to be written out. Having a softlink says "I don't care about this -- it's someone else's problem". So I want to push back against this desire. But, I'm likely to need to do more education and advocacy on this.
  • Bill - Massive module renaming: https://github.com/ESCOMP/CTSM/issues/869
  • Erik -- We noticed that the User's Guide links aren't working. This is just a holdover from the transition from docBook. The links, figures and tables need to be gone through one by one to fix them.
  • Erik -- we met with the VR mesh people. We are going to use user-mods directories and changes to config_grid.xml largely as the way forward. There's also the suggestion of a few changes in cime and CAM to help this effort as well. And one request for CLM, which is to remove the setting of resolution in namelist_definition_ctsm.xml; see next.
  • Erik -- To remove the setting of resolution in namelist_definition_ctsm.xml we need to read the list of resolutions from config_grid, probably using query_config. We have a perl script that does this among other things; it needs to be extended to use query_config, or should be redone in python. Possibly, cime already has python code to read the list of resolutions. I thought there was an issue for this, but I don't see it, so we should create one.

Removing the list of valid resolutions in namelist_definition

One stumbling block to adding a new resolution is that CTSM lists valid resolutions.

This is needed for the sake of some tools.

Erik suggests querying config_grids for this.

Bill wonders if it might be hard to determine the resolutions that are valid for CTSM based on this, and whether we should instead keep a hard-coded list of resolutions, but put that list in some different xml variable so that it isn't checked in a system run, but is only used for the toolchain.
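If we do go the query_config route, the parsing step in the toolchain might look something like the sketch below. The `alias:` output format and the sample text are assumptions for illustration, not actual query_config output; a real implementation would need to match whatever format the cime version in use actually emits.

```python
import re

def parse_grid_aliases(query_config_output):
    """Extract grid alias names from query_config-style text output.

    Assumes (hypothetically) that each grid appears on a line like
    'alias: f09_g17'; the exact format varies between cime versions.
    """
    return re.findall(r"alias:\s*(\S+)", query_config_output)

# Hypothetical fragment of query_config output, for illustration only.
sample = """
alias: f09_g17
  non-default grids are: atm:0.9x1.25 ...
alias: f19_g17
  non-default grids are: atm:1.9x2.5 ...
"""
valid_resolutions = parse_grid_aliases(sample)
```

The build-namelist tooling could then validate a requested resolution against `valid_resolutions` instead of the hard-coded list in namelist_definition_ctsm.xml.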

mizuRoute grids

Is there a problem with having names that don't describe the resolution? There probably isn't anything we can do about that.

But we may want to at least clearly distinguish between global vs. regional grid. e.g., prefix of 'global' or 'conus', or maybe 'global' is implied. Maybe have a prefix of 'r' for all regional cases, or some other way to easily distinguish regional grids.
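Under the proposed prefix convention, a small helper in the tooling could tell regional from global grids by name alone. This is only a sketch of the hypothetical convention (an 'r_' prefix or a region tag like 'conus' marks regional grids; the grid names shown are made up):

```python
def is_regional_grid(grid_name):
    """Guess whether a mizuRoute grid name refers to a regional domain.

    Assumes the proposed (not yet adopted) convention that regional grid
    names carry an 'r_' prefix or a region tag like 'conus', while global
    grids either say 'global' or have no prefix at all.
    """
    name = grid_name.lower()
    return name.startswith("r_") or name.startswith("conus")

# e.g. 'r_HDMA' or 'conusNHD' would read as regional, plain 'HDMA' as global
```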

Dec 5, 2019

LILAC status

Discussion on lilac_in:

  • Do we want the mesh files in here or in lnd_in / mosart_in? LILAC doesn't actually need the lnd or rof mesh files. Bill suggests putting those in lnd_in / mosart_in because they are grid-dependent and so we might be able to pick up the correct file more easily if we have logic in the component's build-namelist script, though note that LILAC does need the atmosphere mesh file.

Discussion on ctsm.cfg:

  • Mike asks if we can move this information into the wrf namelist. Currently, the issue is that this is read into python, which knows how to parse a config file but not a Fortran namelist formatted file. We could write a little python function to find the right namelist group and either read it directly or convert it to a config file.
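A minimal sketch of such a function is below. The `lilac_ctsm` group name and its entries are made up for illustration, and the parser assumes simple one-per-line `name = value` entries (no arrays, repeated groups, or continuations); a real implementation would likely use a proper namelist reader such as f90nml.

```python
import re

def namelist_group_to_dict(text, group):
    """Extract one group from Fortran namelist text as a dict of strings.

    A minimal sketch: assumes simple 'name = value' entries, one per line,
    terminated by a '/' line. Values are returned as raw strings.
    """
    match = re.search(r"&%s\s*(.*?)^\s*/" % re.escape(group),
                      text, re.DOTALL | re.MULTILINE)
    if match is None:
        raise ValueError("namelist group '%s' not found" % group)
    settings = {}
    for line in match.group(1).splitlines():
        line = line.split("!", 1)[0].strip()   # drop Fortran-style comments
        if "=" in line:
            name, value = line.split("=", 1)
            settings[name.strip()] = value.strip().rstrip(",")
    return settings

# Hypothetical namelist group, for illustration only.
sample = """
&lilac_ctsm
  atm_mesh_file = 'wrf_mesh.nc'
  start_type = 'startup'
/
"""
cfg = namelist_group_to_dict(sample, "lilac_ctsm")
```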

River coupling: Mariana will put in a logical to turn it off. Mike notes that WRF people won't care about river coupling.

Dec 2nd, CLM Software meeting

Agenda

  • Erik: Question about the last meeting, which I missed, on case for constants: I like caps for CPP tokens, but lowercase for non-CPP. I saw that Negin liked caps for constants, and Bill lowercase.
  • Erik: Following up again, how do we encourage outside people to follow coding style? I've tried to tell people to follow the style, but haven't had much luck getting them to actually change their code.
  • Erik: Follow up on Mod.F90 and Type.F90 endings. I see the point in changing this. It would be cleanest if we change the code all at once, so we have the examples we want people to see.
  • Erik: When can we add a ctsm5_1 placeholder? Also note that if we use ctsm5_1 this enables the perl code to use its current system of comparing version numbers for specific namelist items.
  • Erik: Note, on incrementing by .1 rather than .5. Once a number is introduced, it sets up a set of defaults to preserve going forward. Changes that aren't just a default change will end up changing answers for a larger set of versions. The previous view was that .5 versions were CLM-standalone and .0 versions were fully coupled versions. Those would be the only ones tuned. So my assumption would be that the other versions wouldn't be tuned, but we'd still be preserving their answers? (at least to some level). Would tuning changes be applied to all versions or just the finalized .0, .5 versions?
  • Erik: Should fates be updated for ctsm2.1.2? Also there is a MOSART bug fix that could be updated on the release branch (cold start fix and 8th degree file fix). Should that be done?
  • Erik: For variable-mesh toolkit they asked to have the ability to have a single directory you can put all your files in, and for the model to recognize and use them from there. This would be both for CAM and CLM. See: https://docs.google.com/document/d/1a2yqh2qzf3Xt4dzT7ZiGrXnjHsPkagdB_xxQsh-Rdf4/edit#heading=h.aeg729lw6d19 I'm meeting with them next Wednesday, will present the solution used in PTCLM where a user-mods directory is created that can be used.
  • Erik: FYI. Based on feedback planning on $CASE.mizuRoute.r.yyyy-mm-dd-sssss.nc being filenames
  • Erik: FYI. E3SM people are setting up runoff forcing for DLND. So this is something we could take advantage of for mizuRoute.
  • Erik: Question: is MOSART being developed for E3SM, but not CTSM? If so, can we bring the changes they are making into CESM?
  • Erik: CN Matrix code has updates for non-matrix that happen in CN?StateUpdate?Mod.F90, but the corresponding update for matrix happens in a different place. I don't think this is good. Chris was arguing that having these updates in StateUpdate is bad, but that argues to put both the Matrix update and the non-Matrix update elsewhere as well. I think they should be handled in the same way, rather than a different way. Part of this is because the real state updates don't actually happen until the matrix operations, so the code that replaces the state updates isn't quite doing the same thing. For now I have comments in the code to direct between the two.
  • Erik: I updated CN matrix by linking in the Matrix spinup method. The short spinup tests (SPP) run fine. There is a longer 3-month test that isn't running (and it didn't run before either).
  • Erik: Option to run with a "climatological" forcing, rather than cycling over years? A true three-hour climatology would have constant drizzling rain, so might not be suitable. But you could construct some type of "average" day for each month, which would only span a single year but represent the climatological average.
  • Bill: including LILAC in CTSM repo?
  • Bill: Next Thurs software meeting?
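Erik's "average day per month" idea in the agenda above could be sketched roughly as follows. The flat (year, month, hour, value) data layout is an assumption for illustration; real forcing would come from netCDF streams, and the scheme's suitability for precipitation is exactly the open question noted above.

```python
from collections import defaultdict

def climatological_day_per_month(records):
    """Average multi-year forcing into one representative day per month.

    `records` is a list of (year, month, hour, value) samples, e.g. 3-hourly
    precipitation. Each (month, hour) slot is averaged across all years and
    days, giving a single synthetic year of monthly diurnal cycles.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for year, month, hour, value in records:
        sums[(month, hour)] += value
        counts[(month, hour)] += 1
    return {slot: sums[slot] / counts[slot] for slot in sums}

# Two Januaries' worth of made-up 3-hourly samples
records = [
    (2000, 1, 0, 2.0), (2000, 1, 3, 0.0),
    (2001, 1, 0, 4.0), (2001, 1, 3, 2.0),
]
clim = climatological_day_per_month(records)
```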

Style guide

For constants: Erik prefers lowercase in general, caps for CPP. We're happy with this.

How to get people to follow this? Probably best to present it up-front.

Mod.F90 and Type.F90 endings: Erik suggests changing this all at once.

ctsm5.1

For now, this would require duplicating a bunch of default xml entries.

Dave feels that it's really getting time to do this.

There are two larger outstanding changes that are desired:

  • Albedo changes (requires a different param file)
  • Biomass heat storage

Thoughts about incrementing version numbers by 0.1 or 0.5

Erik has concerns about the growing number of versions we'd be supporting. This extra complexity has a real cost.

Given that we're no longer tied to CESM versions, it may be unavoidable to have somewhat more frequent versions than before - but we don't want to go crazy with this.

FATES updates

Bill and Dave feel that FATES should be updated on master and not on the release branch. But Erik says this is the opposite of what the FATES group had decided. He'll talk to them about this.

CN Matrix: location of state updates

Some matrix updates are done alongside the non-matrix counterparts in the CN?StateUpdate?Mod files, whereas others happen in different places. We generally agree that it makes sense to have everything done in the CN?StateUpdate?Mod files, so that analogous code is kept together.

In the CN Matrix case, note that the actual state updates aren't happening there: it's setting matrix elements, and the state updates happen during the later matrix solve.

Bill asks: In the CN Matrix case, are there separate state updates for the different pieces that are done separately in the non-CN matrix piece (StateUpdate1, StateUpdate2, StateUpdate3)? Or are they all done together? The latter seems like it might be questionable scientifically. This question is tied in with the original one, because it is a little confusing to have the matrix state updates done somewhere other than the state update routines, and it seems ideal if both methods are doing their state updates at the same time.
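For reference, the two styles of state update being discussed can be illustrated with a toy carbon-pool example. The pool names, transfer rates, and the explicit-Euler form below are illustrative assumptions, not the actual CN Matrix code; the point is only that the non-matrix code applies each flux to the state directly, while the matrix code fills a transfer matrix and defers the update to the solve.

```python
def direct_update(pools, fluxes, dt):
    """Non-matrix style: apply each donor->receiver flux to the pools
    immediately, as the CN?StateUpdate?Mod routines do."""
    new = list(pools)
    for donor, receiver, rate in fluxes:   # rate = fraction of donor pool per unit time
        amount = rate * pools[donor] * dt
        new[donor] -= amount
        new[receiver] += amount
    return new

def matrix_update(pools, fluxes, dt):
    """Matrix style: the state-update code only fills transfer matrix A;
    the actual update happens later, here as x_new = x + dt * A x."""
    n = len(pools)
    A = [[0.0] * n for _ in range(n)]
    for donor, receiver, rate in fluxes:
        A[donor][donor] -= rate
        A[receiver][donor] += rate
    return [pools[i] + dt * sum(A[i][j] * pools[j] for j in range(n))
            for i in range(n)]

# e.g. leaf -> litter -> soil carbon with made-up first-order rates
pools = [100.0, 50.0, 20.0]
fluxes = [(0, 1, 0.10), (1, 2, 0.05)]
```

Both give the same answer for a single step, which is why keeping the matrix-filling code next to its non-matrix counterpart (rather than somewhere else entirely) makes the correspondence easy to check.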

Including LILAC in CTSM repository?

Dave & Erik are fine either way. Bill, Mariana and Negin will discuss this further tomorrow.

Nov 7th, CTSM Software meeting

Agenda

  • Bill: APSIM
  • Bill: Concerns about shanty-town development: FAN, etc., where the code looks so different from most CTSM code, and doesn't follow typical patterns
  • Bill: proposed naming conventions (see below)
  • Bill: What standards should we hold code to before we integrate it into master - vs. cleaning it up after the fact? And specifically, what standards do we want FAN to meet before integrating it? (And should I help with its review?)
    • I'd be okay with features coming in in an "experimental" mode with lower standards, as long as we have standards for doing more careful reviews before something moves out of "experimental" to "standard / production-ready" modes.
  • Naoki: ESMF mesh file

CLM5.5 (CTSM5.5) - or CLM5.1 (CTSM5.1)

There are a number of science changes that have been waiting to come in. Are we ready to start things going on a 5.5 - or 5.1 - version?

One reason we haven't moved ahead with this yet is because we were waiting for the python version in order to have more robust handling of multiple version numbers. But since that might still be a while, we should probably just move ahead with something a bit ugly for now.

Our proposal (which we'd like to run by the SSC):

  • Let's count by .1 instead of .5
  • Let's renumber CTSM to 5 so that CTSM and CLM science versioning are connected
  • In the future, as soon as we're done with one version (say, 5.1), we'd introduce a science option for the next version (say, 5.2) so that new options could go into that.

Style guide

Let's gradually work up a style guide. Feeling is that we should ask people to follow it, but maybe we don't need to be completely strict about it.

  • Each variable should have a meaningful name, with description and units.
  • Style for subroutine headers.
  • Standard for spacing (i.e., indentation).
  • If there's a module we particularly like, point to that.

Proposed naming conventions

Bill's notes before the meeting

We currently are inconsistent in our naming of modules, types, subroutines, etc. I'd like to propose the following moving forward. I don't plan to go back and change old code any time soon, but would like to have more consistency moving forward, and we can document this for the sake of new code coming in. I don't care very much about the particular conventions we choose, but I would like us to have some convention here.

  • Module names: PascalCase. For the increasingly-common case where we have both a type and some science in a single module (object-oriented), I'd say no need to end this in Type or Mod (since it's ambiguous which should be used). We can continue to use Type for "dumb" types and Mod for typeless modules... though maybe it makes sense to start dropping "Mod" on the latter... and maybe it also makes sense to drop "Type" on types (it may be a "dumb" type now, but may grow to have some logic attached to it).

  • Subroutines and functions: PascalCase

  • Type names: lowercase_with_underscores

  • Variable names: lowercase_with_underscores

  • Constants: lowercase_with_underscores (I have often used UPPERCASE_WITH_UNDERSCORES in the past, and I could see sticking with that, too)

Discussion

Sean's feeling: prefers lowercase with underscore, at least for variables, less strong of a feeling for modules.

Maybe go with lowercase with underscores for everything except module names.

For constants:

  • Negin: prefers all caps for constants.
  • Dave: prefers lowercase
  • Bill: used to prefer caps, now prefers lowercase

Ending with "Mod" or "Type"? Feeling is: get rid of "Mod", but keep "Type" for things that are truly just dumb types without significant science code?

Standards for bringing in code

We had some discussion of Bill's agenda item:

What standards should we hold code to before we integrate it into master - vs. cleaning it up after the fact? And specifically, what standards do we want FAN to meet before integrating it? (And should I help with its review?)

  • I'd be okay with features coming in in an "experimental" mode with lower standards, as long as we have standards for doing more careful reviews before something moves out of "experimental" to "standard / production-ready" modes.

But with no real conclusions for now.

General thoughts on allowing others to bring in options that we may not want

Should we have constraints on people bringing things in that we may not want and don't want to support (e.g., from agencies)? Dave feels we should be pretty open in terms of small things (e.g., a few-line parameterization), though we should put more control on bigger things like a whole new crop model.

November 4th, 2019

Agenda

  • Erik: On release branch I brought in the change to PATH, but PGI is still not running on izumi
  • Erik: For mizuRoute, I think I should make the initial buildnml that will just make a hardwired namelist. It will listen to a couple CESM variables, but mostly be fixed. The initial cut won't be able to use user_nl_mizuroute to change anything, that will need to be a stage-II development. Does this all sound good? See updates Naoki has made to the doc: https://docs.google.com/document/d/1gYesvDTF1nxoiSaiNaST0oOmiLtGDFjBtGlF8P8okJM
  • Erik: Still have unexpected answer change
  • Bill: small tag with testing just on izumi

pgi on izumi on release branch

Feeling is: let's not worry about this for now

mizuroute parameter input

We're okay with not having support for user_nl_mizuroute, etc. for now.

Unexpected answer change with soil moisture stream

Erik did figure out some issues:

  • Issue with quietly allowing missing values on dataset
  • Linear interpolation of stream doesn't trap the spval, but instead averages the spval with something else. (Ideally, it seems like the streams would trap spval and avoid trying to interpolate it, but not sure if that's possible with streams.)
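The spval-trapping behavior described above might look like the sketch below. The spval convention and the two-point weighting form are assumptions about the stream code, not its actual implementation:

```python
def interp_traps_spval(v0, v1, w, spval=1.0e36):
    """Linearly interpolate between two stream samples, but never blend a
    fill value (spval) with real data.

    If either endpoint is missing, fall back to the other; return spval
    only when both are missing. w is the weight on the second sample.
    """
    if v0 >= spval and v1 >= spval:
        return spval
    if v0 >= spval:
        return v1
    if v1 >= spval:
        return v0
    return (1.0 - w) * v0 + w * v1
```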

However, he's still having a problem with the unexpected answer change that arose when he put in place the threading fix. Erik has done a fair amount of checking to rule out that there's something terribly wrong with the new code; as far as he can tell, the new code seems to be working right. Dave suggests letting this go; we're all okay with that.

October 28, 2019

Agenda

  • Dave: irrigation question
  • Erik: mizuRoute changes for CESM https://docs.google.com/document/d/1gYesvDTF1nxoiSaiNaST0oOmiLtGDFjBtGlF8P8okJM
  • Erik: Julius found that using the half degree dataset for f19 changes fluxes by about 20% (because of variability in the dataset). So we should probably use the prepared datasets for f09 and f19 and then half degree for everything else.
  • Erik: Given the above, should we look into adding a conservative mapping option to streams?
  • Erik: Chris figured out the CN-matrix issue. One issue is running with and without the vertical soil C/N profile. Right now there is a _vr array that needs to be set to zero in all cases. But the better way to do it might be to NOT use it when vr is off. Should I do that?
  • Erik: Did Sam get answers with mask identical to before, or just reasonably close now?
  • Erik: I'm adding a new history variable for the prescribed soil moisture to get the stream value read in from the file (applied to all soil columns, not just nat-veg as is done in the code, and with the time-interpolation applied). Plan to have it as an optional history variable, only allowed if prescribed soil moisture is turned on.

Turning off irrigation in an SP case

Our recollection is that, on master, you should be able to turn irrigation off in an SP case, but not on the release branch.

FAN dataset resolution

When running at f19 but with the half degree dataset, fluxes changed by about 20%. So for now, will stick with the plan of using different datasets for f09 and f19 that have been generated with conservative remapping.

Eventually, if FAN becomes standard, it seems like it would be good to keep things consistent by only needing the source data at a single resolution - either by using mksurfdata_map (which supports conservative remapping) or by building support for conservative mapping into streams.

October 21, 2019

Agenda

  • Bill: https://github.com/ESCOMP/CTSM/issues/118 - should this be assigned to Keith & Sean?
  • Erik: FAN stream datasets at 0.5x0.5, f09, f19. So I assume we should use the f09 and f19 datasets for f09 and f19, explicitly interpolate from f19 for lower resolutions (T62, T31, T5, T21, 4x5, 10x15, ne4np4, ne16np4), and then use 0.5x0.5 for everything else. By the way, the issue with running FAN for 2000_control was actually with interpolating the FAN dataset to a different resolution. With the new datasets the interpolation should work now.
  • Erik: FYI. CN-matrix: I'm tracking down a change that makes a bunch of the tests fail (C balance error). Chris committed several changes that were NOT just to spinup mode. I think I need to show that those are causing problems and have Chris resolve it.
  • Erik: Still stuck on the threading bug fix for soil moisture streams. It seems to me it shouldn't change answers, and yet it does. One thing that makes this difficult is that it changes results at the beginning of the time-step, but other physics will draw it away from the input values. So I can't just verify that it's the same as the input dataset. So I'm adding an extra history variable so that I can verify it's identical to the input dataset. I talked to Sean about this situation; he thought that having it done at the beginning of the time-step is correct, so that you can view the deviation from the dataset as what the physics is trying to pull the solution toward. So the difference between the model and the dataset could be something useful to look at.
  • Erik: What should go in the CESM2.2.0 release from CTSM's perspective?
  • Erik: Should Bill and I talk about the LGM spinup?
  • Bill: I've been stuck on some frac_sno-related issues

FAN stream datasets

Why 3 different resolutions? It sounds like there may be some particularities about interpolation. Erik will talk with Will about this.

If we do stick with having these three datasets, Bill asks if we can keep things simpler by using 0.5 deg for all interpolation, rather than f19 for a certain set of resolutions.

CESM2.2 release

Water isotopes could be a target for this.

We could think about including some other optional features like hillslope hydrology, but this doesn't seem critical to get into a CESM2.2 release.

But it's not worth holding up a CESM2.2 release for any of this.

October 9, 2019

Agenda

  • Negin: Python help?
  • Erik: Next weeks meeting day/time?
  • Erik: Note, I started removing "branch: release" labels from issues that are on the release branch, but not master.
  • Erik: Anything else needed for the cesm2.1.2 release? Note that Tier II SSP future scenarios can't be run because we aren't doing simulations for them, so datasets don't exist (ndep and presaero are missing [CO2, C13/C14, and landuse change do exist]).
  • Erik: CN matrix work
  • Erik: FAN project
  • Erik: mizuRoute into ctsm/cime? Maybe we should make a R compset that runs stubs with DLND and ROF component?
  • Bill: Bin Peng's visit: goals, how much should I help, etc.

python help

Dave supports Negin helping scientists with this (along with Joe), as long as it doesn't become too much of a time sink.

branch: release labels

Erik is adding this label to things that need to go on the release branch. If it's on the release branch but not yet master, then he removes this tag.

CN Matrix work

Erik resolved the major problems where the array sizes weren't large enough (wasn't allocating enough space when fire was happening). There's one other problem he's fixing, then he thinks he'll have something that's working.

Seems like balance check is working now.

Should we do a simulation now that we have something that's working? Dave feels we should; Danica or Keith could do this. Should merge with the TRENDY branch, so can redo TRENDY simulations using the CN Matrix work. We can also do a performance assessment at that time. Once the simulation is done (or at least running), we should talk to Yiqi.

Erik notes that he sees a bunch of performance things that could be done. Dave feels we should see how much this is getting used before we spend a lot more time on it.

Coupling MizuRoute

Erik: worth making an R compset, with DLND and ROF? Bill thinks so, for testing.

Dave: Would be worried about going down this rabbit hole, though, since for now we could run with CTSM.

Let's talk more about this on Friday.

Issue triage

Next up in our bug triage will be #135

September 30, 2019

Agenda

  • Bill: Erik, can you give your okay to my draft email to Mark on Bugzilla?
  • Bill: Uppercasing github repos (CTSM, RTM, MOSART, PTCLM), but plan to keep ctsm-docs. Any objections?
  • Bill: Negin: helping with WRF GeoGrid to surface dataset conversion?

Uppercasing github repos

No objections to this

WRF GeoGrid conversion

Negin has scripts to convert GeoGrid to SCRIP... we'll start with that.

mksurfdata toolchain: portability across platforms

Discussion of https://github.com/ESCOMP/ctsm/issues/645

Bill: bigger picture of this issue includes build of mksurfdata_map, etc.... we haven't worked out the details of that. (We don't want to rely on the cime configure script, because we want this to be usable on machines that haven't been ported to cime.)

Probably assume ocgis is installed system-wide rather than having ocgis be an external of ctsm.

Probably provide instructions for people to do the necessary installations, rather than trying to manage that ourselves. (Note that the conda install is a one-time thing per machine. Then you do a 'conda activate' to activate the appropriate environment.)

Issue triage

Next up in bug triage will be bug #90

September 23, 2019

Agenda

  • Bill: Attendees at meeting with Ben
  • Bill: https://github.com/ESCOMP/ctsm/issues/674 (nlevgrnd < nlevurb): Do we want to work to allow nlevgrnd = 4 (which may be causing the problems), or is it enough to allow 5 <= nlevgrnd < nlevurb? Should Keith keep working on this, or someone else?
  • Erik: FAN branch. NFERTILIZATION was divided into NFERTILIZATION (just commercial) and NMANURE. Is having NFERTILIZATION remain the sum how we should do it? Also FYI gnu is really slow, and there might be a threading problem (and Julius leaves in mid-October)
  • Erik: FAN branch. As implemented now, FAN is turned on by an option to CLM_BLDNML_OPTS. This is similar to other fields going to the coupler (drydep, megan, fire-emiss), but it also controls two CLM namelist items. Should we change how this is done?
  • Erik: CN-matrix branch. I've added SHR_ASSERT* statements and checking for error conditions. Chris has also improved the CN balance error check. I'm still seeing 50 tests fail, but it is working better. Note also that I've seen performance issues (too much memory is allocated, and subscripts may be ordered the wrong way).
  • Erik: FYI. for the Delft project I'm working on a change to mksurfdata so that you can prescribe the PFT/CFT fractions, but use the PCT_NATVEG, and PCT_CROP from the dataset. This will allow us to set all veg types everywhere to see what grows for LGM conditions (it also simplifies some nasty logic that was still in place to read in old raw PFT datasets without crop).
  • Bill: Next tasks for Sam

FAN branch

Fertilization diagnostics: Dave suggests keeping the two coming out separately, with different names, like NFERT_MANURE and NFERT_INDUSTRIAL (but come up with the right name for the latter). Though Dave notes that this would affect our cmip scripts.

gnu slowness with FAN: This isn't just a factor of 2 or 3, but a HUGE difference. Possibly related to outputting a lot to log files???

Use of CLM_BLDNML_OPTS: Bill's suggestion is to use a separate xml variable for this (doesn't like CLM_BLDNML_OPTS).

September 9, 2019

Agenda

  • Erik: surface datasets that Chris is making?
  • Bug triage

Bug triage

Next up will be #64 in "type: bug"

September 5, 2019

CMake build

Joe feels that it will be easier to maintain and could give additional value if CTSM has its own standalone cmake build (not putting the CTSM standalone build in LILAC). Then people could build CTSM for various purposes outside of a cime case.

Others feel that makes sense.

Debrief about WRF meeting

We should keep in mind that it seems like land models in WRF and MPAS will probably be linked in via the CPF.

Mike emphasizes that the footprint of this should be small in WRF, in that there aren't new variables they need (since we're only passing back things that are already there). And allowing them to remove the old CLM could help them.

Is it okay that CTSM will write its own history file (in contrast to with Noah-MP, where land variables are included in the main history files)? It seems like this will be okay.

Aug-26, 2019 CLM CMT meeting

Agenda

  • Erik: Note the limit of 30 tags in the github mailer. They are moving cronjobs over to a different server.
  • Erik: They are removing bugz.cgd.ucar.edu, which we used for history of closed bugs. Are there any of these old bugs we should move over to github? Bill worked out a plan with Mark Moore to move all the bugs over to a NCAR github repo using Ben's scripts.
  • Erik: CN Matrix branch has argument arrays passed in with a set size. The nag compiler is showing that some tests show these assumed sizes are wrong. Our standard for this is to pass in arrays as "(:)" and then use SHR_ASSERT to make sure it's the right size. I'm nervous that this is being done wrong on the matrix branch. It makes me distrustful of the branch as a whole. I also noticed that Chris committed changes that change answers for the matrix solution, and I thought he wasn't doing anything to change answers.
  • Erik: Note, I closed the "future" and "clm5" milestones. We should maybe delete the "future" one. The cesm2.1.1 milestone is still open waiting for changes to go to master. The other open milestone is cesm2.1.2.
  • Erik: FYI. Need to put more effort in the Delft project. Bill -- I think you and I should discuss this.
  • Erik: Mariana would like the paramsfile update to come in quickly. Order of Fire/bug and that tag?
  • Erik: Some questions on "next" issues.
  • Negin: Preparing an agenda for meeting with WRF people. What are the main points we'd like to discuss in tomorrow's meeting?
  • Bill: Follow up on Erik's recent issues about threading

CN Matrix branch issues

Given these issues, Dave suggests not trying to use this for TRENDY runs.

Milestones

We're fine with deleting the "future" milestone.

August 22, 2019 CTSM software

Agenda

LILAC / CTSM inclusion in WRF

Some questions:

  • Timing of this - feasible for next WRF release?
  • Should WRF-LILAC interface go in LILAC repo or WRF repo?
  • What requirements do we need to hit for build, for the sake of release?
  • What requirements do we need to hit for tool chain, for the sake of release?

August 12, 2019: CLM software meeting

Agenda

  • Bill: go through recently-opened issues (in the last few weeks) (determine if any are high priority)
  • Bill: Silently allowing use_init_interp in a branch run (this was confusing for the tutorial)
  • Erik: FYI: There is a claim that cheyenne is now more stable. Sheri says that CMIP simulations ran over the weekend, but she did have one MPT launch error. They also see worse MPI performance since the last shutdown. Queues will still be down for the afternoon this week.
  • Erik: FYI. pre-millennial 1 & 2 degree tomorrow. release-clm5.0.27 today. TRENDY dataset branch tomorrow, and create new f09 datasets for Bgc-Crop tomorrow.
  • Erik: CN matrix project. We added a project board for it, had a meeting last Friday. There are people working on the CN matrix spinup, but there is more to do (10-year averaging). Will had the impression that the spinup had more details (and art to it) besides just turning it on.
  • Erik: TRENDY CN-matrix and shifting cultivation merge. Peter is looking this over. I'll try a merge to see how nasty the conflicts are. Anything else we should do? Note, CN-matrix isn't passing tests. What do we do about that?
  • Erik: TRENDY branch -- should we put some to all of the TRENDY changes in place so that it can run out of the box? Or leave that for Danica?
  • Erik: FYI FAN branch. Will try to do another telecon next week when Julius is available.
  • Bill: next tasks for Sam?

Silently allowing use_init_interp in a branch run?

Bill argues for silently ignoring this in a branch run.

Erik and Dave are okay with this.

Dave suggests putting in the log file some message saying that we're ignoring this because it's a branch run.

TRENDY

We shouldn't spend time trying to get this out-of-the-box.

August 5, 2019: Special meeting on design of SnowCoverFractionMod

Bill led a discussion on the design of the object-oriented SnowCoverFractionMod, related to https://github.com/ESCOMP/ctsm/pull/769#discussion_r310183971

Present: Bill Sacks, Mariana Vertenstein, Erik Kluzek, Sean Swenson, Will Wieder.

Some conclusions from this:

  • We discussed pros & cons of object orientation vs. a purely procedural implementation. The main con of object orientation is the difficulty in handling a new parameterization that requires some additional inputs: with the current implementation, this would require small changes to all of the existing methods in order for the code to compile. (Some alternative solutions to this problem are sketched out here: https://github.com/billsacks/prototypes-clm-polymorphism_with_different_interfaces) Pros of object orientation are:
    • Avoids the need for select case statements sprinkled throughout the code
    • Allows you to have data that is local to a specific parameterization
    • Lends itself better to unit testing - specifically, replacing a dependency with a stub at runtime in the unit tests.
    The software engineers felt that the pros of object orientation outweigh the cons; Sean and Will did not object.
  • We'll use separate files for each piece; this removes the infrastructurey abstract interface from the science code, and keeps each science parameterization separate (making it easier to see what to do if you want to introduce a new parameterization). The files will be:
    1. abstract interface
    2. factory method (ideally would be in the same file as (1), but can't be because of circular dependencies)
    3. ny07 implementation
    4. sl12 implementation
  • Because this leads to a proliferation of files, at some point we'll probably want to organize the biogeophys directory into subdirectories. We weren't sure how to do this, though, so we're deferring this until we have even more files and/or a better sense of how we want to organize these files. Sean was initially suggesting separate "science" vs. "infrastructure" subdirectories, but Bill and Mariana felt that it's best to have (for example) all of the files for SnowCoverFraction next to each other in the same subdirectory. Another possibility could be subdirectories for snow, soil, etc. We could also have a subdirectory for the various Water* modules.
  • Within a given file, we'll put the science routines first, and the infrastructure (initialization) routines later. We'll use section headers (as in IrrigationMod) to clearly denote these two sections of the file.
  • We discussed whether to go back to existing code and reorganize it according to this discussion. Bill thinks it mostly follows what we've already discussed, except that infrastructure comes before science. Should we flip infrastructure and science, at least for the handful of other cases that have multiple OO implementations (so, for which we have a file for the base class, a file for each implementation, and a file for the factory)? Feeling is: not for now. Main concern is that doing this reorganization will interfere with outstanding branches, particularly CNMatrix (causing hard-to-resolve merge conflicts). So for now, we'll apply this moving forward; at some point we may go back and fix up older modules.
  • Let's spell out the options as SwensonLawrence2012 rather than SL12 - in code as well as namelist options.
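The agreed layout can be sketched in miniature. The following Python sketch (CTSM itself is Fortran) only illustrates the shape of the design: the class names mirror the spelled-out option names, and the formulas are placeholders, not the actual NY07/SL12 parameterizations.

```python
from abc import ABC, abstractmethod
import math

# (1) Abstract interface -- in CTSM this would be a Fortran abstract type
# in its own file, separate from the science code.
class SnowCoverFraction(ABC):
    @abstractmethod
    def frac_snow(self, snow_depth_m: float) -> float:
        """Return snow-covered fraction in [0, 1] for a snow depth in meters."""

# (3) NiuYang2007 implementation (placeholder math, not the real NY07 scheme)
class NiuYang2007(SnowCoverFraction):
    def frac_snow(self, snow_depth_m: float) -> float:
        return math.tanh(snow_depth_m / 0.05)

# (4) SwensonLawrence2012 implementation (placeholder math, not the real SL12 scheme)
class SwensonLawrence2012(SnowCoverFraction):
    def frac_snow(self, snow_depth_m: float) -> float:
        return min(1.0, snow_depth_m / 0.1)

# (2) Factory method: maps the spelled-out namelist option to an instance.
def create_snow_cover_fraction(method: str) -> SnowCoverFraction:
    implementations = {
        "NiuYang2007": NiuYang2007,
        "SwensonLawrence2012": SwensonLawrence2012,
    }
    if method not in implementations:
        raise ValueError(f"unknown snow cover fraction method: {method}")
    return implementations[method]()
```

The factory is the only place that needs a select-like construct; science code receives a SnowCoverFraction object and calls frac_snow without caring which parameterization is active.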

July 15, 2019: CLM-CMT Meeting:

Agenda

  • Erik: zenhub, I installed the extension. Negin, have you used some of these new features and know how to use them? Some of it's tied into certain types of agile software methodology. It seems worth looking into just for that reason.
  • Bill: Go through "every week" items: https://github.com/ESCOMP/ctsm/wiki/Issue-Labels-and-Issue-Management#check-every-week
  • Bill (if time): Bug triage: next up in "type: bug": #48
    • Note: we didn't do this

ZenHub

Allows use of epics. Bill: similar to github projects, but he likes github projects better.

Bill's thinking: ZenHub may be too much right now. But he doesn't feel strongly about this.

Conclusions: let's stick with issue labels & GitHub projects for now, but if someone ends up trying ZenHub and likes it, they could show others.

July 11, 2019

Agenda

  • Bill S / Sam: UI for https://github.com/ESCOMP/ctsm/issues/728
  • Bill S / Sam: Issue with 5 layers and BGC: https://github.com/ESCOMP/ctsm/pull/759#issuecomment-509867394 https://github.com/ESCOMP/ctsm/pull/759#issuecomment-510255272
  • Bill S: oldfflag == 1 cannot be combined with subgridflag == 1 (https://github.com/ESCOMP/ctsm/issues/502). Is it also the case that oldfflag == 0 cannot be combined with subgridflag == 0? (That seems to be the case based on https://github.com/ESCOMP/ctsm/issues/503#issuecomment-419185688)
  • Bill S: Are the conditionals on use_subgrid_fluxes in BulkDiag_NewSnowDiagnostics (at least the first two) really supposed to be on oldfflag?
  • Bill S: For oldfflag, we have: (1) int_snow is limited based on frac_snow computed the new way, then (2) frac_snow is recomputed the old way. Is that really what's intended? If so, that's going to make https://github.com/ESCOMP/ctsm/issues/503#issuecomment-419185688 messy. (Note that int_snow seems to be used in at least one place even with oldfflag: for compaction during melt.)
  • Bill S: Should we change our CTSM dev tag version numbering to start at 5? This would align the CTSM dev tags with our physics/configuration version numbers. Originally we thought we'd have separate numbering for CLM vs. NWP, but if we're planning to have the same numbering for both, then it seems to make sense to align our tag numbers with these version numbers.
    • On the one hand, I'd like to make this change sooner rather than later, to avoid too much confusion.
    • On the other hand, once we make this change, I don't think we should change back to 1 again, so we should be pretty sure we want to make this change before making it.

Specifying soil layers dynamically

In general, we assume nlevgrnd = nlevsoi + 5. The exception is the 5-layer case for NWP.

One problem Sam ran into is: in the 5 layer case, nlevgrnd = nlevsoi, which causes problems for some BGC code: the BGC code assumes that there is an extra layer below nlevsoi.

Sean feels the BGC code should be general enough to handle the case nlevgrnd = nlevsoi - presumably transfers at bottom layer should be 0. Charlie could probably answer this more definitively.

But maybe in the short-term, set nlevsoi = 4 for the NWP case. We should then rename 5SL_3m to 4SL_2m. Sam will make this change.

Long-term, Sean suggests getting rid of nlevgrnd and using nbedrock to set this dynamically. One issue is that nlevsoi really affects the cost of the model, especially for BGC - and especially for spinup.

For now, user interface should be:

  • Specify a dz vector covering all layers; its length defines nlevgrnd
  • If you specify that, then you also need to set the max number of soil layers (i.e., nlevsoi). For now, we require nlevsoi to be at most one less than the vector length.
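The constraint above can be sketched as a small validation rule (a hypothetical helper in Python, not CTSM code): the dz vector defines nlevgrnd, and nlevsoi must leave at least one non-soil ground layer below it.

```python
def check_soil_layer_spec(dz, nlevsoi):
    """Validate a user-specified layer structure (hypothetical helper).

    dz: list of layer thicknesses in meters; its length defines nlevgrnd.
    nlevsoi: max number of soil layers; must satisfy nlevsoi <= len(dz) - 1,
    so at least one ground-only layer remains at the bottom.
    """
    nlevgrnd = len(dz)
    if any(thickness <= 0.0 for thickness in dz):
        raise ValueError("all dz values must be positive")
    if not (1 <= nlevsoi <= nlevgrnd - 1):
        raise ValueError(
            f"nlevsoi ({nlevsoi}) must be between 1 and nlevgrnd - 1 ({nlevgrnd - 1})"
        )
    return nlevgrnd
```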

What do we want to do for out-of-the-box cases? Use string or have the new vector of values? For now, maybe stick with string; we may decide to change that later.

Version numbering

Bill S: Should we change our CTSM dev tag version numbering to start at 5? This would align the CTSM dev tags with our physics/configuration version numbers. Originally we thought we'd have separate numbering for CLM vs. NWP, but if we're planning to have the same numbering for both, then it seems to make sense to align our tag numbers with these version numbers.

  • On the one hand, I'd like to make this change sooner rather than later, to avoid too much confusion.
  • On the other hand, once we make this change, I don't think we should change back to 1 again, so we should be pretty sure we want to make this change before making it.

Mike doesn't see a problem with bumping up the code version to 5. His main concern is that he could see pressure to increase the version number more frequently. People are fine with this (we can increment by .1 instead of .5 moving forward).

People are generally happy with the idea Bill proposed of going to tag versioning of 5.x. But let's sit on it for a little while before making the change.

Questions about oldfflag / subgridflag

Sean: It is okay for oldfflag == 0 with subgridflag == 0.

The conditionals on use_subgrid_fluxes really should use that, though they may need some tweaking.

int_snow should only be needed when oldfflag == 0.

July 8, 2019: CLM-CMT

Agenda

  • Erik: Issue #741 has been a problem for several CMIP simulations (Keith L., Simone, Peter). Keith L. has a solution, and Keith O. has a better one. What should we do about this?
  • Erik: According to Bill's notes, inputdata won't be on a clock, but our forcing data will be. That's probably OK for the data that's imported into svn inputdata, but it would be a problem for data that's not, which we sometimes have (such as CRUJRA, Princeton, ERAI). So do we need to periodically run a script that touches all the files, or store them in svn?
  • Erik: Marysa's SLIM model. What's the priority/timeline on bringing this in?
  • Erik: FYI. There's a way to set organization level project access, but didn't find one for repository level.
  • Bill (if time): bug triage

Forcing data that might be scrubbed

Not sure what we want to do about this... might need to talk with the broader group about this.

July 3, 2019: CLM-CMT Meeting:

Agenda

  • Erik: Machine change requires a new cime tag for both master and release-clm5.0. Should we send out a message about this being a problem to ctsm-dev? What's our plan to get this fixed for CESM and CTSM?
  • Erik: FAN Project, only a couple people can make changes. Can we open that up? Looks like we can edit the level of access to projects for a team to the project board. The default seems to be a pretty high level.
  • Erik: Along same lines there is a "triage" level for access, we possibly should put some people in this level.
    • Bill: Thanks for pointing out that new feature. I'd suggest that we turn the CTSM-read team into a CTSM-triage team.
  • Erik: The change in disk space on glade to have a clock on all data. Does that apply to our inputdata and forcing data? Is it going to be deleting our input data if it doesn't happen to be used often enough?
  • Erik: The cn-matrix branch substantially lowers the tolerance for CN balance checks and CNPrecision when matrixcn is on. We've now started asking them about it. What do we do to move forward on this? And do I put any time into the other things I was working on?
  • Erik: At the workshop John Dennis suggested we directly output in CMIP format (so time-series and with CMIP names and units). Is this something we want to work on? I can see how to do this by adding a new namelist item, and some new history data types (we can't base it solely on what's existing)
  • Erik: CESM Software Eng. group asked for priorities. I'm curious to match what we thought with what scientist's think.
  • Erik: Should we try to get master to a regular cime tag?
  • Erik: Note latest ctsm issue where a symlink to a file (rather than through a directory) saves the symlink itself and NOT the datafile. Symlinks for directories seem to work as expected.
  • Erik: NOTE: I've had to use patch to bring the release changes to master. I wasn't able to get it to work through git merges, or even git apply. But, doing patches by hand has been working.
  • Bill: tag planning

Sam's next priorities

In addition to what he's working on now:

Canopy iterations

Sam couldn't find any solution that led to consistent improvement. So he feels that we should leave this as is for now.

Triage permission level

From some experimentation after the meeting: triage permission level currently does NOT give you the ability to edit projects. However, Bill went ahead and changed the CTSM team to have triage permission levels anyway, because it will be helpful to allow those people to edit issue labels.

Bill sent a feature request to GitHub asking them to give people with triage permission levels the ability to modify projects.

CN Matrix branch: changed tolerances

For precision control:

  • In our standard code: Erik recalls that we ran into trouble with setting all negative values to 0, so we just set slightly negative values to 0, and leave very negative values as is.
  • It looks like, on the CN Matrix branch, very negative values are now set to 0. (Though it may be that there is a window of moderately negative values that are left as is?)

A separate issue is the general relaxation of tolerances in the balance checks.

  • Erik wonders if there is some process that is left out.
  • Path forward: ask the developer about this first, to see if he can provide guidance, figure this out, or offer any hypotheses about it.

Outputting directly in CMIP format

Dave feels this should be a CESM-wide decision: there probably isn't much benefit in our doing it if it isn't being done CESM-wide.

The easiest thing is changing the field names and units. Changing to write out time series rather than time slices is harder, though Erik sees a way this could be done.

SEWG survey

If we ask the scientists, we could reframe it as: what are a scientist's priorities, and/or which of these areas do they feel particularly needs work in CTSM?

Getting master to a cime tag

Bill: maybe not super-high priority. More important is adding a test to the prealpha test suite that would expose the problem on hobart. (Though that may not have helped in this particular case.)

Update after the meeting: Erik says:

  • I checked to make sure there were prealpha tests in our testlist for hobart, and there are. So Chris would've run into cime issue #3142 if he had done prealpha testing on the cime tag he made for me.
  • Do we, as a practice, have them run prealpha testing before trying out a cime tag in ctsm? I'm not sure there's a clear procedural way to prevent this problem in the future, when I need a cime tag in order to make a ctsm tag. I think they make cime tags before testing, and if problems are found in testing they then make a new cime tag. So we could try to only use cime tags that have gone through prealpha testing. But the cesm tag had both the cime and ctsm tags in it -- so would they do a round of testing with only a cime update whenever a new cime tag is made?

Moving diffs from release to master

Part of the problem is that there were file renames.

https://stackoverflow.com/questions/32843857/git-cherry-pick-with-target-file-renamed/32844015 and https://stackoverflow.com/questions/9772598/backport-changes-from-renamed-file and https://diego.assencio.com/?index=3a9fab02df4edf42cf495bf087b37c2d suggest solutions for cherry-pick. Negin suggests --find-renames.

June 12, 2019: CLM-CMT meeting

Agenda

  • Erik: should we remove or fix things that are broken like CNDV or SUPL_NITRO? Or mark them as deeply broken?
  • Erik: Note, that we've inadvertently brought in a bug in mosart, by using the version on master rather than on the release branch (ESCOMP/mosart/#18).
  • Erik: pre-millennial datasets. What level of support should we provide for these? Anywhere from just creating the datasets to having compsets to support them. What level, and what priority?
  • Erik: what priority for anomaly forcing, data creation script, and example?

Things that are deeply broken

CNDV: Dave isn't prepared to remove this yet. One idea is to stop people from running it unless the user specifies a flag to allow running it.

SUPL_NITRO: There isn't yet a bug about this, but there should be. If you turn on SUPL_NITRO in clm5, it does something weird, because it's set up to work with the clm45 way of doing things rather than clm5. Again, we'll probably test for the incompatibility and abort (it's probably incompatible with one or a set of clm5 parameterizations). Dave feels it could be good to keep this around as an option, but (at least eventually) just have it do the right thing with clm5. (A possible "simple" solution would be to just crank up N deposition.)

Anomaly forcing

Dave: ideally, would be in next release.

Script needs some work: it has some assumptions for cmip5.

Issue review

We're reviewing type: bug, sorted by oldest first. Next up is #20.

June 3, 2019: CLM-CMT meeting

Agenda

  • Bill S: Suggestion for issue triage: periodically pick one "type" category and go through issues in that category that (a) aren't in a project and (b) aren't labeled "low priority".
  • Bill S: Tag planning
    • Do others feel it's worth the time (computer & human) for me to separate my different small changes?

Issue review

We'll try to go through issues more frequently, grouped by type (excluding issues in a project and those already labeled "low priority"; we'd stay on top of projects separately, and occasionally review "low priority" issues to see if they should be moved up in priority or closed). Maybe try to cycle through issues about once a year?

Erik asked why we stopped going through issues in the past. Bill thinks it's because we didn't get a lot of value out of it last time. This may have been because we were too ambitious in what issues we selected as ones to tackle for CLM5. Moving forward, Bill suggests that we try to be less ambitious in what we select as "high priority", and more aggressively close issues as wontfixes.

Today we went through the three types: discussion, support and investigation.

We also talked a bit about milestones. Erik's feeling (which Bill agrees with) is that we can get rid of the "Future" milestone, using "Low priority" instead. We should just have milestones for particular upcoming tags we're targeting, like CESM2.1.1.

Tag planning

Erik's general feeling is that it can be okay to combine multiple small bug fixes into a single tag, as long as the separation between the two is obvious enough that you could back one out without the other later if need be.

However, for Bill's specific upcoming tags, it probably makes sense to keep them separate, since Bette or others may want to back out the snow depth bug fix, but trying to back out the h2osno-update-before-snowCapping would generate conflicts.

May 23, 2019

Agenda

  • Bill S: Error checks on h2osno (and for similar error checks in the future): Should these be done always, despite a performance penalty? Or should they just be done if running in debug mode?
    • Just checking these in debug mode is probably sufficient for catching problems on master, but only catches problems in other people's code if they run their code in debug mode for a few days.
    • If we just want to check this when running in debug mode, how should we do this? We could use SHR_ASSERT and count on the compiler to optimize out the do-nothing loop in non-debug mode. But this might be more straightforward (and I think I could give a more informative error message) if we enclosed the check in #ifndef NDEBUG (note that we already have conditional compilation on the DEBUG/NDEBUG cpp defs implicitly via using SHR_ASSERT).
  • Bill S: Priority of flexible soil & snow layer structures. If relatively high priority, who should do this? (Sam might be a good person to do it.)
  • Bill S: Danica's idea of model orientation

Flexible soil & snow layer structures

Importance of this? Mike thinks this is tied in with nudging using observations.

Error checks on h2osno

Erik feels: just do this for debug.

Follow the #ifdef style used for SHR_ASSERT - fine to use #ifdef explicitly in the code.

Bill's updates after meeting

I think I'll use #ifndef NDEBUG for the following reasons:

  • NDEBUG is what shr_assert.h checks, I think for consistency with C

  • We've had problems in the past where DEBUG wasn't being defined, yet NDEBUG was

  • If the build process breaks such that the given token isn't defined, I'd rather have logic that causes the error checks to happen in production mode rather than logic that causes the error checks to NOT happen in debug mode.
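The fail-safe reasoning in the last point can be sketched (a hypothetical check in Python, with a boolean standing in for the NDEBUG cpp token; not the actual CTSM code): keying the check on the absence of NDEBUG means a broken build that defines neither token still runs the checks.

```python
def check_h2osno(h2osno, ndebug_defined=False):
    """Range check on snow water (kg/m2), skipped only when NDEBUG is
    explicitly defined -- mirroring #ifndef NDEBUG. If the build breaks
    and neither DEBUG nor NDEBUG is defined, the check still runs,
    which is the safer failure mode. Hypothetical sketch, not CTSM code."""
    if ndebug_defined:
        # Production build: the check is "compiled out".
        return
    for column, value in enumerate(h2osno):
        if value < 0.0:
            raise RuntimeError(f"negative h2osno at column {column}: {value}")
```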

Model orientation

Maybe on a CTSM web page, which we need....

May 20, 2019 CLM-CMT

Agenda

  • Erik/Bill: One type label for issues/PRs?
  • Erik: Which master tag is most important?
  • Erik: Problem with mksurfdata_ for wetlands bug fix is showing up for more resolutions besides conus_20_x8: https://github.com/ESCOMP/ctsm/issues/673
  • Bill S: My funding for fy2020?
  • Bill S: https://github.com/ESCOMP/ctsm/issues/573
  • Bill S: https://github.com/ESCOMP/ctsm/issues/715
  • Bill S: Keith Lindsay's issue with out-of-range ciso values
  • Erik: Making pre-millennial datasets (0850-1850). Plan to NOT add them to the XML database nor add compsets for them. Just make the datasets; users will have to point to them and modify namelists.
  • Erik: FYI another release tag is being made.

May 17, 2019

Agenda

  • Bill S: confirm outline for my talk Wed
    • LILAC slides?
  • Bill S: How important is it to move to netcdf for input parameters?
  • Bill S: Erik: auto-emailer broken again? If you think it's likely to take more than 10 min to fix, I'd say let's drop it and just add this to the trunk checklist.

Moving to netcdf for input parameters

Bill's notes before the meeting

I'm pretty sure we want to do this, but I want to make sure people are aware that this will come with a cost to all users: we'll now have dependencies outside of the python standard library, which will require people to load an appropriate python environment before running any cime-related scripts.

On cheyenne, I think this would mean making sure you have run ncar_pylib before running any scripts.

On other machines, this could be harder, and would add another step to porting CTSM / CESM to a new machine.

This may be inevitable at some point for cime/cesm, but right now it looks like the use of netcdf for input parameters is the hardest dependency that is driving this. So I want to be sure that everyone is in agreement that it's worth adding this complexity for users before we continue down this path.

(See also https://github.com/ESMCI/cime/issues/2935.)

Notes from meeting

This doesn't feel too burdensome for users. In practice, you'd probably load the appropriate environment in your dot file. Unless we can come up with something that feels about equally good, let's stick with our plan.

Bill raised a compromise solution: Rather than having the python build-namelist generate a netCDF file directly (via a python netCDF library), it would write text output in cdl format, then call ncgen. That requires having ncgen in your path, but that can be managed by cime using the same module mechanism that we use elsewhere (I think that, if you have loaded a netcdf module, you should have ncgen in your path). To Bill, this feels somewhat less robust than writing a netCDF file directly, but not necessarily that much less robust than our current scheme of writing a Fortran namelist-formatted text file. Bill feels like this could be a reasonable solution in order to avoid the python dependency issues. Erik and Negin don't really like this solution, though, and feel like having an appropriate python environment loaded really isn't too much to ask.

Our tentative plan is still to stick with our long-standing plan of having the python write a netCDF file directly, but the above seems like a reasonable fall-back, particularly if strong objections are raised to introducing python dependencies.
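If that fall-back were pursued, it could look roughly like the sketch below (the variable name and values are made-up examples; ncgen is the standard netCDF generation tool and is assumed to be on PATH, e.g. via a loaded netcdf module).

```python
import subprocess
import tempfile

# Made-up example CDL; real build-namelist content would differ.
CDL_TEXT = """netcdf ctsm_namelist_params {
dimensions:
    scalar = 1 ;
variables:
    double co2_ppmv(scalar) ;
        co2_ppmv:units = "ppmv" ;
data:
    co2_ppmv = 367.0 ;
}
"""

def write_netcdf_via_ncgen(cdl_text, out_path):
    """Write CDL text to a temporary file, then shell out to ncgen to
    produce the binary netCDF file, avoiding a python netCDF dependency."""
    with tempfile.NamedTemporaryFile("w", suffix=".cdl", delete=False) as cdl_file:
        cdl_file.write(cdl_text)
        cdl_path = cdl_file.name
    # ncgen must be on PATH; cime's module mechanism could manage that.
    subprocess.run(["ncgen", "-o", out_path, cdl_path], check=True)
```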

May 6, 2019

Agenda

  • Erik: NOTE, the git auto-emailer tagger isn't getting the list of files in the release tags correctly, hence I've got it hardcoded for the file it needs. But, it is now at least working.
  • Erik: I did the needed testing for my release tag on cheyenne, before it went down, so I'll be making the tag before it comes back up.
  • Negin (and Sam): Issues and obstacles using ocgis package.
  • Erik: I've created the CO2 SSP files. I need to get these and presaero into a cime tag for release.
  • Erik: Should Sam be added to ctsm-software group?

May 2, 2019

Agenda

  • Bill S: CanopyHydrology tracerization options code review

CanopyHydrology tracerization options code review

Notes from this discussion can be found in the README file of this repository (note: most notes in that file were from Bill from before the discussion; notes from the discussion are labeled as such in that file):

https://github.com/billsacks/prototypes-ctsm-canopyhydrology_tracers

Here is a more general / long-term discussion item:

Short-circuiting some state updates?

Sean: Can we short-circuit some state updates? In particular: For canopy excess, can we avoid doing the update of liqcan, instead calculating the excess and putting it in a runoff flux without ever updating liqcan to be greater than the holding capacity?

Mike points out that this would have different implications for the tracer concentration: the current scheme mixes together the inputs with the existing state, whereas Sean's suggestion doesn't do this mixing before the runoff.

We like Sean's idea for bulk, but aren't sure if that's the right thing to do with respect to tracers. Let's come back to this....
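The two update orders can be sketched numerically (hypothetical numbers, bulk water only; not CTSM code). For bulk water they yield the same state and runoff; the difference Mike points out is that in the current scheme the runoff carries the mixed concentration of old canopy water plus new input, while in the short-circuit scheme the runoff is purely new input.

```python
def update_with_state(liqcan, input_flux, holding_capacity):
    """Current scheme: add the input to the canopy state (possibly
    exceeding holding capacity), then drain the excess as runoff."""
    liqcan += input_flux
    runoff = max(0.0, liqcan - holding_capacity)
    return liqcan - runoff, runoff

def update_short_circuit(liqcan, input_flux, holding_capacity):
    """Sean's suggestion: route the excess input directly to runoff, so
    liqcan never exceeds the holding capacity."""
    room = max(0.0, holding_capacity - liqcan)
    absorbed = min(input_flux, room)
    return liqcan + absorbed, input_flux - absorbed
```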

April 29, 2019

Agenda

  • Erik: Change our testing to izumi?
  • Erik: We do need to get the buildlib changes to master. Although I wonder if this can be after the cesm2.1.1 work? This is only needed for CESM2.2...
  • Erik: FYI. Phase out of DIN_LOC_ROOT_CLMFORC for use of soft-links on cheyenne.
  • Erik: Tacking on historical data in front of SSP datasets. We've done this in the past and for many of the SSP datasets. Should we continue to do that? One reason is to easily use SSP data for historical periods including periods we don't currently have new updated historical data for.
  • Erik: FYI. I'm going to plan on updating the masks/mapping files for the new hirespft data. Will probably happen over a few release tags (See #697)
  • Erik: Issue #700, do I make these changes? Also note that making ndep extend means that there will be no seasonal variability after you get beyond the end of the dataset (held constant at value of last month).
  • Bill: Moving baselines; to where?

Tacking on historical data in front of SSP datasets

Dave is fine with sticking with doing this. The main use case is when you're running out to 2018: this is fairly common.

ndep

Now that we have monthly data, the idea of extending the data past the end of the specified period doesn't make sense. So we should change the streams settings so that the run aborts if you try to run past the end of the dataset.

April 25, 2019

Agenda

  • Erik/Naoki -- mizuRoute needs PIO, how should we handle that?
  • Bill S (mainly for Mike): number of snow layers and h2osno_max
  • Bill S: Where should we store the 1 TB of NLDAS atm forcing data?
  • Bill S: With nldas datm: Years to use for yr_start, yr_end, yr_align?
  • Bill S (mainly for Erik, I guess): use mapalgo=copy when running nldas forcing with nldas grid?
  • Bill S: Okay with my plan for the first part of CanopyHydrology?

mizuRoute and PIO

Erik is bringing manage_externals into mizuRoute, in order to pull in pio (and might as well bring in cime).

h2osno_max

Noah-MP uses 3 layers, but its top two layers are thicker.

Mike points out that memory may be as much or more of a consideration than performance.

Dave suggests: If there isn't much performance hit of using 5 layers, go with that for consistency. Otherwise, we can add flexibility of defining the layer structure.

Another thing to consider longer-term is whether we use snicar: Noah-MP doesn't. This probably has a bigger impact on the science.

So for short-term, I'll use 5 layers and h2osnomax = 5 m.

Where to put nldas forcing data

Dave is fine with putting this in the lmwg space on glade for now. Mike is also fine with this.

But longer-term we may want to discuss some coordinated space on glade.

coszen interpolation

Are there any problems with using coszen interpolation?

nldas yr start, end, align

First off: Run start date should be year 2000. And let's just change all of our year-2000 compsets to that.

Let's have the datm start year be 1980, and end year 2018. datm year align will be 1980.

And let's change all of our datm forcings to have year align = year start for year-2000.

2003 and 2010? Follow what's done for gswp.

mapalgo = copy when using nldas grid

Erik: yes, do this in order to bypass interpolation.

CanopyHydrology rework

Sean prefers not having so many small subroutines. Bill will talk more with Sean about different design possibilities (also invite Mike).

April 21, 2019 - CLM-CMT

Agenda

  • Erik/Negin: Containers and machine learning in Python were big topics at the SEA conference.
  • Erik: Issue with Carbon isotopes changing answers for some cases.
  • Dave L: Archiving information from ELM group on numerical stability
  • Bill S: Confirming okay with h2ocan change
  • Bill S: How can I help with the ciso bug fallout?
  • Bill S: Priority of gridcell-level balance checks (#314, #315, #201)? Also #231? How much of this should I do (vs. others)?
  • Erik: hirespft issue, LANDMASK different than before, but also we probably need to use the LANDMASK that has Caspian Sea as LAND.
  • Erik: mizuRoute probably should add manage_externals to it, and bring in externals (needs PIO).
  • Bill S: Branch tag naming convention: I'm starting to feel like we should use a prefix of 'branch_tags/', so that you can grep -v branch_tags to see just tags of master and release branches (and a handful of tags that were made before adopting this convention... but better late than never?).
  • Negin: ocgis mpt related issue on casper and cheyenne is resolved now.
  • Erik: We need to get back to CN Matrix people. Negin did you want to join in on that?
  • Erik: Future scenarios we have ndep for: SSP5-8.5, SSP1-2.6, SSP2-4.5, SSP3-7.0; aerosols for: SSP5-8.5, SSP2-4.5, SSP3-7.0. What do we need for the CESM2.1.1 release? There are 8 total. For CO2, do we use global and latitude-band versions?
  • Erik: By the way, there are two more REFCASES I'm setting up for B compsets. I set them up for BHIST number 010; they also need 004 and 011.
  • Erik: nlevgrnd>nlevurb issue (#674) and the conus mksurfdata issue (#673).
  • Erik: download of CLM datm forcing data doesn't work out of the box when DIN_LOC_ROOT != DIN_LOC_ROOT_CLMFORC (see https://bb.cgd.ucar.edu/clm5-atmospheric-forcings-data)

ocgis

There is a problem with mpt. For now can use mvapich. Negin will give instructions to Sam.

Archiving information on numerical stability

Maybe try doc directory. (File format not important - can use rst, md, txt, etc.)

h2ocan change

We'll run a simulation with diagnostics at some point to confirm that this, together with other upcoming changes, is not climate-changing. (No need to do a separate simulation for just this change.) Bill will work with Keith to periodically run checks on batches of changes to ensure they aren't climate-changing.

Priority of gridcell-level balance checks

Dave feels water isotopes are a higher priority for now, though adding these checks remains fairly high priority. We could give this to Sam if/when he has time.

hirespft issue

Bill doesn't think the pft landmask needs to include the Caspian Sea.

Branch tag naming convention

People are fine with the idea of prefixing branch tags with 'branch_tags/'

CONUS mksurfdata issue (#673)

We won't treat this as high priority (since the code does seem to catch the problem) unless someone else runs into this.

April 18, 2019

Agenda

  • Erik/Naoki: Recommending that mizuRoute give bit-for-bit identical results with different MPI task and Open-MP thread counts. Doing this helps you to feel confident that the code is working correctly. There are legit bugs when this doesn't happen.
  • Bill S: Resolving outstanding NWP questions:

NWP configuration

Use 3 snow layers. Look at reasonable h2osnomax.

Human stress indices: look at whether it has significant cost. It would be 'configuration'.

BVOCs off. Can look at what's done for FATES. But maybe do this via configuration. Look at namelist defaults for megan: can add another attribute to that one namelist flag.

mizuRoute: different answers for mpi task counts

Erik: Recommending that mizuRoute give bit-for-bit identical results with different MPI task and Open-MP thread counts. Doing this helps you to feel confident that the code is working correctly. There are legit bugs when this doesn't happen.

Erik thinks there is a legitimate problem currently, causing roundoff-level differences based on task count. Differences are expected due to mpi reduce calls. There are no reduce calls here, but maybe there are sums done in a different order depending on task count?
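Summation order alone is enough to produce the roundoff-level differences described above, because floating-point addition is not associative; a minimal Python illustration:

```python
# The same three numbers summed in different groupings differ at roundoff
# level. In an MPI decomposition, per-task partial sums change the grouping
# in just this way, so a global sum can differ bit-for-bit with task count
# even when no explicit reduce calls are involved.
forward = (0.1 + 0.2) + 0.3
reverse = (0.3 + 0.2) + 0.1
difference = forward - reverse  # nonzero, but tiny (roundoff level)
```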

NLDAS datm forcing

We DO want to make these data available via our inputdata servers.

We'd ideally like to keep these updated.

Eventually we probably want to add the capability for datm to have just a subset of months for a particular year. (Right now, Erik and Bill think that datm can only handle full years, out of the box, though you can hack it otherwise.)

April 15, 2019 - CLM-CMT

Agenda

  • Erik: Talk about only having the reviewer mark a PR conversation as resolved. Authors should add a "Done" when they think they've addressed it. We need to make sure Sam and others know about this convention as well.
  • Erik: Working with ISG to get the auto-emailing happening again for tags. This seems to be a system issue. It looks like disks aren't mounted, and one machine is down, which might explain the problem.
  • Erik/Bill/Sam: Talk about tools project.
  • Erik/Sam: moving parameters project to NetCDF files
  • Erik: General query: Are there some things we can think about to speed up our work? Are we getting stuck on certain things? Is there anything we are doing that takes time that we don't need to do? I do feel like I spend a lot of time communicating with people (email and in person); that can take a big part of the day. This is something I'm personally frustrated with, and I would like to hear from others.

Marking PR comments as resolved

We'll leave it to the person who opened a comment thread (typically the reviewer) to hit the "resolved" button (as opposed to having the author hit "resolved").

Productivity / email distractions

We generally agree that emails should have a response timeframe of about a day: we won't expect sooner replies than that. If you need more immediate assistance, come in person or call.

April 4, 2019 - CLM science meeting

Management of the namelist file

There was some discussion in today's CLM science meeting about our management of the namelist file.

People are happy with the idea of moving towards specifying all namelist options explicitly on the file (as opposed to having some things left off of the default lnd_in file, with values set by hard-coded defaults).

Regarding our long-term plan of moving parameters to a netCDF file rather than a Fortran namelist file: People seemed to generally feel that real-valued parameters should go on the netCDF file, but strings, booleans, or integer-based options should stay on the text-formatted namelist file for ease of checking the options used in your case.
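The "ease of checking" argument is that a text namelist can be inspected with trivial tooling. As a rough illustration (the option names below are just examples, not a definitive list of CTSM namelist variables):

```python
# A minimal sketch (not the real CTSM tooling) of why keeping string/logical/
# integer options in the text namelist is convenient: the options used in a
# case can be checked with a few lines of parsing, no netCDF reader needed.
import re

lnd_in = """
&clm_inparm
 use_bedrock = .true.
 snow_cover_fraction_method = 'SwensonLawrence2012'
 nlevsno = 5
/
"""

def parse_namelist(text):
    """Parse 'name = value' lines from a Fortran namelist into a dict."""
    opts = {}
    for name, value in re.findall(r"^\s*(\w+)\s*=\s*(.+?)\s*$", text, re.M):
        opts[name] = value.strip("'\"")
    return opts

opts = parse_namelist(lnd_in)
print(opts["use_bedrock"], opts["nlevsno"])
```

Real-valued parameters on a netCDF file would instead need a netCDF reader (or ncdump) to inspect, which is the trade-off people weighed here.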

April 4, 2019

Agenda

  • Bill: my charging

  • Bill: Some notes on NLDAS grid

    • Blocky patterns, presumably from coarse-resolution raw data
    • Lakes are masked out
  • Bill: Naming nldas grid (see https://github.com/ESMCI/cime/pull/3063)

  • Bill: Questions on NLDAS datm forcing (from email)

    • Do we want to make these an out-of-the-box option?
    • Is this the correct location: /glade/p/cesm/lmwg/atm_forcing.datm7.NLDAS.0.125d.c150217
      • Do we want everything from there?
    • Do we want this in the inputdata repo? (~ 400 GB)
    • Do we want to use this forcing by default with the NWP compset?
      • This will tie this compset to this grid (or some subset of this grid). Is that acceptable? This would make me feel more of a desire to put in place some error-checking.
    • Thoughts on error-checking to ensure that you don't try to use this forcing with a grid that extends beyond the forcing's domain. How much time should I spend on that, vs. leaving it up to the user?
      • One kludgey option would be to only allow this forcing with the nldas grid. But that wouldn't allow for runs over single points or sub-regions. So on the whole, would that be a plus or a minus?
    • Also: Do we want to use mapalgo=copy when running with the nldas grid? (What does that accomplish?) [We didn't end up discussing this.]
  • Bill: Do we need nlevgrnd >= nlevurb? If so, Mike, do you have suggestions for how to do this? Note that Keith says we'll soon have nlevurb = 10. Should we try to remove the limitation of nlevgrnd >= nlevurb if that limitation currently exists?

  • Bill: NWP Compset names: long name and alias

  • Martyn: MizuRoute (Dave says he has some questions on parallelization and coupling)

Notes on running with NLDAS grid

Feeling is that we should use a mask that does include lakes. We can use the mask from the atmosphere forcing data.

We'll still use "nldas" as the name of the mask (then use nldas_nolakes if we want a mask without lakes).

Grid alias: nldas2_mnldas2 (later we could also have nldas2_mnldas2Nolake)

Grid long name: 0.125nldas2 (get rid of 224x464).

We actually want to turn on the runoff model, with mapping files. Sean will create a regional 1/8 deg runoff grid and the mapping files between it and the land grid.

NLDAS atmosphere forcing

Sean: We DO want to be able to do regional runs that are a subset of the NLDAS domain.

Dave is okay with NOT doing error-checking for now to ensure that the land domain is a subset of the datm forcing.

Sean will probably update the datm forcing dataset.

Dave is initially inclined to provide the entire dataset via inputdata.

NWP compset

Default forcing: NLDAS

We want two separate compsets: one with runoff on and one with runoff off.

For now, we'll live with having CLM in the compset long name. We'll iterate on an alias. (Eventually want to change this to CTSM, but that might be a lot of work.)

More thoughts on NWP options

Separate two axes: structure vs. physics (in addition to our current separation of bgc, crop, etc.)

Structure would be dominant landunit / pft / soil layers

  • We could call this "fast01" and "standard01", for example (for numbering, we'll just incrementally count up).

Physics would be physics options like PHS.

A compset would define the physics and structure. Our current compsets would use the default "standard" structure.

So we'd change I2000Clm50Sp to I2000Clm50S01Sp

And the new compsets would be I2000Nwp10F01SpNldasGs / I2000Nwp10F01SpNldasRsGs [stub rof] / I2000Nwp10F01SpGswpGs

Update: Let's keep the structure out of the compset name, and instead have an xml variable.

So the old compsets will stay as is, and the xml variable default will be "standard01". The new compsets will be I2000Nwp10SpNldasGs / I2000Nwp10SpNldasRsGs [stub rof] / I2000Nwp10SpGswpGs

These will set the physics option and the structure option (the latter will be "fast01").

Turn off PHS by default.

And we'll probably eventually have a Hyd physics suite as well.

We went back and forth in this discussion on whether we want a scheme that lends itself more easily to keeping the NWP vs. CLM/climate configurations in sync, or a scheme that lends itself more easily to letting them evolve independently.

  • The latter (letting them evolve independently) could be done by having separate versioning for clm vs. nwp physics versions. The major downside of this is that it would be hard to maintain this long-term in situations where we do want them to evolve together: if you change the default for something for new clm versions, you need to remember to change the default for new nwp versions, too, and vice versa; it would be very easy for the two to accidentally get out of sync.
  • The former (keeping them more in sync) could be done by having a single ctsm physics version number, then having a "configuration" attribute that provides for differences in the (hopefully uncommon) cases where clm and nwp configurations are meant to differ.
  • Which of these makes the most sense depends largely on whether we expect the two to stay more in sync or diverge. Dave argued that part of the purpose of having this single, unified modeling system was to allow each configuration to take advantage of advances in the other, which argues for the system that makes it easier to keep the two in sync. We'll plan to go with that for now.

April 1, 2019: CLM-CMT meeting

Agenda

  • Bill: Should we have issue labels for issues that only apply to fates_next_api? Maybe also labels for issues that only apply to the release branch(es), if any. My thinking is: most issues apply to master; it could be helpful to be able to filter on those that do not.
  • Erik: We need to make CTSM/RTM/MOSART tags on master for the incompatible cime change. The RTM/MOSART tags we have been using on master are actually release-branch tags; for master we should instead use tags made on master for each of those. How should we coordinate this? I should make the RTM/MOSART tags for sure, so I might as well do the CTSM one along with them. But this will need to be done after the CESM2.1 tag.
  • Erik: REFCASEs for SSP B compsets. I was surprised that answers differ with and without cmip6, and that each SSP needs to be interpolated from the BHIST case. Note that the BWSSP* compsets all work as expected (although they don't seem to want the non-cmip6 option).
  • Erik: Dave do you have an opinion on the question I asked Gokhan about what REFCASE B1850 should point to? If I point to the original case that B1850 started from C-isotopes won't be spun up, and I'll have to do something to get a REFCASE to do that. If I point to the CESM2.1 B1850 case, carbon isotopes will be spun up, and everything can work.

Issue labels

Erik will talk to Ryan about FATES issue / PR issue labels.

Maybe something like:

branch: release-clm5.0 - denotes changes needed on the release branch (and possibly also master)

branch: FATES next api

Changes needed for cime incompatible changes

Erik will work on this.

Refcases

It turns out that C isotopes are on the current cesm2.1 refcases.

I compsets for SSPs

Erik is working on this. Needs to make a cime change.

Dave / Sean will work on creating anomaly forcings for future scenarios. Not sure if this will be out-of-the-box, or require user_datm_streams.

Externals compatibility

Negin wonders if we could track and error-check compatibility between cime and various component versions.

Bill suggests: the bigger problem may be just checking for whether you forgot to run checkout_externals. Maybe it's worth trapping for that, by giving a warning or error if your externals are out-of-sync with the externals file.
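A sketch of the check Bill suggests, assuming the INI-style Externals.cfg layout used by manage_externals (one section per external with a tag entry); the external names, tags, and paths below are made up:

```python
# Sketch: compare each external's configured tag against the tag actually
# checked out, and report mismatches (e.g., a forgotten checkout_externals).
import configparser

externals_cfg = """
[mosart]
tag = release-cesm2.0.04
local_path = components/mosart

[rtm]
tag = release-cesm2.0.04
local_path = components/rtm
"""

def find_out_of_sync(cfg_text, checked_out):
    """Return externals whose checked-out tag differs from the config."""
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    mismatches = []
    for name in cfg.sections():
        want = cfg[name].get("tag")
        have = checked_out.get(name)
        if want is not None and have != want:
            mismatches.append((name, want, have))
    return mismatches

# Pretend 'rtm' was never updated after Externals.cfg changed:
actual = {"mosart": "release-cesm2.0.04", "rtm": "release-cesm2.0.03"}
print(find_out_of_sync(externals_cfg, actual))
# [('rtm', 'release-cesm2.0.04', 'release-cesm2.0.03')]
```

In practice the "actual" side would come from querying git in each external's working directory; the point is that the comparison itself is cheap enough to run as a warning on every build.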

March 18, 2019: CLM-CMT meeting

Agenda

  • Negin: balance check issue
  • Bill: who should support Inne in her addition of transient lakes in mksurfdata_map?
  • Bill: cime updates needed (release & master; latter needs coordination across a number of components)
  • Erik: SE grid PR out there...
  • Erik: FYI. Working on REFCASEs for CESM and CAM. Bill was my email to Keith L. correct?
  • Erik: Sam's toosmall landunits namelist -- currently limited to integer values from 0 to 10%. Discussion about how to handle urban. He's also made it mutually exclusive with n_dom_*, but it doesn't seem like it needs to be.
  • Erik: Suggestion for adding fields to surface datasets: we should only add fields that are used all the time, needed at high resolution, and non-transient. Fields that are optional, low resolution, or transient (other than yearly) should be added as streams. So, for example, soil pH should be its own stream.
  • Erik: How to help Sam with the dataset project?
  • Bill: separate columns in our planning for master vs. release?

cime updates

Erik points out that we've been pointing to release branches for mosart and rtm. We'd have to stop doing that. Would like to bring this up at a cseg meeting.

toosmall landunits namelist

We don't see any reason to have the limits of being integer and <= 10%. In fact, letting it be 101% is a nice way to turn off a landunit.

Also, it seems like it would be good to allow both n_dom_* and toosmall.

Erik's suggestions for surface datasets vs. streams

Erik: Suggestion for adding fields to surface datasets: we should only add fields that are used all the time, needed at high resolution, and non-transient. Fields that are optional, low resolution, or transient (other than yearly) should be added as streams. So, for example, soil pH should be its own stream.

When it's a mix of these characteristics, we need to make a case-by-case call. But Erik feels that, in general, optional fields should be on a stream.

Dave points out that sometimes a field is tied to another field on the surface dataset (e.g., soil texture).

Bill: can we come up with some default options and wrapper code so that it's easy to add a time-constant stream like pH, with a few lines of code rather than a page of new code? Erik thinks that could be possible. (For transient things, there truly are more settings that you need to consciously set.)
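What such a wrapper might look like, very roughly. Everything here is hypothetical (the setting names are invented, and the real implementation would live in Fortran/XML, not Python); the point is just that a time-constant stream can default almost everything:

```python
# Hypothetical sketch of the kind of wrapper Bill is asking about: sensible
# defaults for a time-constant stream, so adding a field like soil pH takes
# a few lines, while transient streams still set everything explicitly.
def constant_stream_settings(name, data_file, varname, **overrides):
    """Build stream settings, defaulting everything a constant field allows."""
    settings = {
        "stream_name": name,
        "data_file": data_file,
        "varname": varname,
        "time_interp": "none",     # time-constant: no temporal interpolation
        "year_first": 1,           # dummy time axis for constant data
        "year_last": 1,
        "spatial_interp": "bilinear",
    }
    settings.update(overrides)     # callers override only what they must
    return settings

ph_stream = constant_stream_settings("soil_ph", "/path/to/ph.nc", "PH")
print(ph_stream["time_interp"])
```

A transient stream, by contrast, genuinely needs the caller to think about year ranges and interpolation, which matches Erik's parenthetical above.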

March 11, 2019: CLM-CMT meeting

Agenda:

  • Erik: FATES update issues and discussion. fates_next_api testing is failing for default configurations. Have some changes on NCAR/fates-release that need to go into NGEET/fates; plan to move off of NCAR/fates-release after this update. Did stepping through each FATES version, running a single test on hobart to keep it working, and got all the way to the end. I'm going to check this against doing a single merge on release and master and compare results.
  • Erik: CIME check_input_data: does create_test need a skip_chk_inp option?
  • Erik: Status of cheyenne: mkmapdata was fixed by CISL; new tags on master and release branch; Erik still to send email; Jim sent to CESM-Forums; Erik will do the fates update for fates_next_api and bring the latest fates into ctsm: release and master.
  • Erik: FYI: Had trouble copying from campaign storage to glade, but Cecile got it to work going to scratch space.
  • Erik: We almost have everything needed for the CESM2.1.1 release candidate tag needed for SSP585 B compset.

March 7, 2019: Discussion about branch tag naming

Original discussion March 7, 2019

Erik and Bill S discussed the possibility of putting branch tags in the ESCOMP/ctsm repository. We felt it was okay to do this in limited circumstances - e.g., when a given code version is needed for a major set of production runs.

Erik suggested a naming convention of:

BASE-TAG-NAME_PURPOSE-OF-BRANCH_n01

e.g.,

release-clm5.0.15_ismip6_n01

(An alternative would have been to start the tag name with something different, perhaps even having a special branch_tags/ namespace.)

Follow-ups March 15, 2019

Bill S's initial thoughts:

I see two problems with this naming convention, based on an alphabetical tag listing:

(1) This will be listed alongside the other release tags, making it harder to distinguish between tags on the release branch vs. tags on branches off of the release branch.

(2) If we ever wanted to update this branch to a more recent version of release-clm5.0, the tags wouldn't be listed together. (e.g., release-clm5.0.15_ismip6_n01 wouldn't appear next to release-clm5.0.20_ismip6_n01.)

So I'd like to propose changing the order to something like:

ismip6_release-clm5.0.15_n01

which is more similar to what we used in svn.

Or maybe even prefixing this with branch_tags, so we'd have:

branch_tags/ismip6_release-clm5.0.15_n01

Bill S's updates:

Actually, I guess what would be analogous with what we used in svn would be

ismip6_n01_release-clm5.0.15

and I might be inclined to go with that (or branch_tags/ismip6_n01_release-clm5.0.15) if it's all the same to you.

Erik's reply:

I think I'm good with any of these. I do want to point out that in svn we used underscores to replace dash, dot, or slash, but we don't have that restriction here. In git we've also had less of a reason to do branch tags, so I don't think we need a directory for them. We just want the name to be obvious that it's a branch tag.

So with that I guess I suggest...

ismip6.n01-release-clm5.0.15

The dashes then sort of say it's a branch. I'm not sure the "n" in "n01" is needed either. It could also be a "b" to designate branch....

But, again, I'm good with going with the convention that you start....

Bill's reply:

If it's okay with you, I think I'll go with a compromise:

ismip6.n01_release-clm5.0.15

This is close to your suggestion, but keeps an underscore between the branch name/version and the release tag it's based off of. My rationale is that this makes it easier to distinguish at a glance what the release tag is that it's based off of: everything after the underscore.

March 4, 2019: CLM-CMT meeting

Agenda

  • Bill: Confirming who is leading the toolchain work (Sam, Negin, or co-leading?); also, which of Erik or me should be the main person helping them get spun up / reviewing this work? (For efficiency, I'd suggest that only one of us be mainly involved in this.)
  • Bill: More generally: Get on the same page about each person's priorities. Relatively large projects underway or coming soon are at least:
    • LILAC coupler
    • toolchain rework
    • NWP configuration & resolution
    • water isotopes
    • representative hillslopes
    • cn-matrix
    • FAN
    • pythonizing build-namelist
    • mizuRoute coupling
  • Bill: okay to have a namelist setting that affects the initialization of a run, but has no effect if set mid-way through a continue run? Do we have other settings like that? (I guess there are history settings....)
  • Erik: How much support for running CLM modules from Python should we give? Negin and I talked with Julius. Especially thinking of the CPP token he was using. He does have a pretty easy flow using F2Py that just requires building a shared library file, and then can import into Python.
  • Bill: Getting rid of use cases? (https://github.com/ESCOMP/ctsm/issues/605)

Who's working on what, and priorities

Who's working on what

LILAC coupler

  • Lead: Negin (Joe for now, transitioning to Negin)
  • Consulting / reviewing: Joe, Mariana, Bill, Rocky

Toolchain rework & ease of use

  • Lead: Negin
  • Supporting: Sam
  • Consulting / reviewing: Erik, Ben

NWP configuration & resolution

  • Lead: Bill
  • Consulting / reviewing: Mike Barlage

Water isotopes: biogeophysics changes

  • Lead: Bill
  • Consulting / reviewing: David Noone, Dave Lawrence, Mike Barlage, Martyn Clark, Sean Swenson

Representative hillslope integration

  • Leads: Sean & Bill

CN Matrix integration

  • Lead: Erik
  • Consulting / reviewing: Negin

FAN integration

  • Lead: Erik
  • Consulting / reviewing: Negin

mizuRoute coupling & parallelization

  • Lead: TBD
  • Consulting / reviewing: Erik, Mariana

Pythonizing build-namelist

  • Lead: Erik
  • Supporting: Negin, Sam
  • Consulting / reviewing: Bill

Priorities

Negin:

  • Over the next month, top development priority is toolchain / ease of use
  • Second priority is LILAC coupler (for now, just keep up with what Joe is doing; then taking over from him)

Erik:

  • CN Matrix high priority
  • Misc. CESM support
  • build-namelist pythonization

Bill:

  • NWP configuration
  • Water isotopes
  • Hillslope hydrology when it comes time to integrate it (in the timing of water isotope rework)

Sam:

  • Toolchain
  • Canopy iteration speedup

FAN and f2py

We feel we should try to keep support for running FAN via f2py. We're okay with a single CPP token, especially if it's something that will be in common for all f2py things in the future. Ideally, the CPP token would only surround the bare minimum - e.g., things can be public with a comment that they're only public for the sake of f2py-based testing.

Erik noted that f2py doesn't seem to support real(r8)... we wonder what other limitations f2py might have. Negin wonders if Cython would be a reasonable alternative?

Ideally, we'd have an eye towards generality: what would we like things to look like in general to support other f2py-based functional testing? Long-term, we'd ideally like to come up with a general scheme for doing this, rather than having each module use something different. Bill notes that a number of people seem to want functionality like this. But for now, we can bring FAN in similar to how it is, with plans to add more consistency later.

Use cases

https://github.com/ESCOMP/ctsm/issues/605

Bill feels it's confusing to have use cases in addition to namelist defaults. Negin agrees.

Erik feels it would be reasonable to get rid of use cases. This might be something we want to do before the pythonization of build-namelist.

February 28, 2019

Agenda

  • Bill: Do we want/need to maintain create_crop_landunit=.false. as an option? (I think the plan was to remove this eventually, partly because this will simplify the code. But do we want to maintain it for performance reasons?) (Erik -- note, this is currently required for fates; removing it will require changes to fates)

Need for maintaining create_crop_landunit=.false.?

Conclusions: inconclusive. For now, we won't take pains to support create_crop_landunit=.false. with new options, but we won't rush to rip it out.

More general discussion about crop and BGC

Do we need a capability for crop without BGC - e.g., basically an SP mode for crop? (Noah-MP's default is to use SP for crop.)

  • We can do SP crop via generic crop right now. Is it important to allow SP with specific crops?

Dave thinks it makes more sense to use BGC mode, and just give more thought to initialization.

Noah-MP allows you to separately set whether you want SP (prescribed LAI) for natural veg. vs. crops.

Noah-MP also has a quasi-BGC mode: a super-simple BGC model. We could think about doing that. Though Dave thinks it may be similar to our model with N turned off, and it may be better not to maintain multiple BGC models. You can also turn off soil layers for BGC (but if we're just using 4 soil layers for weather, that might be sufficient, in terms of reducing the cost).

If we want / need to have BGC enabled, we might want to consider ways to reduce its cost, such as via:

  • using simpler BGC
    • we might want to change the code to avoid even allocating N variables when using supplemental nitrogen
    • however, we might find that a C-only model is so scientifically questionable that we don't want to spend the effort to try to support that
  • going to a longer time step for BGC (or just soil BGC)

Nested WRF domains

How could we handle having nested domains in CTSM?

The problem is that we have a lot of module-level data. So if multiple nests share processors (which they do, in practice), we'd run into trouble.

Noah-MP handles this by rereading its parameter structure each time it starts up a given nest.

We may want to consider packaging all of our module-level variables so that we can have separate packages for each part of the nest.

(Note that we can't simply create a single unstructured grid that smushes together all of the nest grids, because they could have different time steps, and maybe other parameters.)
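The packaging idea can be sketched like this (a Python stand-in for Fortran derived types; the field and variable names are illustrative):

```python
# Conceptual sketch: instead of module-level variables, package all model
# state into an instance, so several nests sharing the same processors each
# carry their own independent copy (with possibly different time steps).
class NestState:
    """All state for one nest: parameters plus prognostic variables."""
    def __init__(self, nest_id, dtime, ncells):
        self.nest_id = nest_id
        self.dtime = dtime                  # nests may use different time steps
        self.h2osoi = [0.3] * ncells        # illustrative prognostic field

    def step(self):
        # Advance this nest only; no shared module-level data is touched.
        self.h2osoi = [w * 0.999 for w in self.h2osoi]

# Two nests on the same processors, with different time steps and grids:
nests = [NestState(1, dtime=60.0, ncells=4), NestState(2, dtime=20.0, ncells=8)]
for nest in nests:
    nest.step()
print(nests[0].dtime, nests[1].dtime)
```

This is essentially the opposite of Noah-MP's approach of rereading its parameter structure per nest: the state travels with the nest instance, so nothing needs to be reread when control switches between nests.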

February 25, 2019: CLM-CMT meeting

Agenda

  • Bill: From CSEG meeting: For running software tests: CSEG members can use SEWG account on cheyenne, but non-CSEG members should use the relevant component's dev account
  • Bill: CN Matrix code: Should this have a close review from a scientist as well as SE, if it hasn't already?
  • Bill: Should we talk about process / timeline for bringing in big bug fixes - particularly the FUN bug? Some options I see are:
    • Bring it to master, let Bette and others (who want to use a more recent version and compare with cesm2.1 runs) live with it or deal with it on their own
    • Wait to bring this in until after isotopes are ready
    • Bring in the bug fix, but start a branch that doesn't have this fix (and future major bug fixes) but is otherwise the same as master (which we could update periodically)
    • Bring in the bug fix and keep a running list of things that would need to be backed out by Bette and others
    • Other options?
  • Erik: FAN changes include using its own hardcoded constants throughout, and introduce a CPP token so that it can be run from Python. I support this functionality, but question the CPP token. And I think the constants need to be initialized better.
  • Bill: plan for dynamic glacier conservation. Note lack of energy conservation.
  • Erik: CN matrix solution, namelist variable names: isspinup seems confusing to me, out_matrix is maybe OK. Maybe these should all have a common prefix? By the way, I think there might be XML/bld file changes that are missing -- but need to check.
  • Erik: Talk to Keith about making urban parameter tables an input to CLM directly. He agrees the change would save space, and it would also make things easier for him. Not sure if he has time to do it himself.
  • Erik: I've been directing most of my time to the LGM surface dataset creation. We have a file we are evaluating now. So I might be nearly done with that.

CN Matrix review

Should this have a science review?

Dave: they've confirmed that this hasn't changed answers (beyond roundoff-ish), so doesn't feel it needs a close science review.

For now this is coming in as an option. We'll live with the additional maintenance burden of this for now, and in the long-term might choose one of the old or new.

The main downside of this is that it increases the cost - about 20% cost increase for the whole model. We'll want to look at that if we find this is something we want to keep long-term.

Some benefits of this are:

  • Easier to add new pools, etc.
  • Gives some extra diagnostics

Dealing with FUN bug fix and other future bug fixes

Plan: allow bringing these to master, but keep track of these so we can back them out later (with git).

We could add a keyword in the ChangeSum, or just search for '[X] clm5_0' in the ChangeLog. Let's just do the latter.

So we need to be careful to make sure this section is filled out.

Idea of holding this bug fix on a separate branch until close to the next release? The problem with that is that there are a lot of people who do want the bug fix.

February 15, 2019: CLM-CMT meeting

  • Bill (from Negin): Spending more time in these meetings reviewing PRs together?
  • Dave: Discuss plans for matrix code
  • Bill: How should collaborators manage projects with git issues? The two best options I see are (1) give them write permission so they can manage projects on ESCOMP/ctsm (then they can also: push to unprotected branches, close issues, change labels/milestones/projects/assignees for issues/PRs, edit and delete comments on issues and PRs, edit releases); (2) give them read permission, keeping issues on ESCOMP/ctsm, but having them create a project in their own fork (I don't think they can directly add issues to the project, but they can add cards that reference the issue)
  • Bill: Who should lead the collaboration with Ben?
  • Upcoming tags
  • Dave: proposal call
  • Erik: Deleting old data? Old data is smaller than new data. Looking at deleting: c130529, c130927, c141219, c141229. Better management (more directories)?
  • Erik: FAN added juliusvira to collaborator list as "read"
  • Erik: Urban lookup table idea to save space (#633)
  • Erik: By the way, Louisa asking for f02 resolution...
  • Erik: Development of contrib scripts to make easier: merge PTCLM user-mods-directory setup? Add testing?
  • Erik: Notes from the tutorial: noted that people here are using older methodologies; should we teach newer ones?
  • Erik: Thinking about making changes more modular (e.g., FAN), rather than putting their guts into existing data structures; it often makes sense to keep them in their own types. Yesterday we talked about FAN doing its own BalanceCheck, which could be called from the main balance check, or just when FAN is run.

Matrix code

It sounds like this is getting close. Erik has gone through this with them.

We should have them submit a PR. Maybe we can walk through some of it all together.

Collaborator repo access

We're inclined to leave most external collaborators at read access for now; will look into project management solutions for them.

Deleting old surface data

We're okay with deleting old surface datasets

Create a sub-directory each time we make surface datasets? This might help. Maybe name each directory after the ctsm tag the datasets were created with.

Urban lookup table idea

We're happy with the idea of moving spatial urban parameters off of the surface dataset, instead applying the lookup table based on urban ID at runtime instead of mksurfdata_map time.

We'll talk with Keith or Sam about this.
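A minimal sketch of the lookup-table idea (parameter names invented; the real table would come from the urban properties data):

```python
# Sketch: the surface dataset stores only an urban region ID per gridcell,
# and the model expands that into full spatial urban parameters at runtime
# instead of baking them into the dataset at mksurfdata_map time.
URBAN_TABLE = {
    # region_id: illustrative parameters (roof albedo, wall thickness)
    1: {"alb_roof": 0.20, "wall_thick": 0.30},
    2: {"alb_roof": 0.35, "wall_thick": 0.15},
}

def urban_params_for_gridcells(region_ids):
    """Expand per-gridcell region IDs into per-gridcell parameter dicts."""
    return [URBAN_TABLE[rid] for rid in region_ids]

params = urban_params_for_gridcells([1, 2, 2])
print(params[1]["alb_roof"])
```

The space saving comes from the surface dataset carrying one integer ID per gridcell instead of every urban parameter field.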

FAN

FAN having its own balance check? Makes sense to us in principle, at least. Note that there will still be an overall, summary nitrogen flux that appears in the main CN Balance Check.

Making changes more modular: Makes sense to us, but the question is whether it's reasonable to ask the developer to spend time on that. Maybe we'll ask him to do it, but tell him to let us know if it would be an undue time burden. Examples for this are ch4Mod.F90 and VOCEmissionMod.F90.

February 13, 2019

Landunit collapsing

Idea is to collapse landunits at runtime

Dave: for each landunit, give it a minimum weight with its own namelist option: lake_min_pct, wetland_min_pct, etc.

What about for urban? For now, we'll have separate thresholds for each urban landunit.

We also want an option for n_dominant_landunits. So we'll have that and the min threshold.

Special case: can set the threshold to 100% to turn off a landunit completely.

We'll need to think through a bunch of edge cases.

We may want to remove the setting to 0 of small landunits in mksurfdata_map, just letting the removal of small landunits happen at runtime (so the default would be 1% for lakes, etc.).
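The collapsing rules discussed above might look roughly like this (function and argument names invented):

```python
# Sketch of runtime landunit collapsing: drop landunits below their
# per-landunit threshold, optionally keep only the n most dominant, then
# renormalize the remaining weights to sum to 100%.
def collapse_landunits(weights, min_pct, n_dominant=None):
    """weights/min_pct: dicts of landunit -> percent. Returns new weights."""
    kept = {lu: w for lu, w in weights.items() if w >= min_pct.get(lu, 0.0)}
    if n_dominant is not None:
        top = sorted(kept, key=kept.get, reverse=True)[:n_dominant]
        kept = {lu: kept[lu] for lu in top}
    total = sum(kept.values())
    return {lu: 100.0 * w / total for lu, w in kept.items()}

weights = {"veg": 60.0, "lake": 25.0, "wetland": 10.0, "urban_tbd": 5.0}
# Setting lake's threshold above 100% turns lakes off entirely, per the
# special case noted above:
print(collapse_landunits(weights, {"lake": 101.0, "wetland": 5.0}))
```

The edge cases we'd need to think through include what happens when everything falls below threshold, and how the renormalization interacts with special landunits like glaciers.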

Streamlining the process for creating surface datasets

Mariana suggests using ESMF mesh files rather than SCRIP grid files.

Mariana points out: if we make mksurfdata_map an ESMF application, we could do the mapping piece during mksurfdata_map rather than as a separate, earlier step. Do we want to think about this?

  • Update: Online regridding does NOT offer the chunking approach. So you could blow memory if you tried to do online regridding on a small (e.g., serial) machine with the 1km source grid.

Need to think about the build and submission of batch jobs, given that we don't want to depend on cime for this, ideally (since NWP applications won't necessarily have a cime port to their machine).

  • For the build: we may be able to make things more robust with a cmake-based build (e.g., with FindESMF).
  • For the batch submission, maybe just let the user do their own qsub (qsubbing the whole thing, perhaps), rather than trying to abstract the batch job submission as happens for cime.

Idea of using tiling (just reading in a subset of the raw dataset - particularly for the 1 km raw dataset)....

  • Rocky: ESMF doesn't have the capability to easily subset a grid: user would need to do that
  • Ben's chunking approach: Rocky thinks that that could even allow you to run on a single processor (though it will be slow). Ben's approach hasn't been used in production yet, but it has been tested.

Our initial inclination is to try to go with Ben's chunking approach.
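Why chunking bounds memory can be seen in a toy version of conservative remapping (this is not Ben's actual code; in reality the overlap weights would come from ESMF and the source grid would be read chunk by chunk from disk):

```python
# Sketch: stream the (huge) source grid in chunks, accumulating
# weight * value and weight per destination cell, so only one chunk of
# source data needs to be resident at a time.
def chunked_conservative_remap(src_values, overlaps, ndest, chunk_size=2):
    """overlaps[i] = list of (dest_cell, overlap_area) for source cell i."""
    num = [0.0] * ndest   # running sum of area * value
    den = [0.0] * ndest   # running sum of overlap area
    for start in range(0, len(src_values), chunk_size):
        # In the real workflow, this is where one chunk of the 1 km raw
        # dataset would be read from disk and then released.
        for i in range(start, min(start + chunk_size, len(src_values))):
            for dest, area in overlaps[i]:
                num[dest] += area * src_values[i]
                den[dest] += area
    return [n / d if d > 0 else 0.0 for n, d in zip(num, den)]

# Four source cells mapping onto two destination cells:
src = [1.0, 2.0, 3.0, 4.0]
olap = [[(0, 1.0)], [(0, 0.5), (1, 0.5)], [(1, 1.0)], [(1, 1.0)]]
print(chunked_conservative_remap(src, olap, ndest=2))
```

Because the accumulators are sized by the (small) destination grid, the memory footprint is independent of the source grid size, which is what would let this run even on a serial machine with the 1 km source data.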

What about the user interface for this single-script approach? This could take some thought - particularly if the user wants to use their own raw datasets (do we want a config file? and/or something like user_nl that lets you override some things, leaving others at their default).

What about the build? We'd probably have a separate build step (as opposed to folding the build into the one-step script). The two things that need to be built now are mksurfdata_map and gen_domain. For WRF applications, we probably don't need gen_domain since the WRF grid will specify the mask.

What are the pieces here?

  • Translating WRF GeoGrid into ESMF mesh

    • Part of this will be replacing the gen_domain step with something else (extracting the mask and creating the domain file in some other way)
    • Negin and Mike will lead this, putting together a python module to do this translation
  • Integrating with Ben's chunking ESMF wrapper

  • Make mksurfdata_map build more robust and general across machines (cmake build?)

    • Lower priority: For now we can just make our make-based build a little more robust/general, forcing user to point to their netcdf library explicitly
  • Pulling everything together into a single python script

  • Good user interface for the script (including pointing to different raw data files)

    • Maybe we can leverage the cime functionality to generate a namelist with a user_nl-like mechanism. (We probably want to rewrite mksurfdata.pl in python.)

January 31, 2019

Agenda

  • Bill: discuss tools needs

  • Bill: schedule meeting Wed, Feb 13 with Sam for further discussion of tools needs, etc. (Who should be there?)

MizuRoute update

Priorities

Dave's suggested priority list

  1. Parallelization (in progress)

  2. Coupling without lakes

  3. Coupling with lakes

  4. Other things, like water tracers

Coupling MizuRoute to other systems, and LILAC

Mike asks: How could we couple MizuRoute to systems other than CESM/cime, e.g., in WRF-Hydro? Could we use LILAC to couple MizuRoute to WRF-Hydro without CTSM?

Bill: The current plan for LILAC is that it will require CTSM. Initially, LILAC will just couple to CTSM, not a river model. In the next phase, LILAC would couple CTSM and a river model (though that's not necessarily easy). We haven't talked about the possibility of LILAC coupling to a river model without CTSM.

In principle, if Noah-MP had a cap that looked just like CTSM's LILAC cap (ESMF), we guess LILAC could be used to couple Noah-MP with MizuRoute into WRF-Hydro.

But as we start to talk about more general coupling, we might want to think more about NUOPC rather than LILAC.

Coupling MizuRoute to cime

Sean points out that a good thing to have in place for this would be enabling the data land model (DLND) to send a runoff stream.

In late Feb, let's try to engage Mariana to work on a NUOPC cap for MizuRoute, maybe together with Negin.

We'll talk to Mariana about these priorities:

  • NUOPC cap for MOSART
  • data land in runoff mode
  • then move on to NUOPC cap for MizuRoute

Data set generation - ease of use and flexibility

General thoughts

Mike: Big issue is that every WRF user has a different domain.

Could we use something other than area-conservative remapping?

Sean: Do we really need to do a rigorous area-conservative remapping from raw datasets to surface datasets?

Bill & Mike: WRF gives the ability to use different regridding methods depending on the relative ratio of the raw to final grids. The issue is: area-conservative is a nice, general method that gives reasonable results regardless of that ratio. Other methods could give you problems (at least for some surface dataset fields) for certain raw:final ratios. It seems possible to do this for CTSM, but it would take some thought and work.
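
For illustration, a WRF-style rule of thumb for picking a method from the raw:final resolution ratio might look like this. The thresholds and method choices are invented for the sketch, not taken from WRF or CTSM:

```python
# Invented rule of thumb for choosing a regridding method from the ratio of
# target to raw resolution, in the spirit of WRF's per-field interp options.
# Thresholds and method names are illustrative, not WRF's or CTSM's.

def choose_regrid_method(raw_res_km, target_res_km):
    ratio = target_res_km / raw_res_km
    if ratio >= 4:
        # Many raw cells per target cell: conservative averaging is safe.
        return "conservative"
    if ratio >= 1:
        # Comparable resolutions: interpolate.
        return "bilinear"
    # Target finer than the raw data: just sample the nearest raw cell.
    return "nearest"
```

The appeal of always using area-conservative remapping is precisely that no such per-ratio decision logic is needed.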

Other ideas for dealing with high-resolution raw data

We could do something smarter in terms of just reading in subsets of the raw data at a time. (Bill thinks that someone from the ESMF group, who we were talking with a year or so ago, developed a wrapper to the ESMF tool that does something like this for parallelization.)

We could also provide multiple resolutions of raw data files.

Could we avoid some fields for the NWP configuration?

Mike asks if we could avoid some surface dataset fields for the NWP configuration.

Dave doesn't think there would be many.

Usability

There are currently a number of steps you need to take to go from a grid to a surface dataset. Ideally, this would be a one-step process.

For WRF: the initial thing you have is a WRF GeoGrid. It sounds like that's similar to our domain file.

It seems like it might not be too hard to work on this usability piece.

Datasets: WRF's vs. CTSM's

Dave: Would we want the capability to use more of WRF's information, like WRF's vegetation types? Mike started with that idea, but he's okay with abandoning that. Eventually, we could support this through CTSM's mechanisms (different raw data sets).

Initial steps

We feel that a good initial step is to work on the usability piece: having a one-step process to go from a WRF GeoGrid to everything that CTSM needs. We'll also want the capability to start with something else like a SCRIP grid file or domain file... but the first thing we'll tackle will be starting with a WRF GeoGrid (probably the first step of the process will be going from GeoGrid to SCRIP... so we could just bypass that step if you have a SCRIP grid file to begin with).

Initially, this could be a super-script on top of the existing tools. Eventually, some of this could be more integrated rather than just having the super-script call a bunch of other external things.
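
The super-script could initially be little more than a driver that runs the existing tools in sequence and stops on the first failure. A minimal sketch (the step commands here are placeholders, not the real tool invocations):

```python
# Sketch of the "super-script" driver: run the existing tools in order and
# stop at the first failure. The commands below are placeholders, not the
# real invocations of the CTSM tools.
import subprocess

STEPS = [
    ["geogrid_to_scrip", "geo_em.d01.nc", "scrip.nc"],  # hypothetical converter
    ["mkmapdata", "scrip.nc"],                          # placeholder mapping step
    ["mksurfdata_map", "-res", "usrspec"],              # placeholder surfdata step
]

def run_pipeline(steps, runner=subprocess.run):
    """Run each tool in sequence; raise on the first nonzero exit code."""
    for cmd in steps:
        result = runner(cmd)
        if result.returncode != 0:
            raise RuntimeError("step failed: %s" % " ".join(cmd))
```

Later integration work would replace some of these external calls with in-process python, but the driver's shape could stay the same.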

Speed will come later.

Initial generation of the WRF grid

There's also the problem of creating a regional WRF grid initially. But we feel that's not our problem.

However, there's also the issue of people running regional CTSM without WRF. We'll want to think about making that easier eventually, too.

CLM-CMT January 28, 2019

Agenda

Negin's first task

Pull out more parameters. We'll put them on the netcdf params file.

Katie Dagon has a list.
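
The pattern for pulling out a parameter is to replace a hardcoded literal in the code with a lookup on the params file, keeping the old literal as the default. A toy illustration (froz_q10 is a real CLM decomposition parameter name, but the Params class is only a stand-in for the netCDF params file, and the values are illustrative):

```python
# Toy illustration of pulling a hardcoded parameter onto the params file.
# Params stands in for the netCDF parameter file; froz_q10 is a CLM
# decomposition parameter, but the values here are illustrative.

class Params:
    """Stand-in for the netCDF params file: name -> value lookups."""
    def __init__(self, values):
        self._values = dict(values)

    def get(self, name, default):
        return self._values.get(name, default)

def froz_q10_scalar(t_soil_c, params, t_ref_c=0.0):
    """Q10 temperature scaling, rate = q10 ** ((T - Tref) / 10), with q10
    read from the params file instead of being hardcoded (the old literal
    is kept as the default)."""
    q10 = params.get("froz_q10", 1.5)
    return q10 ** ((t_soil_c - t_ref_c) / 10.0)
```

Once a parameter lives on the file, tools like Katie's perturbed-parameter workflows can vary it without code changes.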

January 9, 2019

Agenda

  • Erik: Need to import atmospheric CH4 from CAM to CIME/CPL to CTSM? And send rates back from CTSM? Is this a future requirement? Or a dead one that we aren't going to support?
  • Erik: New bug about blocky patterns (#608)?
  • Erik: Process to do single point simulations, for CTSM tutorial? Need to decide on and get a group together to discuss it (Sean, Will, Keith, Erik, and Danica I think are the key people)
  • Erik: Go over upcoming tags for master and release-clm5.0
  • Dave: As part of FATES land use change development, I was tasked with inquiring about how difficult it would be to add additional landunits (e.g., secondary land and pastureland).
  • Bill: Make sure we're all in agreement about what the NWP compset's defaults will be
  • Bill: What should happen if you try to run NWP at some other resolution? Should it abort? Should it let you do it, noting that the configuration won't be set up quite right (since we're using a custom surface dataset for the target resolution)?
  • Bill: Similarly, what should happen if you try to run the NLDAS grid with a CLM configuration?
  • Erik: How can we make sure PR reviews are timely? Should we reserve time to do PR reviews? Put suggested deadlines in the request?
  • Erik: What resolutions should we create landuse.timeseries files for (and for which SSP-RCP combos)?
  • Erik: Note: GRAZING is zero now, UNREPRESENTED_* will be zero until Peter creates a new set of data that includes shifting cultivation, that comes to master.
  • Erik: Note: Peter and I talked about the fact that mksurfdata_map's interpolation includes zeros, which is not ideal for fertilizer. It should really mask and only use grid cells that contain data for that CFT. He's going to change the raw data so that zeros will be filled in with nearest-neighbor data to compensate for this.
  • Bill: Go through open PRs

Atmospheric CH4

Dave: There has been discussion in the past about wanting to couple methane, but there hasn't been any movement on this recently. So we shouldn't remove this capability.

We should (eventually) change things to listen to what CAM is doing and be consistent with that. But this is fairly far down the priority list. Initially, maybe have Keith do a test with different methane values.

Short-term fix is that you can modify the param file to use what CAM is using (if they're using a constant value, which is probably the case for paleo).

FATES (etc.) - question of additional landunits

Bill's pre-meeting thoughts on Dave's additional landunit question

In the past, I've been hesitant about adding a new landunit unless it's really needed (because the landunit is being treated in fundamentally different ways from other landunits - in ways that cannot easily be captured via different parameter values). That said, it may be the case that pastureland truly does operate in a fundamentally different way (e.g., some old notes suggested that pastureland would not invoke hillslope hydrology's multiple columns). Furthermore, the addition of hillslope hydrology could argue for using landunits rather than columns for things like this.

However, I'd really like to have a meeting (or series of meetings) to all get on the same page about where we're going with the subgrid hierarchy over the next 5-10 years. In addition to TSS/LMWG folks, this should also include one or more LIWG scientists. I'd like any major changes in the subgrid hierarchy to be made with consideration of how that change might interact with other upcoming changes, and with an eye towards consistency. (I'm not suggesting that hasn't been the case to date - mainly that I don't feel like I have a clear enough vision of this myself.)

Assume we move ahead with the idea of adding one or two new landunits: Before any new landunits are added, I'd want to see a cleanup of the existing code: There are many places that check if the landunit type is soil or crop. This is bad enough as is, but gets nightmarish if we introduce a 3rd or 4th vegetated landunit. See https://github.com/ESCOMP/ctsm/issues/5

Once that refactoring is done, I suspect it will not be too hard to add a new landunit to the code - but never having done it myself, the hard bits are likely to be the ones I'm not thinking of :-). Some care will be needed in mksurfdata_map and init_interp, including ensuring backwards compatibility with old initial conditions files (if that is needed). One challenge is: for the sake of understandability and (possibly) performance, it could be good to keep similar landunits grouped together in the landunit ordering. We currently have an empty slot at landunit #3, but that could only accommodate one new landunit, not two. So adding two new landunits would require shifting the indices of the special landunits down by one. I'm not sure what implications that may have for (a) init_interp, and (b) people's post-processing scripts.
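
To illustrate the index-shifting concern: if two vegetated landunits are inserted ahead of the special landunits, init_interp (and users' post-processing scripts) would need an old-to-new index map, e.g. built by matching landunit names. The indices and names below are illustrative only, not the actual CTSM values:

```python
# Illustrative old->new landunit index maps if two vegetated landunits are
# inserted before the special landunits. Indices and names are NOT the
# actual CTSM values; they only demonstrate the remapping init_interp (and
# post-processing scripts) would need.

OLD_LANDUNITS = {1: "vegetated", 2: "crop", 5: "urban_tbd", 6: "urban_hd",
                 7: "urban_md", 8: "lake", 9: "glacier"}
NEW_LANDUNITS = {1: "vegetated", 2: "crop", 3: "secondary", 4: "pasture",
                 6: "urban_tbd", 7: "urban_hd", 8: "urban_md", 9: "lake",
                 10: "glacier"}

def build_index_map(old, new):
    """Map old landunit index -> new index by matching landunit names."""
    name_to_new = {name: idx for idx, name in new.items()}
    return {old_idx: name_to_new[name] for old_idx, name in old.items()}
```

Matching by name rather than position is what would let init_interp read an old initial conditions file against the new ordering.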

Discussion at today's meeting

Dave: Scientifically, it makes sense to have different landunits for different land use, including secondary forest. This makes it easier to track, etc. In addition, you'd have different code for pasture (grazing, etc.)

Bill and Erik see these points, and agree that it could make sense to introduce additional landunits for this. Some care will be needed in init_interp, and some thought will be needed as to how (if at all) we'll support collapsing things down (e.g., for NWP, where performance may matter more).

Summary: this is worth doing, but getting all of the pieces in place could be a fair amount of work.

NWP: mixing and matching NWP compset/grid

Feelings from Erik: If you use a standard grid with NWP compset, abort. And vice versa.

Dave: Let Mike decide what he wants.

Upcoming priorities

For Bill: After NWP configuration, getting water isotopes in place remains a high priority.

For Erik: After no-anthro, matrix solution. After that, maybe pythonization of build-namelist.

PR review process

We want to shoot for having reviews done within a week of a PR being opened. Sometimes we'll want faster turnaround than that, so we should try to prioritize reviewing things.

Erik and I will generally add each other as reviewers, but can say, "this probably doesn't need review". If, say, Erik is added as a reviewer and doesn't feel a need to review something, he can remove himself.
