Running Futurize Workspace - npalmer-professional/HARK-1 GitHub Wiki

Running Futurize:

Now that I have manually run most of the files, run futurize on a copy and see what would be changed. Follow Matt's list below.

Checklist of files to run Futurize on:

First-Pass, Exploring Futurize

Have Run Futurize and Checked results:

Note: when the suggested changes are simple, they are added below the filename. When complex, I describe the changes requested by "futurize."

Note: there is a useful SO discussion of why "builtins" is used. See also: http://python-future.org/imports.html

Pros and cons:

  • Pros:
builtins makes Py3-style versions of range (in particular) available in the code, giving behavior that is as consistent as possible between Py2 and Py3.
  • Cons:
Anaconda appears to install builtins automatically, but a user who has "cooked their own" Python install will need to manually install "builtins" (shipped with the "future" package).
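
To make the range point concrete, a minimal sketch (on Py3, builtins is simply the standard-library module, so the import is a harmless alias; on Py2 it requires the "future" package):

```python
# On Py3 "builtins" is the stdlib module, so this import is a no-op alias;
# on Py2 it needs the "future" package and swaps in the lazy Py3-style range.
from builtins import range

r = range(10**9)   # lazy in both interpreters -- no billion-element list
print(r[5])        # → 5 (indexing works without materializing the sequence)
print(len(r))      # → 1000000000
```

With plain Py2 range, the same call would try to build a billion-element list; this is exactly the kind of silent behavioral difference the builtins imports paper over.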

So I am going to run futurize on all the files one by one, manually make the edits, and then manually run each file.

  • HARKcore.py

    • from __future__ import print_function
    • from builtins import str, range, object
  • HARKestimation.py

    • +from __future__ import print_function
    • +from builtins import str

  • HARKinterpolation.py

    • NOTE: 'futurize' suggested adding many "old_div" wrappers to the code, which make the code "run like old Python." This is not the behavior we want.
    • Note that if I instead first add "from __future__ import division", then futurize only suggests these additions: +from __future__ import print_function +from builtins import range
  • HARKsimulation.py

  • HARKutilities.py

    • +from __future__ import print_function
    • +from builtins import str
    • +from builtins import range
    • +from builtins import object
    • -class NullFunc(): +class NullFunc(object):

  • ./ConsumptionSaving/

    • ConsIndShockModel.py
      • from __future__ import division
      • +from __future__ import print_function
      • +from builtins import str
      • +from builtins import range
      • +from builtins import object
    • ConsumerParameters.py
    • ConsAggShockModel.py
      • from __future__ import division
      • +from __future__ import print_function
      • +from builtins import str
      • +from builtins import range
  • ./ConsumptionSaving/

    • ConsMarkovModel.py
    • TractableBufferStockModel.py
    • ConsRepAgentModel.py
    • ConsGenIncProcessModel.py
    • ConsMedModel.py
    • RepAgentModel.py
    • ConsPrefShockModel.py
  • HARKparallel.py

  • ./ConsumptionSaving/

    • ./ConsIndShockModel-Demos/Try-Alternative-Parameter-Values.py
    • ./Demos/Fagereng_demo.py
    • ./Demos/Chinese_Growth.py
    • ./Demos/MPC_credit_vs_MPC_income.py
    • ./Demos/NonDurables_During_Great_Recession.py
  • ./FashionVictim/

    • FashionVictimParams.py
    • FashionVictimModel.py
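
On the "old_div" note above: without a division future-import, futurize wraps every `/` in `past.utils.old_div` to preserve Py2's truncating integer division. Adding the division import first gives Py3 semantics in both interpreters instead. A minimal illustration (the import is a no-op on Py3):

```python
from __future__ import division  # no-op on Py3; makes "/" true division on Py2

print(7 / 2)    # → 3.5 in both interpreters once division is imported
print(7 // 2)   # → 3  (floor division -- what old Py2 "/" did for ints)
```

This is why running futurize *after* adding the division import produces a much smaller, cleaner diff.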

...OK, I have seen enough to be convinced: run futurize, then re-run the code to double-check.

Step 1: tag the commit prior to doing all this.

Done -- see tag "prefuturize."

Step 2: Run futurize on all the core files

... execute suggestions, and then test-run some of the main files.

Using these two commands to futurize, since futurize just prints its suggested diff to stdout:

# First command: look at the suggested changes; if reasonable, run the second command
futurize filename.py >> ~/workspace/HARK-1/nate-notes/futurize_notes.md
# Second command: same as above, but with "-w" to actually write the changes back to the source file
futurize -w filename.py >> ~/workspace/HARK-1/nate-notes/futurize_notes.md

...note that results are appended to a notes file. This way we can look up what was changed and what might have gone wrong. This notes file will be included here as well.
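
The two-pass workflow above can also be scripted. This is a hypothetical helper (the function name is mine, not part of HARK or futurize), shown only to make the dry-run/write distinction explicit:

```python
def futurize_cmd(filename, write=False):
    """Build the futurize invocation: plain = print the suggested diff only,
    "-w" = actually rewrite the source file in place."""
    return ["futurize", "-w", filename] if write else ["futurize", filename]

print(futurize_cmd("HARKcore.py"))              # dry run: review the diff first
print(futurize_cmd("HARKcore.py", write=True))  # second pass: apply the changes
```

In practice each command's stdout is appended to futurize_notes.md as shown above, so there is a record of every suggested and applied change.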

Round 1: Files to Futurize:

  • HARKcore.py
  • HARKestimation.py
  • HARKinterpolation.py
  • HARKsimulation.py
  • HARKutilities.py
  • HARKparallel.py
    • Note: this is promoted to Round 1 because it is used by a couple of the models.

Files to run as test-files:

Literally just run these files, for both py2 and py3:

  • HARKcore.py

    • py3
    • py2
  • HARKestimation.py

    • py3
    • py2
  • HARKinterpolation.py

    • py3
    • py2
  • HARKsimulation.py

    • py3
    • py2
  • HARKutilities.py

    • py3
    • py2
  • HARKparallel.py

    • py3
    • py2
  • ./ConsumptionSaving/

    • ConsIndShockModel.py
      • py3
      • py2
    • ConsumerParameters.py
      • py3
      • py2
    • RepAgentModel.py
      • py3
      • py2
    • ConsPrefShockModel.py
      • py3
      • py2

Round 2: Files to Futurize:

  • ./ConsumptionSaving/

    • ConsIndShockModel.py
    • ConsumerParameters.py
    • ConsAggShockModel.py
  • ./ConsumptionSaving/

    • ConsMarkovModel.py
    • TractableBufferStockModel.py
    • ConsRepAgentModel.py
    • ConsGenIncProcessModel.py
    • ConsMedModel.py
    • RepAgentModel.py
    • ConsPrefShockModel.py
  • ./ConsumptionSaving/

    • ./ConsIndShockModel-Demos/Try-Alternative-Parameter-Values.py
    • ./Demos/Fagereng_demo.py
    • ./Demos/Chinese_Growth.py
    • ./Demos/MPC_credit_vs_MPC_income.py
    • ./Demos/NonDurables_During_Great_Recession.py
  • ./FashionVictim/

    • FashionVictimParams.py
    • FashionVictimModel.py
  • ./Testing/

    • TractableBufferStockModel_UnitTests.py
    • MultithreadDemo.py
    • ModelTesting.py
    • HARKutilities_UnitTests.py
    • Comparison_UnitTests.py
  • ./SolvingMicroDSOPs/

    • SetupSCFdata.py
    • EstimationParameters.py
    • StructEstimation.py

Files to run as test-files:

Literally just run these files, for both py2 and py3:

  • ./ConsumptionSaving/

    • ConsIndShockModel.py
      • py3
      • py2
      • NOTE: Py3 appears to be slightly faster:
      • Py3:
        • Solving a perfect foresight consumer took 0.1069 seconds.
        • Solving a consumer with idiosyncratic shocks took 0.3321 seconds.
        • Solving a lifecycle consumer took 0.0101 seconds.
        • Solving a cyclical consumer took 0.2033 seconds.
        • Solving a kinky consumer took 0.1273 seconds.
      • Py2:
        • Solving a perfect foresight consumer took 0.1393 seconds.
        • Solving a consumer with idiosyncratic shocks took 0.1428 seconds.
        • Solving a lifecycle consumer took 0.0091 seconds.
        • Solving a cyclical consumer took 0.2111 seconds.
        • Solving a kinky consumer took 0.1331 seconds.
    • ConsumerParameters.py
      • py3
      • py2
  • ./ConsumptionSaving/

    • ConsMarkovModel.py
      • py3: AttributeError: 'MarkovSmallOpenEconomy' object has no attribute 'PermShkAggDstn'
      • py2: AttributeError: 'MarkovSmallOpenEconomy' object has no attribute 'PermShkAggDstn'
    • TractableBufferStockModel.py
      • py3
      • py2
    • ConsRepAgentModel.py
      • py3
      • py2
    • ConsGenIncProcessModel.py
      • py3
      • py2
      • NOTE: Py2 appears to be faster:
      • Py3:
        • Solving an explicit permanent income consumer took 43.9469 seconds.
        • Solving a persistent income shocks consumer took 47.8044 seconds.
      • Py2:
        • Solving an explicit permanent income consumer took 8.3109 seconds.
        • Solving a persistent income shocks consumer took 9.7948 seconds.
    • ConsMedModel.py
      • py3
      • py2
      • NOTE: Py3 appears to be slower:
      • Py3:
        • Solving a medical shocks consumer took 140.1519 seconds.
        • Simulating 10000 agents for 100 periods took 19.5659 seconds.
      • Py2:
        • Solving a medical shocks consumer took 27.7461 seconds.
        • Simulating 10000 agents for 100 periods took 17.8634 seconds.
    • RepAgentModel.py
      • py3
      • py2
      • NOTE: Py3 appears to be mildly faster:
      • Py3:
        • Solving a representative agent problem took 0.14075600000000055 seconds.
        • Simulating a representative agent for 2000 periods took 0.48373299999999997 seconds.
        • Solving a two state representative agent problem took 0.37450399999999995 seconds.
        • Simulating a two state representative agent for 2000 periods took 0.4633750000000001 seconds.
      • Py2:
        • Solving a representative agent problem took 0.143237 seconds.
        • Simulating a representative agent for 2000 periods took 0.568245 seconds.
        • Solving a two state representative agent problem took 0.38869 seconds.
        • Simulating a two state representative agent for 2000 periods took 0.50153 seconds.
    • ConsPrefShockModel.py
      • py3
      • py2
      • NOTE: Py3 appears to be mildly faster:
      • Py3:
        • Solving a preference shock consumer took 0.33151200000000014 seconds.
        • Solving a kinky preference consumer took 0.2705059999999997 seconds.
      • Py2:
        • Solving a preference shock consumer took 0.363709 seconds.
        • Solving a kinky preference consumer took 0.29933 seconds.
  • ./ConsumptionSaving/

    • ./ConsIndShockModel-Demos/Try-Alternative-Parameter-Values.py
      • py3
      • py2
    • ./Demos/Fagereng_demo.py
      • py3
      • py2
      • NOTE: Py3 appears to be mildly faster (although some executions, not recorded, were clearly faster).
      • Py3:
        • Time to estimate is 128.98272609710693 seconds.
      • Py2:
        • Time to estimate is 129.340675831 seconds.
    • ./Demos/Chinese_Growth.py
      • py3
      • py2
    • ./Demos/MPC_credit_vs_MPC_income.py
      • py3
      • py2
    • ./Demos/NonDurables_During_Great_Recession.py
      • py3
      • py2
  • ./FashionVictim/

    • FashionVictimParams.py
      • py3
      • py2
    • FashionVictimModel.py
      • py3
      • py2
      • NOTE: Py3 error: AttributeError: Can't pickle local object 'FashionVictimType.update..'
        • Fixed by addressing the issue in HARKparallel.py, which is explicitly relied on for parallelizing Market.solveAgents().
          • Thought this was supposed to be joblib-free?
      • Two things:
        • fixed joblib reliance
        • NOTE: Py3 hits a joblib error where the Py2 approach worked! Not solved yet; added code to fall back to serial execution when parallel execution fails.
        • NOTE AGAIN: may be able to fix this with Numba's prange.
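
The pickling failure noted for FashionVictimModel is a general Py3 constraint: functions defined inside another function (or lambdas) cannot be pickled, so they cannot be shipped to worker processes. A minimal reproduction, plus a sketch of the serial-fallback pattern described above (this is illustrative, not the actual HARKparallel.py code; `solve_all`, `make_local`, and `module_fn` are hypothetical names):

```python
import pickle

def make_local():
    def local_fn(x):   # defined inside another function -> not picklable
        return x + 1
    return local_fn

def module_fn(x):      # top-level (module-scope) functions pickle fine
    return x + 1

try:
    pickle.dumps(make_local())
except (pickle.PicklingError, AttributeError) as err:
    print("cannot pickle local function:", err)

pickle.dumps(module_fn)  # works

# Serial-fallback pattern (sketch): try the parallel path, and drop back to a
# plain loop if the payload cannot be shipped to workers.
def solve_all(agents, parallel_map=None):
    if parallel_map is not None:
        try:
            return list(parallel_map(module_fn, agents))
        except (pickle.PicklingError, AttributeError):
            pass  # e.g. un-picklable local functions or bound methods
    return [module_fn(a) for a in agents]

print(solve_all([1, 2, 3]))   # → [2, 3, 4]
```

This is why code that parallelized fine under Py2's pickling behavior can break under Py3, and why a serial fallback is a reasonable stopgap.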
  • ./SolvingMicroDSOPs/

    • SetupSCFdata.py
      • py3
      • py2
    • EstimationParameters.py
      • py3
      • py2
    • StructEstimation.py
      • py3
      • py2
  • ./ConsumptionSaving/

    • ConsAggShockModel.py
      • py3
      • py2
      • Note: the Py3 version claims to be much slower; however, the reported time appears to be longer than the wall-clock time for which I actually ran it -- I am unclear why. We need to externally confirm that this code really takes much longer under one version than the other. (One remote but non-trivial possibility: Py3 may have changed how the timing functions work; we need to confirm that we are recording the times we intend to record when we run these.)
      • Py3:
        • Now solving for the equilibrium of a Cobb-Douglas economy. This might take a few minutes...
          • Solving the "macroeconomic" aggregate shocks model took 445.02524 seconds.
        • Now solving a two-state Markov economy. This should take a few minutes...
          • Solving the "macroeconomic" aggregate shocks model took 744.1959099999999 seconds.
        • Now solving a Krusell-Smith-style economy. This should take about a minute...
          • Solving the Krusell-Smith model took 73.90300000000002 seconds.
        • Now solving an economy with 15 Markov states. This might take a while...
          • Solving a model with 15 states took 7172.353585000001 seconds.
      • Py2:
        • Now solving for the equilibrium of a Cobb-Douglas economy. This might take a few minutes...
          • Solving the "macroeconomic" aggregate shocks model took 152.691451 seconds.
        • Now solving a two-state Markov economy. This should take a few minutes...
          • Solving the "macroeconomic" aggregate shocks model took 233.763381 seconds.
        • Now solving a Krusell-Smith-style economy. This should take about a minute...
          • Solving the Krusell-Smith model took 70.716613 seconds.
        • Now solving an economy with 15 Markov states. This might take a while...
          • Solving a model with 15 states took 1932.730768 seconds.
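
On the timing question above: one way a reported time can exceed wall-clock time is if the timer records CPU time summed across threads or processes rather than elapsed time (and note that Py2's common `time.clock` idiom behaves differently across platforms, and was removed in Py3.8). A quick diagnostic sketch contrasting the two Py3 timers:

```python
import time

start_wall = time.perf_counter()   # wall-clock elapsed time
start_cpu = time.process_time()    # CPU time of this process only

total = sum(i * i for i in range(10**6))   # some busy work to time

elapsed_wall = time.perf_counter() - start_wall
elapsed_cpu = time.process_time() - start_cpu
print("wall: %.4f s  cpu: %.4f s" % (elapsed_wall, elapsed_cpu))
```

If the HARK timing code is reporting something like CPU time while we interpret it as wall-clock time, that would explain a reported duration longer than the actual run.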
  • ./Testing/

    • NOTE: this just answers whether these files run, not whether the tests all pass. Some fail, but that should be resolved at another time.
    • TractableBufferStockModel_UnitTests.py
      • py3
      • py2
    • MultithreadDemo.py
      • py3
      • py2
      • Note: Py3 again marginally faster:
      • Py3:
        • Solving the basic consumer took 0.6166 seconds.
        • Solving 32 types without multithreading took 30.6242 seconds.
        • Solving 32 types with multithreading took 0.1223 seconds.
      • Py2:
        • Solving the basic consumer took 0.6320 seconds.
        • Solving 32 types without multithreading took 32.3103 seconds.
        • Solving 32 types with multithreading took 0.2214 seconds.
    • ModelTesting.py
      • Note: because of the output message, it is unclear whether the code ran correctly. I think it did...
      • py3
      • py2
    • HARKutilities_UnitTests.py
      • py3
      • py2
    • Comparison_UnitTests.py
      • Note: because of the output message, it is unclear whether the code ran correctly. I think it did...
      • py3
      • py2

Workspace

The list template:

  • HARKcore.py

  • HARKestimation.py

  • HARKinterpolation.py

  • HARKsimulation.py

  • HARKutilities.py

  • HARKparallel.py

    • Note: this is promoted because it is used by a couple of the models.
  • ./ConsumptionSaving/

    • ConsIndShockModel.py
    • ConsumerParameters.py
    • ConsAggShockModel.py
  • ./ConsumptionSaving/

    • ConsMarkovModel.py
    • TractableBufferStockModel.py
    • ConsRepAgentModel.py
    • ConsGenIncProcessModel.py
    • ConsMedModel.py
    • RepAgentModel.py
    • ConsPrefShockModel.py
  • ./ConsumptionSaving/

    • ./ConsIndShockModel-Demos/Try-Alternative-Parameter-Values.py
    • ./Demos/Fagereng_demo.py
    • ./Demos/Chinese_Growth.py
    • ./Demos/MPC_credit_vs_MPC_income.py
    • ./Demos/NonDurables_During_Great_Recession.py
  • ./FashionVictim/

    • FashionVictimParams.py
    • FashionVictimModel.py
  • ./Testing/

    • TractableBufferStockModel_UnitTests.py
    • MultithreadDemo.py
    • ModelTesting.py
    • HARKutilities_UnitTests.py
    • Comparison_UnitTests.py