wjt - Daria-Maltseva/mixedmethods GitHub Wiki
Temporal WJ
File preparation
Originally, we should use WJr.net and DC1.clu.
The WJr network should be simplified: all line values set to 1.
(!) Be sure that these files are saved without a BOM, contain no [] characters (replace them with a space), and are saved in the .net and .clu formats.
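The cleanup above can be scripted. A minimal sketch (the helper name `clean_pajek_file` is ours, not part of Pajek) that strips a UTF-8 BOM and blanks out square brackets before saving:

```python
def clean_pajek_file(src, dst):
    """Read a .net/.clu file, drop any UTF-8 BOM, replace [ and ] with spaces."""
    with open(src, encoding="utf-8-sig") as f:   # utf-8-sig silently drops a BOM
        text = f.read()
    text = text.replace("[", " ").replace("]", " ")
    with open(dst, "w", encoding="utf-8") as f:  # written back without a BOM
        f.write(text)
```

After this, the file can be opened in Pajek without the BOM or bracket problems.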
###############################
We (might) need to remove vertex 18170 (*****).
Original WJr (22306)
2-Mode Network: Rows=18169, Cols=4137
DC1.clu (18169)
Removing 18170 results in WJsr.net (22305 nodes, 18169 works).
It turned out that there are cases in which works are connected to several journals (a work can be connected both to ***** and to a journal). These did not disappear even after we removed ***** (because the work is also connected to a journal whose name is written in a different way). Compare:
==============================================================================
53. Output Degree Partition of N4 (22306)
==============================================================================
Dimension: 22306
The lowest value: 0
The highest value: 2
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 4137 18.5466 4137 18.5466 *****
1 17975 80.5837 22112 99.1303 HILL_M(2018)34:71
2 **194** 0.8697 22306 100.0000 COLLINS_K(2006)4:67
----------------------------------------------------------------
Sum 22306 100.0000
==============================================================================
54. Output Degree Partition of N48 (22305)
==============================================================================
Dimension: 22305
The lowest value: 0
The highest value: 2
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 4439 19.9014 4439 19.9014 DENZIN_N(1978):
1 17850 80.0269 22289 99.9283 HILL_M(2018)34:71
2 **16** 0.0717 22305 100.0000 BERG_C(2016)40:310
----------------------------------------------------------------
Sum 22305 100.0000
The fact that a work could be counted twice changed the distribution of the number of works published per year. That is why I did the following:
Binarized outdegree partition [1-*]
2 mode network - partition into 2 modes
Binarize [2]
Selected both
Partitions - Max
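On plain Python lists of cluster values, the Pajek steps above amount to the following sketch (the function names are illustrative, not Pajek's):

```python
def binarize(clu, lo, hi=None):
    """Map values in [lo, hi] to 1, everything else to 0 ([1-*] means hi unbounded)."""
    return [1 if v >= lo and (hi is None or v <= hi) else 0 for v in clu]

def pmax(c1, c2):
    """Partitions - Max: elementwise maximum of two partitions."""
    return [max(a, b) for a, b in zip(c1, c2)]

outdeg = [0, 1, 2, 0, 1]   # toy outdegree partition
mode   = [1, 1, 1, 2, 2]   # toy 2-mode partition (1 = work, 2 = journal)
# keep works with outdegree >= 1 plus all journals; the 0s are removable
keep = pmax(binarize(outdeg, 1), binarize(mode, 2, 2))
```

In the real data this marks 22002 vertices with 1 and the 303 disconnected works with 0, which Extract then removes.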
==============================================================================
93. Max of C92 and C90 (22305)
==============================================================================
Dimension: 22305
The lowest value: 0
The highest value: 1
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 303 1.3584 303 1.3584 DENZIN_N(1978):
1 22002 98.6416 22305 100.0000 HILL_M(2018)34:71
----------------------------------------------------------------
Sum 22305 100.0000
Operations - Network + Partition - Extract
The result is (303 cases were deleted):
==============================================================================
Info on 2-Mode Network 51. Extracting N48 according to C58 [1-*] (22002)
==============================================================================
Number of vertices (n): 22002
----------------------------------------------------------
Arcs Edges
----------------------------------------------------------
Total number of lines 17882 0
----------------------------------------------------------
Number of loops 0 0
Number of multiple lines 0 0
----------------------------------------------------------
2-Mode Network: Rows=17866, Cols=4136
Density [2-Mode] = 0.00024200
Average Degree = 1.62548859
Outdegree:
==============================================================================
60. Output Degree Partition of N51 (22002)
==============================================================================
Dimension: 22002
The lowest value: 0
The highest value: 2
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 4136 18.7983 4136 18.7983 J ASSOC NURSE AIDS C
1 17850 81.1290 21986 99.9273 HILL_M(2018)34:71
2 16 0.0727 22002 100.0000 BERG_C(2016)40:310
----------------------------------------------------------------
Sum 22002 100.0000
I made a list of the 16 nodes with outdegree = 2:
*Vertices 16
1 "BERG_C(2016)40:310" 610 - (!) deleted link to "2016"
2 "NEWLIN_K(2010)59:A282" 981 19882 (1) 19884 (10) - to shrink ?
3 "KARPF_D(2015)9:1888" 1607 18000 (5) 19938 (6)
4 "GUPTA_M(2015)8:25987" 1954 19236 (1) 19317 (28)
5 "APESOA-V_E(2013)28:267"
6 "STEPNEY_P(2011)37:419"
7 "BUCHHOLT_N(2017)49:249"
8 "PACHECO_E(2015)30:725"
9 "DUBEY_R(2015)4:72"
10 "DAL-FARR_R(2013)24:67"
11 "ZILLIOX_S(2017)4:69"
12 "BLEIJENB_N(2013)53:235"
13 "KARAHAN_T(2014)14:2071"
14 "PEREZ-ES_R(2010)34:47"
15 "SOCKOLOW_P(2013)192:939"
16 "SEVKUSIC_S(2009)41:45"
I deleted one link for BERG_C(2016)40:310, then made a subnetwork and looked at the indegree (that is why there are 30 nodes):
Rank Vertex Value Id
--------------------------------------------------------
1 17 72.0000 GERONTOLOGIST
2 21 28.0000 GLOBAL HEALTH ACTION
3 22 14.0000 STUD HEALTH TECHNOL
4 24 10.0000 DIABETES
5 31 8.0000 KURAM UYGUL EGIT BIL
6 28 7.0000 ANN ANTHROPL PRACT
7 25 6.0000 INT J COMMUN-US
8 26 5.0000 EXTR IND SOC
9 16 5.0000 J COMMUN
10 42 4.0000 ZB INST PEDAGOG ISTR
11 37 4.0000 EDUC STUD-UK
12 27 2.0000 EDUC STUD
13 18 2.0000 ZDM-INT J MATH EDUC
14 33 2.0000 HOME HEALTH CARE SER
15 30 1.0000 ED SCI THEORY PRACTI
16 29 1.0000 J CROSS CULT GERONTOL
17 39 1.0000 GERONTOLOGIST S1
18 40 1.0000 NUANCES
19 41 1.0000 NAPA BULL
20 38 1.0000 ZDM-MATH EDUC
21 23 1.0000 DIABETES S1
22 45 1.0000 GOD
23 43 1.0000 EXTRACT IND SOC
24 20 1.0000 J CROSS-CULT GERONTO
25 34 1.0000 CTR ESTUDIOS DEMOGRA
26 19 1.0000 GLOB HEALTH ACTION
27 44 1.0000 SUSTAIN PROD CONSUMP
28 36 1.0000 MIXED METHODS ED THE
29 35 1.0000 SUSTAIN PROD CONSUM
30 32 1.0000 ESTUD DEMOGR URBANOS
I decided not to take these 15 cases into consideration (1 was deleted, as written above).
The result is WJsr+.net (22002) - Rows=17866, Cols=4136.
Output:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 4136 18.7983 4136 18.7983 J ASSOC NURSE AIDS C
1 17851 81.1335 21987 99.9318 HILL_M(2018)34:71
2 15 0.0682 22002 100.0000 NEWLIN_K(2010)59:A282
----------------------------------------------------------------
Sum 22002 100.0000
###############################
DC1.clu should be of the same size (DC1.clu (18169)).
WJsr.net:
Outdegree:
==============================================================================
89. Output Degree Partition of N48 (22305)
==============================================================================
Dimension: 22305
The lowest value: 0
The highest value: 2
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 4439 19.9014 4439 19.9014 22
1 17850 80.0269 22289 99.9283 1
2 16 0.0717 22305 100.0000 636
----------------------------------------------------------------
Sum 22305 100.0000
2-mode partition
==============================================================================
91. 2-Mode partition of N48 (22305)
==============================================================================
Dimension: 22305
The lowest value: 1
The highest value: 2
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
1 18169 81.4571 18169 81.4571 1
2 4136 18.5429 22305 100.0000 18170
----------------------------------------------------------------
Sum 22305 100.0000
Binarize partition [1]
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 4136 18.5429 4136 18.5429 18170
1 18169 81.4571 22305 100.0000 1
----------------------------------------------------------------
Sum 22305 100.0000
Choose Outdegree partition
Choose Binarized 2-mode partition
Partitions - Extract partition - 2nd from 1st
==============================================================================
97. Extracting from C89 vertices determined by C96 [1] (18169)
==============================================================================
Dimension: 18169
The lowest value: 0
The highest value: 2
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 303 1.6677 303 1.6677 22
1 17850 98.2443 18153 99.9119 1
2 16 0.0881 18169 100.0000 636
----------------------------------------------------------------
Sum 18169 100.0000
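What "Partitions - Extract partition - 2nd from 1st" does can be sketched on toy lists (an illustration, not the Pajek implementation): keep the values of the first partition only at the positions where the second partition falls in the selected cluster.

```python
def extract_second_from_first(first, second, clusters={1}):
    """Keep first[i] wherever second[i] is in the selected clusters."""
    return [a for a, b in zip(first, second) if b in clusters]

outdeg  = [2, 1, 0, 1, 1]   # toy outdegree partition on all vertices
is_work = [1, 1, 1, 0, 0]   # toy binarized 2-mode partition (1 = work)
extract_second_from_first(outdeg, is_work)   # -> [2, 1, 0]
```

This is how the 22305-vertex outdegree partition is reduced to the 18169 works only.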
Binarize this partition [1-*]
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 303 1.6677 303 1.6677 22
1 17866 98.3323 18169 100.0000 1
----------------------------------------------------------------
Sum 18169 100.0000
Choose DC1.clu
Choose Binarized partition
Partitions - Extract partition - 2nd from 1st
==============================================================================
99. Extracting from C34 vertices determined by C98 [1] (17866)
==============================================================================
Dimension: 17866
The lowest value: 0
The highest value: 2019
Frequency distribution of cluster values:
Cluster Freq Freq% CumFreq CumFreq% Representative
----------------------------------------------------------------
0 15 0.0840 15 0.0840 125
1959 1 0.0056 16 0.0896 431
1979 1 0.0056 17 0.0952 295
1983 1 0.0056 18 0.1008 16236
1988 1 0.0056 19 0.1063 820
1989 1 0.0056 20 0.1119 273
1991 1 0.0056 21 0.1175 335
1993 5 0.0280 26 0.1455 831
1994 3 0.0168 29 0.1623 7965
1995 3 0.0168 32 0.1791 861
1996 5 0.0280 37 0.2071 9756
1997 13 0.0728 50 0.2799 11903
1998 6 0.0336 56 0.3134 1907
1999 12 0.0672 68 0.3806 1367
2000 15 0.0840 83 0.4646 159
2001 13 0.0728 96 0.5373 343
2002 25 0.1399 121 0.6773 578
2003 32 0.1791 153 0.8564 2529
2004 53 0.2967 206 1.1530 49
2005 76 0.4254 282 1.5784 19
2006 131 0.7332 413 2.3117 4
2007 206 1.1530 619 3.4647 3
2008 345 1.9310 964 5.3957 73
2009 445 2.4908 1409 7.8865 110
2010 655 3.6662 2064 11.5527 118
2011 792 4.4330 2856 15.9857 32
2012 1022 5.7204 3878 21.7060 44
2013 1395 7.8081 5273 29.5142 31
2014 1809 10.1254 7082 39.6395 11
2015 2580 14.4408 9662 54.0804 15
2016 3191 17.8607 12853 71.9411 6
2017 3586 20.0716 16439 92.0128 2
2018 1106 6.1905 17545 98.2033 1
2019 321 1.7967 17866 100.0000 16237
----------------------------------------------------------------
Sum 17866 100.0000
###############################
How to
We use the temporal quantities approach. Download the Python files from the GitHub Nets repository (more about it).
Open Python (3.6 or 3.7) in IDLE.
Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>>
Then File -- Open -- choose start.py -- Run module.
==== RESTART: C:\Mail.Ru Cloud\ANR HSE\ANR Projects\Green IT\WK\start.py ====
>>>
(!) The Charts folder should include the special files (copied from the other analysis).
Analysis
In the directory we need to provide WJsr+.net and DC1.clu.
start.py - open with IDLE - Run module.
Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>>
RESTART: C:\Mail.Ru Cloud\ANR HSE\ANR Projects\Mixed methods\Analysis\Journals\Temporal\start.py
>>>
Prepare MakeTime.py
cdir = 'C:/Mail.Ru Cloud/ANR HSE/ANR Projects/Mixed methods/Analysis/Journals/Temporal/Charts'
ddir = "C:/Mail.Ru Cloud/ANR HSE/ANR Projects/Mixed methods/Analysis/Journals/Temporal"
wdir = "C:/Mail.Ru Cloud/ANR HSE/ANR Projects/Mixed methods/Analysis/Journals/Temporal"
gdir = 'C:/Nets'
import sys, os, re, datetime, json
sys.path = [gdir]+sys.path; os.chdir(wdir)
from TQ import *
from Nets import Network as N
net = ddir+"/WJsr+.net"
clu = ddir+"/DC1+.clu"
t1 = datetime.datetime.now(); print("started: ",t1.ctime(),"\n")
WKc = N.twoMode2netJSON(clu,net,'WJcum.json',instant=False)
t2 = datetime.datetime.now(); print("\nconverted to cumulative TN: ",t2.ctime(),"\ntime used: ", t2-t1)
WKi = N.twoMode2netJSON(clu,net,'WJins.json',instant=True)
t3 = datetime.datetime.now(); print("\nconverted to instantaneous TN: ",t3.ctime(),"\ntime used: ", t3-t2)
In start.py: File - Open file - MakeTime.py - Run module.
First results, network with *****
>>>
RESTART: C:\Mail.Ru Cloud\ANR HSE\ANR Projects\Mixed methods\Analysis\Journals\Temporal\MakeTimeWJ.py
started: Tue Aug 13 16:51:25 2019
converted to cumulative TN: Tue Aug 13 16:51:41 2019
time used: 0:00:15.285279
converted to instantaneous TN: Tue Aug 13 16:51:55 2019
time used: 0:00:14.895942
>>>
New results:
started: Wed Aug 14 09:37:31 2019
converted to cumulative TN: Wed Aug 14 09:37:43 2019
time used: 0:00:12.064208
converted to instantaneous TN: Wed Aug 14 09:37:57 2019
time used: 0:00:13.348083
We created WJcum.json and WJins.json (the first versions, from the older Temporal run, are in the folder Temporal - Old).
Analysis
Distributions
>>> net = wdir+"/WJins.json"
>>> net
'C:/Mail.Ru Cloud/ANR HSE/ANR Projects/Mixed methods/Analysis/Journals/Temporal/WJins.json'
>>> WJi = N.loadNetJSON(net)
>>> J = list(WJi.nodesMode(2))
>>> len(J)
4136
>>> wInDeg = [ [a,WJi._nodes[a][3]['lab'],WJi.TQnetInSum(a)] for a in J ]
>>> Tot = [(e[0], TQ.total(e[2])) for e in wInDeg ]
>>> for i in range(20): print (i, Tot[i][1],WJi._nodes[Tot[i][0]][3]['lab'])
0 10 J ASSOC NURSE AIDS C
1 9 HEALTH COMMUN
2 28 AIDS CARE
3 21 AM BEHAV SCI
4 56 QUAL HEALTH RES
5 32 COMPUT HUM BEHAV
6 93 SOC SCI MED
7 71 INT J QUAL METH
8 41 AIDS BEHAV
9 13 J HEALTH CARE POOR U
10 3 HEALTH PSYCHOL
11 2 BEHAV MED
12 17 AIDS PATIENT CARE ST
13 8 JAIDS-J ACQ IMM DEF
14 6 J BEHAV MED
15 3 SOUTH MED J
16 121 J ADV NURS
17 9 J RURAL HEALTH
18 1 SOC NETWORKS
19 2 AIDS
>>> def takeSecond(elem): return elem[1]
>>> Tot.sort(key=takeSecond, reverse=True)
>>> for i in range(20): print (i, Tot[i][1],WJi._nodes[Tot[i][0]][3]['lab'])
0 293 BMJ OPEN
1 291 J MIX METHOD RES
2 245 BMC HEALTH SERV RES
3 209 BMC PUBLIC HEALTH
4 208 PLOS ONE
5 166 IMPLEMENT SCI
6 121 J ADV NURS
7 97 J CLIN NURS
8 93 SOC SCI MED
9 87 NURS EDUC TODAY
10 84 TRIALS
11 83 BMC MED EDUC
12 76 PROCD SOC BEHV
13 72 GERONTOLOGIST
14 72 CHILD YOUTH SERV REV
15 71 INT J QUAL METH
16 60 INT J NURS STUD
17 59 BMC PREGNANCY CHILDB
18 57 J INTERPROF CARE
19 56 QUAL HEALTH RES
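TQ.total, used to build Tot above, sums a temporal quantity over its lifetime: each value counts once per time unit of its interval. A reimplementation for illustration only (not the library code), checked against the J MIX METHOD RES figure of 291 from the sorted listing:

```python
def tq_total(tq):
    """Total of a temporal quantity given as (start, end, value) triples."""
    return sum(v * (f - s) for s, f, v in tq)

# The temporal in-degree of J MIX METHOD RES from the session:
jmmr = [(0, 2007, 0), (2007, 2008, 20), (2008, 2009, 21), (2009, 2010, 23),
        (2010, 2012, 20), (2012, 2013, 26), (2013, 2014, 22), (2014, 2015, 24),
        (2015, 2016, 20), (2016, 2017, 25), (2017, 2018, 30), (2018, 2019, 19),
        (2019, 2020, 21)]
tq_total(jmmr)   # -> 291, matching the ranking above
```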
Produce pictures with the distributions of the number of articles for selected journals:
>>> def picture(i,c="orange"):
tq = wInDeg[i][2]; tit = wInDeg[i][1]
N.TQshow(tq,cdir,TQmax,Tmin,Tmax,w,h,tit,fill=c)
>>> ji = { WJi._nodes[a][3]['lab']:i for i,a in enumerate (J)}
>>> ji['J MIX METHOD RES']
22
>>> tq = wInDeg[22][2]
>>> TQ.TqSummary(tq)
(0, 2020, 0, 30)
>>> TQmax = 30; Tmin = 2000; Tmax = 2020; w = 800; h = 500
>>> picture(ji['J MIX METHOD RES'],c="orange")
>>> wInDeg[22]
[17889, 'J MIX METHOD RES', [(0, 2007, 0), (2007, 2008, 20), (2008, 2009, 21), (2009, 2010, 23), (2010, 2012, 20), (2012, 2013, 26), (2013, 2014, 22), (2014, 2015, 24), (2015, 2016, 20), (2016, 2017, 25), (2017, 2018, 30), (2018, 2019, 19), (2019, 2020, 21)]]
Works on MMR per year
>>> S = []
>>> for e in WJi.links(): S = TQ.sum(S,WJi.getLink(e,'tq'))
>>> S
[(0, 1, 15), (1959, 1960, 1), (1979, 1980, 1), (1983, 1984, 1), (1988, 1990, 1), (1991, 1992, 1), (1993, 1994, 5), (1994, 1996, 3), (1996, 1997, 5), (1997, 1998, 13), (1998, 1999, 6), (1999, 2000, 12), (2000, 2001, 15), (2001, 2002, 13), (2002, 2003, 25), (2003, 2004, 32), (2004, 2005, 53), (2005, 2006, 76), (2006, 2007, 131), (2007, 2008, 206), (2008, 2009, 345), (2009, 2010, 446), (2010, 2011, 657), (2011, 2012, 793), (2012, 2013, 1022), (2013, 2014, 1399), (2014, 2015, 1810), (2015, 2016, 2584), (2016, 2017, 3191), (2017, 2018, 3588), (2018, 2019, 1106), (2019, 2020, 321)]
>>> TQ.TqSummary(S)
(0, 2020, 1, 3588)
>>> TQmax = 4000; Tmin = 1990; Tmax = 2020; w = 600; h = 180
>>> tit = 'Works on MMR per year'
>>> N.TQshow(S,cdir,TQmax,Tmin,Tmax,w,h,tit,fill='orange')
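TQ.sum, as used to build S above, appears to add two temporal quantities pointwise, dropping intervals where the sum is zero and merging adjacent intervals with equal values. A self-contained sketch of that behaviour (an illustration, not the library code):

```python
def tq_sum(a, b):
    """Pointwise sum of two temporal quantities ((start, end, value) triples)."""
    cuts = sorted({t for s, f, _ in a + b for t in (s, f)})
    def val(tq, t):
        return sum(v for s, f, v in tq if s <= t < f)
    out = []
    for s, f in zip(cuts, cuts[1:]):
        v = val(a, s) + val(b, s)
        if v == 0:
            continue                        # zero intervals are dropped
        if out and out[-1][1] == s and out[-1][2] == v:
            out[-1] = (out[-1][0], f, v)    # merge adjacent equal values
        else:
            out.append((s, f, v))
    return out

tq_sum([(2010, 2012, 2)], [(2011, 2013, 3)])
# -> [(2010, 2011, 2), (2011, 2012, 5), (2012, 2013, 3)]
```

Folding this over all link quantities, as the loop above does with S, yields the per-year totals.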
Different journals per year
>>> J[0]
17867
>>> L = [ [j-17867,WJi._nodes[j][3]['lab'], WJi.TQnetInSum(j)] for j in J]
>>> L[0]
[0, 'J ASSOC NURSE AIDS C', [(0, 2009, 0), (2009, 2011, 1), (2011, 2013, 0), (2013, 2014, 2), (2014, 2016, 1), (2016, 2017, 3), (2017, 2018, 0), (2018, 2019, 1), (2019, 2020, 0)]]
>>> #still we have number of articles
>>> Lb = [ TQ.binary(t[2]) for t in L ]
>>> Lb[0]
[(2009, 2011, 1), (2013, 2017, 1), (2018, 2019, 1)]
>>> # we have only cases when the journal is active
>>> Y = []
>>> for t in Lb: Y = TQ.sum(Y,t)
>>> Y
[(0, 1, 11), (1959, 1960, 1), (1979, 1980, 1), (1983, 1984, 1), (1988, 1990, 1), (1991, 1992, 1), (1993, 1994, 5), (1994, 1996, 3), (1996, 1997, 5), (1997, 1998, 11), (1998, 1999, 6), (1999, 2000, 8), (2000, 2001, 14), (2001, 2002, 12), (2002, 2003, 22), (2003, 2004, 30), (2004, 2005, 47), (2005, 2006, 65), (2006, 2007, 114), (2007, 2008, 151), (2008, 2009, 245), (2009, 2010, 327), (2010, 2011, 428), (2011, 2012, 534), (2012, 2013, 622), (2013, 2014, 792), (2014, 2015, 929), (2015, 2016, 1376), (2016, 2017, 1626), (2017, 2018, 1744), (2018, 2019, 724), (2019, 2020, 235)]
>>> TQ.TqSummary(Y)
(0, 2020, 1, 1744)
>>> tit = 'Journals per year'
>>> TQmax = 2000; Tmin = 1990; Tmax = 2020; w = 600; h = 180
>>> N.TQshow(Y,cdir,TQmax,Tmin,Tmax,w,h,tit,fill='orange')
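TQ.binary, used for Lb above, replaces every nonzero value with 1, drops zero intervals, and merges adjacent active runs, as the Lb[0] output shows. A sketch, assuming the triples are sorted by time (an illustration, not the library code):

```python
def tq_binary(tq):
    """1 wherever the value is nonzero; zero intervals dropped, runs merged."""
    out = []
    for s, f, v in tq:
        if v == 0:
            continue
        if out and out[-1][1] == s:
            out[-1] = (out[-1][0], f, 1)   # extend the current active run
        else:
            out.append((s, f, 1))
    return out

# L[0] from the session (J ASSOC NURSE AIDS C):
j0 = [(0, 2009, 0), (2009, 2011, 1), (2011, 2013, 0), (2013, 2014, 2),
      (2014, 2016, 1), (2016, 2017, 3), (2017, 2018, 0), (2018, 2019, 1),
      (2019, 2020, 0)]
tq_binary(j0)   # -> [(2009, 2011, 1), (2013, 2017, 1), (2018, 2019, 1)], as Lb[0] above
```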
Average number of papers per journal
>>> Av = TQ.proportion(S,Y)
>>> Av
[(0, 1, 1.3636363636363635), (1959, 1960, 1.0), (1979, 1980, 1.0), (1983, 1984, 1.0), (1988, 1990, 1.0), (1991, 1992, 1.0), (1993, 1997, 1.0), (1997, 1998, 1.1818181818181819), (1998, 1999, 1.0), (1999, 2000, 1.5), (2000, 2001, 1.0714285714285714), (2001, 2002, 1.0833333333333333), (2002, 2003, 1.1363636363636365), (2003, 2004, 1.0666666666666667), (2004, 2005, 1.127659574468085), (2005, 2006, 1.1692307692307693), (2006, 2007, 1.1491228070175439), (2007, 2008, 1.3642384105960266), (2008, 2009, 1.4081632653061225), (2009, 2010, 1.363914373088685), (2010, 2011, 1.5350467289719627), (2011, 2012, 1.4850187265917603), (2012, 2013, 1.6430868167202572), (2013, 2014, 1.7664141414141414), (2014, 2015, 1.9483315392895586), (2015, 2016, 1.877906976744186), (2016, 2017, 1.9624846248462484), (2017, 2018, 2.0573394495412844), (2018, 2019, 1.5276243093922652), (2019, 2020, 1.3659574468085107)]
>>> TQ.TqSummary(Av)
(0, 2020, 1.0, 2.0573394495412844)
>>> TQmax = 3; Tmin = 1990; Tmax = 2020; w = 600; h = 180
>>> tit = 'Average number of papers on MMR per journal'
>>> N.TQshow(Av,cdir,TQmax,Tmin,Tmax,w,h,tit,fill='orange')
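TQ.proportion, judging from the Av output above, divides two temporal quantities pointwise on their common intervals and skips intervals where the denominator is zero. An illustrative reimplementation (not the library code):

```python
def tq_proportion(a, b):
    """Pointwise ratio a/b of two temporal quantities; defined where b is nonzero."""
    cuts = sorted({t for s, f, _ in a + b for t in (s, f)})
    def val(tq, t):
        return sum(v for s, f, v in tq if s <= t < f)
    out = []
    for s, f in zip(cuts, cuts[1:]):
        num, den = val(a, s), val(b, s)
        if den == 0:
            continue                        # undefined where b is zero
        r = num / den
        if out and out[-1][1] == s and out[-1][2] == r:
            out[-1] = (out[-1][0], f, r)    # merge adjacent equal ratios
        else:
            out.append((s, f, r))
    return out

tq_proportion([(0, 1, 15)], [(0, 1, 11)])   # -> [(0, 1, 1.3636...)], as in Av above
```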
Proportions for selected journals
>>> TQ.maxmin()
>>> TQ.report()
semiring = maxmin
add = max
mult = min
sZero = -inf
sOne = inf
sN = []
sE = [(1, inf, inf)]
rPF = 2
>>> # Max number of papers on MMR published in any single journal per year
>>> n = len(J)
>>> Lm = []
>>> for t in range(n): Lm = TQ.sum(Lm,L[t][2])
>>> Lm
[(0, 1, 5), (1, 1959, 0), (1959, 1960, 1), (1960, 1979, 0), (1979, 1980, 1), (1980, 1983, 0), (1983, 1984, 1), (1984, 1988, 0), (1988, 1990, 1), (1990, 1991, 0), (1991, 1992, 1), (1992, 1993, 0), (1993, 1997, 1), (1997, 1998, 2), (1998, 1999, 1), (1999, 2000, 3), (2000, 2004, 2), (2004, 2005, 3), (2005, 2007, 7), (2007, 2008, 20), (2008, 2009, 21), (2009, 2010, 23), (2010, 2012, 20), (2012, 2013, 26), (2013, 2014, 22), (2014, 2015, 39), (2015, 2016, 48), (2016, 2017, 69), (2017, 2018, 109), (2018, 2019, 19), (2019, 2020, 21)]
>>> TQ.combinatorial()
>>> TQ.report()
semiring = combinatorial
add = add
mult = mul
sZero = 0
sOne = 1
sN = []
sE = [(1, inf, 1)]
rPF = 2
>>> #Normalize Lm
>>> Lmn = TQ.prodConst(Lm,0.01)
>>> Lmn
[(0, 1, 0.05), (1, 1959, 0.0), (1959, 1960, 0.01), (1960, 1979, 0.0), (1979, 1980, 0.01), (1980, 1983, 0.0), (1983, 1984, 0.01), (1984, 1988, 0.0), (1988, 1990, 0.01), (1990, 1991, 0.0), (1991, 1992, 0.01), (1992, 1993, 0.0), (1993, 1997, 0.01), (1997, 1998, 0.02), (1998, 1999, 0.01), (1999, 2000, 0.03), (2000, 2004, 0.02), (2004, 2005, 0.03), (2005, 2007, 0.07), (2007, 2008, 0.2), (2008, 2009, 0.21), (2009, 2010, 0.23), (2010, 2012, 0.2), (2012, 2013, 0.26), (2013, 2014, 0.22), (2014, 2015, 0.39), (2015, 2016, 0.48), (2016, 2017, 0.6900000000000001), (2017, 2018, 1.09), (2018, 2019, 0.19), (2019, 2020, 0.21)]
>>> Lmn = TQ.cutGT(Lmn,0) # cut zeros
>>> Lmn
[(0, 1, 0.05), (1959, 1960, 0.01), (1979, 1980, 0.01), (1983, 1984, 0.01), (1988, 1990, 0.01), (1991, 1992, 0.01), (1993, 1997, 0.01), (1997, 1998, 0.02), (1998, 1999, 0.01), (1999, 2000, 0.03), (2000, 2004, 0.02), (2004, 2005, 0.03), (2005, 2007, 0.07), (2007, 2008, 0.2), (2008, 2009, 0.21), (2009, 2010, 0.23), (2010, 2012, 0.2), (2012, 2013, 0.26), (2013, 2014, 0.22), (2014, 2015, 0.39), (2015, 2016, 0.48), (2016, 2017, 0.6900000000000001), (2017, 2018, 1.09), (2018, 2019, 0.19), (2019, 2020, 0.21)]
>>> #Proportion (main result)
>>> LP = [ TQ.proportion(L[t][2],Lmn) for t in range(n)]
>>> # L - number of articles in journals
>>> LP[0]
[(0, 1, 0.0), (1959, 1960, 0.0), (1979, 1980, 0.0), (1983, 1984, 0.0), (1988, 1990, 0.0), (1991, 1992, 0.0), (1993, 2009, 0.0), (2009, 2010, 4.3478260869565215), (2010, 2011, 5.0), (2011, 2013, 0.0), (2013, 2014, 9.090909090909092), (2014, 2015, 2.564102564102564), (2015, 2016, 2.0833333333333335), (2016, 2017, 4.3478260869565215), (2017, 2018, 0.0), (2018, 2019, 5.2631578947368425), (2019, 2020, 0.0)]
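TQ.prodConst and TQ.cutGT, as used for Lmn above, scale every value by a constant and then drop intervals whose value does not exceed a threshold. A sketch (the zero cut reproduces the session output):

```python
def tq_prod_const(tq, c):
    """Multiply every value of a temporal quantity by a constant."""
    return [(s, f, v * c) for s, f, v in tq]

def tq_cut_gt(tq, w):
    """Keep only the intervals whose value is greater than w."""
    return [(s, f, v) for s, f, v in tq if v > w]

# First triples of Lm from the session:
lm = [(0, 1, 5), (1, 1959, 0), (1959, 1960, 1)]
tq_cut_gt(tq_prod_const(lm, 0.01), 0)
# -> [(0, 1, 0.05), (1959, 1960, 0.01)]; the zero interval is cut, as above
```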
Pictures
>>> ji = { WJi._nodes[j][3]['lab']:i for i,j in enumerate (J)}
>>> def picture(i,c="orange"):
tq = LP[i]; tit = L[i][1]
N.TQshow(tq,cdir,TQmax,Tmin,Tmax,w,h,tit,fill=c)
>>> TQmax = 100; Tmin = 1990; Tmax = 2020; w = 600; h = 180
>>> picture(ji['BMJ OPEN'],c="orange")
>>> ji['BMJ OPEN']
2002
>>> L[2002]
[2002, 'BMJ OPEN', [(0, 2011, 0), (2011, 2012, 3), (2012, 2013, 13), (2013, 2014, 21), (2014, 2015, 20), (2015, 2016, 48), (2016, 2017, 69), (2017, 2018, 109), (2018, 2019, 7), (2019, 2020, 3)]]
>>> picture(ji['J MIX METHOD RES'],c="orange")
>>> ji['J MIX METHOD RES']
22
>>> L[22]
[22, 'J MIX METHOD RES', [(0, 2007, 0), (2007, 2008, 20), (2008, 2009, 21), (2009, 2010, 23), (2010, 2012, 20), (2012, 2013, 26), (2013, 2014, 22), (2014, 2015, 24), (2015, 2016, 20), (2016, 2017, 25), (2017, 2018, 30), (2018, 2019, 19), (2019, 2020, 21)]]
>>> picture(ji['BMC HEALTH SERV RES'],c="orange")
>>> picture(ji['BMC PUBLIC HEALTH'],c="orange")
>>> picture(ji['PLOS ONE'],c="orange")
>>> picture(ji['IMPLEMENT SCI'],c="orange")
>>> picture(ji['J ADV NURS'],c="orange")
>>> ji['J ADV NURS']
16
>>> L[16]
[16, 'J ADV NURS', [(0, 1994, 0), (1994, 1995, 1), (1995, 1997, 0), (1997, 1999, 1), (1999, 2001, 0), (2001, 2003, 2), (2003, 2004, 1), (2004, 2005, 3), (2005, 2006, 1), (2006, 2007, 0), (2007, 2008, 3), (2008, 2010, 4), (2010, 2011, 5), (2011, 2012, 9), (2012, 2013, 8), (2013, 2014, 13), (2014, 2015, 8), (2015, 2016, 15), (2016, 2017, 10), (2017, 2018, 22), (2018, 2019, 6), (2019, 2020, 2)]]
>>> # the level of publishing papers on MMR for this journal is growing but not so fast
>>> picture(ji['J CLIN NURS'],c="orange")
>>> picture(ji['SOC SCI MED'],c="orange")
>>> picture(ji['NURS EDUC TODAY'],c="orange")
>>> picture(ji['TRIALS'],c="orange")
>>> picture(ji['BMC MED EDUC'],c="orange")
>>>