Staging URLs 2019-05-08

How can adversarial networks (GANs) generate higher-quality text? LeakGAN speaks from experience: "In an adversarial game, you may need a spy!" https://www.leiphone.com/news/201709/QRJPQr3jCOtY7ncQ.html

Python implementation of the Ornstein-Uhlenbeck (OU) process used in DDPG: https://blog.csdn.net/u013745804/article/details/78461253
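For quick reference, a minimal NumPy sketch of the OU noise process as it is typically used for DDPG exploration; the parameter defaults (theta=0.15, sigma=0.2) are common conventions, not necessarily the linked post's code:

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise for DDPG."""

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu = mu * np.ones(size)
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.reset()

    def reset(self):
        # Restart from the long-run mean at the beginning of each episode.
        self.x = self.mu.copy()

    def sample(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape))
        self.x = self.x + dx
        return self.x

# Usage: add the correlated noise to the deterministic actor's action each step.
noise = OUNoise(size=2)
print([noise.sample() for _ in range(3)])
```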

In depth: how do machines imitate the way humans learn? https://www.leiphone.com/news/201609/dPooXkjsRTp76YjG.html

A brief look at the techniques behind chatbots, with partial Python implementations of POS tagging and keyword extraction: https://blog.csdn.net/smilejiasmile/article/details/80967630

Querying a knowledge base in natural language: how constituency parse trees are mapped to queries: https://blog.csdn.net/c313450619/article/details/54408191

A summary of knowledge graph question answering and how query graphs are generated: http://octopuscoder.github.io/2018/02/04/%E7%9F%A5%E8%AF%86%E5%9B%BE%E8%B0%B1%E9%97%AE%E7%AD%94%E6%80%BB%E7%BB%93/

Building Chinese parse trees in NLTK with the Stanford parser: https://blog.csdn.net/baiyi_canggou/article/details/59056759

Notes on "Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base": https://zhuanlan.zhihu.com/p/26705310

Wang Haofen (王昊奋) of Gowild (狗尾草科技): when knowledge graphs meet chatbots: https://www.jqr.com/news/005041

[Reading notes] Knowledge-base question answering: generating query graphs for semantic parsing: https://cloud.tencent.com/developer/article/1086383

Reinforcement learning to rank (RLTR) experiments based on MDPs and policy gradients: https://blog.csdn.net/Aaronji1222/article/details/79587987 GitHub: https://github.com/AaronJi/RL/tree/master/python/MDPrank

icm https://github.com/takuseno/icm

Reinforcement learning without an external reward (curiosity-driven exploration by self-supervised prediction): https://data-sci.info/2017/05/16/%E4%B8%8D%E9%9C%80%E8%A6%81%E5%A4%96%E9%83%A8reward%E7%9A%84%E5%A2%9E%E5%BC%B7%E5%BC%8F%E5%AD%B8%E7%BF%92-curiosity-driven-exploration-self-supervised-prediction/

Academia | Curiosity-driven learning makes reinforcement learning simpler: http://dy.163.com/v2/article/detail/DVAI7DNP0511DPVD.html

A free course in deep reinforcement learning, from beginner to expert: https://simoninithomas.github.io/Deep_reinforcement_learning_Course/

noreward-rl https://github.com/pathak22/noreward-rl/tree/master/src

Includes an RND (Random Network Distillation) implementation: https://github.com/simoninithomas/Deep_reinforcement_learning_Course

AemaH Blog https://aemah.github.io/page2/ https://aemah.github.io/2018/07/28/GAN_tf/

Some reinforcement learning interview questions I have encountered: https://zhuanlan.zhihu.com/p/52143798

NumPy implementation of a beam search decoder for NLP text generation: https://www.cnblogs.com/data2value/p/9335470.html
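A compact NumPy beam search over a matrix of per-step token probabilities, in the spirit of that post; the (steps, vocab) input shape and log-probability scoring are the usual tutorial setup, not necessarily the article's exact code:

```python
import numpy as np

def beam_search(probs, k=3):
    """probs: (steps, vocab) per-step token probabilities.
    Returns the k best token sequences with cumulative log-probabilities."""
    beams = [([], 0.0)]  # (token sequence, summed log-prob)
    for step_probs in probs:
        candidates = []
        for seq, score in beams:
            for token, p in enumerate(step_probs):
                candidates.append((seq + [token], score + np.log(p + 1e-12)))
        # Keep only the k highest-scoring partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams

probs = np.random.rand(5, 10)
probs /= probs.sum(axis=1, keepdims=True)   # normalize each step to a distribution
for seq, score in beam_search(probs):
    print(seq, round(score, 3))
```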

Understanding how vanishing and exploding gradients arise: https://www.cnblogs.com/pinking/p/9418280.html https://blog.csdn.net/qq_25737169/article/details/78847691
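The core point of both posts: the backpropagated gradient is a product of per-layer factors, so factors consistently below 1 shrink it exponentially with depth while factors above 1 blow it up. A tiny NumPy illustration (the layer count and factor values are arbitrary, chosen only to show the trend):

```python
import numpy as np

layers = 50
shrinking_factor = 0.25   # roughly |w * sigmoid'(z)| when the sigmoid saturates
growing_factor = 1.5      # a weight factor slightly above 1

vanishing = np.prod(np.full(layers, shrinking_factor))
exploding = np.prod(np.full(layers, growing_factor))

print(f"product of {layers} factors < 1: {vanishing:.3e}")  # ~0: gradient vanishes
print(f"product of {layers} factors > 1: {exploding:.3e}")  # huge: gradient explodes
```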

ConvLab: Multi-Domain End-to-End Dialog System Platform https://www.groundai.com/project/convlab-multi-domain-end-to-end-dialog-system-platform/

ConvLab: Multi-Domain End-to-End Dialog System Platform (link into the paper's bibliography): https://www.groundai.com/project/convlab-multi-domain-end-to-end-dialog-system-platform/#bib.bib10

SLM-Lab: https://github.com/kengz/SLM-Lab

《Sequicity:Simplifying .........》 https://blog.csdn.net/weixin_40533355/article/details/82997788

multi-domain-belief-tracking https://github.com/osmanio2/multi-domain-belief-tracking/blob/master/util.py

GLMP: Global-to-Local Memory Pointer Networks for Task-Oriented Dialogue: https://zhuanlan.zhihu.com/p/57535074

Reading notes on "Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems": https://zhuanlan.zhihu.com/p/44110616

Mem2Seq: an end-to-end task-oriented dialog system that effectively incorporates knowledge bases: https://zhuanlan.zhihu.com/p/56223255

Reading notes on "Global-to-Local Memory Pointer Networks for Task-Oriented Dialogue": https://zhuanlan.zhihu.com/p/54327404

Hands-on notes series | essentials of task-oriented dialogue systems: https://zhuanlan.zhihu.com/p/57731830

GLMP github: https://github.com/jasonwu0731/GLMP

sequicity github : https://github.com/WING-NUS/sequicity/blob/master/model.py

Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing https://zhuanlan.zhihu.com/p/58996608

NLP-Models-Tensorflow, a fairly comprehensive collection of implementations. GitHub: https://github.com/huseinzol05/NLP-Models-Tensorflow

Redis cache design and keeping cached data in sync with the database, using Python + MySQL: three update patterns that address cache penetration: https://www.cnblogs.com/shouke/p/10157756.html
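A hedged sketch of the cache-aside pattern with short-lived null caching, one of the update styles such posts discuss for mitigating cache penetration; the `query_user_from_mysql` helper and the key format are placeholders, not taken from the linked article:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def query_user_from_mysql(user_id):
    """Placeholder for the real MySQL lookup; returns a dict or None."""
    return None

def get_user(user_id, ttl=300, null_ttl=60):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        # A cached empty string marks a known-missing row (anti-penetration).
        return json.loads(cached) if cached != b"" else None
    row = query_user_from_mysql(user_id)
    if row is None:
        # Cache the miss briefly so repeated lookups don't hammer MySQL.
        r.setex(key, null_ttl, "")
    else:
        r.setex(key, ttl, json.dumps(row))
    return row
```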

An extremely detailed explanation of Flask path parameters and request parameters: https://www.jianshu.com/p/54057b4f0437

Flask: passing parameters: https://www.cnblogs.com/shangpolu/p/7106922.html
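A minimal Flask sketch covering the two styles those posts explain: path parameters declared with converters in the route, and query-string parameters read from `request.args` (the routes themselves are illustrative only):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Path parameter: the converter <int:user_id> parses and validates the URL segment.
@app.route("/users/<int:user_id>")
def get_user(user_id):
    return jsonify({"user_id": user_id})

# Query-string parameters: /search?q=flask&page=2
@app.route("/search")
def search():
    q = request.args.get("q", "")
    page = request.args.get("page", 1, type=int)
    return jsonify({"q": q, "page": page})

if __name__ == "__main__":
    app.run(debug=True)
```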

Implementing a flash-sale (seckill) feature with Flask + Redis: https://www.cnblogs.com/rgcLOVEyaya/p/RGC_LOVE_YAYA_836days.html
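A simplified sketch of the usual Redis-backed flash-sale trick: keep stock in Redis and rely on an atomic DECR so concurrent requests cannot oversell. The key name and rollback style are assumptions; a full version like the article's would also persist orders, which is omitted here:

```python
import redis

r = redis.Redis()
STOCK_KEY = "seckill:item:1001:stock"

def init_stock(count=100):
    r.set(STOCK_KEY, count)

def try_buy(user_id):
    # DECR is atomic, so two concurrent requests can never both claim the last unit.
    remaining = r.decr(STOCK_KEY)
    if remaining < 0:
        r.incr(STOCK_KEY)  # roll back the over-decrement
        return False       # sold out
    # A real system would enqueue an order here for asynchronous persistence.
    return True
```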

Highly available Redis (13): using and designing Redis caches: http://www.cnblogs.com/renpingsheng/p/10202914.html

Flask backend practice, part 6: single-table APIs built on Flask and SQLAlchemy: https://blog.csdn.net/qq_22034353/article/details/89043562

Integrating Flask with SQLAlchemy and basic usage: https://blog.csdn.net/qq_22034353/article/details/88840483

Flask backend practice, part 4: wrapping API responses and custom JSON return types: https://blog.csdn.net/qq_22034353/article/details/88758395

Flask backend practice, part 3: API standardization and internationalization: https://blog.csdn.net/qq_22034353/article/details/88701947

Flask backend practice, part 7: using Redis with Flask, data retrieval with cache-first lookup: https://blog.csdn.net/qq_22034353/article/details/89107062

One-shot Learning with Memory-Augmented Neural Networks: https://blog.csdn.net/qq_34562093/article/details/86591983

An overview of model architectures for machine reading comprehension: https://blog.csdn.net/u010995990/article/details/79361029

Thoroughly understanding QANet: https://www.antdlx.com/qanet/

Research progress on applying deep learning to machine reading comprehension: https://www.imooc.com/article/30060?block_id=tuijian_wz

A summary of attention mechanisms in natural language processing: https://blog.csdn.net/hahajinbu/article/details/81940355

How attention is used in machine reading comprehension models: https://blog.csdn.net/qq_36891953/article/details/88012318

A walkthrough of the attention computation (how it works): https://blog.csdn.net/hpulfc/article/details/80826143

Attention Is All You Need (Transformer): an explanation of how the algorithm works: https://www.cnblogs.com/huangyc/p/9813907.html
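What these attention posts walk through reduces to scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A compact NumPy sketch with arbitrary shapes:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> output (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # similarity of each query to each key
    weights = softmax(scores, axis=-1)         # attention distribution over keys
    return weights @ V                         # weighted sum of value vectors

Q, K, V = np.random.randn(2, 8), np.random.randn(5, 8), np.random.randn(5, 16)
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 16)
```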

Trying six Chinese word-segmentation modules: jieba, THULAC, SnowNLP, pynlpir, CoreNLP, pyLTP: http://www.yanglajiao.com/article/u010417185/80680016

EPISODIC CURIOSITY THROUGH REACHABILITY: https://sites.google.com/view/episodic-curiosity

Academia | Curiosity-driven learning makes reinforcement learning simpler: https://cloud.tencent.com/developer/article/1368149

Intrinsic Curiosity Module (ICM): https://pathak22.github.io/noreward-rl/

noreward-rl: https://github.com/pathak22/noreward-rl

Curiosity and procrastination in reinforcement learning: https://www.tensorflowers.cn/t/7318

Simple reinforcement learning with TensorFlow, Part 1: curiosity-driven learning: https://ai.yanxishe.com/page/TextTranslation/1188

How do deep learning machines imitate the way humans learn? An explanation: https://news.huahuo.com/201609/12902.html

AI paper notes: http://aixpaper.com/similar/toward_scalable_neural_dialogue_state_tracking_model

A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems: http://aixpaper.com/view/a_sequencetosequence_model_for_user_simulation_in_spoken_dialogue_systems

Paper roundup | A summary of the semantics- and knowledge-related papers at WWW 2017: http://blog.openkg.cn/论文动态-www2017-的语义和知识相关论文总结/

Guidelines for using localStorage and sessionStorage in Vue: https://yq.aliyun.com/articles/610021

An icon component in Vue: https://www.jianshu.com/p/c2aeb5b29b27

In depth | Applying reinforcement learning to financial portfolio optimization (with code): https://cloud.tencent.com/developer/article/1395754

Resource | A stock market trading environment based on OpenAI Gym: http://www.myzaker.com/article/596460cd1bc8e07d2b00000a/

Exclusive | Building a custom OpenAI Gym reinforcement learning environment for stock trading (with code): https://zhuanlan.zhihu.com/p/62408172
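A bare-bones skeleton of the kind of custom OpenAI Gym trading environment those posts build; the observation window, action set, and mark-to-market reward here are placeholder choices, not the linked article's design:

```python
import gym
import numpy as np
from gym import spaces

class SimpleStockEnv(gym.Env):
    """Toy trading env: actions are hold / buy / sell, observation is a price window."""

    def __init__(self, prices, window=10):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)
        self.window = window
        self.action_space = spaces.Discrete(3)  # 0 hold, 1 buy, 2 sell
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(window,), dtype=np.float32)

    def reset(self):
        self.t = self.window
        self.position = 0  # shares held
        return self.prices[self.t - self.window:self.t]

    def step(self, action):
        price_change = self.prices[self.t] - self.prices[self.t - 1]
        if action == 1:
            self.position += 1
        elif action == 2 and self.position > 0:
            self.position -= 1
        reward = self.position * price_change   # mark-to-market P&L as reward
        self.t += 1
        done = self.t >= len(self.prices)
        obs = self.prices[self.t - self.window:self.t]
        return obs, float(reward), done, {}
```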

[HFT series] Building a high-frequency trading model with reinforcement learning: https://zhuanlan.zhihu.com/p/37825666

https://github.com/ucaiado/rl_trading

stock_market_reinforcement_learning: https://github.com/kh-kim/stock_market_reinforcement_learning

Machine-Learning-and-Reinforcement-Learning-in-Finance, includes a QLBS model implementation: https://github.com/joelowj/Machine-Learning-and-Reinforcement-Learning-in-Finance

gym-trading: https://github.com/hackthemarket/gym-trading

QLBS: Q-Learner in the Black-Scholes(-Merton) world. Abstract and introduction: https://www.zhisci.com/home/Index/abs/id/201700720516.html

The QLBS Q-Learner goes NuQLear: fitted Q iteration, inverse RL, and option portfolios: https://www.zhisci.com/home/Index/abs/id/201700720516.html