[mlpack] GSoC 2017 Discussion

Yao-Wen Mao leomaoyw at gmail.com
Tue Feb 28 08:47:20 EST 2017


Hello everyone,

I am Yaowen, a fourth-year undergraduate student studying Electrical
Engineering at National Taiwan University. Programming is my main interest:
I like to investigate the frameworks and features of programming languages
and build fun projects with them. Currently I am doing research in the area
of Deep Reinforcement Learning. While looking for a cool project for GSoC
2017, the "Reinforcement Learning" project on the mlpack GitHub wiki really
caught my attention. I have read many recent papers on reinforcement
learning, and I work on models such as DQN and A3C with my lab partners
using TensorFlow. So I think this project is really cool and matches
exactly what I am working on. I also wonder whether mlpack can achieve
better performance than TensorFlow and other frameworks on a multi-core
CPU.

I am now diving into the mlpack codebase and have a question. Many recent
papers describe complex models with multiple inputs and outputs, and
training them may involve multiple optimization stages. In each stage, the
algorithm trains a different set of parameters while the remaining
parameters are treated as constants. Meanwhile, some layers in these models
may be recurrent neural networks, whose state needs to be reset from time
to time. Since I am not yet familiar with the codebase and the ANN-related
APIs, I am curious how to handle all of this. So far I have only found that
one can call Forward and Backward to implement such complex training
schemes manually; am I right that the high-level FFN and RNN APIs cannot
handle these cases?
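To make the multi-stage scheme concrete, here is a minimal,
framework-agnostic sketch in plain NumPy (not mlpack's API; the toy model,
parameter names, and numbers are all made up for illustration). Each stage
runs gradient descent on one parameter while the other is frozen, i.e.
treated as a constant:

```python
import numpy as np

# A minimal, framework-agnostic sketch (plain NumPy, NOT mlpack's API)
# of multi-stage optimization: each stage updates one parameter set by
# gradient descent while the others are treated as constants.

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0              # toy target: slope 3, intercept 1

w, b = 0.1, 0.0                # w is trained in stage 1, b in stage 2
lr = 0.1

# Stage 1: optimize w only; b is frozen (a "constant").
for _ in range(200):
    grad_w = np.mean(2.0 * (w * x + b - y) * x)
    w -= lr * grad_w

# Stage 2: optimize b only; w is now frozen.
for _ in range(200):
    grad_b = np.mean(2.0 * (w * x + b - y))
    b -= lr * grad_b

print(w, b)                    # close to the true slope and intercept
```

The question is whether this kind of per-stage parameter freezing (and,
for recurrent layers, explicit state resets between sequences) can be
expressed through the high-level API, or only by driving Forward and
Backward by hand.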
By the way, as already mentioned by others, adding the ELU activation
function is really a good idea. :)

Thanks,
Yaowen

