[mlpack] Reinforcement Learning at GSOC 2017

Sinjan Chakraborty sinjanc at gmail.com
Tue Feb 28 11:48:23 EST 2017


Hello Marcus,

I have forked mlpack to my GitHub account. Now, if I want to implement the
policy gradient method, I understand that I am supposed to add the
implementation to a folder under mlpack and open a pull request. Am I
supposed to implement the method under mlpack/src/mlpack/methods/?

Or should I create a separate repository under my GitHub profile, implement
the policy gradient there, and send you the link?

Thanks,
Sinjan

On Tue, Feb 28, 2017 at 8:04 PM, Marcus Edel <marcus.edel at fu-berlin.de>
wrote:

> Hello Sinjan,
>
> Over the past year I have been involved with two separate projects in
> machine learning. I have also completed Andrew Ng's online Machine Learning
> course (with certificate), Geoffrey Hinton's Neural Networks for Machine
> Learning course, and Google's deep learning course on Udacity.
>
> I haven't had a chance to put my knowledge of deep learning to any
> practical use, since the projects I worked on were based on clustering
> algorithms and artificial neural networks. That's why I am particularly
> interested in working on this project.
>
>
> Great that you like the project; I think GSoC is a great opportunity to
> work on a project that you really like.
>
> I will start learning reinforcement learning from the Berkeley Deep
> Reinforcement Learning course. <http://rll.berkeley.edu/deeprlcourse/>
>
>
> Sounds good; you can also check out the references given in the project
> description. Each paper also has a number of references that are worth
> checking out.
>
> I would be very grateful if you could point me to anything I should study
> to prepare for this project.
>
>
> To be successful at this project, you should have a good knowledge of
> reinforcement learning; i.e., you should be familiar with the way agents
> are typically built and trained and, certainly, with the individual
> components that you plan to implement.
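>
> For instance (purely as an illustration, and not any existing mlpack API),
> the usual agent/environment structure comes down to a loop of acting,
> observing a reward, and updating the agent, something like:
>
>   // Illustrative sketch of a typical agent/environment training loop.
>   // None of these names come from mlpack; they only show the structure.
>   #include <cstddef>
>   #include <iostream>
>
>   // A toy one-step environment: the agent should learn to pick action 1.
>   struct ToyEnvironment
>   {
>     double Step(const size_t action) { return (action == 1) ? 1.0 : 0.0; }
>   };
>
>   // A toy agent that keeps a running average reward for each action.
>   struct ToyAgent
>   {
>     double value[2] = { 0.0, 0.0 };
>     size_t counts[2] = { 0, 0 };
>
>     size_t Act() const { return (value[1] >= value[0]) ? 1 : 0; }
>
>     void Learn(const size_t action, const double reward)
>     {
>       ++counts[action];
>       value[action] += (reward - value[action]) / counts[action];
>     }
>   };
>
>   int main()
>   {
>     ToyEnvironment env;
>     ToyAgent agent;
>
>     // The usual loop: act, observe the reward, update the agent.
>     for (size_t episode = 0; episode < 100; ++episode)
>     {
>       const size_t action = agent.Act();
>       const double reward = env.Step(action);
>       agent.Learn(action, reward);
>     }
>
>     std::cout << "Learned to prefer action " << agent.Act() << std::endl;
>   }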
>
> I would also like to know if I should start working on the issues. I have
> installed mlpack properly on my system and have gone through the
> command-line programs and the C++ implementations of the methods, and now
> I would like to start contributing to mlpack.
>
>
> There are some easy issues on GitHub that you might find interesting; we
> will see if we can add more in the next few days. Besides that, since you
> would like to work on the reinforcement learning project, you might like to
> implement a simple agent that is capable of solving some simple tasks;
> Policy Gradients is a simple method that is really powerful and also quite
> intuitive. Don't feel obligated: you don't have to solve issues or
> implement anything to be considered for the project, but it's an easy way
> to dive into the codebase.
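>
> As a rough sketch of the idea (nothing here is an existing mlpack
> interface; the bandit setup and the names are just for illustration), a
> REINFORCE-style policy gradient agent on a two-armed bandit could look
> like this:
>
>   // Minimal REINFORCE (policy gradient) sketch on a two-armed bandit.
>   // Purely illustrative; it does not use any existing mlpack API.
>   #include <cmath>
>   #include <cstddef>
>   #include <iostream>
>   #include <random>
>
>   int main()
>   {
>     std::mt19937 rng(42);
>     std::uniform_real_distribution<double> uniform(0.0, 1.0);
>
>     // Softmax policy over two actions, parameterized by preferences theta.
>     double theta[2] = { 0.0, 0.0 };
>     const double learningRate = 0.1;
>     double baseline = 0.0; // Running average reward, reduces variance.
>
>     for (size_t episode = 0; episode < 2000; ++episode)
>     {
>       // pi(a) = exp(theta[a]) / (exp(theta[0]) + exp(theta[1])).
>       const double z = std::exp(theta[0]) + std::exp(theta[1]);
>       const double p0 = std::exp(theta[0]) / z;
>
>       // Sample an action from the current policy.
>       const size_t action = (uniform(rng) < p0) ? 0 : 1;
>
>       // Bandit: action 1 pays 1 with probability 0.8, action 0 with 0.2.
>       const double payoff = (action == 1) ? 0.8 : 0.2;
>       const double reward = (uniform(rng) < payoff) ? 1.0 : 0.0;
>
>       // REINFORCE: theta += lr * (reward - baseline) * grad log pi(action).
>       for (size_t a = 0; a < 2; ++a)
>       {
>         const double pa = std::exp(theta[a]) / z;
>         const double gradLogPi = ((a == action) ? 1.0 : 0.0) - pa;
>         theta[a] += learningRate * (reward - baseline) * gradLogPi;
>       }
>       baseline += 0.01 * (reward - baseline);
>     }
>
>     // After training, theta[1] should clearly dominate theta[0].
>     std::cout << "Preference for action 1: " << theta[1] - theta[0]
>               << std::endl;
>   }
>
> The same update carries over to full episodes by replacing the immediate
> reward with the return collected over the episode.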
>
> I hope this is helpful, let us know if you have any more questions.
>
> Thanks,
> Marcus
>
>
> On 28 Feb 2017, at 12:37, Sinjan Chakraborty <sinjanc at gmail.com> wrote:
>
> Hi,
>
> My name is Sinjan Chakraborty. I am a junior undergraduate student in
> Computer Science and Engineering from India. I spoke with a few mentors
> yesterday on the #mlpack IRC channel under my nickname Sinjan_. I would
> like to work with mlpack on the Reinforcement Learning project.
>
> Over the past year I have been involved with two separate projects in
> machine learning. I have also completed Andrew Ng's online Machine Learning
> course (with certificate), Geoffrey Hinton's Neural Networks for Machine
> Learning course, and Google's deep learning course on Udacity.
>
> I haven't had a chance to put my knowledge of deep learning to any
> practical use, since the projects I worked on were based on clustering
> algorithms and artificial neural networks. That's why I am particularly
> interested in working on this project.
>
> I will start learning reinforcement learning from the Berkeley Deep
> Reinforcement Learning course. <http://rll.berkeley.edu/deeprlcourse/>
>
> I would be very grateful if you could point me to anything I should study
> to prepare for this project.
>
> I would also like to know if I should start working on the issues. I have
> installed mlpack properly on my system and have gone through the
> command-line programs and the C++ implementations of the methods, and now
> I would like to start contributing to mlpack.
>
>
> Thanking you,
> Yours sincerely,
>
> Sinjan Chakraborty.