mlpack IRC logs, 2017-03-02

Logs for the day 2017-03-02 (starts at 0:00 UTC) are shown below.

March 2017
--- Log opened Thu Mar 02 00:00:20 2017
--- Day changed Thu Mar 02 2017
00:00 < kris1> zoq: Don't you think I would have to implement the optimize method in all the policies?
00:01 < kris1> zoq: Don't you think I would have to implement the optimize method in all the policies? Like in the PCA example, every policy implements the apply method.
00:04 < arunreddy_> kris1: There would be other policies that directly influence the updates in the iteration.
00:04 < arunreddy_> for e.g. momentum policy class.
00:05 < arunreddy_> *working on it right now.
00:05 -!- arunreddy_ is now known as arunreddy
00:08 < kris1> okay, but suppose I implement a function f in some policy with parameters param1, param2. Then I would have to implement this function in every policy with the same parameters, right?
00:09 < kris1> because if template<policy> is used and policy.function() is not present in one of the policies, it would result in an error
00:10 < kris1> arunreddy:
00:11 < arunreddy> How about setting those parameters to default values? So if you don't declare them, it resorts to the defaults; if you declare them, the new parameters are used.
00:12 < arunreddy> Correction: *have typed params and set them to default values.
00:13 < arunreddy> rcurtin_: What do you suggest?
00:13 < kris1> yes, but you still need to declare these with the same number of parameters, right?
00:13 < arunreddy> zoq
00:14 < arunreddy> yeah
00:14 -!- flyingpot [~flyingpot@] has joined #mlpack
00:18 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
00:27 < arunreddy> kris1: As you update the step size, how about having a function GetStepSize(params...) that returns the step size for the different policy classes?
00:27 < kris1> Yes, I can add that
00:27 < kris1> Do we really need constructors for the policy classes? I don't see a need.
00:28 < arunreddy> refer to the C++ Features > Templates section in the design guidelines
00:28 < arunreddy>
00:33 < kris1> that doesn't say we need a constructor for every class.
00:36 < arunreddy> Yeah. If you look at the LMetric class, it uses different template parameters to pick the right evaluation function.
00:37 < arunreddy>
00:58 < kris1> zoq: I have done something like this
00:59 < kris1> arunreddy: this is what I am doing.
01:00 < kris1> I will now also add the GetStepSize(params) function. Also I will convert all the variable names to camelCase.
01:13 < arunreddy> A suggestion: why don't you pass a reference to the decay type to the constructor?
01:13 < arunreddy> The design document suggests using references as and when possible.
01:13 < arunreddy> kris1
01:14 < kris1> yes, I will edit that; I also saw that
01:15 < kris1> though I feel there is a better method to use the policy class, something like lmetric.hpp
01:15 < kris1> anyway, I should sleep a bit; I have a class at 11
01:17 < arunreddy> Back to back night outs..
01:17 < arunreddy> get some rest.
01:39 < zoq> kris1: basically all policies depend on the iteration (time), for the rest we can use the constructor.
01:44 < arunreddy> zoq: So do you suggest overloaded constructors for SGD?
01:45 < zoq> arunreddy: Either overloading or specialize the constructor for each policy.
01:50 < zoq> Depending what you like to do overloading might not be possible:
01:50 < zoq>
01:50 -!- kris1 [~kris@] has quit [Quit: Leaving.]
01:50 < zoq> does not work
01:54 < zoq> something like that works:
01:55 < zoq> ah, one should be PolicyOne and the other PolicyTwo
01:56 < zoq> But I think in the momentum case it should work without using enable_if.
01:57 < arunreddy> Can't we have something like
01:57 < arunreddy> check the comment
01:57 < arunreddy> zoq
02:02 < zoq> see my comment
02:03 < zoq> It's totally fine not to use enable_if; it might be easier, not sure.
02:05 < zoq> it should be Optimizer<PolicyTwo> and not just Optimizer
02:10 < zoq> This case is kinda special because we can't use float/double as template parameters. Otherwise we could do Optimizer<PolicyTwo<0.3> > optimizer;
02:11 < arunreddy> I thought about that, but passing 0.3 as a type parameter is not that clean.
02:12 < arunreddy> Refer to my comment in policy.hpp
02:12 < arunreddy> How about having an Optimizer<true> for the vanilla PolicyOne
02:14 < arunreddy> And to add to it. SGD with momentum has to store the velocity matrix over the iterations..
02:15 < zoq> hm, I think I would go with an Empty policy class instead of Momentum<false>.
02:16 -!- flyingpot [~flyingpot@] has joined #mlpack
02:16 < zoq> About the velocity matrix, you can hold that matrix inside the Momentum policy class, right?
02:17 < arunreddy> yeah only in the Momentum policy class, but not in the Empty policy class.
02:20 < zoq> That would just do vanilla SGD - template<typename DecomposableFunctionType, typename UpdatePolicy = EmptyUpdatePolicy> SGD
02:20 < zoq> since the update function in EmptyUpdatePolicy does nothing.
02:20 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 260 seconds]
02:21 < zoq> SGD<FunctionType, Momentum> optimizer; uses SGD with momentum
02:22 < zoq> the cool thing is if someone likes to use another learning rate update strategy, he can just define another policy
02:22 < arunreddy> Now Momentum will have two functions: one for initialization of the velocity based on iterate.rows and iterate.cols
02:23 < arunreddy> and the second for updating in every iteration.
02:24 < arunreddy> What do you think?
02:24 < arunreddy> Yeah, for Nesterov Momentum there can be another UpdatePolicy class like NesterovMomentum..
02:25 < zoq> not sure I get your point; maybe it's too late
02:29 < arunreddy> I am referring to
02:29 < arunreddy> Sorry for keeping you late.
02:35 < zoq> It might be possible to use the constructor of the policy class instead of an init function, not sure that's a good idea, have to think about that. But yeah, the rest looks good. I think you can do that for now with the init function, if we can think of something better, the change would be easy. What do you think?
02:37 < arunreddy> Sounds like a plan. I can start with it. At the constructor level, we don't know the size of the iterate matrix.
02:38 < arunreddy> I am still thinking how to do it at the constructor level. But for now will get it moving.. :)
02:39 < zoq> yeah, in that case we have to create the policy instance after we know the size.
02:40 < zoq> You are in UTC -7h if I remember right?
02:40 < arunreddy> That makes it runtime execution
02:40 < arunreddy> Yeah.
02:41 < arunreddy> How about you? Are you on UTC+1?
02:41 < zoq> some hours left :)
02:41 < zoq> yes, right
02:42 < arunreddy> looks like it is getting super late for you.
02:42 < zoq> About to get some sleep :)
02:42 < arunreddy> ok good night.
04:02 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 260 seconds]
04:02 -!- vpal [~vivek@unaffiliated/vivekp] has joined #mlpack
04:25 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
04:28 -!- flyingpot [~flyingpot@] has joined #mlpack
04:34 -!- shihao [407978c3@gateway/web/freenode/ip.] has joined #mlpack
04:35 < shihao> Hi there, I am a little confused about the process of calculating probabilities in NBC.
04:36 < shihao> In order to decrease floating point errors, we calculate sum of log. Here is code: testProbs.col(i) += (data.n_rows / -2.0 * log(2 * M_PI) - 0.5 * log(arma::det(arma::diagmat(variances.col(i)))) + exponents);
04:37 < shihao> Why does the probability need to be divided by data.n_rows?
04:39 < shihao> And I think the coefficient here, '-2.0 * log(2 * M_PI)', should be -0.5 * log(2 * M_PI) as the second component.
04:42 < shihao> I guess the idea is log(Pr1) + log(Pr2) == log(Pr1*Pr2). But when I calculate e^log(Pr1*Pr2), the results don't lie between 0 and 1.
04:43 -!- paws [01ba11b7@gateway/web/freenode/ip.] has joined #mlpack
04:43 < paws> hi, I'm interested in participating in GSoC; where do I get started?
04:45 < shihao> The docs are good:
04:45 < shihao> Maybe you can select an entry level issue to get started.
04:45 -!- usama [6f44656c@gateway/web/freenode/ip.] has joined #mlpack
04:46 -!- Narayan [1b3e0a27@gateway/web/freenode/ip.] has joined #mlpack
04:50 < paws> ok thanks
04:52 -!- usama [6f44656c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
05:05 < arunreddy> paws:
05:05 < arunreddy> this can be helpful too.
05:08 < shihao> Hi arunreddy. Do you have any idea about what I asked before?
05:27 -!- shihao [407978c3@gateway/web/freenode/ip.] has quit [Quit: Page closed]
05:41 < arunreddy> shihao, Sorry didn't see your message.
05:42 < arunreddy> So, it's not the regular naive Bayes, but the GMM version of it.
05:43 < arunreddy> The probability calculation is based on the pdf of a normal distribution.
05:43 < arunreddy>
05:43 < arunreddy> You can refer to eq. (3) in section 2.1 to get an idea.
05:46 < arunreddy> Also note that they are conditional probabilities; they don't necessarily sum up to 1.
05:47 < arunreddy> P(A|C) + P(B|C) != 1
05:58 < arunreddy> zoq: Made some progress based on our discussion.
05:59 < arunreddy> There is some issue with Optimize declaration being used by Regularized SVD function, couldn't get the build working.
06:00 < arunreddy> Line 113:
06:00 < arunreddy> The following is the build error:
06:01 < arunreddy> May be I should dig in a little into regularized svd implementation.
06:05 -!- Narayan [1b3e0a27@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
06:06 -!- deepooja [af6f8211@gateway/web/freenode/ip.] has joined #mlpack
06:06 < deepooja> Hello
06:06 < arunreddy> hello..
06:06 < deepooja> I am here for the GSoC 2017
06:06 -!- paws [01ba11b7@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
06:07 < deepooja> I can see some really good ideas here
06:08 < arunreddy> me too.
06:09 < deepooja> Who is the mentor here, arunreddy?
06:09 < arunreddy> it's rcurtin_ and zoq
06:10 < deepooja> Oh thank you arunreddy
06:10 < deepooja> I am Deepanshu Thakur, final year student of Arya college of Engineering and I.T., Jaipur, India
06:10 < deepooja> I am currently working with Celebal Corp ( as a data science trainee
06:12 < deepooja> What about you? arunreddy
06:13 < arunreddy> I am a third year PhD student at Arizona State University, USA.
06:13 < deepooja> Oh good :) Aren't you feeling sleepy right now?
06:14 < deepooja> Which idea are you willing to work on?
06:14 < arunreddy> recurrent neural networks.
06:15 < deepooja> cool :)
06:16 < arunreddy> :)
06:16 < arunreddy> you have something in mind?
06:20 < deepooja> yes I would love to work on either Reinforcement learning or Fast k-centers Algorithm & Implementation
06:20 < deepooja> is this your first gsoc?
06:24 < arunreddy> sweet. Yes it is.
06:27 < deepooja> good all the best :)
06:28 < deepooja> Except for rcurtin_ and zoq, is everyone else a GSoC participant?
06:33 < govg> I'm not, but I assume everyone else is.
06:33 < deepooja> Hi govg
06:33 < govg> Hi
06:35 < deepooja> How long have you been on this channel, and how are you related to mlpack?
06:36 < govg> I'm not affiliated to mlpack in any way.
06:36 < govg> I used to frequent this channel from around 2013, I guess, when I had to use it. I also maintained the arch linux package for a while, still lurk around here.
06:38 < govg> Or maybe 2014, dunno.
06:38 < deepooja> haha that's a really long time.
06:39 < govg> Yeah I tend to just add channels into my IRC client and forget about them.
06:39 < deepooja> haha, which IRC client are you using?
06:39 < govg> Though now I have some passing familiarity with mlpack by virtue of this being one of the only ones I'm active on :)
06:39 < govg> irssi
06:40 < deepooja> Since you do have some familiarity with mlpack, can you please tell me more about this project?
06:41 < govg> Project as mlpack?
06:41 < govg> I do not know much about the current proposals, sorry.
06:41 < govg> You should wait for zoq or rcurtin to respond, they will do so eventually.
06:43 < deepooja> Nah, I am not talking about the current proposals, but yes, I was going through mlpack's GitHub page and I am enjoying the introduction.
06:45 < govg> Okay.
07:11 -!- Thyrix [~Thunderbi@] has joined #mlpack
07:35 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
07:48 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
07:55 -!- Vladimir_ [5c2a1a33@gateway/web/freenode/ip.] has quit [Quit: Page closed]
08:04 -!- pg [b64a2c01@gateway/web/freenode/ip.] has joined #mlpack
08:11 -!- flyingpot_ [~flyingpot@] has joined #mlpack
08:14 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
08:15 -!- hxidkd [a8eb4fb9@gateway/web/freenode/ip.] has joined #mlpack
08:15 -!- flyingpot_ [~flyingpot@] has quit [Read error: Connection reset by peer]
08:15 -!- flyingpot [~flyingpot@] has joined #mlpack
08:16 -!- pg_ [3d0c4bd1@gateway/web/freenode/ip.] has joined #mlpack
08:18 -!- kris1 [~kris@] has joined #mlpack
08:18 -!- pg [b64a2c01@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
08:18 < pg_> Hello, I've gone through the list of ideas for 2017.. How can I get started with the initial contribution?
08:18 -!- shikhar [67d49def@gateway/web/freenode/ip.] has joined #mlpack
08:18 -!- kris2 [~kris@] has joined #mlpack
08:18 -!- flyingpot_ [~flyingpot@] has joined #mlpack
08:21 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
08:22 -!- kris1 [~kris@] has quit [Ping timeout: 246 seconds]
08:28 -!- flyingpot [~flyingpot@] has joined #mlpack
08:31 -!- flyingpot_ [~flyingpot@] has quit [Ping timeout: 240 seconds]
08:34 -!- flyingpot_ [~flyingpot@] has joined #mlpack
08:35 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 264 seconds]
08:37 -!- hxidkd [a8eb4fb9@gateway/web/freenode/ip.] has quit []
08:39 -!- pg_ [3d0c4bd1@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
08:39 -!- hxidkd [~hxidkd@] has joined #mlpack
08:42 -!- hxidkd [~hxidkd@] has quit [Client Quit]
09:31 -!- Thyrix [~Thunderbi@] has quit [Quit: Thyrix]
09:51 -!- flyingpot_ [~flyingpot@] has quit [Ping timeout: 240 seconds]
10:20 -!- irakli_p [b04a7cf3@gateway/web/freenode/ip.] has joined #mlpack
10:20 -!- irakli_p [b04a7cf3@gateway/web/freenode/ip.] has quit [Client Quit]
10:29 -!- vikas [6725c94b@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
10:36 -!- Thyrix [~Thunderbi@] has joined #mlpack
10:38 -!- dhawalht [ca4eaca2@gateway/web/freenode/ip.] has joined #mlpack
10:43 -!- dhawalht [ca4eaca2@gateway/web/freenode/ip.] has quit [Quit: Page closed]
10:48 -!- flyingpot [~flyingpot@] has joined #mlpack
10:50 -!- mikeling [uid89706@gateway/web/] has joined #mlpack
11:01 -!- diehumblex [uid209517@gateway/web/] has joined #mlpack
11:28 -!- flyingpot [~flyingpot@] has quit [Read error: Connection reset by peer]
11:28 -!- flyingpot [~flyingpot@] has joined #mlpack
11:39 -!- kris2 [~kris@] has quit [Ping timeout: 260 seconds]
12:05 -!- shikhar [67d49def@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
12:18 -!- Thyrix [~Thunderbi@] has quit [Ping timeout: 240 seconds]
12:18 -!- Thyrix [~Thunderbi@] has joined #mlpack
12:26 -!- vpal is now known as vivekp
13:03 -!- chvsp [cb6ef217@gateway/web/freenode/ip.] has joined #mlpack
13:22 -!- pushpendra [312202fe@gateway/web/freenode/ip.] has joined #mlpack
13:33 -!- pushpendra [312202fe@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
13:49 -!- You're now known as rcurtin
13:52 -!- topology [3d0c4bd1@gateway/web/freenode/ip.] has joined #mlpack
13:53 -!- Thyrix [~Thunderbi@] has quit [Quit: Thyrix]
13:59 -!- dineshraj01 [~dinesh@] has joined #mlpack
14:04 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 246 seconds]
14:13 -!- topology [3d0c4bd1@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
14:20 -!- dineshraj01 [~dinesh@] has quit [Read error: Connection reset by peer]
14:30 -!- mayank [73f8f80d@gateway/web/freenode/ip.] has joined #mlpack
14:30 -!- topology [b64b2d01@gateway/web/freenode/ip.] has joined #mlpack
14:42 -!- shikhar [67d49def@gateway/web/freenode/ip.] has joined #mlpack
14:51 -!- deepooja [af6f8211@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
14:53 -!- shihao [407978c3@gateway/web/freenode/ip.] has joined #mlpack
15:00 -!- topology [b64b2d01@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
15:01 -!- usama [6f44656c@gateway/web/freenode/ip.] has joined #mlpack
15:01 < usama> thanks a lot zoq, it WORKED!!!!!!!!!!!!
15:01 -!- flyingpot [~flyingpot@] has joined #mlpack
15:02 < usama> It was the issue that libopenblas.dll.a was not included. That part is missing from the setup guide.
15:06 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 264 seconds]
15:08 < zoq> sama: yeah, do you think we should provide a screenshot or is the note just fine?
15:11 < zoq> oops, I got the name wrong
15:15 -!- kris1 [~kris@] has joined #mlpack
15:20 -!- aditya_ [~aditya@] has joined #mlpack
15:24 -!- tejank10 [75cc991f@gateway/web/freenode/ip.] has joined #mlpack
15:25 -!- tejank10 [75cc991f@gateway/web/freenode/ip.] has quit [Client Quit]
15:25 -!- shihao [407978c3@gateway/web/freenode/ip.] has quit [Quit: Page closed]
15:29 < usama> a note would be fine
15:30 < rcurtin> usama: I'll update the wiki page today with the information suggested in the ticket zoq referenced
15:31 < usama> Thanks
15:34 -!- hxidkd [~hxidkd@] has joined #mlpack
15:50 < rcurtin> if someone out there is looking for a moderate difficulty issue to solve, here is one that I would like to see solved but don't have time to do:
15:50 < rcurtin> requires some knowledge of adaboost and C++ debugging skills
15:53 -!- hxidkd [~hxidkd@] has quit [Ping timeout: 240 seconds]
16:18 -!- yatharth [2f1eb558@gateway/web/freenode/ip.] has joined #mlpack
16:20 -!- yatharth [2f1eb558@gateway/web/freenode/ip.] has quit [Client Quit]
16:47 -!- mikeling [uid89706@gateway/web/] has quit [Quit: Connection closed for inactivity]
16:57 -!- omar__ [c4dd6710@gateway/web/freenode/ip.] has joined #mlpack
16:58 -!- omar__ [c4dd6710@gateway/web/freenode/ip.] has quit [Client Quit]
16:58 -!- omar__ [c4dd6710@gateway/web/freenode/ip.] has joined #mlpack
16:58 -!- omar__ [c4dd6710@gateway/web/freenode/ip.] has quit [Client Quit]
17:03 -!- flyingpot [~flyingpot@] has joined #mlpack
17:08 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 264 seconds]
17:16 -!- usama [6f44656c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
17:25 -!- shikhar [67d49def@gateway/web/freenode/ip.] has quit [Quit: Page closed]
17:34 -!- shihao [80b497b4@gateway/web/freenode/ip.] has joined #mlpack
17:36 < arunreddy> zoq: In logistic regression, the optimizer type is declared using the following..
17:36 < arunreddy> template<template<class> class typedef OptimizerType OptimizerType>
17:37 < arunreddy> template<template<typename> class OptimizerType> to be more precise..
17:39 < arunreddy> adding a new typename to the optimizer, SGD<DecomposableFunctionType, UpdatePolicyType>, enforces changes across the codebase where it is being used.
17:39 < arunreddy> is it possible to make it more generic?
17:40 -!- qwe [67e3606a@gateway/web/freenode/ip.] has joined #mlpack
17:40 < arunreddy> shihao: Hello..
17:41 < shihao> Hi there! Thank you for your answer!
17:41 < arunreddy> cool.
17:41 < arunreddy> np.
17:42 < shihao> Can I ask another question? My math is not very good....
17:42 < arunreddy> sure go ahead. i will try..
17:43 < shihao> Why do we use a mixture model? I guess the features can form a multivariate Gaussian model.
17:43 < shihao> I don't understand that weight part.
17:46 < arunreddy> For convenience.
17:47 < arunreddy> which weight are you referring to?
17:50 < shihao> I guess in the code here: testProbs.col(i) += (data.n_rows / -2.0 * log(2 * M_PI) - 0.5 * log(arma::det(arma::diagmat(variances.col(i)))) + exponents);
17:50 < shihao> The assumption is that there are four identical gaussian distributions.
17:51 < shihao> Why don't we just use one multivariate gaussian distribution?
17:51 -!- PARADOXST [ac088185@gateway/web/freenode/ip.] has joined #mlpack
17:52 -!- PARADOXST [ac088185@gateway/web/freenode/ip.] has quit [Client Quit]
17:52 -!- shadycs15 [67c07715@gateway/web/freenode/ip.] has joined #mlpack
17:56 < arunreddy> shihao: Just curious how you came up with number 4.
17:57 < shihao> Oh, Sorry about that. I used iris dataset so there are four features :)
18:00 < shihao> I guess I figured it out. The code here considers the distribution of each feature as a univariate gaussian distribution and then combines them linearly.
18:02 < shihao> Is that right?
18:02 -!- sai [0e8b5206@gateway/web/freenode/ip.] has joined #mlpack
18:03 < arunreddy> each P(X|Y) is sampled from a gaussian.
18:03 < arunreddy> All the features combined together form a multivariate gaussian, with a diagonal covariance matrix.
18:10 < shihao> Yes, so I think adding log P(X|Y) + log P(Y) is enough. Why does the code here invert it and multiply by the number of features?
18:12 < arunreddy> The log posterior is computed by summing up the log prior, log P(Y), and the log likelihood, SUM_i log P(X_i|Y).
18:30 -!- sai [0e8b5206@gateway/web/freenode/ip.] has quit [Quit: Page closed]
18:34 -!- kris1 [~kris@] has left #mlpack []
18:36 < shadycs15> hi devs
18:37 < shadycs15> any word of advice for gsoc 2017 aspirants?
18:39 -!- mayank [73f8f80d@gateway/web/freenode/ip.] has quit [Quit: Page closed]
18:46 < rcurtin> shadycs15: the best you can find will already be written on the website:
18:46 < rcurtin> arunreddy: I saw your email, I will respond shortly
18:47 < arunreddy> rcurtin: Thank you.
18:49 < shadycs15> rcurtin: are there any warm-up challenges for the reinforcement learning idea?
18:50 < rcurtin> shadycs15: like the gsoc.html page says, you'll have to take a look through the list of open issues or see if there is some other bug you can find
18:50 < shadycs15> I see. Thanks
19:08 -!- qwe [67e3606a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:15 -!- shadycs15 [67c07715@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:24 -!- shihao [80b497b4@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:43 -!- chvsp [cb6ef217@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:49 -!- aditya_ [~aditya@] has quit [Ping timeout: 256 seconds]
19:59 < zoq> rcurtin: I thought about the generic optimizer API and wouldn't variadic templates also solve the issue?
19:59 < zoq> rcurtin: Something like:
20:03 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has joined #mlpack
20:06 -!- flyingpot [~flyingpot@] has joined #mlpack
20:10 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 246 seconds]
20:14 -!- qdqds [6725c94b@gateway/web/freenode/ip.] has joined #mlpack
20:23 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has quit [Quit: Page closed]
20:23 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has joined #mlpack
20:24 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has quit [Client Quit]
20:25 < rcurtin> zoq: I liked that idea originally too, but what I got hung up about is that you still need a way to specify what those second template parameters are
20:26 < rcurtin> variadic templates will allow us to use the defaults, but I did not see a way other than template typedefs that we could set the second template parameter of an optimizer to something custom without changing the default
20:26 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has joined #mlpack
20:26 < rcurtin> I suppose, it's possible there's something I overlooked, but I couldn't come up with a solution for that bit
20:40 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
20:43 < zoq> rcurtin: right, you still have to use a typedef to provide an alias. I guess one advantage I can think of right now, is that using variadic templates you can also do:
20:43 < zoq> LogisticRegression<> lr;
20:43 < zoq> SGD<LogisticRegressionFunction<>, CustomPolicy1, CustomPolicy2> sgd(lr);
20:43 < zoq> lr.Train(sgd);
20:43 < zoq> instead of providing another alias, that wraps the other template parameter.
20:45 < zoq> but providing an alias isn't that much of a deal
20:47 < arunreddy> I almost finished coding with an alias StandardSGD.
20:49 < zoq> great :)
20:49 < rcurtin> zoq: yeah, that is definitely an advantage, the variadic templates do allow you some more flexibility like that
20:49 < arunreddy> I see another problem now: what if we would like to use MomentumUpdate in LogisticRegression instead of EmptyUpdate?
20:49 < arunreddy> using StandardSGD = SGD<DecomposableFunctionType, EmptyUpdate>;
20:50 < rcurtin> arunreddy: couldn't you just have another template typedef for MomentumSGD (SGD using MomentumUpdate) and then use, e.g., LogisticRegression<MomentumSGD>?
20:51 < rcurtin> zoq: I guess, in the end, it's best to use both strategies
20:51 < arunreddy> But that way we don't have the freedom of playing with different SGDs on the fly. It has to be specified upfront.
20:52 < arunreddy> and with an increase in the number of policy classes, the number of combinations required will increase..
20:52 < rcurtin> I'm not sure I understand what you mean here
20:52 < rcurtin> if you want to use a different type of optimizer, you have to make a template typedef for it, unless like zoq said you're in a situation where you can simply pass an instantiated optimizer and it can infer the type with variadic templates
20:53 < arunreddy> In Neighborhood component analysis, we have "sgd", "minibatch-sgd" and others...
20:53 < rcurtin> yes, those are the command-line arguments
20:54 < rcurtin> unfortunately it's necessarily the case that the command-line interface (or other bindings) can't give you something as expressive as the C++ interface
20:54 < rcurtin> so we can provide a couple of popular optimizers with the command-line interface, but we can't really provide every possible thing, the list of things to handle gets too long
20:54 < arunreddy> We create aliases for a few popular implementations?
20:54 < arunreddy> a few popular combinations, I meant.
20:55 < rcurtin> we can create aliases for basically most of the combinations we implement in the mlpack C++ codebase
20:55 < rcurtin> but this is a different thing than what we choose to supply from the command-line interface
20:55 < rcurtin> when I say "create aliases in C++" I mean template typedef; "create aliases for command-line interface" means doing string handling and conversion from string to types in, e.g., nca_main.cpp
20:55 < rcurtin> I hope that clarifies what I'm talking about, let me know if I can clarify further
20:57 < arunreddy> got it.
20:57 < arunreddy> Any second thoughts on using variadic templates?
20:57 < rcurtin> I don't see a problem with using both variadic templates and template typedefs
20:58 < rcurtin> but a key point here is that if you are using variadic templates like in zoq's gist where your code only knows about the first template parameter
20:58 < rcurtin> (in the case of the optimizers, that first template parameter is OptimizerType)
20:58 < rcurtin> then you can't possibly specify anything except the default template parameter for any of the other template parameters
20:58 < rcurtin> except in the situation that zoq mentioned above
20:59 < rcurtin> hence, we must also have template typedefs in order that a user can use the class with a specified second, third, fourth template parameter instead of the defaults
20:59 < rcurtin> I hope I've explained that ok, let me know if I can clarify
21:02 < zoq> I think for the SGD PR it doesn't matter since we should provide an alias anyway. It might be interesting to look into variadic templates in another context/issue, maybe using variadic templates increases the build time, not sure.
21:08 < arunreddy> rcurtin, if we use class instead of typename for variadic templates, the order doesn't matter. So we get a little more freedom. I am not quite sure about the execution time and speedup.
21:27 < rcurtin> arunreddy: I am not sure what you mean by that; can you explain further?
21:50 < arunreddy> rcurtin: I have misunderstood something with template params. Sorry for the confusion. Please ignore.
21:51 < rcurtin> ok, no worries :)
22:07 -!- flyingpot [~flyingpot@] has joined #mlpack
22:08 < rcurtin> zoq: the new github homepage looks great, I am happy every time I load it :)
22:11 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
22:21 < zoq> rcurtin: I have to say I had fun playing with this a little bit :)
22:22 < rcurtin> :)
22:35 -!- aman11dh [95a9d592@gateway/web/freenode/ip.] has joined #mlpack
22:36 < aman11dh> Is there anybody out there?
22:38 < arunreddy> aman11dh: hey
22:39 < aman11dh> Hey Arun
22:39 < aman11dh> I was looking at mlpack lately for starting some C++ dev work.
22:39 < aman11dh> Are there any future plans for CUDA integration?
22:40 < arunreddy> you should check with rcurtin and zoq
22:41 < aman11dh> rcurtin: zoq: Any future plans for mlpack with CUDA?
22:43 < rcurtin> this is a difficult topic... CUDA programming primitives are very ugly
22:43 < rcurtin> and don't really fit in mlpack too well
22:43 < rcurtin> one possible idea is to use NVBLAS, which can run BLAS operations on the GPU when it is predicted it will give speedup
22:43 < rcurtin> so I have heard some people using that see pretty decent speedups for some machine learning algorithms
22:44 < rcurtin> but personally I would prefer to avoid raw CUDA (or OpenCL) code inside of mlpack, I think mlpack is better at a higher level of abstraction
22:44 < rcurtin> I will say, I think eventually Armadillo (the matrix library we use) will also support matrices on the GPU, and then this will allow mlpack to work at a higher level of abstraction
22:44 < rcurtin> but this is not something that is ready yet, and it may be some months until it is :)
22:46 < aman11dh> I was planning to add integration with cuBLAS or NVBLAS. Though I hate pure CUDA myself :P
22:47 < rcurtin> if they are just BLAS replacements, then you can just set Armadillo to use them instead of OpenBLAS or whatever else
22:47 < rcurtin> very simple! just a line or two of configuration and suddenly, boom, mlpack on the GPU :)
22:47 < rcurtin> it may not be a fully optimal implementation like that, but it's certainly better than nothing
22:47 < aman11dh> Let me check about that and get back to you in a few hours :)
22:49 < rcurtin> yeah, what I said assumes that Armadillo's documentation for replacing BLAS is good :)
22:49 < rcurtin> I think it is, but I have been so involved with Armadillo for so long that I can't have an unbiased opinion of whether or not their docs are good
22:50 < aman11dh> I agree; as I was checking the FAQs, it looks like they have full support for NVBLAS and ACML.
22:51 < zoq> rcurtin: It just works as you said it's super straightforward.
22:51 < rcurtin> yeah I think you just have to modify config.hpp
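For the record, the "line or two of configuration" is roughly the following. This is a sketch; the exact macro names and linker flags should be checked against the Armadillo version and NVBLAS documentation in use:

```cpp
// In Armadillo's include/armadillo_bits/config.hpp (verify the names
// against your Armadillo version):
//
//   #define ARMA_USE_BLAS     // keep enabled so matrix multiply goes to BLAS
//   #define ARMA_USE_LAPACK   // keep enabled for decompositions
//   // #define ARMA_USE_WRAPPER  // comment out to link BLAS/LAPACK directly
//
// Then link against NVBLAS ahead of the CPU BLAS, e.g.:
//
//   g++ prog.cpp -o prog -lnvblas -llapack
//
// NVBLAS intercepts level-3 BLAS calls and routes them to the GPU when it
// expects a speedup, falling back to the CPU BLAS named in nvblas.conf.
```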
23:02 -!- aman11dh2 [95a9d592@gateway/web/freenode/ip.] has joined #mlpack
23:13 -!- diehumblex [uid209517@gateway/web/] has quit [Quit: Connection closed for inactivity]
23:13 -!- aman11dh2 [95a9d592@gateway/web/freenode/ip.] has quit [Quit: Page closed]
23:13 < arunreddy> rcurtin,zoq: I have an issue with the Train function in LogisticRegression.
23:13 < arunreddy>
23:15 < arunreddy> Calling LogisticRegression with StandardSGD optimizer fails.
23:18 < arunreddy> Am I missing something here?
23:24 < zoq> arunreddy: Looks good, not sure why the template argument deduction/substitution failed, have you modified the logistic regression class?
23:25 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
23:29 < arunreddy> zoq: - SGD<LogisticRegressionFunction<>> sgdOpt(lrf);
23:29 < arunreddy> + StandardSGD<LogisticRegressionFunction<>> sgdOpt(lrf);
23:30 < arunreddy> that's the only change.
23:31 < arunreddy> More detailed error.
23:43 < zoq> hm, it works if I use variadic templates, but it doesn't without, and right now I can't see why it doesn't
23:53 < arunreddy> hmm.
23:53 < arunreddy> how do you usually debug such errors at build time?
23:54 < arunreddy> make -d is not that useful.
--- Log closed Fri Mar 03 00:00:26 2017