mlpack IRC logs, 2018-07-18

Logs for the day 2018-07-18 (starts at 0:00 UTC) are shown below.

--- Log opened Wed Jul 18 00:00:51 2018
00:11 -!- lozhnikov [~mikhail@lozhnikov.static.corbina.ru] has joined #mlpack
00:26 -!- lozhnikov [~mikhail@lozhnikov.static.corbina.ru] has quit [Ping timeout: 260 seconds]
00:50 -!- robertohueso [~roberto@185.67.107.94] has left #mlpack []
01:13 -!- ShikharJ [Elite21812@gateway/shell/elitebnc/x-reuzniqdmakqvved] has quit [Quit: ZNC 1.6.5-elitebnc:6 - http://elitebnc.org]
01:31 -!- ShikharJ [Elite21812@gateway/shell/elitebnc/x-ckmmtipauqnmdbnw] has joined #mlpack
05:16 -!- Netsplit *.net <-> *.split quits: xa0
05:35 -!- xa0 [~zeta@unaffiliated/uoy] has joined #mlpack
06:28 -!- jenkins-mlpack [~PircBotx@8.44.230.48] has joined #mlpack
06:40 -!- manish7294 [8ba79c98@gateway/web/freenode/ip.139.167.156.152] has joined #mlpack
06:43 < manish7294> rcurtin: I have debugged the boostmetric implementation; there were a few small issues. Here's the updated gist - https://gist.github.com/manish7294/3d97be37919658b96bba0125f2f3de84 I also re-ran the simulations and the results are pretty good. simulations - https://gist.github.com/manish7294/2388267666b1159ce261ce7b95dc923c
06:43 < manish7294> I think we should definitely have this. If you want I can open a PR.
06:45 < manish7294> Maybe after some more optimizations we can make it even faster.
07:47 < rcurtin> manish7294: we need to see comparisons with LMNN
07:48 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 268 seconds]
08:01 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
08:06 -!- xa0 [~zeta@unaffiliated/uoy] has quit [Excess Flood]
08:06 < rcurtin> LMNN with no impostor recalculation that is
08:06 -!- xa0 [~zeta@unaffiliated/uoy] has joined #mlpack
08:43 < manish7294> rcurtin: These are the results with eval bounds branch - https://gist.github.com/manish7294/2388267666b1159ce261ce7b95dc923c
08:43 < manish7294> Okay, I will do this with no impostor recalculation as well
08:54 < jenkins-mlpack2> Project docker mlpack nightly build build #10: FAILURE in 4 hr 40 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/10/
09:00 < manish7294> rcurtin: I have updated the simulations for LMNN with no impostor recalculation as well. https://gist.github.com/manish7294/2388267666b1159ce261ce7b95dc923c
09:01 < manish7294> In this case it seems the optimizer converges within one iteration for all the datasets.
09:04 < rcurtin> something is not right in those results if it converges in one iteration---did you run it with the range set to some very high number?
09:07 < rcurtin> by the way, I am sorry I did not get to responding about BoostMetric yesterday; after lunch the rest of the day ended up being allocated
09:08 < rcurtin> we actually went hiking until 4am, so I don't know if I can read it today; I don't think I can stay awake
09:08 < manish7294> no, I just removed the impostors recalculation code
09:08 < manish7294> no worries :)
09:09 < manish7294> as we are already calculating them in the LMNNFunction constructor
09:10 < manish7294> I think that's the case for L-BFGS only, as the others are taking quite a number of iterations
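For context, a minimal sketch of the caching idea under discussion, with hypothetical names (this is not the actual mlpack LMNNFunction code): impostors are found once in the constructor and reused by every Evaluate() call, whereas the range == 1 variant re-finds them in the transformed space on each iteration.

    #include <armadillo>

    // Toy impostor search: for each point, the index of the nearest point
    // with a different label (brute force, for illustration only).
    arma::uvec FindImpostors(const arma::mat& data,
                             const arma::Row<size_t>& labels)
    {
      arma::uvec impostors(data.n_cols, arma::fill::zeros);
      for (size_t i = 0; i < data.n_cols; ++i)
      {
        double best = arma::datum::inf;
        for (size_t j = 0; j < data.n_cols; ++j)
        {
          if (labels[j] == labels[i])
            continue;
          const double d = arma::norm(data.col(i) - data.col(j));
          if (d < best) { best = d; impostors[i] = j; }
        }
      }
      return impostors;
    }

    class LMNNObjectiveSketch
    {
     public:
      LMNNObjectiveSketch(const arma::mat& data,
                          const arma::Row<size_t>& labels,
                          const bool recalculate) :
          data(data), labels(labels), recalculate(recalculate),
          impostors(FindImpostors(data, labels))  // Found once, up front.
      { }

      // Called by the optimizer on every iteration.
      double Evaluate(const arma::mat& transformation)
      {
        // The range == 1 variant: re-find impostors in the transformed space.
        if (recalculate)
          impostors = FindImpostors(transformation * data, labels);

        // The actual LMNN objective over `impostors` would be computed here.
        return 0.0;
      }

     private:
      const arma::mat& data;
      const arma::Row<size_t>& labels;
      const bool recalculate;
      arma::uvec impostors;
    };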
09:12 < rcurtin> I'd urge you to take a look into it since it seems to me there is definitely a bug there
09:19 < manish7294> you were right, I missed one thing. I will update the results soon
09:24 < rcurtin> sounds good, thanks
09:33 < manish7294> rcurtin: Here's the result - https://gist.github.com/manish7294/2388267666b1159ce261ce7b95dc923c
09:37 < rcurtin> ok, thanks
09:38 < rcurtin> the results are very mixed; it's not clear which of these three is best, and it doesn't seem like there's a consistent pattern
09:38 < rcurtin> that's not necessarily a problem, just an observation
09:38 < rcurtin> how difficult would it be for you to recalculate impostors in your BoostMetric implementation?
09:38 < rcurtin> (and the impostor-recalculating LMNN implementation, was that with range == 1?)
09:45 < manish7294> Ya, it's with range 1
09:48 < manish7294> Originally, boostmetric doesn't do this, but we can do this at every iteration by recomputing the triplets and then Ar. https://gist.github.com/manish7294/3d97be37919658b96bba0125f2f3de84#file-boostmetric_impl-hpp-L40
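For reference, a minimal sketch of the per-iteration recomputation being discussed, with hypothetical names (not the actual gist code): re-find the triplets on the transformed data and rebuild each A_r matrix from them.

    #include <armadillo>
    #include <vector>

    // A triplet (i, j, k): j is a same-class target neighbor of point i,
    // and k is an impostor (a nearby point with a different label).
    struct Triplet { size_t i, j, k; };

    // Rebuild each A_r on the data as transformed by the current metric:
    // A_r = (x_i - x_k)(x_i - x_k)^T - (x_i - x_j)(x_i - x_j)^T.
    std::vector<arma::mat> ComputeAr(const arma::mat& transformedData,
                                     const std::vector<Triplet>& triplets)
    {
      std::vector<arma::mat> ar;
      ar.reserve(triplets.size());
      for (const Triplet& t : triplets)
      {
        const arma::vec dik = transformedData.col(t.i) - transformedData.col(t.k);
        const arma::vec dij = transformedData.col(t.i) - transformedData.col(t.j);
        ar.push_back(dik * dik.t() - dij * dij.t());
      }
      return ar;
    }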
09:50 < rcurtin> right, would you mind doing this and adding that to the simulations also?
09:50 < rcurtin> I want to see if this gives any consistent performance increase to the BoostMetric accuracy results
09:50 < manish7294> sure, will update that soon.
09:50 < rcurtin> in addition, if you are willing to add some of the datasets from the BoostMetric paper, it could be really useful to see if your implementation gets the same accuracy as theirs
09:51 < manish7294> sure
09:53 < rcurtin> thanks, I know it is a lot of work
09:53 < rcurtin> but very important to understand the behavior if we eventually want to make any claims about it
10:21 < manish7294> rcurtin: Looks like recalculating impostors at every iteration totally destroyed the boostmetric algo https://gist.github.com/manish7294/2388267666b1159ce261ce7b95dc923c#file-simulation2-txt
10:23 < rcurtin> how do you know there is not a bug?
10:27 < manish7294> I just transformed the dataset and applied the usual updates; I don't think there should be a bug in doing just that.
10:43 < rcurtin> I'm almost certain there is a bug given the extremely poor performance
10:43 < rcurtin> that, or the algorithm simply cannot handle impostor recalculations
10:44 < rcurtin> but I have not been able to investigate it in full
11:33 -!- manish7294 [8ba79c98@gateway/web/freenode/ip.139.167.156.152] has quit [Ping timeout: 252 seconds]
11:55 -!- sumedhghaisas2 [~yaaic@85.255.237.25] has joined #mlpack
12:00 < rcurtin> sumedhghaisas2: how was ICML and Stockholm? :)
12:01 < sumedhghaisas2> rcurtin: Amazing... although tiring. the city is beautiful.
12:01 < sumedhghaisas2> and this was my first big conference... so I was mostly lost in the talks.:(
12:01 < sumedhghaisas2> I loved the poster sessions though
12:03 < sumedhghaisas2> zoq: Hey Marcus, got a minute?
12:04 < sumedhghaisas2> I was a little confused about the math of the VAE when MeanSquaredError is used as the loss. Which distribution models p(x | z) in that case?
12:12 < zoq> Gaussian, I'm wondering if MSE is the right choice here, I thought BCE is more common?
12:13 < zoq> Atharva: Good news, Ryan fixed the issue :)
12:13 < sumedhghaisas2> I also thought it's Gaussian, but as it turns out if I use NormalDistribution with Reconstruction Loss I get very different and wrong results...
12:14 < sumedhghaisas2> I could reproduce the results with tensorflow as well
12:14 < sumedhghaisas2> I somehow get negative loss...
12:15 < Atharva> zoq: Yeah, I can see the post now :)
12:16 < zoq> strange, do you have a minimal sample to reproduce the issue, or perhaps Atharva can provide something?
12:16 < sumedhghaisas2> I also thought that MSE is setting a constant variance for the distribution, but even that does not simplify to the MSE...
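For reference, the standard identity behind this question (general math, not specific to the mlpack implementation): if p(x | z) is Gaussian with fixed variance, the negative log-likelihood is a scaled MSE plus a constant, so minimizing MSE does correspond to maximizing a fixed-variance Gaussian likelihood:

    -\log p(x \mid z) = -\log \mathcal{N}\big(x;\ \mu(z),\ \sigma^2 I\big)
                      = \frac{1}{2\sigma^2}\,\lVert x - \mu(z)\rVert^2 + \frac{D}{2}\log\big(2\pi\sigma^2\big),

where D is the dimensionality of x. With \sigma fixed, only the squared-error term depends on the decoder output \mu(z); at \sigma^2 = 1 the NLL is MSE/2 plus a constant.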
12:16 < Atharva> zoq: How exactly do you mean?
12:17 < zoq> Some simple example that I can use to reproduce the issue, I guess I could also use the unit test?
12:18 < Atharva> Hmm, I don't think the unit test will help here. I will get back to you on this and give you an example.
12:19 < sumedhghaisas2> zoq: Also, could we model normal MNIST with BCE?
12:29 < sumedhghaisas2> zoq: I could send you a tensorflow code that could be faster?
12:30 < sumedhghaisas2> Atharva: Could you change the MeanSquaredError to ReconstructionLoss to reproduce the results?
12:31 < Atharva> sumedhghaisas: Yes, I should be able to do that.
12:35 < zoq> sumedhghais: I think for MNIST we can use BCE, since the pixels are not continuous.
12:39 < sumedhghaisas2> zoq: Maybe I'm getting confused... I thought MNIST is between 0-1 and binary MNIST is the binarized version?
12:39 < sumedhghaisas2> for the binary version we can use the Bernoulli distribution, which is the same as BCE I guess
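Indeed, for binary x_d the Bernoulli negative log-likelihood is exactly the binary cross-entropy:

    -\log p(x \mid z) = -\sum_{d=1}^{D}\big[x_d \log \hat{x}_d + (1 - x_d)\log(1 - \hat{x}_d)\big],

where \hat{x}_d = p(x_d = 1 \mid z) is the decoder's sigmoid output for pixel d.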
12:41 < zoq> sumedhghais: Right, you would have to use the binary version.
12:49 < sumedhghaisas2> zoq: This is really strange, I always thought it's Gaussian, just like you. :D
12:49 < sumedhghaisas2> but somehow the NLL becomes negative...
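One general observation relevant to the negative loss (a known property, not a diagnosis of this particular bug): a Gaussian NLL can legitimately go negative, because p(x | z) is a density, not a probability, and densities can exceed 1. In one dimension,

    -\log \mathcal{N}(x;\ \mu, \sigma^2) = \frac{(x - \mu)^2}{2\sigma^2} + \frac{1}{2}\log\big(2\pi\sigma^2\big),

which is negative whenever the reconstruction is close and the predicted \sigma is small (at x = \mu, any \sigma < 1/\sqrt{2\pi} \approx 0.4 suffices). A Bernoulli/BCE loss, by contrast, is always nonnegative.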
12:50 < zoq> definitely strange
12:57 < rcurtin> sumedhghaisas2: very cool, I was hoping I could go this year but then I quit my job ...
12:58 < rcurtin> the poster sessions are great, you can have lots of conversations with great people
12:58 < rcurtin> instead I am in your favorite place, Iceland :)
12:58 < rcurtin> I think it is a lot nicer here in the summers, but if I remember right you were only here in the winter, which I guess is way different...
12:59 -!- sumedhghaisas3 [~yaaic@85.255.234.20] has joined #mlpack
13:00 < sumedhghaisas3> rcurtin: ahh it would have been nice to meet you there. Actually what are you working on these days?
13:01 -!- sumedhghaisas2 [~yaaic@85.255.237.25] has quit [Ping timeout: 260 seconds]
13:01 < sumedhghaisas3> zoq: Atharva is sending the code version, let's see if we spot anything
13:08 < rcurtin> sumedhghaisas3: I left Symantec and am now going to start working at a startup focused on fast in-database machine learning
13:08 < rcurtin> I will start in August but they were having an internal meetup in Akureyri so I attended :)
13:09 < sumedhghaisas3> rcurtin: whaaaaaaat? that's in Iceland right... amazing place.
13:09 < sumedhghaisas3> what a place to have an internal Meetup... I wish mine was there
13:10 < rcurtin> yeah, it has been incredible
13:10 < rcurtin> I have some pictures here: http://www.ratml.org/misc/iceland_pics.html
13:10 < rcurtin> last night we went hiking overnight so I am very tired today... we were out from 9pm to 4am and it never got dark
13:11 -!- xa0 [~zeta@unaffiliated/uoy] has quit [Ping timeout: 244 seconds]
13:12 < rcurtin> I feel very lucky to be here, so, maybe this means my choice of new company is a good choice :)
13:15 < Atharva> rcurtin: The place seems extremely beautiful!
13:15 < Atharva> I need to go there.
13:16 < rcurtin> I highly recommend it, but I think maybe not during the winter
13:17 < Atharva> Oh yes, I guess it will all be covered in snow.
13:17 -!- xa0 [~zeta@unaffiliated/uoy] has joined #mlpack
13:18 < sumedhghaisas3> rcurtin: ohh I really miss Iceland.. shouldn't have looked at these pictures :(
13:19 < sumedhghaisas3> been to Akureyri twice I think... did you go around the whole of Iceland?
13:19 < rcurtin> you are not too far away these days :)
13:19 < rcurtin> no, we only drove from the airport out to Akureyri (~6 hours)
13:19 < sumedhghaisas3> true... but I'm also working, so less chance
13:19 < rcurtin> I'll go home after this conference, I didn't make any extra time to tour around (maybe I should have)
13:19 < rcurtin> maybe you can work remotely for a week? :)
13:20 < sumedhghaisas3> yeah it only takes 4 days to roam around Iceland if you do it properly
13:20 < sumedhghaisas3> August is a month for that... going to Menorca and Croatia (hopefully)
13:21 < sumedhghaisas3> so what does in-database machine learning entail?
13:23 -!- manish7294 [8ba7a8aa@gateway/web/freenode/ip.139.167.168.170] has joined #mlpack
13:24 < rcurtin> nice, very cool!
13:24 < rcurtin> I will have to respond more about the company later, we are actually still in talks today so I should pay attention :(
13:29 < Atharva> zoq: sumedhghaisas: When I used a tanh activation after the encoder, the loss didn't go negative. But, the results were still poor.
13:35 < Atharva> encoder and decoder both*
13:35 < manish7294> rcurtin: Sorry, I don't mean to disturb you. Please ignore the upcoming messages until you have time.
13:36 < zoq> Atharva: Instead of Sigmoid?
13:36 < manish7294> rcurtin: I got some promising results by starting off with the identity matrix instead of zeros.
13:37 < manish7294> Beyond that, I couldn't find any other error in the implementation.
13:37 < Atharva> zoq: No, instead of no activation.
13:38 < Atharva> The results I have posted are when I have used no non-linearity after the final layers of encoder and decoder. Using non-linearity, the results weren't as good.
13:38 < manish7294> rcurtin: And do you think we can merge #1461, as #1466 is going to have a lot of merge conflicts? So, I was thinking of completing it as well.
13:38 < zoq> Atharva: I see, so clipping the output of the last layer helps, but I guess this means that the output is somewhat wrong.
13:39 < zoq> Atharva: 'weren't as good' huge difference?
13:40 < Atharva> maybe yeah, I haven't tried training it well with an activation though.
13:41 < Atharva> I will train it overnight tonight using mean squared error and some activation and see how the results are.
13:41 < rcurtin> manish7294: no problem, I'll look into the BoostMetric stuff when I have time but today is the day I meant to merge #1461, so let me do it now
13:41 < zoq> Atharva: Yeah, would be interesting to see what the actual effect is.
13:42 < manish7294> This one is off topic: I remember seeing the movie in one of your first photos; never thought it was worth that much. Though, that was a pretty funny one :)
13:42 < Atharva> I was just trying to train it with the reconstruction loss and tanh after the encoder and decoder, but the loss is still going negative. I will mail you and Sumedh the code.
13:43 < zoq> Atharva: Okay, thanks!
13:44 -!- sumedhghaisas3 [~yaaic@85.255.234.20] has quit [Ping timeout: 276 seconds]
13:44 -!- sumedhghaisas2 [~yaaic@host-92-8-33-72.as43234.net] has joined #mlpack
13:44 < zoq> ShikharJ: Can you reproduce the issue on your system? Pretty sure you have to build with DEBUG=ON
13:45 < rcurtin> ha, I have no idea what that movie was even about. some kind of animated pig I guess, I have never seen it anywhere else (maybe I don't look hard enough, for all I know it was a huge blockbuster movie last year)
13:45 < ShikharJ> zoq: Actually, I spent most of the time refactoring the code (it's pretty huge and confusing at the moment; hopefully I'll be able to simplify it).
13:46 < ShikharJ> rcurtin: Is it Okja (the name of the movie)?
13:46 < rcurtin> maybe, it was labeled Syngdu when I saw it in a gas station near Reykjavik
13:46 < zoq> ShikharJ: I see, I guess in the process you probably fix the code anyway, do you think I should wait for the updated code?
13:46 < rcurtin> don't know what it translates as
13:47 < zoq> I think it's called "Sing"
13:47 < manish7294> rcurtin: It's based on town hall shows, but a funny one :)
13:47 < ShikharJ> zoq: You may go with the existing code, I'll probably just be fixing the aesthetics for the rest of the day.
13:48 < rcurtin> oh, I see, yeah, it must be Sing
13:48 < rcurtin> looks like indeed it was a giant hit I never heard of...
13:48 < zoq> I was looking it up to get the price in Euro :)
13:49 < manish7294> rcurtin: Right, that's the one
13:49 < ShikharJ> zoq: Most of the codebase would be pretty similar after the refactor, so I should be able to make out the differences.
13:49 < Atharva> zoq: sumedhghaisas: Just mailed it to you.
13:50 < zoq> ShikharJ: Would be great if it would fix the issue I saw on my system and on travis.
13:51 < zoq> Atharva: Okay, I can link against the latest code from the PR?
13:52 < Atharva> zoq: Yes, the latest code in the reconstruction loss PR
13:52 < ShikharJ> zoq: I'll see what I can do by the end of the day. The reason I haven't started working on that is that with the current code, it's really difficult to keep track of which variables are available where. For example, batchSize might be available inside the RBM class, but not inside the policy classes, which needs a change every time I wish to try something new.
13:52 < manish7294> zoq: If I remember correctly, I probably got it from torrent ;)
13:52 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
13:52 < Atharva> Just one thing, the mnist_full.csv I have used is the one Shikhar uploaded.
13:52 < ShikharJ> zoq: So that is a problem, which I should be able to fix by today.
13:53 < ShikharJ> zoq: Plus a lot of comments are outdated or incorrect, which need to be looked at as well.
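As a generic illustration of the coupling problem just described (a sketch with made-up names, not the actual mlpack RBM code), one way to expose host-owned state such as batchSize to a policy class is to hand the policy a reference at construction time:

    #include <cstddef>
    #include <iostream>

    // The policy holds a reference to state owned by the host class, so the
    // host does not need to copy batchSize into every policy it uses.
    class GibbsPolicySketch
    {
     public:
      explicit GibbsPolicySketch(const size_t& batchSize) : batchSize(batchSize) { }

      void Step() const { std::cout << "batch size: " << batchSize << "\n"; }

     private:
      const size_t& batchSize;
    };

    class RBMSketch
    {
     public:
      explicit RBMSketch(const size_t batchSize) :
          batchSize(batchSize),    // Declared (and thus initialized) first...
          policy(this->batchSize)  // ...so the policy can safely refer to it.
      { }

      void Train() { policy.Step(); }

     private:
      size_t batchSize;
      GibbsPolicySketch policy;
    };

    int main()
    {
      RBMSketch rbm(32);
      rbm.Train();
      return 0;
    }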
13:54 < zoq> ShikharJ: We can take all the time we need to get the code into shape, so no need to hurry.
13:54 < zoq> ShikharJ: Yeah, definitely frustrating.
13:55 < zoq> Atharva: Okay, so I can use that from the models repo.
13:56 < zoq> manish7294: That's one option :)
13:57 < Atharva> zoq: Yes
14:45 -!- travis-ci [~travis-ci@ec2-54-234-89-233.compute-1.amazonaws.com] has joined #mlpack
14:45 < travis-ci> mlpack/mlpack#5304 (master - 3b7bbf0 : Ryan Curtin): The build was broken.
14:45 < travis-ci> Change view : https://github.com/mlpack/mlpack/compare/139e0a46fe1d...3b7bbf0f1417
14:45 < travis-ci> Build details : https://travis-ci.org/mlpack/mlpack/builds/405360839
14:45 -!- travis-ci [~travis-ci@ec2-54-234-89-233.compute-1.amazonaws.com] has left #mlpack []
14:50 -!- xa0 [~zeta@unaffiliated/uoy] has quit [Excess Flood]
14:51 -!- xa0 [~zeta@unaffiliated/uoy] has joined #mlpack
15:09 < Atharva> zoq: sumedhghaisas: On training with mean squared error and tanh after the encoder and decoder, the loss becomes stagnant at ~200.
15:09 < Atharva> Without activation, it went down to ~130 after 1.5 hours.
15:10 < Atharva> This loss is the mean squared error averaged over a batch and not over the features of a single datapoint.
15:10 < Atharva> I modified it locally for that.
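For anyone comparing these numbers later: under the convention just described (sum over features, mean over the batch), the value is larger than the per-feature mean by a factor of the dimensionality D, since

    \frac{1}{B}\sum_{b=1}^{B}\lVert x_b - \hat{x}_b\rVert^2 \;=\; D \cdot \frac{1}{BD}\sum_{b=1}^{B}\lVert x_b - \hat{x}_b\rVert^2.

Assuming D = 784 for MNIST, a batch-averaged loss of ~130 corresponds to a per-feature average of roughly 0.17.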
15:33 < sumedhghaisas2> Atharva: so with just mean squared error it reached 130?
15:34 < sumedhghaisas2> should be lower than that
15:35 < Atharva> It should be. That was after 1.5 hours, on further training it went down to ~115 and stagnated.
15:48 -!- manish7294 [8ba7a8aa@gateway/web/freenode/ip.139.167.168.170] has quit [Ping timeout: 252 seconds]
20:22 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
20:40 -!- sumedhghaisas2 [~yaaic@host-92-8-33-72.as43234.net] has quit [Ping timeout: 240 seconds]
20:43 -!- sumedhghaisas2 [~yaaic@85.255.234.23] has joined #mlpack
20:54 -!- sumedhghaisas2 [~yaaic@85.255.234.23] has quit [Ping timeout: 245 seconds]
20:54 -!- sumedhghaisas2 [~yaaic@host-92-8-33-72.as43234.net] has joined #mlpack
22:13 < zoq> ShikharJ: Nice refactoring.
--- Log closed Thu Jul 19 00:00:52 2018