mlpack IRC logs, 2018-07-12

Logs for the day 2018-07-12 (starts at 0:00 UTC) are shown below.

--- Log opened Thu Jul 12 00:00:43 2018
01:56 -!- caiojcarvalho [~caio@2804:18:7007:6789:71db:20f4:f6cf:8175] has joined #mlpack
03:55 -!- cjlcarvalho [~caio@2804:18:7007:1f33:cdd7:16a:22a5:9b1e] has joined #mlpack
03:55 -!- caiojcarvalho [~caio@2804:18:7007:6789:71db:20f4:f6cf:8175] has quit [Ping timeout: 256 seconds]
04:58 < Atharva> zoq: Will it be okay if we serialize the parameters in the Sequential object?
05:04 < Atharva> I am saving the model and the encoder/decoder after training. But, there isn't any easy way to load the Sequential encoder and decoder with the trained parameters. The only option that I see is manually taking the parameters from the entire saved network and setting them in the Sequential object, but again there isn't any Reset function in the Sequential object to set the weights.
05:10 < Atharva> Maybe we could take a parameter while constructing it that says whether to serialize the weights or not.
06:26 -!- cjlcarvalho [~caio@2804:18:7007:1f33:cdd7:16a:22a5:9b1e] has quit [Ping timeout: 256 seconds]
07:41 < jenkins-mlpack2> Yippee, build fixed!
07:41 < jenkins-mlpack2> Project docker mlpack nightly build build #4: FIXED in 3 hr 27 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/4/
10:44 < zoq> Atharva: If you call ResetParameters of the ffn class it should set the weights, does this work for you?
10:48 < zoq> Atharva: Or do you like to use the seq layer without the ffn class?
10:56 < Atharva> zoq: Yes, I want to train it with the FFN class but then use it without it.
10:57 < Atharva> zoq: I don't think the ResetParameters will help in this case.
11:33 < zoq> Atharva: Sorry for the slow response, had to step out. I don't like the idea of introducing a function just for testing, but I don't see an easy solution right now, I guess we can go with the serialize parameter for now.
11:38 < Atharva> zoq: I don't think this is just for testing. I needed it for building VAE models. I think it will be very useful in cases where you need to work on the Sequential object differently, which is probably one of the main uses of that object. Without it, I had to load the entire model, construct a sequential decoder, then add the exact same layers to it again. After that, I had to calculate how many parameters from the entire
11:39 < Atharva> network were for the decoder and then set those in the decoder.
11:39 < Atharva> Also, I think we will need to add a Reset() function in the sequential object which sets the weights of the layers in it.
11:41 < zoq> Atharva: Agreed, it's probably useful for other use cases as well.
11:42 < Atharva> zoq: Okay then, should I add these to the ReconstructionLoss PR or a new PR?
11:43 < Atharva> Also, we will set the default serialize to false. So, it will only be serialized in cases where it's needed.
11:45 < zoq> Atharva: hm, can you open a new one?
11:45 < Atharva> Yes! no problem
11:45 < Atharva> It was getting too long anyways
11:46 < ShikharJ> zoq: I have updated the FFN::EvaluateWithGradient PR, can you please take a look?
11:49 < Atharva> zoq: I am just curious. Can you tell me the use cases for which the Sequential object was implemented in the first place?
12:00 < zoq> Atharva: I used the seq layer for the recurrent neural attention model.
12:01 < Atharva> zoq: Oh, okay.
12:43 -!- steffen_ [8d173526@gateway/web/freenode/ip.141.23.53.38] has joined #mlpack
12:43 < steffen_> hi
12:47 -!- steffen_ [8d173526@gateway/web/freenode/ip.141.23.53.38] has quit [Client Quit]
13:02 < zoq> steffen_: Hello, there!
13:17 -!- travis-ci [~travis-ci@ec2-54-159-183-20.compute-1.amazonaws.com] has joined #mlpack
13:17 < travis-ci> manish7294/mlpack#62 (evalBounds - 0a77354 : Manish): The build was fixed.
13:17 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/cb36f2d70220...0a7735480309
13:17 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/78809062
13:17 -!- travis-ci [~travis-ci@ec2-54-159-183-20.compute-1.amazonaws.com] has left #mlpack []
14:06 -!- cjlcarvalho [~caio@2804:18:780d:ae6f:333f:2a06:bb32:f22e] has joined #mlpack
14:27 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:49 -!- xa0 [~zeta@unaffiliated/uoy] has quit [Ping timeout: 260 seconds]
14:57 -!- xa0 [~zeta@unaffiliated/uoy] has joined #mlpack
16:09 < ShikharJ> zoq: How can I run gdb with mlpack_test?
16:24 -!- caiojcarvalho [~caio@177.79.91.235] has joined #mlpack
16:24 -!- cjlcarvalho [~caio@2804:18:780d:ae6f:333f:2a06:bb32:f22e] has quit [Ping timeout: 256 seconds]
16:26 < rcurtin> ShikharJ: configure with -DDEBUG=ON then you should be able to do 'gdb bin/mlpack_test' just fine
16:27 < ShikharJ> rcurtin: What should be the command if I want to run gdb on a particular test?
16:37 -!- manish7294 [849a3863@gateway/web/freenode/ip.132.154.56.99] has joined #mlpack
16:37 < manish7294> ShikharJ: this is how I used to do it:
16:37 < manish7294> gdb bin/mlpack_test
16:37 < manish7294> then inside gdb command
16:38 < manish7294> run -t LMNNTest (Just any test name)
16:38 < ShikharJ> manish7294: Thanks! That makes sense.
16:39 < manish7294> You can also do this for a command-line program, just do -> run -i iris.csv .......
16:39 < manish7294> and then run -> where command
16:40 < manish7294> for backtrace
16:49 -!- caiojcarvalho [~caio@177.79.91.235] has quit [Read error: Connection reset by peer]
16:53 -!- manish7294 [849a3863@gateway/web/freenode/ip.132.154.56.99] has quit [Ping timeout: 252 seconds]
17:07 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 240 seconds]
17:08 -!- caiojcarvalho [~caio@2804:18:780d:ae6f:333f:2a06:bb32:f22e] has joined #mlpack
17:53 -!- cjlcarvalho [~caio@2804:18:7009:6a96:771f:502b:1972:bdc4] has joined #mlpack
17:54 -!- caiojcarvalho [~caio@2804:18:780d:ae6f:333f:2a06:bb32:f22e] has quit [Ping timeout: 256 seconds]
19:15 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
19:28 < ShikharJ> zoq: rcurtin: I'm seeing at least a 30% speedup in time duration per call to EvaluateWithGradient for simple FFN networks!
19:28 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
19:29 -!- witness_ [uid10044@gateway/web/irccloud.com/x-gjlnwgqhlpdwzpeu] has joined #mlpack
19:29 < zoq> ShikharJ: Great, thanks for the timings!
19:31 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
19:34 < rcurtin> great! "_
19:34 < rcurtin> er, :) instead of "_
19:34 < rcurtin> fingers off by one error...
19:35 < ShikharJ> rcurtin: I'm compiling the code to check for RNNs now. Will let you know.
19:37 < rcurtin> awesome, I'd imagine the speedup should be roughly the same
19:37 < rcurtin> it will definitely be great to get that merged in
19:37 < Atharva> ShikharJ: That's awesome!
19:39 < zoq> Agreed, do you like to push the RNN modification to the same PR?
19:41 < ShikharJ> zoq: Yes, I have made some changes in the same branch. I'll push the code shortly after benchmarking.
19:42 < zoq> ShikharJ: Okay, sounds good :)
19:45 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 260 seconds]
19:49 < ShikharJ> rcurtin: zoq: For RNNs, the speedup is slightly lower at ~22 to ~25%.
19:50 < ShikharJ> This may be because in RNNs we have more time-consuming steps in the Gradient function. In both FFNs and RNNs, we were saving one Evaluate() call.
19:50 < rcurtin> right, still, it's a very good speedup :)
19:51 < zoq> right, really happy with the timings
20:15 -!- cjlcarvalho [~caio@2804:18:7009:6a96:771f:502b:1972:bdc4] has quit [Ping timeout: 256 seconds]
20:22 < ShikharJ> zoq: Pushed in the code. Let's see if the tests pass (they work fine on local, but I didn't run all of them). I'll be off for now. This was a good day!
20:24 < zoq> ShikharJ: Yeah, let's see if everything passes, thanks for the great work!
20:59 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
--- Log closed Fri Jul 13 00:00:44 2018