mlpack IRC logs, 2018-07-20

Logs for the day 2018-07-20 (starts at 0:00 UTC) are shown below.

--- Log opened Fri Jul 20 00:00:54 2018
00:09 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 260 seconds]
00:10 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
00:10 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-yfjvlwptvwatptdj] has quit [Ping timeout: 256 seconds]
00:10 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-dvymctcctaqkkikz] has quit [Ping timeout: 276 seconds]
00:48 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-rdhcdbxlfoxzwckz] has joined #mlpack
01:41 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-owwknubnlnphvsre] has joined #mlpack
05:07 -!- travis-ci [~travis-ci@ec2-54-197-103-225.compute-1.amazonaws.com] has joined #mlpack
05:07 < travis-ci> manish7294/mlpack#71 (impBounds - 74236a6 : Manish): The build failed.
05:07 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/f972da4759a6...74236a6bd37b
05:07 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/79530812
05:07 -!- travis-ci [~travis-ci@ec2-54-197-103-225.compute-1.amazonaws.com] has left #mlpack []
05:08 -!- travis-ci [~travis-ci@ec2-54-159-230-24.compute-1.amazonaws.com] has joined #mlpack
05:08 < travis-ci> manish7294/mlpack#6 (impBounds - 74236a6 : Manish): The build is still failing.
05:08 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/f972da4759a6...74236a6bd37b
05:08 < travis-ci> Build details : https://travis-ci.org/manish7294/mlpack/builds/406090644
05:08 -!- travis-ci [~travis-ci@ec2-54-159-230-24.compute-1.amazonaws.com] has left #mlpack []
09:00 < jenkins-mlpack2> Project docker mlpack nightly build build #12: FAILURE in 4 hr 46 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/12/
09:32 -!- caiojcarvalho [~caio@189-105-81-247.user.veloxzone.com.br] has quit [Ping timeout: 260 seconds]
09:42 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-rdhcdbxlfoxzwckz] has quit [Remote host closed the connection]
09:43 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-owwknubnlnphvsre] has quit [Read error: Connection reset by peer]
09:46 -!- jenkins-mlpack [~PircBotx@8.44.230.48] has quit [Ping timeout: 260 seconds]
09:46 -!- jenkins-mlpack [~PircBotx@8.44.230.48] has joined #mlpack
09:52 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-demyedyayzzjekjf] has joined #mlpack
10:15 -!- sourabhvarshney1 [75efd264@gateway/web/freenode/ip.117.239.210.100] has joined #mlpack
10:16 < sourabhvarshney1> zoq: Hey!! Sorry for the excuses I made. Now I have got an internship. Can I continue my project?
10:20 -!- sourabhvarshney1 [75efd264@gateway/web/freenode/ip.117.239.210.100] has quit [Ping timeout: 252 seconds]
10:25 < zoq> sourabhvarshney1: Hello there, no worries at all, sure let me know what you need.
10:26 < Atharva> Has anybody used nvblas for armadillo?
10:26 < zoq> Atharva: I used it some time ago.
10:27 < Atharva> zoq: How did you link it with g++, or do I need to install armadillo again?
10:27 < Atharva> Also, the documentation says it is installed along with CUDA, so I am assuming I already have it after installing CUDA
10:29 < zoq> Atharva: You should rebuild armadillo with BLAS_LIBRARY=/path/to/nvblas in the cmake step.
10:30 < Atharva> zoq: Okay, I will let you know how that goes.
10:30 < Atharva> How was the performance by the way?
10:30 < zoq> Atharva: Also I used nvprof to get some profiling infos.
10:30 < Atharva> Okay
10:31 < zoq> Atharva: That depends on the method; in some cases it was even slower than OpenBLAS.
10:32 < Atharva> Oh, okay, let's see how much speedup I get
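For reference, a sketch of the rebuild zoq describes above. Only the BLAS_LIBRARY cmake flag comes from the discussion; the library paths and the nvblas.conf layout are assumptions for a typical Linux CUDA install and will need adjusting:

```sh
# Point armadillo's cmake at NVBLAS instead of the default BLAS
# (library paths here are assumed; adjust to your CUDA/OpenBLAS install).
cd armadillo-code
cmake -DBLAS_LIBRARY=/usr/local/cuda/lib64/libnvblas.so .
make
sudo make install

# NVBLAS only accelerates BLAS3 routines and needs a CPU BLAS to fall
# back on, configured through nvblas.conf.
cat > nvblas.conf <<'EOF'
NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
NVBLAS_GPU_LIST ALL
EOF
export NVBLAS_CONFIG_FILE=$PWD/nvblas.conf
```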
10:34 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-wqhyelogcfehczgc] has joined #mlpack
11:37 < Atharva> zoq: I am a little confused with the `TransposedConvOutSize()` function.
11:38 < Atharva> It's not giving results that I expect.
11:39 < Atharva> For example, if input width = 14, stride = 1, padding = 1, then shouldn't output width be 16?
11:39 < Atharva> But it gives it as 18
11:41 < Atharva> Also, if I change the padding to 2, it still gives output width 18
11:44 < Atharva> and filter size = 5
11:46 < Atharva> Can you tell me what this function is evaluating, because it doesn't seem like the inverse of `ConvOutSize()`
12:25 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 268 seconds]
12:29 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
12:46 < ShikharJ> zoq: Are you there?
12:51 < jenkins-mlpack2> Yippee, build fixed!
12:51 < jenkins-mlpack2> Project docker mlpack nightly build build #13: FIXED in 3 hr 25 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/13/
13:24 < zoq> Atharva: The outSize should change; I'll have to look into it. Perhaps Shikhar has an idea?
13:25 < zoq> ShikharJ: I'm here now.
13:51 < ShikharJ> zoq: Thanks for your suggestions, they worked for BinaryRBM, and I'm currently debugging for SpikeSlabRBM.
13:52 < ShikharJ> zoq: I'm able to get about 75% accuracy for BinaryRBM, which is comparable to the SoftmaxRegression accuracy.
13:54 -!- caiojcarvalho [~caio@189-105-81-247.user.veloxzone.com.br] has joined #mlpack
13:54 < ShikharJ> zoq: I'm unable to determine what the ideal performance benchmark for our RBM class should be.
13:57 < Atharva> ShikharJ: Did you face any issues with the transposed conv layer?
13:58 < ShikharJ> Atharva: Like what exactly?
13:58 < Atharva> Something like outsize being wrong
13:59 < Atharva> Hmm, when you provide the input width, stride, padding, and filter size, what expression do you use to calculate the output width?
13:59 < Atharva> In my case, transposed conv is returning wrong output sizes
14:00 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
14:00 < ShikharJ> Atharva: It should be noted that in the case of mlpack, the way of computing the output width is different.
14:00 < Atharva> Okay, how exactly?
14:01 < ShikharJ> Atharva: See convolution_rules/naive_convolution.hpp, line 98 and onwards.
14:02 < Atharva> Okay, I will check
14:02 < Atharva> Thanks!
14:02 < ShikharJ> Atharva: You should also look here for the formulas that were used: https://arxiv.org/pdf/1603.07285.pdf .
14:05 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
14:05 < zoq> ShikharJ: I got the same results. We could see if we can reproduce https://www.pyimagesearch.com/2014/06/23/applying-deep-learning-rbm-mnist-using-python/ - what do you think?
14:07 < zoq> ShikharJ: About the transposed conv operation, if we change the padding the outsize should still change.
14:08 < ShikharJ> zoq: outsize is the number of output channels we want a particular slice to have. I'm not sure how that should change with padding?
14:09 < Atharva> I think there has been some confusion; I meant output width.
14:09 < zoq> right, output width
14:11 < ShikharJ> zoq: Atharva: Isn't that the case currently?
14:12 < Atharva> Sorry, I didn’t understand what case you are talking about. I was talking about the case when changing the padding doesn’t change the output height and width, everything else being constant.
14:13 < ShikharJ> Atharva: Ah, you should use this formula for calculating what output size you want:
14:13 < ShikharJ> size_t out = std::floor(size - k + 2 * p) / s; return out * s + 2 * (k - p) - 1 + ((((size + 2 * p - k) % s) + s) % s);
14:14 < ShikharJ> It is there in transposed_convolution.hpp. It is a general formula derived from the above paper.
14:15 < Atharva> ShikharJ: yeah, I saw that, I will try using this.
14:15 < ShikharJ> Atharva: Try substituting the values in the above formula and check if it changes the output width or not (it would be because of the first statement and the 2*(k - p) term).
14:16 < Atharva> zoq: could it be a specific case where even after changing the padding the output width didn’t change
14:16 < Atharva> Because in other cases it does work
14:16 < Atharva> I am outside right now, I will get back on this
14:17 < ShikharJ> I'm guessing if you substitute a k which is less than p, then that would try to decrease the output width.
14:17 < ShikharJ> But also, you have to be sure that the first statement doesn't get negative, or the computations would be wrong.
14:35 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:35 -!- navdeep [47e673d2@gateway/web/freenode/ip.71.230.115.210] has joined #mlpack
14:36 < navdeep> Hi I just started using mlpack
14:36 < navdeep> I was looking for a list of compiler dependencies for different versions
14:39 < navdeep> Apparently, I made my app using mlpack 3.0.2 on my mac with cc version 7.3.0
14:40 < ShikharJ> navdeep: Welcome. I use the same gcc version, and I don't think I face any issues. What exactly is the problem that you're facing?
14:41 -!- travis-ci [~travis-ci@ec2-107-22-27-195.compute-1.amazonaws.com] has joined #mlpack
14:41 < travis-ci> manish7294/mlpack#72 (tree - 911327d : Manish): The build has errored.
14:41 < travis-ci> Change view : https://github.com/manish7294/mlpack/commit/911327d74382
14:41 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/79574856
14:41 -!- travis-ci [~travis-ci@ec2-107-22-27-195.compute-1.amazonaws.com] has left #mlpack []
14:43 < navdeep> It works fine on mac
14:44 < navdeep> But, I get error on a linux box which has c compiler version 5.4.0
14:44 < navdeep> and I just learnt my production compiler version would be 4.9.0
14:44 < ShikharJ> navdeep: Can you post the error? Maybe we can try and replicate?
14:46 < ShikharJ> navdeep: As far as I can tell, mlpack doesn't set a dependency on the compiler version, but I can't say which versions the current release has been tested with.
14:47 < navdeep> Sure..let me print errors..it's a different machine
14:51 < navdeep> Is there any place where I can upload a file?
14:51 < ShikharJ> navdeep: Can you make use of pastebin?
14:55 < navdeep> checking
14:57 < navdeep> https://pastebin.com/nAMf0qtZ
14:57 < navdeep> I am having this error while compiling the app on the linux box
14:57 < navdeep> same app runs fine on mac
14:58 < navdeep> on mac though I am using xcode and have set up flags and lib dependency in xcode
14:58 < navdeep> here I am compiling app using command-line
15:01 < ShikharJ> navdeep: Thanks for the information, I'll take a look shortly.
15:02 < zoq> "error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options."
15:02 < zoq> Looks like if you build with -std=c++11 you are fine.
15:02 < zoq> Your command should look something like:
15:02 < zoq> g++ test.cpp -o test -std=c++11 -lmlpack -larmadillo -lboost_serialization -lboost_program_options
15:04 < zoq> ShikharJ: Haven't even tested it yet, but do you think that padding depends on other parameters like the kernel size?
15:06 < ShikharJ> zoq: I'm pretty convinced of the above equation. I derived it directly from the papers, and tested it on all the examples of transposed convolutions I could find.
15:08 < Atharva> ShikharJ: I am not using filter size less than padding.
15:08 < ShikharJ> zoq: Also, if you look at the https://arxiv.org/pdf/1603.07285.pdf relation 14, and see the value of p', that would answer your question.
15:10 < navdeep> -std=gnu++11 I was just trying that out
15:10 < navdeep> thanks a lot guys
15:10 < navdeep> I am very excited to use mlpack and very impressed by the support mechanism
15:13 < ShikharJ> Atharva: Can you tell me what exact parameters are you using? Maybe I can help with that?
15:16 < Atharva> <TransposedConvolution<> >(16, 1, 5, 5, x, y, z, w, 14, 14) This always returns output height and width as 18, no matter what values of w, x, y, z I use
15:17 < Atharva> It only varies with the filter size
15:17 < Atharva> maybe you can create a module and try to reproduce it
15:21 < Atharva> The same is happening with any filter size; no matter what the padding and stride are, the output width and height depend only on the filter size
15:22 < Atharva> I have observed that this constant value for a given filter size is what the expression s * (inputWidth - 1) + k - 2*p gives when stride = 1 and padding = 0
15:26 < ShikharJ> Atharva: For the case when stride = 1, the expression would evaluate to (size - 1 + k), so padding wouldn't have any effect at all.
15:27 < Atharva> ShikharJ: Okay, but the same thing happens for any value of stride, as I said stride and padding are having no effect at all
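Working the quoted expression through for general stride and padding shows why both drop out; assuming size + 2p - k >= 0, so that the double-mod term is just the plain remainder:

```latex
% out = \lfloor (size + 2p - k)/s \rfloor, so out \cdot s plus the
% remainder reconstructs the numerator exactly:
\begin{aligned}
\text{output} &= \text{out}\cdot s + 2(k - p) - 1 + \big((\text{size} + 2p - k) \bmod s\big)\\
              &= (\text{size} + 2p - k) + 2(k - p) - 1\\
              &= \text{size} + k - 1,
\end{aligned}
```

independent of both s and p, which is exactly the behaviour Atharva observes.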
15:41 -!- navdeep [47e673d2@gateway/web/freenode/ip.71.230.115.210] has quit [Quit: Page closed]
15:42 -!- navdeep [47e673d2@gateway/web/freenode/ip.71.230.115.210] has joined #mlpack
15:42 < navdeep> any chance of adding SVM to the list of algorithms?
15:51 < ShikharJ> Atharva: Sorry, I was away for dinner. Now that I think of it, it seems that this would hold true for all cases where (i + 2*p - k) is positive.
15:51 -!- caiojcarvalho [~caio@189-105-81-247.user.veloxzone.com.br] has quit [Quit: Konversation terminated!]
16:00 -!- navdeep [47e673d2@gateway/web/freenode/ip.71.230.115.210] has quit [Ping timeout: 252 seconds]
16:03 < ShikharJ> Atharva: So from the example you posted above, (size=14, s=1, p=1, k=5) has an equivalent transposed convolution as follows:
16:05 < ShikharJ> It is equivalent to convolving a 12x12 matrix (o = (i + 2*p - k) / s + 1) with padding 3 (p` = k - p - 1), kernel 5 (k` = k) and stride 1 (s` = 1).
16:06 < ShikharJ> So if you change the padding now to 0 (p = 0), the equivalent would be as follows:
16:07 < ShikharJ> It would be equivalent to convolving a 10x10 matrix with padding 4, kernel 5 and stride 1.
16:08 < ShikharJ> Similarly, for p = 4 (maximum you can put the padding parameter):
16:09 < ShikharJ> It would relate to a 18x18 matrix with padding 0, kernel 5 and stride 1.
16:13 < ShikharJ> Atharva: So for your use, set (size=14, p=3, s=1 and k=5). It would be equivalent to using a 16x16 matrix with padding 1 and stride 1 as well. Though, when you take padding into account, the final output would be 18. But in the case of transposed convolutions, you would never get pure zero padded columns on the output. Hence you should try changing the kernel size to fit the actual output that you desire.
16:14 < ShikharJ> That is, the output that includes the padded columns as well.
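A small hypothetical helper, using only the relations o = (i + 2*p - k) / s + 1 and p` = k - p - 1 from the discussion above, reproduces the three worked equivalences:

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical helper: prints the direct convolution equivalent to a
// transposed convolution over an i x i input with kernel k, stride s,
// padding p, per the relations discussed above (requires p <= k - 1).
void EquivalentConvolution(size_t i, size_t k, size_t s, size_t p)
{
  const size_t o = (i + 2 * p - k) / s + 1;  // equivalent input size
  const size_t pPrime = k - p - 1;           // equivalent padding
  std::cout << "transposed conv (i=" << i << ", k=" << k << ", s=" << s
            << ", p=" << p << ") ~ conv on " << o << "x" << o
            << " with padding " << pPrime << ", kernel " << k
            << ", stride 1\n";
}

int main()
{
  EquivalentConvolution(14, 5, 1, 1); // 12x12 with padding 3
  EquivalentConvolution(14, 5, 1, 0); // 10x10 with padding 4
  EquivalentConvolution(14, 5, 1, 4); // 18x18 with padding 0
  return 0;
}
```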
16:51 < zoq> ShikharJ: Thanks for the clarification and the reference, pretty sure I missed a detail on my side.
17:07 < zoq> ShikharJ: Also what do you think about the rbm test?
17:09 < Atharva> ShikharJ: Thanks for explaining it!
17:12 < zoq> navdeep: The main issue with adding SVM support is providing an implementation that offers something you don't get from another library. One idea might be to provide a faster implementation, but I think in this niche it's difficult to beat something like libSVM.
17:15 < ShikharJ> zoq: I'll go through the link shortly. I was fixing up the ssRBM test for now.
17:20 < zoq> ShikharJ: Okay, great.
17:25 -!- yaswagner [4283a544@gateway/web/freenode/ip.66.131.165.68] has joined #mlpack
17:32 < Atharva> ShikharJ: I didn't get what you mean when you said p = k - p - 1
17:32 < ShikharJ> Atharva: It's p` = k - p - 1; The padding of the output matrix is p`.
17:33 < Atharva> Okay, so the output matrix in transposed convolution comes padded with zeros?
17:34 < ShikharJ> Atharva: It comes padded, not with zeros though.
17:34 < ShikharJ> Read my message above " in the case of transposed convolutions, you would never get pure zero padded columns on the output".
17:35 < ShikharJ> Since essentially a Transposed Convolution is just a Backwards convolution.
17:36 < Atharva> So if I want to go from size 14 to 28, I need to use filter size 15?
17:36 < ShikharJ> Yeah.
17:37 < ShikharJ> Since the equivalent stride for the bigger matrix will always remain one.
17:38 < Atharva> Okay, thanks for the clarification and sorry for the trouble.
17:38 < ShikharJ> And the p parameter that you choose would then determine what the corresponding original matrix was, and the padding that it used to get the 28 size.
19:15 -!- cjlcarvalho [~caio@189-105-81-247.user.veloxzone.com.br] has joined #mlpack
19:23 < ShikharJ> zoq: Ok, ssRBM test seems to give a solid 82% accuracy on my system. I'll see if I can get the accuracy of BinaryRBM up.
19:26 < zoq> ShikharJ: Okay, already rechecked most of the code.
19:33 -!- schizo [9d255289@gateway/web/freenode/ip.157.37.82.137] has joined #mlpack
19:41 -!- schizo [9d255289@gateway/web/freenode/ip.157.37.82.137] has quit [Quit: Page closed]
21:07 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
21:16 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-wqhyelogcfehczgc] has quit [Ping timeout: 240 seconds]
21:16 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-demyedyayzzjekjf] has quit [Ping timeout: 256 seconds]
21:17 < ShikharJ> lozhnikov: zoq: I couldn't get the BinaryRBM accuracy above the SoftmaxClassifier accuracy. I tried a few variations with the VisibleMean() and HiddenMean() methods, but I didn't see any major improvement. I'll take a look at the link tomorrow.
21:37 < zoq> ShikharJ: Okay, I'll see if I can think of anything.
22:08 -!- prakhar_code[m] [prakharcod@gateway/shell/matrix.org/x-ephudkkhyuzebrmn] has joined #mlpack
22:22 -!- yaswagner [4283a544@gateway/web/freenode/ip.66.131.165.68] has quit [Ping timeout: 252 seconds]
22:48 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-ufmsryfthuyqihge] has joined #mlpack
--- Log closed Sat Jul 21 00:00:55 2018