Minor performance exploration (clojure side) and some considerations #1
This is interesting. Have you recorded benchmarks to understand the performance gain of each of these changes in isolation?
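(A typical way to record that kind of benchmark is the criterium library; the two reader fns below are hypothetical stand-ins for the baseline and the modified entry points.)

```clojure
;; Benchmarking sketch with criterium. `read-fast` and `read-faster`
;; are hypothetical stand-ins for the baseline and modified readers.
(require '[criterium.core :as crit])

(crit/quick-bench (read-fast "test.xlsx"))    ; baseline implementation
(crit/quick-bench (read-faster "test.xlsx"))  ; modified implementation
```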
This can be solved by defining a specialized printer. It would surely be slower, but I guess it would mostly be used at a REPL for dev purposes, so it might be a good solution.
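A minimal sketch of that specialized printer, assuming a hypothetical `Cell` record: override `print-method` so the record prints like a plain map at the REPL.

```clojure
;; Make a (hypothetical) Cell record print as a plain map. This only
;; affects printing; equality and field access are unchanged.
(defrecord Cell [address type value])

(defmethod print-method Cell [c ^java.io.Writer w]
  (print-method (into {} c) w))

;; user=> (->Cell "A1" :number 42.0)
;; {:address "A1", :type :number, :value 42.0}
```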
I missed this, good catch! 😄
I was starting to think about parallelization, and yes, streaming makes things harder. The only issue is that parallelization should be turned on and off case by case. I could analyze either the file size or the workbook size in memory (the latter is surely harder to get right) and turn it on past some cutoff, but this means that if the workbook is very large while the interesting sheet is pretty small, there's useless overhead going on. I was thinking…
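For what it's worth, a hedged sketch of that cutoff idea; the threshold and the helper names are invented:

```clojure
;; Switch between serial and parallel row processing based on file
;; size. The 4MB cutoff and `process-row` are made up; the right
;; threshold would have to come from benchmarking.
(def ^:private parallel-cutoff-bytes (* 4 1024 1024))

(defn process-rows [^java.io.File file process-row rows]
  (if (> (.length file) parallel-cutoff-bytes)
    (vec (pmap process-row rows))   ; parallel past the cutoff
    (mapv process-row rows)))       ; serial for small files
```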
I think there is some room for improvement here: fastexcel uses some of POI under the hood, so that's something that could be optimized. On the other hand, the real solution is probably to parse the XML from scratch, which is not the most fun thing around, and it would likely have to be done in Java to really get some performance improvement.
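To make the "parse the XML from scratch" idea concrete, here's a rough sketch using the JDK's streaming StAX parser over an already-extracted sheet entry. Zip handling is omitted and it ignores shared strings and cell types, so it's very much a toy:

```clojure
;; Toy sketch: pull raw <v> cell values out of a sheet XML stream
;; with javax.xml.stream. Real code would also resolve shared
;; strings, cell types, and cell references.
(import '(javax.xml.stream XMLInputFactory XMLStreamConstants XMLStreamReader))

(defn raw-cell-values [^java.io.InputStream sheet-xml]
  (let [^XMLStreamReader r (.createXMLStreamReader
                            (XMLInputFactory/newFactory) sheet-xml)]
    (loop [in-v? false, acc []]
      (if (.hasNext r)
        (let [ev (.next r)]
          (cond
            (and (== ev XMLStreamConstants/START_ELEMENT)
                 (= "v" (.getLocalName r)))
            (recur true acc)

            (and in-v? (== ev XMLStreamConstants/CHARACTERS))
            (recur false (conj acc (.getText r)))

            :else (recur in-v? acc)))
        acc))))
```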
If I have to guess, this is likely the culprit (from fastexcel):

```java
/**
 * Note: will load the whole xlsx file into memory,
 * (but will not uncompress it in memory)
 */
public ReadableWorkbook(InputStream inputStream) throws IOException {
    this(open(inputStream));
}
```
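If memory serves, fastexcel also has a `File`-based constructor that avoids buffering the whole archive, so opening from a file rather than a stream might sidestep that cost (worth double-checking against the fastexcel javadocs):

```clojure
;; Open the workbook from a File instead of an InputStream; per the
;; javadoc note above, the InputStream path loads the whole xlsx into
;; memory, while the File path (if I recall correctly) does not.
(import '(org.dhatim.fastexcel.reader ReadableWorkbook))

(defn open-workbook ^ReadableWorkbook [^java.io.File f]
  (ReadableWorkbook. f))
```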
Absolutely right, I'll take care of it right away. The only thing is that fastexcel returns BigDecimal (you'll find 'em in the …). Oh, and by the way, this is already MUCH faster than a corresponding solution with Python and openpyxl. I still have to try pandas though.
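A sketch of coercing those BigDecimals at the cell boundary, assuming fastexcel's `Cell#asNumber` (which returns a `BigDecimal`, if I'm reading its API right):

```clojure
;; Convert fastexcel's BigDecimal cell values to primitive doubles at
;; the boundary, so downstream Clojure code sees ordinary doubles.
(defn number-value ^double [^org.dhatim.fastexcel.reader.Cell cell]
  (.doubleValue ^java.math.BigDecimal (.asNumber cell)))
```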
I didn't want to pop a pull request, since I think your baseline code is well written. I got interested in profiling performance and exploring your implementation (currently only really focusing on `reader.fast`). My excursion is here for reference.
The primary strategy was to speed up cell and row creation/processing where possible. I created an iterator reducer that implements `CollFold` (which fits in nicely with your transducer uses of `into`). That eliminates some overhead from `iterator-seq` and is effectively seamless for the existing purpose.
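The serial half of that reducer is simple enough to sketch: wrap the source `Iterable` in an `IReduceInit` so `into`/`transduce` can reduce it directly, skipping the cached seq that `iterator-seq` builds (the `CollFold` half is omitted here):

```clojure
;; Reduce straight off the iterator instead of going through
;; iterator-seq. Honors `reduced` for early termination.
(defn iter-reducer [^Iterable src]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (let [it (.iterator src)]
        (loop [acc init]
          (cond
            (reduced? acc) @acc
            (.hasNext it)  (recur (f acc (.next it)))
            :else          acc))))))

;; usage (parse-cell and rows are stand-ins):
;; (into [] (map parse-cell) (iter-reducer rows))
```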
I also moved toward simple functions instead of a protocol-by-extension based implementation. Extended protocols incur a minor cache lookup every time they're invoked, unless they're part of an inlined definition (like `reify`, `defrecord`, `deftype`). I initially started reifying wrappers for each of the classes, then realized you can just write functions and type-hint away. This closes off the extension possibilities, but captures some (minor) performance gains.

The next bit was swapping the cell and row entries from maps to records, since record/type creation is faster than even persistent array maps. The tradeoff here is uglier results (record printing), but some more minor performance gains.
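Both moves together look roughly like this; the fastexcel accessor names are from memory, so treat them as assumptions:

```clojure
;; A positional record ctor is cheaper than building an array map,
;; and a plain type-hinted function avoids the protocol call-site
;; cache lookup (at the cost of extensibility).
(defrecord CellData [address value])

(defn cell->data [^org.dhatim.fastexcel.reader.Cell c]
  (->CellData (str (.getAddress c)) (.getValue c)))
```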
I noticed the `reader.generics/blank?` function (which is called for every cell) uses `=`, despite only comparing against a keyword. In this case, `identical?` is around 6-9x faster in microbenchmarks; it doesn't end up being a huge bottleneck, but it helps with some minor gains. In general, if you're doing keyword comparisons in a hot loop, `identical?` is a useful optimization.
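The change in isolation: since Clojure interns keywords, reference equality is safe here, and `identical?` compiles to a pointer comparison where `=` goes through `Util/equiv` (`:blank` below is illustrative, not necessarily the keyword the real function tests):

```clojure
;; Keyword comparison in a hot loop: identical? is a reference check,
;; which is valid because Clojure interns keywords.
(defn blank? [cell-type]
  (identical? :blank cell-type))   ; instead of (= :blank cell-type)
```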
Finally, I played around with eschewing the transducing chain of calls from `into`, and just inlined the stuff into a single `reduce` call, since both the filtering and the mapping can be done there (albeit uglier).
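Roughly this shape (the helper names are invented):

```clojure
;; Collapse an (into [] (comp (filter keep?) (map row->data)) rows)
;; pipeline into one reduce, fusing the filter and map by hand.
;; Uses a transient to match into's internal behavior.
(defn rows->vec [keep? row->data rows]
  (persistent!
   (reduce (fn [acc row]
             (if (keep? row)
               (conj! acc (row->data row))
               acc))
           (transient [])
           rows)))
```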
I also messed with different mutation strategies (`ArrayList`), and started messing with parallelization (`fold` and `pmap`). The parallel stuff didn't pan out for now (meager improvements with `fold`, still mostly on par with the single-threaded case); I'm thinking that's due to the streaming nature of fastexcel though.

In all, on my high-end laptop, reading a 1.1MB xlsx file, I go from about 500-520ms (after warmup) to 450-475ms (with some faster outliers I can't explain). So... minor gains. If I knew more about fastexcel, I'd look to optimize there (based on VisualVM profiling there's some potential overhead, but I'm ignorant of whether that's already optimized; guessing it is!).
I then ran the same code on a test machine on AWS (a t2.large running Ubuntu), and ended up losing most of the gains with the "faster" code: about 1350-1370ms for fast and 1313-1330ms for faster, with similar outliers. So much less improvement there (possibly due to SSD, memory, processors, who knows).
Technical Consideration (float vs. double):

In `cell-value` for `CellType/NUMBER`, you coerce to `float`. I'd recommend using `double`, since `float` can lead to some funky, silent comparison problems down the road. I ran into this gem years ago, and have stayed away from `float` unless it's for performance or interop purposes:

```clojure
user> (def l (hash-set (float 1.2)))
#'user/l
user> (def r (hash-set (double 1.2)))
#'user/r
user> (= l r)
false
```
Seen with the definitions right there, that result makes perfect sense. But if you never saw how the two sets were constructed, and you're only looking at the REPL output, it's maddening (particularly in a big debug).